Compare commits

...

111 Commits
1.1.0 ... 1.2.1

Author SHA1 Message Date
Nik Bougalis
a3470c225b Set version to 1.2.1 2019-02-25 13:01:32 -08:00
seelabs
c5d215d901 Add delivered amount to the ledger RPC command 2019-02-25 13:01:12 -08:00
JoelKatz
9dbf8495ee Avoid a race condition during peer status change 2019-02-25 12:59:35 -08:00
Nik Bougalis
2529edd2b6 Properly transition state to disconnected:
If the number of peers a server has falls below the configured
minimum peer limit, the server's state now properly transitions
to "disconnected".

The default limit for the minimum number of peers required was
0, meaning that a server that was connected but lost all of its
peers would never transition to disconnected, since it could
never drop below zero peers.

This commit redefines the default minimum number of peers to 1
and produces a warning if the server is configured in a way
that will prevent it from ever achieving sufficient connectivity.
2019-02-25 12:59:35 -08:00
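
A minimal sketch of the rule this commit describes, assuming illustrative names (the enum and function below are not rippled's actual types): with a default minimum of 1 instead of 0, a peer count of zero now maps to the disconnected state.

```cpp
#include <cstddef>

// Hypothetical sketch: a server whose peer count drops below the configured
// minimum (now defaulting to 1) reports itself as disconnected.
enum class OperatingMode { Disconnected, Connected };

OperatingMode
modeForPeerCount(std::size_t peerCount, std::size_t minPeerCount = 1)
{
    return peerCount < minPeerCount ? OperatingMode::Disconnected
                                    : OperatingMode::Connected;
}
```
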
Nik Bougalis
e974c7d8a4 Avoid directly using memcpy to deserialize data 2019-02-25 12:59:34 -08:00
Nik Bougalis
b335adb674 Make validators opt out of crawl:
If a server is configured to support crawl, it will report the
IP addresses of all peers it is connected to, unless those peers
have explicitly opted out by setting the `peer_private` option
in their config file.

This commit makes servers that are configured as validators
opt out of crawling.
2019-02-25 12:59:34 -08:00
Nik Bougalis
c6ab880c03 Display validator status only to admin requests:
Several commands allow a user to retrieve a server's status. To make it
more difficult to identify validators via fingerprinting, these commands
typically withhold, from connections that are not verified, information
that can reveal that a particular server is a validator.

Prior to this commit, servers configured to operate as validators
would, instead of simply reporting their server state as 'full',
augment their state information to indicate whether they are
'proposing' or 'validating'.

Servers will only provide this enhanced state information for
connections that have elevated privileges.

Acknowledgements:
Ripple thanks Markus Teufelberger for responsibly disclosing this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2019-02-25 12:59:31 -08:00
Mike Ellery
7779dcdda0 Set version to 1.2.0 2019-02-12 16:41:03 -08:00
Mike Ellery
132f1b218c Set version to 1.2.0-rc2 2019-01-30 15:37:56 -08:00
Mike Ellery
e5d6f16f19 Remove [ips] section from sample config 2019-01-30 15:33:39 -08:00
Mike Ellery
8f973621fc Set version to 1.2.0-rc1 2019-01-28 12:02:33 -08:00
Mike Ellery
b75c2d71a5 Make sample config comment consistent with code 2019-01-28 11:53:30 -08:00
Nik Bougalis
eed210bb67 Set version to 1.2.0-b11 2019-01-18 12:13:22 -08:00
Mike Ellery
eab2a0d668 Improve debug information generated for the LedgerTrie 2019-01-18 12:13:21 -08:00
Howard Hinnant
148bbf4e8f Add safe_cast (RIPD-1702):
This change ensures that no overflow can occur when casting
between enums and integral types.
2019-01-18 12:13:21 -08:00
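
A hedged sketch of what such a checked cast can look like; the template below is illustrative and is not necessarily the signature or coverage rippled adopted (for brevity it handles integral types only, while the commit also covers enums via their underlying type).

```cpp
#include <limits>
#include <type_traits>

// Illustrative safe_cast sketch: reject, at compile time, any integral
// conversion that could overflow or change sign.
template <class Dest, class Src>
constexpr Dest
safe_cast(Src s) noexcept
{
    static_assert(std::is_integral<Dest>::value && std::is_integral<Src>::value,
        "this sketch handles integral types only");
    static_assert(std::is_signed<Dest>::value || std::is_unsigned<Src>::value,
        "refusing to cast a signed value to an unsigned type");
    static_assert(std::numeric_limits<Dest>::digits >=
                      std::numeric_limits<Src>::digits,
        "destination cannot represent every source value");
    return static_cast<Dest>(s);
}

static_assert(safe_cast<long long>(42) == 42, "sanity check");
// safe_cast<int>(std::uint64_t{1}) would fail to compile: too narrow.
```
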
Joseph Busch
494724578a Enhance /crawl API endpoint with local server information (RIPD-1644):
The /crawl API endpoint allows developers to examine the structure of
the XRP Ledger's overlay network.

This commit adds additional information about the local server to the
/crawl endpoint, making it possible for developers to create data-rich
network-wide status dashboards.

Related:
 - https://developers.ripple.com/peer-protocol.html
 - https://github.com/ripple/rippled-network-crawler
2019-01-18 12:13:21 -08:00
Nik Bougalis
ea76103d5f Detect malformed data earlier during deserialization (RIPD-1695):
When deserializing specially crafted data, the code would ignore certain
types of errors. Reserializing objects created from such data results in
failures or generates a different serialization, which is not ideal.

Also addresses: RIPD-1677, RIPD-1682, RIPD-1686 and RIPD-1689.

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these issues.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2019-01-18 12:13:21 -08:00
Nik Bougalis
2151110976 Improve message buffering (RIPD-1699):
Specially crafted messages could cause the server to buffer large
amounts of data, increasing memory pressure.

This commit changes how messages are buffered and imposes a limit
on the amount of data that the server is willing to buffer.

Acknowledgements:
Ripple thanks Aaron Hook for responsibly disclosing this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find. For information
on Ripple's Bug Bounty program, please visit:

    https://ripple.com/bug-bounty
2019-01-17 18:39:04 -08:00
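
A rough sketch of the kind of cap the commit describes; the class and its behavior are hypothetical, not the buffering scheme rippled actually uses.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical bounded receive buffer: append() refuses to grow past a
// fixed cap, so a peer sending crafted messages cannot force the server
// to buffer an unbounded amount of data.
class BoundedBuffer
{
    std::vector<std::uint8_t> data_;
    std::size_t const maxBytes_;

public:
    explicit BoundedBuffer(std::size_t maxBytes) : maxBytes_(maxBytes) {}

    // Returns false (and drops the data) if the cap would be exceeded.
    bool append(std::uint8_t const* bytes, std::size_t count)
    {
        if (data_.size() + count > maxBytes_)
            return false;
        data_.insert(data_.end(), bytes, bytes + count);
        return true;
    }
};
```
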
Nik Bougalis
dfb45baa93 Set version to 1.2.0-b10 2018-12-28 13:32:27 -08:00
f443439f1f Add zaphod.alloy.ee to default hub configuration 2018-12-28 13:31:19 -08:00
Howard Hinnant
6d0b108ec1 Upgrade sqlite to 3.26 (fix #2810) 2018-12-28 13:31:19 -08:00
Howard Hinnant
710f9ee1ac Relax overly-strict assert in Serializer constructor (RIPD-1701):
The constructor would previously assert that the specified buffer pointer
was non-null, even if the buffer size was specified as 0. While that check
is reasonable, it also made this API more difficult to use.
2018-12-28 13:31:19 -08:00
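
In sketch form, the relaxed precondition amounts to allowing a null pointer when, and only when, the size is zero (illustrative, not the exact rippled assert):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative precondition: a null buffer is fine as long as its length
// is zero; a null buffer with a nonzero length is still a bug.
void checkBuffer(void const* data, std::size_t size)
{
    assert(data != nullptr || size == 0);
}
```
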
Howard Hinnant
76d5ecb595 Verify invariants when calling SHAMapInnerNodeV2::addRaw (RIPD-1700) 2018-12-28 13:32:09 -08:00
Joseph Busch
ba9ca1378e Strict input validation against expected schema (RIPD-1709, RIPD-1710) 2018-12-28 13:31:19 -08:00
Miguel Portilla
1be8094ee2 Improve crawl shard resource usage 2018-12-28 13:31:19 -08:00
Nik Bougalis
96c949a997 Set version to 1.2.0-b9 2018-12-11 13:01:05 -08:00
Mike Ellery
9121e26708 Update libarchive to 3.3.3 from official repo 2018-12-11 12:52:29 -08:00
Edward Hennis
2432f13903 Reserve correct vector size for fee calculations:
* Using txnsExpected_, which is influenced by both the config
  and network behavior, can reserve far too much or far too
  little memory, wasting time and resources.
* Not an issue during normal operation, but a user could
  cause problems on their local node with extreme configuration
  settings.
2018-12-11 12:51:46 -08:00
Edward Hennis
259fb1c32e Fix unit test with incorrectly hard-coded parameter:
* initFee relied on logic that was not obvious. Add some
  documentation explaining why certain values were used.
* Because initFee had side effects, callers needed to repeat the
  max queue size computation, making the initial problem more
  likely. Instead, return the max queue size value, so the caller
  can reuse it.
* A newer test (testInFlightBalance()) was incorrectly using a
  hard-coded queue limit. Fix it to use initFee's new return
  value.
2018-12-11 12:51:46 -08:00
Rome Reginelli
e0515b0015 Correct amount serialization comments 2018-12-11 12:51:46 -08:00
John Freeman
412a3ec710 Fix the --rpc_port command-line argument
The --rpc_port command-line option is effectively ignored. We construct
an `Endpoint` with the given port, but then drop it on the floor.
(Perhaps the author thought the `Endpoint::at_port` method is a mutation
instead of a transformation.) This small change adds the missing
assignment to hold on to the new endpoint.

Fixes #2764
2018-12-11 12:50:05 -08:00
Nik Bougalis
30bba29da2 Merge master (1.1.2) into develop (1.2.0-b8) 2018-12-11 12:48:32 -08:00
Nik Bougalis
4f3a76dec0 Set version to 1.1.2 2018-11-29 21:49:10 -08:00
Nik Bougalis
61f443e3bb Properly bypass connection limits for cluster peers (fix #2795) 2018-11-29 21:38:35 -08:00
Brad Chase
bd2a38f584 Improve preferred ledger calculation:
This changeset ensures the preferred ledger calculation
properly distinguishes the absence of trusted validations
from a preferred ledger which is the genesis ledger.
2018-11-29 21:38:12 -08:00
Nik Bougalis
4cff94f7a4 Set version to 1.2.0-b8 2018-11-25 17:39:49 -08:00
Mark Travis
fbdbffed67 Report duration in current state. 2018-11-25 17:37:31 -08:00
Scott Schurr
ad5c5f1969 STObject::applyTemplate() throws with description of error:
The `STObject` member function `setType()` has been renamed to
`applyTemplate()` and modified to throw if there is a template
mismatch.

In certain cases, the error description carried by the exception
is used to better indicate why a particular transaction was
considered ill-formed.

Fixes #2585.
2018-11-25 17:37:31 -08:00
John Freeman
c354809e1c Implement missing string conversions for JSON
`Json::Value::isConvertibleTo` indicates that unsigned integers and
reals are convertible to string, but trying to do so (with
`Json::Value::asString`) throws an exception because its internal switch
is missing these cases. This change fills them in (and adds tests).

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing this issue.

Closes #2778
2018-11-25 17:37:14 -08:00
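
A sketch of the missing cases, using a stand-in value-kind enum rather than jsoncpp's real internals; only the idea (cover the unsigned-integer and real branches instead of throwing) is taken from the commit.

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Stand-in for the internal switch described above (not Json::Value itself).
enum class ValueKind { Int, UInt, Real, String };

std::string
asString(ValueKind kind, std::int64_t i, std::uint64_t u, double d,
         std::string const& s)
{
    switch (kind)
    {
        case ValueKind::Int:    return std::to_string(i);
        case ValueKind::UInt:   return std::to_string(u);  // previously missing
        case ValueKind::Real:   return std::to_string(d);  // previously missing
        case ValueKind::String: return s;
    }
    throw std::runtime_error("value is not convertible to string");
}
```
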
John Freeman
dc4d76f626 Prefer regex to manual parsing in parseURL:
Although `parseURL` used a regex to pull the authority out of the URL
being parsed, it performed manual parsing of the hostname and port.

This commit rolls the parsing of the username and password, if any,
directly into the regex. The hostname can be a name, an IPv4 or an
IPv6 address.

Fixes #2751
2018-11-21 17:08:21 -08:00
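
For flavor, here is a self-contained regex-based URL splitter in the same spirit; the pattern and group layout are illustrative only and, unlike rippled's `parseURL`, this sketch does not handle bracketed IPv6 hosts.

```cpp
#include <iostream>
#include <regex>
#include <string>

// Illustrative only: pull scheme, optional user:password, host, optional
// port, and path out of a URL with a single regex instead of manual parsing.
int main()
{
    std::string const url = "wss://alice:secret@example.com:51233/admin";
    std::regex const re(
        R"(^([A-Za-z][A-Za-z0-9+.-]*)://(?:([^:@/]*)(?::([^@/]*))?@)?([^:/?#]+)(?::(\d+))?(/.*)?$)");
    std::smatch m;
    if (std::regex_match(url, m, re))
    {
        std::cout << "scheme=" << m[1] << " user=" << m[2]
                  << " host="  << m[4] << " port=" << m[5]
                  << " path="  << m[6] << '\n';
    }
}
```
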
Edward Hennis
c1a02440dc Load validator list from file:
* Adds local file:// URL support to the [validator_list_sites] stanza.
  The file:// URL must not contain a hostname. Allows a rippled node
  operator to "sideload" a new list if their node is unable to reach
  a validator list's web site before an old list expires. Lists
  loaded from a file will be validated in the same way a downloaded
  list is validated.
* Generalize file/dir "guards" from Config test so they can be reused
  in other tests.
* Check for error when reading validators.txt. Saves some parsing and
  checking of an empty string, and will give a more meaningful error.
* Completes RIPD-1674.
2018-11-20 19:49:39 -08:00
Edward Hennis
e7a69cce65 Account for minimum reserve in potential spend:
* Relevant when deciding whether an account can queue multiple
  transactions. If the potential spend of the already queued
  transactions would dip into the reserve, the reserve is
  preserved for fees.
* Also change several direct modifications of the owner count to
  call adjustOwnerCount to preserve overflow checking.
* Update related unit testcase
* Resolves #2251
2018-11-20 19:49:39 -08:00
Howard Hinnant
60dc949314 Remove custom terminate handler
* Reduce the amount of code we have to maintain.
* Remove the potential for degrading stack dumps.
2018-11-20 19:45:02 -08:00
Nik Bougalis
cc824685e7 Set version to 1.2.0-b7 2018-11-09 07:40:46 -08:00
JoelKatz
be70d81bd7 Perform some extra checks on ledger changes
Perform some extra checks on the close time and sequence number
of a candidate ledger for network consensus. This tightens
defenses against some "insane/hostile supermajority" attacks.
2018-11-09 07:40:41 -08:00
JoelKatz
6df96f08df Ensure websocket PING/PONG token has length 8 (RIPD-1670) 2018-11-09 07:40:41 -08:00
JoelKatz
9ad2b9be45 Fix a rare race condition on shutdown:
If we happen to get very unlucky and close the door when no
accept operation is pending, the do_accept loop would never
terminate.
2018-11-09 07:40:41 -08:00
JoelKatz
0d2b2923da Control memory growth from slow writes
* Don't allow a write batch to grow without bound
* Don't fetch history if write load is high
2018-11-09 07:40:41 -08:00
Mike Ellery
265f5f1fb1 Delete old protobuf subtree 2018-11-08 18:58:13 -08:00
Mike Ellery
a2ab6c4b02 Build protobuf as ExternalProject when not found 2018-11-08 18:58:13 -08:00
Mike Ellery
6bdc9e7b30 Use correct manifest cache when loading ValidatorList 2018-11-08 18:58:13 -08:00
Nik Bougalis
c71eb45240 Eliminate potential undefined behavior (RIPD-1685):
Under certain conditions, we could call `memcpy` or `memcmp` with a null
source pointer. Even when specifying 0 as the amount of data to copy this
could result in undefined behavior under the C and C++ standards.

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these issues.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2018-11-08 18:58:13 -08:00
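
The usual fix is simply to guard the call; a minimal illustration (not rippled's code):

```cpp
#include <cstddef>
#include <cstring>

// Even with a length of 0, calling std::memcpy or std::memcmp with a null
// pointer is undefined behavior, so skip the call when there is nothing
// to copy.
void
copyBytes(void* dst, void const* src, std::size_t n)
{
    if (n != 0)
        std::memcpy(dst, src, n);
}
```
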
Nik Bougalis
753600a2a0 Reset the validator list fetch timer if an error occurs 2018-11-08 18:58:12 -08:00
Nik Bougalis
945493d9cf Allow servers to detect transaction censorship attempts (RIPD-1626):
The XRP Ledger is designed to be censorship resistant. Any attempt to
censor transactions would require coordinated action by a majority of
the system's validators.

Importantly, the design of the system is such that such an attempt is
detectable and can be easily proven, since every validator must sign
the validations it publishes.

This commit adds an automated censorship detector. While the server is
in sync, the detector tracks all transactions that, in the view of the
server, should have been included in a closed ledger, and issues warnings
of increasing severity for any transactions that still have not been
included after several rounds of consensus.
2018-11-08 18:58:11 -08:00
Nik Bougalis
2a8b0e4b88 Set version to 1.2.0-b6 2018-11-06 10:27:29 -08:00
Nik Bougalis
513b1dd194 Add support for Ed25519 seeds encoded using ripple-lib:
When Ed25519 support was added to ripple-lib, a way to specify
whether a seed should be used to derive a "classic" secp256k1
keypair or a "new" Ed25519 keypair was needed, and the
requirements were that:

1. existing seeds would continue to, correctly, generate a
   secp256k1 keypair.
2. users would not have to know whether a seed was used to
   generate a secp256k1 or an Ed25519 keypair.

To address these requirements, the decision was made to encode
the type of key within the seed and a custom encoding was
designed.

The encoding uses a token type of 1 and prefixes the actual
seed with a 2 byte header, selected to ensure that all such
keypairs will, when encoded, begin with the string "sEd".

This custom encoding is non-standard and was not previously
documented; as a result, it is not widely supported and other
software may treat such keys as invalid. This can make it
difficult for users who have stored such a seed to use
wallets or other tooling that is not based on ripple-lib.

This commit adds support to rippled for automatically
detecting and properly handling such seeds.
2018-11-06 10:27:13 -08:00
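
As a sketch only, the routing decision can be as simple as checking the human-readable prefix the commit describes; rippled's real detection also decodes the seed and validates its header bytes, which this one-liner skips.

```cpp
#include <string>

// Illustrative check: ripple-lib encodes Ed25519 seeds so their base58
// form always begins with "sEd"; classic secp256k1 seeds do not.
bool
looksLikeRippleLibEd25519Seed(std::string const& encodedSeed)
{
    return encodedSeed.rfind("sEd", 0) == 0;  // starts_with, pre-C++20
}
```
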
Nik Bougalis
77462b8f72 Remove deprecated 'validation_seed' RPC command:
The 'validation_seed' RPC command was used to change the validation
key used by a validator at runtime.

Its implementation was commented out in commit fa796a2eb5,
which has been part of the codebase since the 0.30.0 release,
and there are no plans to reintroduce the functionality at this
point.

Validator operators should migrate to using validator manifests
instead.

This fixes #2748.
2018-11-06 10:27:12 -08:00
Nik Bougalis
1682fe3a39 Cleanup unused Beast bits and pieces:
This cleanup does not remove Boost.Beast code, only old-style Beast
code that is no longer relevant or helpful.
2018-11-06 10:27:10 -08:00
Edward Hennis
58f786cbb4 Make the FeeEscalation amendment permanent (RIPD-1654):
The FeeEscalation amendment has been enabled on the XRP Ledger network
since May 19, 2016. The transaction which activated this amendment is:
5B1F1E8E791A9C243DD728680F108FEF1F28F21BA3B202B8F66E7833CA71D3C3.

This change removes all conditional code based around the FeeEscalation
amendment, but leaves the amendment definition itself, since removing the
definition would cause nodes to think an unknown amendment was activated,
causing them to become amendment blocked.

The commit also removes the redundant precomputed hashes from the
supportedAmendments vector.
2018-11-06 10:26:29 -08:00
Edward Hennis
a96cb8fc1c Remove undocumented experimental options from RPC sign (RIPD-1653):
The `x_assume_tx` and `x_queue_okay` experimental options were
associated with the transaction queue and were never officially
supported.
2018-11-06 10:26:29 -08:00
Joe Loser
c587012e5c Inline calls to cachedRead:
Problem:
- There are only a few call sites to cachedRead, and all of them
  currently do more work than is required since we know the type in each
  case.

Solution:
- "Inline" the codepath to cachedRead, but do not check if the type is
  valid. In all such call sites, we know the keylet to read directly.

This fixes #2550
2018-11-06 10:26:29 -08:00
Mike Ellery
ad4bbd8dff Add source filtering for coverage with option to disable 2018-11-06 10:26:29 -08:00
Mike Ellery
202d91c9f0 Remove unused json_batchallocator.h 2018-11-06 10:26:29 -08:00
Howard Hinnant
146ea5d44e Remove a use after std::move
Fixes: #2538
Fixes: #2536
2018-11-06 10:26:29 -08:00
Howard Hinnant
157c066f2b Fix memory leak in Json move assignment operator
*  When move assignment created a cyclic ownership pattern,
   memory was leaked. This patch breaks the cycle.

*  Fixes: #2572
2018-11-06 10:26:29 -08:00
Howard Hinnant
156e8dae83 Replace WaitableEvent with portable std primitives:
The WaitableEvent class was a leftover from the pre-Boost
version of Beast and used Windows- and pthread-specific
APIs.

This refactor replaces that functionality by using only
interfaces provided by the C++ standard, making the code
more portable.

Closes #2402.
2018-11-06 10:26:29 -08:00
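
A minimal sketch of a waitable event built only from standard primitives, in the spirit of this refactor; the class below (including its auto-reset behavior) is illustrative, not the actual rippled implementation.

```cpp
#include <condition_variable>
#include <mutex>

// Portable "waitable event": signal() wakes any waiters; wait() blocks
// until signaled, then resets so the event can be reused.
class WaitableEvent
{
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;

public:
    void signal()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            signaled_ = true;
        }
        cv_.notify_all();
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return signaled_; });
        signaled_ = false;  // auto-reset
    }
};
```
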
Markus Teufelberger
5e96da51f9 Remove the state file for the random number generator 2018-11-06 10:26:29 -08:00
Nik Bougalis
cb71d493a0 Set version to 1.2.0-b5 2018-10-23 08:33:18 -07:00
MarkusTeufelberger
8124c1f51f remove duplicated include
The errno.h header is already included for both Linux and Android above
2018-10-23 08:24:11 -07:00
Nik Bougalis
6ed2270bc9 Merge master (1.1.1) into develop (1.2.0-b4) 2018-10-23 08:21:43 -07:00
Mike Ellery
4e7c038520 Set version to 1.2.0-b4 2018-10-19 12:24:51 -07:00
1535239824@qq.com
7b48dc36f5 Add fixTakerDryOfferRemoval amendment 2018-10-19 12:23:25 -07:00
Miguel Portilla
d5c0e1216d Change conflicting example websocket port 2018-10-19 12:22:47 -07:00
Scott Schurr
a999894dae Allow rippled to compile with C++17:
Many of the warnings on Windows were not resolved, just
silenced with _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS.
They need to be resolved in a future commit.
2018-10-19 12:21:57 -07:00
Scott Schurr
63e167b7a3 ledger_entry RPC by index matches other forms [RIPD-1538] 2018-10-19 12:21:10 -07:00
MarkusTeufelberger
8fc6a8175b Remove unused execinfo.h header
Fixes #2671 and #2159
2018-10-19 12:20:11 -07:00
Edward Hennis
af1697cc6a Improve RPC error message for fee command:
* If rippled is not synced to the network, `fee` will return a
  "no network" error instead of the possibly confusing "not enabled"
  error.
* Resolves RIPD-1588
2018-10-19 12:19:20 -07:00
Mark Travis
e98c76110a Remove outdated example configs. 2018-10-19 12:18:29 -07:00
Mike Ellery
7fe1d4b9c2 Accept redirects from validator list sites:
Honor location header/redirect from validator sites. Limit retries per
refresh interval to 3. Shorten refresh interval after HTTP/network errors.

Fixes: RIPD-1669
2018-10-19 12:16:57 -07:00
Nik Bougalis
b36e11bc49 Properly handle expired validator lists when validating (RIPD-1661):
A validator that was configured to use a published validator list could
exhibit aberrant behavior if that validator list expired.

This commit introduces additional logic that makes validators operating
with an expired validator list bow out of the consensus process instead
of continuing to publish validations. Normal operation will resume once
a non-expired validator list becomes available.

This commit also enhances status reporting when using the `server_info`
and `validators` commands. Before, only the expiration time of the list
would be returned; now, its current status is also reported in a format
that is clearer.
2018-10-19 12:15:36 -07:00
seelabs
72e6005f56 Set version to 1.1.1 2018-10-19 13:12:40 -04:00
Nik Bougalis
152d698957 Properly handle expired validator lists when validating (RIPD-1661):
A validator that was configured to use a published validator list could
exhibit aberrant behavior if that validator list expired.

This commit introduces additional logic that makes validators operating
with an expired validator list bow out of the consensus process instead
of continuing to publish validations. Normal operation will resume once
a non-expired validator list becomes available.

This commit also enhances status reporting when using the `server_info`
and `validators` commands. Before, only the expiration time of the list
would be returned; now, its current status is also reported in a format
that is clearer.
2018-10-19 13:08:56 -04:00
Mike Ellery
7c96bbafbd CI rpm build fix 2018-10-11 11:08:55 -07:00
Mike Ellery
bdaad19e70 Accept redirects from validator list sites:
Honor location header/redirect from validator sites. Limit retries per
refresh interval to 3. Shorten refresh interval after HTTP/network errors.

Fixes: RIPD-1669
2018-10-11 11:08:27 -07:00
seelabs
63c3fc30d8 Set version to 1.2.0-b3 2018-10-10 13:09:25 -04:00
Joe Loser
1ac9694dbc Simplify strHex:
Problem:
- There are several specific overloads with some custom code that can be
  easily replaced using Boost.Hex.

Solution:
- Introduce `strHex(itr, itr)` to return a string given a begin and end
  iterator.
- Remove `strHex(itr, size)` in favor of `strHex(T)`, where T is
  something that has a `begin()` member function. This allows us to
  remove the strHex overloads for `std::string`, Blob, and Slice.
2018-10-10 13:09:22 -04:00
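
A sketch of an iterator-based strHex built on Boost.Hex, as described above; the exact rippled signature and output case may differ. A container overload can simply forward to this one, e.g. `strHex(blob.begin(), blob.end())`.

```cpp
#include <boost/algorithm/hex.hpp>
#include <iterator>
#include <string>

// Hex-encode the bytes in [begin, end) into a string using Boost.Hex.
template <class FwdIt>
std::string
strHex(FwdIt begin, FwdIt end)
{
    std::string result;
    boost::algorithm::hex(begin, end, std::back_inserter(result));
    return result;
}
```
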
Miguel Portilla
3661dc88fe Add RPC command shard crawl (RIPD-1663) 2018-10-10 12:16:01 -04:00
Edward Hennis
86c066cd7e Include entire src tree in multiconfig projects:
* For example Visual Studio, XCode. This will allow easily working with
  any file in the IDE.
* Also ignore the file created by Visual Studio when using cmake
  integration.
* Use conditional for unity/nounity sources (h/t @mellery451)
2018-10-10 10:25:25 -04:00
Mike Ellery
d70464032c Add dependency for NuDB ExternalProject 2018-10-10 10:19:00 -04:00
Scott Schurr
0bbe6e226c Remove beast::Journal default constructor 2018-10-10 10:18:03 -04:00
Mike Ellery
49e61cc0a6 Improve codecov builds:
- allow private token for jenkins/codecov
- add custom targets for gcc/clang to generate codecov reports
- use CMake coverage target in jenkins build
- optional coverage_test argument when configuring the build
2018-10-10 10:15:10 -04:00
Scott Schurr
6572fc8e95 Implement MultiSignReserve amendment [RIPD-1647]:
Reduces the account reserve for a multisigning SignerList from
(conditionally) 3 to 10 OwnerCounts to (unconditionally) 1
OwnerCount.  Includes a transition process.
2018-10-01 18:17:33 -07:00
Nik Bougalis
3ce4dda5cb Set version to 1.2.0-b2 2018-10-01 11:26:31 -07:00
Edward Hennis
7295cf979b Grow the open ledger expected transactions quickly (RIPD-1630):
* When increasing the expected ledger size, add on an extra 20%.
* When decreasing the expected ledger size, take the minimum of the
  validated ledger size or the old expected size, and subtract another 50%.
* Update fee escalation documentation.
* Refactor the FeeMetrics object to use values from Setup
2018-10-01 11:26:22 -07:00
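
The arithmetic described above, written out as a sketch with illustrative names (not the FeeMetrics code itself): growth adds 20% on top of the previous size, while shrinking starts from the smaller of the validated and expected sizes and then halves it.

```cpp
#include <algorithm>
#include <cstddef>

// Grow aggressively: previous expected size plus an extra 20%.
std::size_t
grownTarget(std::size_t expected)
{
    return expected + expected / 5;
}

// Shrink conservatively: min(validated, expected) minus another 50%.
std::size_t
shrunkenTarget(std::size_t expected, std::size_t validatedLedgerSize)
{
    auto const base = std::min(validatedLedgerSize, expected);
    return base - base / 2;
}
```
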
Edward Hennis
e14f913244 Update TxQ developer docs:
* Rename a couple of member variables for clarity.
2018-10-01 11:26:22 -07:00
Joe Loser
cd1c5a30dd Add user defined literals for megabytes and kilobytes 2018-10-01 11:26:22 -07:00
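
A short sketch of what such byte-size literals can look like; the suffix names below are illustrative and may not match the ones rippled defines.

```cpp
#include <cstddef>

// User-defined literals for byte sizes (hypothetical suffixes).
constexpr std::size_t operator"" _KB(unsigned long long n)
{
    return static_cast<std::size_t>(n) * 1024;
}

constexpr std::size_t operator"" _MB(unsigned long long n)
{
    return static_cast<std::size_t>(n) * 1024 * 1024;
}

static_assert(2_KB == 2048, "kilobytes literal");
static_assert(1_MB == 1024 * 1024, "megabytes literal");
```
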
Joe Loser
8dd8433bb6 Remove unused function in AutoSocket.h 2018-10-01 07:40:56 -07:00
Scott Schurr
eeb9d92fb0 Add RPCCall unit tests (RIPD-1634) 2018-10-01 07:40:56 -07:00
Scott Schurr
4104778067 Improve transaction error condition handling (RIPD-1578, RIPD-1593):
As described in #2314, when an offer executed with `Fill or Kill`
semantics, the server would return `tesSUCCESS` even if the order
couldn't be filled and was aborted. This would require additional
processing of metadata by users to determine the effects of the
transaction.

This commit introduces the `fix1578` amendment which, if enabled,
will cause the server to return the new `tecKILLED` error code
instead of `tesSUCCESS` for `Fill or Kill` orders that could not
be filled.

Additionally, the `fix1578` amendment will prevent the setting of
the `No Ripple` flag on trust lines with negative balance; trying
to set the flag on such a trust line will fail with the new error
code `tecNEGATIVE_BALANCE`.
2018-09-30 14:10:40 -07:00
Spec
4dcb3c9199 Avoid dispatching multiple fetch pack threads 2018-09-30 13:54:59 -07:00
Nik Bougalis
b0092aee24 Set version to 1.2.0-b1 2018-09-28 09:15:12 -07:00
Mike Ellery
bb52b04c25 Remove subtrees for soci, sqlite, lz4, snappy, nudb 2018-09-28 09:15:06 -07:00
Mike Ellery
83dac8b382 Use ExternalProject for NIH dependencies
Fixes: RIPD-1648

 - use ExternalProject for snappy, lz4, SOCI, and sqlite3
 - use FetchContent for NuDB
 - update SOCI from 79e222e3c2278e6108137a2d26d3689418b37544 to
   3a1f602b3021b925d38828e3ff95f9e7f8887ff7
 - update lz4 from c10863b98e1503af90616ae99725ecd120265dfb to v1.8.2
 - update sqlite3 from 3.21 to 3.24
 - update snappy from b02bfa754ebf27921d8da3bd2517eab445b84ff9 to 1.1.7
 - update NuDB from 00adc6a4f16679a376f40c967f77dfa544c179c1 to 1.0.0
2018-09-28 09:15:06 -07:00
Mike Ellery
8a4951947d Improve ssl and nih in cmake:
- provide better override handling for ssl dir
- include build type in nih cache for single config
  to avoid cmake cache collision
2018-09-28 09:15:06 -07:00
Mike Ellery
ab6163e989 Remove test sensitivity to error text from OpenSSL 2018-09-28 09:15:06 -07:00
Mike Ellery
5741a8356f Refine json object test for NDEBUG case 2018-09-28 09:15:06 -07:00
seelabs
b2f2d89a08 Support boost 1.68 2018-09-28 09:15:06 -07:00
seelabs
c946043280 Suppress clang warning on intentional self assignment 2018-09-28 09:15:06 -07:00
Miguel Portilla
820546c873 Report fetch pack errors with shards 2018-09-28 09:15:06 -07:00
Scott Schurr
b36e9dd1b4 Remove noisy log write from Stoppable.cpp 2018-09-28 09:15:06 -07:00
Scott Schurr
582d1691a9 Improve error descriptions in JSONRPC unit test 2018-09-28 09:15:06 -07:00
1342 changed files with 16220 additions and 544995 deletions

1
.gitignore vendored
View File

@@ -92,3 +92,4 @@ Builds/VisualStudio2015/*.sdf
# MSVC
*.pdb
.vs/
CMakeSettings.json

9
.gitmodules vendored
View File

@@ -1,9 +0,0 @@
[submodule "src/nudb/extras/beast"]
path = src/nudb/extras/beast
url = https://github.com/vinniefalco/Beast.git
[submodule "src/nudb/extras/rocksdb"]
path = src/nudb/extras/rocksdb
url = https://github.com/facebook/rocksdb.git
[submodule "src/nudb/doc/docca"]
path = src/nudb/doc/docca
url = https://github.com/vinniefalco/docca.git

View File

@@ -36,26 +36,10 @@ matrix:
include:
- compiler: gcc
env: GCC_VER=5 TARGET=debug
# - compiler: gcc
# env: GCC_VER=5 TARGET=debug.nounity
# - compiler: gcc
# env: GCC_VER=5 TARGET=coverage PATH=$PWD/cmake/bin:$PATH
env: GCC_VER=5 BUILD_TYPE=Debug
- compiler: clang
env: GCC_VER=5 TARGET=debug
# - compiler: clang
# env: GCC_VER=5 TARGET=debug.nounity
# The clang cmake builds do not link.
# - compiler: clang
# env: GCC_VER=5 TARGET=debug CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PWD/cmake/bin:$PATH
# - compiler: clang
# env: GCC_VER=5 TARGET=debug.nounity CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PWD/cmake/bin:$PATH
env: GCC_VER=5 BUILD_TYPE=Debug
cache:
directories:

View File

@@ -115,3 +115,134 @@ macro (exclude_if_included target_)
endif ()
endmacro ()
function (print_ep_logs _target)
ExternalProject_Get_Property (${_target} STAMP_DIR)
add_custom_command(TARGET ${_target} POST_BUILD
COMMENT "${_target} BUILD OUTPUT"
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-out.log
-P ${CMAKE_SOURCE_DIR}/Builds/CMake/echo_file.cmake
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-err.log
-P ${CMAKE_SOURCE_DIR}/Builds/CMake/echo_file.cmake)
endfunction ()
#[=========================================================[
This is a function override for one function in the
standard ExternalProject module. We want to change
the generated build script slightly to include printing
the build logs in the case of failure. Those modifications
have been made here. This function override could break
in the future if the ExternalProject module changes internal
function names or changes the way it generates the build
scripts.
See:
https://gitlab.kitware.com/cmake/cmake/blob/df1ddeec128d68cc636f2dde6c2acd87af5658b6/Modules/ExternalProject.cmake#L1855-1952
#]=========================================================]
function(_ep_write_log_script name step cmd_var)
ExternalProject_Get_Property(${name} stamp_dir)
set(command "${${cmd_var}}")
set(make "")
set(code_cygpath_make "")
if(command MATCHES "^\\$\\(MAKE\\)")
# GNU make recognizes the string "$(MAKE)" as recursive make, so
# ensure that it appears directly in the makefile.
string(REGEX REPLACE "^\\$\\(MAKE\\)" "\${make}" command "${command}")
set(make "-Dmake=$(MAKE)")
if(WIN32 AND NOT CYGWIN)
set(code_cygpath_make "
if(\${make} MATCHES \"^/\")
execute_process(
COMMAND cygpath -w \${make}
OUTPUT_VARIABLE cygpath_make
ERROR_VARIABLE cygpath_make
RESULT_VARIABLE cygpath_error
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if(NOT cygpath_error)
set(make \${cygpath_make})
endif()
endif()
")
endif()
endif()
set(config "")
if("${CMAKE_CFG_INTDIR}" MATCHES "^\\$")
string(REPLACE "${CMAKE_CFG_INTDIR}" "\${config}" command "${command}")
set(config "-Dconfig=${CMAKE_CFG_INTDIR}")
endif()
# Wrap multiple 'COMMAND' lines up into a second-level wrapper
# script so all output can be sent to one log file.
if(command MATCHES "(^|;)COMMAND;")
set(code_execute_process "
${code_cygpath_make}
execute_process(COMMAND \${command} RESULT_VARIABLE result)
if(result)
set(msg \"Command failed (\${result}):\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
message(FATAL_ERROR \"\${msg}\")
endif()
")
set(code "")
set(cmd "")
set(sep "")
foreach(arg IN LISTS command)
if("x${arg}" STREQUAL "xCOMMAND")
if(NOT "x${cmd}" STREQUAL "x")
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
endif()
set(cmd "")
set(sep "")
else()
string(APPEND cmd "${sep}${arg}")
set(sep ";")
endif()
endforeach()
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
file(GENERATE OUTPUT "${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake" CONTENT "${code}")
set(command ${CMAKE_COMMAND} "-Dmake=\${make}" "-Dconfig=\${config}" -P ${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake)
endif()
# Wrap the command in a script to log output to files.
set(script ${stamp_dir}/${name}-${step}-$<CONFIG>.cmake)
set(logbase ${stamp_dir}/${name}-${step})
set(code "
${code_cygpath_make}
function (_echo_file _fil)
file (READ \${_fil} _cont)
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"\${_cont}\")
endfunction ()
set(command \"${command}\")
execute_process(
COMMAND \${command}
RESULT_VARIABLE result
OUTPUT_FILE \"${logbase}-out.log\"
ERROR_FILE \"${logbase}-err.log\"
)
if(result)
set(msg \"Command failed: \${result}\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"Build output for ${logbase} : \")
_echo_file (\"${logbase}-out.log\")
_echo_file (\"${logbase}-err.log\")
set(msg \"\${msg}\\nSee above\\n\")
message(FATAL_ERROR \"\${msg}\")
else()
set(msg \"${name} ${step} command succeeded. See also ${logbase}-*.log\")
message(STATUS \"\${msg}\")
endif()
")
file(GENERATE OUTPUT "${script}" CONTENT "${code}")
set(command ${CMAKE_COMMAND} ${make} ${config} -P ${script})
set(${cmd_var} "${command}" PARENT_SCOPE)
endfunction()

View File

@@ -0,0 +1,60 @@
#[=========================================================[
SQLITE doesn't provide build files in the
standard source-only distribution. So we wrote
a simple cmake file and we copy it to the
external project folder so that we can use
this file to build the lib with ExternalProject
#]=========================================================]
add_library (sqlite3 STATIC sqlite3.c)
#[=========================================================[
When compiled with SQLITE_THREADSAFE=1, SQLite operates
in serialized mode. In this mode, SQLite can be safely
used by multiple threads with no restriction.
NOTE: This implies a global mutex!
When compiled with SQLITE_THREADSAFE=2, SQLite can be
used in a multithreaded program so long as no two
threads attempt to use the same database connection at
the same time.
NOTE: This is the preferred threading model, but not
currently enabled because we need to investigate our
use-model and concurrency requirements.
TODO: consider whether any other options should be
used: https://www.sqlite.org/compile.html
#]=========================================================]
target_compile_definitions (sqlite3
PRIVATE
SQLITE_THREADSAFE=1
HAVE_USLEEP=1)
target_compile_options (sqlite3
PRIVATE
$<$<BOOL:${MSVC}>:
-wd4100
-wd4127
-wd4232
-wd4244
-wd4701
-wd4706
-wd4996
>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-array-bounds>)
install (
TARGETS
sqlite3
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
install (
FILES
sqlite3.h
sqlite3ext.h
DESTINATION include)

View File

@@ -458,6 +458,14 @@ function(_Boost_GUESS_COMPILER_PREFIX _ret)
elseif (GHSMULTI)
set(_boost_COMPILER "-ghs")
elseif("x${CMAKE_CXX_COMPILER_ID}" STREQUAL "xMSVC")
#[========================================================[
NOTE: newer versions of FindBoost from kitware
change this version check to use MSVC_TOOLSET_VERSION.
That variable only exists in make 3.12 or greater, so
until all envs (including bundled visual studio) have
this min version of cmake, stick with this
CMAKE_CXX_COMPILER_VERSION check
#]========================================================]
if (NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 19.10)
set(_boost_COMPILER "-vc141;-vc140")
elseif (NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 19)
@@ -816,10 +824,28 @@ function(_Boost_COMPONENT_DEPENDENCIES component _ret)
set(_Boost_TIMER_DEPENDENCIES chrono system)
set(_Boost_WAVE_DEPENDENCIES filesystem system serialization thread chrono date_time atomic)
set(_Boost_WSERIALIZATION_DEPENDENCIES serialization)
elseif(NOT Boost_VERSION VERSION_LESS 106700 AND Boost_VERSION VERSION_LESS 106800)
set(_Boost_CHRONO_DEPENDENCIES system)
set(_Boost_CONTEXT_DEPENDENCIES thread chrono system date_time)
set(_Boost_COROUTINE_DEPENDENCIES context system)
set(_Boost_FIBER_DEPENDENCIES context thread chrono system date_time)
set(_Boost_FILESYSTEM_DEPENDENCIES system)
set(_Boost_IOSTREAMS_DEPENDENCIES regex)
set(_Boost_LOG_DEPENDENCIES date_time log_setup system filesystem thread regex chrono atomic)
set(_Boost_MATH_DEPENDENCIES math_c99 math_c99f math_c99l math_tr1 math_tr1f math_tr1l atomic)
set(_Boost_MPI_DEPENDENCIES serialization)
set(_Boost_MPI_PYTHON_DEPENDENCIES python${component_python_version} mpi serialization)
set(_Boost_NUMPY_DEPENDENCIES python${component_python_version})
set(_Boost_RANDOM_DEPENDENCIES system)
set(_Boost_THREAD_DEPENDENCIES chrono system date_time atomic)
set(_Boost_TIMER_DEPENDENCIES chrono system)
set(_Boost_WAVE_DEPENDENCIES filesystem system serialization thread chrono date_time atomic)
set(_Boost_WSERIALIZATION_DEPENDENCIES serialization)
else()
if(NOT Boost_VERSION VERSION_LESS 106700)
if(NOT Boost_VERSION VERSION_LESS 106800)
set(_Boost_CHRONO_DEPENDENCIES system)
set(_Boost_CONTEXT_DEPENDENCIES thread chrono system date_time)
set(_Boost_CONTRACT_DEPENDENCIES thread chrono system date_time)
set(_Boost_COROUTINE_DEPENDENCIES context system)
set(_Boost_FIBER_DEPENDENCIES context thread chrono system date_time)
set(_Boost_FILESYSTEM_DEPENDENCIES system)
@@ -835,7 +861,7 @@ function(_Boost_COMPONENT_DEPENDENCIES component _ret)
set(_Boost_WAVE_DEPENDENCIES filesystem system serialization thread chrono date_time atomic)
set(_Boost_WSERIALIZATION_DEPENDENCIES serialization)
endif()
if(NOT Boost_VERSION VERSION_LESS 106800)
if(NOT Boost_VERSION VERSION_LESS 106900)
message(WARNING "New Boost version may have incorrect or missing dependencies and imported targets")
endif()
endif()
@@ -872,7 +898,8 @@ function(_Boost_COMPONENT_HEADERS component _hdrs)
set(_Boost_ATOMIC_HEADERS "boost/atomic.hpp")
set(_Boost_CHRONO_HEADERS "boost/chrono.hpp")
set(_Boost_CONTAINER_HEADERS "boost/container/container_fwd.hpp")
set(_Boost_CONTEXT_HEADERS "boost/context/all.hpp")
set(_Boost_CONTRACT_HEADERS "boost/contract.hpp")
set(_Boost_CONTEXT_HEADERS "boost/context/detail/fcontext.hpp")
set(_Boost_COROUTINE_HEADERS "boost/coroutine/all.hpp")
set(_Boost_DATE_TIME_HEADERS "boost/date_time/date.hpp")
set(_Boost_EXCEPTION_HEADERS "boost/exception/exception.hpp")
@@ -1081,7 +1108,7 @@ else()
# _Boost_COMPONENT_HEADERS. See the instructions at the top of
# _Boost_COMPONENT_DEPENDENCIES.
set(_Boost_KNOWN_VERSIONS ${Boost_ADDITIONAL_VERSIONS}
"1.67.0" "1.67" "1.66.0" "1.66" "1.65.1" "1.65.0" "1.65"
"1.68.0" "1.68" "1.67.0" "1.67" "1.66.0" "1.66" "1.65.1" "1.65.0" "1.65"
"1.64.0" "1.64" "1.63.0" "1.63" "1.62.0" "1.62" "1.61.0" "1.61" "1.60.0" "1.60"
"1.59.0" "1.59" "1.58.0" "1.58" "1.57.0" "1.57" "1.56.0" "1.56" "1.55.0" "1.55"
"1.54.0" "1.54" "1.53.0" "1.53" "1.52.0" "1.52" "1.51.0" "1.51"
@@ -1480,14 +1507,14 @@ if(NOT "x${CMAKE_CXX_COMPILER_ARCHITECTURE_ID}" STREQUAL "x" AND NOT Boost_VERSI
string(APPEND _boost_ARCHITECTURE_TAG "-")
# This needs to be kept in-sync with the section of CMakePlatformId.h.in
# inside 'defined(_WIN32) && defined(_MSC_VER)'
if(${CMAKE_CXX_COMPILER_ARCHITECTURE_ID} STREQUAL "IA64")
if(CMAKE_CXX_COMPILER_ARCHITECTURE_ID STREQUAL "IA64")
string(APPEND _boost_ARCHITECTURE_TAG "i")
elseif(${CMAKE_CXX_COMPILER_ARCHITECTURE_ID} STREQUAL "X86"
OR ${CMAKE_CXX_COMPILER_ARCHITECTURE_ID} STREQUAL "x64")
elseif(CMAKE_CXX_COMPILER_ARCHITECTURE_ID STREQUAL "X86"
OR CMAKE_CXX_COMPILER_ARCHITECTURE_ID STREQUAL "x64")
string(APPEND _boost_ARCHITECTURE_TAG "x")
elseif(${CMAKE_CXX_COMPILER_ARCHITECTURE_ID} MATCHES "^ARM")
elseif(CMAKE_CXX_COMPILER_ARCHITECTURE_ID MATCHES "^ARM")
string(APPEND _boost_ARCHITECTURE_TAG "a")
elseif(${CMAKE_CXX_COMPILER_ARCHITECTURE_ID} STREQUAL "MIPS")
elseif(CMAKE_CXX_COMPILER_ARCHITECTURE_ID STREQUAL "MIPS")
string(APPEND _boost_ARCHITECTURE_TAG "m")
endif()

View File

@@ -27,20 +27,20 @@ find_dependency (Boost 1.67
#[=========================================================[
OpenSSL
#]=========================================================]
if (APPLE AND NOT DEFINED ENV{OPENSSL_ROOT_DIR})
find_program (HOMEBREW brew)
if (NOT HOMEBREW STREQUAL "HOMEBREW-NOTFOUND")
execute_process (COMMAND ${HOMEBREW} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set (OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (APPLE)
find_program (homebrew brew)
if (homebrew)
execute_process (COMMAND ${homebrew} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if ((NOT DEFINED OPENSSL_ROOT) AND (DEFINED ENV{OPENSSL_ROOT}))
set (OPENSSL_ROOT $ENV{OPENSSL_ROOT})
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT}" OPENSSL_ROOT)
if (static OR APPLE OR MSVC)
set (OPENSSL_USE_STATIC_LIBS ON)
endif ()

View File

@@ -0,0 +1,15 @@
#[=========================================================[
This is a CMake script file that is used to write
the contents of a file to stdout (using the cmake
echo command). The input file is passed via the
IN_FILE variable.
#]=========================================================]
file (READ ${IN_FILE} contents)
## only print files that actually have some text in them
if (contents MATCHES "[a-z0-9A-Z]+")
execute_process(
COMMAND
${CMAKE_COMMAND} -E echo "${contents}")
endif ()

View File

@@ -0,0 +1,13 @@
# This patches unsigned-types.h in the soci official sources
# so as to remove type range check exceptions that cause
# us trouble when using boost::optional to select int values
file (STRINGS include/soci/unsigned-types.h sourcecode)
foreach (line_ ${sourcecode})
if (line_ MATCHES "^[ \\t]+throw[ ]+soci_error[ ]*\\([ ]*\"Value outside of allowed.+$")
set (line_ "//${CMAKE_MATCH_0}")
endif ()
file (APPEND include/soci/unsigned-types.h.patched "${line_}\n")
endforeach ()
file (RENAME include/soci/unsigned-types.h include/soci/unsigned-types.h.orig)
file (RENAME include/soci/unsigned-types.h.patched include/soci/unsigned-types.h)

View File

@@ -102,10 +102,10 @@ to get the correct 32-/64-bit variant.
Boost 1.67 or later is required.
After [downloading boost](http://www.boost.org/users/download/) and unpacking it
to `c:\lib`. As of this writing, the most recent version of boost is 1.67.0,
which will unpack into a directory named `boost_1_67_0`. We recommended either
to `c:\lib`. As of this writing, the most recent version of boost is 1.68.0,
which will unpack into a directory named `boost_1_68_0`. We recommended either
renaming this directory to `boost`, or creating a junction link `mklink /J boost
boost_1_67_0`, so that you can more easily switch between versions.
boost_1_68_0`, so that you can more easily switch between versions.
Next, open **Developer Command Prompt** and type the following commands
@@ -237,7 +237,7 @@ execute the following commands within your `rippled` cloned repository:
```
mkdir build\cmake
cd build\cmake
cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_67_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_68_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
```
Now launch Visual Studio 2017 and select **File | Open | Project/Solution**.
Navigate to the `build\cmake` folder created above and select the `rippled.sln`

View File

@@ -30,9 +30,9 @@ with the following process: After changing to the directory where
you wish to download and compile boost, run
```
$ wget https://dl.bintray.com/boostorg/release/1.67.0/source/boost_1_67_0.tar.gz
$ tar -xzf boost_1_67_0.tar.gz
$ cd boost_1_67_0
$ wget https://dl.bintray.com/boostorg/release/1.68.0/source/boost_1_68_0.tar.gz
$ tar -xzf boost_1_68_0.tar.gz
$ cd boost_1_68_0
$ ./bootstrap.sh
$ ./b2 headers
$ ./b2 -j<Num Parallel>
@@ -81,14 +81,14 @@ git checkout develop
If you didn't persistently set the `BOOST_ROOT` environment variable to the
directory in which you compiled boost, then you should set it temporarily.
For example, you built Boost in your home directory `~/boost_1_67_0`, you
For example, you built Boost in your home directory `~/boost_1_68_0`, you
would do for any shell in which you want to build:
```
export BOOST_ROOT=~/boost_1_67_0
export BOOST_ROOT=~/boost_1_68_0
```
Alternatively, you can add `DBOOST_ROOT=~/boost_1_67_0` to the command line when
Alternatively, you can add `DBOOST_ROOT=~/boost_1_68_0` to the command line when
invoking `cmake`.
### Generate and Build

View File

@@ -64,7 +64,7 @@ Boost 1.67 or later is required.
We want to compile boost with clang/libc++
Download [a release](https://dl.bintray.com/boostorg/release/1.67.0/source/boost_1_67_0.tar.bz2)
Download [a release](https://dl.bintray.com/boostorg/release/1.68.0/source/boost_1_68_0.tar.bz2)
Extract it to a folder, making note of where, open a terminal, then:
@@ -120,11 +120,11 @@ If you didn't persistently set the `BOOST_ROOT` environment variable to the
root of the extracted directory above, then you should set it temporarily.
For example, assuming your username were `Abigail` and you extracted Boost
1.67.0 in `/Users/Abigail/Downloads/boost_1_67_0`, you would do for any
1.68.0 in `/Users/Abigail/Downloads/boost_1_68_0`, you would do for any
shell in which you want to build:
```
export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_67_0
export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_68_0
```
### Generate and Build

File diff suppressed because it is too large

157
Jenkinsfile vendored
View File

@@ -111,8 +111,8 @@ try {
stage ('Parallel Build') {
String[][] variants = [
['gcc.Release' ,'-Dassert=ON' ,'MANUAL_TESTS=true' ],
['gcc.Debug' ,'-Dcoverage=ON' ],
['docs' ],
['gcc.Debug' ,'-Dcoverage=ON' ,'TARGET=coverage_report', 'SKIP_TESTS=true'],
['docs' ,'' ,'TARGET=docs' ],
['msvc.Debug' ],
['msvc.Debug' ,'' ,'NINJA_BUILD=true' ],
['msvc.Debug' ,'-Dunity=OFF' ],
@@ -150,8 +150,11 @@ try {
def compiler = getFirstPart(bldtype)
def config = getSecondPart(bldtype)
def target = 'install' // currently ignored for windows builds
if (compiler == 'docs') {
compiler = 'gcc'
config = 'Release'
target = 'docs'
}
def cc =
(compiler == 'clang') ? '/opt/llvm-5.0.1/bin/clang' : 'gcc'
@@ -167,6 +170,7 @@ try {
def max_minutes = 25
def env_vars = [
"TARGET=${target}",
"BUILD_TYPE=${config}",
"COMPILER=${compiler}",
"PARALLEL_TESTS=${pt}",
@@ -187,6 +191,8 @@ try {
echo "COMPILER: ${compiler}"
echo "BUILD_TYPE: ${config}"
echo "USE_CC: ${ucc}"
env_vars.addAll([
"NIH_CACHE_ROOT=${cdir}"])
if (compiler == 'msvc') {
env_vars.addAll([
'BOOST_ROOT=c:\\lib\\boost_1_67',
@@ -214,68 +220,91 @@ try {
env_vars.addAll(extra_env)
}
withCredentials(
[string(
credentialsId: 'RIPPLED_CODECOV_TOKEN',
variable: 'CODECOV_TOKEN')])
{
withEnv(env_vars) {
myStage(bldlabel)
try {
timeout(
time: max_minutes * 2,
units: 'MINUTES')
{
if (compiler == 'msvc') {
powershell "Remove-Item -Path \"${bldlabel}.txt\" -Force -ErrorAction Ignore"
// we capture stdout to variable because I could
// not figure out how to make powershell redirect internally
output = powershell (
returnStdout: true,
script: windowsBuildCmd())
// if the powershell command fails (has nonzero exit)
// then the command above throws, we don't get our output,
// and we never create this output file.
// SEE https://issues.jenkins-ci.org/browse/JENKINS-44930
// Alternatively, figure out how to reliably redirect
// all output above to a file (Start/Stop transcript does not work)
writeFile(
file: "${bldlabel}.txt",
text: output)
}
else {
sh "rm -fv ${bldlabel}.txt"
// execute the bld command in a redirecting shell
// to capture output
sh redhatBuildCmd(bldlabel)
}
// try to figure out codecov token to use. Look for
// MY_CODECOV_TOKEN id first so users can set that
// on job scope but then default to RIPPLED_CODECOV_TOKEN
// which should be globally scoped
def codecov_token = ''
try {
withCredentials( [string( credentialsId: 'MY_CODECOV_TOKEN', variable: 'CODECOV_TOKEN')]) {
codecov_token = env.CODECOV_TOKEN
}
}
catch (e) {
// this might throw when MY_CODECOV_TOKEN doesn't exist
}
if (codecov_token == '') {
withCredentials( [string( credentialsId: 'RIPPLED_CODECOV_TOKEN', variable: 'CODECOV_TOKEN')]) {
codecov_token = env.CODECOV_TOKEN
}
}
env_vars.addAll(["CODECOV_TOKEN=${codecov_token}"])
withEnv(env_vars) {
myStage(bldlabel)
try {
timeout(
time: max_minutes * 2,
units: 'MINUTES')
{
if (compiler == 'msvc') {
powershell "Remove-Item -Path \"${bldlabel}.txt\" -Force -ErrorAction Ignore"
// we capture stdout to variable because I could
// not figure out how to make powershell redirect internally
output = powershell (
returnStdout: true,
script: windowsBuildCmd())
// if the powershell command fails (has nonzero exit)
// then the command above throws, we don't get our output,
// and we never create this output file.
// SEE https://issues.jenkins-ci.org/browse/JENKINS-44930
// Alternatively, figure out how to reliably redirect
// all output above to a file (Start/Stop transcript does not work)
writeFile(
file: "${bldlabel}.txt",
text: output)
}
else {
sh "rm -fv ${bldlabel}.txt"
// execute the bld command in a redirecting shell
// to capture output
sh redhatBuildCmd(bldlabel)
}
}
finally {
if (bldtype == 'docs') {
publishHTML(
allowMissing: true,
alwaysLinkToLastBuild: false,
keepAll: true,
reportName: 'Doxygen',
reportDir: 'build/docs/html_doc',
reportFiles: 'index.html')
}
def envs = ''
for (int j = 0; j < extra_env.size(); j++) {
envs += ", <br/>" + extra_env[j]
}
def cmake_txt = cmake_extra
if (cmake_txt != '') {
cmake_txt = " <br/>" + cmake_txt
}
def st = reportStatus(bldlabel, bldtype + cmake_txt + envs, env.BUILD_URL)
lock('rippled_dev_status') {
all_status[bldlabel] = st
}
} //try-catch-finally
} //withEnv
} //withCredentials
}
finally {
if (bldtype == 'docs') {
publishHTML(
allowMissing: true,
alwaysLinkToLastBuild: true,
keepAll: true,
reportName: 'Doxygen',
reportDir: "build/${bldlabel}/html_doc",
reportFiles: 'index.html')
}
if (isCoverage(cmake_extra)) {
publishHTML(
allowMissing: true,
alwaysLinkToLastBuild: false,
keepAll: true,
reportName: 'Coverage',
reportDir: "build/${bldlabel}/coverage",
reportFiles: 'index.html')
}
def envs = ''
for (int j = 0; j < extra_env.size(); j++) {
envs += ", <br/>" + extra_env[j]
}
def cmake_txt = cmake_extra
if (cmake_txt != '') {
cmake_txt = " <br/>" + cmake_txt
}
def st = reportStatus(bldlabel, bldtype + cmake_txt + envs, env.BUILD_URL)
lock('rippled_dev_status') {
all_status[bldlabel] = st
}
} //try-catch-finally
} //withEnv
} //node
} //builds item
} //for variants
@@ -542,7 +571,7 @@ def getSecondPart(bld) {
// functions in groovy....
@NonCPS
def upDir(path) {
def matcher = path =~ /^(.+)\/(.+?)/
def matcher = path =~ /^(.+)[\/\\](.+?)/
matcher ? matcher[0][1] : path
}
@@ -672,7 +701,7 @@ function error {
exit 1
}
yum install -y yum-utils
yum install -y yum-utils openssl-static zlib-static
rpm -i /opt/rippled-rpm/*.rpm
rc=$?; if [[ $rc != 0 ]]; then
error "error installing rpms"

View File

@@ -14,6 +14,99 @@ If you are using Red Hat Enterprise Linux 7 or CentOS 7, you can [update using `
# Releases
## Version 1.2.1
The `rippled` 1.2.1 release introduces several fixes, including a change in the
information reported via the enhanced crawl functionality introduced in the
1.2.0 release, a fix for a potential race condition when processing a status
change message for a peer, and a fix for a technical flaw that could cause a
server to not properly detect that it had lost all its peers.
The release also adds the `delivered_amount` field to more responses to simplify
the handling of payment or check cashing transactions.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Fix a race condition during `TMStatusChange` handling (c8249981)
- Properly transition state to disconnected (9d027394)
- Display validator status only in response to admin requests (2d6a518a)
- Add the `delivered_amount` to more RPC commands (f2756914)
## Version 1.2.0
The `rippled` 1.2.0 release introduces the MultiSignReserve amendment, which
reduces the reserve requirement associated with signer lists. This release also
includes incremental improvements to the code that handles offers. Furthermore,
`rippled` now also has the ability to automatically detect transaction
censorship attempts and issue warnings of increasing severity for transactions
that should have been included in a closed ledger after several rounds of
consensus.
**New and Updated Features**
- Reduce the account reserve for a Multisign SignerList (6572fc8)
- Improve transaction error condition handling (4104778)
- Allow servers to automatically detect transaction censorship attempts (945493d)
- Load validator list from file (c1a0244)
- Add RPC command shard crawl (17e0d09)
- Add RPC Call unit tests (eeb9d92)
- Grow the open ledger expected transactions quickly (7295cf9)
- Avoid dispatching multiple fetch pack threads (4dcb3c9)
- Remove unused function in AutoSocket.h (8dd8433)
- Update TxQ developer docs (e14f913)
- Add user defined literals for megabytes and kilobytes (cd1c5a3)
- Make the FeeEscalation Amendment permanent (58f786c)
- Remove undocumented experimental options from RPC sign (a96cb8f)
- Improve RPC error message for fee command (af1697c)
- Improve the `ledger_entry` command's inconsistent behavior (63e167b)
**Bug Fixes**
- Accept redirects from validator list sites (7fe1d4b)
- Implement missing string conversions for JSON (c0e9418)
- Eliminate potential undefined behavior (c71eb45)
- Add safe_cast to ensure no overflow in casts between enums and integral types (a7e4541)
## Version 1.1.2
The `rippled` 1.1.2 release introduces a fix for an issue that could have
prevented cluster peers from successfully bypassing connection limits when
connecting to other servers on the same cluster. Additionally, it improves
logic used to determine what the preferred ledger is during suboptimal
network conditions.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Properly bypass connection limits for cluster peers (#2795, #2796)
- Improve preferred ledger calculation (#2784)
## Version 1.1.1
The `rippled` 1.1.1 release adds support for redirections when retrieving
validator lists and changes the way that validators with an expired list
behave. Additionally, informational commands return more useful information
to allow server operators to determine the state of their server.
**New and Updated Features**
- Enhance status reporting when using the `server_info` and `validators` commands (#2734)
- Accept redirects from validator list sites (#2715)
**Bug Fixes**
- Properly handle expired validator lists when validating (#2734)
## Version 1.1.0
The `rippled` 1.1.0 release includes the `DepositPreAuth` amendment, which, combined with the previously released `DepositAuth` amendment, allows users to pre-authorize incoming transactions to accounts by whitelisting sender addresses. The 1.1.0 release also includes incremental improvements to several previously released features (`fix1515` amendment), deprecates support for the `sign` and `sign_for` commands from the rippled API, and improves invariant checking for enhanced security.

View File

@@ -11,6 +11,7 @@ environment:
# CMake honors these environment variables, setting the include/lib paths.
BOOST_ROOT: C:/%RIPPLED_DEPS_PATH%/boost
OPENSSL_ROOT: C:/%RIPPLED_DEPS_PATH%/openssl
NIH_CACHE_ROOT: C:/%RIPPLED_DEPS_PATH%/
# We've had trouble with AppVeyor apparently not having a stack as large
# as the *nix CI platforms. AppVeyor support suggested that we try
@@ -74,7 +75,7 @@ build_script:
Push-Location "build/$cmake_target"
cmake -G"Visual Studio 15 2017 Win64" ../..
if ($LastExitCode -ne 0) { throw "CMake failed" }
cmake --build . --config $env:buildconfig -- -m
cmake --build . --config $env:buildconfig --parallel 3
if ($LastExitCode -ne 0) { throw "CMake build failed" }
Pop-Location
@@ -87,7 +88,7 @@ test_script:
- ps: |
& {
# Run the rippled unit tests
& $exe --unittest --unittest-log
& $exe --unittest --unittest-log --unittest-jobs 2
# https://connect.microsoft.com/PowerShell/feedback/details/751703/option-to-stop-script-if-command-line-exe-fails
if ($LastExitCode -ne 0) { throw "Unit tests failed" }
}

View File

@@ -5,7 +5,7 @@
# debugging.
set -ex
__dirname=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
echo "using CC: $CC"
echo "using CC: ${CC}"
"${CC}" --version
export CC
COMPNAME=$(basename $CC)
@@ -15,11 +15,14 @@ if [[ $CXX ]]; then
export CXX
fi
: ${BUILD_TYPE:=Debug}
echo "BUILD TYPE: $BUILD_TYPE"
echo "BUILD TYPE: ${BUILD_TYPE}"
: ${TARGET:=install}
echo "BUILD TARGET: ${TARGET}"
# Ensure APP defaults to rippled if it's not set.
: ${APP:=rippled}
echo "using APP: $APP"
echo "using APP: ${APP}"
JOBS=${NUM_PROCESSORS:-2}
if [[ ${TRAVIS:-false} != "true" ]]; then
@@ -34,6 +37,12 @@ else
time=
fi
if [[ -z "${MAX_TIME:-}" ]] ; then
timeout_cmd=""
else
timeout_cmd="timeout ${MAX_TIME}"
fi
echo "cmake building ${APP}"
: ${CMAKE_EXTRA_ARGS:=""}
if [[ ${NINJA_BUILD:-} == true ]]; then
@@ -41,9 +50,10 @@ if [[ ${NINJA_BUILD:-} == true ]]; then
fi
coverage=false
if [[ "${CMAKE_EXTRA_ARGS}" =~ -Dcoverage=((on)|(ON)) ]] ; then
if [[ "${TARGET}" == "coverage_report" ]] ; then
echo "coverage option detected."
coverage=true
export PATH=$PATH:${LCOV_ROOT}/usr/bin
fi
#
@@ -73,9 +83,12 @@ fi
mkdir -p "build/${BUILD_DIR}"
pushd "build/${BUILD_DIR}"
$time cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
if [[ ${BUILD_TYPE} == "docs" ]]; then
$time cmake --build . --target docs -- $BUILDARGS
# generate
${time} cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
# build
export DESTDIR=$(pwd)/_INSTALLED_
time ${timeout_cmd} cmake --build . --target ${TARGET} -- $BUILDARGS
if [[ ${TARGET} == "docs" ]]; then
## mimic the standard test output for docs build
## to make controlling processes like jenkins happy
if [ -f html_doc/index.html ]; then
@@ -84,15 +97,13 @@ if [[ ${BUILD_TYPE} == "docs" ]]; then
echo "1 case, 1 test total, 1 failures"
fi
exit
else
$time cmake --build . -- $BUILDARGS
fi
popd
export APP_PATH="$PWD/build/${BUILD_DIR}/${APP}"
echo "using APP_PATH: $APP_PATH"
echo "using APP_PATH: ${APP_PATH}"
# See what we've actually built
ldd $APP_PATH
ldd ${APP_PATH}
function join_by { local IFS="$1"; shift; echo "$*"; }
@@ -123,24 +134,15 @@ if [[ ${APP} == "rippled" ]]; then
else
APP_ARGS+=" --unittest --quiet --unittest-log"
fi
# Only report on src/ripple files
export LCOV_FILES="*/src/ripple/*"
# Nothing to explicitly exclude
export LCOV_EXCLUDE_FILES="LCOV_NO_EXCLUDE"
if [[ ${coverage} == false && ${PARALLEL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-jobs ${JOBS}"
fi
else
: ${LCOV_FILES:="*/src/*"}
# Don't exclude anything
: ${LCOV_EXCLUDE_FILES:="LCOV_NO_EXCLUDE"}
fi
if [[ $coverage == true ]]; then
export PATH=$PATH:$LCOV_ROOT/usr/bin
# Create baseline coverage data file
lcov --no-external -c -i -d . -o baseline.info | grep -v "ignoring data for external file"
if [[ ${coverage} == true ]]; then
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
fi
if [[ ${SKIP_TESTS:-} == true ]]; then
@@ -148,45 +150,20 @@ if [[ ${SKIP_TESTS:-} == true ]]; then
exit
fi
if [[ "${MAX_TIME:-}" == "" ]] ; then
tcmd=""
else
tcmd="timeout ${MAX_TIME}"
fi
if [[ ${DEBUGGER:-true} == "true" && -v GDB_ROOT && -x $GDB_ROOT/bin/gdb ]]; then
$GDB_ROOT/bin/gdb -v
if [[ ${DEBUGGER:-true} == "true" && -v GDB_ROOT && -x ${GDB_ROOT}/bin/gdb ]]; then
${GDB_ROOT}/bin/gdb -v
# Execute unit tests under gdb, printing a call stack
# if we get a crash.
export APP_ARGS
$tcmd $GDB_ROOT/bin/gdb -return-child-result -quiet -batch \
${timeout_cmd} ${GDB_ROOT}/bin/gdb -return-child-result -quiet -batch \
-ex "set env MALLOC_CHECK_=3" \
-ex "set print thread-events off" \
-ex run \
-ex "thread apply all backtrace full" \
-ex "quit" \
--args $APP_PATH $APP_ARGS
--args ${APP_PATH} ${APP_ARGS}
else
$tcmd $APP_PATH $APP_ARGS
fi
if [[ $coverage == true ]]; then
# Create test coverage data file
lcov --no-external -c -d . -o tests.info | grep -v "ignoring data for external file"
# Combine baseline and test coverage data
lcov -a baseline.info -a tests.info -o lcov-all.info
# Included files
lcov -e "lcov-all.info" "${LCOV_FILES}" -o lcov.pre.info
# Excluded files
lcov --remove lcov.pre.info "${LCOV_EXCLUDE_FILES}" -o lcov.info
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
${timeout_cmd} ${APP_PATH} ${APP_ARGS}
fi

View File

@@ -20,7 +20,9 @@
#
# 7. Voting
#
# 8. Example Settings
# 8. Misc Settings
#
# 9. Example Settings
#
#-------------------------------------------------------------------------------
#
@@ -376,7 +378,10 @@
# [ips]
# r.ripple.com 51235
#
# The default is: [ips_fixed] addresses (if present) or r.ripple.com 51235
# The default is:
# [ips_fixed] addresses (if present)
# or
# ( r.ripple.com 51235 , zaphod.alloy.ee 51235 )
#
#
# [ips_fixed]
@@ -538,6 +543,20 @@
# into the ledger at the minimum required fee before the required
# fee escalates. Default: no maximum.
#
# normal_consensus_increase_percent = <number>
#
# (Optional) When the ledger has more transactions than "expected",
# and performance is humming along nicely, the expected ledger size
# is updated to the previous ledger size plus this percentage.
# Default: 20
#
# slow_consensus_decrease_percent = <number>
#
# (Optional) When consensus takes longer than appropriate, the
# expected ledger size is updated to the minimum of the previous
# ledger size or the "expected" ledger size minus this percentage.
# Default: 50
#
# maximum_txn_per_account = <number>
#
# Maximum number of transactions that one account can have in the
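The two new queue-tuning options documented in the hunk above are optional. Purely as an illustration (this is not part of the sample config diff), and assuming they belong to the `[transaction_queue]` stanza that this part of the sample config documents, a configuration using them might look like:

```
[transaction_queue]
# Illustrative values only; both options are optional and the defaults
# described above (20 and 50) apply when these lines are omitted.
normal_consensus_increase_percent = 20
slow_consensus_decrease_percent = 50
```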
@@ -1025,6 +1044,46 @@
# [signing_support]
# true
#
# [crawl]
#
# List of options to control what data is reported through the /crawl endpoint
# See https://developers.ripple.com/peer-protocol.html#peer-crawler
#
# <flag>
#
# Enable or disable access to /crawl requests. Default is '1'
#
# overlay = <flag>
#
# Report information about peers this server is connected to, similar
# to the "peers" RPC API. Default is '1'.
#
# server = <flag>
#
# Report information about the local server, similar to the "server_state"
# RPC API. Default is '1'.
#
# counts = <flag>
#
# Report information about the local server health counters, similar to
# the "get_counts" RPC API. Default is '0'.
#
# unl = <flag>
#
# Report information about the local server's validator lists, similar to
# the "validators" and "validator_list_sites" RPC APIs. Default is '1'.
#
# Example:
#
# [crawl]
# 0
#
# [crawl]
# overlay = 1 # report peer overlay info
# server = 1 # report local server info
# counts = 0 # do not report server counts
# unl = 1 # report server validator lists
#
#-------------------------------------------------------------------------------
#
# 9. Example Settings
@@ -1099,7 +1158,7 @@ admin = 127.0.0.1
protocol = ws
#[port_ws_public]
#port = 5005
#port = 6005
#ip = 127.0.0.1
#protocol = wss
@@ -1147,12 +1206,8 @@ time.apple.com
time.nist.gov
pool.ntp.org
# Where to find some other servers speaking the Ripple protocol.
[ips]
r.ripple.com 51235
# To use the XRP test network (see https://ripple.com/build/xrp-test-net/),
# use the following [ips] section instead:
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235

View File

@@ -1,10 +0,0 @@
[Unit]
Description=Ripple Peer-to-Peer Network Daemon
[Service]
Type=simple
User=nobody
ExecStart=/usr/bin/rippled --conf=/etc/rippled/rippled.cfg
[Install]
WantedBy=multi-user.target

View File

@@ -1,114 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: ripple
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the ripple network node
# Description: starts rippled using start-stop-daemon
### END INIT INFO
set -e
NAME=rippled
USER="rippled"
GROUP="rippled"
PIDFILE=/var/run/$NAME.pid
DAEMON=/usr/local/sbin/rippled
DAEMON_OPTS="--conf /etc/ripple/rippled.cfg"
NET_OPTS="--net $DAEMON_OPTS"
LOGDIR="/var/log/rippled"
DBDIR="/var/db/rippled/db/hyperldb"
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
# I wish it didn't come down to this, but this is the easiest way to ensure
# sanity of an install.
if [ ! -d $LOGDIR ]; then
mkdir -p $LOGDIR
chown $USER:$GROUP $LOGDIR
fi
if [ ! -d $DBDIR ]; then
mkdir -p $DBDIR
chown -R $USER:$GROUP $DBDIR
fi
case "$1" in
start)
echo -n "Starting daemon: "$NAME
start-stop-daemon --start --quiet --background -m --pidfile $PIDFILE \
--exec $DAEMON --chuid $USER --group $GROUP --verbose -- $NET_OPTS
echo "."
;;
stop)
echo -n "Stopping daemon: "$NAME
$DAEMON $DAEMON_OPTS stop
rm -f $PIDFILE
echo "."
;;
restart)
echo -n "Restarting daemon: "$NAME
$DAEMON $DAEMON_OPTS stop
rm -f $PIDFILE
start-stop-daemon --start --quiet --background -m --pidfile $PIDFILE \
--exec $DAEMON --chuid $USER --group $GROUP -- $NET_OPTS
echo "."
;;
status)
echo "Status of $NAME:"
echo -n "PID of $NAME: "
if [ -f "$PIDFILE" ]; then
cat $PIDFILE
$DAEMON $DAEMON_OPTS server_info
else
echo "$NAME not running."
fi
echo "."
;;
fetch)
echo "$NAME ledger fetching info:"
$DAEMON $DAEMON_OPTS fetch_info
echo "."
;;
uptime)
echo "$NAME uptime:"
$DAEMON $DAEMON_OPTS get_counts
echo "."
;;
startconfig)
echo "$NAME is being started with the following command line:"
echo "$DAEMON $NET_OPTS"
echo "."
;;
command)
# Truncate the script's argument vector by one position to get rid of
# this entry.
shift
# Pass the remainder of the argument vector to rippled.
$DAEMON $DAEMON_OPTS "$@"
echo "."
;;
test)
$DAEMON $DAEMON_OPTS ping
echo "."
;;
*)
echo "Usage: $0 {start|stop|restart|status|fetch|uptime|startconfig|"
echo " command|test}"
exit 1
esac
exit 0

View File

@@ -14,10 +14,8 @@
#
# List of the validation public keys of nodes to always accept as validators.
#
# The latest list of recommended validators can be obtained from
# https://ripple.com/ripple.txt
#
# See also https://wiki.ripple.com/Ripple.txt
# Manually listing validator keys is not recommended for production networks.
# See validator_list_sites and validator_list_keys below.
#
# Examples:
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
@@ -27,9 +25,13 @@
#
# List of URIs serving lists of recommended validators.
#
# The latest list of recommended validator sites can be
# obtained from https://ripple.com/ripple.txt
#
# Examples:
# https://vl.ripple.com
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
#
# [validator_list_keys]
#
@@ -39,6 +41,9 @@
# publisher key.
# Validator list keys should be hex-encoded.
#
# The latest list of recommended validator keys can be
# obtained from https://ripple.com/ripple.txt
#
# Examples:
# ed499d732bded01504a7407c224412ef550cc1ade638a4de4eb88af7c36cb8b282
# 0202d3f36a801349f3be534e3f64cfa77dede6e1b6310a0b48f40f20f955cec945

View File

@@ -105,6 +105,9 @@ WARN_LOGFILE =
#---------------------------------------------------------------------------
INPUT = \
\
../src/ripple/app/misc/TxQ.h \
../src/ripple/app/tx/apply.h \
../src/ripple/app/tx/applySteps.h \
../src/ripple/protocol/STObject.h \
../src/ripple/protocol/JsonFields.h \
../src/test/jtx/AbstractClient.h \

View File

@@ -16,15 +16,23 @@ Source folders:
| Folder | Upstream Repo | Description |
|:----------------|:---------------------------------------------|:------------|
| `beast` | https://github.com/boostorg/beast | Cross-platform library for WebSocket and HTTP built on [Boost.Asio](https://think-async.com/Asio) |
| `beast` | N/A | Legacy utility code that was formerly associated with boost::beast |
| `ed25519-donna` | https://github.com/floodyberry/ed25519-donna | [Ed25519](http://ed25519.cr.yp.to/) digital signatures |
| `lz4` | https://github.com/lz4/lz4 | LZ4 lossless compression algorithm |
| `nudb` | https://github.com/vinniefalco/NuDB | Constant-time insert-only key/value database for SSD drives (Less memory usage than RocksDB.) |
| `protobuf` | https://github.com/google/protobuf | Protocol buffer data interchange format. Ripple has changed some names in order to support the unity-style of build (a single .cpp added to the project, instead of linking to a separately built static library). |
| `ripple` | N/A | **Core source code for `rippled`** |
| `rocksdb2` | https://github.com/facebook/rocksdb | Fast key/value database. (Supports rotational disks better than NuDB.) |
| `secp256k1` | https://github.com/bitcoin-core/secp256k1 | ECDSA digital signatures using the **secp256k1** curve |
| `snappy` | https://github.com/google/snappy | "Snappy" lossless compression algorithm. (Technically, the source is in `snappy/snappy`, while `snappy/` also has config options that aren't part of the upstream repository.) |
| `soci` | https://github.com/SOCI/soci | Abstraction layer for database access. |
| `sqlite` | https://www.sqlite.org/src | An embedded database engine that writes to simple files. (Technically not a subtree, just a direct copy of the [SQLite source distribution](http://sqlite.org/download.html).) |
| `test` | N/A | **Unit tests for `rippled`** |
The following dependencies are downloaded and built using ExternalProject
(or FetchContent, where possible). Refer to the CMakeLists.txt file for
details about how these sources are built:
| Name | Upstream Repo | Description |
|:----------------|:---------------------------------------------|:------------|
| `lz4` | https://github.com/lz4/lz4 | LZ4 lossless compression algorithm |
| `nudb` | https://github.com/vinniefalco/NuDB | Constant-time insert-only key/value database for SSD drives (Less memory usage than RocksDB.) |
| `snappy` | https://github.com/google/snappy | "Snappy" lossless compression algorithm. |
| `soci` | https://github.com/SOCI/soci | Abstraction layer for database access. |
| `sqlite` | https://www.sqlite.org/src | An embedded database engine that writes to simple files. |
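This is not the project's actual CMake code, but a minimal sketch of the FetchContent pattern the paragraph above refers to. The dependency name, URL, and tag below are placeholders; the real CMake modules also pass per-dependency configuration options.

```cmake
# Requires CMake 3.14+ for FetchContent_MakeAvailable.
include(FetchContent)

FetchContent_Declare(
  somedep                                                 # placeholder name
  GIT_REPOSITORY https://github.com/example/somedep.git   # hypothetical repository
  GIT_TAG        v1.2.3                                    # hypothetical version pin
)

# Downloads the source at configure time and adds its targets to the build,
# so they can be linked like any other target in the project.
FetchContent_MakeAvailable(somedep)
```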

View File

@@ -1,21 +0,0 @@
# Set the default behavior
* text eol=lf
# Explicitly declare source files
*.c text eol=lf
*.h text eol=lf
# Denote files that should not be modified.
*.odt binary
*.png binary
# Visual Studio
*.sln text eol=crlf
*.vcxproj* text eol=crlf
*.vcproj* text eol=crlf
*.suo binary
*.rc text eol=crlf
# Windows
*.bat text eol=crlf
*.cmd text eol=crlf

src/lz4/.gitignore (vendored)
View File

@@ -1,31 +0,0 @@
# Object files
*.o
*.ko
# Libraries
*.lib
*.a
# Shared objects (inc. Windows DLLs)
*.dll
*.so
*.so.*
*.dylib
*.dSYM # apple
# Executables
*.exe
*.out
*.app
lz4
# IDE / editors files
.clang_complete
_codelite/
_codelite_lz4/
bin/
*.zip
# Mac
.DS_Store
*.dSYM

View File

@@ -1,150 +0,0 @@
language: c
matrix:
fast_finish: true
include:
# OS X Mavericks
- os: osx
install:
- export CC=clang
env: Ubu=OS_X_Mavericks Cmd='make -C tests test-lz4 MOREFLAGS="-Werror -Wconversion -Wno-sign-conversion" && CFLAGS=-m32 make -C tests clean test-lz4-contentSize' COMPILER=clang
# Container-based 12.04 LTS Server Edition 64 bit (doesn't support 32-bit includes)
- os: linux
sudo: false
env: Ubu=12.04cont Cmd='make -C tests test-lz4 test-lz4c test-fasttest test-fullbench' COMPILER=cc
- os: linux
sudo: false
env: Ubu=12.04cont Cmd='make -C tests test-frametest test-fuzzer' COMPILER=cc
- os: linux
sudo: false
env: Ubu=12.04cont Cmd="make gpptest && make clean examples && make clean cmake && make clean travis-install && make clean clangtest" COMPILER=cc
# 14.04 LTS Server Edition 64 bit
- env: Ubu=14.04 Cmd='make -C tests test MOREFLAGS=-mx32' COMPILER=cc
dist: trusty
sudo: required
addons:
apt:
packages:
- libc6-dev-i386
- gcc-multilib
- env: Ubu=14.04 Cmd='make usan' COMPILER=clang
dist: trusty
sudo: required
addons:
apt:
packages:
- clang
- env: Ubu=14.04 Cmd='make c_standards && make -C tests test-lz4 test-mem' COMPILER=cc
dist: trusty
sudo: required
addons:
apt:
packages:
- valgrind
- env: Ubu=14.04 Cmd='make -C tests test-lz4c32 test-fullbench32 versionsTest' COMPILER=cc
dist: trusty
sudo: required
addons:
apt:
packages:
- python3
- libc6-dev-i386
- gcc-multilib
- env: Ubu=14.04 Cmd='make -C tests test-frametest32 test-fuzzer32' COMPILER=cc
dist: trusty
sudo: required
addons:
apt:
packages:
- libc6-dev-i386
- gcc-multilib
- env: Ubu=14.04 Cmd='make c_standards CC=gcc-6 && make -C tests test-lz4 CC=gcc-6 MOREFLAGS=-Werror' COMPILER=gcc-6
dist: trusty
sudo: required
addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- gcc-6
- env: Ubu=14.04 Cmd='make platformTest CC=arm-linux-gnueabi-gcc QEMU_SYS=qemu-arm-static && make platformTest CC=aarch64-linux-gnu-gcc QEMU_SYS=qemu-aarch64-static' COMPILER=arm-linux-gnueabi-gcc
dist: trusty
sudo: required
addons:
apt:
packages:
- qemu-system-arm
- qemu-user-static
- gcc-arm-linux-gnueabi
- libc6-dev-armel-cross
- gcc-aarch64-linux-gnu
- libc6-dev-arm64-cross
- env: Ubu=14.04 Cmd='make -C tests test-lz4 clean test-lz4c32 CC=gcc-5 MOREFLAGS=-Werror' COMPILER=gcc-5
dist: trusty
sudo: required
addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- libc6-dev-i386
- gcc-multilib
- gcc-5
- gcc-5-multilib
- env: Ubu=14.04 Cmd='make -C tests test-lz4 CC=clang-3.8' COMPILER=clang-3.8
dist: trusty
sudo: required
addons:
apt:
sources:
- ubuntu-toolchain-r-test
- llvm-toolchain-precise-3.8
packages:
- clang-3.8
- env: Ubu=14.04 Cmd='make platformTest CC=powerpc-linux-gnu-gcc QEMU_SYS=qemu-ppc-static && make platformTest CC=powerpc-linux-gnu-gcc QEMU_SYS=qemu-ppc64-static MOREFLAGS=-m64' COMPILER=powerpc-linux-gnu-gcc
dist: trusty
sudo: required
addons:
apt:
packages:
- qemu-system-ppc
- qemu-user-static
- gcc-powerpc-linux-gnu
- env: Ubu=14.04 Cmd='make staticAnalyze' COMPILER=clang
dist: trusty
sudo: required
addons:
apt:
packages:
- clang
- env: Ubu=14.04 Cmd='make clean all CC=gcc-4.4 MOREFLAGS=-Werror && make clean && CFLAGS=-fPIC LDFLAGS="-pie -fPIE -D_FORTIFY_SOURCE=2" make -C programs' COMPILER=gcc-4.4
dist: trusty
sudo: required
addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- libc6-dev-i386
- gcc-multilib
- gcc-4.4
script:
- echo Cmd=$Cmd
- $COMPILER -v
- sh -c "$Cmd"

View File

@@ -1,15 +0,0 @@
Installation
=============
```
make
make install # this command may require root access
```
LZ4's `Makefile` supports standard [Makefile conventions],
including [staged installs], [redirection], or [command redefinition].
[Makefile conventions]: https://www.gnu.org/prep/standards/html_node/Makefile-Conventions.html
[staged installs]: https://www.gnu.org/prep/standards/html_node/DESTDIR.html
[redirection]: https://www.gnu.org/prep/standards/html_node/Directory-Variables.html
[command redefinition]: https://www.gnu.org/prep/standards/html_node/Utilities-in-Makefiles.html

Binary file not shown.

View File

@@ -1,181 +0,0 @@
# ################################################################
# LZ4 - Makefile
# Copyright (C) Yann Collet 2011-2016
# All rights reserved.
#
# This Makefile is validated for Linux, macOS, *BSD, Hurd, Solaris, MSYS2 targets
#
# BSD license
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice, this
# list of conditions and the following disclaimer in the documentation and/or
# other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# You can contact the author at :
# - LZ4 source repository : https://github.com/lz4/lz4
# - LZ4 forum froup : https://groups.google.com/forum/#!forum/lz4c
# ################################################################
LZ4DIR = lib
PRGDIR = programs
TESTDIR = tests
EXDIR = examples
# Define nul output
ifneq (,$(filter Windows%,$(OS)))
EXT = .exe
VOID = nul
else
EXT =
VOID = /dev/null
endif
.PHONY: default
default: lib-release lz4-release
.PHONY: all
all: allmost manuals
.PHONY: allmost
allmost: lib lz4 examples
.PHONY: lib lib-release
lib lib-release:
@$(MAKE) -C $(LZ4DIR) $@
.PHONY: lz4 lz4-release
lz4 : lib
lz4-release : lib-release
lz4 lz4-release :
@$(MAKE) -C $(PRGDIR) $@
@cp $(PRGDIR)/lz4$(EXT) .
.PHONY: examples
examples: lib lz4
$(MAKE) -C $(EXDIR) test
.PHONY: manuals
manuals:
@$(MAKE) -C contrib/gen_manual $@
.PHONY: clean
clean:
@$(MAKE) -C $(LZ4DIR) $@ > $(VOID)
@$(MAKE) -C $(PRGDIR) $@ > $(VOID)
@$(MAKE) -C $(TESTDIR) $@ > $(VOID)
@$(MAKE) -C $(EXDIR) $@ > $(VOID)
@$(MAKE) -C contrib/gen_manual $@
@$(RM) lz4$(EXT)
@echo Cleaning completed
#-----------------------------------------------------------------------------
# make install is validated only for Linux, OSX, BSD, Hurd and Solaris targets
#-----------------------------------------------------------------------------
ifneq (,$(filter $(shell uname),Linux Darwin GNU/kFreeBSD GNU OpenBSD FreeBSD NetBSD DragonFly SunOS))
HOST_OS = POSIX
.PHONY: install uninstall
install uninstall:
@$(MAKE) -C $(LZ4DIR) $@
@$(MAKE) -C $(PRGDIR) $@
travis-install:
$(MAKE) -j1 install DESTDIR=~/install_test_dir
cmake:
@cd contrib/cmake_unofficial; cmake $(CMAKE_PARAMS) CMakeLists.txt; $(MAKE)
endif
ifneq (,$(filter MSYS%,$(shell uname)))
HOST_OS = MSYS
CMAKE_PARAMS = -G"MSYS Makefiles"
endif
#------------------------------------------------------------------------
#make tests validated only for MSYS, Linux, OSX, kFreeBSD and Hurd targets
#------------------------------------------------------------------------
ifneq (,$(filter $(HOST_OS),MSYS POSIX))
.PHONY: list
list:
@$(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | sort | egrep -v -e '^[^[:alnum:]]' -e '^$@$$' | xargs
.PHONY: test
test:
$(MAKE) -C $(TESTDIR) $@
clangtest: clean
clang -v
@CFLAGS="-O3 -Werror -Wconversion -Wno-sign-conversion" $(MAKE) -C $(LZ4DIR) all CC=clang
@CFLAGS="-O3 -Werror -Wconversion -Wno-sign-conversion" $(MAKE) -C $(PRGDIR) all CC=clang
@CFLAGS="-O3 -Werror -Wconversion -Wno-sign-conversion" $(MAKE) -C $(TESTDIR) all CC=clang
clangtest-native: clean
clang -v
@CFLAGS="-O3 -Werror -Wconversion -Wno-sign-conversion" $(MAKE) -C $(LZ4DIR) all CC=clang
@CFLAGS="-O3 -Werror -Wconversion -Wno-sign-conversion" $(MAKE) -C $(PRGDIR) native CC=clang
@CFLAGS="-O3 -Werror -Wconversion -Wno-sign-conversion" $(MAKE) -C $(TESTDIR) native CC=clang
usan: clean
CC=clang CFLAGS="-O3 -g -fsanitize=undefined" $(MAKE) test FUZZER_TIME="-T1mn" NB_LOOPS=-i1
usan32: clean
CFLAGS="-m32 -O3 -g -fsanitize=undefined" $(MAKE) test FUZZER_TIME="-T1mn" NB_LOOPS=-i1
staticAnalyze: clean
CFLAGS=-g scan-build --status-bugs -v $(MAKE) all
platformTest: clean
@echo "\n ---- test lz4 with $(CC) compiler ----"
@$(CC) -v
CFLAGS="-O3 -Werror" $(MAKE) -C $(LZ4DIR) all
CFLAGS="-O3 -Werror -static" $(MAKE) -C $(PRGDIR) all
CFLAGS="-O3 -Werror -static" $(MAKE) -C $(TESTDIR) all
$(MAKE) -C $(TESTDIR) test-platform
.PHONY: versionsTest
versionsTest: clean
$(MAKE) -C $(TESTDIR) $@
gpptest: clean
g++ -v
CC=g++ $(MAKE) -C $(LZ4DIR) all CFLAGS="-O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror"
CC=g++ $(MAKE) -C $(PRGDIR) all CFLAGS="-O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror"
CC=g++ $(MAKE) -C $(TESTDIR) all CFLAGS="-O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror"
gpptest32: clean
g++ -v
CC=g++ $(MAKE) -C $(LZ4DIR) all CFLAGS="-m32 -O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror"
CC=g++ $(MAKE) -C $(PRGDIR) native CFLAGS="-m32 -O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror"
CC=g++ $(MAKE) -C $(TESTDIR) native CFLAGS="-m32 -O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror"
c_standards: clean
# note : lz4 is not C90 compatible, because it requires long long support
CFLAGS="-std=gnu90 -Werror" $(MAKE) clean allmost
CFLAGS="-std=c99 -Werror" $(MAKE) clean allmost
CFLAGS="-std=gnu99 -Werror" $(MAKE) clean allmost
CFLAGS="-std=c11 -Werror" $(MAKE) clean allmost
endif

View File

@@ -1,231 +0,0 @@
v1.8.0
cli : fix : do not modify /dev/null permissions, reported by @Maokaman1
cli : added GNU separator -- specifying that all following arguments are files
API : added LZ4_compress_HC_destSize(), by Oleg (@remittor)
API : added LZ4F_resetDecompressionContext()
API : lz4frame : negative compression levels trigger fast acceleration, request by Lawrence Chan
API : lz4frame : can control block checksum and dictionary ID
API : fix : expose obsolete decoding functions, reported by Chen Yufei
API : experimental : lz4frame_static : new dictionary compression API
build : fix : static lib installation, by Ido Rosen
build : dragonFlyBSD, OpenBSD, NetBSD supported
build : LZ4_MEMORY_USAGE can be modified at compile time, through external define
doc : Updated LZ4 Frame format to v1.6.0, restoring Dictionary-ID field
doc : lz4 api manual, by Przemyslaw Skibinski
v1.7.5
lz4hc : new high compression mode : levels 10-12 compress more and slower, by Przemyslaw Skibinski
lz4cat : fix : works with relative path (#284) and stdin (#285) (reported by @beiDei8z)
cli : fix minor notification when using -r recursive mode
API : lz4frame : LZ4F_frameBound(0) gives upper bound of *flush() and *End() operations (#290, #280)
doc : markdown version of man page, by Takayuki Matsuoka (#279)
build : Makefile : fix make -jX lib+exe concurrency (#277)
build : cmake : improvements by Michał Górny (#296)
v1.7.4.2
fix : Makefile : release build compatible with PIE and customized compilation directives provided through environment variables (#274, reported by Antoine Martin)
v1.7.4
Improved : much better speed in -mx32 mode
cli : fix : Large file support in 32-bits mode on Mac OS-X
fix : compilation on gcc 4.4 (#272), reported by Antoine Martin
v1.7.3
Changed : moved to versioning; package, cli and library have same version number
Improved: Small decompression speed boost
Improved: Small compression speed improvement on 64-bits systems
Improved: Small compression ratio and speed improvement on small files
Improved: Significant speed boost on ARMv6 and ARMv7
Fix : better ratio on 64-bits big-endian targets
Improved cmake build script, by Evan Nemerson
New liblz4-dll project, by Przemyslaw Skibinki
Makefile: Generates object files (*.o) for faster (re)compilation on low power systems
cli : new : --rm and --help commands
cli : new : preserved file attributes, by Przemyslaw Skibinki
cli : fix : crash on some invalid inputs
cli : fix : -t correctly validates lz4-compressed files, by Nick Terrell
cli : fix : detects and reports fread() errors, thanks to Hiroshi Fujishima report #243
cli : bench : new : -r recursive mode
lz4cat : can cat multiple files in a single command line (#184)
Added : doc/lz4_manual.html, by Przemyslaw Skibinski
Added : dictionary compression and frame decompression examples, by Nick Terrell
Added : Debianization, by Evgeniy Polyakov
r131
New : Dos/DJGPP target, thanks to Louis Santillan (#114)
Added : Example using lz4frame library, by Zbigniew Jędrzejewski-Szmek (#118)
Changed: xxhash symbols are modified (namespace emulation) within liblz4
r130:
Fixed : incompatibility sparse mode vs console, reported by Yongwoon Cho (#105)
Fixed : LZ4IO exits too early when frame crc not present, reported by Yongwoon Cho (#106)
Fixed : incompatibility sparse mode vs append mode, reported by Takayuki Matsuoka (#110)
Performance fix : big compression speed boost for clang (+30%)
New : cross-version test, by Takayuki Matsuoka
r129:
Added : LZ4_compress_fast(), LZ4_compress_fast_continue()
Added : LZ4_compress_destSize()
Changed: New lz4 and lz4hc compression API. Previous function prototypes still supported.
Changed: Sparse file support enabled by default
New : LZ4 CLI improved performance compressing/decompressing multiple files (#86, kind contribution from Kyle J. Harper & Takayuki Matsuoka)
Fixed : GCC 4.9+ optimization bug - Reported by Markus Trippelsdorf, Greg Slazinski & Evan Nemerson
Changed: Enums converted to LZ4F_ namespace convention - by Takayuki Matsuoka
Added : AppVeyor CI environment, for Visual tests - Suggested by Takayuki Matsuoka
Modified:Obsolete functions generate warnings - Suggested by Evan Nemerson, contributed by Takayuki Matsuoka
Fixed : Bug #75 (unfinished stream), reported by Yongwoon Cho
Updated: Documentation converted to MarkDown format
r128:
New : lz4cli sparse file support (Requested by Neil Wilson, and contributed by Takayuki Matsuoka)
New : command -m, to compress multiple files in a single command (suggested by Kyle J. Harper)
Fixed : Restored lz4hc compression ratio (slightly lower since r124)
New : lz4 cli supports long commands (suggested by Takayuki Matsuoka)
New : lz4frame & lz4cli frame content size support
New : lz4frame supports skippable frames, as requested by Sergey Cherepanov
Changed: Default "make install" directory is /usr/local, as notified by Ron Johnson
New : lz4 cli supports "pass-through" mode, requested by Neil Wilson
New : datagen can generate sparse files
New : scan-build tests, thanks to kind help by Takayuki Matsuoka
New : g++ compatibility tests
New : arm cross-compilation test, thanks to kind help by Takayuki Matsuoka
Fixed : Fuzzer + frametest compatibility with NetBSD (issue #48, reported by Thomas Klausner)
Added : Visual project directory
Updated: Man page & Specification
r127:
N/A : added a file on SVN
r126:
New : lz4frame API is now integrated into liblz4
Fixed : GCC 4.9 bug on highest performance settings, reported by Greg Slazinski
Fixed : bug within LZ4 HC streaming mode, reported by James Boyle
Fixed : older compiler don't like nameless unions, reported by Cheyi Lin
Changed : lz4 is C90 compatible
Changed : added -pedantic option, fixed a few mminor warnings
r125:
Changed : endian and alignment code
Changed : directory structure : new "lib" directory
Updated : lz4io, now uses lz4frame
Improved: slightly improved decoding speed
Fixed : LZ4_compress_limitedOutput(); Special thanks to Christopher Speller !
Fixed : some alignment warnings under clang
Fixed : deprecated function LZ4_slideInputBufferHC()
r124:
New : LZ4 HC streaming mode
Fixed : LZ4F_compressBound() using null preferencesPtr
Updated : xxHash to r38
Updated library number, to 1.4.0
r123:
Added : experimental lz4frame API, thanks to Takayuki Matsuoka and Christopher Jackson for testings
Fix : s390x support, thanks to Nobuhiro Iwamatsu
Fix : test mode (-t) no longer requires confirmation, thanks to Thary Nguyen
r122:
Fix : AIX & AIX64 support (SamG)
Fix : mips 64-bits support (lew van)
Added : Examples directory, using code examples from Takayuki Matsuoka
Updated : Framing specification, to v1.4.1
Updated : xxHash, to r36
r121:
Added : Makefile : install for kFreeBSD and Hurd (Nobuhiro Iwamatsu)
Fix : Makefile : install for OS-X and BSD, thanks to Takayuki Matsuoka
r120:
Modified : Streaming API, using strong types
Added : LZ4_versionNumber(), thanks to Takayuki Matsuoka
Fix : OS-X : library install name, thanks to Clemens Lang
Updated : Makefile : synchronize library version number with lz4.h, thanks to Takayuki Matsuoka
Updated : Makefile : stricter compilation flags
Added : pkg-config, thanks to Zbigniew Jędrzejewski-Szmek (issue 135)
Makefile : lz4-test only test native binaries, as suggested by Michał Górny (issue 136)
Updated : xxHash to r35
r119:
Fix : Issue 134 : extended malicious address space overflow in 32-bits mode for some specific configurations
r118:
New : LZ4 Streaming API (Fast version), special thanks to Takayuki Matsuoka
New : datagen : parametrable synthetic data generator for tests
Improved : fuzzer, support more test cases, more parameters, ability to jump to specific test
fix : support ppc64le platform (issue 131)
fix : Issue 52 (malicious address space overflow in 32-bits mode when using large custom format)
fix : Makefile : minor issue 130 : header files permissions
r117:
Added : man pages for lz4c and lz4cat
Added : automated tests on Travis, thanks to Takayuki Matsuoka !
fix : block-dependency command line (issue 127)
fix : lz4fullbench (issue 128)
r116:
hotfix (issue 124 & 125)
r115:
Added : lz4cat utility, installed on POSX systems (issue 118)
OS-X compatible compilation of dynamic library (issue 115)
r114:
Makefile : library correctly compiled with -O3 switch (issue 114)
Makefile : library compilation compatible with clang
Makefile : library is versioned and linked (issue 119)
lz4.h : no more static inline prototypes (issue 116)
man : improved header/footer (issue 111)
Makefile : Use system default $(CC) & $(MAKE) variables (issue 112)
xxhash : updated to r34
r113:
Large decompression speed improvement for GCC 32-bits. Thanks to Valery Croizier !
LZ4HC : Compression Level is now a programmable parameter (CLI from 4 to 9)
Separated IO routines from command line (lz4io.c)
Version number into lz4.h (suggested by Francesc Alted)
r112:
quickfix
r111 :
Makefile : added capability to install libraries
Modified Directory tree, to better separate libraries from programs.
r110 :
lz4 & lz4hc : added capability to allocate state & stream state with custom allocator (issue 99)
fuzzer & fullbench : updated to test new functions
man : documented -l command (Legacy format, for Linux kernel compression) (issue 102)
cmake : improved version by Mika Attila, building programs and libraries (issue 100)
xxHash : updated to r33
Makefile : clean also delete local package .tar.gz
r109 :
lz4.c : corrected issue 98 (LZ4_compress_limitedOutput())
Makefile : can specify version number from makefile
r108 :
lz4.c : corrected compression efficiency issue 97 in 64-bits chained mode (-BD) for streams > 4 GB (thanks Roman Strashkin for reporting)
r107 :
Makefile : support DESTDIR for staged installs. Thanks Jorge Aparicio.
Makefile : make install installs both lz4 and lz4c (Jorge Aparicio)
Makefile : removed -Wno-implicit-declaration compilation switch
lz4cli.c : include <stduni.h> for isatty() (Luca Barbato)
lz4.h : introduced LZ4_MAX_INPUT_SIZE constant (Shay Green)
lz4.h : LZ4_compressBound() : unified macro and inline definitions (Shay Green)
lz4.h : LZ4_decompressSafe_partial() : clarify comments (Shay Green)
lz4.c : LZ4_compress() verify input size condition (Shay Green)
bench.c : corrected a bug in free memory size evaluation
cmake : install into bin/ directory (Richard Yao)
cmake : check for just C compiler (Elan Ruusamae)
r106 :
Makefile : make dist modify text files in the package to respect Unix EoL convention
lz4cli.c : corrected small display bug in HC mode
r105 :
Makefile : New install script and man page, contributed by Prasad Pandit
lz4cli.c : Minor modifications, for easier extensibility
COPYING : added license file
LZ4_Streaming_Format.odt : modified file name to remove white space characters
Makefile : .exe suffix now properly added only for Windows target

View File

@@ -1,114 +0,0 @@
LZ4 - Extremely fast compression
================================
LZ4 is lossless compression algorithm,
providing compression speed at 400 MB/s per core,
scalable with multi-cores CPU.
It features an extremely fast decoder,
with speed in multiple GB/s per core,
typically reaching RAM speed limits on multi-core systems.
Speed can be tuned dynamically, selecting an "acceleration" factor
which trades compression ratio for more speed up.
On the other end, a high compression derivative, LZ4_HC, is also provided,
trading CPU time for improved compression ratio.
All versions feature the same decompression speed.
LZ4 library is provided as open-source software using BSD 2-Clause license.
|Branch |Status |
|------------|---------|
|master | [![Build Status][travisMasterBadge]][travisLink] [![Build status][AppveyorMasterBadge]][AppveyorLink] [![coverity][coverBadge]][coverlink] |
|dev | [![Build Status][travisDevBadge]][travisLink] [![Build status][AppveyorDevBadge]][AppveyorLink] |
[travisMasterBadge]: https://travis-ci.org/lz4/lz4.svg?branch=master "Continuous Integration test suite"
[travisDevBadge]: https://travis-ci.org/lz4/lz4.svg?branch=dev "Continuous Integration test suite"
[travisLink]: https://travis-ci.org/lz4/lz4
[AppveyorMasterBadge]: https://ci.appveyor.com/api/projects/status/github/lz4/lz4?branch=master&svg=true "Windows test suite"
[AppveyorDevBadge]: https://ci.appveyor.com/api/projects/status/github/lz4/lz4?branch=dev&svg=true "Windows test suite"
[AppveyorLink]: https://ci.appveyor.com/project/YannCollet/lz4-1lndh
[coverBadge]: https://scan.coverity.com/projects/4735/badge.svg "Static code analysis of Master branch"
[coverlink]: https://scan.coverity.com/projects/4735
> **Branch Policy:**
> - The "master" branch is considered stable, at all times.
> - The "dev" branch is the one where all contributions must be merged
before being promoted to master.
> + If you plan to propose a patch, please commit into the "dev" branch,
or its own feature branch.
Direct commit to "master" are not permitted.
Benchmarks
-------------------------
The benchmark uses [lzbench], from @inikep
compiled with GCC v6.2.0 on Linux 64-bits.
The reference system uses a Core i7-3930K CPU @ 4.5GHz.
Benchmark evaluates the compression of reference [Silesia Corpus]
in single-thread mode.
[lzbench]: https://github.com/inikep/lzbench
[Silesia Corpus]: http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
| Compressor | Ratio | Compression | Decompression |
| ---------- | ----- | ----------- | ------------- |
| memcpy | 1.000 | 7300 MB/s | 7300 MB/s |
|**LZ4 fast 8 (v1.7.3)**| 1.799 |**911 MB/s** | **3360 MB/s** |
|**LZ4 default (v1.7.3)**|**2.101**|**625 MB/s** | **3220 MB/s** |
| LZO 2.09 | 2.108 | 620 MB/s | 845 MB/s |
| QuickLZ 1.5.0 | 2.238 | 510 MB/s | 600 MB/s |
| Snappy 1.1.3 | 2.091 | 450 MB/s | 1550 MB/s |
| LZF v3.6 | 2.073 | 365 MB/s | 820 MB/s |
| [Zstandard] 1.1.1 -1 | 2.876 | 330 MB/s | 930 MB/s |
| [Zstandard] 1.1.1 -3 | 3.164 | 200 MB/s | 810 MB/s |
| [zlib] deflate 1.2.8 -1| 2.730 | 100 MB/s | 370 MB/s |
|**LZ4 HC -9 (v1.7.3)** |**2.720**| 34 MB/s | **3240 MB/s** |
| [zlib] deflate 1.2.8 -6| 3.099 | 33 MB/s | 390 MB/s |
[zlib]: http://www.zlib.net/
[Zstandard]: http://www.zstd.net/
LZ4 is also compatible and well optimized for x32 mode, for which it provides an additional +10% speed performance.
Installation
-------------------------
```
make
make install # this command may require root access
```
LZ4's `Makefile` supports standard [Makefile conventions],
including [staged installs], [redirection], or [command redefinition].
[Makefile conventions]: https://www.gnu.org/prep/standards/html_node/Makefile-Conventions.html
[staged installs]: https://www.gnu.org/prep/standards/html_node/DESTDIR.html
[redirection]: https://www.gnu.org/prep/standards/html_node/Directory-Variables.html
[command redefinition]: https://www.gnu.org/prep/standards/html_node/Utilities-in-Makefiles.html
Documentation
-------------------------
The raw LZ4 block compression format is detailed within [lz4_Block_format].
To compress an arbitrarily long file or data stream, multiple blocks are required.
Organizing these blocks and providing a common header format to handle their content
is the purpose of the Frame format, defined into [lz4_Frame_format].
Interoperable versions of LZ4 must respect this frame format.
[lz4_Block_format]: doc/lz4_Block_format.md
[lz4_Frame_format]: doc/lz4_Frame_format.md
Other source versions
-------------------------
Beyond the C reference source,
many contributors have created versions of lz4 in multiple languages
(Java, C#, Python, Perl, Ruby, etc.).
A list of known source ports is maintained on the [LZ4 Homepage].
[LZ4 Homepage]: http://www.lz4.org

View File

@@ -1,141 +0,0 @@
version: 1.0.{build}
environment:
matrix:
- COMPILER: "visual"
CONFIGURATION: "Debug"
PLATFORM: "x64"
- COMPILER: "visual"
CONFIGURATION: "Debug"
PLATFORM: "Win32"
- COMPILER: "visual"
CONFIGURATION: "Release"
PLATFORM: "x64"
- COMPILER: "visual"
CONFIGURATION: "Release"
PLATFORM: "Win32"
- COMPILER: "gcc"
PLATFORM: "mingw64"
- COMPILER: "gcc"
PLATFORM: "mingw32"
- COMPILER: "gcc"
PLATFORM: "clang"
install:
- ECHO Installing %COMPILER% %PLATFORM% %CONFIGURATION%
- MKDIR bin
- if [%COMPILER%]==[gcc] SET PATH_ORIGINAL=%PATH%
- if [%COMPILER%]==[gcc] (
SET "PATH_MINGW32=c:\MinGW\bin;c:\MinGW\usr\bin" &&
SET "PATH_MINGW64=c:\msys64\mingw64\bin;c:\msys64\usr\bin" &&
COPY C:\MinGW\bin\mingw32-make.exe C:\MinGW\bin\make.exe &&
COPY C:\MinGW\bin\gcc.exe C:\MinGW\bin\cc.exe
) else (
IF [%PLATFORM%]==[x64] (SET ADDITIONALPARAM=/p:LibraryPath="C:\Program Files\Microsoft SDKs\Windows\v7.1\lib\x64;c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\lib\amd64;C:\Program Files (x86)\Microsoft Visual Studio 10.0\;C:\Program Files (x86)\Microsoft Visual Studio 10.0\lib\amd64;")
)
build_script:
- if [%PLATFORM%]==[mingw32] SET PATH=%PATH_MINGW32%;%PATH_ORIGINAL%
- if [%PLATFORM%]==[mingw64] SET PATH=%PATH_MINGW64%;%PATH_ORIGINAL%
- if [%PLATFORM%]==[clang] SET PATH=%PATH_MINGW64%;%PATH_ORIGINAL%
- ECHO *** &&
ECHO Building %COMPILER% %PLATFORM% %CONFIGURATION% &&
ECHO ***
- if [%PLATFORM%]==[clang] (clang -v)
- if [%COMPILER%]==[gcc] (gcc -v)
- if [%COMPILER%]==[gcc] (
echo ----- &&
make -v &&
echo ----- &&
if not [%PLATFORM%]==[clang] (
make -C programs lz4 && make -C tests fullbench && make -C lib lib
) ELSE (
make -C programs lz4 CC=clang MOREFLAGS="--target=x86_64-w64-mingw32 -Werror -Wconversion -Wno-sign-conversion" &&
make -C tests fullbench CC=clang MOREFLAGS="--target=x86_64-w64-mingw32 -Werror -Wconversion -Wno-sign-conversion" &&
make -C lib lib CC=clang MOREFLAGS="--target=x86_64-w64-mingw32 -Werror -Wconversion -Wno-sign-conversion"
)
)
- if [%COMPILER%]==[gcc] if not [%PLATFORM%]==[clang] (
MKDIR bin\dll bin\static bin\example bin\include &&
COPY tests\fullbench.c bin\example\ &&
COPY lib\xxhash.c bin\example\ &&
COPY lib\xxhash.h bin\example\ &&
COPY lib\lz4.h bin\include\ &&
COPY lib\lz4hc.h bin\include\ &&
COPY lib\lz4frame.h bin\include\ &&
COPY lib\liblz4.a bin\static\liblz4_static.lib &&
COPY lib\dll\liblz4.* bin\dll\ &&
COPY lib\dll\example\Makefile bin\example\ &&
COPY lib\dll\example\fullbench-dll.* bin\example\ &&
COPY lib\dll\example\README.md bin\ &&
COPY programs\lz4.exe bin\lz4.exe
)
- if [%COMPILER%]==[gcc] if [%PLATFORM%]==[mingw64] (
7z.exe a bin\lz4_x64.zip NEWS .\bin\lz4.exe .\bin\README.md .\bin\example .\bin\dll .\bin\static .\bin\include &&
appveyor PushArtifact bin\lz4_x64.zip
)
- if [%COMPILER%]==[gcc] if [%PLATFORM%]==[mingw32] (
7z.exe a bin\lz4_x86.zip NEWS .\bin\lz4.exe .\bin\README.md .\bin\example .\bin\dll .\bin\static .\bin\include &&
appveyor PushArtifact bin\lz4_x86.zip
)
- if [%COMPILER%]==[gcc] (COPY tests\fullbench.exe programs\)
- if [%COMPILER%]==[visual] (
ECHO *** &&
ECHO *** Building Visual Studio 2010 %PLATFORM%\%CONFIGURATION% &&
ECHO *** &&
msbuild "visual\VS2010\lz4.sln" %ADDITIONALPARAM% /m /verbosity:minimal /property:PlatformToolset=v100 /t:Clean,Build /p:Platform=%PLATFORM% /p:Configuration=%CONFIGURATION% /logger:"C:\Program Files\AppVeyor\BuildAgent\Appveyor.MSBuildLogger.dll" &&
ECHO *** &&
ECHO *** Building Visual Studio 2012 %PLATFORM%\%CONFIGURATION% &&
ECHO *** &&
msbuild "visual\VS2010\lz4.sln" /m /verbosity:minimal /property:PlatformToolset=v110 /t:Clean,Build /p:Platform=%PLATFORM% /p:Configuration=%CONFIGURATION% /logger:"C:\Program Files\AppVeyor\BuildAgent\Appveyor.MSBuildLogger.dll" &&
ECHO *** &&
ECHO *** Building Visual Studio 2013 %PLATFORM%\%CONFIGURATION% &&
ECHO *** &&
msbuild "visual\VS2010\lz4.sln" /m /verbosity:minimal /property:PlatformToolset=v120 /t:Clean,Build /p:Platform=%PLATFORM% /p:Configuration=%CONFIGURATION% /logger:"C:\Program Files\AppVeyor\BuildAgent\Appveyor.MSBuildLogger.dll" &&
ECHO *** &&
ECHO *** Building Visual Studio 2015 %PLATFORM%\%CONFIGURATION% &&
ECHO *** &&
msbuild "visual\VS2010\lz4.sln" /m /verbosity:minimal /property:PlatformToolset=v140 /t:Clean,Build /p:Platform=%PLATFORM% /p:Configuration=%CONFIGURATION% /logger:"C:\Program Files\AppVeyor\BuildAgent\Appveyor.MSBuildLogger.dll" &&
COPY visual\VS2010\bin\%PLATFORM%_%CONFIGURATION%\*.exe programs\
)
test_script:
- ECHO *** &&
ECHO Testing %COMPILER% %PLATFORM% %CONFIGURATION% &&
ECHO ***
- if not [%COMPILER%]==[unknown] (
CD programs &&
lz4 -h &&
lz4 -i1b lz4.exe &&
lz4 -i1b5 lz4.exe &&
lz4 -i1b10 lz4.exe &&
lz4 -i1b15 lz4.exe &&
echo ------- lz4 tested ------- &&
fullbench.exe -i1 fullbench.exe
)
artifacts:
- path: bin\lz4_x64.zip
- path: bin\lz4_x86.zip
deploy:
- provider: GitHub
artifact: bin\lz4_x64.zip
auth_token:
secure: w6UJaGie0qbZvffr/fqyhO/Vj8rMiQWnv9a8qm3gxfngdHDTMT42wYupqJpIExId
force_update: true
prerelease: true
on:
COMPILER: gcc
PLATFORM: "mingw64"
appveyor_repo_tag: true
- provider: GitHub
artifact: bin\lz4_x86.zip
auth_token:
secure: w6UJaGie0qbZvffr/fqyhO/Vj8rMiQWnv9a8qm3gxfngdHDTMT42wYupqJpIExId
force_update: true
prerelease: true
on:
COMPILER: gcc
PLATFORM: "mingw32"
appveyor_repo_tag: true

View File

@@ -1,39 +0,0 @@
dependencies:
override:
- sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test; sudo apt-get -y -qq update
- sudo apt-get -y install qemu-system-ppc qemu-user-static gcc-powerpc-linux-gnu
- sudo apt-get -y install qemu-system-arm gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
- sudo apt-get -y install libc6-dev-i386 clang gcc-5 gcc-5-multilib gcc-6 valgrind
test:
override:
# Tests compilers and C standards
- clang -v; make clangtest && make clean
- g++ -v; make gpptest && make clean
- gcc -v; make c_standards && make clean
- gcc-5 -v; make -C tests test-lz4 CC=gcc-5 MOREFLAGS=-Werror && make clean
- gcc-5 -v; make -C tests test-lz4c32 CC=gcc-5 MOREFLAGS="-I/usr/include/x86_64-linux-gnu -Werror" && make clean
- gcc-6 -v; make c_standards CC=gcc-6 && make clean
- gcc-6 -v; make -C tests test-lz4 CC=gcc-6 MOREFLAGS=-Werror && make clean
# Shorter tests
- make cmake && make clean
- make -C tests test-lz4
- make -C tests test-lz4c
- make -C tests test-fasttest
- make -C tests test-frametest
- make -C tests test-fullbench
- make -C tests test-fuzzer && make clean
- make -C lib all && make clean
- pyenv global 3.4.4; CFLAGS=-I/usr/include/x86_64-linux-gnu make versionsTest && make clean
- make travis-install && make clean
# Longer tests
- gcc -v; make -C tests test32 MOREFLAGS="-I/usr/include/x86_64-linux-gnu" && make clean
- make usan && make clean
- clang -v; make staticAnalyze && make clean
# Valgrind tests
- make -C tests test-mem && make clean
# ARM, AArch64, PowerPC, PowerPC64 tests
- make platformTest CC=powerpc-linux-gnu-gcc QEMU_SYS=qemu-ppc-static && make clean
- make platformTest CC=powerpc-linux-gnu-gcc QEMU_SYS=qemu-ppc64-static MOREFLAGS=-m64 && make clean
- make platformTest CC=arm-linux-gnueabi-gcc QEMU_SYS=qemu-arm-static && make clean
- make platformTest CC=aarch64-linux-gnu-gcc QEMU_SYS=qemu-aarch64-static && make clean

View File

@@ -1,9 +0,0 @@
# cmake artefact
CMakeCache.txt
CMakeFiles
*.cmake
Makefile
liblz4.pc
lz4c
install_manifest.txt

View File

@@ -1,221 +0,0 @@
# CMake support for LZ4
#
# To the extent possible under law, the author(s) have dedicated all
# copyright and related and neighboring rights to this software to
# the public domain worldwide. This software is distributed without
# any warranty.
#
# For details, see <http://creativecommons.org/publicdomain/zero/1.0/>.
#
# LZ4's CMake support is maintained by Evan Nemerson; when filing
# bugs please mention @nemequ to make sure I see it.
set(LZ4_TOP_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../..")
# Parse version information
file(STRINGS "${LZ4_TOP_SOURCE_DIR}/lib/lz4.h" LZ4_VERSION_MAJOR REGEX "^#define LZ4_VERSION_MAJOR +([0-9]+) +.*$")
string(REGEX REPLACE "^#define LZ4_VERSION_MAJOR +([0-9]+) +.*$" "\\1" LZ4_VERSION_MAJOR "${LZ4_VERSION_MAJOR}")
file(STRINGS "${LZ4_TOP_SOURCE_DIR}/lib/lz4.h" LZ4_VERSION_MINOR REGEX "^#define LZ4_VERSION_MINOR +([0-9]+) +.*$")
string(REGEX REPLACE "^#define LZ4_VERSION_MINOR +([0-9]+) +.*$" "\\1" LZ4_VERSION_MINOR "${LZ4_VERSION_MINOR}")
file(STRINGS "${LZ4_TOP_SOURCE_DIR}/lib/lz4.h" LZ4_VERSION_RELEASE REGEX "^#define LZ4_VERSION_RELEASE +([0-9]+) +.*$")
string(REGEX REPLACE "^#define LZ4_VERSION_RELEASE +([0-9]+) +.*$" "\\1" LZ4_VERSION_RELEASE "${LZ4_VERSION_RELEASE}")
set(LZ4_VERSION_STRING "${LZ4_VERSION_MAJOR}.${LZ4_VERSION_MINOR}.${LZ4_VERSION_RELEASE}")
mark_as_advanced(LZ4_VERSION_STRING LZ4_VERSION_MAJOR LZ4_VERSION_MINOR LZ4_VERSION_RELEASE)
if("${CMAKE_VERSION}" VERSION_LESS "3.0")
project(LZ4 C)
else()
cmake_policy (SET CMP0048 NEW)
project(LZ4
VERSION ${LZ4_VERSION_STRING}
LANGUAGES C)
endif()
cmake_minimum_required (VERSION 2.8.6)
# If LZ4 is being bundled in another project, we don't want to
# install anything. However, we want to let people override this, so
# we'll use the LZ4_BUNDLED_MODE variable to let them do that; just
# set it to OFF in your project before you add_subdirectory(lz4/contrib/cmake_unofficial).
get_directory_property(LZ4_PARENT_DIRECTORY PARENT_DIRECTORY)
if("${LZ4_BUNDLED_MODE}" STREQUAL "")
# Bundled mode hasn't been set one way or the other, set the default
# depending on whether or not we are the top-level project.
if("${LZ4_PARENT_DIRECTORY}" STREQUAL "")
set(LZ4_BUNDLED_MODE OFF)
else()
set(LZ4_BUNDLED_MODE ON)
endif()
endif()
mark_as_advanced(LZ4_BUNDLED_MODE)
# CPack
if(NOT LZ4_BUNDLED_MODE AND NOT CPack_CMake_INCLUDED)
set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "LZ4 compression library")
set(CPACK_PACKAGE_DESCRIPTION_FILE "${LZ4_TOP_SOURCE_DIR}/README.md")
set(CPACK_RESOURCE_FILE_LICENSE "${LZ4_TOP_SOURCE_DIR}/LICENSE")
set(CPACK_PACKAGE_VERSION_MAJOR ${LZ4_VERSION_MAJOR})
set(CPACK_PACKAGE_VERSION_MINOR ${LZ4_VERSION_MINOR})
set(CPACK_PACKAGE_VERSION_PATCH ${LZ4_VERSION_RELEASE})
include(CPack)
endif(NOT LZ4_BUNDLED_MODE AND NOT CPack_CMake_INCLUDED)
# Allow people to choose whether to build shared or static libraries
# via the BUILD_SHARED_LIBS option unless we are in bundled mode, in
# which case we always use static libraries.
include(CMakeDependentOption)
CMAKE_DEPENDENT_OPTION(BUILD_SHARED_LIBS "Build shared libraries" ON "NOT LZ4_BUNDLED_MODE" OFF)
CMAKE_DEPENDENT_OPTION(BUILD_STATIC_LIBS "Build static libraries" OFF "BUILD_SHARED_LIBS" ON)
if(NOT BUILD_SHARED_LIBS AND NOT BUILD_STATIC_LIBS)
message(FATAL_ERROR "Both BUILD_SHARED_LIBS and BUILD_STATIC_LIBS have been disabled")
endif()
set(LZ4_LIB_SOURCE_DIR "${LZ4_TOP_SOURCE_DIR}/lib")
set(LZ4_PROG_SOURCE_DIR "${LZ4_TOP_SOURCE_DIR}/programs")
include_directories("${LZ4_LIB_SOURCE_DIR}")
# CLI sources
set(LZ4_SOURCES
"${LZ4_LIB_SOURCE_DIR}/lz4.c"
"${LZ4_LIB_SOURCE_DIR}/lz4hc.c"
"${LZ4_LIB_SOURCE_DIR}/lz4.h"
"${LZ4_LIB_SOURCE_DIR}/lz4hc.h"
"${LZ4_LIB_SOURCE_DIR}/lz4frame.c"
"${LZ4_LIB_SOURCE_DIR}/lz4frame.h"
"${LZ4_LIB_SOURCE_DIR}/xxhash.c")
set(LZ4_CLI_SOURCES
"${LZ4_PROG_SOURCE_DIR}/bench.c"
"${LZ4_PROG_SOURCE_DIR}/lz4cli.c"
"${LZ4_PROG_SOURCE_DIR}/lz4io.c"
"${LZ4_PROG_SOURCE_DIR}/datagen.c")
# Whether to use position independent code for the static library. If
# we're building a shared library this is ignored and PIC is always
# used.
option(LZ4_POSITION_INDEPENDENT_LIB "Use position independent code for static library (if applicable)" ON)
# liblz4
set(LZ4_LIBRARIES_BUILT)
if(BUILD_SHARED_LIBS)
add_library(lz4_shared SHARED ${LZ4_SOURCES})
set_target_properties(lz4_shared PROPERTIES
OUTPUT_NAME lz4
SOVERSION "${LZ4_VERSION_MAJOR}"
VERSION "${LZ4_VERSION_STRING}")
list(APPEND LZ4_LIBRARIES_BUILT lz4_shared)
endif()
if(BUILD_STATIC_LIBS)
add_library(lz4_static STATIC ${LZ4_SOURCES})
set_target_properties(lz4_static PROPERTIES
OUTPUT_NAME lz4
POSITION_INDEPENDENT_CODE ${LZ4_POSITION_INDEPENDENT_LIB})
list(APPEND LZ4_LIBRARIES_BUILT lz4_static)
endif()
# link to shared whenever possible, to static otherwise
if(BUILD_SHARED_LIBS)
set(LZ4_LINK_LIBRARY lz4_shared)
else()
set(LZ4_LINK_LIBRARY lz4_static)
endif()
# lz4
add_executable(lz4cli ${LZ4_CLI_SOURCES})
set_target_properties(lz4cli PROPERTIES OUTPUT_NAME lz4)
target_link_libraries(lz4cli ${LZ4_LINK_LIBRARY})
# lz4c
add_executable(lz4c ${LZ4_CLI_SOURCES})
set_target_properties(lz4c PROPERTIES COMPILE_DEFINITIONS "ENABLE_LZ4C_LEGACY_OPTIONS")
target_link_libraries(lz4c ${LZ4_LINK_LIBRARY})
# Extra warning flags
include (CheckCCompilerFlag)
foreach (flag
# GCC-style
-Wall
-Wextra
-Wundef
-Wcast-qual
-Wcast-align
-Wshadow
-Wswitch-enum
-Wdeclaration-after-statement
-Wstrict-prototypes
-Wpointer-arith
# MSVC-style
/W4)
# Because https://gcc.gnu.org/wiki/FAQ#wnowarning
string(REGEX REPLACE "\\-Wno\\-(.+)" "-W\\1" flag_to_test "${flag}")
string(REGEX REPLACE "[^a-zA-Z0-9]+" "_" test_name "CFLAG_${flag_to_test}")
check_c_compiler_flag("${ADD_COMPILER_FLAGS_PREPEND} ${flag_to_test}" ${test_name})
if(${test_name})
set(CMAKE_C_FLAGS "${flag} ${CMAKE_C_FLAGS}")
endif()
unset(test_name)
unset(flag_to_test)
endforeach (flag)
if(NOT LZ4_BUNDLED_MODE)
include(GNUInstallDirs)
install(TARGETS lz4cli lz4c
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}")
install(TARGETS ${LZ4_LIBRARIES_BUILT}
LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}"
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}")
install(FILES
"${LZ4_LIB_SOURCE_DIR}/lz4.h"
"${LZ4_LIB_SOURCE_DIR}/lz4frame.h"
"${LZ4_LIB_SOURCE_DIR}/lz4hc.h"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}")
install(FILES "${LZ4_PROG_SOURCE_DIR}/lz4.1"
DESTINATION "${CMAKE_INSTALL_MANDIR}/man1")
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/liblz4.pc"
DESTINATION "${CMAKE_INSTALL_LIBDIR}/pkgconfig")
# install lz4cat and unlz4 symlinks on *nix
if(UNIX)
install(CODE "
foreach(f lz4cat unlz4)
set(dest \"\$ENV{DESTDIR}${CMAKE_INSTALL_FULL_BINDIR}/\${f}\")
message(STATUS \"Symlinking: \${dest} -> lz4\")
execute_process(
COMMAND \"${CMAKE_COMMAND}\" -E create_symlink lz4 \"\${dest}\")
endforeach()
")
# create manpage aliases
foreach(f lz4cat unlz4)
file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/${f}.1" ".so man1/lz4.1\n")
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/${f}.1"
DESTINATION "${CMAKE_INSTALL_MANDIR}/man1")
endforeach()
endif(UNIX)
endif(NOT LZ4_BUNDLED_MODE)
# pkg-config
set(PREFIX "${CMAKE_INSTALL_PREFIX}")
if("${CMAKE_INSTALL_FULL_LIBDIR}" STREQUAL "${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}")
set(LIBDIR "\${prefix}/${CMAKE_INSTALL_LIBDIR}")
else()
set(LIBDIR "${CMAKE_INSTALL_FULL_LIBDIR}")
endif()
if("${CMAKE_INSTALL_FULL_INCLUDEDIR}" STREQUAL "${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}")
set(INCLUDEDIR "\${prefix}/${CMAKE_INSTALL_INCLUDEDIR}")
else()
set(INCLUDEDIR "${CMAKE_INSTALL_FULL_INCLUDEDIR}")
endif()
# for liblz4.pc substitution
set(VERSION ${LZ4_VERSION_STRING})
configure_file(${LZ4_LIB_SOURCE_DIR}/liblz4.pc.in liblz4.pc @ONLY)

View File

@@ -1,10 +0,0 @@
liblz4 (1.7.2) unstable; urgency=low
* Changed : moved to versioning; package, cli and library have same version number
* Improved: Small decompression speed boost (+4%)
* Improved: Performance on ARMv6 and ARMv7
* Added : Debianization, by Evgeniy Polyakov
* Makefile: Generates object files (*.o) for faster (re)compilation on low power systems
* Fix : cli : crash on some invalid inputs
-- Yann Collet <Cyan4973@github.com> Sun, 28 Jun 2015 01:00:00 +0000

View File

@@ -1 +0,0 @@
7

View File

@@ -1,23 +0,0 @@
Source: liblz4
Section: devel
Priority: optional
Maintainer: Evgeniy Polyakov <zbr@ioremap.net>
Build-Depends:
cmake (>= 2.6),
debhelper (>= 7.0.50~),
cdbs
Standards-Version: 3.8.0
Homepage: http://www.lz4.org/
Vcs-Git: git://github.com/lz4/lz4.git
Vcs-Browser: https://github.com/lz4/lz4
Package: liblz4
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Extremely Fast Compression algorithm http://www.lz4.org
Package: liblz4-dev
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Extremely Fast Compression algorithm http://www.lz4.org
Development files.

View File

@@ -1,9 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: liblz4
Upstream-Contact: Yann Collet <Cyan4973@github.com>
Source: https://github.com/lz4/lz4
Files: *
Copyright: (C) 2011+ Yann Collet
License: GPL-2+
The full text of license: https://github.com/Cyan4973/lz4/blob/master/lib/LICENSE

View File

@@ -1 +0,0 @@
usr/bin

View File

@@ -1,2 +0,0 @@
usr/include/lz4*
usr/lib/liblz4.so

View File

@@ -1,2 +0,0 @@
usr/lib/liblz4.so.*
usr/bin/*

View File

@@ -1,8 +0,0 @@
#!/usr/bin/make -f
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/cmake.mk
DEB_CMAKE_EXTRA_FLAGS := -DCMAKE_BUILD_TYPE=RelWithDebInfo ../cmake_unofficial

Binary file not shown.

View File

@@ -1,130 +0,0 @@
# Copyright (c) 2015, Louis P. Santillan <lpsantil@gmail.com>
# All rights reserved.
# See LICENSE for licensing details.
DESTDIR ?= /opt/local
# Pulled the code below from lib/Makefile. Might be nicer to derive this somehow without sed
# Version numbers
VERSION ?= 129
RELEASE ?= r$(VERSION)
LIBVER_MAJOR=$(shell sed -n '/define LZ4_VERSION_MAJOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < lib/lz4.h)
LIBVER_MINOR=$(shell sed -n '/define LZ4_VERSION_MINOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < lib/lz4.h)
LIBVER_PATCH=$(shell sed -n '/define LZ4_VERSION_RELEASE/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < lib/lz4.h)
LIBVER=$(LIBVER_MAJOR).$(LIBVER_MINOR).$(LIBVER_PATCH)
######################################################################
CROSS ?= i586-pc-msdosdjgpp
CC = $(CROSS)-gcc
AR = $(CROSS)-ar
LD = $(CROSS)-gcc
CFLAGS ?= -O3 -std=gnu99 -Wall -Wextra -Wundef -Wshadow -Wcast-qual -Wcast-align -Wstrict-prototypes -pedantic -DLZ4_VERSION=\"$(RELEASE)\"
LDFLAGS ?= -s
SRC = programs/bench.c programs/lz4io.c programs/lz4cli.c
OBJ = $(SRC:.c=.o)
SDEPS = $(SRC:.c=.d)
IDIR = lib
EDIR = .
EXE = lz4.exe
LNK = lz4
LDIR = lib
LSRC = lib/lz4.c lib/lz4hc.c lib/lz4frame.c lib/xxhash.c
INC = $(LSRC:.c=.h)
LOBJ = $(LSRC:.c=.o)
LSDEPS = $(LSRC:.c=.d)
LIB = $(LDIR)/lib$(LNK).a
# Since LDFLAGS defaults to "-s", probably better to override unless
# you have a default you would like to maintain
ifeq ($(WITH_DEBUG), 1)
CFLAGS += -g
LDFLAGS += -g
endif
# Since LDFLAGS defaults to "-s", probably better to override unless
# you have a default you would like to maintain
ifeq ($(WITH_PROFILING), 1)
CFLAGS += -pg
LDFLAGS += -pg
endif
%.o: %.c $(INC) Makefile
$(CC) $(CFLAGS) -MMD -MP -I$(IDIR) -c $< -o $@
%.exe: %.o $(LIB) Makefile
$(LD) $< -L$(LDIR) -l$(LNK) $(LDFLAGS) $(LIBDEP) -o $@
######################################################################
######################## DO NOT MODIFY BELOW #########################
######################################################################
.PHONY: all install uninstall showconfig gstat gpush
all: $(LIB) $(EXE)
$(LIB): $(LOBJ)
$(AR) -rcs $@ $^
$(EXE): $(LOBJ) $(OBJ)
$(LD) $(LDFLAGS) $(LOBJ) $(OBJ) -o $(EDIR)/$@
clean:
rm -f $(OBJ) $(EXE) $(LOBJ) $(LIB) *.tmp $(SDEPS) $(LSDEPS) $(TSDEPS)
install: $(INC) $(LIB) $(EXE)
mkdir -p $(DESTDIR)/bin $(DESTDIR)/include $(DESTDIR)/lib
rm -f .footprint
echo $(DESTDIR)/bin/$(EXE) >> .footprint
cp -v $(EXE) $(DESTDIR)/bin/
@for T in $(LIB); \
do ( \
echo $(DESTDIR)/$$T >> .footprint; \
cp -v --parents $$T $(DESTDIR) \
); done
@for T in $(INC); \
do ( \
echo $(DESTDIR)/include/`basename -a $$T` >> .footprint; \
cp -v $$T $(DESTDIR)/include/ \
); done
uninstall: .footprint
@for T in $(shell cat .footprint); do rm -v $$T; done
-include $(SDEPS) $(LSDEPS)
showconfig:
@echo "PWD="$(PWD)
@echo "VERSION="$(VERSION)
@echo "RELEASE="$(RELEASE)
@echo "LIBVER_MAJOR="$(LIBVER_MAJOR)
@echo "LIBVER_MINOR="$(LIBVER_MINOR)
@echo "LIBVER_PATCH="$(LIBVER_PATCH)
@echo "LIBVER="$(LIBVER)
@echo "CROSS="$(CROSS)
@echo "CC="$(CC)
@echo "AR="$(AR)
@echo "LD="$(LD)
@echo "DESTDIR="$(DESTDIR)
@echo "CFLAGS="$(CFLAGS)
@echo "LDFLAGS="$(LDFLAGS)
@echo "SRC="$(SRC)
@echo "OBJ="$(OBJ)
@echo "IDIR="$(IDIR)
@echo "INC="$(INC)
@echo "EDIR="$(EDIR)
@echo "EXE="$(EXE)
@echo "LDIR="$(LDIR)
@echo "LSRC="$(LSRC)
@echo "LOBJ="$(LOBJ)
@echo "LNK="$(LNK)
@echo "LIB="$(LIB)
@echo "SDEPS="$(SDEPS)
@echo "LSDEPS="$(LSDEPS)
gstat:
git status
gpush:
git commit
git push

View File

@@ -1,21 +0,0 @@
# lz4 for DOS/djgpp
This file details how to compile lz4.exe and liblz4.a for use on DOS/djgpp using
Andrew Wu's build-djgpp cross compilers ([GH][0], [Binaries][1]) on OSX or Linux.
## Setup
* Download a djgpp tarball [binaries][1] for your platform.
* Extract and install it (`tar jxvf djgpp-linux64-gcc492.tar.bz2`). Note the path. We'll assume `/home/user/djgpp`.
* Add the `bin` folder to your `PATH`. In bash, do `export PATH=/home/user/djgpp/bin:$PATH`.
* The `Makefile` in `contrib/djgpp/` sets up `CC`, `AR`, `LD` for you. So, `CC=i586-pc-msdosdjgpp-gcc`, `AR=i586-pc-msdosdjgpp-ar`, `LD=i586-pc-msdosdjgpp-gcc`.
## Building LZ4 for DOS
In the base dir of lz4 and with `contrib/djgpp/Makefile`, try:
* `make -f contrib/djgpp/Makefile`
* `make -f contrib/djgpp/Makefile liblz4.a`
* `make -f contrib/djgpp/Makefile lz4.exe`
* `make -f contrib/djgpp/Makefile DESTDIR=/home/user/dos install`, however it doesn't make much sense on a \*nix.
* You can also do `make -f contrib/djgpp/Makefile uninstall`
[0]: https://github.com/andrewwutw/build-djgpp
[1]: https://github.com/andrewwutw/build-djgpp/releases

View File

@@ -1,2 +0,0 @@
# build artefact
gen_manual

View File

@@ -1,76 +0,0 @@
# ################################################################
# Copyright (C) Przemyslaw Skibinski 2016-present
# All rights reserved.
#
# BSD license
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice, this
# list of conditions and the following disclaimer in the documentation and/or
# other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# You can contact the author at :
# - LZ4 source repository : https://github.com/Cyan4973/lz4
# - LZ4 forum group : https://groups.google.com/forum/#!forum/lz4c
# ################################################################
CFLAGS ?= -O3
CFLAGS += -Wall -Wextra -Wcast-qual -Wcast-align -Wshadow -Wstrict-aliasing=1 -Wswitch-enum -Wno-comment
CFLAGS += $(MOREFLAGS)
FLAGS = $(CPPFLAGS) $(CFLAGS) $(LDFLAGS)
LZ4API = ../../lib/lz4.h
LZ4MANUAL = ../../doc/lz4_manual.html
LZ4FAPI = ../../lib/lz4frame.h
LZ4FMANUAL = ../../doc/lz4frame_manual.html
LIBVER_MAJOR_SCRIPT:=`sed -n '/define LZ4_VERSION_MAJOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < $(LZ4API)`
LIBVER_MINOR_SCRIPT:=`sed -n '/define LZ4_VERSION_MINOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < $(LZ4API)`
LIBVER_PATCH_SCRIPT:=`sed -n '/define LZ4_VERSION_RELEASE/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < $(LZ4API)`
LIBVER_SCRIPT:= $(LIBVER_MAJOR_SCRIPT).$(LIBVER_MINOR_SCRIPT).$(LIBVER_PATCH_SCRIPT)
LZ4VER := $(shell echo $(LIBVER_SCRIPT))
# Define *.exe as extension for Windows systems
ifneq (,$(filter Windows%,$(OS)))
EXT =.exe
else
EXT =
endif
.PHONY: default
default: gen_manual
gen_manual: gen_manual.cpp
$(CXX) $(FLAGS) $^ -o $@$(EXT)
$(LZ4MANUAL) : gen_manual $(LZ4API)
echo "Update lz4 manual in /doc"
./gen_manual $(LZ4VER) $(LZ4API) $@
$(LZ4FMANUAL) : gen_manual $(LZ4FAPI)
echo "Update lz4frame manual in /doc"
./gen_manual $(LZ4VER) $(LZ4FAPI) $@
.PHONY: manuals
manuals: gen_manual $(LZ4MANUAL) $(LZ4FMANUAL)
.PHONY: clean
clean:
@$(RM) gen_manual$(EXT)
@echo Cleaning completed

View File

@@ -1,31 +0,0 @@
gen_manual - a program for automatic generation of manual from source code
==========================================================================
#### Introduction
This simple C++ program generates a single-page HTML manual from `lz4.h`.
The format of recognized comment blocks is as follows (see the illustrative snippet below):
- comments of type `/*!` mean: this is a function declaration; switch comments with declarations
- comments of type `/**` and `/*-` mean: this is a comment; use a `<H2>` header for the first line
- comments of type `/*=` and `/**=` mean: use a `<H3>` header and show also all functions until first empty line
- comments of type `/*X` where `X` is different from above-mentioned are ignored
Moreover:
- `LZ4LIB_API` is removed to improve readability
- `typedef` are detected and included even if uncommented
- comments of type `/**<` and `/*!<` are detected and only function declaration is highlighted (bold)
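For illustration, a short hypothetical header excerpt (the `EXAMPLE_*` names are made up and not taken from `lz4.h`) showing how these markers would be picked up:
```
/*-************************************
 *  Example chapter title (rendered as an <H2> header)
 **************************************/

/*! EXAMPLE_compress() :
 *  This block is swapped with the declaration that follows,
 *  so the prototype appears first in the generated HTML.
 *  @return : number of bytes written, or 0 on failure. */
int EXAMPLE_compress(const char* src, char* dst, int srcSize, int dstCapacity);

int EXAMPLE_versionNumber(void);   /**< only the declaration itself is shown in bold */
```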
#### Usage
The program requires 3 parameters:
```
gen_manual [lz4_version] [input_file] [output_html]
```
To compile the program and generate the lz4 manual, we used:
```
make
./gen_manual.exe 1.7.3 ../../lib/lz4.h lz4_manual.html
```

View File

@@ -1,10 +0,0 @@
#!/bin/sh
LIBVER_MAJOR_SCRIPT=`sed -n '/define LZ4_VERSION_MAJOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < ../../lib/lz4.h`
LIBVER_MINOR_SCRIPT=`sed -n '/define LZ4_VERSION_MINOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < ../../lib/lz4.h`
LIBVER_PATCH_SCRIPT=`sed -n '/define LZ4_VERSION_RELEASE/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < ../../lib/lz4.h`
LIBVER_SCRIPT=$LIBVER_MAJOR_SCRIPT.$LIBVER_MINOR_SCRIPT.$LIBVER_PATCH_SCRIPT
echo LZ4_VERSION=$LIBVER_SCRIPT
./gen_manual "lz4 $LIBVER_SCRIPT" ../../lib/lz4.h ./lz4_manual.html
./gen_manual "lz4frame $LIBVER_SCRIPT" ../../lib/lz4frame.h ./lz4frame_manual.html

View File

@@ -1,247 +0,0 @@
/*
Copyright (c) 2016-present, Przemyslaw Skibinski
All rights reserved.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 homepage : http://www.lz4.org
- LZ4 source repository : https://github.com/lz4/lz4
*/
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
using namespace std;
/* trim string at the beginning and at the end */
void trim(string& s, string characters)
{
size_t p = s.find_first_not_of(characters);
s.erase(0, p);
p = s.find_last_not_of(characters);
if (string::npos != p)
s.erase(p+1);
}
/* trim C++ style comments */
void trim_comments(string &s)
{
size_t spos, epos;
spos = s.find("/*");
epos = s.find("*/");
s = s.substr(spos+3, epos-(spos+3));
}
/* get lines until a given terminator */
vector<string> get_lines(vector<string>& input, int& linenum, string terminator)
{
vector<string> out;
string line;
size_t epos;
while ((size_t)linenum < input.size()) {
line = input[linenum];
if (terminator.empty() && line.empty()) { linenum--; break; }
epos = line.find(terminator);
if (!terminator.empty() && epos!=string::npos) {
out.push_back(line);
break;
}
out.push_back(line);
linenum++;
}
return out;
}
/* print line with LZ4LIB_API removed and C++ comments not bold */
void print_line(stringstream &sout, string line)
{
size_t spos, epos;
if (line.substr(0,11) == "LZ4LIB_API ") line = line.substr(11);
if (line.substr(0,12) == "LZ4FLIB_API ") line = line.substr(12);
spos = line.find("/*");
epos = line.find("*/");
if (spos!=string::npos && epos!=string::npos) {
sout << line.substr(0, spos);
sout << "</b>" << line.substr(spos) << "<b>" << endl;
} else {
// fprintf(stderr, "lines=%s\n", line.c_str());
sout << line << endl;
}
}
int main(int argc, char *argv[]) {
char exclam;
int linenum, chapter = 1;
vector<string> input, lines, comments, chapters;
string line, version;
size_t spos, l;
stringstream sout;
ifstream istream;
ofstream ostream;
if (argc < 4) {
cout << "usage: " << argv[0] << " [lz4_version] [input_file] [output_html]" << endl;
return 1;
}
version = string(argv[1]) + " Manual";
istream.open(argv[2], ifstream::in);
if (!istream.is_open()) {
cout << "Error opening file " << argv[2] << endl;
return 1;
}
ostream.open(argv[3], ifstream::out);
if (!ostream.is_open()) {
cout << "Error opening file " << argv[3] << endl;
return 1;
}
while (getline(istream, line)) {
input.push_back(line);
}
for (linenum=0; (size_t)linenum < input.size(); linenum++) {
line = input[linenum];
/* typedefs are detected and included even if uncommented */
if (line.substr(0,7) == "typedef" && line.find("{")!=string::npos) {
lines = get_lines(input, linenum, "}");
sout << "<pre><b>";
for (l=0; l<lines.size(); l++) {
print_line(sout, lines[l]);
}
sout << "</b></pre><BR>" << endl;
continue;
}
/* comments of type /**< and /*!< are detected and only function declaration is highlighted (bold) */
if ((line.find("/**<")!=string::npos || line.find("/*!<")!=string::npos) && line.find("*/")!=string::npos) {
sout << "<pre><b>";
print_line(sout, line);
sout << "</b></pre><BR>" << endl;
continue;
}
spos = line.find("/**=");
if (spos==string::npos) {
spos = line.find("/*!");
if (spos==string::npos)
spos = line.find("/**");
if (spos==string::npos)
spos = line.find("/*-");
if (spos==string::npos)
spos = line.find("/*=");
if (spos==string::npos)
continue;
exclam = line[spos+2];
}
else exclam = '=';
comments = get_lines(input, linenum, "*/");
if (!comments.empty()) comments[0] = line.substr(spos+3);
if (!comments.empty()) comments[comments.size()-1] = comments[comments.size()-1].substr(0, comments[comments.size()-1].find("*/"));
for (l=0; l<comments.size(); l++) {
if (comments[l].find(" *")==0) comments[l] = comments[l].substr(2);
else if (comments[l].find("  *")==0) comments[l] = comments[l].substr(3);
trim(comments[l], "*-=");
}
while (!comments.empty() && comments[comments.size()-1].empty()) comments.pop_back(); // remove empty line at the end
while (!comments.empty() && comments[0].empty()) comments.erase(comments.begin()); // remove empty line at the start
/* comments of type /*! mean: this is a function declaration; switch comments with declarations */
if (exclam == '!') {
if (!comments.empty()) comments.erase(comments.begin()); /* remove first line like "LZ4_XXX() :" */
linenum++;
lines = get_lines(input, linenum, "");
sout << "<pre><b>";
for (l=0; l<lines.size(); l++) {
// fprintf(stderr, "line[%d]=%s\n", l, lines[l].c_str());
print_line(sout, lines[l]);
}
sout << "</b><p>";
for (l=0; l<comments.size(); l++) {
print_line(sout, comments[l]);
}
sout << "</p></pre><BR>" << endl << endl;
} else if (exclam == '=') { /* comments of type /*= and /**= mean: use a <H3> header and show also all functions until first empty line */
trim(comments[0], " ");
sout << "<h3>" << comments[0] << "</h3><pre>";
for (l=1; l<comments.size(); l++) {
print_line(sout, comments[l]);
}
sout << "</pre><b><pre>";
lines = get_lines(input, ++linenum, "");
for (l=0; l<lines.size(); l++) {
print_line(sout, lines[l]);
}
sout << "</pre></b><BR>" << endl;
} else { /* comments of type /** and /*- mean: this is a comment; use a <H2> header for the first line */
if (comments.empty()) continue;
trim(comments[0], " ");
sout << "<a name=\"Chapter" << chapter << "\"></a><h2>" << comments[0] << "</h2><pre>";
chapters.push_back(comments[0]);
chapter++;
for (l=1; l<comments.size(); l++) {
print_line(sout, comments[l]);
}
if (comments.size() > 1)
sout << "<BR></pre>" << endl << endl;
else
sout << "</pre>" << endl << endl;
}
}
ostream << "<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=ISO-8859-1\">\n<title>" << version << "</title>\n</head>\n<body>" << endl;
ostream << "<h1>" << version << "</h1>\n";
ostream << "<hr>\n<a name=\"Contents\"></a><h2>Contents</h2>\n<ol>\n";
for (size_t i=0; i<chapters.size(); i++)
ostream << "<li><a href=\"#Chapter" << i+1 << "\">" << chapters[i].c_str() << "</a></li>\n";
ostream << "</ol>\n<hr>\n";
ostream << sout.str();
ostream << "</html>" << endl << "</body>" << endl;
return 0;
}

View File

@@ -1,135 +0,0 @@
LZ4 Block Format Description
============================
Last revised: 2015-05-07.
Author : Yann Collet
This specification is intended for developers
willing to produce LZ4-compatible compressed data blocks
using any programming language.
LZ4 is an LZ77-type compressor with a fixed, byte-oriented encoding.
There is no entropy encoder back-end nor framing layer.
The latter is assumed to be handled by other parts of the system (see [LZ4 Frame format]).
This design is assumed to favor simplicity and speed.
It helps later on for optimizations, compactness, and features.
This document describes only the block format,
not how the compressor nor decompressor actually work.
The correctness of the decompressor should not depend
on implementation details of the compressor, and vice versa.
[LZ4 Frame format]: lz4_Frame_format.md
Compressed block format
-----------------------
An LZ4 compressed block is composed of sequences.
A sequence is a suite of literals (not-compressed bytes),
followed by a match copy.
Each sequence starts with a token.
The token is a one-byte value, separated into two 4-bit fields.
Therefore each field ranges from 0 to 15.
The first field uses the 4 high-bits of the token.
It provides the length of literals to follow.
If the field value is 0, then there is no literal.
If it is 15, then we need to add some more bytes to indicate the full length.
Each additional byte then represents a value from 0 to 255,
which is added to the previous value to produce a total length.
When the byte value is 255, another byte is output.
There can be any number of bytes following the token. There is no "size limit".
(Side note : this is why a not-compressible input block is expanded by 0.4%).
Example 1 : A length of 48 will be represented as :
- 15 : value for the 4-bits High field
- 33 : (=48-15) remaining length to reach 48
Example 2 : A length of 280 will be represented as :
- 15 : value for the 4-bits High field
- 255 : following byte is maxed, since 280-15 >= 255
- 10 : (=280 - 15 - 255) remaining length to reach 280
Example 3 : A length of 15 will be represented as :
- 15 : value for the 4-bits High field
- 0 : (=15-15) yes, the zero must be output
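As an informal illustration (not part of the specification), a minimal C sketch of this length decoding; bounds checks are omitted:
```
#include <stddef.h>
#include <stdint.h>

/* `nibble` is the 4-bit length field from the token; `*ip` points at the
   first optional length byte and is advanced past all of them. */
static size_t read_length(unsigned nibble, const uint8_t** ip)
{
    size_t length = nibble;
    if (nibble == 15) {            /* field is maxed out: more bytes follow */
        uint8_t b;
        do {
            b = *(*ip)++;          /* each extra byte adds 0..255 */
            length += b;
        } while (b == 255);        /* 255 means yet another byte follows */
    }
    return length;
}
```
The same scheme is reused below for the match length, starting from the low 4 bits of the token.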
Following the token and optional length bytes, are the literals themselves.
They are exactly as numerous as previously decoded (length of literals).
It's possible that there are zero literals.
Following the literals is the match copy operation.
It starts by the offset.
This is a 2-byte value, in little-endian format
(the 1st byte is the "low" byte, the 2nd one is the "high" byte).
The offset represents the position of the match to be copied from.
1 means "current position - 1 byte".
The maximum offset value is 65535, 65536 cannot be coded.
Note that 0 is an invalid value, not used.
Then we need to extract the match length.
For this, we use the second token field, the low 4-bits.
Value, obviously, ranges from 0 to 15.
However here, 0 means that the copy operation will be minimal.
The minimum length of a match, called minmatch, is 4.
As a consequence, a 0 value means 4 bytes, and a value of 15 means 19+ bytes.
Similar to literal length, on reaching the highest possible value (15),
we output additional bytes, one at a time, with values ranging from 0 to 255.
They are added to the total to provide the final match length.
A 255 value means there is another byte to read and add.
There is no limit to the number of optional bytes that can be output this way.
(This points towards a maximum achievable compression ratio of about 250).
Decoding the matchlength reaches the end of current sequence.
Next byte will be the start of another sequence.
But before moving to next sequence,
it's time to use the decoded match position and length.
The decoder copies matchlength bytes from match position to current position.
In some cases, matchlength is larger than offset.
Therefore, match pos + match length > current pos,
which means that later bytes to copy are not yet decoded.
This is called an "overlap match", and must be handled with special care.
The most common case is an offset of 1,
meaning the last byte is repeated matchlength times.
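As an informal illustration (not part of the specification), a byte-by-byte match copy handles the overlap case naturally:
```
#include <stddef.h>
#include <stdint.h>

/* Copy `matchlength` bytes from `offset` bytes back in the already-decoded
   output. With offset == 1, this repeats the last byte matchlength times.
   Bounds checks are omitted. */
static void copy_match(uint8_t** op, size_t offset, size_t matchlength)
{
    uint8_t* out = *op;
    const uint8_t* match = out - offset;   /* earlier, already-decoded output */
    while (matchlength--)
        *out++ = *match++;
    *op = out;
}
```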
Parsing restrictions
-----------------------
There are specific parsing rules to respect in order to remain compatible
with assumptions made by the decoder :
1. The last 5 bytes are always literals
2. The last match must start at least 12 bytes before end of block.
Consequently, a block with less than 13 bytes cannot be compressed.
These rules are in place to ensure that the decoder
will never read beyond the input buffer, nor write beyond the output buffer.
Note that the last sequence is also incomplete,
and stops right after literals.
Additional notes
-----------------------
There is no assumption nor limits to the way the compressor
searches and selects matches within the source data block.
It could be a fast scan, a multi-probe, a full search using BST,
standard hash chains, MMC, or anything else.
Advanced parsing strategies can also be implemented, such as lazy match,
or full optimal parsing.
All these trade-offs offer distinct speed/memory/compression advantages.
Whatever the method used by the compressor, its result will be decodable
by any LZ4 decoder if it follows the format specification described above.

View File

@@ -1,411 +0,0 @@
LZ4 Frame Format Description
============================
### Notices
Copyright (c) 2013-2015 Yann Collet
Permission is granted to copy and distribute this document
for any purpose and without charge,
including translations into other languages
and incorporation into compilations,
provided that the copyright notice and this notice are preserved,
and that any substantive changes or deletions from the original
are clearly marked.
Distribution of this document is unlimited.
### Version
1.6.0 (08/08/2017)
Introduction
------------
The purpose of this document is to define a lossless compressed data format,
that is independent of CPU type, operating system,
file system and character set, suitable for
File compression, Pipe and streaming compression
using the [LZ4 algorithm](http://www.lz4.org).
The data can be produced or consumed,
even for an arbitrarily long sequentially presented input data stream,
using only an a priori bounded amount of intermediate storage,
and hence can be used in data communications.
The format uses the LZ4 compression method,
and optional [xxHash-32 checksum method](https://github.com/Cyan4973/xxHash),
for detection of data corruption.
The data format defined by this specification
does not attempt to allow random access to compressed data.
This specification is intended for use by implementers of software
to compress data into LZ4 format and/or decompress data from LZ4 format.
The text of the specification assumes a basic background in programming
at the level of bits and other primitive data representations.
Unless otherwise indicated below,
a compliant compressor must produce data sets
that conform to the specifications presented here.
It doesn't need to support all options though.
A compliant decompressor must be able to decompress
at least one working set of parameters
that conforms to the specifications presented here.
It may also ignore checksums.
Whenever it does not support a specific parameter within the compressed stream,
it must produce a non-ambiguous error code
and associated error message explaining which parameter is unsupported.
General Structure of LZ4 Frame format
-------------------------------------
| MagicNb | F. Descriptor | Block | (...) | EndMark | C. Checksum |
|:-------:|:-------------:| ----- | ----- | ------- | ----------- |
| 4 bytes | 3-15 bytes | | | 4 bytes | 0-4 bytes |
__Magic Number__
4 Bytes, Little endian format.
Value : 0x184D2204
__Frame Descriptor__
3 to 15 Bytes, to be detailed in the next part.
Most important part of the spec.
__Data Blocks__
To be detailed later on.
That's where compressed data is stored.
__EndMark__
The flow of blocks ends when the last data block has a size of “0”.
The size is expressed as a 32-bits value.
__Content Checksum__
The Content Checksum verifies that the full content has been decoded correctly.
The content checksum is the result
of [xxh32() hash function](https://github.com/Cyan4973/xxHash)
digesting the original (decoded) data as input, and a seed of zero.
Content checksum is only present when its associated flag
is set in the frame descriptor.
Content Checksum validates the result,
that all blocks were fully transmitted in the correct order and without error,
and also that the encoding/decoding process itself generated no distortion.
Its usage is recommended.
__Frame Concatenation__
In some circumstances, it may be preferable to append multiple frames,
for example in order to add new data to an existing compressed file
without re-framing it.
In such case, each frame has its own set of descriptor flags.
Each frame is considered independent.
The only relation between frames is their sequential order.
The ability to decode multiple concatenated frames
within a single stream or file
is left outside of this specification.
As an example, the reference lz4 command line utility behavior is
to decode all concatenated frames in their sequential order.
Frame Descriptor
----------------
| FLG | BD | (Content Size) | (Dictionary ID) | HC |
| ------- | ------- |:--------------:|:---------------:| ------- |
| 1 byte | 1 byte | 0 - 8 bytes | 0 - 4 bytes | 1 byte |
The descriptor uses a minimum of 3 bytes,
and up to 15 bytes depending on optional parameters.
__FLG byte__
| BitNb | 7-6 | 5 | 4 | 3 | 2 | 1 | 0 |
| ------- |-------|-------|----------|------|----------|----------|------|
|FieldName|Version|B.Indep|B.Checksum|C.Size|C.Checksum|*Reserved*|DictID|
__BD byte__
| BitNb | 7 | 6-5-4 | 3-2-1-0 |
| ------- | -------- | ------------- | -------- |
|FieldName|*Reserved*| Block MaxSize |*Reserved*|
In the tables, bit 7 is highest bit, while bit 0 is lowest.
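For illustration only, a small C sketch extracting these fields according to the tables above (field names are arbitrary; validation of reserved bits and version is left out):
```
#include <stdint.h>

typedef struct {
    unsigned version;           /* FLG bits 7-6, must be 01 */
    unsigned blockIndependence; /* FLG bit 5 */
    unsigned blockChecksum;     /* FLG bit 4 */
    unsigned contentSize;       /* FLG bit 3 */
    unsigned contentChecksum;   /* FLG bit 2; FLG bit 1 is reserved */
    unsigned dictID;            /* FLG bit 0 */
    unsigned blockMaxID;        /* BD bits 6-5-4 : 4..7 => 64 KB..4 MB */
} frame_descriptor_flags;

static frame_descriptor_flags parse_flags(uint8_t flg, uint8_t bd)
{
    frame_descriptor_flags f;
    f.version           = (flg >> 6) & 0x03;
    f.blockIndependence = (flg >> 5) & 0x01;
    f.blockChecksum     = (flg >> 4) & 0x01;
    f.contentSize       = (flg >> 3) & 0x01;
    f.contentChecksum   = (flg >> 2) & 0x01;
    f.dictID            = (flg >> 0) & 0x01;
    f.blockMaxID        = (bd  >> 4) & 0x07;
    return f;
}
```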
__Version Number__
2-bits field, must be set to `01`.
Any other value cannot be decoded by this version of the specification.
Other version numbers will use different flag layouts.
__Block Independence flag__
If this flag is set to “1”, blocks are independent.
If this flag is set to “0”, each block depends on previous ones
(up to LZ4 window size, which is 64 KB).
In such case, it's necessary to decode all blocks in sequence.
Block dependency improves compression ratio, especially for small blocks.
On the other hand, it makes random access or multi-threaded decoding impossible.
__Block checksum flag__
If this flag is set, each data block will be followed by a 4-bytes checksum,
calculated by using the xxHash-32 algorithm on the raw (compressed) data block.
The intention is to detect data corruption (storage or transmission errors)
immediately, before decoding.
Block checksum usage is optional.
__Content Size flag__
If this flag is set, the uncompressed size of data included within the frame
will be present as an 8 bytes unsigned little endian value, after the flags.
Content Size usage is optional.
__Content checksum flag__
If this flag is set, a 32-bits content checksum will be appended
after the EndMark.
__Dictionary ID flag__
If this flag is set, a 4-bytes Dict-ID field will be present,
after the descriptor flags and the Content Size.
__Block Maximum Size__
This information is useful to help the decoder allocate memory.
Size here refers to the original (uncompressed) data size.
Block Maximum Size is one value among the following table :
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | ----- | ------ | ---- | ---- |
| N/A | N/A | N/A | N/A | 64 KB | 256 KB | 1 MB | 4 MB |
The decoder may refuse to allocate block sizes above any system-specific size.
Unused values may be used in a future revision of the spec.
A decoder conformant with the current version of the spec
is only able to decode block sizes defined in this spec.
__Reserved bits__
Value of reserved bits **must** be 0 (zero).
Reserved bits might be used in a future version of the specification,
typically enabling new optional features.
When this happens, a decoder respecting the current specification version
shall not be able to decode such a frame.
__Content Size__
This is the original (uncompressed) size.
This information is optional, and only present if the associated flag is set.
Content size is provided using unsigned 8 Bytes, for a maximum of 16 Exabytes.
Format is Little endian.
This value is informational, typically for display or memory allocation.
It can be skipped by a decoder, or used to validate content correctness.
__Dictionary ID__
Dict-ID is only present if the associated flag is set.
It's an unsigned 32-bits value, stored using little-endian convention.
A dictionary is useful to compress short input sequences.
The compressor can take advantage of the dictionary context
to encode the input in a more compact manner.
It works as a kind of “known prefix” which is used by
both the compressor and the decompressor to “warm-up” reference tables.
The decompressor can use Dict-ID identifier to determine
which dictionary must be used to correctly decode data.
The compressor and the decompressor must use exactly the same dictionary.
It's presumed that the 32-bits dictID uniquely identifies a dictionary.
Within a single frame, a single dictionary can be defined.
When the frame descriptor defines independent blocks,
each block will be initialized with the same dictionary.
If the frame descriptor defines linked blocks,
the dictionary will only be used once, at the beginning of the frame.
__Header Checksum__
One-byte checksum of combined descriptor fields, including optional ones.
The value is the second byte of `xxh32()` : ` (xxh32()>>8) & 0xFF `
using zero as a seed, and the full Frame Descriptor as an input
(including optional fields when they are present).
A wrong checksum indicates an error in the descriptor.
Header checksum is informational and can be skipped.
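An informal sketch of this computation, assuming xxHash's `XXH32()` is available:
```
#include <stddef.h>
#include <stdint.h>
#include "xxhash.h"   /* XXH32() */

/* HC byte over the descriptor bytes: FLG, BD, and any optional
   Content Size / Dict-ID fields. */
static uint8_t descriptor_checksum(const void* descriptor, size_t length)
{
    return (uint8_t)((XXH32(descriptor, length, 0) >> 8) & 0xFF);
}
```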
Data Blocks
-----------
| Block Size | data | (Block Checksum) |
|:----------:| ------ |:----------------:|
| 4 bytes | | 0 - 4 bytes |
__Block Size__
This field uses 4-bytes, format is little-endian.
The highest bit is “1” if data in the block is uncompressed.
The highest bit is “0” if data in the block is compressed by LZ4.
All other bits give the size, in bytes, of the following data block
(the size does not include the block checksum if present).
Block Size shall never be larger than Block Maximum Size.
Compressed data could exceed this limit when the source data is incompressible;
in that case, the data block shall be passed in uncompressed format instead.
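For illustration (not part of the specification), a small C sketch reading this field:
```
#include <stdint.h>

/* Block Size : 4 bytes, little-endian. Highest bit set => data stored
   uncompressed; the remaining 31 bits give the data size in bytes.
   A raw value of 0 is the EndMark. */
static void read_block_size(const uint8_t* p, uint32_t* dataSize, int* isUncompressed)
{
    uint32_t raw = (uint32_t)p[0] | ((uint32_t)p[1] << 8)
                 | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    *isUncompressed = (raw & 0x80000000U) != 0;
    *dataSize       =  raw & 0x7FFFFFFFU;
}
```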
__Data__
Where the actual data to decode stands.
It might be compressed or not, depending on previous field indications.
Uncompressed size of Data can be any size, up to “block maximum size”.
Note that data block is not necessarily full :
an arbitrary “flush” may happen anytime. Any block can be “partially filled”.
__Block checksum__
Only present if the associated flag is set.
This is a 4-bytes checksum value, in little endian format,
calculated by using the xxHash-32 algorithm on the raw (undecoded) data block,
and a seed of zero.
The intention is to detect data corruption (storage or transmission errors)
before decoding.
Block checksum is cumulative with Content checksum.
Skippable Frames
----------------
| Magic Number | Frame Size | User Data |
|:------------:|:----------:| --------- |
| 4 bytes | 4 bytes | |
Skippable frames allow the integration of user-defined data
into a flow of concatenated frames.
Its design is pretty straightforward,
with the sole objective to allow the decoder to quickly skip
over user-defined data and continue decoding.
For the purpose of facilitating identification,
it is discouraged to start a flow of concatenated frames with a skippable frame.
If there is a need to start such a flow with some user data
encapsulated into a skippable frame,
it's recommended to start with a zero-byte LZ4 frame
followed by a skippable frame.
This will make it easier for file type identifiers.
__Magic Number__
4 Bytes, Little endian format.
Value : 0x184D2A5X, which means any value from 0x184D2A50 to 0x184D2A5F.
All 16 values are valid to identify a skippable frame.
__Frame Size__
This is the size, in bytes, of the following User Data
(without including the magic number nor the size field itself).
4 Bytes, Little endian format, unsigned 32-bits.
This means User Data can't be bigger than (2^32-1) Bytes.
__User Data__
User Data can be anything. Data will just be skipped by the decoder.
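An informal C sketch of skipping such a frame (the helper name is arbitrary; end-of-input checks are omitted):
```
#include <stdint.h>

static const uint8_t* skip_skippable_frame(const uint8_t* p)
{
    uint32_t magic = (uint32_t)p[0] | ((uint32_t)p[1] << 8)
                   | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    if ((magic & 0xFFFFFFF0U) != 0x184D2A50U)
        return p;                               /* not a skippable frame */
    {
        uint32_t userDataSize = (uint32_t)p[4] | ((uint32_t)p[5] << 8)
                              | ((uint32_t)p[6] << 16) | ((uint32_t)p[7] << 24);
        return p + 8 + userDataSize;            /* magic + frame size + user data */
    }
}
```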
Legacy frame
------------
The Legacy frame format was defined in the initial versions of “LZ4Demo”.
Newer compressors should not use this format anymore, as it is too restrictive.
Main characteristics of the legacy format :
- Fixed block size : 8 MB.
- All blocks must be completely filled, except the last one.
- All blocks are always compressed, even when compression is detrimental.
- The last block is detected either because
it is followed by the “EOF” (End of File) mark,
or because it is followed by a known Frame Magic Number.
- No checksum
- Convention is Little endian
| MagicNb | B.CSize | CData | B.CSize | CData | (...) | EndMark |
| ------- | ------- | ----- | ------- | ----- | ------- | ------- |
| 4 bytes | 4 bytes | CSize | 4 bytes | CSize | x times | EOF |
__Magic Number__
4 Bytes, Little endian format.
Value : 0x184C2102
__Block Compressed Size__
This is the size, in bytes, of the following compressed data block.
4 Bytes, Little endian format.
__Data__
Where the actual compressed data stands.
Data is always compressed, even when compression is detrimental.
__EndMark__
End of legacy frame is implicit only.
It must be followed by a standard EOF (End Of File) signal,
whether it is a file or a stream.
Alternatively, if the frame is followed by a valid Frame Magic Number,
it is considered completed.
This policy makes it possible to concatenate legacy frames.
Any other value will be interpreted as a block size,
and trigger an error if it does not fit within acceptable range.
Version changes
---------------
1.6.0 : restored Dictionary ID field in Frame header
1.5.1 : changed document format to MarkDown
1.5 : removed Dictionary ID from specification
1.4.1 : changed wording from “stream” to “frame”
1.4 : added skippable streams, re-added stream checksum
1.3 : modified header checksum
1.2 : reduced choice of “block size”, to postpone decision on “dynamic size of BlockSize Field”.
1.1 : optional fields are now part of the descriptor
1.0 : changed “block size” specification, adding a compressed/uncompressed flag
0.9 : reduced scale of “block maximum size” table
0.8 : removed : high compression flag
0.7 : removed : stream checksum
0.6 : settled : stream size uses 8 bytes, endian convention is little endian
0.5: added copyright notice
0.4 : changed format to Google Doc compatible OpenDocument

View File

@@ -1,325 +0,0 @@
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>1.8.0 Manual</title>
</head>
<body>
<h1>1.8.0 Manual</h1>
<hr>
<a name="Contents"></a><h2>Contents</h2>
<ol>
<li><a href="#Chapter1">Introduction</a></li>
<li><a href="#Chapter2">Version</a></li>
<li><a href="#Chapter3">Tuning parameter</a></li>
<li><a href="#Chapter4">Simple Functions</a></li>
<li><a href="#Chapter5">Advanced Functions</a></li>
<li><a href="#Chapter6">Streaming Compression Functions</a></li>
<li><a href="#Chapter7">Streaming Decompression Functions</a></li>
<li><a href="#Chapter8">Private definitions</a></li>
<li><a href="#Chapter9">Obsolete Functions</a></li>
</ol>
<hr>
<a name="Chapter1"></a><h2>Introduction</h2><pre>
LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core,
scalable with multi-cores CPU. It features an extremely fast decoder, with speed in
multiple GB/s per core, typically reaching RAM speed limits on multi-core systems.
The LZ4 compression library provides in-memory compression and decompression functions.
Compression can be done in:
- a single step (described as Simple Functions)
- a single step, reusing a context (described in Advanced Functions)
- unbounded multiple steps (described as Streaming compression)
lz4.h provides block compression functions. It gives full buffer control to user.
Decompressing an lz4-compressed block also requires metadata (such as compressed size).
Each application is free to encode such metadata in whichever way it wants.
An additional format, called LZ4 frame specification (doc/lz4_Frame_format.md),
takes care of encoding standard metadata alongside LZ4-compressed blocks.
If your application requires interoperability, it's recommended to use it.
A library is provided to take care of it, see lz4frame.h.
<BR></pre>
<a name="Chapter2"></a><h2>Version</h2><pre></pre>
<pre><b>int LZ4_versionNumber (void); </b>/**< library version number; to be used when checking dll version */<b>
</b></pre><BR>
<pre><b>const char* LZ4_versionString (void); </b>/**< library version string; to be used when checking dll version */<b>
</b></pre><BR>
<a name="Chapter3"></a><h2>Tuning parameter</h2><pre></pre>
<pre><b>#ifndef LZ4_MEMORY_USAGE
# define LZ4_MEMORY_USAGE 14
#endif
</b><p> Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
Increasing memory usage improves compression ratio
Reduced memory usage can improve speed, due to cache effect
Default value is 14, for 16KB, which nicely fits into Intel x86 L1 cache
</p></pre><BR>
<a name="Chapter4"></a><h2>Simple Functions</h2><pre></pre>
<pre><b>int LZ4_compress_default(const char* source, char* dest, int sourceSize, int maxDestSize);
</b><p> Compresses 'sourceSize' bytes from buffer 'source'
into already allocated 'dest' buffer of size 'maxDestSize'.
Compression is guaranteed to succeed if 'maxDestSize' >= LZ4_compressBound(sourceSize).
It also runs faster, so it's a recommended setting.
If the function cannot compress 'source' into a more limited 'dest' budget,
compression stops *immediately*, and the function result is zero.
As a consequence, 'dest' content is not valid.
This function never writes outside 'dest' buffer, nor reads outside 'source' buffer.
sourceSize : Max supported value is LZ4_MAX_INPUT_VALUE
maxDestSize : full or partial size of buffer 'dest' (which must be already allocated)
return : the number of bytes written into buffer 'dest' (necessarily <= maxOutputSize)
or 0 if compression fails
</p></pre><BR>
<pre><b>int LZ4_decompress_safe (const char* source, char* dest, int compressedSize, int maxDecompressedSize);
</b><p> compressedSize : is the precise full size of the compressed block.
maxDecompressedSize : is the size of destination buffer, which must be already allocated.
return : the number of bytes decompressed into destination buffer (necessarily <= maxDecompressedSize)
If destination buffer is not large enough, decoding will stop and output an error code (<0).
If the source stream is detected malformed, the function will stop decoding and return a negative result.
This function is protected against buffer overflow exploits, including malicious data packets.
It never writes outside output buffer, nor reads outside input buffer.
</p></pre><BR>
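A minimal round-trip sketch using the two simple functions above (illustrative only; it assumes `lz4.h` and liblz4 are available and reduces error handling to asserts):
```
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include "lz4.h"

int main(void)
{
    const char src[] = "LZ4 simple functions round-trip example.";
    const int  srcSize = (int)sizeof(src);

    int   bound      = LZ4_compressBound(srcSize);
    char* compressed = malloc((size_t)bound);
    char* restored   = malloc((size_t)srcSize);
    assert(compressed && restored);

    int cSize = LZ4_compress_default(src, compressed, srcSize, bound);
    assert(cSize > 0);                    /* 0 means compression failed */

    int dSize = LZ4_decompress_safe(compressed, restored, cSize, srcSize);
    assert(dSize == srcSize);             /* negative means malformed input */
    assert(memcmp(src, restored, (size_t)srcSize) == 0);

    free(compressed);
    free(restored);
    return 0;
}
```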
<a name="Chapter5"></a><h2>Advanced Functions</h2><pre></pre>
<pre><b>int LZ4_compressBound(int inputSize);
</b><p> Provides the maximum size that LZ4 compression may output in a "worst case" scenario (input data not compressible)
This function is primarily useful for memory allocation purposes (destination buffer size).
Macro LZ4_COMPRESSBOUND() is also provided for compilation-time evaluation (stack memory allocation for example).
Note that LZ4_compress_default() compresses faster when dest buffer size is >= LZ4_compressBound(srcSize)
inputSize : max supported value is LZ4_MAX_INPUT_SIZE
return : maximum output size in a "worst case" scenario
or 0, if input size is too large ( > LZ4_MAX_INPUT_SIZE)
</p></pre><BR>
<pre><b>int LZ4_compress_fast (const char* source, char* dest, int sourceSize, int maxDestSize, int acceleration);
</b><p> Same as LZ4_compress_default(), but allows to select an "acceleration" factor.
The larger the acceleration value, the faster the algorithm, but also the lesser the compression.
It's a trade-off. It can be fine tuned, with each successive value providing roughly +~3% to speed.
An acceleration value of "1" is the same as regular LZ4_compress_default()
Values <= 0 will be replaced by ACCELERATION_DEFAULT (see lz4.c), which is 1.
</p></pre><BR>
<pre><b>int LZ4_sizeofState(void);
int LZ4_compress_fast_extState (void* state, const char* source, char* dest, int inputSize, int maxDestSize, int acceleration);
</b><p> Same compression function, just using an externally allocated memory space to store compression state.
Use LZ4_sizeofState() to know how much memory must be allocated,
and allocate it on 8-bytes boundaries (using malloc() typically).
Then, provide it as 'void* state' to compression function.
</p></pre><BR>
<pre><b>int LZ4_compress_destSize (const char* source, char* dest, int* sourceSizePtr, int targetDestSize);
</b><p> Reverse the logic, by compressing as much data as possible from 'source' buffer
into already allocated buffer 'dest' of size 'targetDestSize'.
This function either compresses the entire 'source' content into 'dest' if it's large enough,
or fill 'dest' buffer completely with as much data as possible from 'source'.
*sourceSizePtr : will be modified to indicate how many bytes were read from 'source' to fill 'dest'.
New value is necessarily <= old value.
return : Nb bytes written into 'dest' (necessarily <= targetDestSize)
or 0 if compression fails
</p></pre><BR>
<pre><b>int LZ4_decompress_fast (const char* source, char* dest, int originalSize);
</b><p> originalSize : is the original and therefore uncompressed size
return : the number of bytes read from the source buffer (in other words, the compressed size)
If the source stream is detected malformed, the function will stop decoding and return a negative result.
Destination buffer must be already allocated. Its size must be a minimum of 'originalSize' bytes.
note : This function fully respects memory boundaries for properly formed compressed data.
It is a bit faster than LZ4_decompress_safe().
However, it does not provide any protection against intentionally modified data stream (malicious input).
Use this function in trusted environment only (data to decode comes from a trusted source).
</p></pre><BR>
<pre><b>int LZ4_decompress_safe_partial (const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize);
</b><p> This function decompress a compressed block of size 'compressedSize' at position 'source'
into destination buffer 'dest' of size 'maxDecompressedSize'.
The function tries to stop decompressing operation as soon as 'targetOutputSize' has been reached,
reducing decompression time.
return : the number of bytes decoded in the destination buffer (necessarily <= maxDecompressedSize)
Note : this number can be < 'targetOutputSize' should the compressed block to decode be smaller.
Always control how many bytes were decoded.
If the source stream is detected malformed, the function will stop decoding and return a negative result.
This function never writes outside of output buffer, and never reads outside of input buffer. It is therefore protected against malicious data packets
</p></pre><BR>
<a name="Chapter6"></a><h2>Streaming Compression Functions</h2><pre></pre>
<pre><b>LZ4_stream_t* LZ4_createStream(void);
int LZ4_freeStream (LZ4_stream_t* streamPtr);
</b><p> LZ4_createStream() will allocate and initialize an `LZ4_stream_t` structure.
LZ4_freeStream() releases its memory.
</p></pre><BR>
<pre><b>void LZ4_resetStream (LZ4_stream_t* streamPtr);
</b><p> An LZ4_stream_t structure can be allocated once and re-used multiple times.
Use this function to init an allocated `LZ4_stream_t` structure and start a new compression.
</p></pre><BR>
<pre><b>int LZ4_loadDict (LZ4_stream_t* streamPtr, const char* dictionary, int dictSize);
</b><p> Use this function to load a static dictionary into LZ4_stream.
Any previous data will be forgotten, only 'dictionary' will remain in memory.
Loading a size of 0 is allowed.
Return : dictionary size, in bytes (necessarily <= 64 KB)
</p></pre><BR>
<pre><b>int LZ4_compress_fast_continue (LZ4_stream_t* streamPtr, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration);
</b><p> Compress buffer content 'src', using data from previously compressed blocks as dictionary to improve compression ratio.
Important : Previous data blocks are assumed to remain present and unmodified !
'dst' buffer must be already allocated.
If dstCapacity >= LZ4_compressBound(srcSize), compression is guaranteed to succeed, and runs faster.
If not, and if compressed data cannot fit into 'dst' buffer size, compression stops, and function @return==0.
After an error, the stream status is invalid, it can only be reset or freed.
</p></pre><BR>
<pre><b>int LZ4_saveDict (LZ4_stream_t* streamPtr, char* safeBuffer, int dictSize);
</b><p> If previously compressed data block is not guaranteed to remain available at its current memory location,
save it into a safer place (char* safeBuffer).
Note : it's not necessary to call LZ4_loadDict() after LZ4_saveDict(), dictionary is immediately usable.
@return : saved dictionary size in bytes (necessarily <= dictSize), or 0 if error.
</p></pre><BR>
<a name="Chapter7"></a><h2>Streaming Decompression Functions</h2><pre> Bufferless synchronous API
<BR></pre>
<pre><b>LZ4_streamDecode_t* LZ4_createStreamDecode(void);
int LZ4_freeStreamDecode (LZ4_streamDecode_t* LZ4_stream);
</b><p> creation / destruction of streaming decompression tracking structure
</p></pre><BR>
<pre><b>int LZ4_setStreamDecode (LZ4_streamDecode_t* LZ4_streamDecode, const char* dictionary, int dictSize);
</b><p> Use this function to instruct where to find the dictionary.
Setting a size of 0 is allowed (same effect as reset).
@return : 1 if OK, 0 if error
</p></pre><BR>
<pre><b>int LZ4_decompress_safe_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int compressedSize, int maxDecompressedSize);
int LZ4_decompress_fast_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int originalSize);
</b><p> These decoding functions allow decompression of multiple blocks in "streaming" mode.
Previously decoded blocks *must* remain available at the memory position where they were decoded (up to 64 KB)
In the case of ring buffers, the decoding buffer must be either :
- Exactly same size as encoding buffer, with same update rule (block boundaries at same positions)
In which case, the decoding & encoding ring buffer can have any size, including very small ones ( < 64 KB).
- Larger than encoding buffer, by a minimum of maxBlockSize more bytes.
maxBlockSize is implementation dependent. It's the maximum size you intend to compress into a single block.
In which case, encoding and decoding buffers do not need to be synchronized,
and encoding ring buffer can have any size, including small ones ( < 64 KB).
- _At least_ 64 KB + 8 bytes + maxBlockSize.
In which case, encoding and decoding buffers do not need to be synchronized,
and encoding ring buffer can have any size, including larger than decoding buffer.
Whenever these conditions are not possible, save the last 64KB of decoded data into a safe buffer,
and indicate where it is saved using LZ4_setStreamDecode()
</p></pre><BR>
<pre><b>int LZ4_decompress_safe_usingDict (const char* source, char* dest, int compressedSize, int maxDecompressedSize, const char* dictStart, int dictSize);
int LZ4_decompress_fast_usingDict (const char* source, char* dest, int originalSize, const char* dictStart, int dictSize);
</b><p> These decoding functions work the same as
a combination of LZ4_setStreamDecode() followed by LZ4_decompress_*_continue()
They are stand-alone, and don't need an LZ4_streamDecode_t structure.
</p></pre><BR>
<a name="Chapter8"></a><h2>Private definitions</h2><pre>
Do not use these definitions.
They are exposed to allow static allocation of `LZ4_stream_t` and `LZ4_streamDecode_t`.
Using these definitions will expose code to API and/or ABI break in future versions of the library.
<BR></pre>
<pre><b>typedef struct {
uint32_t hashTable[LZ4_HASH_SIZE_U32];
uint32_t currentOffset;
uint32_t initCheck;
const uint8_t* dictionary;
uint8_t* bufferStart; </b>/* obsolete, used for slideInputBuffer */<b>
uint32_t dictSize;
} LZ4_stream_t_internal;
</b></pre><BR>
<pre><b>typedef struct {
const uint8_t* externalDict;
size_t extDictSize;
const uint8_t* prefixEnd;
size_t prefixSize;
} LZ4_streamDecode_t_internal;
</b></pre><BR>
<pre><b>typedef struct {
unsigned int hashTable[LZ4_HASH_SIZE_U32];
unsigned int currentOffset;
unsigned int initCheck;
const unsigned char* dictionary;
unsigned char* bufferStart; </b>/* obsolete, used for slideInputBuffer */<b>
unsigned int dictSize;
} LZ4_stream_t_internal;
</b></pre><BR>
<pre><b>typedef struct {
const unsigned char* externalDict;
size_t extDictSize;
const unsigned char* prefixEnd;
size_t prefixSize;
} LZ4_streamDecode_t_internal;
</b></pre><BR>
<pre><b>#define LZ4_STREAMSIZE_U64 ((1 << (LZ4_MEMORY_USAGE-3)) + 4)
#define LZ4_STREAMSIZE (LZ4_STREAMSIZE_U64 * sizeof(unsigned long long))
union LZ4_stream_u {
unsigned long long table[LZ4_STREAMSIZE_U64];
LZ4_stream_t_internal internal_donotuse;
} ; </b>/* previously typedef'd to LZ4_stream_t */<b>
</b><p> information structure to track an LZ4 stream.
init this structure before first use.
note : only use in association with static linking !
this definition is not API/ABI safe,
it may change in a future version !
</p></pre><BR>
<pre><b>#define LZ4_STREAMDECODESIZE_U64 4
#define LZ4_STREAMDECODESIZE (LZ4_STREAMDECODESIZE_U64 * sizeof(unsigned long long))
union LZ4_streamDecode_u {
unsigned long long table[LZ4_STREAMDECODESIZE_U64];
LZ4_streamDecode_t_internal internal_donotuse;
} ; </b>/* previously typedef'd to LZ4_streamDecode_t */<b>
</b><p> information structure to track an LZ4 stream during decompression.
init this structure using LZ4_setStreamDecode (or memset()) before first use
note : only use in association with static linking !
this definition is not API/ABI safe,
and may change in a future version !
</p></pre><BR>
<a name="Chapter9"></a><h2>Obsolete Functions</h2><pre></pre>
<pre><b>#ifdef LZ4_DISABLE_DEPRECATE_WARNINGS
# define LZ4_DEPRECATED(message) </b>/* disable deprecation warnings */<b>
#else
# define LZ4_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
# if defined (__cplusplus) && (__cplusplus >= 201402) </b>/* C++14 or greater */<b>
# define LZ4_DEPRECATED(message) [[deprecated(message)]]
# elif (LZ4_GCC_VERSION >= 405) || defined(__clang__)
# define LZ4_DEPRECATED(message) __attribute__((deprecated(message)))
# elif (LZ4_GCC_VERSION >= 301)
# define LZ4_DEPRECATED(message) __attribute__((deprecated))
# elif defined(_MSC_VER)
# define LZ4_DEPRECATED(message) __declspec(deprecated(message))
# else
# pragma message("WARNING: You need to implement LZ4_DEPRECATED for this compiler")
# define LZ4_DEPRECATED(message)
# endif
#endif </b>/* LZ4_DISABLE_DEPRECATE_WARNINGS */<b>
</b><p> Should deprecation warnings be a problem,
it is generally possible to disable them,
typically with -Wno-deprecated-declarations for gcc
or _CRT_SECURE_NO_WARNINGS in Visual.
Otherwise, it's also possible to define LZ4_DISABLE_DEPRECATE_WARNINGS
</p></pre><BR>
</body>
</html>

View File

@@ -1,277 +0,0 @@
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>1.8.0 Manual</title>
</head>
<body>
<h1>1.8.0 Manual</h1>
<hr>
<a name="Contents"></a><h2>Contents</h2>
<ol>
<li><a href="#Chapter1">Introduction</a></li>
<li><a href="#Chapter2">Compiler specifics</a></li>
<li><a href="#Chapter3">Error management</a></li>
<li><a href="#Chapter4">Frame compression types</a></li>
<li><a href="#Chapter5">Simple compression function</a></li>
<li><a href="#Chapter6">Advanced compression functions</a></li>
<li><a href="#Chapter7">Resource Management</a></li>
<li><a href="#Chapter8">Compression</a></li>
<li><a href="#Chapter9">Decompression functions</a></li>
<li><a href="#Chapter10">Streaming decompression functions</a></li>
</ol>
<hr>
<a name="Chapter1"></a><h2>Introduction</h2><pre>
lz4frame.h implements LZ4 frame specification (doc/lz4_Frame_format.md).
lz4frame.h provides frame compression functions that take care
of encoding standard metadata alongside LZ4-compressed blocks.
<BR></pre>
<a name="Chapter2"></a><h2>Compiler specifics</h2><pre></pre>
<a name="Chapter3"></a><h2>Error management</h2><pre></pre>
<pre><b>unsigned LZ4F_isError(LZ4F_errorCode_t code); </b>/**< tells if a `LZ4F_errorCode_t` function result is an error code */<b>
</b></pre><BR>
<pre><b>const char* LZ4F_getErrorName(LZ4F_errorCode_t code); </b>/**< return error code string; useful for debugging */<b>
</b></pre><BR>
<a name="Chapter4"></a><h2>Frame compression types</h2><pre></pre>
<pre><b>typedef enum {
LZ4F_default=0,
LZ4F_max64KB=4,
LZ4F_max256KB=5,
LZ4F_max1MB=6,
LZ4F_max4MB=7
LZ4F_OBSOLETE_ENUM(max64KB)
LZ4F_OBSOLETE_ENUM(max256KB)
LZ4F_OBSOLETE_ENUM(max1MB)
LZ4F_OBSOLETE_ENUM(max4MB)
} LZ4F_blockSizeID_t;
</b></pre><BR>
<pre><b>typedef enum {
LZ4F_blockLinked=0,
LZ4F_blockIndependent
LZ4F_OBSOLETE_ENUM(blockLinked)
LZ4F_OBSOLETE_ENUM(blockIndependent)
} LZ4F_blockMode_t;
</b></pre><BR>
<pre><b>typedef enum {
LZ4F_noContentChecksum=0,
LZ4F_contentChecksumEnabled
LZ4F_OBSOLETE_ENUM(noContentChecksum)
LZ4F_OBSOLETE_ENUM(contentChecksumEnabled)
} LZ4F_contentChecksum_t;
</b></pre><BR>
<pre><b>typedef enum {
LZ4F_noBlockChecksum=0,
LZ4F_blockChecksumEnabled
} LZ4F_blockChecksum_t;
</b></pre><BR>
<pre><b>typedef enum {
LZ4F_frame=0,
LZ4F_skippableFrame
LZ4F_OBSOLETE_ENUM(skippableFrame)
} LZ4F_frameType_t;
</b></pre><BR>
<pre><b>typedef struct {
LZ4F_blockSizeID_t blockSizeID; </b>/* max64KB, max256KB, max1MB, max4MB ; 0 == default */<b>
LZ4F_blockMode_t blockMode; </b>/* LZ4F_blockLinked, LZ4F_blockIndependent ; 0 == default */<b>
LZ4F_contentChecksum_t contentChecksumFlag; </b>/* if enabled, frame is terminated with a 32-bits checksum of decompressed data ; 0 == disabled (default) */<b>
LZ4F_frameType_t frameType; </b>/* read-only field : LZ4F_frame or LZ4F_skippableFrame */<b>
unsigned long long contentSize; </b>/* Size of uncompressed content ; 0 == unknown */<b>
unsigned dictID; </b>/* Dictionary ID, sent by the compressor to help decoder select the correct dictionary; 0 == no dictID provided */<b>
LZ4F_blockChecksum_t blockChecksumFlag; </b>/* if enabled, each block is followed by a checksum of block's compressed data ; 0 == disabled (default) */<b>
} LZ4F_frameInfo_t;
</b><p> makes it possible to set or read frame parameters.
It's not required to set all fields, as long as the structure was initially memset() to zero.
For all fields, 0 sets it to default value
</p></pre><BR>
<pre><b>typedef struct {
LZ4F_frameInfo_t frameInfo;
int compressionLevel; </b>/* 0 == default (fast mode); values above LZ4HC_CLEVEL_MAX count as LZ4HC_CLEVEL_MAX; values below 0 trigger "fast acceleration", proportional to value */<b>
unsigned autoFlush; </b>/* 1 == always flush, to reduce usage of internal buffers */<b>
unsigned reserved[4]; </b>/* must be zero for forward compatibility */<b>
} LZ4F_preferences_t;
</b><p> makes it possible to supply detailed compression parameters to the stream interface.
It's not required to set all fields, as long as the structure was initially memset() to zero.
All reserved fields must be set to zero.
</p></pre><BR>
<a name="Chapter5"></a><h2>Simple compression function</h2><pre></pre>
<pre><b>size_t LZ4F_compressFrameBound(size_t srcSize, const LZ4F_preferences_t* preferencesPtr);
</b><p> Returns the maximum possible size of a frame compressed with LZ4F_compressFrame() given srcSize content and preferences.
Note : this result is only usable with LZ4F_compressFrame(), not with multi-segments compression.
</p></pre><BR>
<pre><b>size_t LZ4F_compressFrame(void* dstBuffer, size_t dstCapacity,
const void* srcBuffer, size_t srcSize,
const LZ4F_preferences_t* preferencesPtr);
</b><p> Compress an entire srcBuffer into a valid LZ4 frame.
dstCapacity MUST be >= LZ4F_compressFrameBound(srcSize, preferencesPtr).
The LZ4F_preferences_t structure is optional : you can provide NULL as argument. All preferences will be set to default.
@return : number of bytes written into dstBuffer.
or an error code if it fails (can be tested using LZ4F_isError())
</p></pre><BR>
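A minimal one-shot usage sketch (illustrative only; it assumes `lz4frame.h` and the library are available):
```
#include <assert.h>
#include <stdlib.h>
#include "lz4frame.h"

int main(void)
{
    const char   src[]   = "one-shot frame compression example";
    size_t const srcSize = sizeof(src);

    size_t const bound = LZ4F_compressFrameBound(srcSize, NULL); /* NULL => default preferences */
    void*  const dst   = malloc(bound);
    assert(dst != NULL);

    size_t const frameSize = LZ4F_compressFrame(dst, bound, src, srcSize, NULL);
    assert(!LZ4F_isError(frameSize));     /* on success: size of the complete LZ4 frame */

    free(dst);
    return 0;
}
```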
<a name="Chapter6"></a><h2>Advanced compression functions</h2><pre></pre>
<pre><b>typedef struct {
unsigned stableSrc; </b>/* 1 == src content will remain present on future calls to LZ4F_compress(); skip copying src content within tmp buffer */<b>
unsigned reserved[3];
} LZ4F_compressOptions_t;
</b></pre><BR>
<a name="Chapter7"></a><h2>Resource Management</h2><pre></pre>
<pre><b>LZ4F_errorCode_t LZ4F_createCompressionContext(LZ4F_cctx** cctxPtr, unsigned version);
LZ4F_errorCode_t LZ4F_freeCompressionContext(LZ4F_cctx* cctx);
</b><p> The first thing to do is to create a compressionContext object, which will be used in all compression operations.
This is achieved using LZ4F_createCompressionContext(), which takes as argument a version.
The version provided MUST be LZ4F_VERSION. It is intended to track potential version mismatch, notably when using DLL.
The function will provide a pointer to a fully allocated LZ4F_cctx object.
If @return != zero, there was an error during context creation.
Object can release its memory using LZ4F_freeCompressionContext();
</p></pre><BR>
<a name="Chapter8"></a><h2>Compression</h2><pre></pre>
<pre><b>size_t LZ4F_compressBegin(LZ4F_cctx* cctx,
void* dstBuffer, size_t dstCapacity,
const LZ4F_preferences_t* prefsPtr);
</b><p> will write the frame header into dstBuffer.
dstCapacity must be >= LZ4F_HEADER_SIZE_MAX bytes.
`prefsPtr` is optional : you can provide NULL as argument, all preferences will then be set to default.
@return : number of bytes written into dstBuffer for the header
or an error code (which can be tested using LZ4F_isError())
</p></pre><BR>
<pre><b>size_t LZ4F_compressBound(size_t srcSize, const LZ4F_preferences_t* prefsPtr);
</b><p> Provides dstCapacity given a srcSize to guarantee operation success in worst case situations.
prefsPtr is optional : you can provide NULL as argument, preferences will be set to cover worst case scenario.
The result is always the same for a given srcSize and prefsPtr, so it can be trusted to size reusable buffers.
When srcSize==0, LZ4F_compressBound() provides an upper bound for LZ4F_flush() and LZ4F_compressEnd() operations.
</p></pre><BR>
<pre><b>size_t LZ4F_compressUpdate(LZ4F_cctx* cctx, void* dstBuffer, size_t dstCapacity, const void* srcBuffer, size_t srcSize, const LZ4F_compressOptions_t* cOptPtr);
</b><p> LZ4F_compressUpdate() can be called repetitively to compress as much data as necessary.
An important rule is that dstCapacity MUST be large enough to ensure operation success even in worst case situations.
This value is provided by LZ4F_compressBound().
If this condition is not respected, LZ4F_compressUpdate() will fail (result is an errorCode).
LZ4F_compressUpdate() doesn't guarantee error recovery. When an error occurs, compression context must be freed or resized.
`cOptPtr` is optional : NULL can be provided, in which case all options are set to default.
@return : number of bytes written into `dstBuffer` (it can be zero, meaning input data was just buffered).
or an error code if it fails (which can be tested using LZ4F_isError())
</p></pre><BR>
<pre><b>size_t LZ4F_flush(LZ4F_cctx* cctx, void* dstBuffer, size_t dstCapacity, const LZ4F_compressOptions_t* cOptPtr);
</b><p> When data must be generated and sent immediately, without waiting for a block to be completely filled,
it's possible to call LZ4F_flush(). It will immediately compress any data buffered within cctx.
`dstCapacity` must be large enough to ensure the operation will be successful.
`cOptPtr` is optional : it's possible to provide NULL, all options will be set to default.
@return : number of bytes written into dstBuffer (it can be zero, which means there was no data stored within cctx)
or an error code if it fails (which can be tested using LZ4F_isError())
</p></pre><BR>
<pre><b>size_t LZ4F_compressEnd(LZ4F_cctx* cctx, void* dstBuffer, size_t dstCapacity, const LZ4F_compressOptions_t* cOptPtr);
</b><p> To properly finish an LZ4 frame, invoke LZ4F_compressEnd().
It will flush whatever data remained within `cctx` (like LZ4F_flush())
and properly finalize the frame, with an endMark and a checksum.
`cOptPtr` is optional : NULL can be provided, in which case all options will be set to default.
@return : number of bytes written into dstBuffer (necessarily >= 4 (endMark), or 8 if optional frame checksum is enabled)
or an error code if it fails (which can be tested using LZ4F_isError())
A successful call to LZ4F_compressEnd() makes `cctx` available again for another compression task.
</p></pre><BR>
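<p> The sketch below is not part of the original manual ; the FILE*-based I/O, chunk size and helper name are assumptions made for illustration. It strings together the begin / update / end calls described in this chapter :
</p><pre>#include "lz4frame.h"
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE (64 * 1024)   /* arbitrary read granularity for this sketch */

/* Compress `in` into `out` as a single LZ4 frame. Returns 0 on success, 1 on error. */
static int compress_file(FILE* in, FILE* out)
{
    int result = 1;
    LZ4F_cctx* cctx = NULL;
    size_t const outCapacity = LZ4F_compressBound(CHUNK_SIZE, NULL);   /* worst case for one chunk */
    char* const inBuf  = (char*) malloc(CHUNK_SIZE);
    char* const outBuf = (char*) malloc(outCapacity);

    if (inBuf && outBuf
        && !LZ4F_isError(LZ4F_createCompressionContext(&cctx, LZ4F_VERSION))) {
        /* frame header first ; outCapacity >= LZ4F_HEADER_SIZE_MAX here */
        size_t n = LZ4F_compressBegin(cctx, outBuf, outCapacity, NULL);
        while (!LZ4F_isError(n)) {
            fwrite(outBuf, 1, n, out);
            {   size_t const readSize = fread(inBuf, 1, CHUNK_SIZE, in);
                if (readSize == 0) {
                    /* flush whatever is still buffered, then write the endMark (+ optional checksum) */
                    n = LZ4F_compressEnd(cctx, outBuf, outCapacity, NULL);
                    if (!LZ4F_isError(n)) { fwrite(outBuf, 1, n, out); result = 0; }
                    break;
                }
                n = LZ4F_compressUpdate(cctx, outBuf, outCapacity, inBuf, readSize, NULL);
            }
        }
    }
    if (cctx) LZ4F_freeCompressionContext(cctx);
    free(outBuf);
    free(inBuf);
    return result;
}
</pre><BR>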
<a name="Chapter9"></a><h2>Decompression functions</h2><pre></pre>
<pre><b>typedef struct {
unsigned stableDst; </b>/* pledge that at least 64KB+64Bytes of previously decompressed data remain unmodified where they were decoded. This optimization skips storage operations in tmp buffers */<b>
unsigned reserved[3]; </b>/* must be set to zero for forward compatibility */<b>
} LZ4F_decompressOptions_t;
</b></pre><BR>
<pre><b>LZ4F_errorCode_t LZ4F_createDecompressionContext(LZ4F_dctx** dctxPtr, unsigned version);
LZ4F_errorCode_t LZ4F_freeDecompressionContext(LZ4F_dctx* dctx);
</b><p> Create an LZ4F_dctx object, to track all decompression operations.
The version provided MUST be LZ4F_VERSION.
The function provides a pointer to an allocated and initialized LZ4F_dctx object.
The result is an errorCode, which can be tested using LZ4F_isError().
dctx memory can be released using LZ4F_freeDecompressionContext();
The result of LZ4F_freeDecompressionContext() is indicative of the current state of decompressionContext when being released.
That is, it should be == 0 if decompression has been completed fully and correctly.
</p></pre><BR>
<a name="Chapter10"></a><h2>Streaming decompression functions</h2><pre></pre>
<pre><b>size_t LZ4F_getFrameInfo(LZ4F_dctx* dctx,
LZ4F_frameInfo_t* frameInfoPtr,
const void* srcBuffer, size_t* srcSizePtr);
</b><p> This function extracts frame parameters (max blockSize, dictID, etc.).
Its usage is optional.
Extracted information is typically useful for buffer allocation and dictionary selection.
This function works in 2 situations :
- At the beginning of a new frame, in which case
it will decode information from `srcBuffer`, starting the decoding process.
Input size must be large enough to successfully decode the entire frame header.
Frame header size is variable, but is guaranteed to be <= LZ4F_HEADER_SIZE_MAX bytes.
It's allowed to provide more input data than this minimum.
- After decoding has been started.
In which case, no input is read, frame parameters are extracted from dctx.
- If decoding has barely started, but not yet extracted information from header,
LZ4F_getFrameInfo() will fail.
The number of bytes consumed from srcBuffer will be updated within *srcSizePtr (necessarily <= original value).
Decompression must resume from (srcBuffer + *srcSizePtr).
@return : a hint about how many srcSize bytes LZ4F_decompress() expects for the next call,
or an error code which can be tested using LZ4F_isError().
note 1 : in case of error, dctx is not modified. Decoding operation can resume from beginning safely.
note 2 : frame parameters are *copied into* an already allocated LZ4F_frameInfo_t structure.
</p></pre><BR>
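<p> A small sketch, not part of the original manual (the wrapper name is hypothetical), showing the in/out role of *srcSizePtr when peeking at a frame header :
</p><pre>#include "lz4frame.h"

/* Read frame parameters from the first `srcSize` bytes of a frame.
   On success, *consumedPtr reports how many header bytes were used ;
   decompression must then continue from src + *consumedPtr. Returns 0 on success, 1 on error. */
static int read_frame_header(LZ4F_dctx* dctx, const void* src, size_t srcSize,
                             LZ4F_frameInfo_t* infoPtr, size_t* consumedPtr)
{
    *consumedPtr = srcSize;   /* in : bytes available ; out : bytes actually consumed */
    {   size_t const hint = LZ4F_getFrameInfo(dctx, infoPtr, src, consumedPtr);
        if (LZ4F_isError(hint)) return 1;   /* dctx is left unmodified on error */
        return 0;
    }
}
</pre><BR>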
<pre><b>size_t LZ4F_decompress(LZ4F_dctx* dctx,
void* dstBuffer, size_t* dstSizePtr,
const void* srcBuffer, size_t* srcSizePtr,
const LZ4F_decompressOptions_t* dOptPtr);
</b><p> Call this function repetitively to regenerate decompressed data from the compressed input in `srcBuffer`.
The function will attempt to decode up to *srcSizePtr bytes from srcBuffer, into dstBuffer of capacity *dstSizePtr.
The number of bytes regenerated into dstBuffer is provided within *dstSizePtr (necessarily <= original value).
The number of bytes consumed from srcBuffer is provided within *srcSizePtr (necessarily <= original value).
Number of bytes consumed can be < number of bytes provided.
It typically happens when dstBuffer is not large enough to contain all decoded data.
Unconsumed source data must be presented again in subsequent invocations.
`dstBuffer` content is expected to be flushed between each invocation, as its content will be overwritten.
`dstBuffer` itself can be changed at will between each consecutive function invocation.
@return : a hint of how many `srcSize` bytes LZ4F_decompress() expects for the next call.
Schematically, it's the size of the current (or remaining) compressed block + header of next block.
Respecting the hint provides some small speed benefit, because it skips intermediate buffers.
This is just a hint though, it's always possible to provide any srcSize.
When a frame is fully decoded, @return will be 0 (no more data expected).
If decompression failed, @return is an error code, which can be tested using LZ4F_isError().
After a frame is fully decoded, dctx can be used again to decompress another frame.
After a decompression error, use LZ4F_resetDecompressionContext() before re-using dctx, to return to clean state.
</p></pre><BR>
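<p> The loop below is an illustrative sketch, not part of the original manual ; the FILE*-based I/O, buffer sizes and function name are assumptions. It shows the bookkeeping of *srcSizePtr / *dstSizePtr and of the size hint described above, for an already created dctx :
</p><pre>#include "lz4frame.h"
#include <stdio.h>

#define IN_CHUNK  (4 * 1024)    /* arbitrary sizes for this sketch */
#define OUT_CHUNK (16 * 1024)

/* Decompress one frame read from `in` into `out`.
   Returns 0 when the frame is fully decoded, 1 on error or truncated input. */
static int decompress_file(LZ4F_dctx* dctx, FILE* in, FILE* out)
{
    char srcBuf[IN_CHUNK];
    char dstBuf[OUT_CHUNK];
    size_t ret = 1;   /* non-zero : LZ4F_decompress() still expects more input */
    while (ret != 0) {
        size_t const readSize = fread(srcBuf, 1, IN_CHUNK, in);
        size_t consumedTotal = 0;
        if (readSize == 0) return 1;   /* input ended while more data was expected */
        /* a single read may need several LZ4F_decompress() calls if dstBuf fills up */
        while (consumedTotal < readSize && ret != 0) {
            size_t dstSize = OUT_CHUNK;                   /* in : capacity ; out : bytes produced */
            size_t srcSize = readSize - consumedTotal;    /* in : available ; out : bytes consumed */
            ret = LZ4F_decompress(dctx, dstBuf, &dstSize,
                                  srcBuf + consumedTotal, &srcSize, NULL);
            if (LZ4F_isError(ret)) return 1;              /* reset dctx before re-using it */
            fwrite(dstBuf, 1, dstSize, out);
            consumedTotal += srcSize;
        }
    }
    return 0;   /* ret == 0 : frame fully decoded */
}
</pre><BR>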
<pre><b>void LZ4F_resetDecompressionContext(LZ4F_dctx* dctx); </b>/* always successful */<b>
</b><p> In case of an error, the context is left in an "undefined" state.
It must then be reset before being re-used.
This method can also be used to abruptly stop an unfinished decompression,
and start a new one using the same context.
</p></pre><BR>
</body>
</html>


@@ -1,10 +0,0 @@
/Makefile.lz4*
/printVersion
/doubleBuffer
/dictionaryRandomAccess
/ringBuffer
/ringBufferHC
/lineCompress
/frameCompress
/simpleBuffer
/*.exe


@@ -1,339 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.


@@ -1,229 +0,0 @@
// LZ4 HC streaming API example : ring buffer
// Based on previous work from Takayuki Matsuoka
/**************************************
* Compiler Options
**************************************/
#ifdef _MSC_VER /* Visual Studio */
# define _CRT_SECURE_NO_WARNINGS /* for MSVC */
# define snprintf sprintf_s
#endif
#define GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-braces" /* GCC bug 53119 : doesn't accept { 0 } as initializer (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119) */
#endif
/**************************************
* Includes
**************************************/
#include "lz4hc.h"
#include "lz4.h"
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
enum {
MESSAGE_MAX_BYTES = 1024,
RING_BUFFER_BYTES = 1024 * 8 + MESSAGE_MAX_BYTES,
DEC_BUFFER_BYTES = RING_BUFFER_BYTES + MESSAGE_MAX_BYTES // Intentionally larger to test unsynchronized ring buffers
};
size_t write_int32(FILE* fp, int32_t i) {
return fwrite(&i, sizeof(i), 1, fp);
}
size_t write_bin(FILE* fp, const void* array, int arrayBytes) {
return fwrite(array, 1, arrayBytes, fp);
}
size_t read_int32(FILE* fp, int32_t* i) {
return fread(i, sizeof(*i), 1, fp);
}
size_t read_bin(FILE* fp, void* array, int arrayBytes) {
return fread(array, 1, arrayBytes, fp);
}
void test_compress(FILE* outFp, FILE* inpFp)
{
LZ4_streamHC_t lz4Stream_body = { 0 };
LZ4_streamHC_t* lz4Stream = &lz4Stream_body;
static char inpBuf[RING_BUFFER_BYTES];
int inpOffset = 0;
for(;;) {
// Read random length ([1,MESSAGE_MAX_BYTES]) data to the ring buffer.
char* const inpPtr = &inpBuf[inpOffset];
const int randomLength = (rand() % MESSAGE_MAX_BYTES) + 1;
const int inpBytes = (int) read_bin(inpFp, inpPtr, randomLength);
if (0 == inpBytes) break;
#define CMPBUFSIZE (LZ4_COMPRESSBOUND(MESSAGE_MAX_BYTES))
{ char cmpBuf[CMPBUFSIZE];
const int cmpBytes = LZ4_compress_HC_continue(lz4Stream, inpPtr, cmpBuf, inpBytes, CMPBUFSIZE);
if(cmpBytes <= 0) break;
write_int32(outFp, cmpBytes);
write_bin(outFp, cmpBuf, cmpBytes);
inpOffset += inpBytes;
// Wraparound the ringbuffer offset
if(inpOffset >= RING_BUFFER_BYTES - MESSAGE_MAX_BYTES)
inpOffset = 0;
}
}
write_int32(outFp, 0);
}
void test_decompress(FILE* outFp, FILE* inpFp)
{
static char decBuf[DEC_BUFFER_BYTES];
int decOffset = 0;
LZ4_streamDecode_t lz4StreamDecode_body = { 0 };
LZ4_streamDecode_t* lz4StreamDecode = &lz4StreamDecode_body;
for(;;) {
int cmpBytes = 0;
char cmpBuf[CMPBUFSIZE];
{ const size_t r0 = read_int32(inpFp, &cmpBytes);
size_t r1;
if(r0 != 1 || cmpBytes <= 0)
break;
r1 = read_bin(inpFp, cmpBuf, cmpBytes);
if(r1 != (size_t) cmpBytes)
break;
}
{ char* const decPtr = &decBuf[decOffset];
const int decBytes = LZ4_decompress_safe_continue(
lz4StreamDecode, cmpBuf, decPtr, cmpBytes, MESSAGE_MAX_BYTES);
if(decBytes <= 0)
break;
decOffset += decBytes;
write_bin(outFp, decPtr, decBytes);
// Wraparound the ringbuffer offset
if(decOffset >= DEC_BUFFER_BYTES - MESSAGE_MAX_BYTES)
decOffset = 0;
}
}
}
// Compare 2 files content
// return 0 if identical
// return ByteNb>0 if different
size_t compare(FILE* f0, FILE* f1)
{
size_t result = 1;
for (;;) {
char b0[65536];
char b1[65536];
const size_t r0 = fread(b0, 1, sizeof(b0), f0);
const size_t r1 = fread(b1, 1, sizeof(b1), f1);
if ((r0==0) && (r1==0)) return 0; // success
if (r0 != r1) {
size_t smallest = r0;
if (r1<r0) smallest = r1;
result += smallest;
break;
}
if (memcmp(b0, b1, r0)) {
unsigned errorPos = 0;
while ((errorPos < r0) && (b0[errorPos]==b1[errorPos])) errorPos++;
result += errorPos;
break;
}
result += sizeof(b0);
}
return result;
}
int main(int argc, const char** argv)
{
char inpFilename[256] = { 0 };
char lz4Filename[256] = { 0 };
char decFilename[256] = { 0 };
unsigned fileID = 1;
unsigned pause = 0;
if(argc < 2) {
printf("Please specify input filename\n");
return 0;
}
if (!strcmp(argv[1], "-p")) pause = 1, fileID = 2;
snprintf(inpFilename, 256, "%s", argv[fileID]);
snprintf(lz4Filename, 256, "%s.lz4s-%d", argv[fileID], 9);
snprintf(decFilename, 256, "%s.lz4s-%d.dec", argv[fileID], 9);
printf("input = [%s]\n", inpFilename);
printf("lz4 = [%s]\n", lz4Filename);
printf("decoded = [%s]\n", decFilename);
// compress
{ FILE* const inpFp = fopen(inpFilename, "rb");
FILE* const outFp = fopen(lz4Filename, "wb");
test_compress(outFp, inpFp);
fclose(outFp);
fclose(inpFp);
}
// decompress
{ FILE* const inpFp = fopen(lz4Filename, "rb");
FILE* const outFp = fopen(decFilename, "wb");
test_decompress(outFp, inpFp);
fclose(outFp);
fclose(inpFp);
}
// verify
{ FILE* const inpFp = fopen(inpFilename, "rb");
FILE* const decFp = fopen(decFilename, "rb");
const size_t cmp = compare(inpFp, decFp);
if(0 == cmp) {
printf("Verify : OK\n");
} else {
printf("Verify : NG : error at pos %u\n", (unsigned)cmp-1);
}
fclose(decFp);
fclose(inpFp);
}
if (pause) {
int unused;
printf("Press enter to continue ...\n");
unused = getchar(); (void)unused; /* silence static analyzer */
}
return 0;
}


@@ -1,98 +0,0 @@
# ##########################################################################
# LZ4 examples - Makefile
# Copyright (C) Yann Collet 2011-2014
#
# GPL v2 License
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# You can contact the author at :
# - LZ4 source repository : https://github.com/Cyan4973/lz4
# - LZ4 forum group : https://groups.google.com/forum/#!forum/lz4c
# ##########################################################################
# This makefile compiles and tests
# example programs, using (mostly) the LZ4 streaming library,
# kindly provided by Takayuki Matsuoka
# ##########################################################################
CPPFLAGS += -I../lib
CFLAGS ?= -O3
CFLAGS += -std=gnu99 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Wstrict-prototypes
FLAGS := $(CPPFLAGS) $(CFLAGS) $(LDFLAGS)
TESTFILE= Makefile
LZ4DIR := ../lib
LZ4 = ../programs/lz4
# Define *.exe as extension for Windows systems
ifneq (,$(filter Windows%,$(OS)))
EXT =.exe
VOID = nul
else
EXT =
VOID = /dev/null
endif
default: all
all: printVersion doubleBuffer dictionaryRandomAccess ringBuffer ringBufferHC \
lineCompress frameCompress simpleBuffer
printVersion: $(LZ4DIR)/lz4.c printVersion.c
$(CC) $(FLAGS) $^ -o $@$(EXT)
doubleBuffer: $(LZ4DIR)/lz4.c blockStreaming_doubleBuffer.c
$(CC) $(FLAGS) $^ -o $@$(EXT)
dictionaryRandomAccess: $(LZ4DIR)/lz4.c dictionaryRandomAccess.c
$(CC) $(FLAGS) $^ -o $@$(EXT)
ringBuffer : $(LZ4DIR)/lz4.c blockStreaming_ringBuffer.c
$(CC) $(FLAGS) $^ -o $@$(EXT)
ringBufferHC: $(LZ4DIR)/lz4.c $(LZ4DIR)/lz4hc.c HCStreaming_ringBuffer.c
$(CC) $(FLAGS) $^ -o $@$(EXT)
lineCompress: $(LZ4DIR)/lz4.c blockStreaming_lineByLine.c
$(CC) $(FLAGS) $^ -o $@$(EXT)
frameCompress: frameCompress.c
$(CC) $(FLAGS) $^ -o $@$(EXT) $(LZ4DIR)/liblz4.a
compressFunctions: $(LZ4DIR)/lz4.c compress_functions.c
$(CC) $(FLAGS) $^ -o $@$(EXT) -lrt
simpleBuffer: $(LZ4DIR)/lz4.c simple_buffer.c
$(CC) $(FLAGS) $^ -o $@$(EXT)
test : all
./printVersion$(EXT)
./doubleBuffer$(EXT) $(TESTFILE)
./dictionaryRandomAccess$(EXT) $(TESTFILE) $(TESTFILE) 1100 1400
./ringBuffer$(EXT) $(TESTFILE)
./ringBufferHC$(EXT) $(TESTFILE)
./lineCompress$(EXT) $(TESTFILE)
./frameCompress$(EXT) $(TESTFILE)
./simpleBuffer$(EXT)
$(LZ4) -vt $(TESTFILE).lz4
clean:
@rm -f core *.o *.dec *-0 *-9 *-8192 *.lz4s *.lz4 \
printVersion$(EXT) doubleBuffer$(EXT) dictionaryRandomAccess$(EXT) \
ringBuffer$(EXT) ringBufferHC$(EXT) lineCompress$(EXT) frameCompress$(EXT) \
compressFunctions$(EXT) simpleBuffer$(EXT)
@echo Cleaning completed


@@ -1,11 +0,0 @@
# LZ4 examples
All examples are GPL-v2 licensed.
## Documents
- [Streaming API Basics](streaming_api_basics.md)
- Examples
- [Double Buffer](blockStreaming_doubleBuffer.md)
- [Line by Line Text Compression](blockStreaming_lineByLine.md)
- [Dictionary Random Access](dictionaryRandomAccess.md)


@@ -1,202 +0,0 @@
// LZ4 streaming API example : double buffer
// Copyright : Takayuki Matsuoka
#ifdef _MSC_VER /* Visual Studio */
# define _CRT_SECURE_NO_WARNINGS
# define snprintf sprintf_s
#endif
#include "lz4.h"
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
enum {
BLOCK_BYTES = 1024 * 8,
// BLOCK_BYTES = 1024 * 64,
};
size_t write_int(FILE* fp, int i) {
return fwrite(&i, sizeof(i), 1, fp);
}
size_t write_bin(FILE* fp, const void* array, size_t arrayBytes) {
return fwrite(array, 1, arrayBytes, fp);
}
size_t read_int(FILE* fp, int* i) {
return fread(i, sizeof(*i), 1, fp);
}
size_t read_bin(FILE* fp, void* array, size_t arrayBytes) {
return fread(array, 1, arrayBytes, fp);
}
void test_compress(FILE* outFp, FILE* inpFp)
{
LZ4_stream_t lz4Stream_body;
LZ4_stream_t* lz4Stream = &lz4Stream_body;
char inpBuf[2][BLOCK_BYTES];
int inpBufIndex = 0;
LZ4_resetStream(lz4Stream);
for(;;) {
char* const inpPtr = inpBuf[inpBufIndex];
const int inpBytes = (int) read_bin(inpFp, inpPtr, BLOCK_BYTES);
if(0 == inpBytes) {
break;
}
{
char cmpBuf[LZ4_COMPRESSBOUND(BLOCK_BYTES)];
const int cmpBytes = LZ4_compress_fast_continue(
lz4Stream, inpPtr, cmpBuf, inpBytes, sizeof(cmpBuf), 1);
if(cmpBytes <= 0) {
break;
}
write_int(outFp, cmpBytes);
write_bin(outFp, cmpBuf, (size_t) cmpBytes);
}
inpBufIndex = (inpBufIndex + 1) % 2;
}
write_int(outFp, 0);
}
void test_decompress(FILE* outFp, FILE* inpFp)
{
LZ4_streamDecode_t lz4StreamDecode_body;
LZ4_streamDecode_t* lz4StreamDecode = &lz4StreamDecode_body;
char decBuf[2][BLOCK_BYTES];
int decBufIndex = 0;
LZ4_setStreamDecode(lz4StreamDecode, NULL, 0);
for(;;) {
char cmpBuf[LZ4_COMPRESSBOUND(BLOCK_BYTES)];
int cmpBytes = 0;
{
const size_t readCount0 = read_int(inpFp, &cmpBytes);
if(readCount0 != 1 || cmpBytes <= 0) {
break;
}
const size_t readCount1 = read_bin(inpFp, cmpBuf, (size_t) cmpBytes);
if(readCount1 != (size_t) cmpBytes) {
break;
}
}
{
char* const decPtr = decBuf[decBufIndex];
const int decBytes = LZ4_decompress_safe_continue(
lz4StreamDecode, cmpBuf, decPtr, cmpBytes, BLOCK_BYTES);
if(decBytes <= 0) {
break;
}
write_bin(outFp, decPtr, (size_t) decBytes);
}
decBufIndex = (decBufIndex + 1) % 2;
}
}
int compare(FILE* fp0, FILE* fp1)
{
int result = 0;
while(0 == result) {
char b0[65536];
char b1[65536];
const size_t r0 = read_bin(fp0, b0, sizeof(b0));
const size_t r1 = read_bin(fp1, b1, sizeof(b1));
result = (int) r0 - (int) r1;
if(0 == r0 || 0 == r1) {
break;
}
if(0 == result) {
result = memcmp(b0, b1, r0);
}
}
return result;
}
int main(int argc, char* argv[])
{
char inpFilename[256] = { 0 };
char lz4Filename[256] = { 0 };
char decFilename[256] = { 0 };
if(argc < 2) {
printf("Please specify input filename\n");
return 0;
}
snprintf(inpFilename, 256, "%s", argv[1]);
snprintf(lz4Filename, 256, "%s.lz4s-%d", argv[1], BLOCK_BYTES);
snprintf(decFilename, 256, "%s.lz4s-%d.dec", argv[1], BLOCK_BYTES);
printf("inp = [%s]\n", inpFilename);
printf("lz4 = [%s]\n", lz4Filename);
printf("dec = [%s]\n", decFilename);
// compress
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* outFp = fopen(lz4Filename, "wb");
printf("compress : %s -> %s\n", inpFilename, lz4Filename);
test_compress(outFp, inpFp);
printf("compress : done\n");
fclose(outFp);
fclose(inpFp);
}
// decompress
{
FILE* inpFp = fopen(lz4Filename, "rb");
FILE* outFp = fopen(decFilename, "wb");
printf("decompress : %s -> %s\n", lz4Filename, decFilename);
test_decompress(outFp, inpFp);
printf("decompress : done\n");
fclose(outFp);
fclose(inpFp);
}
// verify
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* decFp = fopen(decFilename, "rb");
printf("verify : %s <-> %s\n", inpFilename, decFilename);
const int cmp = compare(inpFp, decFp);
if(0 == cmp) {
printf("verify : OK\n");
} else {
printf("verify : NG\n");
}
fclose(decFp);
fclose(inpFp);
}
return 0;
}


@@ -1,100 +0,0 @@
# LZ4 Streaming API Example : Double Buffer
by *Takayuki Matsuoka*
`blockStreaming_doubleBuffer.c` is an LZ4 Streaming API example which implements double-buffer (de)compression.
Please note :
- Firstly, read "LZ4 Streaming API Basics".
- This is a relatively advanced application example.
- The output file is not compatible with lz4frame and is platform dependent.
## What's the point of this example ?
- Handle a huge file in a small amount of memory
- Always a better compression ratio than the Block API
- Uniform block size
## How the compression works
First of all, allocate a "Double Buffer" for input and an LZ4 compressed data buffer for output.
The double buffer has two pages : the "first" page (Page#1) and the "second" page (Page#2).
```
Double Buffer
Page#1 Page#2
+---------+---------+
| Block#1 | |
+----+----+---------+
|
v
{Out#1}
Prefix Dependency
+---------+
| |
v |
+---------+----+----+
| Block#1 | Block#2 |
+---------+----+----+
|
v
{Out#2}
External Dictionary Mode
+---------+
| |
| v
+----+----+---------+
| Block#3 | Block#2 |
+----+----+---------+
|
v
{Out#3}
Prefix Dependency
+---------+
| |
v |
+---------+----+----+
| Block#3 | Block#4 |
+---------+----+----+
|
v
{Out#4}
```
Next, read the first block into the double buffer's first page, and compress it with `LZ4_compress_fast_continue()`.
The first time around, LZ4 doesn't know any previous dependencies,
so it just compresses the block without dependencies and writes the compressed block {Out#1} to the LZ4 compressed data buffer.
After that, write {Out#1} to the file.
Next, read the second block into the double buffer's second page, and compress it.
This time, LZ4 can use its dependency on Block#1 to improve the compression ratio.
This dependency is called "Prefix mode".
Next, read the third block into the double buffer's *first* page, and compress it.
This time too, LZ4 can use a dependency, now on Block#2.
This dependency is called "External Dictionary mode".
Continue this procedure to the end of the file.
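The following is a condensed, in-memory sketch of that compression loop (not part of the original example) : `BLOCK_BYTES`, the synthetic `memset()` input and the `printf()` reporting are illustrative stand-ins for the file handling in `blockStreaming_doubleBuffer.c` above.
```
// Alternating-page compression, condensed from the example above (synthetic input, no file I/O).
#include "lz4.h"
#include <stdio.h>
#include <string.h>

#define BLOCK_BYTES (1024 * 8)

int main(void)
{
    static char inpBuf[2][BLOCK_BYTES];          // the "Double Buffer": Page#1 and Page#2
    char cmpBuf[LZ4_COMPRESSBOUND(BLOCK_BYTES)];
    LZ4_stream_t lz4Stream;
    LZ4_resetStream(&lz4Stream);

    for (int block = 0; block < 4; block++) {
        char* const inpPtr = inpBuf[block % 2];                 // alternate pages
        memset(inpPtr, 'A' + block, BLOCK_BYTES);               // stand-in for real input
        const int cmpBytes = LZ4_compress_fast_continue(
            &lz4Stream, inpPtr, cmpBuf, BLOCK_BYTES, (int) sizeof(cmpBuf), 1);
        // From Block#2 onward, LZ4 can reference the previous page (prefix / external dictionary mode).
        printf("Block#%d : %d -> %d bytes\n", block + 1, BLOCK_BYTES, cmpBytes);
    }
    return 0;
}
```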
## How the decompression works
Decompression mirrors the same order.
- Read the first compressed block.
- Decompress it into the first page and write that page to the file.
- Read the second compressed block.
- Decompress it into the second page and write that page to the file.
- Read the third compressed block.
- Decompress it into the *first* page and write that page to the file.
Continue this procedure to the end of the compressed file.


@@ -1,211 +0,0 @@
// LZ4 streaming API example : line-by-line logfile compression
// Copyright : Takayuki Matsuoka
#ifdef _MSC_VER /* Visual Studio */
# define _CRT_SECURE_NO_WARNINGS
# define snprintf sprintf_s
#endif
#include "lz4.h"
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
static size_t write_uint16(FILE* fp, uint16_t i)
{
return fwrite(&i, sizeof(i), 1, fp);
}
static size_t write_bin(FILE* fp, const void* array, int arrayBytes)
{
return fwrite(array, 1, arrayBytes, fp);
}
static size_t read_uint16(FILE* fp, uint16_t* i)
{
return fread(i, sizeof(*i), 1, fp);
}
static size_t read_bin(FILE* fp, void* array, int arrayBytes)
{
return fread(array, 1, arrayBytes, fp);
}
static void test_compress(
FILE* outFp,
FILE* inpFp,
size_t messageMaxBytes,
size_t ringBufferBytes)
{
LZ4_stream_t* const lz4Stream = LZ4_createStream();
const size_t cmpBufBytes = LZ4_COMPRESSBOUND(messageMaxBytes);
char* const cmpBuf = (char*) malloc(cmpBufBytes);
char* const inpBuf = (char*) malloc(ringBufferBytes);
int inpOffset = 0;
for ( ; ; )
{
char* const inpPtr = &inpBuf[inpOffset];
#if 0
// Read random length data to the ring buffer.
const int randomLength = (rand() % messageMaxBytes) + 1;
const int inpBytes = (int) read_bin(inpFp, inpPtr, randomLength);
if (0 == inpBytes) break;
#else
// Read line to the ring buffer.
int inpBytes = 0;
if (!fgets(inpPtr, (int) messageMaxBytes, inpFp))
break;
inpBytes = (int) strlen(inpPtr);
#endif
{
const int cmpBytes = LZ4_compress_fast_continue(
lz4Stream, inpPtr, cmpBuf, inpBytes, cmpBufBytes, 1);
if (cmpBytes <= 0) break;
write_uint16(outFp, (uint16_t) cmpBytes);
write_bin(outFp, cmpBuf, cmpBytes);
// Add and wraparound the ringbuffer offset
inpOffset += inpBytes;
if ((size_t)inpOffset >= ringBufferBytes - messageMaxBytes) inpOffset = 0;
}
}
write_uint16(outFp, 0);
free(inpBuf);
free(cmpBuf);
LZ4_freeStream(lz4Stream);
}
static void test_decompress(
FILE* outFp,
FILE* inpFp,
size_t messageMaxBytes,
size_t ringBufferBytes)
{
LZ4_streamDecode_t* const lz4StreamDecode = LZ4_createStreamDecode();
char* const cmpBuf = (char*) malloc(LZ4_COMPRESSBOUND(messageMaxBytes));
char* const decBuf = (char*) malloc(ringBufferBytes);
int decOffset = 0;
for ( ; ; )
{
uint16_t cmpBytes = 0;
if (read_uint16(inpFp, &cmpBytes) != 1) break;
if (cmpBytes <= 0) break;
if (read_bin(inpFp, cmpBuf, cmpBytes) != cmpBytes) break;
{
char* const decPtr = &decBuf[decOffset];
const int decBytes = LZ4_decompress_safe_continue(
lz4StreamDecode, cmpBuf, decPtr, cmpBytes, (int) messageMaxBytes);
if (decBytes <= 0) break;
write_bin(outFp, decPtr, decBytes);
// Add and wraparound the ringbuffer offset
decOffset += decBytes;
if ((size_t)decOffset >= ringBufferBytes - messageMaxBytes) decOffset = 0;
}
}
free(decBuf);
free(cmpBuf);
LZ4_freeStreamDecode(lz4StreamDecode);
}
static int compare(FILE* f0, FILE* f1)
{
int result = 0;
const size_t tempBufferBytes = 65536;
char* const b0 = (char*) malloc(tempBufferBytes);
char* const b1 = (char*) malloc(tempBufferBytes);
while(0 == result)
{
const size_t r0 = fread(b0, 1, tempBufferBytes, f0);
const size_t r1 = fread(b1, 1, tempBufferBytes, f1);
result = (int) r0 - (int) r1;
if (0 == r0 || 0 == r1) break;
if (0 == result) result = memcmp(b0, b1, r0);
}
free(b1);
free(b0);
return result;
}
int main(int argc, char* argv[])
{
enum {
MESSAGE_MAX_BYTES = 1024,
RING_BUFFER_BYTES = 1024 * 256 + MESSAGE_MAX_BYTES,
};
char inpFilename[256] = { 0 };
char lz4Filename[256] = { 0 };
char decFilename[256] = { 0 };
if (argc < 2)
{
printf("Please specify input filename\n");
return 0;
}
snprintf(inpFilename, 256, "%s", argv[1]);
snprintf(lz4Filename, 256, "%s.lz4s", argv[1]);
snprintf(decFilename, 256, "%s.lz4s.dec", argv[1]);
printf("inp = [%s]\n", inpFilename);
printf("lz4 = [%s]\n", lz4Filename);
printf("dec = [%s]\n", decFilename);
// compress
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* outFp = fopen(lz4Filename, "wb");
test_compress(outFp, inpFp, MESSAGE_MAX_BYTES, RING_BUFFER_BYTES);
fclose(outFp);
fclose(inpFp);
}
// decompress
{
FILE* inpFp = fopen(lz4Filename, "rb");
FILE* outFp = fopen(decFilename, "wb");
test_decompress(outFp, inpFp, MESSAGE_MAX_BYTES, RING_BUFFER_BYTES);
fclose(outFp);
fclose(inpFp);
}
// verify
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* decFp = fopen(decFilename, "rb");
const int cmp = compare(inpFp, decFp);
if (0 == cmp)
printf("Verify : OK\n");
else
printf("Verify : NG\n");
fclose(decFp);
fclose(inpFp);
}
return 0;
}


@@ -1,122 +0,0 @@
# LZ4 Streaming API Example : Line by Line Text Compression
by *Takayuki Matsuoka*
`blockStreaming_lineByLine.c` is an LZ4 Streaming API example which implements line-by-line incremental (de)compression.
Please note the following restrictions :
- Firstly, read "LZ4 Streaming API Basics".
- This is a relatively advanced application example.
- The output file is not compatible with lz4frame and is platform dependent.
## What's the point of this example ?
- Line-by-line incremental (de)compression.
- Handle a huge file in a small amount of memory
- Generally a better compression ratio than the Block API
- Non-uniform block size
## How the compression works
First of all, allocate a "Ring Buffer" for input and an LZ4 compressed data buffer for output.
```
(1)
Ring Buffer
+--------+
| Line#1 |
+---+----+
|
v
{Out#1}
(2)
Prefix Mode Dependency
+----+
| |
v |
+--------+-+------+
| Line#1 | Line#2 |
+--------+---+----+
|
v
{Out#2}
(3)
Prefix Prefix
+----+ +----+
| | | |
v | v |
+--------+-+------+-+------+
| Line#1 | Line#2 | Line#3 |
+--------+--------+---+----+
|
v
{Out#3}
(4)
External Dictionary Mode
+----+ +----+
| | | |
v | v |
------+--------+-+------+-+--------+
| .... | Line#X | Line#X+1 |
------+--------+--------+-----+----+
^ |
| v
| {Out#X+1}
|
Reset
(5)
Prefix
+-----+
| |
v |
------+--------+--------+----------+--+-------+
| .... | Line#X | Line#X+1 | Line#X+2 |
------+--------+--------+----------+-----+----+
^ |
| v
| {Out#X+2}
|
Reset
```
Next (see (1)), read the first line into the ring buffer and compress it with `LZ4_compress_fast_continue()`.
The first time around, LZ4 doesn't know any previous dependencies,
so it just compresses the line without dependencies and writes the compressed line {Out#1} to the LZ4 compressed data buffer.
After that, write {Out#1} to the file and advance the ring buffer offset.
Do the same thing for the second line (see (2)).
This time, LZ4 can use its dependency on Line#1 to improve the compression ratio.
This dependency is called "Prefix mode".
Eventually, we'll reach the end of the ring buffer at Line#X (see (4)).
At this point, we reset the ring buffer offset.
After resetting, the pointer for Line#X+1 is no longer adjacent, but LZ4 still maintains its history.
This is called "External Dictionary Mode".
At Line#X+2 (see (5)), LZ4 has finally forgotten almost all of its history, but Line#X+1 still remains.
This is the same situation as Line#2.
Continue this procedure to the end of the text file.
## How the decompression works
Decompression mirrors the same order ; a condensed sketch follows the list below.
- Read a compressed line from the file into the buffer.
- Decompress it into the ring buffer.
- Output the decompressed plain-text line to the file.
- Advance the ring buffer offset. If the offset exceeds the end of the ring buffer, reset it.
Continue this procedure to the end of the compressed file.
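The function below is a condensed sketch of that loop, following `test_decompress()` in `blockStreaming_lineByLine.c` above ; it assumes the same 16-bit length-prefixed record layout and replaces the small read helpers with direct `fread()`/`fwrite()` calls.
```
#include "lz4.h"
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

// Decode a stream of [uint16 length][compressed line] records produced by the example above,
// writing plain text to outFp. The ring buffer keeps recent lines around so
// LZ4_decompress_safe_continue() can resolve back-references.
static void decompress_lines(FILE* outFp, FILE* inpFp,
                             size_t messageMaxBytes, size_t ringBufferBytes)
{
    LZ4_streamDecode_t* const dec = LZ4_createStreamDecode();
    char* const cmpBuf = (char*) malloc(LZ4_COMPRESSBOUND(messageMaxBytes));
    char* const decBuf = (char*) malloc(ringBufferBytes);
    int decOffset = 0;
    for (;;) {
        uint16_t cmpBytes = 0;
        if (fread(&cmpBytes, sizeof(cmpBytes), 1, inpFp) != 1 || cmpBytes == 0) break;
        if (fread(cmpBuf, 1, cmpBytes, inpFp) != cmpBytes) break;
        {   char* const decPtr = &decBuf[decOffset];
            const int decBytes = LZ4_decompress_safe_continue(
                dec, cmpBuf, decPtr, cmpBytes, (int) messageMaxBytes);
            if (decBytes <= 0) break;
            fwrite(decPtr, 1, (size_t) decBytes, outFp);
            decOffset += decBytes;                         // advance the ring buffer offset...
            if ((size_t) decOffset >= ringBufferBytes - messageMaxBytes)
                decOffset = 0;                             // ...and reset it when it would overflow
        }
    }
    free(decBuf);
    free(cmpBuf);
    LZ4_freeStreamDecode(dec);
}
```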


@@ -1,201 +0,0 @@
// LZ4 streaming API example : ring buffer
// Based on sample code from Takayuki Matsuoka
/**************************************
* Compiler Options
**************************************/
#ifdef _MSC_VER /* Visual Studio */
# define _CRT_SECURE_NO_WARNINGS // for MSVC
# define snprintf sprintf_s
#endif
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-braces" /* GCC bug 53119 : doesn't accept { 0 } as initializer (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119) */
#endif
/**************************************
* Includes
**************************************/
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "lz4.h"
enum {
MESSAGE_MAX_BYTES = 1024,
RING_BUFFER_BYTES = 1024 * 8 + MESSAGE_MAX_BYTES,
DECODE_RING_BUFFER = RING_BUFFER_BYTES + MESSAGE_MAX_BYTES // Intentionally larger, to test unsynchronized ring buffers
};
size_t write_int32(FILE* fp, int32_t i) {
return fwrite(&i, sizeof(i), 1, fp);
}
size_t write_bin(FILE* fp, const void* array, int arrayBytes) {
return fwrite(array, 1, arrayBytes, fp);
}
size_t read_int32(FILE* fp, int32_t* i) {
return fread(i, sizeof(*i), 1, fp);
}
size_t read_bin(FILE* fp, void* array, int arrayBytes) {
return fread(array, 1, arrayBytes, fp);
}
void test_compress(FILE* outFp, FILE* inpFp)
{
LZ4_stream_t lz4Stream_body = { 0 };
LZ4_stream_t* lz4Stream = &lz4Stream_body;
static char inpBuf[RING_BUFFER_BYTES];
int inpOffset = 0;
for(;;) {
// Read random length ([1,MESSAGE_MAX_BYTES]) data to the ring buffer.
char* const inpPtr = &inpBuf[inpOffset];
const int randomLength = (rand() % MESSAGE_MAX_BYTES) + 1;
const int inpBytes = (int) read_bin(inpFp, inpPtr, randomLength);
if (0 == inpBytes) break;
{
#define CMPBUFSIZE (LZ4_COMPRESSBOUND(MESSAGE_MAX_BYTES))
char cmpBuf[CMPBUFSIZE];
const int cmpBytes = LZ4_compress_fast_continue(lz4Stream, inpPtr, cmpBuf, inpBytes, CMPBUFSIZE, 0);
if(cmpBytes <= 0) break;
write_int32(outFp, cmpBytes);
write_bin(outFp, cmpBuf, cmpBytes);
inpOffset += inpBytes;
// Wraparound the ringbuffer offset
if(inpOffset >= RING_BUFFER_BYTES - MESSAGE_MAX_BYTES) inpOffset = 0;
}
}
write_int32(outFp, 0);
}
void test_decompress(FILE* outFp, FILE* inpFp)
{
static char decBuf[DECODE_RING_BUFFER];
int decOffset = 0;
LZ4_streamDecode_t lz4StreamDecode_body = { 0 };
LZ4_streamDecode_t* lz4StreamDecode = &lz4StreamDecode_body;
for(;;) {
int cmpBytes = 0;
char cmpBuf[CMPBUFSIZE];
{
const size_t r0 = read_int32(inpFp, &cmpBytes);
if(r0 != 1 || cmpBytes <= 0) break;
const size_t r1 = read_bin(inpFp, cmpBuf, cmpBytes);
if(r1 != (size_t) cmpBytes) break;
}
{
char* const decPtr = &decBuf[decOffset];
const int decBytes = LZ4_decompress_safe_continue(
lz4StreamDecode, cmpBuf, decPtr, cmpBytes, MESSAGE_MAX_BYTES);
if(decBytes <= 0) break;
decOffset += decBytes;
write_bin(outFp, decPtr, decBytes);
// Wraparound the ringbuffer offset
if(decOffset >= DECODE_RING_BUFFER - MESSAGE_MAX_BYTES) decOffset = 0;
}
}
}
int compare(FILE* f0, FILE* f1)
{
int result = 0;
while(0 == result) {
char b0[65536];
char b1[65536];
const size_t r0 = fread(b0, 1, sizeof(b0), f0);
const size_t r1 = fread(b1, 1, sizeof(b1), f1);
result = (int) r0 - (int) r1;
if(0 == r0 || 0 == r1) {
break;
}
if(0 == result) {
result = memcmp(b0, b1, r0);
}
}
return result;
}
int main(int argc, char** argv)
{
char inpFilename[256] = { 0 };
char lz4Filename[256] = { 0 };
char decFilename[256] = { 0 };
if(argc < 2) {
printf("Please specify input filename\n");
return 0;
}
snprintf(inpFilename, 256, "%s", argv[1]);
snprintf(lz4Filename, 256, "%s.lz4s-%d", argv[1], 0);
snprintf(decFilename, 256, "%s.lz4s-%d.dec", argv[1], 0);
printf("inp = [%s]\n", inpFilename);
printf("lz4 = [%s]\n", lz4Filename);
printf("dec = [%s]\n", decFilename);
// compress
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* outFp = fopen(lz4Filename, "wb");
test_compress(outFp, inpFp);
fclose(outFp);
fclose(inpFp);
}
// decompress
{
FILE* inpFp = fopen(lz4Filename, "rb");
FILE* outFp = fopen(decFilename, "wb");
test_decompress(outFp, inpFp);
fclose(outFp);
fclose(inpFp);
}
// verify
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* decFp = fopen(decFilename, "rb");
const int cmp = compare(inpFp, decFp);
if(0 == cmp) {
printf("Verify : OK\n");
} else {
printf("Verify : NG\n");
}
fclose(decFp);
fclose(inpFp);
}
return 0;
}


@@ -1,363 +0,0 @@
/*
* compress_functions.c
* Copyright : Kyle Harper
* License : Follows same licensing as the lz4.c/lz4.h program at any given time. Currently, BSD 2.
* Description: A program to demonstrate the various compression functions involved when using LZ4_compress_default(). The idea
* is to show how each step in the call stack can be used directly, if desired. There is also some benchmarking for
* each function to demonstrate the (probably lack of) performance difference when jumping the stack.
* (If you're new to lz4, please read simple_buffer.c to understand the fundamentals)
*
* The call stack (before theoretical compiler optimizations) for LZ4_compress_default is as follows:
* LZ4_compress_default
* LZ4_compress_fast
* LZ4_compress_fast_extState
* LZ4_compress_generic
*
* LZ4_compress_default()
* This is the recommended function for compressing data. It will serve as the baseline for comparison.
* LZ4_compress_fast()
* Despite its name, it's not a "fast" version of compression. It simply decides if HEAPMODE is set and either
* allocates memory on the heap for a struct or creates the struct directly on the stack. Stack access is generally
* faster but this function itself isn't giving that advantage, it's just some logic for compile time.
* LZ4_compress_fast_extState()
* This simply accepts all the pointers and values collected thus far and adds logic to determine how
* LZ4_compress_generic should be invoked; specifically: can the source fit into a single pass as determined by
* LZ4_64Klimit.
* LZ4_compress_generic()
* As the name suggests, this is the generic function that ultimately does most of the heavy lifting. Calling this
* directly can help avoid some test cases and branching which might be useful in some implementation-specific
* situations, but you really need to know what you're doing AND what you're asking lz4 to do! You also need a
* wrapper function because this function isn't exposed with lz4.h.
*
* The call stack for decompression functions is shallow. There are 2 options:
* LZ4_decompress_safe || LZ4_decompress_fast
* LZ4_decompress_generic
*
* LZ4_decompress_safe
* This is the recommended function for decompressing data. It is considered safe because the caller specifies
* both the size of the compressed buffer to read as well as the maximum size of the output (decompressed) buffer
* instead of just the latter.
* LZ4_decompress_fast
* Again, despite its name it's not a "fast" version of decompression. It simply frees the caller from supplying the
* size of the compressed buffer (the input is simply read to the end, hence its lack of safety).
* LZ4_decompress_generic
* This is the generic function that both of the LZ4_decompress_* functions above end up calling. Calling this
* directly is not advised, period. Furthermore, it is a static inline function in lz4.c, so there isn't a symbol
* exposed for anyone using lz4.h to utilize.
*
* Special Note About Decompression:
* Using the LZ4_decompress_safe() function protects against malicious (user) input. If you are using data from a
* trusted source, or if your program is the producer (P) as well as its consumer (C) in a PC or MPMC setup, you can
* safely use the LZ4_decompress_fast function
*/
/* Since lz4 compiles with c99 and not gnu99, we need to enable the POSIX feature-test macros for time.h structs and functions. */
#if __STDC_VERSION__ >= 199901L
#define _XOPEN_SOURCE 600
#else
#define _XOPEN_SOURCE 500
#endif
#define _POSIX_C_SOURCE 199309L
/* Includes, for Power! */
#include "lz4.h"
#include <stdio.h> /* for printf() */
#include <stdlib.h> /* for exit() */
#include <string.h> /* for atoi() memcmp() */
#include <stdint.h> /* for uint_types */
#include <inttypes.h> /* for PRIu64 */
#include <time.h> /* for clock_gettime() */
#include <locale.h> /* for setlocale() */
/* We need to know what one billion is for clock timing. */
#define BILLION 1000000000L
/* Create a crude set of test IDs so we can switch on them later (Can't switch() on a char[] or char*). */
#define ID__LZ4_COMPRESS_DEFAULT 1
#define ID__LZ4_COMPRESS_FAST 2
#define ID__LZ4_COMPRESS_FAST_EXTSTATE 3
#define ID__LZ4_COMPRESS_GENERIC 4
#define ID__LZ4_DECOMPRESS_SAFE 5
#define ID__LZ4_DECOMPRESS_FAST 6
/*
* Easy show-error-and-bail function.
*/
void run_screaming(const char *message, const int code) {
printf("%s\n", message);
exit(code);
return;
}
/*
* Centralize the usage function to keep main cleaner.
*/
void usage(const char *message) {
printf("Usage: ./argPerformanceTesting <iterations>\n");
run_screaming(message, 1);
return;
}
/*
* Runs the benchmark for LZ4_compress_* based on function_id.
*/
uint64_t bench(
const char *known_good_dst,
const int function_id,
const int iterations,
const char *src,
char *dst,
const size_t src_size,
const size_t max_dst_size,
const size_t comp_size
) {
uint64_t time_taken = 0;
int rv = 0;
const int warm_up = 5000;
struct timespec start, end;
const int acceleration = 1;
LZ4_stream_t state;
// Select the right function to perform the benchmark on. We perform 5000 initial loops to warm the cache and to verify
// that dst matches known_good_dst before the timed runs.
switch(function_id) {
case ID__LZ4_COMPRESS_DEFAULT:
printf("Starting benchmark for function: LZ4_compress_default()\n");
for(int junk=0; junk<warm_up; junk++)
rv = LZ4_compress_default(src, dst, src_size, max_dst_size);
if (rv < 1)
run_screaming("Couldn't run LZ4_compress_default()... error code received is in exit code.", rv);
if (memcmp(known_good_dst, dst, max_dst_size) != 0)
run_screaming("According to memcmp(), the compressed dst we got doesn't match the known_good_dst... ruh roh.", 1);
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i=1; i<=iterations; i++)
LZ4_compress_default(src, dst, src_size, max_dst_size);
break;
case ID__LZ4_COMPRESS_FAST:
printf("Starting benchmark for function: LZ4_compress_fast()\n");
for(int junk=0; junk<warm_up; junk++)
rv = LZ4_compress_fast(src, dst, src_size, max_dst_size, acceleration);
if (rv < 1)
run_screaming("Couldn't run LZ4_compress_fast()... error code received is in exit code.", rv);
if (memcmp(known_good_dst, dst, max_dst_size) != 0)
run_screaming("According to memcmp(), the compressed dst we got doesn't match the known_good_dst... ruh roh.", 1);
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i=1; i<=iterations; i++)
LZ4_compress_fast(src, dst, src_size, max_dst_size, acceleration);
break;
case ID__LZ4_COMPRESS_FAST_EXTSTATE:
printf("Starting benchmark for function: LZ4_compress_fast_extState()\n");
for(int junk=0; junk<warm_up; junk++)
rv = LZ4_compress_fast_extState(&state, src, dst, src_size, max_dst_size, acceleration);
if (rv < 1)
run_screaming("Couldn't run LZ4_compress_fast_extState()... error code received is in exit code.", rv);
if (memcmp(known_good_dst, dst, max_dst_size) != 0)
run_screaming("According to memcmp(), the compressed dst we got doesn't match the known_good_dst... ruh roh.", 1);
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i=1; i<=iterations; i++)
LZ4_compress_fast_extState(&state, src, dst, src_size, max_dst_size, acceleration);
break;
// Disabled until LZ4_compress_generic() is exposed in the header.
// case ID__LZ4_COMPRESS_GENERIC:
// printf("Starting benchmark for function: LZ4_compress_generic()\n");
// LZ4_resetStream((LZ4_stream_t*)&state);
// for(int junk=0; junk<warm_up; junk++) {
// LZ4_resetStream((LZ4_stream_t*)&state);
// //rv = LZ4_compress_generic_wrapper(&state, src, dst, src_size, max_dst_size, notLimited, byU16, noDict, noDictIssue, acceleration);
// LZ4_compress_generic_wrapper(&state, src, dst, src_size, max_dst_size, acceleration);
// }
// if (rv < 1)
// run_screaming("Couldn't run LZ4_compress_generic()... error code received is in exit code.", rv);
// if (memcmp(known_good_dst, dst, max_dst_size) != 0)
// run_screaming("According to memcmp(), the compressed dst we got doesn't match the known_good_dst... ruh roh.", 1);
// for (int i=1; i<=iterations; i++) {
// LZ4_resetStream((LZ4_stream_t*)&state);
// //LZ4_compress_generic_wrapper(&state, src, dst, src_size, max_dst_size, notLimited, byU16, noDict, noDictIssue, acceleration);
// LZ4_compress_generic_wrapper(&state, src, dst, src_size, max_dst_size, acceleration);
// }
// break;
case ID__LZ4_DECOMPRESS_SAFE:
printf("Starting benchmark for function: LZ4_decompress_safe()\n");
for(int junk=0; junk<warm_up; junk++)
rv = LZ4_decompress_safe(src, dst, comp_size, src_size);
if (rv < 1)
run_screaming("Couldn't run LZ4_decompress_safe()... error code received is in exit code.", rv);
if (memcmp(known_good_dst, dst, src_size) != 0)
run_screaming("According to memcmp(), the compressed dst we got doesn't match the known_good_dst... ruh roh.", 1);
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i=1; i<=iterations; i++)
LZ4_decompress_safe(src, dst, comp_size, src_size);
break;
case ID__LZ4_DECOMPRESS_FAST:
printf("Starting benchmark for function: LZ4_decompress_fast()\n");
for(int junk=0; junk<warm_up; junk++)
rv = LZ4_decompress_fast(src, dst, src_size);
if (rv < 1)
run_screaming("Couldn't run LZ4_decompress_fast()... error code received is in exit code.", rv);
if (memcmp(known_good_dst, dst, src_size) != 0)
run_screaming("According to memcmp(), the compressed dst we got doesn't match the known_good_dst... ruh roh.", 1);
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i=1; i<=iterations; i++)
LZ4_decompress_fast(src, dst, src_size);
break;
default:
run_screaming("The test specified isn't valid. Please check your code.", 1);
break;
}
// Stop timer and return time taken.
clock_gettime(CLOCK_MONOTONIC, &end);
time_taken = BILLION *(end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
return time_taken;
}
/*
* main()
* We will demonstrate the use of each function for simplicity's sake. Then we will run 2 suites of benchmarking:
* Test Suite A) Uses generic Lorem Ipsum text, which should be moderately compressible insofar as typical human text is
* compressible at such a small src_size.
* Test Suite B) For the sake of testing, see what results we get if the data is drastically easier to compress. IF there
* are indeed losses and IF more compressible data is faster to process, this will amplify the findings.
*/
int main(int argc, char **argv) {
// Get and verify options. There's really only 1: How many iterations to run.
int iterations = 1000000;
if (argc > 1)
iterations = atoi(argv[1]);
if (iterations < 1)
usage("Argument 1 (iterations) must be > 0.");
// First we will create 2 sources (char *) of 2000 bytes each. One normal text, the other highly-compressible text.
const char *src = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed luctus purus et risus vulputate, et mollis orci ullamcorper. Nulla facilisi. Fusce in ligula sed purus varius aliquet interdum vitae justo. Proin quis diam velit. Nulla varius iaculis auctor. Cras volutpat, justo eu dictum pulvinar, elit sem porttitor metus, et imperdiet metus sapien et ante. Nullam nisi nulla, ornare eu tristique eu, dignissim vitae diam. Nulla sagittis porta libero, a accumsan felis sagittis scelerisque. Integer laoreet eleifend congue. Etiam rhoncus leo vel dolor fermentum, quis luctus nisl iaculis. Praesent a erat sapien. Aliquam semper mi in lorem ultrices ultricies. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In feugiat risus sed enim ultrices, at sodales nulla tristique. Maecenas eget pellentesque justo, sed pellentesque lectus. Fusce sagittis sit amet elit vel varius. Donec sed ligula nec ligula vulputate rutrum sed ut lectus. Etiam congue pharetra leo vitae cursus. Morbi enim ante, porttitor ut varius vel, tincidunt quis justo. Nunc iaculis, risus id ultrices semper, metus est efficitur ligula, vel posuere risus nunc eget purus. Ut lorem turpis, condimentum at sem sed, porta aliquam turpis. In ut sapien a nulla dictum tincidunt quis sit amet lorem. Fusce at est egestas, luctus neque eu, consectetur tortor. Phasellus eleifend ultricies nulla ac lobortis. Morbi maximus quam cursus vehicula iaculis. Maecenas cursus vel justo ut rutrum. Curabitur magna orci, dignissim eget dapibus vitae, finibus id lacus. Praesent rhoncus mattis augue vitae bibendum. Praesent porta mauris non ultrices fermentum. Quisque vulputate ipsum in sodales pulvinar. Aliquam nec mollis felis. Donec vitae augue pulvinar, congue nisl sed, pretium purus. Fusce lobortis mi ac neque scelerisque semper. Pellentesque vel est vitae magna aliquet aliquet. Nam non dolor. Nulla facilisi. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Morbi ac lacinia felis metus.";
const char *hc_src = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
// Set and derive sizes. Since we're using strings, use strlen() + 1 for \0.
const size_t src_size = strlen(src) + 1;
const size_t max_dst_size = LZ4_compressBound(src_size);
int bytes_returned = 0;
// Now build allocations for the data we'll be playing with.
char *dst = calloc(1, max_dst_size);
char *known_good_dst = calloc(1, max_dst_size);
char *known_good_hc_dst = calloc(1, max_dst_size);
if (dst == NULL || known_good_dst == NULL || known_good_hc_dst == NULL)
run_screaming("Couldn't allocate memory for the destination buffers. Sad :(", 1);
// Create known-good buffers to verify our tests with other functions will produce the same results.
bytes_returned = LZ4_compress_default(src, known_good_dst, src_size, max_dst_size);
if (bytes_returned < 1)
run_screaming("Couldn't create a known-good destination buffer for comparison... this is bad.", 1);
const size_t src_comp_size = bytes_returned;
bytes_returned = LZ4_compress_default(hc_src, known_good_hc_dst, src_size, max_dst_size);
if (bytes_returned < 1)
run_screaming("Couldn't create a known-good (highly compressible) destination buffer for comparison... this is bad.", 1);
const size_t hc_src_comp_size = bytes_returned;
/* LZ4_compress_default() */
// This is the default function so we don't need to demonstrate how to use it. See basics.c if you need more basal information.
/* LZ4_compress_fast() */
// Using this function is identical to LZ4_compress_default() except that we also specify an "acceleration" value, which defaults to 1.
memset(dst, 0, max_dst_size);
bytes_returned = LZ4_compress_fast(src, dst, src_size, max_dst_size, 1);
if (bytes_returned < 1)
run_screaming("Failed to compress src using LZ4_compress_fast. echo $? for return code.", bytes_returned);
if (memcmp(dst, known_good_dst, bytes_returned) != 0)
run_screaming("According to memcmp(), the value we got in dst from LZ4_compress_fast doesn't match the known-good value. This is bad.", 1);
/* LZ4_compress_fast_extState() */
// Using this function directly requires that we build an LZ4_stream_t struct ourselves. We do NOT have to reset it ourselves.
memset(dst, 0, max_dst_size);
LZ4_stream_t state;
bytes_returned = LZ4_compress_fast_extState(&state, src, dst, src_size, max_dst_size, 1);
if (bytes_returned < 1)
run_screaming("Failed to compress src using LZ4_compress_fast_extState. echo $? for return code.", bytes_returned);
if (memcmp(dst, known_good_dst, bytes_returned) != 0)
run_screaming("According to memcmp(), the value we got in dst from LZ4_compress_fast_extState doesn't match the known-good value. This is bad.", 1);
/* LZ4_compress_generic */
// When you can exactly control the inputs and options of your LZ4 needs, you can use LZ4_compress_generic and fixed (const)
// values for the enum types such as dictionary and limitations. Any other direct-use is probably a bad idea.
//
// That said, the LZ4_compress_generic() function is 'static inline' and does not have a prototype in lz4.h to expose a symbol
// for it. In other words: we can't access it directly. I don't want to submit a PR that modifies lz4.c/h. Yann and others can
// do that if they feel it's worth expanding this example.
//
// I will, however, leave a skeleton of what would be required to use it directly:
/*
memset(dst, 0, max_dst_size);
// LZ4_stream_t state: is already declared above. We can reuse it BUT we have to reset the stream ourselves between each call.
LZ4_resetStream((LZ4_stream_t *)&state);
// Since src size is small we know the following enums will be used: notLimited (0), byU16 (2), noDict (0), noDictIssue (0).
bytes_returned = LZ4_compress_generic(&state, src, dst, src_size, max_dst_size, notLimited, byU16, noDict, noDictIssue, 1);
if (bytes_returned < 1)
run_screaming("Failed to compress src using LZ4_compress_generic. echo $? for return code.", bytes_returned);
if (memcmp(dst, known_good_dst, bytes_returned) != 0)
run_screaming("According to memcmp(), the value we got in dst from LZ4_compress_generic doesn't match the known-good value. This is bad.", 1);
*/
/* Benchmarking */
/* Now we'll run a few rudimentary benchmarks with each function to demonstrate differences in speed based on the function used.
* Remember, we cannot call LZ4_compress_generic() directly (yet) so it's disabled.
*/
// Suite A - Normal Compressibility
char *dst_d = calloc(1, src_size);
memset(dst, 0, max_dst_size);
printf("\nStarting suite A: Normal compressible text.\n");
uint64_t time_taken__default = bench(known_good_dst, ID__LZ4_COMPRESS_DEFAULT, iterations, src, dst, src_size, max_dst_size, src_comp_size);
uint64_t time_taken__fast = bench(known_good_dst, ID__LZ4_COMPRESS_FAST, iterations, src, dst, src_size, max_dst_size, src_comp_size);
uint64_t time_taken__fast_extstate = bench(known_good_dst, ID__LZ4_COMPRESS_FAST_EXTSTATE, iterations, src, dst, src_size, max_dst_size, src_comp_size);
//uint64_t time_taken__generic = bench(known_good_dst, ID__LZ4_COMPRESS_GENERIC, iterations, src, dst, src_size, max_dst_size, src_comp_size);
uint64_t time_taken__decomp_safe = bench(src, ID__LZ4_DECOMPRESS_SAFE, iterations, known_good_dst, dst_d, src_size, max_dst_size, src_comp_size);
uint64_t time_taken__decomp_fast = bench(src, ID__LZ4_DECOMPRESS_FAST, iterations, known_good_dst, dst_d, src_size, max_dst_size, src_comp_size);
// Suite B - Highly Compressible
memset(dst, 0, max_dst_size);
printf("\nStarting suite B: Highly compressible text.\n");
uint64_t time_taken_hc__default = bench(known_good_hc_dst, ID__LZ4_COMPRESS_DEFAULT, iterations, hc_src, dst, src_size, max_dst_size, hc_src_comp_size);
uint64_t time_taken_hc__fast = bench(known_good_hc_dst, ID__LZ4_COMPRESS_FAST, iterations, hc_src, dst, src_size, max_dst_size, hc_src_comp_size);
uint64_t time_taken_hc__fast_extstate = bench(known_good_hc_dst, ID__LZ4_COMPRESS_FAST_EXTSTATE, iterations, hc_src, dst, src_size, max_dst_size, hc_src_comp_size);
//uint64_t time_taken_hc__generic = bench(known_good_hc_dst, ID__LZ4_COMPRESS_GENERIC, iterations, hc_src, dst, src_size, max_dst_size, hc_src_comp_size);
uint64_t time_taken_hc__decomp_safe = bench(hc_src, ID__LZ4_DECOMPRESS_SAFE, iterations, known_good_hc_dst, dst_d, src_size, max_dst_size, hc_src_comp_size);
uint64_t time_taken_hc__decomp_fast = bench(hc_src, ID__LZ4_DECOMPRESS_FAST, iterations, known_good_hc_dst, dst_d, src_size, max_dst_size, hc_src_comp_size);
// Report and leave.
setlocale(LC_ALL, "");
const char *format = "|%-14s|%-30s|%'14.9f|%'16d|%'14d|%'13.2f%%|\n";
const char *header_format = "|%-14s|%-30s|%14s|%16s|%14s|%14s|\n";
const char *separator = "+--------------+------------------------------+--------------+----------------+--------------+--------------+\n";
printf("\n");
printf("%s", separator);
printf(header_format, "Source", "Function Benchmarked", "Total Seconds", "Iterations/sec", "ns/Iteration", "% of default");
printf("%s", separator);
printf(format, "Normal Text", "LZ4_compress_default()", (double)time_taken__default / BILLION, (int)(iterations / ((double)time_taken__default /BILLION)), time_taken__default / iterations, (double)time_taken__default * 100 / time_taken__default);
printf(format, "Normal Text", "LZ4_compress_fast()", (double)time_taken__fast / BILLION, (int)(iterations / ((double)time_taken__fast /BILLION)), time_taken__fast / iterations, (double)time_taken__fast * 100 / time_taken__default);
printf(format, "Normal Text", "LZ4_compress_fast_extState()", (double)time_taken__fast_extstate / BILLION, (int)(iterations / ((double)time_taken__fast_extstate /BILLION)), time_taken__fast_extstate / iterations, (double)time_taken__fast_extstate * 100 / time_taken__default);
//printf(format, "Normal Text", "LZ4_compress_generic()", (double)time_taken__generic / BILLION, (int)(iterations / ((double)time_taken__generic /BILLION)), time_taken__generic / iterations, (double)time_taken__generic * 100 / time_taken__default);
printf(format, "Normal Text", "LZ4_decompress_safe()", (double)time_taken__decomp_safe / BILLION, (int)(iterations / ((double)time_taken__decomp_safe /BILLION)), time_taken__decomp_safe / iterations, (double)time_taken__decomp_safe * 100 / time_taken__default);
printf(format, "Normal Text", "LZ4_decompress_fast()", (double)time_taken__decomp_fast / BILLION, (int)(iterations / ((double)time_taken__decomp_fast /BILLION)), time_taken__decomp_fast / iterations, (double)time_taken__decomp_fast * 100 / time_taken__default);
printf(header_format, "", "", "", "", "", "");
printf(format, "Compressible", "LZ4_compress_default()", (double)time_taken_hc__default / BILLION, (int)(iterations / ((double)time_taken_hc__default /BILLION)), time_taken_hc__default / iterations, (double)time_taken_hc__default * 100 / time_taken_hc__default);
printf(format, "Compressible", "LZ4_compress_fast()", (double)time_taken_hc__fast / BILLION, (int)(iterations / ((double)time_taken_hc__fast /BILLION)), time_taken_hc__fast / iterations, (double)time_taken_hc__fast * 100 / time_taken_hc__default);
printf(format, "Compressible", "LZ4_compress_fast_extState()", (double)time_taken_hc__fast_extstate / BILLION, (int)(iterations / ((double)time_taken_hc__fast_extstate /BILLION)), time_taken_hc__fast_extstate / iterations, (double)time_taken_hc__fast_extstate * 100 / time_taken_hc__default);
//printf(format, "Compressible", "LZ4_compress_generic()", (double)time_taken_hc__generic / BILLION, (int)(iterations / ((double)time_taken_hc__generic /BILLION)), time_taken_hc__generic / iterations, (double)time_taken_hc__generic * 100 / time_taken_hc__default);
printf(format, "Compressible", "LZ4_decompress_safe()", (double)time_taken_hc__decomp_safe / BILLION, (int)(iterations / ((double)time_taken_hc__decomp_safe /BILLION)), time_taken_hc__decomp_safe / iterations, (double)time_taken_hc__decomp_safe * 100 / time_taken_hc__default);
printf(format, "Compressible", "LZ4_decompress_fast()", (double)time_taken_hc__decomp_fast / BILLION, (int)(iterations / ((double)time_taken_hc__decomp_fast /BILLION)), time_taken_hc__decomp_fast / iterations, (double)time_taken_hc__decomp_fast * 100 / time_taken_hc__default);
printf("%s", separator);
printf("\n");
printf("All done, ran %d iterations per test.\n", iterations);
return 0;
}


@@ -1,280 +0,0 @@
// LZ4 API example : Dictionary Random Access
#ifdef _MSC_VER /* Visual Studio */
# define _CRT_SECURE_NO_WARNINGS
# define snprintf sprintf_s
#endif
#include "lz4.h"
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#define MIN(x, y) ((x) < (y) ? (x) : (y))
enum {
BLOCK_BYTES = 1024, /* 1 KiB of uncompressed data in a block */
DICTIONARY_BYTES = 1024, /* Load a 1 KiB dictionary */
MAX_BLOCKS = 1024 /* For simplicity of implementation */
};
/**
* Magic bytes for this test case.
* This is not a great magic number because it is a common word in ASCII.
* However, it is important to have some versioning system in your format.
*/
const char kTestMagic[] = { 'T', 'E', 'S', 'T' };
void write_int(FILE* fp, int i) {
size_t written = fwrite(&i, sizeof(i), 1, fp);
if (written != 1) { exit(10); }
}
void write_bin(FILE* fp, const void* array, size_t arrayBytes) {
size_t written = fwrite(array, 1, arrayBytes, fp);
if (written != arrayBytes) { exit(11); }
}
void read_int(FILE* fp, int* i) {
size_t read = fread(i, sizeof(*i), 1, fp);
if (read != 1) { exit(12); }
}
size_t read_bin(FILE* fp, void* array, size_t arrayBytes) {
size_t read = fread(array, 1, arrayBytes, fp);
if (ferror(fp)) { exit(12); }
return read;
}
void seek_bin(FILE* fp, long offset, int origin) {
if (fseek(fp, offset, origin)) { exit(14); }
}
void test_compress(FILE* outFp, FILE* inpFp, void *dict, int dictSize)
{
LZ4_stream_t lz4Stream_body;
LZ4_stream_t* lz4Stream = &lz4Stream_body;
char inpBuf[BLOCK_BYTES];
int offsets[MAX_BLOCKS];
int *offsetsEnd = offsets;
LZ4_resetStream(lz4Stream);
/* Write header magic */
write_bin(outFp, kTestMagic, sizeof(kTestMagic));
*offsetsEnd++ = sizeof(kTestMagic);
/* Write compressed data blocks. Each block contains BLOCK_BYTES of plain
data except possibly the last. */
for(;;) {
const int inpBytes = (int) read_bin(inpFp, inpBuf, BLOCK_BYTES);
if(0 == inpBytes) {
break;
}
/* Forget previously compressed data and load the dictionary */
LZ4_loadDict(lz4Stream, dict, dictSize);
{
char cmpBuf[LZ4_COMPRESSBOUND(BLOCK_BYTES)];
const int cmpBytes = LZ4_compress_fast_continue(
lz4Stream, inpBuf, cmpBuf, inpBytes, sizeof(cmpBuf), 1);
if(cmpBytes <= 0) { exit(1); }
write_bin(outFp, cmpBuf, (size_t)cmpBytes);
/* Keep track of the offsets */
*offsetsEnd = *(offsetsEnd - 1) + cmpBytes;
++offsetsEnd;
}
if (offsetsEnd - offsets > MAX_BLOCKS) { exit(2); }
}
/* Write the tailing jump table */
{
int *ptr = offsets;
while (ptr != offsetsEnd) {
write_int(outFp, *ptr++);
}
write_int(outFp, offsetsEnd - offsets);
}
}
void test_decompress(FILE* outFp, FILE* inpFp, void *dict, int dictSize, int offset, int length)
{
LZ4_streamDecode_t lz4StreamDecode_body;
LZ4_streamDecode_t* lz4StreamDecode = &lz4StreamDecode_body;
/* The blocks [currentBlock, endBlock) contain the data we want */
int currentBlock = offset / BLOCK_BYTES;
int endBlock = ((offset + length - 1) / BLOCK_BYTES) + 1;
char decBuf[BLOCK_BYTES];
int offsets[MAX_BLOCKS];
/* Special cases */
if (length == 0) { return; }
/* Read the magic bytes */
{
char magic[sizeof(kTestMagic)];
size_t read = read_bin(inpFp, magic, sizeof(magic));
if (read != sizeof(magic)) { exit(1); }
if (memcmp(kTestMagic, magic, sizeof(magic))) { exit(2); }
}
/* Read the offsets tail */
{
int numOffsets;
int block;
int *offsetsPtr = offsets;
seek_bin(inpFp, -4, SEEK_END);
read_int(inpFp, &numOffsets);
if (numOffsets <= endBlock) { exit(3); }
seek_bin(inpFp, -4 * (numOffsets + 1), SEEK_END);
for (block = 0; block <= endBlock; ++block) {
read_int(inpFp, offsetsPtr++);
}
}
/* Seek to the first block to read */
seek_bin(inpFp, offsets[currentBlock], SEEK_SET);
offset = offset % BLOCK_BYTES;
/* Start decoding */
for(; currentBlock < endBlock; ++currentBlock) {
char cmpBuf[LZ4_COMPRESSBOUND(BLOCK_BYTES)];
/* The difference in offsets is the size of the block */
int cmpBytes = offsets[currentBlock + 1] - offsets[currentBlock];
{
const size_t read = read_bin(inpFp, cmpBuf, (size_t)cmpBytes);
if(read != (size_t)cmpBytes) { exit(4); }
}
/* Load the dictionary */
LZ4_setStreamDecode(lz4StreamDecode, dict, dictSize);
{
const int decBytes = LZ4_decompress_safe_continue(
lz4StreamDecode, cmpBuf, decBuf, cmpBytes, BLOCK_BYTES);
if(decBytes <= 0) { exit(5); }
{
/* Write out the part of the data we care about */
int blockLength = MIN(length, (decBytes - offset));
write_bin(outFp, decBuf + offset, (size_t)blockLength);
offset = 0;
length -= blockLength;
}
}
}
}
int compare(FILE* fp0, FILE* fp1, int length)
{
int result = 0;
while(0 == result) {
char b0[4096];
char b1[4096];
const size_t r0 = read_bin(fp0, b0, MIN(length, (int)sizeof(b0)));
const size_t r1 = read_bin(fp1, b1, MIN(length, (int)sizeof(b1)));
result = (int) r0 - (int) r1;
if(0 == r0 || 0 == r1) {
break;
}
if(0 == result) {
result = memcmp(b0, b1, r0);
}
length -= r0;
}
return result;
}
int main(int argc, char* argv[])
{
char inpFilename[256] = { 0 };
char lz4Filename[256] = { 0 };
char decFilename[256] = { 0 };
char dictFilename[256] = { 0 };
int offset;
int length;
char dict[DICTIONARY_BYTES];
int dictSize;
if(argc < 5) {
printf("Usage: %s input dictionary offset length", argv[0]);
return 0;
}
snprintf(inpFilename, 256, "%s", argv[1]);
snprintf(lz4Filename, 256, "%s.lz4s-%d", argv[1], BLOCK_BYTES);
snprintf(decFilename, 256, "%s.lz4s-%d.dec", argv[1], BLOCK_BYTES);
snprintf(dictFilename, 256, "%s", argv[2]);
offset = atoi(argv[3]);
length = atoi(argv[4]);
printf("inp = [%s]\n", inpFilename);
printf("lz4 = [%s]\n", lz4Filename);
printf("dec = [%s]\n", decFilename);
printf("dict = [%s]\n", dictFilename);
printf("offset = [%d]\n", offset);
printf("length = [%d]\n", length);
/* Load dictionary */
{
FILE* dictFp = fopen(dictFilename, "rb");
dictSize = (int)read_bin(dictFp, dict, DICTIONARY_BYTES);
fclose(dictFp);
}
/* compress */
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* outFp = fopen(lz4Filename, "wb");
printf("compress : %s -> %s\n", inpFilename, lz4Filename);
test_compress(outFp, inpFp, dict, dictSize);
printf("compress : done\n");
fclose(outFp);
fclose(inpFp);
}
/* decompress */
{
FILE* inpFp = fopen(lz4Filename, "rb");
FILE* outFp = fopen(decFilename, "wb");
printf("decompress : %s -> %s\n", lz4Filename, decFilename);
test_decompress(outFp, inpFp, dict, DICTIONARY_BYTES, offset, length);
printf("decompress : done\n");
fclose(outFp);
fclose(inpFp);
}
/* verify */
{
FILE* inpFp = fopen(inpFilename, "rb");
FILE* decFp = fopen(decFilename, "rb");
seek_bin(inpFp, offset, SEEK_SET);
printf("verify : %s <-> %s\n", inpFilename, decFilename);
const int cmp = compare(inpFp, decFp, length);
if(0 == cmp) {
printf("verify : OK\n");
} else {
printf("verify : NG\n");
}
fclose(decFp);
fclose(inpFp);
}
return 0;
}


@@ -1,67 +0,0 @@
# LZ4 API Example : Dictionary Random Access
`dictionaryRandomAccess.c` is an LZ4 API example which implements dictionary compression and random access decompression.
Please note that the output file is not compatible with lz4frame and is platform dependent.
## What's the point of this example?
- Dictionary-based compression for homogeneous files.
- Random access to compressed blocks.
## How the compression works
The compressor reads the dictionary from a file and uses it as the history for each block.
This allows each block to be independent while maintaining the compression ratio.
```
Dictionary
+
|
v
+---------+
| Block#1 |
+----+----+
|
v
{Out#1}
Dictionary
+
|
v
+---------+
| Block#2 |
+----+----+
|
v
{Out#2}
```
After writing the magic bytes `TEST` and then the compressed blocks, write out the jump table.
The last 4 bytes are an integer containing the number of offset entries in the stream (one more than the number of blocks).
If there are `N` blocks, then just before the last 4 bytes are `N + 1` 4-byte integers containing the offsets at the beginning and end of each block.
Let `Offset#K` be the total number of bytes written after writing out `Block#K`, *including* the magic bytes for simplicity.
```
+------+---------+ +---------+---+----------+ +----------+-----+
| TEST | Block#1 | ... | Block#N | 4 | Offset#1 | ... | Offset#N | N+1 |
+------+---------+ +---------+---+----------+ +----------+-----+
```
## How the decompression works
Decompression works in reverse order.
- Seek to the last 4 bytes of the file and read the number of offsets.
- Read each offset into an array.
- Seek to the first block containing the data we want to read.
We know where to look because each block contains a fixed amount of uncompressed data, except possibly the last.
- Decompress it and write the data we need from it to the file.
- Read the next block.
- Decompress it and write that page to the file.
Continue this procedure until all the required data has been read. A minimal sketch of the seek logic appears below.
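The sketch below assumes the `read_int()`/`seek_bin()` helpers and the `BLOCK_BYTES`/`MAX_BLOCKS` constants from `dictionaryRandomAccess.c`, plus an already-open `inpFp` and the requested byte `offset`:
```
int numOffsets;
int offsets[MAX_BLOCKS + 1];                      /* one entry per block, plus the leading header offset */
seek_bin(inpFp, -4, SEEK_END);                    /* the last 4 bytes hold the number of offset entries  */
read_int(inpFp, &numOffsets);
seek_bin(inpFp, -4 * (numOffsets + 1), SEEK_END); /* jump back over the whole tail                       */
for (int block = 0; block < numOffsets; ++block)
    read_int(inpFp, &offsets[block]);
int currentBlock = offset / BLOCK_BYTES;          /* each block holds BLOCK_BYTES of plain data          */
seek_bin(inpFp, offsets[currentBlock], SEEK_SET);
```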


@@ -1,312 +0,0 @@
// LZ4frame API example : compress a file
// Based on sample code from Zbigniew Jędrzejewski-Szmek
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <lz4frame.h>
#define BUF_SIZE 16*1024
#define LZ4_HEADER_SIZE 19
#define LZ4_FOOTER_SIZE 4
static const LZ4F_preferences_t lz4_preferences = {
{ LZ4F_max256KB, LZ4F_blockLinked, LZ4F_noContentChecksum, LZ4F_frame,
0 /* content size unknown */, 0 /* no dictID */ , LZ4F_noBlockChecksum },
0, /* compression level */
0, /* autoflush */
{ 0, 0, 0, 0 }, /* reserved, must be set to 0 */
};
static size_t compress_file(FILE *in, FILE *out, size_t *size_in, size_t *size_out) {
LZ4F_errorCode_t r;
LZ4F_compressionContext_t ctx;
char *src, *buf = NULL;
size_t size, n, k, count_in = 0, count_out, offset = 0, frame_size;
r = LZ4F_createCompressionContext(&ctx, LZ4F_VERSION);
if (LZ4F_isError(r)) {
printf("Failed to create context: error %zu\n", r);
return 1;
}
r = 1; /* function result; 1 == error, by default (early exit) */
src = malloc(BUF_SIZE);
if (!src) {
printf("Not enough memory\n");
goto cleanup;
}
frame_size = LZ4F_compressBound(BUF_SIZE, &lz4_preferences);
size = frame_size + LZ4_HEADER_SIZE + LZ4_FOOTER_SIZE;
buf = malloc(size);
if (!buf) {
printf("Not enough memory\n");
goto cleanup;
}
n = offset = count_out = LZ4F_compressBegin(ctx, buf, size, &lz4_preferences);
if (LZ4F_isError(n)) {
printf("Failed to start compression: error %zu\n", n);
goto cleanup;
}
printf("Buffer size is %zu bytes, header size %zu bytes\n", size, n);
for (;;) {
k = fread(src, 1, BUF_SIZE, in);
if (k == 0)
break;
count_in += k;
n = LZ4F_compressUpdate(ctx, buf + offset, size - offset, src, k, NULL);
if (LZ4F_isError(n)) {
printf("Compression failed: error %zu\n", n);
goto cleanup;
}
offset += n;
count_out += n;
if (size - offset < frame_size + LZ4_FOOTER_SIZE) {
printf("Writing %zu bytes\n", offset);
k = fwrite(buf, 1, offset, out);
if (k < offset) {
if (ferror(out))
printf("Write failed\n");
else
printf("Short write\n");
goto cleanup;
}
offset = 0;
}
}
n = LZ4F_compressEnd(ctx, buf + offset, size - offset, NULL);
if (LZ4F_isError(n)) {
printf("Failed to end compression: error %zu\n", n);
goto cleanup;
}
offset += n;
count_out += n;
printf("Writing %zu bytes\n", offset);
k = fwrite(buf, 1, offset, out);
if (k < offset) {
if (ferror(out))
printf("Write failed\n");
else
printf("Short write\n");
goto cleanup;
}
*size_in = count_in;
*size_out = count_out;
r = 0;
cleanup:
if (ctx)
LZ4F_freeCompressionContext(ctx);
free(src);
free(buf);
return r;
}
static size_t get_block_size(const LZ4F_frameInfo_t* info) {
switch (info->blockSizeID) {
case LZ4F_default:
case LZ4F_max64KB: return 1 << 16;
case LZ4F_max256KB: return 1 << 18;
case LZ4F_max1MB: return 1 << 20;
case LZ4F_max4MB: return 1 << 22;
default:
printf("Impossible unless more block sizes are allowed\n");
exit(1);
}
}
static size_t decompress_file(FILE *in, FILE *out) {
void* const src = malloc(BUF_SIZE);
void* dst = NULL;
size_t dstCapacity = 0;
LZ4F_dctx *dctx = NULL;
size_t ret;
/* Initialization */
if (!src) { perror("decompress_file(src)"); goto cleanup; }
ret = LZ4F_createDecompressionContext(&dctx, 100);
if (LZ4F_isError(ret)) {
printf("LZ4F_dctx creation error: %s\n", LZ4F_getErrorName(ret));
goto cleanup;
}
/* Decompression */
ret = 1;
while (ret != 0) {
/* Load more input */
size_t srcSize = fread(src, 1, BUF_SIZE, in);
void* srcPtr = src;
void* srcEnd = srcPtr + srcSize;
if (srcSize == 0 || ferror(in)) {
printf("Decompress: not enough input or error reading file\n");
goto cleanup;
}
/* Allocate destination buffer if it isn't already */
if (!dst) {
LZ4F_frameInfo_t info;
ret = LZ4F_getFrameInfo(dctx, &info, src, &srcSize);
if (LZ4F_isError(ret)) {
printf("LZ4F_getFrameInfo error: %s\n", LZ4F_getErrorName(ret));
goto cleanup;
}
/* Allocating enough space for an entire block isn't necessary for
* correctness, but it allows some memcpy's to be elided.
*/
dstCapacity = get_block_size(&info);
dst = malloc(dstCapacity);
if (!dst) { perror("decompress_file(dst)"); goto cleanup; }
srcPtr += srcSize;
srcSize = srcEnd - srcPtr;
}
/* Decompress:
* Continue while there is more input to read and the frame isn't over.
* If srcPtr == srcEnd then we know that there is no more output left in the
* internal buffer left to flush.
*/
while (srcPtr != srcEnd && ret != 0) {
/* INVARIANT: Any data left in dst has already been written */
size_t dstSize = dstCapacity;
ret = LZ4F_decompress(dctx, dst, &dstSize, srcPtr, &srcSize, /* LZ4F_decompressOptions_t */ NULL);
if (LZ4F_isError(ret)) {
printf("Decompression error: %s\n", LZ4F_getErrorName(ret));
goto cleanup;
}
/* Flush output */
if (dstSize != 0){
size_t written = fwrite(dst, 1, dstSize, out);
printf("Writing %zu bytes\n", dstSize);
if (written != dstSize) {
printf("Decompress: Failed to write to file\n");
goto cleanup;
}
}
/* Update input */
srcPtr += srcSize;
srcSize = srcEnd - srcPtr;
}
}
/* Check that there isn't trailing input data after the frame.
* It is valid to have multiple frames in the same file, but this example
* doesn't support it.
*/
ret = fread(src, 1, 1, in);
if (ret != 0 || !feof(in)) {
printf("Decompress: Trailing data left in file after frame\n");
goto cleanup;
}
cleanup:
free(src);
free(dst);
return LZ4F_freeDecompressionContext(dctx); /* note : free works on NULL */
}
int compare(FILE* fp0, FILE* fp1)
{
int result = 0;
while(0 == result) {
char b0[1024];
char b1[1024];
const size_t r0 = fread(b0, 1, sizeof(b0), fp0);
const size_t r1 = fread(b1, 1, sizeof(b1), fp1);
result = (int) r0 - (int) r1;
if (0 == r0 || 0 == r1) {
break;
}
if (0 == result) {
result = memcmp(b0, b1, r0);
}
}
return result;
}
int main(int argc, const char **argv) {
char inpFilename[256] = { 0 };
char lz4Filename[256] = { 0 };
char decFilename[256] = { 0 };
if(argc < 2) {
printf("Please specify input filename\n");
return 0;
}
snprintf(inpFilename, 256, "%s", argv[1]);
snprintf(lz4Filename, 256, "%s.lz4", argv[1]);
snprintf(decFilename, 256, "%s.lz4.dec", argv[1]);
printf("inp = [%s]\n", inpFilename);
printf("lz4 = [%s]\n", lz4Filename);
printf("dec = [%s]\n", decFilename);
/* compress */
{ FILE* const inpFp = fopen(inpFilename, "rb");
FILE* const outFp = fopen(lz4Filename, "wb");
size_t sizeIn = 0;
size_t sizeOut = 0;
size_t ret;
printf("compress : %s -> %s\n", inpFilename, lz4Filename);
ret = compress_file(inpFp, outFp, &sizeIn, &sizeOut);
if (ret) {
printf("compress : failed with code %zu\n", ret);
return (int)ret;
}
printf("%s: %zu → %zu bytes, %.1f%%\n",
inpFilename, sizeIn, sizeOut,
(double)sizeOut / sizeIn * 100);
printf("compress : done\n");
fclose(outFp);
fclose(inpFp);
}
/* decompress */
{ FILE* const inpFp = fopen(lz4Filename, "rb");
FILE* const outFp = fopen(decFilename, "wb");
size_t ret;
printf("decompress : %s -> %s\n", lz4Filename, decFilename);
ret = decompress_file(inpFp, outFp);
if (ret) {
printf("decompress : failed with code %zu\n", ret);
return (int)ret;
}
printf("decompress : done\n");
fclose(outFp);
fclose(inpFp);
}
/* verify */
{ FILE* const inpFp = fopen(inpFilename, "rb");
FILE* const decFp = fopen(decFilename, "rb");
printf("verify : %s <-> %s\n", inpFilename, decFilename);
const int cmp = compare(inpFp, decFp);
if(0 == cmp) {
printf("verify : OK\n");
} else {
printf("verify : NG\n");
}
fclose(decFp);
fclose(inpFp);
}
}


@@ -1,13 +0,0 @@
// LZ4 trivial example : print Library version number
// Copyright : Takayuki Matsuoka & Yann Collet
#include <stdio.h>
#include "lz4.h"
int main(int argc, char** argv)
{
(void)argc; (void)argv;
printf("Hello World ! LZ4 Library version = %d\n", LZ4_versionNumber());
return 0;
}


@@ -1,94 +0,0 @@
/*
* simple_buffer.c
* Copyright : Kyle Harper
* License : Follows same licensing as the lz4.c/lz4.h program at any given time. Currently, BSD 2.
* Description: Example program to demonstrate the basic usage of the compress/decompress functions within lz4.c/lz4.h.
* The functions you'll likely want are LZ4_compress_default and LZ4_decompress_safe. Both of these are documented in
* the lz4.h header file; I recommend reading them.
*/
/* Includes, for Power! */
#include "lz4.h" // This is all that is required to expose the prototypes for basic compression and decompression.
#include <stdio.h> // For printf()
#include <string.h> // For memcmp()
#include <stdlib.h> // For exit()
/*
* Easy show-error-and-bail function.
*/
void run_screaming(const char *message, const int code) {
printf("%s\n", message);
exit(code);
return;
}
/*
* main
*/
int main(void) {
/* Introduction */
// Below we will have a Compression and Decompression section to demonstrate.
// There are a few important notes before we start:
// 1) The return codes of LZ4_ functions are important.
// Read lz4.h if you're unsure what a given code means.
// 2) LZ4 uses char* pointers in all LZ4_ functions.
// This is baked into the API and probably not going to change.
// If your program uses pointers that are unsigned char*, void*, or otherwise different,
// you may need to do some casting or set the right -W compiler flags to ignore those warnings (e.g.: -Wno-pointer-sign).
/* Compression */
// We'll store some text into a variable pointed to by *src to be compressed later.
const char* const src = "Lorem ipsum dolor sit amet, consectetur adipiscing elit.";
// The compression function needs to know how many bytes exist. Since we're using a string, we can use strlen() + 1 (for \0).
const int src_size = (int)(strlen(src) + 1);
// LZ4 provides a function that will tell you the maximum size of compressed output based on input data via LZ4_compressBound().
const int max_dst_size = LZ4_compressBound(src_size);
// We will use that size for our destination boundary when allocating space.
char* compressed_data = malloc(max_dst_size);
if (compressed_data == NULL)
run_screaming("Failed to allocate memory for *compressed_data.", 1);
// That's all the information and preparation LZ4 needs to compress *src into *compressed_data.
// Invoke LZ4_compress_default now with our size values and pointers to our memory locations.
// Save the return value for error checking.
const int compressed_data_size = LZ4_compress_default(src, compressed_data, src_size, max_dst_size);
// Check return_value to determine what happened.
if (compressed_data_size < 0)
run_screaming("A negative result from LZ4_compress_default indicates a failure trying to compress the data. See exit code (echo $?) for value returned.", compressed_data_size);
if (compressed_data_size == 0)
run_screaming("A result of 0 means compression worked, but was stopped because the destination buffer couldn't hold all the information.", 1);
if (compressed_data_size > 0)
printf("We successfully compressed some data!\n");
// Not only does a positive return_value mean success, the value returned == the number of bytes required.
// You can use this to realloc() *compressed_data to free up memory, if desired. We'll do so just to demonstrate the concept.
compressed_data = (char *)realloc(compressed_data, compressed_data_size);
if (compressed_data == NULL)
run_screaming("Failed to re-alloc memory for compressed_data. Sad :(", 1);
/* Decompression */
// Now that we've successfully compressed the information from *src to *compressed_data, let's do the opposite! We'll create a
// *regen_buffer location of size src_size since we know that value.
char* const regen_buffer = malloc(src_size);
if (regen_buffer == NULL)
run_screaming("Failed to allocate memory for *regen_buffer.", 1);
// The LZ4_decompress_safe function needs to know where the compressed data is, how many bytes long it is,
// where the regen_buffer memory location is, and how large regen_buffer (uncompressed) output will be.
// Again, save the return_value.
const int decompressed_size = LZ4_decompress_safe(compressed_data, regen_buffer, compressed_data_size, src_size);
free(compressed_data); /* no longer useful */
if (decompressed_size < 0)
run_screaming("A negative result from LZ4_decompress_safe indicates a failure trying to decompress the data. See exit code (echo $?) for value returned.", decompressed_size);
if (decompressed_size == 0)
run_screaming("I'm not sure this function can ever return 0. Documentation in lz4.h doesn't indicate so.", 1);
if (decompressed_size > 0)
printf("We successfully decompressed some data!\n");
// Not only does a positive return value mean success,
// value returned == number of bytes regenerated from compressed_data stream.
/* Validation */
// We should be able to compare our original *src with our *regen_buffer and find them byte-for-byte identical.
if (memcmp(src, regen_buffer, src_size) != 0)
run_screaming("Validation failed. *src and *regen_buffer are not identical.", 1);
printf("Validation done. The string we ended up with is:\n%s\n", regen_buffer);
return 0;
}


@@ -1,87 +0,0 @@
# LZ4 Streaming API Basics
by *Takayuki Matsuoka*
## LZ4 API sets
LZ4 has the following API sets:
- "Auto Framing" API (lz4frame.h):
This is the most recommended API for usual applications.
It guarantees interoperability with other tools/libraries compliant with the LZ4 framing format,
such as the LZ4 command line utility, node-lz4, etc.
- "Block" API: This is recommended for simple purposes.
It compresses a single raw memory block to an LZ4 memory block and vice versa.
- "Streaming" API: This is designed for more complex tasks,
for example compressing a huge data stream in a restricted memory environment.
Basically, you should use the "Auto Framing" API.
But if you want to write an advanced application, it's time to use the Block or Streaming APIs; a minimal frame-API sketch is shown below.
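The sketch below is an illustration for this note, not one of the bundled examples (`frame_compress` is a made-up name); it compresses a whole buffer into a single frame using default preferences:
```
#include <lz4frame.h>
#include <stdlib.h>

/* Compress srcSize bytes from src into a self-describing LZ4 frame.
 * Returns the frame size, or 0 on error. The caller frees *dst. */
size_t frame_compress(const void* src, size_t srcSize, void** dst)
{
    const size_t dstCapacity = LZ4F_compressFrameBound(srcSize, NULL);  /* NULL = default preferences */
    *dst = malloc(dstCapacity);
    if (*dst == NULL) return 0;
    const size_t n = LZ4F_compressFrame(*dst, dstCapacity, src, srcSize, NULL);
    return LZ4F_isError(n) ? 0 : n;
}
```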
## What is the difference between the Block and Streaming APIs?
The Block API (de)compresses a single contiguous memory block.
In other words, the LZ4 library finds redundancy within a single contiguous memory block.
The Streaming API does the same thing, but (de)compresses multiple adjacent contiguous memory blocks,
so the LZ4 library can find more redundancy than with the Block API.
The following figures show the difference between the APIs and block sizes.
In these figures, the original data is split into 4 KiB contiguous chunks.
```
Original Data
+---------------+---------------+----+----+----+
| 4KiB Chunk A | 4KiB Chunk B | C | D |... |
+---------------+---------------+----+----+----+
Example (1) : Block API, 4KiB Block
+---------------+---------------+----+----+----+
| 4KiB Chunk A | 4KiB Chunk B | C | D |... |
+---------------+---------------+----+----+----+
| Block #1 | Block #2 | #3 | #4 |... |
+---------------+---------------+----+----+----+
(No Dependency)
Example (2) : Block API, 8KiB Block
+---------------+---------------+----+----+----+
| 4KiB Chunk A | 4KiB Chunk B | C | D |... |
+---------------+---------------+----+----+----+
| Block #1 |Block #2 |... |
+--------------------+----------+-------+-+----+
^ | ^ |
| | | |
+--------------+ +----+
Internal Dependency Internal Dependency
Example (3) : Streaming API, 4KiB Block
+---------------+---------------+-----+----+----+
| 4KiB Chunk A | 4KiB Chunk B | C | D |... |
+---------------+---------------+-----+----+----+
| Block #1 | Block #2 | #3 | #4 |... |
+---------------+----+----------+-+---+-+--+----+
^ | ^ | ^ |
| | | | | |
+--------------+ +--------+ +---+
Dependency Dependency Dependency
```
- In example (1), there is no dependency.
All blocks are compressed independently.
- In example (2), the 8 KiB blocks naturally have internal dependency.
But block #1 and #2 are still compressed independently of each other.
- In example (3), block #2 depends on #1,
#3 depends on #2 and #1, #4 depends on #3, #2 and #1, and so on.
Here we can observe the difference between examples (2) and (3).
In (2) there is no dependency between chunks B and C, but in (3) there is a dependency between B and C.
This dependency improves the compression ratio.
## Restriction of Streaming API
For efficiency, the Streaming API does not keep a mirror copy of the dependent (de)compressed memory.
This means users must keep this dependent (de)compressed memory available themselves.
Usually, the "dependent memory" is the previous adjacent contiguous memory, up to 64 KiB.
LZ4 will not access memory any further back.
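A minimal compression-side sketch of this, assuming 4 KiB chunks and an illustrative `[size][data]` layout per block (the function name and framing are not part of the library):
```
#include "lz4.h"
#include <stdio.h>

enum { CHUNK = 4 * 1024 };   /* assumed chunk size for this sketch */

/* A double buffer keeps the previous chunk at a stable address,
 * so each block may reference the chunk before it. */
void stream_compress(FILE* in, FILE* out)
{
    static char inBuf[2][CHUNK];
    char cmpBuf[LZ4_COMPRESSBOUND(CHUNK)];
    LZ4_stream_t lz4Stream;
    LZ4_resetStream(&lz4Stream);
    for (int idx = 0; ; idx ^= 1) {
        const int n = (int)fread(inBuf[idx], 1, CHUNK, in);
        if (n == 0) break;
        const int c = LZ4_compress_fast_continue(&lz4Stream, inBuf[idx], cmpBuf,
                                                 n, (int)sizeof(cmpBuf), 1);
        if (c <= 0) break;
        fwrite(&c, sizeof(c), 1, out);   /* length prefix so the decoder knows the block size */
        fwrite(cmpBuf, 1, (size_t)c, out);
    }
}
```
The matching decompressor mirrors the same buffering with `LZ4_setStreamDecode()` and `LZ4_decompress_safe_continue()`.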


@@ -1,2 +0,0 @@
# make install artefact
liblz4.pc

Binary file not shown.


@@ -1,180 +0,0 @@
# ################################################################
# LZ4 library - Makefile
# Copyright (C) Yann Collet 2011-2016
# All rights reserved.
#
# This Makefile is validated for Linux, macOS, *BSD, Hurd, Solaris, MSYS2 targets
#
# BSD license
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice, this
# list of conditions and the following disclaimer in the documentation and/or
# other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# You can contact the author at :
# - LZ4 source repository : https://github.com/Cyan4973/lz4
# - LZ4 forum group : https://groups.google.com/forum/#!forum/lz4c
# ################################################################
# Version numbers
LIBVER_MAJOR_SCRIPT:=`sed -n '/define LZ4_VERSION_MAJOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < ./lz4.h`
LIBVER_MINOR_SCRIPT:=`sed -n '/define LZ4_VERSION_MINOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < ./lz4.h`
LIBVER_PATCH_SCRIPT:=`sed -n '/define LZ4_VERSION_RELEASE/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < ./lz4.h`
LIBVER_SCRIPT:= $(LIBVER_MAJOR_SCRIPT).$(LIBVER_MINOR_SCRIPT).$(LIBVER_PATCH_SCRIPT)
LIBVER_MAJOR := $(shell echo $(LIBVER_MAJOR_SCRIPT))
LIBVER_MINOR := $(shell echo $(LIBVER_MINOR_SCRIPT))
LIBVER_PATCH := $(shell echo $(LIBVER_PATCH_SCRIPT))
LIBVER := $(shell echo $(LIBVER_SCRIPT))
BUILD_STATIC:=yes
CPPFLAGS+= -DXXH_NAMESPACE=LZ4_
CFLAGS ?= -O3
DEBUGFLAGS:= -Wall -Wextra -Wcast-qual -Wcast-align -Wshadow \
-Wswitch-enum -Wdeclaration-after-statement -Wstrict-prototypes -Wundef \
-Wpointer-arith -Wstrict-aliasing=1
CFLAGS += $(DEBUGFLAGS) $(MOREFLAGS)
FLAGS = $(CPPFLAGS) $(CFLAGS) $(LDFLAGS)
# OS X linker doesn't support -soname, and uses a different extension
# see : https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/DynamicLibraryDesignGuidelines.html
ifeq ($(shell uname), Darwin)
SHARED_EXT = dylib
SHARED_EXT_MAJOR = $(LIBVER_MAJOR).$(SHARED_EXT)
SHARED_EXT_VER = $(LIBVER).$(SHARED_EXT)
SONAME_FLAGS = -install_name $(LIBDIR)/liblz4.$(SHARED_EXT_MAJOR) -compatibility_version $(LIBVER_MAJOR) -current_version $(LIBVER)
else
SONAME_FLAGS = -Wl,-soname=liblz4.$(SHARED_EXT).$(LIBVER_MAJOR)
SHARED_EXT = so
SHARED_EXT_MAJOR = $(SHARED_EXT).$(LIBVER_MAJOR)
SHARED_EXT_VER = $(SHARED_EXT).$(LIBVER)
endif
LIBLZ4 = liblz4.$(SHARED_EXT_VER)
.PHONY: default
default: lib-release
lib-release: DEBUGFLAGS :=
lib-release: lib
lib: liblz4.a liblz4
all: lib
all32: CFLAGS+=-m32
all32: all
liblz4.a: *.c
ifeq ($(BUILD_STATIC),yes) # can be disabled on command line
@echo compiling static library
@$(CC) $(CPPFLAGS) $(CFLAGS) -c $^
@$(AR) rcs $@ *.o
endif
$(LIBLZ4): *.c
@echo compiling dynamic library $(LIBVER)
ifneq (,$(filter Windows%,$(OS)))
@$(CC) $(FLAGS) -DLZ4_DLL_EXPORT=1 -shared $^ -o dll\$@.dll
dlltool -D dll\liblz4.dll -d dll\liblz4.def -l dll\liblz4.lib
else
@$(CC) $(FLAGS) -shared $^ -fPIC -fvisibility=hidden $(SONAME_FLAGS) -o $@
@echo creating versioned links
@ln -sf $@ liblz4.$(SHARED_EXT_MAJOR)
@ln -sf $@ liblz4.$(SHARED_EXT)
endif
liblz4: $(LIBLZ4)
clean:
@$(RM) core *.o liblz4.pc dll/liblz4.dll dll/liblz4.lib
@$(RM) *.a *.$(SHARED_EXT) *.$(SHARED_EXT_MAJOR) *.$(SHARED_EXT_VER)
@echo Cleaning library completed
#-----------------------------------------------------------------------------
# make install is validated only for Linux, OSX, BSD, Hurd and Solaris targets
#-----------------------------------------------------------------------------
ifneq (,$(filter $(shell uname),Linux Darwin GNU/kFreeBSD GNU OpenBSD FreeBSD NetBSD DragonFly SunOS))
DESTDIR ?=
# directory variables : GNU convention prefers lowercase
# support both lower and uppercase (BSD), use uppercase in script
prefix ?= /usr/local
PREFIX ?= $(prefix)
exec_prefix ?= $(PREFIX)
libdir ?= $(exec_prefix)/lib
LIBDIR ?= $(libdir)
includedir ?= $(PREFIX)/include
INCLUDEDIR ?= $(includedir)
ifneq (,$(filter $(shell uname),OpenBSD FreeBSD NetBSD DragonFly))
PKGCONFIGDIR ?= $(PREFIX)/libdata/pkgconfig
else
PKGCONFIGDIR ?= $(LIBDIR)/pkgconfig
endif
ifneq (,$(filter $(shell uname),SunOS))
INSTALL ?= ginstall
else
INSTALL ?= install
endif
INSTALL_PROGRAM ?= $(INSTALL)
INSTALL_DATA ?= $(INSTALL) -m 644
liblz4.pc: liblz4.pc.in Makefile
@echo creating pkgconfig
@sed -e 's|@PREFIX@|$(PREFIX)|' \
-e 's|@LIBDIR@|$(LIBDIR)|' \
-e 's|@INCLUDEDIR@|$(INCLUDEDIR)|' \
-e 's|@VERSION@|$(LIBVER)|' \
$< >$@
install: lib liblz4.pc
@$(INSTALL) -d -m 755 $(DESTDIR)$(PKGCONFIGDIR)/ $(DESTDIR)$(INCLUDEDIR)/ $(DESTDIR)$(LIBDIR)/
@$(INSTALL_DATA) liblz4.pc $(DESTDIR)$(PKGCONFIGDIR)/
@echo Installing libraries
ifeq ($(BUILD_STATIC),yes)
@$(INSTALL_DATA) liblz4.a $(DESTDIR)$(LIBDIR)/liblz4.a
@$(INSTALL_DATA) lz4frame_static.h $(DESTDIR)$(INCLUDEDIR)/lz4frame_static.h
endif
@$(INSTALL_PROGRAM) liblz4.$(SHARED_EXT_VER) $(DESTDIR)$(LIBDIR)
@ln -sf liblz4.$(SHARED_EXT_VER) $(DESTDIR)$(LIBDIR)/liblz4.$(SHARED_EXT_MAJOR)
@ln -sf liblz4.$(SHARED_EXT_VER) $(DESTDIR)$(LIBDIR)/liblz4.$(SHARED_EXT)
@echo Installing headers in $(INCLUDEDIR)
@$(INSTALL_DATA) lz4.h $(DESTDIR)$(INCLUDEDIR)/lz4.h
@$(INSTALL_DATA) lz4hc.h $(DESTDIR)$(INCLUDEDIR)/lz4hc.h
@$(INSTALL_DATA) lz4frame.h $(DESTDIR)$(INCLUDEDIR)/lz4frame.h
@echo lz4 libraries installed
uninstall:
@$(RM) $(DESTDIR)$(LIBDIR)/pkgconfig/liblz4.pc
@$(RM) $(DESTDIR)$(LIBDIR)/liblz4.$(SHARED_EXT)
@$(RM) $(DESTDIR)$(LIBDIR)/liblz4.$(SHARED_EXT_MAJOR)
@$(RM) $(DESTDIR)$(LIBDIR)/liblz4.$(SHARED_EXT_VER)
@$(RM) $(DESTDIR)$(LIBDIR)/liblz4.a
@$(RM) $(DESTDIR)$(INCLUDEDIR)/lz4.h
@$(RM) $(DESTDIR)$(INCLUDEDIR)/lz4hc.h
@$(RM) $(DESTDIR)$(INCLUDEDIR)/lz4frame.h
@echo lz4 libraries successfully uninstalled
endif


@@ -1,73 +0,0 @@
LZ4 - Library Files
================================
This directory contains many files, but depending on your project's objectives,
not all of them are necessary.
#### Minimal LZ4 build
The minimum required is **`lz4.c`** and **`lz4.h`**,
which will provide the fast compression and decompression algorithm.
#### The High Compression variant of LZ4
For more compression at the cost of compression speed,
the High Compression variant **lz4hc** is available.
It's necessary to add **`lz4hc.c`** and **`lz4hc.h`**.
The variant still depends on regular `lz4` source files.
In particular, the decompression is still provided by `lz4.c`.
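As a quick, hypothetical illustration (not one of the bundled example programs), the HC entry point mirrors the regular one and only adds a compression level, while decompression still goes through `lz4.h`:
```
#include "lz4.h"
#include "lz4hc.h"

/* Compress with the HC variant, then decompress with the regular entry point. */
int hc_roundtrip(const char* src, int srcSize,
                 char* dst, int dstCapacity,
                 char* regen, int regenCapacity)
{
    const int compressedSize = LZ4_compress_HC(src, dst, srcSize, dstCapacity, 9 /* level */);
    if (compressedSize <= 0) return -1;
    return LZ4_decompress_safe(dst, regen, compressedSize, regenCapacity);
}
```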
#### Compatibility issues
In order to produce files or streams compatible with `lz4` command line utility,
it's necessary to encode lz4-compressed blocks using the [official interoperable frame format].
This format is generated and decoded automatically by the **lz4frame** library.
In order to work properly, lz4frame needs lz4 and lz4hc, and also **xxhash**,
which provides error detection.
(_Advanced stuff_ : It's possible to hide xxhash symbols into a local namespace.
This is what `liblz4` does, to avoid symbol duplication
in case a user program would link to several libraries containing xxhash symbols.)
#### Advanced API
A more complex `lz4frame_static.h` is also provided.
It contains definitions which are not guaranteed to remain stable within future versions.
It must be used with static linking ***only***.
#### Using MinGW+MSYS to create DLL
DLL can be created using MinGW+MSYS with the `make liblz4` command.
This command creates `dll\liblz4.dll` and the import library `dll\liblz4.lib`.
The import library is only required with Visual C++.
The header files `lz4.h`, `lz4hc.h`, `lz4frame.h` and the dynamic library
`dll\liblz4.dll` are required to compile a project using gcc/MinGW.
The dynamic library has to be added to linking options.
It means that if a project that uses LZ4 consists of a single `test-dll.c`
file it should be linked with `dll\liblz4.dll`. For example:
```
gcc $(CFLAGS) -Iinclude/ test-dll.c -o test-dll dll\liblz4.dll
```
The compiled executable will require LZ4 DLL which is available at `dll\liblz4.dll`.
#### Miscellaneous
Other files present in the directory are not source code. There are :
- LICENSE : contains the BSD license text
- Makefile : script to compile or install lz4 library (static or dynamic)
- liblz4.pc.in : for pkg-config (make install)
- README.md : this file
[official interoperable frame format]: ../doc/lz4_Frame_format.md
#### License
All source material within __lib__ directory are BSD 2-Clause licensed.
See [LICENSE](LICENSE) for details.
The license is also repeated at the top of each source file.


@@ -1,63 +0,0 @@
# ##########################################################################
# LZ4 programs - Makefile
# Copyright (C) Yann Collet 2016
#
# GPL v2 License
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# You can contact the author at :
# - LZ4 homepage : http://www.lz4.org
# - LZ4 source repository : https://github.com/lz4/lz4
# ##########################################################################
VOID := /dev/null
LZ4DIR := ../include
LIBDIR := ../static
DLLDIR := ../dll
CFLAGS ?= -O3 # can select custom flags. For example : CFLAGS="-O2 -g" make
CFLAGS += -Wall -Wextra -Wundef -Wcast-qual -Wcast-align -Wshadow -Wswitch-enum \
-Wdeclaration-after-statement -Wstrict-prototypes \
-Wpointer-arith -Wstrict-aliasing=1
CFLAGS += $(MOREFLAGS)
CPPFLAGS:= -I$(LZ4DIR) -DXXH_NAMESPACE=LZ4_
FLAGS := $(CFLAGS) $(CPPFLAGS) $(LDFLAGS)
# Define *.exe as extension for Windows systems
ifneq (,$(filter Windows%,$(OS)))
EXT =.exe
else
EXT =
endif
.PHONY: default fullbench-dll fullbench-lib
default: all
all: fullbench-dll fullbench-lib
fullbench-lib: fullbench.c xxhash.c
$(CC) $(FLAGS) $^ -o $@$(EXT) $(LIBDIR)/liblz4_static.lib
fullbench-dll: fullbench.c xxhash.c
$(CC) $(FLAGS) $^ -o $@$(EXT) -DLZ4_DLL_IMPORT=1 $(DLLDIR)/liblz4.dll
clean:
@$(RM) fullbench-dll$(EXT) fullbench-lib$(EXT)
@echo Cleaning completed


@@ -1,69 +0,0 @@
LZ4 Windows binary package
====================================
#### The package contents
- `lz4.exe` : Command Line Utility, supporting gzip-like arguments
- `dll\liblz4.dll` : The DLL of LZ4 library
- `dll\liblz4.lib` : The import library of LZ4 library for Visual C++
- `example\` : The example of usage of LZ4 library
- `include\` : Header files required with LZ4 library
- `static\liblz4_static.lib` : The static LZ4 library
#### Usage of Command Line Interface
Command Line Interface (CLI) supports gzip-like arguments.
By default CLI takes an input file and compresses it to an output file:
```
Usage: lz4 [arg] [input] [output]
```
The full list of commands for the CLI can be obtained with `-h` or `-H`. The compression ratio can
be improved with levels from `-3` to `-16`, but higher levels also compress more slowly. The CLI includes
an in-memory compression benchmark module: the starting level is given with `-b`, the ending level with `-e`,
and the iteration time in seconds with `-i`.
The CLI supports aggregation of parameters, i.e. `-b1`, `-e18`, and `-i1` can be joined
into `-b1e18i1`.
#### Example of using the static and dynamic LZ4 libraries with gcc/MinGW
Use `cd example` and `make` to build `fullbench-dll` and `fullbench-lib`.
`fullbench-dll` uses the dynamic LZ4 library from the `dll` directory.
`fullbench-lib` uses the static LZ4 library from the `static` directory.
#### Using LZ4 DLL with gcc/MinGW
The header files from `include\` and the dynamic library `dll\liblz4.dll`
are required to compile a project using gcc/MinGW.
The dynamic library has to be added to linking options.
This means that if a project using LZ4 consists of a single `test-dll.c`
file, it should be linked with `dll\liblz4.dll`. For example:
```
gcc $(CFLAGS) -Iinclude\ test-dll.c -o test-dll dll\liblz4.dll
```
The compiled executable will require LZ4 DLL which is available at `dll\liblz4.dll`.
#### Example of using the static and dynamic LZ4 libraries with Visual C++
Open `example\fullbench-dll.sln` to compile `fullbench-dll`, which uses the
dynamic LZ4 library from the `dll` directory. The solution works with Visual C++
2010 or newer. When the solution is opened with a version of Visual C++ newer than
2010, it will be upgraded to the current version.
#### Using LZ4 DLL with Visual C++
The header files from `include\` and the import library `dll\liblz4.lib`
are required to compile a project using Visual C++.
1. The header files should be added to `Additional Include Directories` that can
be found in project properties `C/C++` then `General`.
2. The import library has to be added to `Additional Dependencies` that can
be found in project properties `Linker` then `Input`.
If only the name `liblz4.lib` is provided, without a full path to the library,
the containing directory has to be added to `Linker\General\Additional Library Directories`.
The compiled executable will require LZ4 DLL which is available at `dll\liblz4.dll`.


@@ -1,25 +0,0 @@
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Express 2012 for Windows Desktop
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "fullbench-dll", "fullbench-dll.vcxproj", "{13992FD2-077E-4954-B065-A428198201A9}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Win32 = Debug|Win32
Debug|x64 = Debug|x64
Release|Win32 = Release|Win32
Release|x64 = Release|x64
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{13992FD2-077E-4954-B065-A428198201A9}.Debug|Win32.ActiveCfg = Debug|Win32
{13992FD2-077E-4954-B065-A428198201A9}.Debug|Win32.Build.0 = Debug|Win32
{13992FD2-077E-4954-B065-A428198201A9}.Debug|x64.ActiveCfg = Debug|x64
{13992FD2-077E-4954-B065-A428198201A9}.Debug|x64.Build.0 = Debug|x64
{13992FD2-077E-4954-B065-A428198201A9}.Release|Win32.ActiveCfg = Release|Win32
{13992FD2-077E-4954-B065-A428198201A9}.Release|Win32.Build.0 = Release|Win32
{13992FD2-077E-4954-B065-A428198201A9}.Release|x64.ActiveCfg = Release|x64
{13992FD2-077E-4954-B065-A428198201A9}.Release|x64.Build.0 = Release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
EndGlobal


@@ -1,182 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<ProjectGuid>{13992FD2-077E-4954-B065-A428198201A9}</ProjectGuid>
<Keyword>Win32Proj</Keyword>
<RootNamespace>fullbench-dll</RootNamespace>
<OutDir>$(SolutionDir)bin\$(Platform)_$(Configuration)\</OutDir>
<IntDir>$(SolutionDir)bin\obj\$(RootNamespace)_$(Platform)_$(Configuration)\</IntDir>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<LinkIncremental>true</LinkIncremental>
<IncludePath>$(IncludePath);$(UniversalCRT_IncludePath);$(SolutionDir)..\..\lib;$(VCInstallDir)include;$(VCInstallDir)atlmfc\include;$(WindowsSDK_IncludePath);</IncludePath>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<LinkIncremental>true</LinkIncremental>
<IncludePath>$(IncludePath);$(UniversalCRT_IncludePath);$(SolutionDir)..\..\lib;$(VCInstallDir)include;$(VCInstallDir)atlmfc\include;$(WindowsSDK_IncludePath);</IncludePath>
<RunCodeAnalysis>true</RunCodeAnalysis>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<LinkIncremental>false</LinkIncremental>
<IncludePath>$(IncludePath);$(UniversalCRT_IncludePath);$(SolutionDir)..\..\lib;$(VCInstallDir)include;$(VCInstallDir)atlmfc\include;$(WindowsSDK_IncludePath);</IncludePath>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<LinkIncremental>false</LinkIncremental>
<IncludePath>$(IncludePath);$(UniversalCRT_IncludePath);$(SolutionDir)..\..\lib;$(VCInstallDir)include;$(VCInstallDir)atlmfc\include;$(WindowsSDK_IncludePath);</IncludePath>
<RunCodeAnalysis>true</RunCodeAnalysis>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level4</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;LZ4_DLL_IMPORT=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<TreatWarningAsError>true</TreatWarningAsError>
<EnablePREfast>false</EnablePREfast>
<AdditionalIncludeDirectories>..\include</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>$(SolutionDir)..\dll;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>liblz4.lib;%(AdditionalDependencies)</AdditionalDependencies>
<ImageHasSafeExceptionHandlers>false</ImageHasSafeExceptionHandlers>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level4</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;LZ4_DLL_IMPORT=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<TreatWarningAsError>true</TreatWarningAsError>
<EnablePREfast>true</EnablePREfast>
<AdditionalOptions>/analyze:stacksize295252 %(AdditionalOptions)</AdditionalOptions>
<AdditionalIncludeDirectories>..\include</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>$(SolutionDir)..\dll;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>liblz4.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<WarningLevel>Level4</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;LZ4_DLL_IMPORT=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<TreatWarningAsError>false</TreatWarningAsError>
<EnablePREfast>false</EnablePREfast>
<AdditionalIncludeDirectories>..\include</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<AdditionalLibraryDirectories>$(SolutionDir)..\dll;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>liblz4.lib;%(AdditionalDependencies)</AdditionalDependencies>
<ImageHasSafeExceptionHandlers>false</ImageHasSafeExceptionHandlers>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<WarningLevel>Level4</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;LZ4_DLL_IMPORT=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<TreatWarningAsError>false</TreatWarningAsError>
<EnablePREfast>true</EnablePREfast>
<AdditionalOptions>/analyze:stacksize295252 %(AdditionalOptions)</AdditionalOptions>
<AdditionalIncludeDirectories>..\include</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<AdditionalLibraryDirectories>$(SolutionDir)..\dll;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>liblz4.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="xxhash.c" />
<ClCompile Include="fullbench.c" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="..\include\lz4.h" />
<ClInclude Include="..\include\lz4frame.h" />
<ClInclude Include="..\include\lz4hc.h" />
<ClInclude Include="xxhash.h" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>


@@ -1,62 +0,0 @@
LIBRARY liblz4.dll
EXPORTS
LZ4F_compressBegin
LZ4F_compressBound
LZ4F_compressEnd
LZ4F_compressFrame
LZ4F_compressFrameBound
LZ4F_compressUpdate
LZ4F_createCompressionContext
LZ4F_createDecompressionContext
LZ4F_decompress
LZ4F_flush
LZ4F_freeCompressionContext
LZ4F_freeDecompressionContext
LZ4F_getErrorName
LZ4F_getFrameInfo
LZ4F_getVersion
LZ4F_isError
LZ4_compress
LZ4_compressBound
LZ4_compressHC
LZ4_compressHC_continue
LZ4_compressHC_limitedOutput
LZ4_compressHC_limitedOutput_continue
LZ4_compressHC_limitedOutput_withStateHC
LZ4_compressHC_withStateHC
LZ4_compress_HC
LZ4_compress_HC_continue
LZ4_compress_HC_extStateHC
LZ4_compress_continue
LZ4_compress_default
LZ4_compress_destSize
LZ4_compress_fast
LZ4_compress_fast_continue
LZ4_compress_fast_extState
LZ4_compress_limitedOutput
LZ4_compress_limitedOutput_continue
LZ4_compress_limitedOutput_withState
LZ4_compress_withState
LZ4_createStream
LZ4_createStreamDecode
LZ4_createStreamHC
LZ4_decompress_fast
LZ4_decompress_fast_continue
LZ4_decompress_fast_usingDict
LZ4_decompress_safe
LZ4_decompress_safe_continue
LZ4_decompress_safe_partial
LZ4_decompress_safe_usingDict
LZ4_freeStream
LZ4_freeStreamDecode
LZ4_freeStreamHC
LZ4_loadDict
LZ4_loadDictHC
LZ4_resetStream
LZ4_resetStreamHC
LZ4_saveDict
LZ4_saveDictHC
LZ4_setStreamDecode
LZ4_sizeofState
LZ4_sizeofStateHC
LZ4_versionNumber


@@ -1,14 +0,0 @@
# LZ4 - Fast LZ compression algorithm
# Copyright (C) 2011-2014, Yann Collet.
# BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
prefix=@PREFIX@
libdir=@LIBDIR@
includedir=@INCLUDEDIR@
Name: lz4
Description: extremely fast lossless compression algorithm library
URL: http://www.lz4.org/
Version: @VERSION@
Libs: -L@LIBDIR@ -llz4
Cflags: -I@INCLUDEDIR@

File diff suppressed because it is too large


@@ -1,463 +0,0 @@
/*
* LZ4 - Fast LZ compression algorithm
* Header File
* Copyright (C) 2011-2017, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 homepage : http://www.lz4.org
- LZ4 source repository : https://github.com/lz4/lz4
*/
#if defined (__cplusplus)
extern "C" {
#endif
#ifndef LZ4_H_2983827168210
#define LZ4_H_2983827168210
/* --- Dependency --- */
#include <stddef.h> /* size_t */
/**
Introduction
LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core,
scalable with multi-core CPUs. It features an extremely fast decoder, with speed in
multiple GB/s per core, typically reaching RAM speed limits on multi-core systems.
The LZ4 compression library provides in-memory compression and decompression functions.
Compression can be done in:
- a single step (described as Simple Functions)
- a single step, reusing a context (described in Advanced Functions)
- unbounded multiple steps (described as Streaming compression)
lz4.h provides block compression functions. It gives full buffer control to user.
Decompressing an lz4-compressed block also requires metadata (such as compressed size).
Each application is free to encode such metadata in whichever way it wants.
An additional format, called LZ4 frame specification (doc/lz4_Frame_format.md),
takes care of encoding standard metadata alongside LZ4-compressed blocks.
If your application requires interoperability, it's recommended to use it.
A library is provided to take care of it, see lz4frame.h.
*/
/*^***************************************************************
* Export parameters
*****************************************************************/
/*
* LZ4_DLL_EXPORT :
* Enable exporting of functions when building a Windows DLL
* LZ4LIB_API :
* Control library symbols visibility.
*/
#if defined(LZ4_DLL_EXPORT) && (LZ4_DLL_EXPORT==1)
# define LZ4LIB_API __declspec(dllexport)
#elif defined(LZ4_DLL_IMPORT) && (LZ4_DLL_IMPORT==1)
# define LZ4LIB_API __declspec(dllimport) /* It isn't required but allows to generate better code, saving a function pointer load from the IAT and an indirect jump.*/
#elif defined(__GNUC__) && (__GNUC__ >= 4)
# define LZ4LIB_API __attribute__ ((__visibility__ ("default")))
#else
# define LZ4LIB_API
#endif
/*------ Version ------*/
#define LZ4_VERSION_MAJOR 1 /* for breaking interface changes */
#define LZ4_VERSION_MINOR 8 /* for new (non-breaking) interface capabilities */
#define LZ4_VERSION_RELEASE 0 /* for tweaks, bug-fixes, or development */
#define LZ4_VERSION_NUMBER (LZ4_VERSION_MAJOR *100*100 + LZ4_VERSION_MINOR *100 + LZ4_VERSION_RELEASE)
#define LZ4_LIB_VERSION LZ4_VERSION_MAJOR.LZ4_VERSION_MINOR.LZ4_VERSION_RELEASE
#define LZ4_QUOTE(str) #str
#define LZ4_EXPAND_AND_QUOTE(str) LZ4_QUOTE(str)
#define LZ4_VERSION_STRING LZ4_EXPAND_AND_QUOTE(LZ4_LIB_VERSION)
LZ4LIB_API int LZ4_versionNumber (void); /**< library version number; to be used when checking dll version */
LZ4LIB_API const char* LZ4_versionString (void); /**< library version string; to be used when checking dll version */
/*-************************************
* Tuning parameter
**************************************/
/*!
* LZ4_MEMORY_USAGE :
* Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
* Increasing memory usage improves compression ratio
* Reduced memory usage can improve speed, due to cache effect
* Default value is 14, for 16KB, which nicely fits into Intel x86 L1 cache
*/
#ifndef LZ4_MEMORY_USAGE
# define LZ4_MEMORY_USAGE 14
#endif
/*-************************************
* Simple Functions
**************************************/
/*! LZ4_compress_default() :
Compresses 'sourceSize' bytes from buffer 'source'
into already allocated 'dest' buffer of size 'maxDestSize'.
Compression is guaranteed to succeed if 'maxDestSize' >= LZ4_compressBound(sourceSize).
It also runs faster, so it's a recommended setting.
If the function cannot compress 'source' into a more limited 'dest' budget,
compression stops *immediately*, and the function result is zero.
As a consequence, 'dest' content is not valid.
This function never writes outside 'dest' buffer, nor reads outside 'source' buffer.
sourceSize : Max supported value is LZ4_MAX_INPUT_VALUE
maxDestSize : full or partial size of buffer 'dest' (which must be already allocated)
return : the number of bytes written into buffer 'dest' (necessarily <= maxOutputSize)
or 0 if compression fails */
LZ4LIB_API int LZ4_compress_default(const char* source, char* dest, int sourceSize, int maxDestSize);
/*! LZ4_decompress_safe() :
compressedSize : is the precise full size of the compressed block.
maxDecompressedSize : is the size of destination buffer, which must be already allocated.
return : the number of bytes decompressed into destination buffer (necessarily <= maxDecompressedSize)
If destination buffer is not large enough, decoding will stop and output an error code (<0).
If the source stream is detected malformed, the function will stop decoding and return a negative result.
This function is protected against buffer overflow exploits, including malicious data packets.
It never writes outside output buffer, nor reads outside input buffer.
*/
LZ4LIB_API int LZ4_decompress_safe (const char* source, char* dest, int compressedSize, int maxDecompressedSize);
/*-************************************
* Advanced Functions
**************************************/
#define LZ4_MAX_INPUT_SIZE 0x7E000000 /* 2 113 929 216 bytes */
#define LZ4_COMPRESSBOUND(isize) ((unsigned)(isize) > (unsigned)LZ4_MAX_INPUT_SIZE ? 0 : (isize) + ((isize)/255) + 16)
/*!
LZ4_compressBound() :
Provides the maximum size that LZ4 compression may output in a "worst case" scenario (input data not compressible)
This function is primarily useful for memory allocation purposes (destination buffer size).
Macro LZ4_COMPRESSBOUND() is also provided for compilation-time evaluation (stack memory allocation for example).
Note that LZ4_compress_default() compresses faster when dest buffer size is >= LZ4_compressBound(srcSize)
inputSize : max supported value is LZ4_MAX_INPUT_SIZE
return : maximum output size in a "worst case" scenario
or 0, if input size is too large ( > LZ4_MAX_INPUT_SIZE)
*/
LZ4LIB_API int LZ4_compressBound(int inputSize);
/*!
LZ4_compress_fast() :
Same as LZ4_compress_default(), but allows selecting an "acceleration" factor.
The larger the acceleration value, the faster the algorithm, but also the lower the compression ratio.
It's a trade-off. It can be fine-tuned, with each successive value providing roughly +~3% to speed.
An acceleration value of "1" is the same as regular LZ4_compress_default()
Values <= 0 will be replaced by ACCELERATION_DEFAULT (see lz4.c), which is 1.
*/
LZ4LIB_API int LZ4_compress_fast (const char* source, char* dest, int sourceSize, int maxDestSize, int acceleration);
/*!
LZ4_compress_fast_extState() :
Same compression function, just using an externally allocated memory space to store compression state.
Use LZ4_sizeofState() to know how much memory must be allocated,
and allocate it on 8-bytes boundaries (using malloc() typically).
Then, provide it as 'void* state' to compression function.
*/
LZ4LIB_API int LZ4_sizeofState(void);
LZ4LIB_API int LZ4_compress_fast_extState (void* state, const char* source, char* dest, int inputSize, int maxDestSize, int acceleration);
/*!
LZ4_compress_destSize() :
Reverse the logic, by compressing as much data as possible from 'source' buffer
into already allocated buffer 'dest' of size 'targetDestSize'.
This function either compresses the entire 'source' content into 'dest' if it's large enough,
or fills the 'dest' buffer completely with as much data as possible from 'source'.
*sourceSizePtr : will be modified to indicate how many bytes were read from 'source' to fill 'dest'.
New value is necessarily <= old value.
return : Nb bytes written into 'dest' (necessarily <= targetDestSize)
or 0 if compression fails
*/
LZ4LIB_API int LZ4_compress_destSize (const char* source, char* dest, int* sourceSizePtr, int targetDestSize);
/*!
LZ4_decompress_fast() :
originalSize : is the original and therefore uncompressed size
return : the number of bytes read from the source buffer (in other words, the compressed size)
If the source stream is detected malformed, the function will stop decoding and return a negative result.
Destination buffer must be already allocated. Its size must be a minimum of 'originalSize' bytes.
note : This function fully respect memory boundaries for properly formed compressed data.
It is a bit faster than LZ4_decompress_safe().
However, it does not provide any protection against intentionally modified data stream (malicious input).
Use this function in trusted environment only (data to decode comes from a trusted source).
*/
LZ4LIB_API int LZ4_decompress_fast (const char* source, char* dest, int originalSize);
/*!
LZ4_decompress_safe_partial() :
This function decompresses a compressed block of size 'compressedSize' at position 'source'
into destination buffer 'dest' of size 'maxDecompressedSize'.
The function tries to stop the decompression operation as soon as 'targetOutputSize' has been reached,
reducing decompression time.
return : the number of bytes decoded in the destination buffer (necessarily <= maxDecompressedSize)
Note : this number can be < 'targetOutputSize' should the compressed block to decode be smaller.
Always control how many bytes were decoded.
If the source stream is detected malformed, the function will stop decoding and return a negative result.
This function never writes outside of output buffer, and never reads outside of input buffer. It is therefore protected against malicious data packets
*/
LZ4LIB_API int LZ4_decompress_safe_partial (const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize);
/*-*********************************************
* Streaming Compression Functions
***********************************************/
typedef union LZ4_stream_u LZ4_stream_t; /* incomplete type (defined later) */
/*! LZ4_createStream() and LZ4_freeStream() :
* LZ4_createStream() will allocate and initialize an `LZ4_stream_t` structure.
* LZ4_freeStream() releases its memory.
*/
LZ4LIB_API LZ4_stream_t* LZ4_createStream(void);
LZ4LIB_API int LZ4_freeStream (LZ4_stream_t* streamPtr);
/*! LZ4_resetStream() :
* An LZ4_stream_t structure can be allocated once and re-used multiple times.
* Use this function to init an allocated `LZ4_stream_t` structure and start a new compression.
*/
LZ4LIB_API void LZ4_resetStream (LZ4_stream_t* streamPtr);
/*! LZ4_loadDict() :
* Use this function to load a static dictionary into LZ4_stream.
* Any previous data will be forgotten, only 'dictionary' will remain in memory.
* Loading a size of 0 is allowed.
* Return : dictionary size, in bytes (necessarily <= 64 KB)
*/
LZ4LIB_API int LZ4_loadDict (LZ4_stream_t* streamPtr, const char* dictionary, int dictSize);
/*! LZ4_compress_fast_continue() :
* Compress buffer content 'src', using data from previously compressed blocks as dictionary to improve compression ratio.
* Important : Previous data blocks are assumed to remain present and unmodified !
* 'dst' buffer must be already allocated.
* If dstCapacity >= LZ4_compressBound(srcSize), compression is guaranteed to succeed, and runs faster.
* If not, and if compressed data cannot fit into 'dst' buffer size, compression stops, and function @return==0.
* After an error, the stream status is invalid, it can only be reset or freed.
*/
LZ4LIB_API int LZ4_compress_fast_continue (LZ4_stream_t* streamPtr, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration);
/*! LZ4_saveDict() :
* If previously compressed data block is not guaranteed to remain available at its current memory location,
* save it into a safer place (char* safeBuffer).
* Note : it's not necessary to call LZ4_loadDict() after LZ4_saveDict(), dictionary is immediately usable.
* @return : saved dictionary size in bytes (necessarily <= dictSize), or 0 if error.
*/
LZ4LIB_API int LZ4_saveDict (LZ4_stream_t* streamPtr, char* safeBuffer, int dictSize);
/*-**********************************************
* Streaming Decompression Functions
* Bufferless synchronous API
************************************************/
typedef union LZ4_streamDecode_u LZ4_streamDecode_t; /* incomplete type (defined later) */
/*! LZ4_createStreamDecode() and LZ4_freeStreamDecode() :
* creation / destruction of streaming decompression tracking structure */
LZ4LIB_API LZ4_streamDecode_t* LZ4_createStreamDecode(void);
LZ4LIB_API int LZ4_freeStreamDecode (LZ4_streamDecode_t* LZ4_stream);
/*! LZ4_setStreamDecode() :
* Use this function to instruct where to find the dictionary.
* Setting a size of 0 is allowed (same effect as reset).
* @return : 1 if OK, 0 if error
*/
LZ4LIB_API int LZ4_setStreamDecode (LZ4_streamDecode_t* LZ4_streamDecode, const char* dictionary, int dictSize);
/*! LZ4_decompress_*_continue() :
* These decoding functions allow decompression of multiple blocks in "streaming" mode.
* Previously decoded blocks *must* remain available at the memory position where they were decoded (up to 64 KB)
* In the case of ring buffers, the decoding buffer must be either :
* - Exactly same size as encoding buffer, with same update rule (block boundaries at same positions)
* In which case, the decoding & encoding ring buffer can have any size, including very small ones ( < 64 KB).
* - Larger than encoding buffer, by a minimum of maxBlockSize more bytes.
* maxBlockSize is implementation dependent. It's the maximum size you intend to compress into a single block.
* In which case, encoding and decoding buffers do not need to be synchronized,
* and encoding ring buffer can have any size, including small ones ( < 64 KB).
* - _At least_ 64 KB + 8 bytes + maxBlockSize.
* In which case, encoding and decoding buffers do not need to be synchronized,
* and encoding ring buffer can have any size, including larger than decoding buffer.
* Whenever these conditions are not possible, save the last 64KB of decoded data into a safe buffer,
* and indicate where it is saved using LZ4_setStreamDecode()
*/
LZ4LIB_API int LZ4_decompress_safe_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int compressedSize, int maxDecompressedSize);
LZ4LIB_API int LZ4_decompress_fast_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int originalSize);
/*! LZ4_decompress_*_usingDict() :
* These decoding functions work the same as
* a combination of LZ4_setStreamDecode() followed by LZ4_decompress_*_continue()
* They are stand-alone, and don't need an LZ4_streamDecode_t structure.
*/
LZ4LIB_API int LZ4_decompress_safe_usingDict (const char* source, char* dest, int compressedSize, int maxDecompressedSize, const char* dictStart, int dictSize);
LZ4LIB_API int LZ4_decompress_fast_usingDict (const char* source, char* dest, int originalSize, const char* dictStart, int dictSize);
/*^**********************************************
* !!!!!! STATIC LINKING ONLY !!!!!!
***********************************************/
/*-************************************
* Private definitions
**************************************
* Do not use these definitions.
* They are exposed to allow static allocation of `LZ4_stream_t` and `LZ4_streamDecode_t`.
* Using these definitions will expose code to API and/or ABI break in future versions of the library.
**************************************/
#define LZ4_HASHLOG (LZ4_MEMORY_USAGE-2)
#define LZ4_HASHTABLESIZE (1 << LZ4_MEMORY_USAGE)
#define LZ4_HASH_SIZE_U32 (1 << LZ4_HASHLOG) /* required as macro for static allocation */
#if defined(__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
#include <stdint.h>
typedef struct {
uint32_t hashTable[LZ4_HASH_SIZE_U32];
uint32_t currentOffset;
uint32_t initCheck;
const uint8_t* dictionary;
uint8_t* bufferStart; /* obsolete, used for slideInputBuffer */
uint32_t dictSize;
} LZ4_stream_t_internal;
typedef struct {
const uint8_t* externalDict;
size_t extDictSize;
const uint8_t* prefixEnd;
size_t prefixSize;
} LZ4_streamDecode_t_internal;
#else
typedef struct {
unsigned int hashTable[LZ4_HASH_SIZE_U32];
unsigned int currentOffset;
unsigned int initCheck;
const unsigned char* dictionary;
unsigned char* bufferStart; /* obsolete, used for slideInputBuffer */
unsigned int dictSize;
} LZ4_stream_t_internal;
typedef struct {
const unsigned char* externalDict;
size_t extDictSize;
const unsigned char* prefixEnd;
size_t prefixSize;
} LZ4_streamDecode_t_internal;
#endif
/*!
* LZ4_stream_t :
* information structure to track an LZ4 stream.
* init this structure before first use.
* note : only use in association with static linking !
* this definition is not API/ABI safe,
* it may change in a future version !
*/
#define LZ4_STREAMSIZE_U64 ((1 << (LZ4_MEMORY_USAGE-3)) + 4)
#define LZ4_STREAMSIZE (LZ4_STREAMSIZE_U64 * sizeof(unsigned long long))
union LZ4_stream_u {
unsigned long long table[LZ4_STREAMSIZE_U64];
LZ4_stream_t_internal internal_donotuse;
} ; /* previously typedef'd to LZ4_stream_t */
/*!
* LZ4_streamDecode_t :
* information structure to track an LZ4 stream during decompression.
* init this structure using LZ4_setStreamDecode (or memset()) before first use
* note : only use in association with static linking !
* this definition is not API/ABI safe,
* and may change in a future version !
*/
#define LZ4_STREAMDECODESIZE_U64 4
#define LZ4_STREAMDECODESIZE (LZ4_STREAMDECODESIZE_U64 * sizeof(unsigned long long))
union LZ4_streamDecode_u {
unsigned long long table[LZ4_STREAMDECODESIZE_U64];
LZ4_streamDecode_t_internal internal_donotuse;
} ; /* previously typedef'd to LZ4_streamDecode_t */
/*-************************************
* Obsolete Functions
**************************************/
/*! Deprecation warnings
Should deprecation warnings be a problem,
it is generally possible to disable them,
typically with -Wno-deprecated-declarations for gcc
or _CRT_SECURE_NO_WARNINGS in Visual.
Otherwise, it's also possible to define LZ4_DISABLE_DEPRECATE_WARNINGS */
#ifdef LZ4_DISABLE_DEPRECATE_WARNINGS
# define LZ4_DEPRECATED(message) /* disable deprecation warnings */
#else
# define LZ4_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
# if defined (__cplusplus) && (__cplusplus >= 201402) /* C++14 or greater */
# define LZ4_DEPRECATED(message) [[deprecated(message)]]
# elif (LZ4_GCC_VERSION >= 405) || defined(__clang__)
# define LZ4_DEPRECATED(message) __attribute__((deprecated(message)))
# elif (LZ4_GCC_VERSION >= 301)
# define LZ4_DEPRECATED(message) __attribute__((deprecated))
# elif defined(_MSC_VER)
# define LZ4_DEPRECATED(message) __declspec(deprecated(message))
# else
# pragma message("WARNING: You need to implement LZ4_DEPRECATED for this compiler")
# define LZ4_DEPRECATED(message)
# endif
#endif /* LZ4_DISABLE_DEPRECATE_WARNINGS */
/* Obsolete compression functions */
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_default() instead") int LZ4_compress (const char* source, char* dest, int sourceSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_default() instead") int LZ4_compress_limitedOutput (const char* source, char* dest, int sourceSize, int maxOutputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_fast_extState() instead") int LZ4_compress_withState (void* state, const char* source, char* dest, int inputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_fast_extState() instead") int LZ4_compress_limitedOutput_withState (void* state, const char* source, char* dest, int inputSize, int maxOutputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_fast_continue() instead") int LZ4_compress_continue (LZ4_stream_t* LZ4_streamPtr, const char* source, char* dest, int inputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_fast_continue() instead") int LZ4_compress_limitedOutput_continue (LZ4_stream_t* LZ4_streamPtr, const char* source, char* dest, int inputSize, int maxOutputSize);
/* Obsolete decompression functions */
LZ4LIB_API LZ4_DEPRECATED("use LZ4_decompress_fast() instead") int LZ4_uncompress (const char* source, char* dest, int outputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_decompress_safe() instead") int LZ4_uncompress_unknownOutputSize (const char* source, char* dest, int isize, int maxOutputSize);
/* Obsolete streaming functions; use new streaming interface whenever possible */
LZ4LIB_API LZ4_DEPRECATED("use LZ4_createStream() instead") void* LZ4_create (char* inputBuffer);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_createStream() instead") int LZ4_sizeofStreamState(void);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_resetStream() instead") int LZ4_resetStreamState(void* state, char* inputBuffer);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_saveDict() instead") char* LZ4_slideInputBuffer (void* state);
/* Obsolete streaming decoding functions */
LZ4LIB_API LZ4_DEPRECATED("use LZ4_decompress_safe_usingDict() instead") int LZ4_decompress_safe_withPrefix64k (const char* src, char* dst, int compressedSize, int maxDstSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_decompress_fast_usingDict() instead") int LZ4_decompress_fast_withPrefix64k (const char* src, char* dst, int originalSize);
#endif /* LZ4_H_2983827168210 */
#if defined (__cplusplus)
}
#endif
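To make the streaming-compression declarations above more concrete, here is a short sketch (not part of the deleted header; the chunk size, buffer names, and size-prefixed output format are assumptions) of the classic double-buffer pattern built on LZ4_createStream() and LZ4_compress_fast_continue(). Each compressed block is prefixed with its size, since lz4.h leaves that metadata to the application.
```
/* Hypothetical sketch: compress a memory buffer as a sequence of dependent
 * blocks, using two rotating input buffers so the previous block stays
 * resident, as LZ4_compress_fast_continue() requires. */
#include <stdio.h>
#include <string.h>
#include "lz4.h"

#define CHUNK 4096

int compress_chunks(const char* src, size_t srcSize, FILE* out)
{
    LZ4_stream_t* const stream = LZ4_createStream();
    if (stream == NULL) return -1;

    char inBuf[2][CHUNK];
    char dst[LZ4_COMPRESSBOUND(CHUNK)];
    int slot = 0;
    size_t pos = 0;

    while (pos < srcSize) {
        int const n = (int)((srcSize - pos < CHUNK) ? (srcSize - pos) : CHUNK);
        memcpy(inBuf[slot], src + pos, (size_t)n);

        int const csize = LZ4_compress_fast_continue(stream, inBuf[slot], dst,
                                                     n, (int)sizeof(dst), 1);
        if (csize <= 0) { LZ4_freeStream(stream); return -1; }

        /* Record each block's compressed size: block decoding needs it,
         * and lz4.h leaves that framing entirely to the application. */
        fwrite(&csize, sizeof(csize), 1, out);
        fwrite(dst, 1, (size_t)csize, out);

        pos += (size_t)n;
        slot ^= 1;   /* keep the previous chunk untouched as dictionary */
    }
    LZ4_freeStream(stream);
    return 0;
}
```
Decompression would mirror this with LZ4_createStreamDecode() and LZ4_decompress_safe_continue(), reading back the same size prefixes.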

File diff suppressed because it is too large


@@ -1,391 +0,0 @@
/*
LZ4 auto-framing library
Header File
Copyright (C) 2011-2017, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 source repository : https://github.com/lz4/lz4
- LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
*/
/* LZ4F is a stand-alone API to create LZ4-compressed frames
* conformant with specification v1.5.1.
* It also offers streaming capabilities.
* lz4.h is not required when using lz4frame.h.
* */
#ifndef LZ4F_H_09782039843
#define LZ4F_H_09782039843
#if defined (__cplusplus)
extern "C" {
#endif
/* --- Dependency --- */
#include <stddef.h> /* size_t */
/**
Introduction
lz4frame.h implements LZ4 frame specification (doc/lz4_Frame_format.md).
lz4frame.h provides frame compression functions that take care
of encoding standard metadata alongside LZ4-compressed blocks.
*/
/*-***************************************************************
* Compiler specifics
*****************************************************************/
/* LZ4_DLL_EXPORT :
* Enable exporting of functions when building a Windows DLL
* LZ4FLIB_API :
* Control library symbols visibility.
*/
#if defined(LZ4_DLL_EXPORT) && (LZ4_DLL_EXPORT==1)
# define LZ4FLIB_API __declspec(dllexport)
#elif defined(LZ4_DLL_IMPORT) && (LZ4_DLL_IMPORT==1)
# define LZ4FLIB_API __declspec(dllimport)
#elif defined(__GNUC__) && (__GNUC__ >= 4)
# define LZ4FLIB_API __attribute__ ((__visibility__ ("default")))
#else
# define LZ4FLIB_API
#endif
#ifdef LZ4F_DISABLE_DEPRECATE_WARNINGS
# define LZ4F_DEPRECATE(x) x
#else
# if defined(_MSC_VER)
# define LZ4F_DEPRECATE(x) x /* __declspec(deprecated) x - only works with C++ */
# elif defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 6))
# define LZ4F_DEPRECATE(x) x __attribute__((deprecated))
# else
# define LZ4F_DEPRECATE(x) x /* no deprecation warning for this compiler */
# endif
#endif
/*-************************************
* Error management
**************************************/
typedef size_t LZ4F_errorCode_t;
LZ4FLIB_API unsigned LZ4F_isError(LZ4F_errorCode_t code); /**< tells if a `LZ4F_errorCode_t` function result is an error code */
LZ4FLIB_API const char* LZ4F_getErrorName(LZ4F_errorCode_t code); /**< return error code string; useful for debugging */
/*-************************************
* Frame compression types
**************************************/
/* #define LZ4F_ENABLE_OBSOLETE_ENUMS // uncomment to enable obsolete enums */
#ifdef LZ4F_ENABLE_OBSOLETE_ENUMS
# define LZ4F_OBSOLETE_ENUM(x) , LZ4F_DEPRECATE(x) = LZ4F_##x
#else
# define LZ4F_OBSOLETE_ENUM(x)
#endif
/* The larger the block size, the (slightly) better the compression ratio,
* though there are diminishing returns.
* Larger blocks also increase memory usage on both compression and decompression sides. */
typedef enum {
LZ4F_default=0,
LZ4F_max64KB=4,
LZ4F_max256KB=5,
LZ4F_max1MB=6,
LZ4F_max4MB=7
LZ4F_OBSOLETE_ENUM(max64KB)
LZ4F_OBSOLETE_ENUM(max256KB)
LZ4F_OBSOLETE_ENUM(max1MB)
LZ4F_OBSOLETE_ENUM(max4MB)
} LZ4F_blockSizeID_t;
/* Linked blocks sharply reduce inefficiencies when using small blocks;
* they compress better.
* However, some LZ4 decoders are only compatible with independent blocks */
typedef enum {
LZ4F_blockLinked=0,
LZ4F_blockIndependent
LZ4F_OBSOLETE_ENUM(blockLinked)
LZ4F_OBSOLETE_ENUM(blockIndependent)
} LZ4F_blockMode_t;
typedef enum {
LZ4F_noContentChecksum=0,
LZ4F_contentChecksumEnabled
LZ4F_OBSOLETE_ENUM(noContentChecksum)
LZ4F_OBSOLETE_ENUM(contentChecksumEnabled)
} LZ4F_contentChecksum_t;
typedef enum {
LZ4F_noBlockChecksum=0,
LZ4F_blockChecksumEnabled
} LZ4F_blockChecksum_t;
typedef enum {
LZ4F_frame=0,
LZ4F_skippableFrame
LZ4F_OBSOLETE_ENUM(skippableFrame)
} LZ4F_frameType_t;
#ifdef LZ4F_ENABLE_OBSOLETE_ENUMS
typedef LZ4F_blockSizeID_t blockSizeID_t;
typedef LZ4F_blockMode_t blockMode_t;
typedef LZ4F_frameType_t frameType_t;
typedef LZ4F_contentChecksum_t contentChecksum_t;
#endif
/*! LZ4F_frameInfo_t :
* makes it possible to set or read frame parameters.
* It's not required to set all fields, as long as the structure was initially memset() to zero.
* For all fields, 0 sets it to default value */
typedef struct {
LZ4F_blockSizeID_t blockSizeID; /* max64KB, max256KB, max1MB, max4MB ; 0 == default */
LZ4F_blockMode_t blockMode; /* LZ4F_blockLinked, LZ4F_blockIndependent ; 0 == default */
LZ4F_contentChecksum_t contentChecksumFlag; /* if enabled, frame is terminated with a 32-bits checksum of decompressed data ; 0 == disabled (default) */
LZ4F_frameType_t frameType; /* read-only field : LZ4F_frame or LZ4F_skippableFrame */
unsigned long long contentSize; /* Size of uncompressed content ; 0 == unknown */
unsigned dictID; /* Dictionary ID, sent by the compressor to help decoder select the correct dictionary; 0 == no dictID provided */
LZ4F_blockChecksum_t blockChecksumFlag; /* if enabled, each block is followed by a checksum of block's compressed data ; 0 == disabled (default) */
} LZ4F_frameInfo_t;
/*! LZ4F_preferences_t :
* makes it possible to supply detailed compression parameters to the stream interface.
* It's not required to set all fields, as long as the structure was initially memset() to zero.
* All reserved fields must be set to zero. */
typedef struct {
LZ4F_frameInfo_t frameInfo;
int compressionLevel; /* 0 == default (fast mode); values above LZ4HC_CLEVEL_MAX count as LZ4HC_CLEVEL_MAX; values below 0 trigger "fast acceleration", proportional to value */
unsigned autoFlush; /* 1 == always flush, to reduce usage of internal buffers */
unsigned reserved[4]; /* must be zero for forward compatibility */
} LZ4F_preferences_t;
LZ4FLIB_API int LZ4F_compressionLevel_max(void);
/*-*********************************
* Simple compression function
***********************************/
/*! LZ4F_compressFrameBound() :
* Returns the maximum possible size of a frame compressed with LZ4F_compressFrame() given srcSize content and preferences.
* Note : this result is only usable with LZ4F_compressFrame(), not with multi-segments compression.
*/
LZ4FLIB_API size_t LZ4F_compressFrameBound(size_t srcSize, const LZ4F_preferences_t* preferencesPtr);
/*! LZ4F_compressFrame() :
* Compress an entire srcBuffer into a valid LZ4 frame.
* dstCapacity MUST be >= LZ4F_compressFrameBound(srcSize, preferencesPtr).
* The LZ4F_preferences_t structure is optional : you can provide NULL as argument. All preferences will be set to default.
* @return : number of bytes written into dstBuffer.
* or an error code if it fails (can be tested using LZ4F_isError())
*/
LZ4FLIB_API size_t LZ4F_compressFrame(void* dstBuffer, size_t dstCapacity,
const void* srcBuffer, size_t srcSize,
const LZ4F_preferences_t* preferencesPtr);
/*-***********************************
* Advanced compression functions
*************************************/
typedef struct LZ4F_cctx_s LZ4F_cctx; /* incomplete type */
typedef LZ4F_cctx* LZ4F_compressionContext_t; /* for compatibility with previous API version */
typedef struct {
unsigned stableSrc; /* 1 == src content will remain present on future calls to LZ4F_compress(); skip copying src content within tmp buffer */
unsigned reserved[3];
} LZ4F_compressOptions_t;
/*--- Resource Management ---*/
#define LZ4F_VERSION 100
LZ4FLIB_API unsigned LZ4F_getVersion(void);
/*! LZ4F_createCompressionContext() :
* The first thing to do is to create a compressionContext object, which will be used in all compression operations.
* This is achieved using LZ4F_createCompressionContext(), which takes as argument a version.
* The version provided MUST be LZ4F_VERSION. It is intended to track potential version mismatch, notably when using DLL.
* The function will provide a pointer to a fully allocated LZ4F_cctx object.
* If @return != zero, there was an error during context creation.
* Object can release its memory using LZ4F_freeCompressionContext();
*/
LZ4FLIB_API LZ4F_errorCode_t LZ4F_createCompressionContext(LZ4F_cctx** cctxPtr, unsigned version);
LZ4FLIB_API LZ4F_errorCode_t LZ4F_freeCompressionContext(LZ4F_cctx* cctx);
/*---- Compression ----*/
#define LZ4F_HEADER_SIZE_MAX 19
/*! LZ4F_compressBegin() :
* will write the frame header into dstBuffer.
* dstCapacity must be >= LZ4F_HEADER_SIZE_MAX bytes.
* `prefsPtr` is optional : you can provide NULL as argument, all preferences will then be set to default.
* @return : number of bytes written into dstBuffer for the header
* or an error code (which can be tested using LZ4F_isError())
*/
LZ4FLIB_API size_t LZ4F_compressBegin(LZ4F_cctx* cctx,
void* dstBuffer, size_t dstCapacity,
const LZ4F_preferences_t* prefsPtr);
/*! LZ4F_compressBound() :
* Provides dstCapacity given a srcSize to guarantee operation success in worst case situations.
* prefsPtr is optional : you can provide NULL as argument, preferences will be set to cover worst case scenario.
* Result is always the same for a srcSize and prefsPtr, so it can be trusted to size reusable buffers.
* When srcSize==0, LZ4F_compressBound() provides an upper bound for LZ4F_flush() and LZ4F_compressEnd() operations.
*/
LZ4FLIB_API size_t LZ4F_compressBound(size_t srcSize, const LZ4F_preferences_t* prefsPtr);
/*! LZ4F_compressUpdate() :
* LZ4F_compressUpdate() can be called repetitively to compress as much data as necessary.
* An important rule is that dstCapacity MUST be large enough to ensure operation success even in worst case situations.
* This value is provided by LZ4F_compressBound().
* If this condition is not respected, LZ4F_compress() will fail (result is an errorCode).
* LZ4F_compressUpdate() doesn't guarantee error recovery. When an error occurs, compression context must be freed or resized.
* `cOptPtr` is optional : NULL can be provided, in which case all options are set to default.
* @return : number of bytes written into `dstBuffer` (it can be zero, meaning input data was just buffered).
* or an error code if it fails (which can be tested using LZ4F_isError())
*/
LZ4FLIB_API size_t LZ4F_compressUpdate(LZ4F_cctx* cctx, void* dstBuffer, size_t dstCapacity, const void* srcBuffer, size_t srcSize, const LZ4F_compressOptions_t* cOptPtr);
/*! LZ4F_flush() :
* When data must be generated and sent immediately, without waiting for a block to be completely filled,
* it's possible to call LZ4F_flush(). It will immediately compress any data buffered within cctx.
* `dstCapacity` must be large enough to ensure the operation will be successful.
* `cOptPtr` is optional : it's possible to provide NULL, all options will be set to default.
* @return : number of bytes written into dstBuffer (it can be zero, which means there was no data stored within cctx)
* or an error code if it fails (which can be tested using LZ4F_isError())
*/
LZ4FLIB_API size_t LZ4F_flush(LZ4F_cctx* cctx, void* dstBuffer, size_t dstCapacity, const LZ4F_compressOptions_t* cOptPtr);
/*! LZ4F_compressEnd() :
* To properly finish an LZ4 frame, invoke LZ4F_compressEnd().
* It will flush whatever data remained within `cctx` (like LZ4F_flush())
* and properly finalize the frame, with an endMark and a checksum.
* `cOptPtr` is optional : NULL can be provided, in which case all options will be set to default.
* @return : number of bytes written into dstBuffer (necessarily >= 4 (endMark), or 8 if optional frame checksum is enabled)
* or an error code if it fails (which can be tested using LZ4F_isError())
* A successful call to LZ4F_compressEnd() makes `cctx` available again for another compression task.
*/
LZ4FLIB_API size_t LZ4F_compressEnd(LZ4F_cctx* cctx, void* dstBuffer, size_t dstCapacity, const LZ4F_compressOptions_t* cOptPtr);
/*-*********************************
* Decompression functions
***********************************/
typedef struct LZ4F_dctx_s LZ4F_dctx; /* incomplete type */
typedef LZ4F_dctx* LZ4F_decompressionContext_t; /* compatibility with previous API versions */
typedef struct {
unsigned stableDst; /* pledge that at least 64KB+64Bytes of previously decompressed data remains unmodified where it was decoded. This optimization skips storage operations in tmp buffers */
unsigned reserved[3]; /* must be set to zero for forward compatibility */
} LZ4F_decompressOptions_t;
/* Resource management */
/*!LZ4F_createDecompressionContext() :
* Create an LZ4F_dctx object, to track all decompression operations.
* The version provided MUST be LZ4F_VERSION.
* The function provides a pointer to an allocated and initialized LZ4F_dctx object.
* The result is an errorCode, which can be tested using LZ4F_isError().
* dctx memory can be released using LZ4F_freeDecompressionContext();
* The result of LZ4F_freeDecompressionContext() is indicative of the current state of decompressionContext when being released.
* That is, it should be == 0 if decompression has been completed fully and correctly.
*/
LZ4FLIB_API LZ4F_errorCode_t LZ4F_createDecompressionContext(LZ4F_dctx** dctxPtr, unsigned version);
LZ4FLIB_API LZ4F_errorCode_t LZ4F_freeDecompressionContext(LZ4F_dctx* dctx);
/*-***********************************
* Streaming decompression functions
*************************************/
/*! LZ4F_getFrameInfo() :
* This function extracts frame parameters (max blockSize, dictID, etc.).
* Its usage is optional.
* Extracted information is typically useful for allocation and dictionary.
* This function works in 2 situations :
* - At the beginning of a new frame, in which case
* it will decode information from `srcBuffer`, starting the decoding process.
* Input size must be large enough to successfully decode the entire frame header.
* Frame header size is variable, but is guaranteed to be <= LZ4F_HEADER_SIZE_MAX bytes.
* It's allowed to provide more input data than this minimum.
* - After decoding has been started.
* In which case, no input is read, frame parameters are extracted from dctx.
* - If decoding has barely started, but not yet extracted information from header,
* LZ4F_getFrameInfo() will fail.
* The number of bytes consumed from srcBuffer will be updated within *srcSizePtr (necessarily <= original value).
* Decompression must resume from (srcBuffer + *srcSizePtr).
* @return : a hint about how many srcSize bytes LZ4F_decompress() expects for next call,
* or an error code which can be tested using LZ4F_isError().
* note 1 : in case of error, dctx is not modified. Decoding operation can resume from beginning safely.
* note 2 : frame parameters are *copied into* an already allocated LZ4F_frameInfo_t structure.
*/
LZ4FLIB_API size_t LZ4F_getFrameInfo(LZ4F_dctx* dctx,
LZ4F_frameInfo_t* frameInfoPtr,
const void* srcBuffer, size_t* srcSizePtr);
/*! LZ4F_decompress() :
* Call this function repetitively to regenerate the data compressed within `srcBuffer`.
* The function will attempt to decode up to *srcSizePtr bytes from srcBuffer, into dstBuffer of capacity *dstSizePtr.
*
* The number of bytes regenerated into dstBuffer is provided within *dstSizePtr (necessarily <= original value).
*
* The number of bytes consumed from srcBuffer is provided within *srcSizePtr (necessarily <= original value).
* Number of bytes consumed can be < number of bytes provided.
* It typically happens when dstBuffer is not large enough to contain all decoded data.
* Unconsumed source data must be presented again in subsequent invocations.
*
* `dstBuffer` content is expected to be flushed between each invocation, as its content will be overwritten.
* `dstBuffer` itself can be changed at will between each consecutive function invocation.
*
* @return : a hint of how many `srcSize` bytes LZ4F_decompress() expects for next call.
* Schematically, it's the size of the current (or remaining) compressed block + header of next block.
* Respecting the hint provides some small speed benefit, because it skips intermediate buffers.
* This is just a hint though, it's always possible to provide any srcSize.
* When a frame is fully decoded, @return will be 0 (no more data expected).
* If decompression failed, @return is an error code, which can be tested using LZ4F_isError().
*
* After a frame is fully decoded, dctx can be used again to decompress another frame.
* After a decompression error, use LZ4F_resetDecompressionContext() before re-using dctx, to return to clean state.
*/
LZ4FLIB_API size_t LZ4F_decompress(LZ4F_dctx* dctx,
void* dstBuffer, size_t* dstSizePtr,
const void* srcBuffer, size_t* srcSizePtr,
const LZ4F_decompressOptions_t* dOptPtr);
/*! LZ4F_resetDecompressionContext() : v1.8.0
* In case of an error, the context is left in "undefined" state.
* In which case, it's necessary to reset it, before re-using it.
* This method can also be used to abruptly stop an unfinished decompression,
* and start a new one using the same context. */
LZ4FLIB_API void LZ4F_resetDecompressionContext(LZ4F_dctx* dctx); /* always successful */
#if defined (__cplusplus)
}
#endif
#endif /* LZ4F_H_09782039843 */
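For reference, a minimal sketch (not from the deleted sources; the helper name, malloc'd buffer, and error handling are assumptions) of one-shot frame compression with the API declared above:
```
#include <stdio.h>
#include <stdlib.h>
#include "lz4frame.h"

/* Compress an in-memory buffer into a self-describing LZ4 frame.
 * Returns the frame size, or 0 on error. The caller owns *framePtr. */
size_t make_frame(const void* src, size_t srcSize, void** framePtr)
{
    /* NULL preferences: all frame parameters take their default values. */
    size_t const bound = LZ4F_compressFrameBound(srcSize, NULL);
    void* const frame = malloc(bound);
    if (frame == NULL) return 0;

    size_t const frameSize = LZ4F_compressFrame(frame, bound, src, srcSize, NULL);
    if (LZ4F_isError(frameSize)) {
        fprintf(stderr, "LZ4F error: %s\n", LZ4F_getErrorName(frameSize));
        free(frame);
        return 0;
    }
    *framePtr = frame;
    return frameSize;
}
```
The resulting buffer is a complete LZ4 frame, interoperable with any decoder that implements the frame specification.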


@@ -1,143 +0,0 @@
/*
LZ4 auto-framing library
Header File for static linking only
Copyright (C) 2011-2016, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 source repository : https://github.com/lz4/lz4
- LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
*/
#ifndef LZ4FRAME_STATIC_H_0398209384
#define LZ4FRAME_STATIC_H_0398209384
#if defined (__cplusplus)
extern "C" {
#endif
/* lz4frame_static.h should be used solely in the context of static linking.
* It contains definitions which are not stable and may change in the future.
* Never use it in the context of DLL linking.
*/
/* --- Dependency --- */
#include "lz4frame.h"
/* --- Error List --- */
#define LZ4F_LIST_ERRORS(ITEM) \
ITEM(OK_NoError) \
ITEM(ERROR_GENERIC) \
ITEM(ERROR_maxBlockSize_invalid) \
ITEM(ERROR_blockMode_invalid) \
ITEM(ERROR_contentChecksumFlag_invalid) \
ITEM(ERROR_compressionLevel_invalid) \
ITEM(ERROR_headerVersion_wrong) \
ITEM(ERROR_blockChecksum_invalid) \
ITEM(ERROR_reservedFlag_set) \
ITEM(ERROR_allocation_failed) \
ITEM(ERROR_srcSize_tooLarge) \
ITEM(ERROR_dstMaxSize_tooSmall) \
ITEM(ERROR_frameHeader_incomplete) \
ITEM(ERROR_frameType_unknown) \
ITEM(ERROR_frameSize_wrong) \
ITEM(ERROR_srcPtr_wrong) \
ITEM(ERROR_decompressionFailed) \
ITEM(ERROR_headerChecksum_invalid) \
ITEM(ERROR_contentChecksum_invalid) \
ITEM(ERROR_frameDecoding_alreadyStarted) \
ITEM(ERROR_maxCode)
#define LZ4F_GENERATE_ENUM(ENUM) LZ4F_##ENUM,
/* enum list is exposed, to handle specific errors */
typedef enum { LZ4F_LIST_ERRORS(LZ4F_GENERATE_ENUM) } LZ4F_errorCodes;
LZ4F_errorCodes LZ4F_getErrorCode(size_t functionResult);
/**********************************
* Bulk processing dictionary API
*********************************/
typedef struct LZ4F_CDict_s LZ4F_CDict;
/*! LZ4F_createCDict() :
* When compressing multiple messages / blocks with the same dictionary, it's recommended to load it just once.
* LZ4F_createCDict() will create a digested dictionary, ready to start future compression operations without startup delay.
* An LZ4F_CDict can be created once and shared by multiple threads concurrently, since its usage is read-only.
* `dictBuffer` can be released after LZ4F_CDict creation, since its content is copied within the CDict. */
LZ4F_CDict* LZ4F_createCDict(const void* dictBuffer, size_t dictSize);
void LZ4F_freeCDict(LZ4F_CDict* CDict);
/*! LZ4F_compressFrame_usingCDict() :
* Compress an entire srcBuffer into a valid LZ4 frame using a digested Dictionary.
* If cdict==NULL, compress without a dictionary.
* dstBuffer MUST be >= LZ4F_compressFrameBound(srcSize, preferencesPtr).
* If this condition is not respected, the function will fail (@return an errorCode).
* The LZ4F_preferences_t structure is optional : you may provide NULL as argument,
* but it's not recommended, since it's the only way to provide a dictID in the frame header.
* @return : number of bytes written into dstBuffer,
* or an error code if it fails (can be tested using LZ4F_isError()) */
size_t LZ4F_compressFrame_usingCDict(void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const LZ4F_CDict* cdict,
const LZ4F_preferences_t* preferencesPtr);
/*! LZ4F_compressBegin_usingCDict() :
* Inits streaming dictionary compression, and writes the frame header into dstBuffer.
* dstCapacity must be >= LZ4F_HEADER_SIZE_MAX bytes.
* `prefsPtr` is optional : you may provide NULL as argument,
* however, it's the only way to provide dictID in the frame header.
* @return : number of bytes written into dstBuffer for the header,
* or an error code (which can be tested using LZ4F_isError()) */
size_t LZ4F_compressBegin_usingCDict(LZ4F_cctx* cctx,
void* dstBuffer, size_t dstCapacity,
const LZ4F_CDict* cdict,
const LZ4F_preferences_t* prefsPtr);
/*! LZ4F_decompress_usingDict() :
* Same as LZ4F_decompress(), using a predefined dictionary.
* Dictionary is used "in place", without any preprocessing.
* It must remain accessible throughout the entire frame decoding. */
size_t LZ4F_decompress_usingDict(LZ4F_dctx* dctxPtr,
void* dstBuffer, size_t* dstSizePtr,
const void* srcBuffer, size_t* srcSizePtr,
const void* dict, size_t dictSize,
const LZ4F_decompressOptions_t* decompressOptionsPtr);
#if defined (__cplusplus)
}
#endif
#endif /* LZ4FRAME_STATIC_H_0398209384 */
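A usage note (editor's sketch, not part of the vendored header): the intended pattern is to digest the dictionary once with LZ4F_createCDict() and then reuse it for every frame; the decompression side passes the same raw dictionary to LZ4F_decompress_usingDict(). The sketch assumes LZ4F_compressFrameBound() and LZ4F_isError() from lz4frame.h.

#include <stdlib.h>
#include "lz4frame_static.h"

/* Compress nbMsgs independent messages against one shared, pre-digested dictionary. */
static int compress_batch(const void* dict, size_t dictSize,
                          const void* const* msgs, const size_t* msgSizes, size_t nbMsgs)
{
    LZ4F_CDict* const cdict = LZ4F_createCDict(dict, dictSize);
    size_t i;
    if (cdict == NULL) return -1;

    for (i = 0; i < nbMsgs; i++) {
        size_t const dstCapacity = LZ4F_compressFrameBound(msgSizes[i], NULL);
        void* const dst = malloc(dstCapacity);
        size_t cSize;
        if (dst == NULL) { LZ4F_freeCDict(cdict); return -1; }
        cSize = LZ4F_compressFrame_usingCDict(dst, dstCapacity,
                                              msgs[i], msgSizes[i],
                                              cdict, NULL /* preferences */);
        if (LZ4F_isError(cSize)) { free(dst); LZ4F_freeCDict(cdict); return -1; }
        /* ... store or transmit (dst, cSize) ... */
        free(dst);
    }
    LZ4F_freeCDict(cdict);
    return 0;
}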


@@ -1,807 +0,0 @@
/*
LZ4 HC - High Compression Mode of LZ4
Copyright (C) 2011-2017, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 source repository : https://github.com/lz4/lz4
- LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
*/
/* note : lz4hc is not an independent module, it requires lz4.h/lz4.c for proper compilation */
/* *************************************
* Tuning Parameter
***************************************/
/*! HEAPMODE :
* Select how the default compression function will allocate its workspace memory:
* on the stack (0: fastest) or on the heap (1: requires malloc()).
* Since the workspace is rather large, heap mode is recommended.
*/
#ifndef LZ4HC_HEAPMODE
# define LZ4HC_HEAPMODE 1
#endif
/*=== Dependency ===*/
#include "lz4hc.h"
/*=== Common LZ4 definitions ===*/
#if defined(__GNUC__)
# pragma GCC diagnostic ignored "-Wunused-function"
#endif
#if defined (__clang__)
# pragma clang diagnostic ignored "-Wunused-function"
#endif
#define LZ4_COMMONDEFS_ONLY
#include "lz4.c" /* LZ4_count, constants, mem */
/*=== Constants ===*/
#define OPTIMAL_ML (int)((ML_MASK-1)+MINMATCH)
/*=== Macros ===*/
#define MIN(a,b) ( (a) < (b) ? (a) : (b) )
#define MAX(a,b) ( (a) > (b) ? (a) : (b) )
#define HASH_FUNCTION(i) (((i) * 2654435761U) >> ((MINMATCH*8)-LZ4HC_HASH_LOG))
#define DELTANEXTMAXD(p) chainTable[(p) & LZ4HC_MAXD_MASK] /* flexible, LZ4HC_MAXD dependent */
#define DELTANEXTU16(table, pos) table[(U16)(pos)] /* faster */
static U32 LZ4HC_hashPtr(const void* ptr) { return HASH_FUNCTION(LZ4_read32(ptr)); }
/**************************************
* HC Compression
**************************************/
static void LZ4HC_init (LZ4HC_CCtx_internal* hc4, const BYTE* start)
{
MEM_INIT((void*)hc4->hashTable, 0, sizeof(hc4->hashTable));
MEM_INIT(hc4->chainTable, 0xFF, sizeof(hc4->chainTable));
hc4->nextToUpdate = 64 KB;
hc4->base = start - 64 KB;
hc4->end = start;
hc4->dictBase = start - 64 KB;
hc4->dictLimit = 64 KB;
hc4->lowLimit = 64 KB;
}
/* Update chains up to ip (excluded) */
FORCE_INLINE void LZ4HC_Insert (LZ4HC_CCtx_internal* hc4, const BYTE* ip)
{
U16* const chainTable = hc4->chainTable;
U32* const hashTable = hc4->hashTable;
const BYTE* const base = hc4->base;
U32 const target = (U32)(ip - base);
U32 idx = hc4->nextToUpdate;
while (idx < target) {
U32 const h = LZ4HC_hashPtr(base+idx);
size_t delta = idx - hashTable[h];
if (delta>MAX_DISTANCE) delta = MAX_DISTANCE;
DELTANEXTU16(chainTable, idx) = (U16)delta;
hashTable[h] = idx;
idx++;
}
hc4->nextToUpdate = target;
}
FORCE_INLINE int LZ4HC_InsertAndFindBestMatch (LZ4HC_CCtx_internal* const hc4, /* Index table will be updated */
const BYTE* const ip, const BYTE* const iLimit,
const BYTE** matchpos,
const int maxNbAttempts)
{
U16* const chainTable = hc4->chainTable;
U32* const HashTable = hc4->hashTable;
const BYTE* const base = hc4->base;
const BYTE* const dictBase = hc4->dictBase;
const U32 dictLimit = hc4->dictLimit;
const U32 lowLimit = (hc4->lowLimit + 64 KB > (U32)(ip-base)) ? hc4->lowLimit : (U32)(ip - base) - (64 KB - 1);
U32 matchIndex;
int nbAttempts = maxNbAttempts;
size_t ml = 0;
/* HC4 match finder */
LZ4HC_Insert(hc4, ip);
matchIndex = HashTable[LZ4HC_hashPtr(ip)];
while ((matchIndex>=lowLimit) && (nbAttempts)) {
nbAttempts--;
if (matchIndex >= dictLimit) {
const BYTE* const match = base + matchIndex;
if ( (*(match+ml) == *(ip+ml)) /* can be longer */
&& (LZ4_read32(match) == LZ4_read32(ip)) )
{
size_t const mlt = LZ4_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if (mlt > ml) { ml = mlt; *matchpos = match; }
}
} else {
const BYTE* const match = dictBase + matchIndex;
if (LZ4_read32(match) == LZ4_read32(ip)) {
size_t mlt;
const BYTE* vLimit = ip + (dictLimit - matchIndex);
if (vLimit > iLimit) vLimit = iLimit;
mlt = LZ4_count(ip+MINMATCH, match+MINMATCH, vLimit) + MINMATCH;
if ((ip+mlt == vLimit) && (vLimit < iLimit))
mlt += LZ4_count(ip+mlt, base+dictLimit, iLimit);
if (mlt > ml) { ml = mlt; *matchpos = base + matchIndex; } /* virtual matchpos */
}
}
matchIndex -= DELTANEXTU16(chainTable, matchIndex);
}
return (int)ml;
}
FORCE_INLINE int LZ4HC_InsertAndGetWiderMatch (
LZ4HC_CCtx_internal* hc4,
const BYTE* const ip,
const BYTE* const iLowLimit,
const BYTE* const iHighLimit,
int longest,
const BYTE** matchpos,
const BYTE** startpos,
const int maxNbAttempts)
{
U16* const chainTable = hc4->chainTable;
U32* const HashTable = hc4->hashTable;
const BYTE* const base = hc4->base;
const U32 dictLimit = hc4->dictLimit;
const BYTE* const lowPrefixPtr = base + dictLimit;
const U32 lowLimit = (hc4->lowLimit + 64 KB > (U32)(ip-base)) ? hc4->lowLimit : (U32)(ip - base) - (64 KB - 1);
const BYTE* const dictBase = hc4->dictBase;
int const delta = (int)(ip-iLowLimit);
int nbAttempts = maxNbAttempts;
U32 matchIndex;
/* First Match */
LZ4HC_Insert(hc4, ip);
matchIndex = HashTable[LZ4HC_hashPtr(ip)];
while ((matchIndex>=lowLimit) && (nbAttempts)) {
nbAttempts--;
if (matchIndex >= dictLimit) {
const BYTE* const matchPtr = base + matchIndex;
if (*(iLowLimit + longest) == *(matchPtr - delta + longest)) {
if (LZ4_read32(matchPtr) == LZ4_read32(ip)) {
int mlt = MINMATCH + LZ4_count(ip+MINMATCH, matchPtr+MINMATCH, iHighLimit);
int back = 0;
while ( (ip+back > iLowLimit)
&& (matchPtr+back > lowPrefixPtr)
&& (ip[back-1] == matchPtr[back-1])) {
back--;
}
mlt -= back;
if (mlt > longest) {
longest = mlt;
*matchpos = matchPtr+back;
*startpos = ip+back;
} } }
} else {
const BYTE* const matchPtr = dictBase + matchIndex;
if (LZ4_read32(matchPtr) == LZ4_read32(ip)) {
int mlt;
int back=0;
const BYTE* vLimit = ip + (dictLimit - matchIndex);
if (vLimit > iHighLimit) vLimit = iHighLimit;
mlt = LZ4_count(ip+MINMATCH, matchPtr+MINMATCH, vLimit) + MINMATCH;
if ((ip+mlt == vLimit) && (vLimit < iHighLimit))
mlt += LZ4_count(ip+mlt, base+dictLimit, iHighLimit);
while ((ip+back > iLowLimit) && (matchIndex+back > lowLimit) && (ip[back-1] == matchPtr[back-1])) back--;
mlt -= back;
if (mlt > longest) { longest = mlt; *matchpos = base + matchIndex + back; *startpos = ip+back; }
}
}
matchIndex -= DELTANEXTU16(chainTable, matchIndex);
}
return longest;
}
typedef enum {
noLimit = 0,
limitedOutput = 1,
limitedDestSize = 2,
} limitedOutput_directive;
#ifndef LZ4HC_DEBUG
# define LZ4HC_DEBUG 0
#endif
/* LZ4HC_encodeSequence() :
* @return : 0 if ok,
* 1 if buffer issue detected */
FORCE_INLINE int LZ4HC_encodeSequence (
const BYTE** ip,
BYTE** op,
const BYTE** anchor,
int matchLength,
const BYTE* const match,
limitedOutput_directive limit,
BYTE* oend)
{
size_t length;
BYTE* const token = (*op)++;
#if LZ4HC_DEBUG
printf("literal : %u -- match : %u -- offset : %u\n",
(U32)(*ip - *anchor), (U32)matchLength, (U32)(*ip-match));
#endif
/* Encode Literal length */
length = (size_t)(*ip - *anchor);
if ((limit) && ((*op + (length >> 8) + length + (2 + 1 + LASTLITERALS)) > oend)) return 1; /* Check output limit */
if (length >= RUN_MASK) {
size_t len = length - RUN_MASK;
*token = (RUN_MASK << ML_BITS);
for(; len >= 255 ; len -= 255) *(*op)++ = 255;
*(*op)++ = (BYTE)len;
} else {
*token = (BYTE)(length << ML_BITS);
}
/* Copy Literals */
LZ4_wildCopy(*op, *anchor, (*op) + length);
*op += length;
/* Encode Offset */
LZ4_writeLE16(*op, (U16)(*ip-match)); *op += 2;
/* Encode MatchLength */
length = (size_t)(matchLength - MINMATCH);
if ((limit) && (*op + (length >> 8) + (1 + LASTLITERALS) > oend)) return 1; /* Check output limit */
if (length >= ML_MASK) {
*token += ML_MASK;
length -= ML_MASK;
for(; length >= 510 ; length -= 510) { *(*op)++ = 255; *(*op)++ = 255; }
if (length >= 255) { length -= 255; *(*op)++ = 255; }
*(*op)++ = (BYTE)length;
} else {
*token += (BYTE)(length);
}
/* Prepare next loop */
*ip += matchLength;
*anchor = *ip;
return 0;
}
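/* Editor's note (illustration, not part of the upstream file): a concrete instance of the layout
 * produced above. With MINMATCH==4 and RUN_MASK==ML_MASK==15, a sequence of 5 literals followed by
 * a 9-byte match at offset 48 is emitted as:
 *   token   = (5 << ML_BITS) | (9 - MINMATCH) = 0x55   (literal length in the high nibble,
 *                                                        matchLength-4 in the low nibble)
 *   5 literal bytes copied verbatim
 *   offset  = 0x30 0x00                                (48, 2 bytes, little-endian)
 * i.e. 8 bytes total; lengths >= 15 spill into the additional 255-run bytes handled above. */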
/* btopt */
#include "lz4opt.h"
static int LZ4HC_compress_hashChain (
LZ4HC_CCtx_internal* const ctx,
const char* const source,
char* const dest,
int* srcSizePtr,
int const maxOutputSize,
unsigned maxNbAttempts,
limitedOutput_directive limit
)
{
const int inputSize = *srcSizePtr;
const BYTE* ip = (const BYTE*) source;
const BYTE* anchor = ip;
const BYTE* const iend = ip + inputSize;
const BYTE* const mflimit = iend - MFLIMIT;
const BYTE* const matchlimit = (iend - LASTLITERALS);
BYTE* optr = (BYTE*) dest;
BYTE* op = (BYTE*) dest;
BYTE* oend = op + maxOutputSize;
int ml, ml2, ml3, ml0;
const BYTE* ref = NULL;
const BYTE* start2 = NULL;
const BYTE* ref2 = NULL;
const BYTE* start3 = NULL;
const BYTE* ref3 = NULL;
const BYTE* start0;
const BYTE* ref0;
/* init */
*srcSizePtr = 0;
if (limit == limitedDestSize && maxOutputSize < 1) return 0; /* Impossible to store anything */
if ((U32)inputSize > (U32)LZ4_MAX_INPUT_SIZE) return 0; /* Unsupported input size, too large (or negative) */
ctx->end += inputSize;
if (limit == limitedDestSize) oend -= LASTLITERALS; /* Hack to support a limitation of the LZ4 decompressor */
if (inputSize < LZ4_minLength) goto _last_literals; /* Input too small, no compression (all literals) */
ip++;
/* Main Loop */
while (ip < mflimit) {
ml = LZ4HC_InsertAndFindBestMatch (ctx, ip, matchlimit, (&ref), maxNbAttempts);
if (!ml) { ip++; continue; }
/* saved, in case we would skip too much */
start0 = ip;
ref0 = ref;
ml0 = ml;
_Search2:
if (ip+ml < mflimit)
ml2 = LZ4HC_InsertAndGetWiderMatch(ctx, ip + ml - 2, ip + 0, matchlimit, ml, &ref2, &start2, maxNbAttempts);
else
ml2 = ml;
if (ml2 == ml) { /* No better match */
optr = op;
if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) goto _dest_overflow;
continue;
}
if (start0 < ip) {
if (start2 < ip + ml0) { /* empirical */
ip = start0;
ref = ref0;
ml = ml0;
}
}
/* Here, start0==ip */
if ((start2 - ip) < 3) { /* First Match too small : removed */
ml = ml2;
ip = start2;
ref =ref2;
goto _Search2;
}
_Search3:
/* At this stage, we have :
* ml2 > ml1, and
* ip1+3 <= ip2 (usually < ip1+ml1) */
if ((start2 - ip) < OPTIMAL_ML) {
int correction;
int new_ml = ml;
if (new_ml > OPTIMAL_ML) new_ml = OPTIMAL_ML;
if (ip+new_ml > start2 + ml2 - MINMATCH) new_ml = (int)(start2 - ip) + ml2 - MINMATCH;
correction = new_ml - (int)(start2 - ip);
if (correction > 0) {
start2 += correction;
ref2 += correction;
ml2 -= correction;
}
}
/* Now, we have start2 = ip+new_ml, with new_ml = min(ml, OPTIMAL_ML=18) */
if (start2 + ml2 < mflimit)
ml3 = LZ4HC_InsertAndGetWiderMatch(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3, maxNbAttempts);
else
ml3 = ml2;
if (ml3 == ml2) { /* No better match : 2 sequences to encode */
/* ip & ref are known; Now for ml */
if (start2 < ip+ml) ml = (int)(start2 - ip);
/* Now, encode 2 sequences */
optr = op;
if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) goto _dest_overflow;
ip = start2;
optr = op;
if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml2, ref2, limit, oend)) goto _dest_overflow;
continue;
}
if (start3 < ip+ml+3) { /* Not enough space for match 2 : remove it */
if (start3 >= (ip+ml)) { /* can write Seq1 immediately ==> Seq2 is removed, so Seq3 becomes Seq1 */
if (start2 < ip+ml) {
int correction = (int)(ip+ml - start2);
start2 += correction;
ref2 += correction;
ml2 -= correction;
if (ml2 < MINMATCH) {
start2 = start3;
ref2 = ref3;
ml2 = ml3;
}
}
optr = op;
if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) goto _dest_overflow;
ip = start3;
ref = ref3;
ml = ml3;
start0 = start2;
ref0 = ref2;
ml0 = ml2;
goto _Search2;
}
start2 = start3;
ref2 = ref3;
ml2 = ml3;
goto _Search3;
}
/*
* OK, now we have 3 ascending matches; let's write at least the first one
* ip & ref are known; Now for ml
*/
if (start2 < ip+ml) {
if ((start2 - ip) < (int)ML_MASK) {
int correction;
if (ml > OPTIMAL_ML) ml = OPTIMAL_ML;
if (ip + ml > start2 + ml2 - MINMATCH) ml = (int)(start2 - ip) + ml2 - MINMATCH;
correction = ml - (int)(start2 - ip);
if (correction > 0) {
start2 += correction;
ref2 += correction;
ml2 -= correction;
}
} else {
ml = (int)(start2 - ip);
}
}
optr = op;
if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) goto _dest_overflow;
ip = start2;
ref = ref2;
ml = ml2;
start2 = start3;
ref2 = ref3;
ml2 = ml3;
goto _Search3;
}
_last_literals:
/* Encode Last Literals */
{ size_t lastRunSize = (size_t)(iend - anchor); /* literals */
size_t litLength = (lastRunSize + 255 - RUN_MASK) / 255;
size_t const totalSize = 1 + litLength + lastRunSize;
if (limit == limitedDestSize) oend += LASTLITERALS; /* restore correct value */
if (limit && (op + totalSize > oend)) {
if (limit == limitedOutput) return 0; /* Check output limit */
/* adapt lastRunSize to fill 'dest' */
lastRunSize = (size_t)(oend - op) - 1;
litLength = (lastRunSize + 255 - RUN_MASK) / 255;
lastRunSize -= litLength;
}
ip = anchor + lastRunSize;
if (lastRunSize >= RUN_MASK) {
size_t accumulator = lastRunSize - RUN_MASK;
*op++ = (RUN_MASK << ML_BITS);
for(; accumulator >= 255 ; accumulator -= 255) *op++ = 255;
*op++ = (BYTE) accumulator;
} else {
*op++ = (BYTE)(lastRunSize << ML_BITS);
}
memcpy(op, anchor, lastRunSize);
op += lastRunSize;
}
/* End */
*srcSizePtr = (int) (((const char*)ip) - source);
return (int) (((char*)op)-dest);
_dest_overflow:
if (limit == limitedDestSize) {
op = optr; /* restore correct out pointer */
goto _last_literals;
}
return 0;
}
static int LZ4HC_getSearchNum(int compressionLevel)
{
switch (compressionLevel) {
default: return 0; /* unused */
case 11: return 128;
case 12: return 1<<10;
}
}
static int LZ4HC_compress_generic (
LZ4HC_CCtx_internal* const ctx,
const char* const src,
char* const dst,
int* const srcSizePtr,
int const dstCapacity,
int cLevel,
limitedOutput_directive limit
)
{
if (cLevel < 1) cLevel = LZ4HC_CLEVEL_DEFAULT; /* note : convention is different from lz4frame, maybe to reconsider */
if (cLevel > 9) {
if (limit == limitedDestSize) cLevel = 10;
switch (cLevel) {
case 10:
return LZ4HC_compress_hashChain(ctx, src, dst, srcSizePtr, dstCapacity, 1 << 12, limit);
case 11:
ctx->searchNum = LZ4HC_getSearchNum(cLevel);
return LZ4HC_compress_optimal(ctx, src, dst, *srcSizePtr, dstCapacity, limit, 128, 0);
default:
cLevel = 12;
/* fall-through */
case 12:
ctx->searchNum = LZ4HC_getSearchNum(cLevel);
return LZ4HC_compress_optimal(ctx, src, dst, *srcSizePtr, dstCapacity, limit, LZ4_OPT_NUM, 1);
}
}
return LZ4HC_compress_hashChain(ctx, src, dst, srcSizePtr, dstCapacity, 1 << (cLevel-1), limit); /* levels 1-9 */
}
int LZ4_sizeofStateHC(void) { return sizeof(LZ4_streamHC_t); }
int LZ4_compress_HC_extStateHC (void* state, const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel)
{
LZ4HC_CCtx_internal* const ctx = &((LZ4_streamHC_t*)state)->internal_donotuse;
if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0; /* Error : state is not aligned for pointers (32 or 64 bits) */
LZ4HC_init (ctx, (const BYTE*)src);
if (dstCapacity < LZ4_compressBound(srcSize))
return LZ4HC_compress_generic (ctx, src, dst, &srcSize, dstCapacity, compressionLevel, limitedOutput);
else
return LZ4HC_compress_generic (ctx, src, dst, &srcSize, dstCapacity, compressionLevel, noLimit);
}
int LZ4_compress_HC(const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel)
{
#if defined(LZ4HC_HEAPMODE) && LZ4HC_HEAPMODE==1
LZ4_streamHC_t* const statePtr = (LZ4_streamHC_t*)malloc(sizeof(LZ4_streamHC_t));
#else
LZ4_streamHC_t state;
LZ4_streamHC_t* const statePtr = &state;
#endif
int const cSize = LZ4_compress_HC_extStateHC(statePtr, src, dst, srcSize, dstCapacity, compressionLevel);
#if defined(LZ4HC_HEAPMODE) && LZ4HC_HEAPMODE==1
free(statePtr);
#endif
return cSize;
}
/* LZ4_compress_HC_destSize() :
* currently only compatible with the Hash Chain implementation,
* hence the compression level is limited to LZ4HC_CLEVEL_OPT_MIN-1 */
int LZ4_compress_HC_destSize(void* LZ4HC_Data, const char* source, char* dest, int* sourceSizePtr, int targetDestSize, int cLevel)
{
LZ4HC_CCtx_internal* const ctx = &((LZ4_streamHC_t*)LZ4HC_Data)->internal_donotuse;
LZ4HC_init(ctx, (const BYTE*) source);
return LZ4HC_compress_generic(ctx, source, dest, sourceSizePtr, targetDestSize, cLevel, limitedDestSize);
}
/**************************************
* Streaming Functions
**************************************/
/* allocation */
LZ4_streamHC_t* LZ4_createStreamHC(void) { return (LZ4_streamHC_t*)malloc(sizeof(LZ4_streamHC_t)); }
int LZ4_freeStreamHC (LZ4_streamHC_t* LZ4_streamHCPtr) {
if (!LZ4_streamHCPtr) return 0; /* support free on NULL */
free(LZ4_streamHCPtr);
return 0;
}
/* initialization */
void LZ4_resetStreamHC (LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel)
{
LZ4_STATIC_ASSERT(sizeof(LZ4HC_CCtx_internal) <= sizeof(size_t) * LZ4_STREAMHCSIZE_SIZET); /* if compilation fails here, LZ4_STREAMHCSIZE must be increased */
LZ4_streamHCPtr->internal_donotuse.base = NULL;
if (compressionLevel > LZ4HC_CLEVEL_MAX) compressionLevel = LZ4HC_CLEVEL_MAX; /* cap compression level */
LZ4_streamHCPtr->internal_donotuse.compressionLevel = compressionLevel;
LZ4_streamHCPtr->internal_donotuse.searchNum = LZ4HC_getSearchNum(compressionLevel);
}
void LZ4_setCompressionLevel(LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel)
{
int const currentCLevel = LZ4_streamHCPtr->internal_donotuse.compressionLevel;
int const minCLevel = currentCLevel < LZ4HC_CLEVEL_OPT_MIN ? 1 : LZ4HC_CLEVEL_OPT_MIN;
int const maxCLevel = currentCLevel < LZ4HC_CLEVEL_OPT_MIN ? LZ4HC_CLEVEL_OPT_MIN-1 : LZ4HC_CLEVEL_MAX;
compressionLevel = MIN(compressionLevel, minCLevel);
compressionLevel = MAX(compressionLevel, maxCLevel);
LZ4_streamHCPtr->internal_donotuse.compressionLevel = compressionLevel;
}
int LZ4_loadDictHC (LZ4_streamHC_t* LZ4_streamHCPtr, const char* dictionary, int dictSize)
{
LZ4HC_CCtx_internal* const ctxPtr = &LZ4_streamHCPtr->internal_donotuse;
if (dictSize > 64 KB) {
dictionary += dictSize - 64 KB;
dictSize = 64 KB;
}
LZ4HC_init (ctxPtr, (const BYTE*)dictionary);
ctxPtr->end = (const BYTE*)dictionary + dictSize;
if (ctxPtr->compressionLevel >= LZ4HC_CLEVEL_OPT_MIN)
LZ4HC_updateBinTree(ctxPtr, ctxPtr->end - MFLIMIT, ctxPtr->end - LASTLITERALS);
else
if (dictSize >= 4) LZ4HC_Insert (ctxPtr, ctxPtr->end-3);
return dictSize;
}
/* compression */
static void LZ4HC_setExternalDict(LZ4HC_CCtx_internal* ctxPtr, const BYTE* newBlock)
{
if (ctxPtr->compressionLevel >= LZ4HC_CLEVEL_OPT_MIN)
LZ4HC_updateBinTree(ctxPtr, ctxPtr->end - MFLIMIT, ctxPtr->end - LASTLITERALS);
else
if (ctxPtr->end >= ctxPtr->base + 4) LZ4HC_Insert (ctxPtr, ctxPtr->end-3); /* Referencing remaining dictionary content */
/* Only one memory segment for extDict, so any previous extDict is lost at this stage */
ctxPtr->lowLimit = ctxPtr->dictLimit;
ctxPtr->dictLimit = (U32)(ctxPtr->end - ctxPtr->base);
ctxPtr->dictBase = ctxPtr->base;
ctxPtr->base = newBlock - ctxPtr->dictLimit;
ctxPtr->end = newBlock;
ctxPtr->nextToUpdate = ctxPtr->dictLimit; /* match referencing will resume from there */
}
static int LZ4_compressHC_continue_generic (LZ4_streamHC_t* LZ4_streamHCPtr,
const char* src, char* dst,
int* srcSizePtr, int dstCapacity,
limitedOutput_directive limit)
{
LZ4HC_CCtx_internal* const ctxPtr = &LZ4_streamHCPtr->internal_donotuse;
/* auto-init if forgotten */
if (ctxPtr->base == NULL) LZ4HC_init (ctxPtr, (const BYTE*) src);
/* Check overflow */
if ((size_t)(ctxPtr->end - ctxPtr->base) > 2 GB) {
size_t dictSize = (size_t)(ctxPtr->end - ctxPtr->base) - ctxPtr->dictLimit;
if (dictSize > 64 KB) dictSize = 64 KB;
LZ4_loadDictHC(LZ4_streamHCPtr, (const char*)(ctxPtr->end) - dictSize, (int)dictSize);
}
/* Check if blocks follow each other */
if ((const BYTE*)src != ctxPtr->end) LZ4HC_setExternalDict(ctxPtr, (const BYTE*)src);
/* Check overlapping input/dictionary space */
{ const BYTE* sourceEnd = (const BYTE*) src + *srcSizePtr;
const BYTE* const dictBegin = ctxPtr->dictBase + ctxPtr->lowLimit;
const BYTE* const dictEnd = ctxPtr->dictBase + ctxPtr->dictLimit;
if ((sourceEnd > dictBegin) && ((const BYTE*)src < dictEnd)) {
if (sourceEnd > dictEnd) sourceEnd = dictEnd;
ctxPtr->lowLimit = (U32)(sourceEnd - ctxPtr->dictBase);
if (ctxPtr->dictLimit - ctxPtr->lowLimit < 4) ctxPtr->lowLimit = ctxPtr->dictLimit;
}
}
return LZ4HC_compress_generic (ctxPtr, src, dst, srcSizePtr, dstCapacity, ctxPtr->compressionLevel, limit);
}
int LZ4_compress_HC_continue (LZ4_streamHC_t* LZ4_streamHCPtr, const char* src, char* dst, int srcSize, int dstCapacity)
{
if (dstCapacity < LZ4_compressBound(srcSize))
return LZ4_compressHC_continue_generic (LZ4_streamHCPtr, src, dst, &srcSize, dstCapacity, limitedOutput);
else
return LZ4_compressHC_continue_generic (LZ4_streamHCPtr, src, dst, &srcSize, dstCapacity, noLimit);
}
int LZ4_compress_HC_continue_destSize (LZ4_streamHC_t* LZ4_streamHCPtr, const char* src, char* dst, int* srcSizePtr, int targetDestSize)
{
LZ4HC_CCtx_internal* const ctxPtr = &LZ4_streamHCPtr->internal_donotuse;
if (ctxPtr->compressionLevel >= LZ4HC_CLEVEL_OPT_MIN) LZ4HC_init(ctxPtr, (const BYTE*)src); /* not compatible with btopt implementation */
return LZ4_compressHC_continue_generic(LZ4_streamHCPtr, src, dst, srcSizePtr, targetDestSize, limitedDestSize);
}
/* dictionary saving */
int LZ4_saveDictHC (LZ4_streamHC_t* LZ4_streamHCPtr, char* safeBuffer, int dictSize)
{
LZ4HC_CCtx_internal* const streamPtr = &LZ4_streamHCPtr->internal_donotuse;
int const prefixSize = (int)(streamPtr->end - (streamPtr->base + streamPtr->dictLimit));
if (dictSize > 64 KB) dictSize = 64 KB;
if (dictSize < 4) dictSize = 0;
if (dictSize > prefixSize) dictSize = prefixSize;
memmove(safeBuffer, streamPtr->end - dictSize, dictSize);
{ U32 const endIndex = (U32)(streamPtr->end - streamPtr->base);
streamPtr->end = (const BYTE*)safeBuffer + dictSize;
streamPtr->base = streamPtr->end - endIndex;
streamPtr->dictLimit = endIndex - dictSize;
streamPtr->lowLimit = endIndex - dictSize;
if (streamPtr->nextToUpdate < streamPtr->dictLimit) streamPtr->nextToUpdate = streamPtr->dictLimit;
}
return dictSize;
}
/***********************************
* Deprecated Functions
***********************************/
/* These functions currently generate deprecation warnings */
/* Deprecated compression functions */
int LZ4_compressHC(const char* src, char* dst, int srcSize) { return LZ4_compress_HC (src, dst, srcSize, LZ4_compressBound(srcSize), 0); }
int LZ4_compressHC_limitedOutput(const char* src, char* dst, int srcSize, int maxDstSize) { return LZ4_compress_HC(src, dst, srcSize, maxDstSize, 0); }
int LZ4_compressHC2(const char* src, char* dst, int srcSize, int cLevel) { return LZ4_compress_HC (src, dst, srcSize, LZ4_compressBound(srcSize), cLevel); }
int LZ4_compressHC2_limitedOutput(const char* src, char* dst, int srcSize, int maxDstSize, int cLevel) { return LZ4_compress_HC(src, dst, srcSize, maxDstSize, cLevel); }
int LZ4_compressHC_withStateHC (void* state, const char* src, char* dst, int srcSize) { return LZ4_compress_HC_extStateHC (state, src, dst, srcSize, LZ4_compressBound(srcSize), 0); }
int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* src, char* dst, int srcSize, int maxDstSize) { return LZ4_compress_HC_extStateHC (state, src, dst, srcSize, maxDstSize, 0); }
int LZ4_compressHC2_withStateHC (void* state, const char* src, char* dst, int srcSize, int cLevel) { return LZ4_compress_HC_extStateHC(state, src, dst, srcSize, LZ4_compressBound(srcSize), cLevel); }
int LZ4_compressHC2_limitedOutput_withStateHC (void* state, const char* src, char* dst, int srcSize, int maxDstSize, int cLevel) { return LZ4_compress_HC_extStateHC(state, src, dst, srcSize, maxDstSize, cLevel); }
int LZ4_compressHC_continue (LZ4_streamHC_t* ctx, const char* src, char* dst, int srcSize) { return LZ4_compress_HC_continue (ctx, src, dst, srcSize, LZ4_compressBound(srcSize)); }
int LZ4_compressHC_limitedOutput_continue (LZ4_streamHC_t* ctx, const char* src, char* dst, int srcSize, int maxDstSize) { return LZ4_compress_HC_continue (ctx, src, dst, srcSize, maxDstSize); }
/* Deprecated streaming functions */
int LZ4_sizeofStreamStateHC(void) { return LZ4_STREAMHCSIZE; }
int LZ4_resetStreamStateHC(void* state, char* inputBuffer)
{
LZ4HC_CCtx_internal *ctx = &((LZ4_streamHC_t*)state)->internal_donotuse;
if ((((size_t)state) & (sizeof(void*)-1)) != 0) return 1; /* Error : pointer is not aligned for pointer (32 or 64 bits) */
LZ4HC_init(ctx, (const BYTE*)inputBuffer);
ctx->inputBuffer = (BYTE*)inputBuffer;
return 0;
}
void* LZ4_createHC (char* inputBuffer)
{
LZ4_streamHC_t* hc4 = (LZ4_streamHC_t*)ALLOCATOR(1, sizeof(LZ4_streamHC_t));
if (hc4 == NULL) return NULL; /* not enough memory */
LZ4HC_init (&hc4->internal_donotuse, (const BYTE*)inputBuffer);
hc4->internal_donotuse.inputBuffer = (BYTE*)inputBuffer;
return hc4;
}
int LZ4_freeHC (void* LZ4HC_Data) {
if (!LZ4HC_Data) return 0; /* support free on NULL */
FREEMEM(LZ4HC_Data);
return 0;
}
int LZ4_compressHC2_continue (void* LZ4HC_Data, const char* src, char* dst, int srcSize, int cLevel)
{
return LZ4HC_compress_generic (&((LZ4_streamHC_t*)LZ4HC_Data)->internal_donotuse, src, dst, &srcSize, 0, cLevel, noLimit);
}
int LZ4_compressHC2_limitedOutput_continue (void* LZ4HC_Data, const char* src, char* dst, int srcSize, int dstCapacity, int cLevel)
{
return LZ4HC_compress_generic (&((LZ4_streamHC_t*)LZ4HC_Data)->internal_donotuse, src, dst, &srcSize, dstCapacity, cLevel, limitedOutput);
}
char* LZ4_slideInputBufferHC(void* LZ4HC_Data)
{
LZ4HC_CCtx_internal* const hc4 = &((LZ4_streamHC_t*)LZ4HC_Data)->internal_donotuse;
int const dictSize = LZ4_saveDictHC((LZ4_streamHC_t*)LZ4HC_Data, (char*)(hc4->inputBuffer), 64 KB);
return (char*)(hc4->inputBuffer + dictSize);
}


@@ -1,278 +0,0 @@
/*
LZ4 HC - High Compression Mode of LZ4
Header File
Copyright (C) 2011-2017, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 source repository : https://github.com/lz4/lz4
- LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
*/
#ifndef LZ4_HC_H_19834876238432
#define LZ4_HC_H_19834876238432
#if defined (__cplusplus)
extern "C" {
#endif
/* --- Dependency --- */
/* note : lz4hc is not an independent module, it requires lz4.h/lz4.c for proper compilation */
#include "lz4.h" /* stddef, LZ4LIB_API, LZ4_DEPRECATED */
/* --- Useful constants --- */
#define LZ4HC_CLEVEL_MIN 3
#define LZ4HC_CLEVEL_DEFAULT 9
#define LZ4HC_CLEVEL_OPT_MIN 11
#define LZ4HC_CLEVEL_MAX 12
/*-************************************
* Block Compression
**************************************/
/*! LZ4_compress_HC() :
* Compress data from `src` into `dst`, using the more powerful but slower "HC" algorithm.
* `dst` must be already allocated.
* Compression is guaranteed to succeed if `dstCapacity >= LZ4_compressBound(srcSize)` (see "lz4.h")
* Max supported `srcSize` value is LZ4_MAX_INPUT_SIZE (see "lz4.h")
* `compressionLevel` : Recommended values are between 4 and 9, although any value between 1 and LZ4HC_CLEVEL_MAX will work.
* Values >LZ4HC_CLEVEL_MAX behave the same as LZ4HC_CLEVEL_MAX.
* @return : the number of bytes written into 'dst'
* or 0 if compression fails.
*/
LZ4LIB_API int LZ4_compress_HC (const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel);
/* Note :
* Decompression functions are provided within "lz4.h" (BSD license)
*/
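/* Editor's sketch (not part of the upstream header): a minimal round trip using the declaration
 * above together with LZ4_compressBound() and LZ4_decompress_safe() from lz4.h. */
#include <stdlib.h>
#include <string.h>
#include "lz4.h"
#include "lz4hc.h"

static int hc_roundtrip(const char* src, int srcSize)
{
    int const bound = LZ4_compressBound(srcSize);
    char* const compressed = (char*)malloc((size_t)bound);
    char* const restored   = (char*)malloc((size_t)srcSize);
    int ok = 0;
    if (compressed != NULL && restored != NULL) {
        int const cSize = LZ4_compress_HC(src, compressed, srcSize, bound, LZ4HC_CLEVEL_DEFAULT);
        int const dSize = (cSize > 0) ? LZ4_decompress_safe(compressed, restored, cSize, srcSize) : -1;
        ok = (dSize == srcSize) && (memcmp(src, restored, (size_t)srcSize) == 0);
    }
    free(compressed);
    free(restored);
    return ok;   /* 1 on success */
}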
/*! LZ4_compress_HC_extStateHC() :
* Same as LZ4_compress_HC(), but using an externally allocated memory segment for `state`.
* `state` size is provided by LZ4_sizeofStateHC().
* Memory segment must be aligned on 8-byte boundaries (which a normal malloc() will do properly).
*/
LZ4LIB_API int LZ4_compress_HC_extStateHC(void* state, const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel);
LZ4LIB_API int LZ4_sizeofStateHC(void);
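/* Editor's sketch (not part of the upstream header): the extState variant lets the caller own the
 * working memory, e.g. to avoid the library calling malloc() internally. */
#include <stdlib.h>
#include "lz4hc.h"

static int compress_with_own_state(const char* src, char* dst, int srcSize, int dstCapacity)
{
    void* const state = malloc((size_t)LZ4_sizeofStateHC());   /* malloc() satisfies the 8-byte alignment */
    int cSize = 0;
    if (state != NULL) {
        cSize = LZ4_compress_HC_extStateHC(state, src, dst, srcSize, dstCapacity, LZ4HC_CLEVEL_DEFAULT);
        free(state);
    }
    return cSize;   /* 0 means failure */
}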
/*-************************************
* Streaming Compression
* Bufferless synchronous API
**************************************/
typedef union LZ4_streamHC_u LZ4_streamHC_t; /* incomplete type (defined later) */
/*! LZ4_createStreamHC() and LZ4_freeStreamHC() :
* These functions create and release memory for LZ4 HC streaming state.
* Newly created states are automatically initialized.
* Existing states can be re-used several times, using LZ4_resetStreamHC().
* These methods are API and ABI stable, they can be used in combination with a DLL.
*/
LZ4LIB_API LZ4_streamHC_t* LZ4_createStreamHC(void);
LZ4LIB_API int LZ4_freeStreamHC (LZ4_streamHC_t* streamHCPtr);
LZ4LIB_API void LZ4_resetStreamHC (LZ4_streamHC_t* streamHCPtr, int compressionLevel);
LZ4LIB_API int LZ4_loadDictHC (LZ4_streamHC_t* streamHCPtr, const char* dictionary, int dictSize);
LZ4LIB_API int LZ4_compress_HC_continue (LZ4_streamHC_t* streamHCPtr, const char* src, char* dst, int srcSize, int maxDstSize);
LZ4LIB_API int LZ4_saveDictHC (LZ4_streamHC_t* streamHCPtr, char* safeBuffer, int maxDictSize);
/*
These functions compress data in successive blocks of any size, using previous blocks as dictionary.
One key assumption is that previous blocks (up to 64 KB) remain read-accessible while compressing next blocks.
There is an exception for ring buffers, which can be smaller than 64 KB.
Ring buffers scenario is automatically detected and handled by LZ4_compress_HC_continue().
Before starting compression, state must be properly initialized, using LZ4_resetStreamHC().
A first "fictional block" can then be designated as initial dictionary, using LZ4_loadDictHC() (Optional).
Then, use LZ4_compress_HC_continue() to compress each successive block.
Previous memory blocks (including initial dictionary when present) must remain accessible and unmodified during compression.
'dst' buffer should be sized to handle worst-case scenarios (see LZ4_compressBound()), to ensure operation success,
because in case of failure the API does not guarantee context recovery, and the context will have to be reset.
If `dst` buffer budget cannot be >= LZ4_compressBound(), consider using LZ4_compress_HC_continue_destSize() instead.
If, for any reason, previous data block can't be preserved unmodified in memory for next compression block,
you can save it to a more stable memory space, using LZ4_saveDictHC().
Return value of LZ4_saveDictHC() is the size of dictionary effectively saved into 'safeBuffer'.
*/
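/* Editor's sketch (not part of the upstream header) of the pattern described above: a double buffer
 * keeps the previous 64 KB block readable while the next one is compressed, which is the key
 * requirement of LZ4_compress_HC_continue(). The framing (a naive size prefix) and buffer layout are
 * this sketch's own choices, not part of the LZ4 API. */
#include <stdio.h>
#include "lz4.h"      /* LZ4_COMPRESSBOUND */
#include "lz4hc.h"

static int compress_blocks(FILE* in, FILE* out, int level)
{
    static char inBuf[2][64 * 1024];                    /* static: the previous block stays accessible */
    static char dstBuf[LZ4_COMPRESSBOUND(64 * 1024)];
    LZ4_streamHC_t* const stream = LZ4_createStreamHC();
    int idx = 0;
    if (stream == NULL) return -1;
    LZ4_resetStreamHC(stream, level);

    for (;;) {
        int const srcSize = (int)fread(inBuf[idx], 1, sizeof(inBuf[idx]), in);
        int cSize;
        if (srcSize <= 0) break;
        cSize = LZ4_compress_HC_continue(stream, inBuf[idx], dstBuf, srcSize, (int)sizeof(dstBuf));
        if (cSize <= 0) { LZ4_freeStreamHC(stream); return -1; }
        fwrite(&cSize, sizeof(cSize), 1, out);          /* naive framing: block size prefix (sizeof(int) bytes) */
        fwrite(dstBuf, 1, (size_t)cSize, out);
        idx ^= 1;                                       /* switch buffers; the old block acts as the dictionary */
    }
    LZ4_freeStreamHC(stream);
    return 0;
}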
/*-*************************************
* PRIVATE DEFINITIONS :
* Do not use these definitions.
* They are exposed to allow static allocation of `LZ4_streamHC_t`.
* Using these definitions makes the code vulnerable to potential API break when upgrading LZ4
**************************************/
#define LZ4HC_DICTIONARY_LOGSIZE 17 /* because of btopt, hc would only need 16 */
#define LZ4HC_MAXD (1<<LZ4HC_DICTIONARY_LOGSIZE)
#define LZ4HC_MAXD_MASK (LZ4HC_MAXD - 1)
#define LZ4HC_HASH_LOG 15
#define LZ4HC_HASHTABLESIZE (1 << LZ4HC_HASH_LOG)
#define LZ4HC_HASH_MASK (LZ4HC_HASHTABLESIZE - 1)
#if defined(__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
#include <stdint.h>
typedef struct
{
uint32_t hashTable[LZ4HC_HASHTABLESIZE];
uint16_t chainTable[LZ4HC_MAXD];
const uint8_t* end; /* next block here to continue on current prefix */
const uint8_t* base; /* All index relative to this position */
const uint8_t* dictBase; /* alternate base for extDict */
uint8_t* inputBuffer; /* deprecated */
uint32_t dictLimit; /* below that point, need extDict */
uint32_t lowLimit; /* below that point, no more dict */
uint32_t nextToUpdate; /* index from which to continue dictionary update */
uint32_t searchNum; /* only for optimal parser */
uint32_t compressionLevel;
} LZ4HC_CCtx_internal;
#else
typedef struct
{
unsigned int hashTable[LZ4HC_HASHTABLESIZE];
unsigned short chainTable[LZ4HC_MAXD];
const unsigned char* end; /* next block here to continue on current prefix */
const unsigned char* base; /* All index relative to this position */
const unsigned char* dictBase; /* alternate base for extDict */
unsigned char* inputBuffer; /* deprecated */
unsigned int dictLimit; /* below that point, need extDict */
unsigned int lowLimit; /* below that point, no more dict */
unsigned int nextToUpdate; /* index from which to continue dictionary update */
unsigned int searchNum; /* only for optimal parser */
int compressionLevel;
} LZ4HC_CCtx_internal;
#endif
#define LZ4_STREAMHCSIZE (4*LZ4HC_HASHTABLESIZE + 2*LZ4HC_MAXD + 56) /* 393268 */
#define LZ4_STREAMHCSIZE_SIZET (LZ4_STREAMHCSIZE / sizeof(size_t))
union LZ4_streamHC_u {
size_t table[LZ4_STREAMHCSIZE_SIZET];
LZ4HC_CCtx_internal internal_donotuse;
}; /* previously typedef'd to LZ4_streamHC_t */
/*
LZ4_streamHC_t :
This structure allows static allocation of LZ4 HC streaming state.
State must be initialized using LZ4_resetStreamHC() before first use.
Static allocation shall only be used in combination with static linking.
When invoking LZ4 from a DLL, use create/free functions instead, which are API and ABI stable.
*/
/*-************************************
* Deprecated Functions
**************************************/
/* see lz4.h LZ4_DISABLE_DEPRECATE_WARNINGS to turn off deprecation warnings */
/* deprecated compression functions */
/* these functions will trigger warning messages in future releases */
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC() instead") int LZ4_compressHC (const char* source, char* dest, int inputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC() instead") int LZ4_compressHC_limitedOutput (const char* source, char* dest, int inputSize, int maxOutputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC() instead") int LZ4_compressHC2 (const char* source, char* dest, int inputSize, int compressionLevel);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC() instead") int LZ4_compressHC2_limitedOutput (const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") int LZ4_compressHC_withStateHC (void* state, const char* source, char* dest, int inputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") int LZ4_compressHC2_withStateHC (void* state, const char* source, char* dest, int inputSize, int compressionLevel);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") int LZ4_compressHC2_limitedOutput_withStateHC(void* state, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") int LZ4_compressHC_continue (LZ4_streamHC_t* LZ4_streamHCPtr, const char* source, char* dest, int inputSize);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") int LZ4_compressHC_limitedOutput_continue (LZ4_streamHC_t* LZ4_streamHCPtr, const char* source, char* dest, int inputSize, int maxOutputSize);
/* Deprecated Streaming functions using older model; should no longer be used */
LZ4LIB_API LZ4_DEPRECATED("use LZ4_createStreamHC() instead") void* LZ4_createHC (char* inputBuffer);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_saveDictHC() instead") char* LZ4_slideInputBufferHC (void* LZ4HC_Data);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_freeStreamHC() instead") int LZ4_freeHC (void* LZ4HC_Data);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") int LZ4_compressHC2_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int compressionLevel);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") int LZ4_compressHC2_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_createStreamHC() instead") int LZ4_sizeofStreamStateHC(void);
LZ4LIB_API LZ4_DEPRECATED("use LZ4_resetStreamHC() instead") int LZ4_resetStreamStateHC(void* state, char* inputBuffer);
#if defined (__cplusplus)
}
#endif
#endif /* LZ4_HC_H_19834876238432 */
/*-************************************************
* !!!!! STATIC LINKING ONLY !!!!!
* Following definitions are considered experimental.
* They should not be linked from DLL,
* as there is no guarantee of API stability yet.
* Prototypes will be promoted to "stable" status
* after successful usage in real-life scenarios.
*************************************************/
#ifdef LZ4_HC_STATIC_LINKING_ONLY /* protection macro */
#ifndef LZ4_HC_SLO_098092834
#define LZ4_HC_SLO_098092834
/*! LZ4_compress_HC_destSize() : v1.8.0 (experimental)
* Will try to compress as much data from `src` as possible
* that can fit into `targetDstSize` budget.
* Result is provided in 2 parts :
* @return : the number of bytes written into 'dst'
* or 0 if compression fails.
* `srcSizePtr` : value will be updated to indicate how many bytes were read from `src`
*/
int LZ4_compress_HC_destSize(void* LZ4HC_Data,
const char* src, char* dst,
int* srcSizePtr, int targetDstSize,
int compressionLevel);
/*! LZ4_compress_HC_continue_destSize() : v1.8.0 (experimental)
* Similar to LZ4_compress_HC_continue(),
* but will read a variable number of bytes from `src`
* to fit into the `targetDstSize` budget.
* Result is provided in 2 parts :
* @return : the number of bytes written into 'dst'
* or 0 if compression fails.
* `srcSizePtr` : value will be updated to indicate how many bytes were read from `src`.
* Important : due to limitations, this prototype only works well for cLevel < LZ4HC_CLEVEL_OPT_MIN;
* beyond that level, compression performance will be much reduced due to internal incompatibilities
*/
int LZ4_compress_HC_continue_destSize(LZ4_streamHC_t* LZ4_streamHCPtr,
const char* src, char* dst,
int* srcSizePtr, int targetDstSize);
/*! LZ4_setCompressionLevel() : v1.8.0 (experimental)
* It's possible to change compression level after LZ4_resetStreamHC(), between 2 invocations of LZ4_compress_HC_continue*(),
* but that requires staying in the same mode (i.e. levels 1-10 or 11-12).
* This function enforces this condition.
*/
void LZ4_setCompressionLevel(LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel);
#endif /* LZ4_HC_SLO_098092834 */
#endif /* LZ4_HC_STATIC_LINKING_ONLY */
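An editor's sketch of the destSize pattern (not part of the vendored header). As the implementation in lz4hc.c above suggests, the opaque LZ4HC_Data argument is simply a caller-provided state buffer of LZ4_sizeofStateHC() bytes, so an LZ4_streamHC_t works; the 4 KB slot and helper name are this sketch's own choices.

#define LZ4_HC_STATIC_LINKING_ONLY   /* the destSize prototypes sit behind this guard */
#include <stdlib.h>
#include "lz4hc.h"

/* Pack as much of `src` as will fit into a fixed 4 KB output slot.
 * Returns how many source bytes were consumed (0 if nothing could be stored). */
static int fill_fixed_slot(const char* src, int srcSize, char* slot4k)
{
    LZ4_streamHC_t* const state = (LZ4_streamHC_t*)malloc(sizeof(LZ4_streamHC_t));
    int consumed = srcSize;
    int written = 0;
    if (state != NULL) {
        written = LZ4_compress_HC_destSize(state, src, slot4k, &consumed, 4096,
                                           LZ4HC_CLEVEL_OPT_MIN - 1);   /* levels >= OPT_MIN unsupported; see lz4hc.c */
        free(state);
    }
    return (written > 0) ? consumed : 0;
}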


@@ -1,366 +0,0 @@
/*
lz4opt.h - Optimal Mode of LZ4
Copyright (C) 2015-2017, Przemyslaw Skibinski <inikep@gmail.com>
Note : this file is intended to be included within lz4hc.c
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 source repository : https://github.com/lz4/lz4
- LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
*/
#define LZ4_OPT_NUM (1<<12)
typedef struct {
int off;
int len;
} LZ4HC_match_t;
typedef struct {
int price;
int off;
int mlen;
int litlen;
} LZ4HC_optimal_t;
/* price in bytes */
FORCE_INLINE size_t LZ4HC_literalsPrice(size_t litlen)
{
size_t price = litlen;
if (litlen >= (size_t)RUN_MASK)
price += 1 + (litlen-RUN_MASK)/255;
return price;
}
/* requires mlen >= MINMATCH */
FORCE_INLINE size_t LZ4HC_sequencePrice(size_t litlen, size_t mlen)
{
size_t price = 2 + 1; /* 16-bit offset + token */
price += LZ4HC_literalsPrice(litlen);
if (mlen >= (size_t)(ML_MASK+MINMATCH))
price+= 1 + (mlen-(ML_MASK+MINMATCH))/255;
return price;
}
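/* Editor's note (illustration, not part of the upstream file): with the standard LZ4 constants
 * (RUN_MASK == ML_MASK == 15, MINMATCH == 4), a sequence of 20 literals followed by a 70-byte match
 * is priced as:
 *   LZ4HC_literalsPrice(20)     = 20 + 1          = 21   (one extra length byte, since 20 >= 15)
 *   LZ4HC_sequencePrice(20, 70) = 2 + 1 + 21 + 1  = 25   (2-byte offset + token + literals
 *                                                          + one extra match-length byte, since 70 >= 19)
 * which is exactly the 25 bytes the encoder would emit for that sequence. */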
/*-*************************************
* Binary Tree search
***************************************/
FORCE_INLINE int LZ4HC_BinTree_InsertAndGetAllMatches (
LZ4HC_CCtx_internal* ctx,
const BYTE* const ip,
const BYTE* const iHighLimit,
size_t best_mlen,
LZ4HC_match_t* matches,
int* matchNum)
{
U16* const chainTable = ctx->chainTable;
U32* const HashTable = ctx->hashTable;
const BYTE* const base = ctx->base;
const U32 dictLimit = ctx->dictLimit;
const U32 current = (U32)(ip - base);
const U32 lowLimit = (ctx->lowLimit + MAX_DISTANCE > current) ? ctx->lowLimit : current - (MAX_DISTANCE - 1);
const BYTE* const dictBase = ctx->dictBase;
const BYTE* match;
int nbAttempts = ctx->searchNum;
int mnum = 0;
U16 *ptr0, *ptr1, delta0, delta1;
U32 matchIndex;
size_t matchLength = 0;
U32* HashPos;
if (ip + MINMATCH > iHighLimit) return 1;
/* HC4 match finder */
HashPos = &HashTable[LZ4HC_hashPtr(ip)];
matchIndex = *HashPos;
*HashPos = current;
ptr0 = &DELTANEXTMAXD(current*2+1);
ptr1 = &DELTANEXTMAXD(current*2);
delta0 = delta1 = (U16)(current - matchIndex);
while ((matchIndex < current) && (matchIndex>=lowLimit) && (nbAttempts)) {
nbAttempts--;
if (matchIndex >= dictLimit) {
match = base + matchIndex;
matchLength = LZ4_count(ip, match, iHighLimit);
} else {
const BYTE* vLimit = ip + (dictLimit - matchIndex);
match = dictBase + matchIndex;
if (vLimit > iHighLimit) vLimit = iHighLimit;
matchLength = LZ4_count(ip, match, vLimit);
if ((ip+matchLength == vLimit) && (vLimit < iHighLimit))
matchLength += LZ4_count(ip+matchLength, base+dictLimit, iHighLimit);
if (matchIndex+matchLength >= dictLimit)
match = base + matchIndex; /* to prepare for next usage of match[matchLength] */
}
if (matchLength > best_mlen) {
best_mlen = matchLength;
if (matches) {
if (matchIndex >= dictLimit)
matches[mnum].off = (int)(ip - match);
else
matches[mnum].off = (int)(ip - (base + matchIndex)); /* virtual matchpos */
matches[mnum].len = (int)matchLength;
mnum++;
}
if (best_mlen > LZ4_OPT_NUM) break;
}
if (ip+matchLength >= iHighLimit) /* equal : no way to know if inf or sup */
break; /* drop , to guarantee consistency ; miss a bit of compression, but other solutions can corrupt the tree */
DEBUGLOG(6, "ip :%016llX", (U64)ip);
DEBUGLOG(6, "match:%016llX", (U64)match);
if (*(ip+matchLength) < *(match+matchLength)) {
*ptr0 = delta0;
ptr0 = &DELTANEXTMAXD(matchIndex*2);
if (*ptr0 == (U16)-1) break;
delta0 = *ptr0;
delta1 += delta0;
matchIndex -= delta0;
} else {
*ptr1 = delta1;
ptr1 = &DELTANEXTMAXD(matchIndex*2+1);
if (*ptr1 == (U16)-1) break;
delta1 = *ptr1;
delta0 += delta1;
matchIndex -= delta1;
}
}
*ptr0 = (U16)-1;
*ptr1 = (U16)-1;
if (matchNum) *matchNum = mnum;
/* if (best_mlen > 8) return best_mlen-8; */
if (!matchNum) return 1;
return 1;
}
FORCE_INLINE void LZ4HC_updateBinTree(LZ4HC_CCtx_internal* ctx, const BYTE* const ip, const BYTE* const iHighLimit)
{
const BYTE* const base = ctx->base;
const U32 target = (U32)(ip - base);
U32 idx = ctx->nextToUpdate;
while(idx < target)
idx += LZ4HC_BinTree_InsertAndGetAllMatches(ctx, base+idx, iHighLimit, 8, NULL, NULL);
}
/** Tree updater, providing best match */
FORCE_INLINE int LZ4HC_BinTree_GetAllMatches (
LZ4HC_CCtx_internal* ctx,
const BYTE* const ip, const BYTE* const iHighLimit,
size_t best_mlen, LZ4HC_match_t* matches, const int fullUpdate)
{
int mnum = 0;
if (ip < ctx->base + ctx->nextToUpdate) return 0; /* skipped area */
if (fullUpdate) LZ4HC_updateBinTree(ctx, ip, iHighLimit);
best_mlen = LZ4HC_BinTree_InsertAndGetAllMatches(ctx, ip, iHighLimit, best_mlen, matches, &mnum);
ctx->nextToUpdate = (U32)(ip - ctx->base + best_mlen);
return mnum;
}
#define SET_PRICE(pos, ml, offset, ll, cost) \
{ \
while (last_pos < pos) { opt[last_pos+1].price = 1<<30; last_pos++; } \
opt[pos].mlen = (int)ml; \
opt[pos].off = (int)offset; \
opt[pos].litlen = (int)ll; \
opt[pos].price = (int)cost; \
}
static int LZ4HC_compress_optimal (
LZ4HC_CCtx_internal* ctx,
const char* const source,
char* dest,
int inputSize,
int maxOutputSize,
limitedOutput_directive limit,
size_t sufficient_len,
const int fullUpdate
)
{
LZ4HC_optimal_t opt[LZ4_OPT_NUM + 1]; /* this uses a bit too much stack memory to my taste ... */
LZ4HC_match_t matches[LZ4_OPT_NUM + 1];
const BYTE* ip = (const BYTE*) source;
const BYTE* anchor = ip;
const BYTE* const iend = ip + inputSize;
const BYTE* const mflimit = iend - MFLIMIT;
const BYTE* const matchlimit = (iend - LASTLITERALS);
BYTE* op = (BYTE*) dest;
BYTE* const oend = op + maxOutputSize;
/* init */
DEBUGLOG(5, "LZ4HC_compress_optimal");
if (sufficient_len >= LZ4_OPT_NUM) sufficient_len = LZ4_OPT_NUM-1;
ctx->end += inputSize;
ip++;
/* Main Loop */
while (ip < mflimit) {
size_t const llen = ip - anchor;
size_t last_pos = 0;
size_t match_num, cur, best_mlen, best_off;
memset(opt, 0, sizeof(LZ4HC_optimal_t)); /* memset only the first one */
match_num = LZ4HC_BinTree_GetAllMatches(ctx, ip, matchlimit, MINMATCH-1, matches, fullUpdate);
if (!match_num) { ip++; continue; }
if ((size_t)matches[match_num-1].len > sufficient_len) {
/* good enough solution : immediate encoding */
best_mlen = matches[match_num-1].len;
best_off = matches[match_num-1].off;
cur = 0;
last_pos = 1;
goto encode;
}
/* set prices using matches at position = 0 */
{ size_t matchNb;
for (matchNb = 0; matchNb < match_num; matchNb++) {
size_t mlen = (matchNb>0) ? (size_t)matches[matchNb-1].len+1 : MINMATCH;
best_mlen = matches[matchNb].len; /* necessarily < sufficient_len < LZ4_OPT_NUM */
for ( ; mlen <= best_mlen ; mlen++) {
size_t const cost = LZ4HC_sequencePrice(llen, mlen) - LZ4HC_literalsPrice(llen);
SET_PRICE(mlen, mlen, matches[matchNb].off, 0, cost); /* updates last_pos and opt[pos] */
} } }
if (last_pos < MINMATCH) { ip++; continue; } /* note : on clang at least, this test improves performance */
/* check further positions */
opt[0].mlen = opt[1].mlen = 1;
for (cur = 1; cur <= last_pos; cur++) {
const BYTE* const curPtr = ip + cur;
/* establish baseline price if cur is literal */
{ size_t price, litlen;
if (opt[cur-1].mlen == 1) {
/* no match at previous position */
litlen = opt[cur-1].litlen + 1;
if (cur > litlen) {
price = opt[cur - litlen].price + LZ4HC_literalsPrice(litlen);
} else {
price = LZ4HC_literalsPrice(llen + litlen) - LZ4HC_literalsPrice(llen);
}
} else {
litlen = 1;
price = opt[cur - 1].price + LZ4HC_literalsPrice(1);
}
if (price < (size_t)opt[cur].price)
SET_PRICE(cur, 1 /*mlen*/, 0 /*off*/, litlen, price); /* note : increases last_pos */
}
if (cur == last_pos || curPtr >= mflimit) break;
match_num = LZ4HC_BinTree_GetAllMatches(ctx, curPtr, matchlimit, MINMATCH-1, matches, fullUpdate);
if ((match_num > 0) && (size_t)matches[match_num-1].len > sufficient_len) {
/* immediate encoding */
best_mlen = matches[match_num-1].len;
best_off = matches[match_num-1].off;
last_pos = cur + 1;
goto encode;
}
/* set prices using matches at position = cur */
{ size_t matchNb;
for (matchNb = 0; matchNb < match_num; matchNb++) {
size_t ml = (matchNb>0) ? (size_t)matches[matchNb-1].len+1 : MINMATCH;
best_mlen = (cur + matches[matchNb].len < LZ4_OPT_NUM) ?
(size_t)matches[matchNb].len : LZ4_OPT_NUM - cur;
for ( ; ml <= best_mlen ; ml++) {
size_t ll, price;
if (opt[cur].mlen == 1) {
ll = opt[cur].litlen;
if (cur > ll)
price = opt[cur - ll].price + LZ4HC_sequencePrice(ll, ml);
else
price = LZ4HC_sequencePrice(llen + ll, ml) - LZ4HC_literalsPrice(llen);
} else {
ll = 0;
price = opt[cur].price + LZ4HC_sequencePrice(0, ml);
}
if (cur + ml > last_pos || price < (size_t)opt[cur + ml].price) {
SET_PRICE(cur + ml, ml, matches[matchNb].off, ll, price);
} } } }
} /* for (cur = 1; cur <= last_pos; cur++) */
best_mlen = opt[last_pos].mlen;
best_off = opt[last_pos].off;
cur = last_pos - best_mlen;
encode: /* cur, last_pos, best_mlen, best_off must be set */
opt[0].mlen = 1;
while (1) { /* from end to beginning */
size_t const ml = opt[cur].mlen;
int const offset = opt[cur].off;
opt[cur].mlen = (int)best_mlen;
opt[cur].off = (int)best_off;
best_mlen = ml;
best_off = offset;
if (ml > cur) break; /* can this happen ? */
cur -= ml;
}
/* encode all recorded sequences */
cur = 0;
while (cur < last_pos) {
int const ml = opt[cur].mlen;
int const offset = opt[cur].off;
if (ml == 1) { ip++; cur++; continue; }
cur += ml;
if ( LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ip - offset, limit, oend) ) return 0;
}
} /* while (ip < mflimit) */
/* Encode Last Literals */
{ int lastRun = (int)(iend - anchor);
if ((limit) && (((char*)op - dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize)) return 0; /* Check output limit */
if (lastRun>=(int)RUN_MASK) { *op++=(RUN_MASK<<ML_BITS); lastRun-=RUN_MASK; for(; lastRun > 254 ; lastRun-=255) *op++ = 255; *op++ = (BYTE) lastRun; }
else *op++ = (BYTE)(lastRun<<ML_BITS);
memcpy(op, anchor, iend - anchor);
op += iend-anchor;
}
/* End */
return (int) ((char*)op-dest);
}


@@ -1,894 +0,0 @@
/*
* xxHash - Fast Hash algorithm
* Copyright (C) 2012-2016, Yann Collet
*
* BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following disclaimer
* in the documentation and/or other materials provided with the
* distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You can contact the author at :
* - xxHash homepage: http://www.xxhash.com
* - xxHash source repository : https://github.com/Cyan4973/xxHash
*/
/* *************************************
* Tuning parameters
***************************************/
/*!XXH_FORCE_MEMORY_ACCESS :
* By default, access to unaligned memory is performed through `memcpy()`, which is safe and portable.
* Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.
* The switch below allows selecting a different access method for improved performance.
* Method 0 (default) : use `memcpy()`. Safe and portable.
* Method 1 : `__packed` statement. It relies on a compiler extension (i.e. not portable).
* This method is safe if your compiler supports it, and is *generally* as fast as or faster than `memcpy`.
* Method 2 : direct access. This method does not depend on the compiler, but it violates the C standard.
* It can generate buggy code on targets which do not support unaligned memory accesses.
* In some circumstances, however, it is the only known way to get the best performance (e.g. GCC + ARMv6).
* See http://stackoverflow.com/a/32095106/646947 for details.
* Prefer these methods in priority order (0 > 1 > 2).
*/
#ifndef XXH_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */
# if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) )
# define XXH_FORCE_MEMORY_ACCESS 2
# elif defined(__INTEL_COMPILER) || \
(defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) ))
# define XXH_FORCE_MEMORY_ACCESS 1
# endif
#endif
/*!XXH_ACCEPT_NULL_INPUT_POINTER :
* If the input pointer is a null pointer, xxHash's default behavior is to trigger a memory access error, since it is a bad pointer.
* When this option is enabled, xxHash output for a null input pointer is the same as for a zero-length input.
* By default, this option is disabled. To enable it, uncomment the define below :
*/
/* #define XXH_ACCEPT_NULL_INPUT_POINTER 1 */
/*!XXH_FORCE_NATIVE_FORMAT :
* By default, xxHash library provides endian-independent Hash values, based on little-endian convention.
* Results are therefore identical for little-endian and big-endian CPU.
* This comes at a performance cost for big-endian CPU, since some swapping is required to emulate little-endian format.
* Should endian-independence be of no importance for your application, you may set the #define below to 1,
* to improve speed for Big-endian CPU.
* This option has no impact on Little_Endian CPU.
*/
#ifndef XXH_FORCE_NATIVE_FORMAT /* can be defined externally */
# define XXH_FORCE_NATIVE_FORMAT 0
#endif
/*!XXH_FORCE_ALIGN_CHECK :
* This is a minor performance trick, only useful with lots of very small keys.
* It means : check for aligned/unaligned input.
* The check costs one initial branch per hash; set to 0 when the input data
* is guaranteed to be aligned.
*/
#ifndef XXH_FORCE_ALIGN_CHECK /* can be defined externally */
# if defined(__i386) || defined(_M_IX86) || defined(__x86_64__) || defined(_M_X64)
# define XXH_FORCE_ALIGN_CHECK 0
# else
# define XXH_FORCE_ALIGN_CHECK 1
# endif
#endif
/* *************************************
* Includes & Memory related functions
***************************************/
/*! Modify the local functions below should you wish to use some other memory routines
* for malloc(), free() */
#include <stdlib.h>
static void* XXH_malloc(size_t s) { return malloc(s); }
static void XXH_free (void* p) { free(p); }
/*! and for memcpy() */
#include <string.h>
static void* XXH_memcpy(void* dest, const void* src, size_t size) { return memcpy(dest,src,size); }
#define XXH_STATIC_LINKING_ONLY
#include "xxhash.h"
/* *************************************
* Compiler Specific Options
***************************************/
#ifdef _MSC_VER /* Visual Studio */
# pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */
#endif
#ifndef FORCE_INLINE
# ifdef _MSC_VER /* Visual Studio */
# define FORCE_INLINE static __forceinline
# else
# if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */
# ifdef __GNUC__
# define FORCE_INLINE static inline __attribute__((always_inline))
# else
# define FORCE_INLINE static inline
# endif
# else
# define FORCE_INLINE static
# endif /* __STDC_VERSION__ */
# endif /* _MSC_VER */
#endif /* FORCE_INLINE */
/* *************************************
* Basic Types
***************************************/
#ifndef MEM_MODULE
# if !defined (__VMS) && (defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
# include <stdint.h>
typedef uint8_t BYTE;
typedef uint16_t U16;
typedef uint32_t U32;
typedef int32_t S32;
# else
typedef unsigned char BYTE;
typedef unsigned short U16;
typedef unsigned int U32;
typedef signed int S32;
# endif
#endif
#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2))
/* Force direct memory access. Only works on CPU which support unaligned memory access in hardware */
static U32 XXH_read32(const void* memPtr) { return *(const U32*) memPtr; }
#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1))
/* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */
/* currently only defined for gcc and icc */
typedef union { U32 u32; } __attribute__((packed)) unalign;
static U32 XXH_read32(const void* ptr) { return ((const unalign*)ptr)->u32; }
#else
/* portable and safe solution. Generally efficient.
* see : http://stackoverflow.com/a/32095106/646947
*/
static U32 XXH_read32(const void* memPtr)
{
U32 val;
memcpy(&val, memPtr, sizeof(val));
return val;
}
#endif /* XXH_FORCE_DIRECT_MEMORY_ACCESS */
/* ****************************************
* Compiler-specific Functions and Macros
******************************************/
#define XXH_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
/* Note : although _rotl exists for minGW (GCC under windows), performance seems poor */
#if defined(_MSC_VER)
# define XXH_rotl32(x,r) _rotl(x,r)
# define XXH_rotl64(x,r) _rotl64(x,r)
#else
# define XXH_rotl32(x,r) ((x << r) | (x >> (32 - r)))
# define XXH_rotl64(x,r) ((x << r) | (x >> (64 - r)))
#endif
#if defined(_MSC_VER) /* Visual Studio */
# define XXH_swap32 _byteswap_ulong
#elif XXH_GCC_VERSION >= 403
# define XXH_swap32 __builtin_bswap32
#else
static U32 XXH_swap32 (U32 x)
{
return ((x << 24) & 0xff000000 ) |
((x << 8) & 0x00ff0000 ) |
((x >> 8) & 0x0000ff00 ) |
((x >> 24) & 0x000000ff );
}
#endif
/* *************************************
* Architecture Macros
***************************************/
typedef enum { XXH_bigEndian=0, XXH_littleEndian=1 } XXH_endianess;
/* XXH_CPU_LITTLE_ENDIAN can be defined externally, for example on the compiler command line */
#ifndef XXH_CPU_LITTLE_ENDIAN
static const int g_one = 1;
# define XXH_CPU_LITTLE_ENDIAN (*(const char*)(&g_one))
#endif
/* ***************************
* Memory reads
*****************************/
typedef enum { XXH_aligned, XXH_unaligned } XXH_alignment;
FORCE_INLINE U32 XXH_readLE32_align(const void* ptr, XXH_endianess endian, XXH_alignment align)
{
if (align==XXH_unaligned)
return endian==XXH_littleEndian ? XXH_read32(ptr) : XXH_swap32(XXH_read32(ptr));
else
return endian==XXH_littleEndian ? *(const U32*)ptr : XXH_swap32(*(const U32*)ptr);
}
FORCE_INLINE U32 XXH_readLE32(const void* ptr, XXH_endianess endian)
{
return XXH_readLE32_align(ptr, endian, XXH_unaligned);
}
static U32 XXH_readBE32(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_swap32(XXH_read32(ptr)) : XXH_read32(ptr);
}
/* *************************************
* Macros
***************************************/
#define XXH_STATIC_ASSERT(c) { enum { XXH_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
XXH_PUBLIC_API unsigned XXH_versionNumber (void) { return XXH_VERSION_NUMBER; }
/* *******************************************************************
* 32-bits hash functions
*********************************************************************/
static const U32 PRIME32_1 = 2654435761U;
static const U32 PRIME32_2 = 2246822519U;
static const U32 PRIME32_3 = 3266489917U;
static const U32 PRIME32_4 = 668265263U;
static const U32 PRIME32_5 = 374761393U;
static U32 XXH32_round(U32 seed, U32 input)
{
seed += input * PRIME32_2;
seed = XXH_rotl32(seed, 13);
seed *= PRIME32_1;
return seed;
}
FORCE_INLINE U32 XXH32_endian_align(const void* input, size_t len, U32 seed, XXH_endianess endian, XXH_alignment align)
{
const BYTE* p = (const BYTE*)input;
const BYTE* bEnd = p + len;
U32 h32;
#define XXH_get32bits(p) XXH_readLE32_align(p, endian, align)
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (p==NULL) {
len=0;
bEnd=p=(const BYTE*)(size_t)16;
}
#endif
if (len>=16) {
const BYTE* const limit = bEnd - 16;
U32 v1 = seed + PRIME32_1 + PRIME32_2;
U32 v2 = seed + PRIME32_2;
U32 v3 = seed + 0;
U32 v4 = seed - PRIME32_1;
do {
v1 = XXH32_round(v1, XXH_get32bits(p)); p+=4;
v2 = XXH32_round(v2, XXH_get32bits(p)); p+=4;
v3 = XXH32_round(v3, XXH_get32bits(p)); p+=4;
v4 = XXH32_round(v4, XXH_get32bits(p)); p+=4;
} while (p<=limit);
h32 = XXH_rotl32(v1, 1) + XXH_rotl32(v2, 7) + XXH_rotl32(v3, 12) + XXH_rotl32(v4, 18);
} else {
h32 = seed + PRIME32_5;
}
h32 += (U32) len;
while (p+4<=bEnd) {
h32 += XXH_get32bits(p) * PRIME32_3;
h32 = XXH_rotl32(h32, 17) * PRIME32_4 ;
p+=4;
}
while (p<bEnd) {
h32 += (*p) * PRIME32_5;
h32 = XXH_rotl32(h32, 11) * PRIME32_1 ;
p++;
}
h32 ^= h32 >> 15;
h32 *= PRIME32_2;
h32 ^= h32 >> 13;
h32 *= PRIME32_3;
h32 ^= h32 >> 16;
return h32;
}
XXH_PUBLIC_API unsigned int XXH32 (const void* input, size_t len, unsigned int seed)
{
#if 0
/* Simple version, good for code maintenance, but unfortunately slow for small inputs */
XXH32_state_t state;
XXH32_reset(&state, seed);
XXH32_update(&state, input, len);
return XXH32_digest(&state);
#else
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if (XXH_FORCE_ALIGN_CHECK) {
if ((((size_t)input) & 3) == 0) { /* Input is 4-bytes aligned, leverage the speed benefit */
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_endian_align(input, len, seed, XXH_littleEndian, XXH_aligned);
else
return XXH32_endian_align(input, len, seed, XXH_bigEndian, XXH_aligned);
} }
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_endian_align(input, len, seed, XXH_littleEndian, XXH_unaligned);
else
return XXH32_endian_align(input, len, seed, XXH_bigEndian, XXH_unaligned);
#endif
}
/*====== Hash streaming ======*/
XXH_PUBLIC_API XXH32_state_t* XXH32_createState(void)
{
return (XXH32_state_t*)XXH_malloc(sizeof(XXH32_state_t));
}
XXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr)
{
XXH_free(statePtr);
return XXH_OK;
}
XXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* dstState, const XXH32_state_t* srcState)
{
memcpy(dstState, srcState, sizeof(*dstState));
}
XXH_PUBLIC_API XXH_errorcode XXH32_reset(XXH32_state_t* statePtr, unsigned int seed)
{
XXH32_state_t state; /* using a local state to memcpy() in order to avoid strict-aliasing warnings */
memset(&state, 0, sizeof(state)-4); /* do not write into reserved, for future removal */
state.v1 = seed + PRIME32_1 + PRIME32_2;
state.v2 = seed + PRIME32_2;
state.v3 = seed + 0;
state.v4 = seed - PRIME32_1;
memcpy(statePtr, &state, sizeof(state));
return XXH_OK;
}
FORCE_INLINE XXH_errorcode XXH32_update_endian (XXH32_state_t* state, const void* input, size_t len, XXH_endianess endian)
{
const BYTE* p = (const BYTE*)input;
const BYTE* const bEnd = p + len;
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (input==NULL) return XXH_ERROR;
#endif
state->total_len_32 += (unsigned)len;
state->large_len |= (len>=16) | (state->total_len_32>=16);
if (state->memsize + len < 16) { /* fill in tmp buffer */
XXH_memcpy((BYTE*)(state->mem32) + state->memsize, input, len);
state->memsize += (unsigned)len;
return XXH_OK;
}
if (state->memsize) { /* some data left from previous update */
XXH_memcpy((BYTE*)(state->mem32) + state->memsize, input, 16-state->memsize);
{ const U32* p32 = state->mem32;
state->v1 = XXH32_round(state->v1, XXH_readLE32(p32, endian)); p32++;
state->v2 = XXH32_round(state->v2, XXH_readLE32(p32, endian)); p32++;
state->v3 = XXH32_round(state->v3, XXH_readLE32(p32, endian)); p32++;
state->v4 = XXH32_round(state->v4, XXH_readLE32(p32, endian)); p32++;
}
p += 16-state->memsize;
state->memsize = 0;
}
if (p <= bEnd-16) {
const BYTE* const limit = bEnd - 16;
U32 v1 = state->v1;
U32 v2 = state->v2;
U32 v3 = state->v3;
U32 v4 = state->v4;
do {
v1 = XXH32_round(v1, XXH_readLE32(p, endian)); p+=4;
v2 = XXH32_round(v2, XXH_readLE32(p, endian)); p+=4;
v3 = XXH32_round(v3, XXH_readLE32(p, endian)); p+=4;
v4 = XXH32_round(v4, XXH_readLE32(p, endian)); p+=4;
} while (p<=limit);
state->v1 = v1;
state->v2 = v2;
state->v3 = v3;
state->v4 = v4;
}
if (p < bEnd) {
XXH_memcpy(state->mem32, p, (size_t)(bEnd-p));
state->memsize = (unsigned)(bEnd-p);
}
return XXH_OK;
}
XXH_PUBLIC_API XXH_errorcode XXH32_update (XXH32_state_t* state_in, const void* input, size_t len)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_update_endian(state_in, input, len, XXH_littleEndian);
else
return XXH32_update_endian(state_in, input, len, XXH_bigEndian);
}
FORCE_INLINE U32 XXH32_digest_endian (const XXH32_state_t* state, XXH_endianess endian)
{
const BYTE * p = (const BYTE*)state->mem32;
const BYTE* const bEnd = (const BYTE*)(state->mem32) + state->memsize;
U32 h32;
if (state->large_len) {
h32 = XXH_rotl32(state->v1, 1) + XXH_rotl32(state->v2, 7) + XXH_rotl32(state->v3, 12) + XXH_rotl32(state->v4, 18);
} else {
h32 = state->v3 /* == seed */ + PRIME32_5;
}
h32 += state->total_len_32;
while (p+4<=bEnd) {
h32 += XXH_readLE32(p, endian) * PRIME32_3;
h32 = XXH_rotl32(h32, 17) * PRIME32_4;
p+=4;
}
while (p<bEnd) {
h32 += (*p) * PRIME32_5;
h32 = XXH_rotl32(h32, 11) * PRIME32_1;
p++;
}
h32 ^= h32 >> 15;
h32 *= PRIME32_2;
h32 ^= h32 >> 13;
h32 *= PRIME32_3;
h32 ^= h32 >> 16;
return h32;
}
XXH_PUBLIC_API unsigned int XXH32_digest (const XXH32_state_t* state_in)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_digest_endian(state_in, XXH_littleEndian);
else
return XXH32_digest_endian(state_in, XXH_bigEndian);
}
/*====== Canonical representation ======*/
/*! Default XXH result types are basic unsigned 32 and 64 bits.
* The canonical representation follows human-readable write convention, aka big-endian (large digits first).
* These functions allow transformation of hash result into and from its canonical format.
* This way, hash values can be written into a file or buffer, and remain comparable across different systems and programs.
*/
XXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash)
{
XXH_STATIC_ASSERT(sizeof(XXH32_canonical_t) == sizeof(XXH32_hash_t));
if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap32(hash);
memcpy(dst, &hash, sizeof(*dst));
}
XXH_PUBLIC_API XXH32_hash_t XXH32_hashFromCanonical(const XXH32_canonical_t* src)
{
return XXH_readBE32(src);
}
#ifndef XXH_NO_LONG_LONG
/* *******************************************************************
* 64-bits hash functions
*********************************************************************/
/*====== Memory access ======*/
#ifndef MEM_MODULE
# define MEM_MODULE
# if !defined (__VMS) && (defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
# include <stdint.h>
typedef uint64_t U64;
# else
typedef unsigned long long U64; /* if your compiler doesn't support unsigned long long, replace by another 64-bit type here. Note that xxhash.h will also need to be updated. */
# endif
#endif
#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2))
/* Force direct memory access. Only works on CPU which support unaligned memory access in hardware */
static U64 XXH_read64(const void* memPtr) { return *(const U64*) memPtr; }
#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1))
/* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */
/* currently only defined for gcc and icc */
typedef union { U32 u32; U64 u64; } __attribute__((packed)) unalign64;
static U64 XXH_read64(const void* ptr) { return ((const unalign64*)ptr)->u64; }
#else
/* portable and safe solution. Generally efficient.
* see : http://stackoverflow.com/a/32095106/646947
*/
static U64 XXH_read64(const void* memPtr)
{
U64 val;
memcpy(&val, memPtr, sizeof(val));
return val;
}
#endif /* XXH_FORCE_DIRECT_MEMORY_ACCESS */
#if defined(_MSC_VER) /* Visual Studio */
# define XXH_swap64 _byteswap_uint64
#elif XXH_GCC_VERSION >= 403
# define XXH_swap64 __builtin_bswap64
#else
static U64 XXH_swap64 (U64 x)
{
return ((x << 56) & 0xff00000000000000ULL) |
((x << 40) & 0x00ff000000000000ULL) |
((x << 24) & 0x0000ff0000000000ULL) |
((x << 8) & 0x000000ff00000000ULL) |
((x >> 8) & 0x00000000ff000000ULL) |
((x >> 24) & 0x0000000000ff0000ULL) |
((x >> 40) & 0x000000000000ff00ULL) |
((x >> 56) & 0x00000000000000ffULL);
}
#endif
FORCE_INLINE U64 XXH_readLE64_align(const void* ptr, XXH_endianess endian, XXH_alignment align)
{
if (align==XXH_unaligned)
return endian==XXH_littleEndian ? XXH_read64(ptr) : XXH_swap64(XXH_read64(ptr));
else
return endian==XXH_littleEndian ? *(const U64*)ptr : XXH_swap64(*(const U64*)ptr);
}
FORCE_INLINE U64 XXH_readLE64(const void* ptr, XXH_endianess endian)
{
return XXH_readLE64_align(ptr, endian, XXH_unaligned);
}
static U64 XXH_readBE64(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_swap64(XXH_read64(ptr)) : XXH_read64(ptr);
}
/*====== xxh64 ======*/
static const U64 PRIME64_1 = 11400714785074694791ULL;
static const U64 PRIME64_2 = 14029467366897019727ULL;
static const U64 PRIME64_3 = 1609587929392839161ULL;
static const U64 PRIME64_4 = 9650029242287828579ULL;
static const U64 PRIME64_5 = 2870177450012600261ULL;
static U64 XXH64_round(U64 acc, U64 input)
{
acc += input * PRIME64_2;
acc = XXH_rotl64(acc, 31);
acc *= PRIME64_1;
return acc;
}
static U64 XXH64_mergeRound(U64 acc, U64 val)
{
val = XXH64_round(0, val);
acc ^= val;
acc = acc * PRIME64_1 + PRIME64_4;
return acc;
}
FORCE_INLINE U64 XXH64_endian_align(const void* input, size_t len, U64 seed, XXH_endianess endian, XXH_alignment align)
{
const BYTE* p = (const BYTE*)input;
const BYTE* const bEnd = p + len;
U64 h64;
#define XXH_get64bits(p) XXH_readLE64_align(p, endian, align)
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (p==NULL) {
len=0;
bEnd=p=(const BYTE*)(size_t)32;
}
#endif
if (len>=32) {
const BYTE* const limit = bEnd - 32;
U64 v1 = seed + PRIME64_1 + PRIME64_2;
U64 v2 = seed + PRIME64_2;
U64 v3 = seed + 0;
U64 v4 = seed - PRIME64_1;
do {
v1 = XXH64_round(v1, XXH_get64bits(p)); p+=8;
v2 = XXH64_round(v2, XXH_get64bits(p)); p+=8;
v3 = XXH64_round(v3, XXH_get64bits(p)); p+=8;
v4 = XXH64_round(v4, XXH_get64bits(p)); p+=8;
} while (p<=limit);
h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18);
h64 = XXH64_mergeRound(h64, v1);
h64 = XXH64_mergeRound(h64, v2);
h64 = XXH64_mergeRound(h64, v3);
h64 = XXH64_mergeRound(h64, v4);
} else {
h64 = seed + PRIME64_5;
}
h64 += (U64) len;
while (p+8<=bEnd) {
U64 const k1 = XXH64_round(0, XXH_get64bits(p));
h64 ^= k1;
h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4;
p+=8;
}
if (p+4<=bEnd) {
h64 ^= (U64)(XXH_get32bits(p)) * PRIME64_1;
h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
p+=4;
}
while (p<bEnd) {
h64 ^= (*p) * PRIME64_5;
h64 = XXH_rotl64(h64, 11) * PRIME64_1;
p++;
}
h64 ^= h64 >> 33;
h64 *= PRIME64_2;
h64 ^= h64 >> 29;
h64 *= PRIME64_3;
h64 ^= h64 >> 32;
return h64;
}
XXH_PUBLIC_API unsigned long long XXH64 (const void* input, size_t len, unsigned long long seed)
{
#if 0
/* Simple version, good for code maintenance, but unfortunately slow for small inputs */
XXH64_state_t state;
XXH64_reset(&state, seed);
XXH64_update(&state, input, len);
return XXH64_digest(&state);
#else
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if (XXH_FORCE_ALIGN_CHECK) {
if ((((size_t)input) & 7)==0) { /* Input is aligned, let's leverage the speed advantage */
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_aligned);
else
return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_aligned);
} }
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_unaligned);
else
return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_unaligned);
#endif
}
/*====== Hash Streaming ======*/
XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void)
{
return (XXH64_state_t*)XXH_malloc(sizeof(XXH64_state_t));
}
XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr)
{
XXH_free(statePtr);
return XXH_OK;
}
XXH_PUBLIC_API void XXH64_copyState(XXH64_state_t* dstState, const XXH64_state_t* srcState)
{
memcpy(dstState, srcState, sizeof(*dstState));
}
XXH_PUBLIC_API XXH_errorcode XXH64_reset(XXH64_state_t* statePtr, unsigned long long seed)
{
XXH64_state_t state; /* using a local state to memcpy() in order to avoid strict-aliasing warnings */
memset(&state, 0, sizeof(state)-8); /* do not write into reserved, for future removal */
state.v1 = seed + PRIME64_1 + PRIME64_2;
state.v2 = seed + PRIME64_2;
state.v3 = seed + 0;
state.v4 = seed - PRIME64_1;
memcpy(statePtr, &state, sizeof(state));
return XXH_OK;
}
FORCE_INLINE XXH_errorcode XXH64_update_endian (XXH64_state_t* state, const void* input, size_t len, XXH_endianess endian)
{
const BYTE* p = (const BYTE*)input;
const BYTE* const bEnd = p + len;
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (input==NULL) return XXH_ERROR;
#endif
state->total_len += len;
if (state->memsize + len < 32) { /* fill in tmp buffer */
XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, len);
state->memsize += (U32)len;
return XXH_OK;
}
if (state->memsize) { /* tmp buffer is full */
XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, 32-state->memsize);
state->v1 = XXH64_round(state->v1, XXH_readLE64(state->mem64+0, endian));
state->v2 = XXH64_round(state->v2, XXH_readLE64(state->mem64+1, endian));
state->v3 = XXH64_round(state->v3, XXH_readLE64(state->mem64+2, endian));
state->v4 = XXH64_round(state->v4, XXH_readLE64(state->mem64+3, endian));
p += 32-state->memsize;
state->memsize = 0;
}
if (p+32 <= bEnd) {
const BYTE* const limit = bEnd - 32;
U64 v1 = state->v1;
U64 v2 = state->v2;
U64 v3 = state->v3;
U64 v4 = state->v4;
do {
v1 = XXH64_round(v1, XXH_readLE64(p, endian)); p+=8;
v2 = XXH64_round(v2, XXH_readLE64(p, endian)); p+=8;
v3 = XXH64_round(v3, XXH_readLE64(p, endian)); p+=8;
v4 = XXH64_round(v4, XXH_readLE64(p, endian)); p+=8;
} while (p<=limit);
state->v1 = v1;
state->v2 = v2;
state->v3 = v3;
state->v4 = v4;
}
if (p < bEnd) {
XXH_memcpy(state->mem64, p, (size_t)(bEnd-p));
state->memsize = (unsigned)(bEnd-p);
}
return XXH_OK;
}
XXH_PUBLIC_API XXH_errorcode XXH64_update (XXH64_state_t* state_in, const void* input, size_t len)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_update_endian(state_in, input, len, XXH_littleEndian);
else
return XXH64_update_endian(state_in, input, len, XXH_bigEndian);
}
FORCE_INLINE U64 XXH64_digest_endian (const XXH64_state_t* state, XXH_endianess endian)
{
const BYTE * p = (const BYTE*)state->mem64;
const BYTE* const bEnd = (const BYTE*)state->mem64 + state->memsize;
U64 h64;
if (state->total_len >= 32) {
U64 const v1 = state->v1;
U64 const v2 = state->v2;
U64 const v3 = state->v3;
U64 const v4 = state->v4;
h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18);
h64 = XXH64_mergeRound(h64, v1);
h64 = XXH64_mergeRound(h64, v2);
h64 = XXH64_mergeRound(h64, v3);
h64 = XXH64_mergeRound(h64, v4);
} else {
h64 = state->v3 + PRIME64_5;
}
h64 += (U64) state->total_len;
while (p+8<=bEnd) {
U64 const k1 = XXH64_round(0, XXH_readLE64(p, endian));
h64 ^= k1;
h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4;
p+=8;
}
if (p+4<=bEnd) {
h64 ^= (U64)(XXH_readLE32(p, endian)) * PRIME64_1;
h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
p+=4;
}
while (p<bEnd) {
h64 ^= (*p) * PRIME64_5;
h64 = XXH_rotl64(h64, 11) * PRIME64_1;
p++;
}
h64 ^= h64 >> 33;
h64 *= PRIME64_2;
h64 ^= h64 >> 29;
h64 *= PRIME64_3;
h64 ^= h64 >> 32;
return h64;
}
XXH_PUBLIC_API unsigned long long XXH64_digest (const XXH64_state_t* state_in)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_digest_endian(state_in, XXH_littleEndian);
else
return XXH64_digest_endian(state_in, XXH_bigEndian);
}
/*====== Canonical representation ======*/
XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH64_canonical_t* dst, XXH64_hash_t hash)
{
XXH_STATIC_ASSERT(sizeof(XXH64_canonical_t) == sizeof(XXH64_hash_t));
if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap64(hash);
memcpy(dst, &hash, sizeof(*dst));
}
XXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(const XXH64_canonical_t* src)
{
return XXH_readBE64(src);
}
#endif /* XXH_NO_LONG_LONG */
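The file above implements both the one-shot and streaming variants of XXH32/XXH64. As a quick illustration of the one-shot entry points declared in the accompanying xxhash.h (shown below), here is a minimal sketch; the calls and types come straight from that header, everything else is illustrative:

#include <stdio.h>
#include <string.h>
#include "xxhash.h"

int main(void)
{
    const char* data = "xxhash sample input";
    size_t const len = strlen(data);

    /* one-shot hashing; the seed alters the result predictably */
    unsigned int       h32 = XXH32(data, len, 0);
    unsigned long long h64 = XXH64(data, len, 0);

    printf("XXH32 = %08x\n", h32);
    printf("XXH64 = %016llx\n", h64);
    return 0;
}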


@@ -1,293 +0,0 @@
/*
xxHash - Extremely Fast Hash algorithm
Header File
Copyright (C) 2012-2016, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- xxHash source repository : https://github.com/Cyan4973/xxHash
*/
/* Notice extracted from xxHash homepage :
xxHash is an extremely fast Hash algorithm, running at RAM speed limits.
It also successfully passes all tests from the SMHasher suite.
Comparison (single thread, Windows Seven 32 bits, using SMHasher on a Core 2 Duo @3GHz)
Name            Speed       Q.Score   Author
xxHash          5.4 GB/s     10
CrapWow         3.2 GB/s      2       Andrew
MurmurHash 3a   2.7 GB/s     10       Austin Appleby
SpookyHash      2.0 GB/s     10       Bob Jenkins
SBox            1.4 GB/s      9       Bret Mulvey
Lookup3         1.2 GB/s      9       Bob Jenkins
SuperFastHash   1.2 GB/s      1       Paul Hsieh
CityHash64      1.05 GB/s    10       Pike & Alakuijala
FNV             0.55 GB/s     5       Fowler, Noll, Vo
CRC32           0.43 GB/s     9
MD5-32          0.33 GB/s    10       Ronald L. Rivest
SHA1-32         0.28 GB/s    10
Q.Score is a measure of quality of the hash function.
It depends on successfully passing the SMHasher test set.
10 is a perfect score.
A 64-bits version, named XXH64, is available since r35.
It offers much better speed, but for 64-bits applications only.
Name     Speed on 64 bits    Speed on 32 bits
XXH64       13.8 GB/s            1.9 GB/s
XXH32        6.8 GB/s            6.0 GB/s
*/
#ifndef XXHASH_H_5627135585666179
#define XXHASH_H_5627135585666179 1
#if defined (__cplusplus)
extern "C" {
#endif
/* ****************************
* Definitions
******************************/
#include <stddef.h> /* size_t */
typedef enum { XXH_OK=0, XXH_ERROR } XXH_errorcode;
/* ****************************
* API modifier
******************************/
/** XXH_PRIVATE_API
* This is useful to include xxhash functions in `static` mode
* in order to inline them, and remove their symbol from the public list.
* Methodology :
* #define XXH_PRIVATE_API
* #include "xxhash.h"
* `xxhash.c` is automatically included.
* It's not useful to compile and link it as a separate module.
*/
#ifdef XXH_PRIVATE_API
# ifndef XXH_STATIC_LINKING_ONLY
# define XXH_STATIC_LINKING_ONLY
# endif
# if defined(__GNUC__)
# define XXH_PUBLIC_API static __inline __attribute__((unused))
# elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
# define XXH_PUBLIC_API static inline
# elif defined(_MSC_VER)
# define XXH_PUBLIC_API static __inline
# else
# define XXH_PUBLIC_API static /* this version may generate warnings for unused static functions; disable the relevant warning */
# endif
#else
# define XXH_PUBLIC_API /* do nothing */
#endif /* XXH_PRIVATE_API */
/*!XXH_NAMESPACE, aka Namespace Emulation :
If you want to include _and expose_ xxHash functions from within your own library,
but also want to avoid symbol collisions with other libraries which may also include xxHash,
you can use XXH_NAMESPACE, to automatically prefix any public symbol from xxhash library
with the value of XXH_NAMESPACE (therefore, avoid NULL and numeric values).
Note that no change is required within the calling program as long as it includes `xxhash.h` :
regular symbol name will be automatically translated by this header.
*/
#ifdef XXH_NAMESPACE
# define XXH_CAT(A,B) A##B
# define XXH_NAME2(A,B) XXH_CAT(A,B)
# define XXH_versionNumber XXH_NAME2(XXH_NAMESPACE, XXH_versionNumber)
# define XXH32 XXH_NAME2(XXH_NAMESPACE, XXH32)
# define XXH32_createState XXH_NAME2(XXH_NAMESPACE, XXH32_createState)
# define XXH32_freeState XXH_NAME2(XXH_NAMESPACE, XXH32_freeState)
# define XXH32_reset XXH_NAME2(XXH_NAMESPACE, XXH32_reset)
# define XXH32_update XXH_NAME2(XXH_NAMESPACE, XXH32_update)
# define XXH32_digest XXH_NAME2(XXH_NAMESPACE, XXH32_digest)
# define XXH32_copyState XXH_NAME2(XXH_NAMESPACE, XXH32_copyState)
# define XXH32_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH32_canonicalFromHash)
# define XXH32_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH32_hashFromCanonical)
# define XXH64 XXH_NAME2(XXH_NAMESPACE, XXH64)
# define XXH64_createState XXH_NAME2(XXH_NAMESPACE, XXH64_createState)
# define XXH64_freeState XXH_NAME2(XXH_NAMESPACE, XXH64_freeState)
# define XXH64_reset XXH_NAME2(XXH_NAMESPACE, XXH64_reset)
# define XXH64_update XXH_NAME2(XXH_NAMESPACE, XXH64_update)
# define XXH64_digest XXH_NAME2(XXH_NAMESPACE, XXH64_digest)
# define XXH64_copyState XXH_NAME2(XXH_NAMESPACE, XXH64_copyState)
# define XXH64_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH64_canonicalFromHash)
# define XXH64_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH64_hashFromCanonical)
#endif
/* *************************************
* Version
***************************************/
#define XXH_VERSION_MAJOR 0
#define XXH_VERSION_MINOR 6
#define XXH_VERSION_RELEASE 2
#define XXH_VERSION_NUMBER (XXH_VERSION_MAJOR *100*100 + XXH_VERSION_MINOR *100 + XXH_VERSION_RELEASE)
XXH_PUBLIC_API unsigned XXH_versionNumber (void);
/*-**********************************************************************
* 32-bits hash
************************************************************************/
typedef unsigned int XXH32_hash_t;
/*! XXH32() :
Calculate the 32-bits hash of a sequence of "length" bytes stored at memory address "input".
The memory between input & input+length must be valid (allocated and read-accessible).
"seed" can be used to alter the result predictably.
Speed on Core 2 Duo @ 3 GHz (single thread, SMHasher benchmark) : 5.4 GB/s */
XXH_PUBLIC_API XXH32_hash_t XXH32 (const void* input, size_t length, unsigned int seed);
/*====== Streaming ======*/
typedef struct XXH32_state_s XXH32_state_t; /* incomplete type */
XXH_PUBLIC_API XXH32_state_t* XXH32_createState(void);
XXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr);
XXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* dst_state, const XXH32_state_t* src_state);
XXH_PUBLIC_API XXH_errorcode XXH32_reset (XXH32_state_t* statePtr, unsigned int seed);
XXH_PUBLIC_API XXH_errorcode XXH32_update (XXH32_state_t* statePtr, const void* input, size_t length);
XXH_PUBLIC_API XXH32_hash_t XXH32_digest (const XXH32_state_t* statePtr);
/*
These functions generate the xxHash of an input provided in multiple segments.
Note that, for small input, they are slower than single-call functions, due to state management.
For small input, prefer `XXH32()` and `XXH64()` .
XXH state must first be allocated, using XXH*_createState() .
Start a new hash by initializing state with a seed, using XXH*_reset().
Then, feed the hash state by calling XXH*_update() as many times as necessary.
Obviously, the input must be allocated and read-accessible.
Each function returns an error code, with 0 meaning OK and any other value meaning there is an error.
Finally, a hash value can be produced at any time by calling XXH*_digest().
This function returns the nn-bits hash as an int or long long.
It is still possible to continue inserting input into the hash state after a digest,
and to generate new hashes later on by calling XXH*_digest() again.
When done, free the XXH state if it was allocated dynamically.
*/
/*====== Canonical representation ======*/
typedef struct { unsigned char digest[4]; } XXH32_canonical_t;
XXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash);
XXH_PUBLIC_API XXH32_hash_t XXH32_hashFromCanonical(const XXH32_canonical_t* src);
/* Default result types for XXH functions are primitive unsigned 32 and 64 bits.
* The canonical representation uses human-readable write convention, aka big-endian (large digits first).
* These functions allow transformation of hash result into and from its canonical format.
* This way, hash values can be written into a file / memory, and remain comparable on different systems and programs.
*/
#ifndef XXH_NO_LONG_LONG
/*-**********************************************************************
* 64-bits hash
************************************************************************/
typedef unsigned long long XXH64_hash_t;
/*! XXH64() :
Calculate the 64-bits hash of a sequence of "length" bytes stored at memory address "input".
"seed" can be used to alter the result predictably.
This function runs faster on 64-bits systems, but slower on 32-bits systems (see benchmark).
*/
XXH_PUBLIC_API XXH64_hash_t XXH64 (const void* input, size_t length, unsigned long long seed);
/*====== Streaming ======*/
typedef struct XXH64_state_s XXH64_state_t; /* incomplete type */
XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void);
XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr);
XXH_PUBLIC_API void XXH64_copyState(XXH64_state_t* dst_state, const XXH64_state_t* src_state);
XXH_PUBLIC_API XXH_errorcode XXH64_reset (XXH64_state_t* statePtr, unsigned long long seed);
XXH_PUBLIC_API XXH_errorcode XXH64_update (XXH64_state_t* statePtr, const void* input, size_t length);
XXH_PUBLIC_API XXH64_hash_t XXH64_digest (const XXH64_state_t* statePtr);
/*====== Canonical representation ======*/
typedef struct { unsigned char digest[8]; } XXH64_canonical_t;
XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH64_canonical_t* dst, XXH64_hash_t hash);
XXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(const XXH64_canonical_t* src);
#endif /* XXH_NO_LONG_LONG */
#ifdef XXH_STATIC_LINKING_ONLY
/* ================================================================================================
This section contains definitions which are not guaranteed to remain stable.
They may change in future versions, becoming incompatible with a different version of the library.
They shall only be used with static linking.
Never use these definitions in association with dynamic linking !
=================================================================================================== */
/* These definitions are only meant to allow allocation of XXH state
statically, on stack, or in a struct for example.
Do not use members directly. */
struct XXH32_state_s {
unsigned total_len_32;
unsigned large_len;
unsigned v1;
unsigned v2;
unsigned v3;
unsigned v4;
unsigned mem32[4]; /* buffer defined as U32 for alignment */
unsigned memsize;
unsigned reserved; /* never read nor write, will be removed in a future version */
}; /* typedef'd to XXH32_state_t */
#ifndef XXH_NO_LONG_LONG
struct XXH64_state_s {
unsigned long long total_len;
unsigned long long v1;
unsigned long long v2;
unsigned long long v3;
unsigned long long v4;
unsigned long long mem64[4]; /* buffer defined as U64 for alignment */
unsigned memsize;
unsigned reserved[2]; /* never read nor write, will be removed in a future version */
}; /* typedef'd to XXH64_state_t */
#endif
# ifdef XXH_PRIVATE_API
# include "xxhash.c" /* include xxhash function bodies as `static`, for inlining */
# endif
#endif /* XXH_STATIC_LINKING_ONLY */
#if defined (__cplusplus)
}
#endif
#endif /* XXHASH_H_5627135585666179 */
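The header above documents the streaming workflow (createState, reset, update, digest, freeState) and the canonical big-endian representation. Here is a minimal sketch of that workflow, using only functions declared in the header; error handling is abbreviated and the input strings are illustrative:

#include <stdio.h>
#include <string.h>
#include "xxhash.h"

int main(void)
{
    XXH64_state_t* const state = XXH64_createState();     /* allocate streaming state */
    if (state == NULL) return 1;

    if (XXH64_reset(state, 0) == XXH_OK) {                 /* start a new hash with seed 0 */
        const char* chunks[2] = { "first segment, ", "second segment" };
        size_t i;
        for (i = 0; i < 2; i++)
            XXH64_update(state, chunks[i], strlen(chunks[i]));

        /* a digest can be taken at any time; the state remains usable afterwards */
        XXH64_hash_t const h = XXH64_digest(state);

        /* canonical (big-endian) form, stable across platforms */
        XXH64_canonical_t canon;
        XXH64_canonicalFromHash(&canon, h);

        printf("XXH64 = %016llx\n", (unsigned long long)h);
    }

    XXH64_freeState(state);
    return 0;
}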


@@ -1,20 +0,0 @@
# local binary (Makefile)
lz4
unlz4
lz4cat
lz4c
lz4c32
datagen
frametest
frametest32
fullbench
fullbench32
fuzzer
fuzzer32
*.exe
# tests files
tmp*
# artefacts
*.dSYM


@@ -1,339 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.


@@ -1,172 +0,0 @@
# ##########################################################################
# LZ4 programs - Makefile
# Copyright (C) Yann Collet 2011-2016
#
# This Makefile is validated for Linux, macOS, *BSD, Hurd, Solaris, MSYS2 targets
#
# GPL v2 License
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# You can contact the author at :
# - LZ4 homepage : http://www.lz4.org
# - LZ4 source repository : https://github.com/Cyan4973/lz4
# ##########################################################################
# lz4 : Command Line Utility, supporting gzip-like arguments
# lz4c : CLU, supporting also legacy lz4demo arguments
# lz4c32: Same as lz4c, but forced to compile in 32-bits mode
# ##########################################################################
# Version numbers
LZ4DIR := ../lib
LIBVER_SRC := $(LZ4DIR)/lz4.h
LIBVER_MAJOR_SCRIPT:=`sed -n '/define LZ4_VERSION_MAJOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < $(LIBVER_SRC)`
LIBVER_MINOR_SCRIPT:=`sed -n '/define LZ4_VERSION_MINOR/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < $(LIBVER_SRC)`
LIBVER_PATCH_SCRIPT:=`sed -n '/define LZ4_VERSION_RELEASE/s/.*[[:blank:]]\([0-9][0-9]*\).*/\1/p' < $(LIBVER_SRC)`
LIBVER_SCRIPT:= $(LIBVER_MAJOR_SCRIPT).$(LIBVER_MINOR_SCRIPT).$(LIBVER_PATCH_SCRIPT)
LIBVER_MAJOR := $(shell echo $(LIBVER_MAJOR_SCRIPT))
LIBVER_MINOR := $(shell echo $(LIBVER_MINOR_SCRIPT))
LIBVER_PATCH := $(shell echo $(LIBVER_PATCH_SCRIPT))
LIBVER := $(shell echo $(LIBVER_SCRIPT))
SRCFILES := $(wildcard $(LZ4DIR)/*.c) $(wildcard *.c)
OBJFILES := $(patsubst %.c,%.o,$(SRCFILES))
CPPFLAGS += -I$(LZ4DIR) -DXXH_NAMESPACE=LZ4_
CFLAGS ?= -O3
DEBUGFLAGS:=-Wall -Wextra -Wundef -Wcast-qual -Wcast-align -Wshadow \
-Wswitch-enum -Wdeclaration-after-statement -Wstrict-prototypes \
-Wpointer-arith -Wstrict-aliasing=1
CFLAGS += $(DEBUGFLAGS) $(MOREFLAGS)
FLAGS = $(CFLAGS) $(CPPFLAGS) $(LDFLAGS)
LZ4_VERSION=$(LIBVER)
MD2ROFF = ronn
MD2ROFF_FLAGS = --roff --warnings --manual="User Commands" --organization="lz4 $(LZ4_VERSION)"
# Define *.exe as extension for Windows systems
ifneq (,$(filter Windows%,$(OS)))
EXT :=.exe
VOID := nul
else
EXT :=
VOID := /dev/null
endif
default: lz4-release
all: lz4 lz4c
all32: CFLAGS+=-m32
all32: all
lz4: $(OBJFILES)
$(CC) $(FLAGS) $^ -o $@$(EXT)
lz4-release: DEBUGFLAGS=
lz4-release: lz4
lz4c32: CFLAGS += -m32
lz4c32 : $(SRCFILES)
$(CC) $(FLAGS) $^ -o $@$(EXT)
lz4c: lz4
ln -s lz4 lz4c
lz4.1: lz4.1.md
cat $^ | $(MD2ROFF) $(MD2ROFF_FLAGS) | sed -n '/^\.\\\".*/!p' > $@
man: lz4.1
clean-man:
rm lz4.1
preview-man: clean-man man
man ./lz4.1
clean:
@$(MAKE) -C $(LZ4DIR) $@ > $(VOID)
@$(RM) core *.o *.test tmp* \
lz4$(EXT) lz4c$(EXT) lz4c32$(EXT) unlz4 lz4cat
@echo Cleaning completed
#-----------------------------------------------------------------------------
# make install is validated only for Linux, OSX, BSD, Hurd and Solaris targets
#-----------------------------------------------------------------------------
ifneq (,$(filter $(shell uname),Linux Darwin GNU/kFreeBSD GNU OpenBSD FreeBSD NetBSD DragonFly SunOS))
unlz4: lz4
ln -s lz4 unlz4
lz4cat: lz4
ln -s lz4 lz4cat
ifneq (,$(filter $(shell uname),SunOS))
INSTALL ?= ginstall
else
INSTALL ?= install
endif
DESTDIR ?=
# directory variables : GNU convention prefers lowercase
# support both lower and uppercase (BSD), use uppercase in script
prefix ?= /usr/local
PREFIX ?= $(prefix)
exec_prefix ?= $(PREFIX)
bindir ?= $(exec_prefix)/bin
BINDIR ?= $(bindir)
datarootdir ?= $(PREFIX)/share
mandir ?= $(datarootdir)/man
ifneq (,$(filter $(shell uname),OpenBSD FreeBSD NetBSD DragonFly SunOS))
MANDIR ?= $(PREFIX)/man/man1
else
MANDIR ?= $(mandir)
endif
INSTALL_PROGRAM ?= $(INSTALL) -m 755
INSTALL_DATA ?= $(INSTALL) -m 644
install: lz4$(EXT) lz4c$(EXT)
@echo Installing binaries
@$(INSTALL) -d -m 755 $(DESTDIR)$(BINDIR)/ $(DESTDIR)$(MANDIR)/
@$(INSTALL_PROGRAM) lz4 $(DESTDIR)$(BINDIR)/lz4
@ln -sf lz4 $(DESTDIR)$(BINDIR)/lz4cat
@ln -sf lz4 $(DESTDIR)$(BINDIR)/unlz4
@$(INSTALL_PROGRAM) lz4c$(EXT) $(DESTDIR)$(BINDIR)/lz4c
@echo Installing man pages
@$(INSTALL_DATA) lz4.1 $(DESTDIR)$(MANDIR)/lz4.1
@ln -sf lz4.1 $(DESTDIR)$(MANDIR)/lz4c.1
@ln -sf lz4.1 $(DESTDIR)$(MANDIR)/lz4cat.1
@ln -sf lz4.1 $(DESTDIR)$(MANDIR)/unlz4.1
@echo lz4 installation completed
uninstall:
@$(RM) $(DESTDIR)$(BINDIR)/lz4cat
@$(RM) $(DESTDIR)$(BINDIR)/unlz4
@$(RM) $(DESTDIR)$(BINDIR)/lz4
@$(RM) $(DESTDIR)$(BINDIR)/lz4c
@$(RM) $(DESTDIR)$(MANDIR)/lz4.1
@$(RM) $(DESTDIR)$(MANDIR)/lz4c.1
@$(RM) $(DESTDIR)$(MANDIR)/lz4cat.1
@$(RM) $(DESTDIR)$(MANDIR)/unlz4.1
@echo lz4 programs successfully uninstalled
endif


@@ -1,71 +0,0 @@
Command Line Interface for LZ4 library
============================================
The Command Line Interface (CLI) can be created using the `make` command without any additional parameters.
There are also multiple targets that create different variations of the CLI:
- `lz4` : default CLI, with a command line syntax close to gzip
- `lz4c` : Same as `lz4`, with additional support for legacy lz4 commands (incompatible with gzip)
- `lz4c32` : Same as `lz4c`, but forced to compile in 32-bit mode
#### Aggregation of parameters
The CLI supports aggregation of parameters, i.e. `-b1`, `-e18`, and `-i1` can be joined into `-b1e18i1`.
#### Benchmark in Command Line Interface
The CLI includes an in-memory compression benchmark module for lz4.
The benchmark is conducted on a given file, which is read into memory; this makes the benchmark more precise by eliminating I/O overhead.
The benchmark measures compression ratio, compressed size, and compression and decompression speed.
One can select a range of compression levels, starting from `-b` and ending with `-e`.
The `-i` parameter selects the number of seconds spent on each of the tested levels.
#### Usage of Command Line Interface
The full list of commands can be obtained with the `-h` or `-H` parameter:
```
Usage :
lz4 [arg] [input] [output]
input : a filename
with no FILE, or when FILE is - or stdin, read standard input
Arguments :
-1 : Fast compression (default)
-9 : High compression
-d : decompression (default for .lz4 extension)
-z : force compression
-f : overwrite output without prompting
--rm : remove source file(s) after successful de/compression
-h/-H : display help/long help and exit
Advanced arguments :
-V : display Version number and exit
-v : verbose mode
-q : suppress warnings; specify twice to suppress errors too
-c : force write to standard output, even if it is the console
-t : test compressed file integrity
-m : multiple input files (implies automatic output filenames)
-r : operate recursively on directories (sets also -m)
-l : compress using Legacy format (Linux kernel compression)
-B# : Block size [4-7] (default : 7)
-BD : Block dependency (improve compression ratio)
--no-frame-crc : disable stream checksum (default:enabled)
--content-size : compressed frame includes original size (default:not present)
--[no-]sparse : sparse mode (default:enabled on file, disabled on stdout)
Benchmark arguments :
-b# : benchmark file(s), using # compression level (default : 1)
-e# : test all compression levels from -bX to # (default : 1)
-i# : minimum evaluation time in seconds (default : 3s)
-B# : cut file into independent blocks of size # bytes [32+]
or predefined block size [4-7] (default: 7)
```
#### License
All files in this directory are licensed under GPL-v2.
See [COPYING](COPYING) for details.
The text of the license is also included at the top of each source file.


@@ -1,521 +0,0 @@
/*
bench.c - Demo program to benchmark open-source compression algorithms
Copyright (C) Yann Collet 2012-2016
GPL v2 License
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
You can contact the author at :
- LZ4 homepage : http://www.lz4.org
- LZ4 source repository : https://github.com/lz4/lz4
*/
/*-************************************
* Compiler options
**************************************/
#ifdef _MSC_VER /* Visual Studio */
# pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */
#endif
/* *************************************
* Includes
***************************************/
#include "platform.h" /* Compiler options */
#include "util.h" /* UTIL_GetFileSize, UTIL_sleep */
#include <stdlib.h> /* malloc, free */
#include <string.h> /* memset */
#include <stdio.h> /* fprintf, fopen, ftello */
#include <time.h> /* clock_t, clock, CLOCKS_PER_SEC */
#include "datagen.h" /* RDG_genBuffer */
#include "xxhash.h"
#include "lz4.h"
#define COMPRESSOR0 LZ4_compress_local
static int LZ4_compress_local(const char* src, char* dst, int srcSize, int dstSize, int clevel) { (void)clevel; return LZ4_compress_default(src, dst, srcSize, dstSize); }
#include "lz4hc.h"
#define COMPRESSOR1 LZ4_compress_HC
#define DEFAULTCOMPRESSOR COMPRESSOR0
#define LZ4_isError(errcode) (errcode==0)
/* *************************************
* Constants
***************************************/
#ifndef LZ4_GIT_COMMIT_STRING
# define LZ4_GIT_COMMIT_STRING ""
#else
# define LZ4_GIT_COMMIT_STRING LZ4_EXPAND_AND_QUOTE(LZ4_GIT_COMMIT)
#endif
#define NBSECONDS 3
#define TIMELOOP_MICROSEC 1*1000000ULL /* 1 second */
#define ACTIVEPERIOD_MICROSEC 70*1000000ULL /* 70 seconds */
#define COOLPERIOD_SEC 10
#define DECOMP_MULT 2 /* test decompression DECOMP_MULT times longer than compression */
#define KB *(1 <<10)
#define MB *(1 <<20)
#define GB *(1U<<30)
static const size_t maxMemory = (sizeof(size_t)==4) ? (2 GB - 64 MB) : (size_t)(1ULL << ((sizeof(size_t)*8)-31));
static U32 g_compressibilityDefault = 50;
/* *************************************
* console display
***************************************/
#define DISPLAY(...) fprintf(stderr, __VA_ARGS__)
#define DISPLAYLEVEL(l, ...) if (g_displayLevel>=l) { DISPLAY(__VA_ARGS__); }
static U32 g_displayLevel = 2; /* 0 : no display; 1: errors; 2 : + result + interaction + warnings; 3 : + progression; 4 : + information */
#define DISPLAYUPDATE(l, ...) if (g_displayLevel>=l) { \
if ((clock() - g_time > refreshRate) || (g_displayLevel>=4)) \
{ g_time = clock(); DISPLAY(__VA_ARGS__); \
if (g_displayLevel>=4) fflush(stdout); } }
static const clock_t refreshRate = CLOCKS_PER_SEC * 15 / 100;
static clock_t g_time = 0;
/* *************************************
* Exceptions
***************************************/
#ifndef DEBUG
# define DEBUG 0
#endif
#define DEBUGOUTPUT(...) if (DEBUG) DISPLAY(__VA_ARGS__);
#define EXM_THROW(error, ...) \
{ \
DEBUGOUTPUT("Error defined at %s, line %i : \n", __FILE__, __LINE__); \
DISPLAYLEVEL(1, "Error %i : ", error); \
DISPLAYLEVEL(1, __VA_ARGS__); \
DISPLAYLEVEL(1, "\n"); \
exit(error); \
}
/* *************************************
* Benchmark Parameters
***************************************/
static U32 g_nbSeconds = NBSECONDS;
static size_t g_blockSize = 0;
int g_additionalParam = 0;
void BMK_setNotificationLevel(unsigned level) { g_displayLevel=level; }
void BMK_setAdditionalParam(int additionalParam) { g_additionalParam=additionalParam; }
void BMK_SetNbSeconds(unsigned nbSeconds)
{
g_nbSeconds = nbSeconds;
DISPLAYLEVEL(3, "- test >= %u seconds per compression / decompression -\n", g_nbSeconds);
}
void BMK_SetBlockSize(size_t blockSize)
{
g_blockSize = blockSize;
}
/* ********************************************************
* Bench functions
**********************************************************/
typedef struct {
const char* srcPtr;
size_t srcSize;
char* cPtr;
size_t cRoom;
size_t cSize;
char* resPtr;
size_t resSize;
} blockParam_t;
struct compressionParameters
{
int (*compressionFunction)(const char* src, char* dst, int srcSize, int dstSize, int cLevel);
};
#define MIN(a,b) ((a)<(b) ? (a) : (b))
#define MAX(a,b) ((a)>(b) ? (a) : (b))
static int BMK_benchMem(const void* srcBuffer, size_t srcSize,
const char* displayName, int cLevel,
const size_t* fileSizes, U32 nbFiles)
{
size_t const blockSize = (g_blockSize>=32 ? g_blockSize : srcSize) + (!srcSize) /* avoid div by 0 */ ;
U32 const maxNbBlocks = (U32) ((srcSize + (blockSize-1)) / blockSize) + nbFiles;
blockParam_t* const blockTable = (blockParam_t*) malloc(maxNbBlocks * sizeof(blockParam_t));
size_t const maxCompressedSize = LZ4_compressBound((int)srcSize) + (maxNbBlocks * 1024); /* add some room for safety */
void* const compressedBuffer = malloc(maxCompressedSize);
void* const resultBuffer = malloc(srcSize);
U32 nbBlocks;
UTIL_time_t ticksPerSecond;
struct compressionParameters compP;
int cfunctionId;
/* checks */
if (!compressedBuffer || !resultBuffer || !blockTable)
EXM_THROW(31, "allocation error : not enough memory");
/* init */
if (strlen(displayName)>17) displayName += strlen(displayName)-17; /* can only display 17 characters */
UTIL_initTimer(&ticksPerSecond);
/* Init */
if (cLevel < LZ4HC_CLEVEL_MIN) cfunctionId = 0; else cfunctionId = 1;
switch (cfunctionId)
{
#ifdef COMPRESSOR0
case 0 : compP.compressionFunction = COMPRESSOR0; break;
#endif
#ifdef COMPRESSOR1
case 1 : compP.compressionFunction = COMPRESSOR1; break;
#endif
default : compP.compressionFunction = DEFAULTCOMPRESSOR;
}
/* Init blockTable data */
{ const char* srcPtr = (const char*)srcBuffer;
char* cPtr = (char*)compressedBuffer;
char* resPtr = (char*)resultBuffer;
U32 fileNb;
for (nbBlocks=0, fileNb=0; fileNb<nbFiles; fileNb++) {
size_t remaining = fileSizes[fileNb];
U32 const nbBlocksforThisFile = (U32)((remaining + (blockSize-1)) / blockSize);
U32 const blockEnd = nbBlocks + nbBlocksforThisFile;
for ( ; nbBlocks<blockEnd; nbBlocks++) {
size_t const thisBlockSize = MIN(remaining, blockSize);
blockTable[nbBlocks].srcPtr = srcPtr;
blockTable[nbBlocks].cPtr = cPtr;
blockTable[nbBlocks].resPtr = resPtr;
blockTable[nbBlocks].srcSize = thisBlockSize;
blockTable[nbBlocks].cRoom = LZ4_compressBound((int)thisBlockSize);
srcPtr += thisBlockSize;
cPtr += blockTable[nbBlocks].cRoom;
resPtr += thisBlockSize;
remaining -= thisBlockSize;
} } }
/* warming up memory */
RDG_genBuffer(compressedBuffer, maxCompressedSize, 0.10, 0.50, 1);
/* Bench */
{ U64 fastestC = (U64)(-1LL), fastestD = (U64)(-1LL);
U64 const crcOrig = XXH64(srcBuffer, srcSize, 0);
UTIL_time_t coolTime;
U64 const maxTime = (g_nbSeconds * TIMELOOP_MICROSEC) + 100;
U64 totalCTime=0, totalDTime=0;
U32 cCompleted=0, dCompleted=0;
# define NB_MARKS 4
const char* const marks[NB_MARKS] = { " |", " /", " =", "\\" };
U32 markNb = 0;
size_t cSize = 0;
double ratio = 0.;
UTIL_getTime(&coolTime);
DISPLAYLEVEL(2, "\r%79s\r", "");
while (!cCompleted || !dCompleted) {
UTIL_time_t clockStart;
U64 clockLoop = g_nbSeconds ? TIMELOOP_MICROSEC : 1;
/* overheat protection */
if (UTIL_clockSpanMicro(coolTime, ticksPerSecond) > ACTIVEPERIOD_MICROSEC) {
DISPLAYLEVEL(2, "\rcooling down ... \r");
UTIL_sleep(COOLPERIOD_SEC);
UTIL_getTime(&coolTime);
}
/* Compression */
DISPLAYLEVEL(2, "%2s-%-17.17s :%10u ->\r", marks[markNb], displayName, (U32)srcSize);
if (!cCompleted) memset(compressedBuffer, 0xE5, maxCompressedSize); /* warm up and erase result buffer */
UTIL_sleepMilli(1); /* give processor time to other processes */
UTIL_waitForNextTick(ticksPerSecond);
UTIL_getTime(&clockStart);
if (!cCompleted) { /* still some time to do compression tests */
U32 nbLoops = 0;
do {
U32 blockNb;
for (blockNb=0; blockNb<nbBlocks; blockNb++) {
size_t const rSize = compP.compressionFunction(blockTable[blockNb].srcPtr, blockTable[blockNb].cPtr, (int)blockTable[blockNb].srcSize, (int)blockTable[blockNb].cRoom, cLevel);
if (LZ4_isError(rSize)) EXM_THROW(1, "LZ4_compress() failed");
blockTable[blockNb].cSize = rSize;
}
nbLoops++;
} while (UTIL_clockSpanMicro(clockStart, ticksPerSecond) < clockLoop);
{ U64 const clockSpan = UTIL_clockSpanMicro(clockStart, ticksPerSecond);
if (clockSpan < fastestC*nbLoops) fastestC = clockSpan / nbLoops;
totalCTime += clockSpan;
cCompleted = totalCTime>maxTime;
} }
cSize = 0;
{ U32 blockNb; for (blockNb=0; blockNb<nbBlocks; blockNb++) cSize += blockTable[blockNb].cSize; }
cSize += !cSize; /* avoid div by 0 */
ratio = (double)srcSize / (double)cSize;
markNb = (markNb+1) % NB_MARKS;
DISPLAYLEVEL(2, "%2s-%-17.17s :%10u ->%10u (%5.3f),%6.1f MB/s\r",
marks[markNb], displayName, (U32)srcSize, (U32)cSize, ratio,
(double)srcSize / fastestC );
(void)fastestD; (void)crcOrig; /* unused when decompression disabled */
#if 1
/* Decompression */
if (!dCompleted) memset(resultBuffer, 0xD6, srcSize); /* warm result buffer */
UTIL_sleepMilli(1); /* give processor time to other processes */
UTIL_waitForNextTick(ticksPerSecond);
UTIL_getTime(&clockStart);
if (!dCompleted) {
U32 nbLoops = 0;
do {
U32 blockNb;
for (blockNb=0; blockNb<nbBlocks; blockNb++) {
size_t const regenSize = LZ4_decompress_safe(blockTable[blockNb].cPtr, blockTable[blockNb].resPtr, (int)blockTable[blockNb].cSize, (int)blockTable[blockNb].srcSize);
if (LZ4_isError(regenSize)) {
DISPLAY("LZ4_decompress_safe() failed on block %u \n", blockNb);
clockLoop = 0; /* force immediate test end */
break;
}
blockTable[blockNb].resSize = regenSize;
}
nbLoops++;
} while (UTIL_clockSpanMicro(clockStart, ticksPerSecond) < DECOMP_MULT*clockLoop);
{ U64 const clockSpan = UTIL_clockSpanMicro(clockStart, ticksPerSecond);
if (clockSpan < fastestD*nbLoops) fastestD = clockSpan / nbLoops;
totalDTime += clockSpan;
dCompleted = totalDTime>(DECOMP_MULT*maxTime);
} }
markNb = (markNb+1) % NB_MARKS;
DISPLAYLEVEL(2, "%2s-%-17.17s :%10u ->%10u (%5.3f),%6.1f MB/s ,%6.1f MB/s\r",
marks[markNb], displayName, (U32)srcSize, (U32)cSize, ratio,
(double)srcSize / fastestC,
(double)srcSize / fastestD );
/* CRC Checking */
{ U64 const crcCheck = XXH64(resultBuffer, srcSize, 0);
if (crcOrig!=crcCheck) {
size_t u;
DISPLAY("!!! WARNING !!! %14s : Invalid Checksum : %x != %x \n", displayName, (unsigned)crcOrig, (unsigned)crcCheck);
for (u=0; u<srcSize; u++) {
if (((const BYTE*)srcBuffer)[u] != ((const BYTE*)resultBuffer)[u]) {
U32 segNb, bNb, pos;
size_t bacc = 0;
DISPLAY("Decoding error at pos %u ", (U32)u);
for (segNb = 0; segNb < nbBlocks; segNb++) {
if (bacc + blockTable[segNb].srcSize > u) break;
bacc += blockTable[segNb].srcSize;
}
pos = (U32)(u - bacc);
bNb = pos / (128 KB);
DISPLAY("(block %u, sub %u, pos %u) \n", segNb, bNb, pos);
break;
}
if (u==srcSize-1) { /* should never happen */
DISPLAY("no difference detected\n");
} }
break;
} } /* CRC Checking */
#endif
} /* for (testNb = 1; testNb <= (g_nbSeconds + !g_nbSeconds); testNb++) */
if (g_displayLevel == 1) {
double cSpeed = (double)srcSize / fastestC;
double dSpeed = (double)srcSize / fastestD;
if (g_additionalParam)
DISPLAY("-%-3i%11i (%5.3f) %6.2f MB/s %6.1f MB/s %s (param=%d)\n", cLevel, (int)cSize, ratio, cSpeed, dSpeed, displayName, g_additionalParam);
else
DISPLAY("-%-3i%11i (%5.3f) %6.2f MB/s %6.1f MB/s %s\n", cLevel, (int)cSize, ratio, cSpeed, dSpeed, displayName);
}
DISPLAYLEVEL(2, "%2i#\n", cLevel);
} /* Bench */
/* clean up */
free(blockTable);
free(compressedBuffer);
free(resultBuffer);
return 0;
}
static size_t BMK_findMaxMem(U64 requiredMem)
{
size_t step = 64 MB;
BYTE* testmem=NULL;
requiredMem = (((requiredMem >> 26) + 1) << 26);
requiredMem += 2*step;
if (requiredMem > maxMemory) requiredMem = maxMemory;
while (!testmem) {
if (requiredMem > step) requiredMem -= step;
else requiredMem >>= 1;
testmem = (BYTE*) malloc ((size_t)requiredMem);
}
free (testmem);
/* keep some space available */
if (requiredMem > step) requiredMem -= step;
else requiredMem >>= 1;
return (size_t)requiredMem;
}
static void BMK_benchCLevel(void* srcBuffer, size_t benchedSize,
const char* displayName, int cLevel, int cLevelLast,
const size_t* fileSizes, unsigned nbFiles)
{
int l;
const char* pch = strrchr(displayName, '\\'); /* Windows */
if (!pch) pch = strrchr(displayName, '/'); /* Linux */
if (pch) displayName = pch+1;
SET_REALTIME_PRIORITY;
if (g_displayLevel == 1 && !g_additionalParam)
DISPLAY("bench %s %s: input %u bytes, %u seconds, %u KB blocks\n", LZ4_VERSION_STRING, LZ4_GIT_COMMIT_STRING, (U32)benchedSize, g_nbSeconds, (U32)(g_blockSize>>10));
if (cLevelLast < cLevel) cLevelLast = cLevel;
for (l=cLevel; l <= cLevelLast; l++) {
BMK_benchMem(srcBuffer, benchedSize,
displayName, l,
fileSizes, nbFiles);
}
}
/*! BMK_loadFiles() :
Loads `buffer` with content of files listed within `fileNamesTable`.
At most, fills `buffer` entirely */
static void BMK_loadFiles(void* buffer, size_t bufferSize,
size_t* fileSizes,
const char** fileNamesTable, unsigned nbFiles)
{
size_t pos = 0, totalSize = 0;
unsigned n;
for (n=0; n<nbFiles; n++) {
FILE* f;
U64 fileSize = UTIL_getFileSize(fileNamesTable[n]);
if (UTIL_isDirectory(fileNamesTable[n])) {
DISPLAYLEVEL(2, "Ignoring %s directory... \n", fileNamesTable[n]);
fileSizes[n] = 0;
continue;
}
f = fopen(fileNamesTable[n], "rb");
if (f==NULL) EXM_THROW(10, "impossible to open file %s", fileNamesTable[n]);
DISPLAYUPDATE(2, "Loading %s... \r", fileNamesTable[n]);
if (fileSize > bufferSize-pos) fileSize = bufferSize-pos, nbFiles=n; /* buffer too small - stop after this file */
{ size_t const readSize = fread(((char*)buffer)+pos, 1, (size_t)fileSize, f);
if (readSize != (size_t)fileSize) EXM_THROW(11, "could not read %s", fileNamesTable[n]);
pos += readSize; }
fileSizes[n] = (size_t)fileSize;
totalSize += (size_t)fileSize;
fclose(f);
}
if (totalSize == 0) EXM_THROW(12, "no data to bench");
}
static void BMK_benchFileTable(const char** fileNamesTable, unsigned nbFiles,
int cLevel, int cLevelLast)
{
void* srcBuffer;
size_t benchedSize;
size_t* fileSizes = (size_t*)malloc(nbFiles * sizeof(size_t));
U64 const totalSizeToLoad = UTIL_getTotalFileSize(fileNamesTable, nbFiles);
char mfName[20] = {0};
if (!fileSizes) EXM_THROW(12, "not enough memory for fileSizes");
/* Memory allocation & restrictions */
benchedSize = BMK_findMaxMem(totalSizeToLoad * 3) / 3;
if (benchedSize==0) EXM_THROW(12, "not enough memory");
if ((U64)benchedSize > totalSizeToLoad) benchedSize = (size_t)totalSizeToLoad;
if (benchedSize > LZ4_MAX_INPUT_SIZE) {
benchedSize = LZ4_MAX_INPUT_SIZE;
DISPLAY("File(s) bigger than LZ4's max input size; testing %u MB only...\n", (U32)(benchedSize >> 20));
} else {
if (benchedSize < totalSizeToLoad)
DISPLAY("Not enough memory; testing %u MB only...\n", (U32)(benchedSize >> 20));
}
srcBuffer = malloc(benchedSize + !benchedSize); /* avoid alloc of zero */
if (!srcBuffer) EXM_THROW(12, "not enough memory");
/* Load input buffer */
BMK_loadFiles(srcBuffer, benchedSize, fileSizes, fileNamesTable, nbFiles);
/* Bench */
snprintf (mfName, sizeof(mfName), " %u files", nbFiles);
{ const char* displayName = (nbFiles > 1) ? mfName : fileNamesTable[0];
BMK_benchCLevel(srcBuffer, benchedSize,
displayName, cLevel, cLevelLast,
fileSizes, nbFiles);
}
/* clean up */
free(srcBuffer);
free(fileSizes);
}
static void BMK_syntheticTest(int cLevel, int cLevelLast, double compressibility)
{
char name[20] = {0};
size_t benchedSize = 10000000;
void* const srcBuffer = malloc(benchedSize);
/* Memory allocation */
if (!srcBuffer) EXM_THROW(21, "not enough memory");
/* Fill input buffer */
RDG_genBuffer(srcBuffer, benchedSize, compressibility, 0.0, 0);
/* Bench */
snprintf (name, sizeof(name), "Synthetic %2u%%", (unsigned)(compressibility*100));
BMK_benchCLevel(srcBuffer, benchedSize, name, cLevel, cLevelLast, &benchedSize, 1);
/* clean up */
free(srcBuffer);
}
int BMK_benchFiles(const char** fileNamesTable, unsigned nbFiles,
int cLevel, int cLevelLast)
{
double const compressibility = (double)g_compressibilityDefault / 100;
if (cLevel > LZ4HC_CLEVEL_MAX) cLevel = LZ4HC_CLEVEL_MAX;
if (cLevelLast > LZ4HC_CLEVEL_MAX) cLevelLast = LZ4HC_CLEVEL_MAX;
if (cLevelLast < cLevel) cLevelLast = cLevel;
if (cLevelLast > cLevel) DISPLAYLEVEL(2, "Benchmarking levels from %d to %d\n", cLevel, cLevelLast);
if (nbFiles == 0)
BMK_syntheticTest(cLevel, cLevelLast, compressibility);
else
BMK_benchFileTable(fileNamesTable, nbFiles, cLevel, cLevelLast);
return 0;
}
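
For orientation, here is a minimal stand-alone round trip through the same lz4 entry points the benchmark loop above exercises (`LZ4_compressBound`, `LZ4_compress_default`, `LZ4_decompress_safe`); the input string and sizes are purely illustrative:
```
/* Minimal single-block round trip through the lz4 calls used above.
 * Returns 0 on success; prints the compressed size and ratio. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "lz4.h"

int main(void)
{
    const char src[] = "yes yes yes yes yes yes yes yes yes yes yes yes";
    int const srcSize = (int)sizeof(src);
    int const bound = LZ4_compressBound(srcSize);   /* worst-case output size */
    char* const cBuf = (char*)malloc((size_t)bound);
    char* const rBuf = (char*)malloc((size_t)srcSize);
    if (!cBuf || !rBuf) { fprintf(stderr, "allocation error\n"); return 1; }

    int const cSize = LZ4_compress_default(src, cBuf, srcSize, bound);
    if (cSize <= 0) { fprintf(stderr, "compression failed\n"); return 1; }

    int const rSize = LZ4_decompress_safe(cBuf, rBuf, cSize, srcSize);
    if (rSize != srcSize || memcmp(src, rBuf, (size_t)srcSize)) {
        fprintf(stderr, "round-trip mismatch\n");
        return 1;
    }
    printf("%d -> %d bytes (ratio %.3f)\n", srcSize, cSize,
           (double)srcSize / (double)cSize);
    free(cBuf);
    free(rBuf);
    return 0;
}
```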


@@ -1,37 +0,0 @@
/*
bench.h - Demo program to benchmark open-source compression algorithm
Copyright (C) Yann Collet 2012-2016
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
You can contact the author at :
- LZ4 source repository : https://github.com/lz4/lz4
- LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
*/
#ifndef BENCH_H_125623623633
#define BENCH_H_125623623633
#include <stddef.h>
int BMK_benchFiles(const char** fileNamesTable, unsigned nbFiles,
int cLevel, int cLevelLast);
/* Set Parameters */
void BMK_SetNbSeconds(unsigned nbLoops);
void BMK_SetBlockSize(size_t blockSize);
void BMK_setAdditionalParam(int additionalParam);
void BMK_setNotificationLevel(unsigned level);
#endif /* BENCH_H_125623623633 */
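
A hypothetical driver for the interface declared above; the file names and parameter values are placeholders, and the program would need to be linked against bench.c and its dependencies:
```
/* Hypothetical driver for the benchmark interface declared above.
 * File names and parameter values are placeholders. */
#include "bench.h"

int main(void)
{
    const char* files[] = { "file1.bin", "file2.bin" };  /* placeholder inputs */

    BMK_setNotificationLevel(2);  /* errors, results and warnings */
    BMK_SetNbSeconds(3);          /* roughly 3 seconds per tested level */
    BMK_SetBlockSize(0);          /* 0 : bench each input as a single block */
    BMK_setAdditionalParam(0);

    /* benchmark compression levels 1 through 9 on both files */
    return BMK_benchFiles(files, 2, 1, 9);
}
```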


@@ -1,189 +0,0 @@
/*
datagen.c - compressible data generator test tool
Copyright (C) Yann Collet 2012-2016
GPL v2 License
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
You can contact the author at :
- LZ4 source repository : https://github.com/lz4/lz4
- Public forum : https://groups.google.com/forum/#!forum/lz4c
*/
/**************************************
* Includes
**************************************/
#include "platform.h" /* Compiler options, SET_BINARY_MODE */
#include "util.h" /* U32 */
#include <stdlib.h> /* malloc */
#include <stdio.h> /* FILE, fwrite */
#include <string.h> /* memcpy */
/**************************************
* Constants
**************************************/
#define KB *(1 <<10)
#define PRIME1 2654435761U
#define PRIME2 2246822519U
/**************************************
* Local types
**************************************/
#define LTLOG 13
#define LTSIZE (1<<LTLOG)
#define LTMASK (LTSIZE-1)
typedef BYTE litDistribTable[LTSIZE];
/*********************************************************
* Local Functions
*********************************************************/
#define MIN(a,b) ( (a) < (b) ? (a) :(b) )
#define RDG_rotl32(x,r) ((x << r) | (x >> (32 - r)))
static unsigned int RDG_rand(U32* src)
{
U32 rand32 = *src;
rand32 *= PRIME1;
rand32 ^= PRIME2;
rand32 = RDG_rotl32(rand32, 13);
*src = rand32;
return rand32;
}
static void RDG_fillLiteralDistrib(litDistribTable lt, double ld)
{
BYTE const firstChar = ld <= 0.0 ? 0 : '(';
BYTE const lastChar = ld <= 0.0 ? 255 : '}';
BYTE character = ld <= 0.0 ? 0 : '0';
U32 u = 0;
while (u<LTSIZE) {
U32 const weight = (U32)((double)(LTSIZE - u) * ld) + 1;
U32 const end = MIN(u+weight, LTSIZE);
while (u < end) lt[u++] = character;
character++;
if (character > lastChar) character = firstChar;
}
}
static BYTE RDG_genChar(U32* seed, const litDistribTable lt)
{
U32 id = RDG_rand(seed) & LTMASK;
return (lt[id]);
}
#define RDG_DICTSIZE (32 KB)
#define RDG_RAND15BITS ((RDG_rand(seed) >> 3) & 32767)
#define RDG_RANDLENGTH ( ((RDG_rand(seed) >> 7) & 7) ? (RDG_rand(seed) & 15) : (RDG_rand(seed) & 511) + 15)
void RDG_genBlock(void* buffer, size_t buffSize, size_t prefixSize, double matchProba, litDistribTable lt, unsigned* seedPtr)
{
BYTE* buffPtr = (BYTE*)buffer;
const U32 matchProba32 = (U32)(32768 * matchProba);
size_t pos = prefixSize;
U32* seed = seedPtr;
/* special case */
while (matchProba >= 1.0)
{
size_t size0 = RDG_rand(seed) & 3;
size0 = (size_t)1 << (16 + size0 * 2);
size0 += RDG_rand(seed) & (size0-1); /* because size0 is power of 2*/
if (buffSize < pos + size0)
{
memset(buffPtr+pos, 0, buffSize-pos);
return;
}
memset(buffPtr+pos, 0, size0);
pos += size0;
buffPtr[pos-1] = RDG_genChar(seed, lt);
}
/* init */
if (pos==0) buffPtr[0] = RDG_genChar(seed, lt), pos=1;
/* Generate compressible data */
while (pos < buffSize)
{
/* Select : Literal (char) or Match (within 32K) */
if (RDG_RAND15BITS < matchProba32)
{
/* Copy (within 32K) */
size_t match;
size_t d;
int length = RDG_RANDLENGTH + 4;
U32 offset = RDG_RAND15BITS + 1;
if (offset > pos) offset = (U32)pos;
match = pos - offset;
d = pos + length;
if (d > buffSize) d = buffSize;
while (pos < d) buffPtr[pos++] = buffPtr[match++];
}
else
{
/* Literal (noise) */
size_t d;
size_t length = RDG_RANDLENGTH;
d = pos + length;
if (d > buffSize) d = buffSize;
while (pos < d) buffPtr[pos++] = RDG_genChar(seed, lt);
}
}
}
void RDG_genBuffer(void* buffer, size_t size, double matchProba, double litProba, unsigned seed)
{
litDistribTable lt;
if (litProba==0.0) litProba = matchProba / 4.5;
RDG_fillLiteralDistrib(lt, litProba);
RDG_genBlock(buffer, size, 0, matchProba, lt, &seed);
}
#define RDG_BLOCKSIZE (128 KB)
void RDG_genOut(unsigned long long size, double matchProba, double litProba, unsigned seed)
{
BYTE buff[RDG_DICTSIZE + RDG_BLOCKSIZE];
U64 total = 0;
size_t genBlockSize = RDG_BLOCKSIZE;
litDistribTable lt;
/* init */
if (litProba==0.0) litProba = matchProba / 4.5;
RDG_fillLiteralDistrib(lt, litProba);
SET_BINARY_MODE(stdout);
/* Generate dict */
RDG_genBlock(buff, RDG_DICTSIZE, 0, matchProba, lt, &seed);
/* Generate compressible data */
while (total < size)
{
RDG_genBlock(buff, RDG_DICTSIZE+RDG_BLOCKSIZE, RDG_DICTSIZE, matchProba, lt, &seed);
if (size-total < RDG_BLOCKSIZE) genBlockSize = (size_t)(size-total);
total += genBlockSize;
fwrite(buff, 1, genBlockSize, stdout);
/* update dict */
memcpy(buff, buff + RDG_BLOCKSIZE, RDG_DICTSIZE);
}
}
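
A small sketch of driving the generator above: it fills a buffer with synthetic data at roughly 50% compressibility (the same default bench.c uses) and writes it to stdout; `datagen.h` is assumed to declare `RDG_genBuffer` as defined here:
```
/* Sketch: fill a 1 MB buffer with synthetic data at ~50% compressibility
 * and write it to stdout for later use. */
#include <stdio.h>
#include <stdlib.h>
#include "datagen.h"  /* RDG_genBuffer, as included by bench.c */

int main(void)
{
    size_t const size = 1 << 20;
    void* const buf = malloc(size);
    if (!buf) { fprintf(stderr, "allocation error\n"); return 1; }

    /* matchProba = 0.50; litProba = 0.0 lets the generator derive it; seed = 0 */
    RDG_genBuffer(buf, size, 0.50, 0.0, 0);

    fwrite(buf, 1, size, stdout);
    free(buf);
    return 0;
}
```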

Some files were not shown because too many files have changed in this diff.