Commit b5f12d28 authored by 董子豪

rust-proofs

parent c8fff73b
type-complexity-threshold = 400
/target
**/*.rs.bk
Cargo.lock
.criterion
**/*.h
heaptrack*
.bencher
*.profile
*.heap
rust-fil-proofs.config.toml
# Global Owners
* @dignifiedquire @cryptonemo
# Contributing
Welcome, it is great that you found your way here. To make the best use of everyone's time, we have gathered some notes
below that we think are helpful when contributing to this project.
## Getting Started
Please start by reviewing this file.
## Coding Standards
- No compiler warnings.
- No [clippy](https://github.com/rust-lang/rust-clippy) warnings.
- Minimize use of `unsafe` and justify usage in comments.
- Prefer `expect` with a good description to `unwrap`.
- Write unit tests in the same file.
- Format your code with `rustfmt`.
- Code should compile on `stable` and `nightly`. If adding `nightly` only features they should be behind a flag.
- Write benchmarks for performance sensitive areas. We use [criterion.rs](https://github.com/japaric/criterion.rs).
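The `expect`-over-`unwrap` guideline above can be sketched as follows (the helper function and its message are illustrative only, not from this codebase):

```rust
/// Parse a sector size given as a decimal string.
///
/// Hypothetical helper used to illustrate the rule: the `expect` message
/// states *why* the value must parse, so a panic points the reader straight
/// at the misconfiguration instead of an anonymous `unwrap` backtrace.
fn parse_sector_size(raw: &str) -> u64 {
    raw.parse()
        .expect("sector size must be a decimal number of bytes")
}

fn main() {
    // 2048 is the smallest published sector size used in tests.
    let size = parse_sector_size("2048");
    assert_eq!(size, 2048);
    println!("sector size: {} bytes", size);
}
```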
## General Guidelines
- PRs require code owner approval to merge.
- Please scope PRs to areas in which you have expertise. This code is still close to research.
- Please follow our commit guideline described below.
- Welcome areas of contribution include:
- SNARKs
- Proof-of-replication
- Rust improvements
- Optimizations
- Documentation (expertise would require careful reading of the code)
## PR Merge Policy (Git topology)
### Allowed (white list)
- Single fast-forward merge commit, with all internal commits squashed.
- Non-fast-forward merge commit, with all internal commits squashed -- rebased to branch from the previous commit to master.
- Non-fast-forward merge commit, with curated (as appropriate), linear, internal commits preserved -- rebased to branch from the previous commit to master.
### Disallowed (black list)
- Non-rebased merge commits which branch from anywhere but the previous commit to master.
- Merge commits whose internal history contains merge commits (except in rare circumstances).
- Multiple fast-forward merge commits for a single PR.
- Internal junk commits — (e.g. strings of WIP).
### In Practice
- In general, please rebase PRs before merging.
- To avoid having approvals dismissed by rebasing, authors may instead choose to:
- First use GitHub's 'resolve conflicts' button;
- Then merge with GitHub's 'squash and merge' button.
If automated conflict resolution is not possible, you will need to rebase and seek re-approval. In any event, please note the guidelines and prefer either a single commit or a usefully curated set of commits.
## Resources for learning Rust
- Beginners
- [The Rust Book](https://doc.rust-lang.org/book/)
- [Rust Playground](https://play.rust-lang.org/)
- [Rust Docs](https://doc.rust-lang.org/)
- [Clippy](https://github.com/rust-lang/rust-clippy)
- [Rustfmt](https://github.com/rust-lang/rustfmt)
- Advanced
- What does the Rust compiler do with my code? [Godbolt compiler explorer](https://rust.godbolt.org/)
- How to safely write unsafe Rust: [The Rustonomicon](https://doc.rust-lang.org/nomicon/)
- Did someone say macros? [The Little Book of Rust Macros](https://danielkeep.github.io/tlborm/book/index.html)
## Commit Message Guidelines
We have very precise rules over how our git commit messages can be formatted. This leads to **more
readable messages** that are easy to follow when looking through the **project history**. But also,
we use the git commit messages to **generate the change log programmatically**.
### Commit Message Format
Each commit message consists of a **header**, a **body** and a **footer**. The header has a special
format that includes a **type**, a **scope** and a **subject**:
```
<type>(<scope>): <subject>
<BLANK LINE>
<body>
<BLANK LINE>
<footer>
```
The **header** is mandatory and the **scope** of the header is optional.
No line of the commit message may be longer than 100 characters! This keeps the message easy
to read on GitHub as well as in various git tools.
The footer should contain a [closing reference to an issue](https://help.github.com/articles/closing-issues-via-commit-messages/) if any.
Samples: (even more [samples](https://github.com/filecoin-project/rust-fil-proofs/commits/master))
```
docs(changelog): update changelog to beta.5
```
```
fix(release): need to depend on latest rxjs and zone.js
The version in our package.json gets copied to the one we publish, and users need the latest of these.
```
### Revert
If the commit reverts a previous commit, it should begin with `revert: `, followed by the header of the reverted commit. In the body it should say: `This reverts commit <hash>.`, where the hash is the SHA of the commit being reverted.
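For example (the hash is illustrative only):

```
revert: fix(release): need to depend on latest rxjs and zone.js

This reverts commit a1b2c3d4.
```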
### Type
Must be one of the following:
* **build**: Changes that affect the build system or external dependencies (example scopes: cargo, benchmarks)
* **ci**: Changes to our CI configuration files and scripts (example scopes: Circle)
* **docs**: Documentation only changes
* **feat**: A new feature
* **fix**: A bug fix
* **perf**: A code change that improves performance
* **refactor**: A code change that neither fixes a bug nor adds a feature
* **style**: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
* **test**: Adding missing tests or correcting existing tests
* **revert**: Used only for `git revert` commits.
### Scope
The scope should be the name of the crate affected (as perceived by the person reading the changelog generated from commit messages).
The following is the list of supported scopes:
* **fil-proofs-tooling**
* **filecoin-proofs**
* **storage-proofs**
There are currently a few exceptions to the "use package name" rule:
* **cargo**: used for changes that change the cargo workspace layout, e.g.
public path changes, Cargo.toml changes done to all packages, etc.
* **changelog**: used for updating the release notes in CHANGELOG.md
* none/empty string: useful for `style`, `test` and `refactor` changes that are done across all
packages (e.g. `style: add missing semicolons`) and for docs changes that are not related to a
specific package (e.g. `docs: fix typo in tutorial`).
> If you find yourself wanting to use other scopes regularly, please open an issue so we can discuss and extend this list.
### Subject
The subject contains a succinct description of the change:
* use the imperative, present tense: "change" not "changed" nor "changes"
* don't capitalize the first letter
* no dot (.) at the end
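For instance, a subject following all three rules (the change itself is illustrative):

```
fix(storage-proofs): remove unused import
```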
### Body
Just as in the **subject**, use the imperative, present tense: "change" not "changed" nor "changes".
The body should include the motivation for the change and contrast this with previous behavior.
### Footer
The footer should contain any information about **Breaking Changes** and is also the place to
reference GitHub issues that this commit **Closes**.
**Breaking Changes** should start with the words `BREAKING CHANGE:` followed by a space or two newlines. The rest of the commit message is then used for this.
This guideline was adopted from the [Angular project](https://github.com/angular/angular/blob/master/CONTRIBUTING.md#commit).
## Licensing
As mentioned in the [readme](README.md) all contributions are dual licensed under Apache 2 and MIT.
This library is dual-licensed under Apache 2.0 and MIT terms.
[workspace]
members = [
"filecoin-proofs",
"storage-proofs-core",
"storage-proofs-porep",
"storage-proofs-post",
"fil-proofs-tooling",
"fil-proofs-param",
"fr32",
"sha2raw",
"filecoin-hashers",
]
# Dockerfile for CircleCI
# build with
# `docker build -t filecoin/rust:latest -f ./Dockerfile-ci .`
# rebuild: `docker build --pull --no-cache -t filecoin/rust:latest -f ./Dockerfile-ci .`
FROM debian:stretch
# Dependencies needed to build a few libraries; personalize to your
# needs. You can use multi-stage builds to produce a lightweight image.
RUN apt-get update && \
apt-get install -y curl file gcc g++ git make openssh-client \
autoconf automake cmake libtool libcurl4-openssl-dev libssl-dev \
libelf-dev libdw-dev binutils-dev zlib1g-dev libiberty-dev wget \
xz-utils pkg-config python clang ocl-icd-opencl-dev libhwloc-dev
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
ENV PATH "$PATH:/root/.cargo/bin"
ENV RUSTFLAGS "-C link-dead-code"
ENV CFG_RELEASE_CHANNEL "nightly"
RUN bash -l -c 'echo $(rustc --print sysroot)/lib >> /etc/ld.so.conf'
RUN bash -l -c 'echo /usr/local/lib >> /etc/ld.so.conf'
RUN ldconfig
# How to build and run this Dockerfile:
#
# ```
# RUST_FIL_PROOFS=`pwd` # path to `rust-fil-proofs`
# docker --log-level debug build --progress tty --file Dockerfile-profile --tag rust-cpu-profile .
# docker run -it -v $RUST_FIL_PROOFS:/code/ rust-cpu-profile
# ```
FROM rust
# Get all the dependencies
# ------------------------
# Copied from: github.com/filecoin-project/rust-fil-proofs/blob/master/Dockerfile-ci
RUN apt-get update && \
apt-get install -y curl file gcc g++ git make openssh-client \
autoconf automake cmake libtool libcurl4-openssl-dev libssl-dev \
libelf-dev libdw-dev binutils-dev zlib1g-dev libiberty-dev wget \
xz-utils pkg-config python clang
# `gperftools` and dependencies (`libunwind`)
# -------------------------------------------
ENV GPERFTOOLS_VERSION="2.7"
ENV LIBUNWIND_VERSION="0.99-beta"
ENV HOME="/root"
ENV DOWNLOADS=${HOME}/downloads
RUN mkdir -p ${DOWNLOADS}
RUN echo ${DOWNLOADS}
WORKDIR ${DOWNLOADS}
RUN wget http://download.savannah.gnu.org/releases/libunwind/libunwind-${LIBUNWIND_VERSION}.tar.gz --output-document ${DOWNLOADS}/libunwind-${LIBUNWIND_VERSION}.tar.gz
RUN tar -xvf ${DOWNLOADS}/libunwind-${LIBUNWIND_VERSION}.tar.gz
WORKDIR ${DOWNLOADS}/libunwind-${LIBUNWIND_VERSION}
RUN ./configure
RUN make
RUN make install
WORKDIR ${DOWNLOADS}
RUN wget https://github.com/gperftools/gperftools/releases/download/gperftools-${GPERFTOOLS_VERSION}/gperftools-${GPERFTOOLS_VERSION}.tar.gz --output-document ${DOWNLOADS}/gperftools-${GPERFTOOLS_VERSION}.tar.gz
RUN tar -xvf ${DOWNLOADS}/gperftools-${GPERFTOOLS_VERSION}.tar.gz
WORKDIR ${DOWNLOADS}/gperftools-${GPERFTOOLS_VERSION}
RUN ./configure
RUN make install
WORKDIR ${DOWNLOADS}
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib
# FIXME: `gperftools` installs the library (`make install`) in
# `/usr/local/lib` by default, but Debian/Ubuntu don't look there;
# the correct `--prefix` should be added to the command.
# Install latest toolchain used by `rust-fil-proofs`
# --------------------------------------------------
RUN rustup default nightly-2019-07-15
# FIXME: The latest version used should be dynamically obtained from the `rust-fil-proofs` repo
# and not hard-coded here.
# Ready to run
# ------------
WORKDIR /code
CMD \
cargo update \
&& \
cargo build \
-p filecoin-proofs \
--release \
--example stacked \
--features \
cpu-profile \
-Z package-features \
&& \
RUST_BACKTRACE=full \
RUST_LOG=trace \
target/release/examples/stacked \
--size 1024 \
&& \
pprof target/release/examples/stacked replicate.profile || bash
Copyright (c) 2018 Filecoin Project
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
# Security Policy
## Reporting a Vulnerability
For reporting *critical* and *security* bugs, please consult our [Security Policy and Responsible Disclosure Program information](https://github.com/filecoin-project/community/blob/master/SECURITY.md)
## Reporting a non security bug
For non-critical bugs, please simply file a GitHub issue on this repo.
fn is_compiled_for_64_bit_arch() -> bool {
cfg!(target_pointer_width = "64")
}
fn main() {
assert!(
is_compiled_for_64_bit_arch(),
"must be built for 64-bit architectures"
);
}
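The runtime guard above can also be expressed at compile time with `compile_error!`; this is an alternative sketch, not what the repo does:

```rust
// Reject non-64-bit targets during compilation instead of at run time.
// `compile_error!` fires whenever the cfg predicate matches, so a 32-bit
// build fails before producing a binary at all.
#[cfg(not(target_pointer_width = "64"))]
compile_error!("must be built for 64-bit architectures");

fn main() {
    // On a 64-bit target, `usize` occupies 8 bytes.
    assert_eq!(std::mem::size_of::<usize>(), 8);
    println!("ok: 64-bit target");
}
```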
[package]
name = "fil-proofs-param"
description = "Filecoin parameter cli tools."
version = "3.0.2"
authors = ["dignifiedquire <dignifiedquire@gmail.com>", "laser <l@s3r.com>", "porcuquine <porcuquine@users.noreply.github.com>"]
license = "MIT OR Apache-2.0"
edition = "2018"
repository = "https://github.com/filecoin-project/rust-fil-proofs"
readme = "README.md"
[dependencies]
storage-proofs-core = { path = "../storage-proofs-core", version = "^8.0.0", default-features = false}
storage-proofs-porep = { path = "../storage-proofs-porep", version = "^8.0.0", default-features = false }
storage-proofs-post = { path = "../storage-proofs-post", version = "^8.0.0", default-features = false }
filecoin-hashers = { version = "^3.0.0", path = "../filecoin-hashers", default-features = false, features = ["poseidon", "sha256"] }
filecoin-proofs = { version = "^8.0.0", path = "../filecoin-proofs", default-features = false }
bitvec = "0.17"
rand = "0.7"
lazy_static = "1.2"
memmap = "0.7"
pbr = "1.0"
byteorder = "1"
itertools = "0.9"
serde = { version = "1.0", features = ["rc", "derive"] }
serde_json = "1.0"
ff = { version = "0.3.1", package = "fff" }
blake2b_simd = "0.5"
bellperson = { version = "0.14", default-features = false }
log = "0.4.7"
fil_logger = "0.1"
env_proxy = "0.4"
flate2 = { version = "1.0.9", features = ["rust_backend"]}
tar = "0.4.26"
rayon = "1.1.0"
blake2s_simd = "0.5.8"
hex = "0.4.0"
merkletree = "0.21.0"
bincode = "1.1.2"
anyhow = "1.0.23"
rand_xorshift = "0.2.0"
sha2 = "0.9.1"
typenum = "1.11.2"
gperftools = { version = "0.2", optional = true }
generic-array = "0.14.4"
structopt = "0.3.12"
humansize = "1.1.0"
indicatif = "0.15.0"
groupy = "0.4.1"
dialoguer = "0.8.0"
clap = "2.33.3"
[dependencies.reqwest]
version = "0.10"
default-features = false
features = ["blocking", "native-tls-vendored"]
[dev-dependencies]
criterion = "0.3"
rexpect = "0.4.0"
pretty_assertions = "0.6.1"
failure = "0.1.7"
tempfile = "3"
[features]
default = ["gpu", "pairing"]
cpu-profile = ["gperftools"]
heap-profile = ["gperftools/heap"]
simd = ["storage-proofs-core/simd"]
asm = ["storage-proofs-core/asm"]
gpu = ["storage-proofs-core/gpu", "storage-proofs-porep/gpu", "storage-proofs-post/gpu", "bellperson/gpu"]
pairing = ["storage-proofs-core/pairing", "storage-proofs-porep/pairing", "storage-proofs-post/pairing", "bellperson/pairing"]
blst = ["storage-proofs-core/blst", "storage-proofs-porep/blst", "storage-proofs-post/blst", "bellperson/blst"]
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
# Filecoin Parameters
> Parameter related utilities for Filecoin.
Available tools are
- `paramcache`
- `paramfetch`
- `parampublish`
- `fakeipfsadd`
# Running `parampublish` with Mocked `ipfs` Binary
```
$ cargo build --bin fakeipfsadd --bin parampublish
$ ./target/debug/parampublish --ipfs-bin=./target/debug/fakeipfsadd [-a]
```
## License
MIT or Apache 2.0
{
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0170db1f394b35d995252228ee359194b13199d259380541dc529fb0099096b0.params": {
"cid": "QmVxjFRyhmyQaZEtCh7nk2abc7LhFkzhnRX4rcHqCCpikR",
"digest": "7610b9f82bfc88405b7a832b651ce2f6",
"sector_size": 2048
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0170db1f394b35d995252228ee359194b13199d259380541dc529fb0099096b0.vk": {
"cid": "QmcS5JZs8X3TdtkEBpHAdUYjdNDqcL7fWQFtQz69mpnu2X",
"digest": "0e0958009936b9d5e515ec97b8cb792d",
"sector_size": 2048
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0cfb4f178bbb71cf2ecfcd42accce558b27199ab4fb59cb78f2483fe21ef36d9.params": {
"cid": "QmUiRx71uxfmUE8V3H9sWAsAXoM88KR4eo1ByvvcFNeTLR",
"digest": "1a7d4a9c8a502a497ed92a54366af33f",
"sector_size": 536870912
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0cfb4f178bbb71cf2ecfcd42accce558b27199ab4fb59cb78f2483fe21ef36d9.vk": {
"cid": "QmfCeddjFpWtavzfEzZpJfzSajGNwfL4RjFXWAvA9TSnTV",
"digest": "4dae975de4f011f101f5a2f86d1daaba",
"sector_size": 536870912
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-3ea05428c9d11689f23529cde32fd30aabd50f7d2c93657c1d3650bca3e8ea9e.params": {
"cid": "QmcSTqDcFVLGGVYz1njhUZ7B6fkKtBumsLUwx4nkh22TzS",
"digest": "82c88066be968bb550a05e30ff6c2413",
"sector_size": 2048
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-3ea05428c9d11689f23529cde32fd30aabd50f7d2c93657c1d3650bca3e8ea9e.vk": {
"cid": "QmSTCXF2ipGA3f6muVo6kHc2URSx6PzZxGUqu7uykaH5KU",
"digest": "ffd79788d614d27919ae5bd2d94eacb6",
"sector_size": 2048
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-50c7368dea9593ed0989e70974d28024efa9d156d585b7eea1be22b2e753f331.params": {
"cid": "QmU9SBzJNrcjRFDiFc4GcApqdApN6z9X7MpUr66mJ2kAJP",
"digest": "700171ecf7334e3199437c930676af82",
"sector_size": 8388608
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-50c7368dea9593ed0989e70974d28024efa9d156d585b7eea1be22b2e753f331.vk": {
"cid": "QmbmUMa3TbbW3X5kFhExs6WgC4KeWT18YivaVmXDkB6ANG",
"digest": "79ebb55f56fda427743e35053edad8fc",
"sector_size": 8388608
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-5294475db5237a2e83c3e52fd6c2b03859a1831d45ed08c4f35dbf9a803165a9.params": {
"cid": "QmdNEL2RtqL52GQNuj8uz6mVj5Z34NVnbaJ1yMyh1oXtBx",
"digest": "c49499bb76a0762884896f9683403f55",
"sector_size": 8388608
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-5294475db5237a2e83c3e52fd6c2b03859a1831d45ed08c4f35dbf9a803165a9.vk": {
"cid": "QmUiVYCQUgr6Y13pZFr8acWpSM4xvTXUdcvGmxyuHbKhsc",
"digest": "34d4feeacd9abf788d69ef1bb4d8fd00",
"sector_size": 8388608
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-7d739b8cf60f1b0709eeebee7730e297683552e4b69cab6984ec0285663c5781.params": {
"cid": "QmVgCsJFRXKLuuUhT3aMYwKVGNA9rDeR6DCrs7cAe8riBT",
"digest": "827359440349fe8f5a016e7598993b79",
"sector_size": 536870912
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-7d739b8cf60f1b0709eeebee7730e297683552e4b69cab6984ec0285663c5781.vk": {
"cid": "QmfA31fbCWojSmhSGvvfxmxaYCpMoXP95zEQ9sLvBGHNaN",
"digest": "bd2cd62f65c1ab84f19ca27e97b7c731",
"sector_size": 536870912
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-0377ded656c6f524f1618760bffe4e0a1c51d5a70c4509eedae8a27555733edc.params": {
"cid": "QmaUmfcJt6pozn8ndq1JVBzLRjRJdHMTPd4foa8iw5sjBZ",
"digest": "2cf49eb26f1fee94c85781a390ddb4c8",
"sector_size": 34359738368
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-0377ded656c6f524f1618760bffe4e0a1c51d5a70c4509eedae8a27555733edc.vk": {
"cid": "QmR9i9KL3vhhAqTBGj1bPPC7LvkptxrH9RvxJxLN1vvsBE",
"digest": "0f8ec542485568fa3468c066e9fed82b",
"sector_size": 34359738368
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-559e581f022bb4e4ec6e719e563bf0e026ad6de42e56c18714a2c692b1b88d7e.params": {
"cid": "Qmdtczp7p4wrbDofmHdGhiixn9irAcN77mV9AEHZBaTt1i",
"digest": "d84f79a16fe40e9e25a36e2107bb1ba0",
"sector_size": 34359738368
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-559e581f022bb4e4ec6e719e563bf0e026ad6de42e56c18714a2c692b1b88d7e.vk": {
"cid": "QmZCvxKcKP97vDAk8Nxs9R1fWtqpjQrAhhfXPoCi1nkDoF",
"digest": "fc02943678dd119e69e7fab8420e8819",
"sector_size": 34359738368
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-2627e4006b67f99cef990c0a47d5426cb7ab0a0ad58fc1061547bf2d28b09def.params": {
"cid": "QmeAN4vuANhXsF8xP2Lx5j2L6yMSdogLzpcvqCJThRGK1V",
"digest": "3810b7780ac0e299b22ae70f1f94c9bc",
"sector_size": 68719476736
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-2627e4006b67f99cef990c0a47d5426cb7ab0a0ad58fc1061547bf2d28b09def.vk": {
"cid": "QmWV8rqZLxs1oQN9jxNWmnT1YdgLwCcscv94VARrhHf1T7",
"digest": "59d2bf1857adc59a4f08fcf2afaa916b",
"sector_size": 68719476736
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-b62098629d07946e9028127e70295ed996fe3ed25b0f9f88eb610a0ab4385a3c.params": {
"cid": "QmVkrXc1SLcpgcudK5J25HH93QvR9tNsVhVTYHm5UymXAz",
"digest": "2170a91ad5bae22ea61f2ea766630322",
"sector_size": 68719476736
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-b62098629d07946e9028127e70295ed996fe3ed25b0f9f88eb610a0ab4385a3c.vk": {
"cid": "QmbfQjPD7EpzjhWGmvWAsyN2mAZ4PcYhsf3ujuhU9CSuBm",
"digest": "6d3789148fb6466d07ee1e24d6292fd6",
"sector_size": 68719476736
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-032d3138d22506ec0082ed72b2dcba18df18477904e35bafee82b3793b06832f.params": {
"cid": "QmWceMgnWYLopMuM4AoGMvGEau7tNe5UK83XFjH5V9B17h",
"digest": "434fb1338ecfaf0f59256f30dde4968f",
"sector_size": 2048
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-032d3138d22506ec0082ed72b2dcba18df18477904e35bafee82b3793b06832f.vk": {
"cid": "QmamahpFCstMUqHi2qGtVoDnRrsXhid86qsfvoyCTKJqHr",
"digest": "dc1ade9929ade1708238f155343044ac",
"sector_size": 2048
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-6babf46ce344ae495d558e7770a585b2382d54f225af8ed0397b8be7c3fcd472.params": {
"cid": "QmYBpTt7LWNAWr1JXThV5VxX7wsQFLd1PHrGYVbrU1EZjC",
"digest": "6c77597eb91ab936c1cef4cf19eba1b3",
"sector_size": 536870912
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-6babf46ce344ae495d558e7770a585b2382d54f225af8ed0397b8be7c3fcd472.vk": {
"cid": "QmWionkqH2B6TXivzBSQeSyBxojaiAFbzhjtwYRrfwd8nH",
"digest": "065179da19fbe515507267677f02823e",
"sector_size": 536870912
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-ecd683648512ab1765faa2a5f14bab48f676e633467f0aa8aad4b55dcb0652bb.params": {
"cid": "QmPXAPPuQtuQz7Zz3MHMAMEtsYwqM1o9H1csPLeiMUQwZH",
"digest": "09e612e4eeb7a0eb95679a88404f960c",
"sector_size": 8388608
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-ecd683648512ab1765faa2a5f14bab48f676e633467f0aa8aad4b55dcb0652bb.vk": {
"cid": "QmYCuipFyvVW1GojdMrjK1JnMobXtT4zRCZs1CGxjizs99",
"digest": "b687beb9adbd9dabe265a7e3620813e4",
"sector_size": 8388608
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-0-sha256_hasher-82a357d2f2ca81dc61bb45f4a762807aedee1b0a53fd6c4e77b46a01bfef7820.params": {
"cid": "QmengpM684XLQfG8754ToonszgEg2bQeAGUan5uXTHUQzJ",
"digest": "6a388072a518cf46ebd661f5cc46900a",
"sector_size": 34359738368
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-0-sha256_hasher-82a357d2f2ca81dc61bb45f4a762807aedee1b0a53fd6c4e77b46a01bfef7820.vk": {
"cid": "Qmf93EMrADXAK6CyiSfE8xx45fkMfR3uzKEPCvZC1n2kzb",
"digest": "0c7b4aac1c40fdb7eb82bc355b41addf",
"sector_size": 34359738368
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-2-sha256_hasher-96f1b4a04c5c51e4759bbf224bbc2ef5a42c7100f16ec0637123f16a845ddfb2.params": {
"cid": "QmS7ye6Ri2MfFzCkcUJ7FQ6zxDKuJ6J6B8k5PN7wzSR9sX",
"digest": "1801f8a6e1b00bceb00cc27314bb5ce3",
"sector_size": 68719476736
},
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-2-sha256_hasher-96f1b4a04c5c51e4759bbf224bbc2ef5a42c7100f16ec0637123f16a845ddfb2.vk": {
"cid": "QmehSmC6BhrgRZakPDta2ewoH9nosNzdjCqQRXsNFNUkLN",
"digest": "a89884252c04c298d0b3c81bfd884164",
"sector_size": 68719476736
}
}
#!/usr/bin/env bash
set -Eeuo pipefail
# pin-params.sh
#
# - Post the directory of params to cluster.ipfs.io
# - Grab the CID for the previous params from proofs.filecoin.io
# - Add the old params as a `prev` dir to the new params dir to keep them around.
# - Pin the new cid on cluster
# - Publish the new cid as a dnslink to proofs.filecoin.io
# - The gateways will pin the new dir by checking proofs.filecoin.io hourly.
#
# Requires:
# - `ipfs-cluster-ctl` - download from https://dist.ipfs.io/#ipfs-cluster-ctl
# - `npx`, as provided by `npm` >= v6
# - `ipfs`
#
# You _must_ provide the following env vars
#
# - CLUSTER_TOKEN - the basic auth string as "username:password"
# - DNSIMPLE_TOKEN - an api key for a dnsimple account with a zone for proofs.filecoin.io
#
# Optional: you can override the input dir by passing a path as the first param.
#
# Usage:
# CLUSTER_TOKEN="user:pass" DNSIMPLE_TOKEN="xyz" ./pin-params.sh
#
INPUT_DIR=${1:-"/var/tmp/filecoin-proof-parameters"}
: "${CLUSTER_TOKEN:?please set CLUSTER_TOKEN env var}"
: "${DNSIMPLE_TOKEN:?please set DNSIMPLE_TOKEN env var}"
echo "checking $INPUT_DIR"
# Grab the version number from the files in the dir.
# Fail if there is more than 1 version or it doesn't match a version string like vNN, e.g. v12
if ls -A $INPUT_DIR &> /dev/null; then
# version will be a list if there is more than one...
VERSION=$(ls $INPUT_DIR | sort -r | cut -c 1-3 | uniq)
echo found $VERSION
if [[ $(echo $VERSION | wc -w) -eq 1 && $VERSION =~ ^v[0-9]+ ]]; then
# we have 1 version, lets go...
COUNT=$(ls -l $INPUT_DIR | wc -l | xargs echo -n)
echo "adding $COUNT files to ipfs..."
else
echo "Error: input dir should contain just the current version of the params"
exit 1
fi
else
echo "Error: input dir '$INPUT_DIR' should contain the params"
exit 1
fi
CLUSTER_HOST="/dnsaddr/filecoin.collab.ipfscluster.io"
ADDITIONAL_CLUSTER_HOST="/dnsaddr/cluster.ipfs.io"
CLUSTER_PIN_NAME="filecoin-proof-parameters-$VERSION"
DNSLINK_DOMAIN="proofs.filecoin.io"
# Add and pin to collab cluster. After this it will be on 1 peer and pin requests
# will have been triggered for the others.
ROOT_CID=$(ipfs-cluster-ctl \
--host $CLUSTER_HOST \
--basic-auth $CLUSTER_TOKEN \
add --quieter \
--local \
--name $CLUSTER_PIN_NAME \
--recursive $INPUT_DIR )
echo "ok! root cid is $ROOT_CID"
# Pin to main cluster additionally.
ipfs-cluster-ctl \
--host $ADDITIONAL_CLUSTER_HOST \
--basic-auth $CLUSTER_TOKEN \
pin add $ROOT_CID \
--no-status
echo "ok! Pin request sent to additional cluster"
# Publish the new cid to the dnslink
npx dnslink-dnsimple --domain $DNSLINK_DOMAIN --link "/ipfs/$ROOT_CID"
echo "done!"
#!/bin/sh
# This script verifies that a given `.params` file (and the corresponding
# `.vk` file) is part of `parameters.json` and has the correct digest.
#
# This script runs on POSIX compatible shells. You need to have standard
# utilities (`basename`, `head`, `grep`) as well as have `jq` and `b2sum`
# installed.
#
# The inputs are a `parameters.json` file and a `.params` file.
if [ "${#}" -ne 2 ]; then
echo "Verify that a given .params file (and the corresponding .vk file)"
echo "is part of parameters.json and has the correct digest."
echo ""
echo "Usage: $(basename "${0}") parameters.json parameter-file.params"
exit 1
fi
if ! command -v b2sum >/dev/null 2>&1
then
echo "ERROR: 'b2sum' needs to be installed."
exit 1
fi
if ! command -v jq >/dev/null 2>&1
then
echo "ERROR: 'jq' needs to be installed."
exit 1
fi
PARAMS_JSON=${1}
PARAMS_ID="${2%.*}"
PARAMS_FILE="${PARAMS_ID}.params"
VK_FILE="${PARAMS_ID}.vk"
# Transforms the `parameters.json` into a string that consists of digest and
# filename pairs.
PARAMS_JSON_DATA=$(jq -r 'to_entries[] | "\(.value.digest) \(.key)"' "${PARAMS_JSON}")
VK_HASH_SHORT=$(b2sum "${VK_FILE}"|head -c 32)
if echo "${PARAMS_JSON_DATA}"|grep --silent "${VK_HASH_SHORT} ${VK_FILE}"; then
echo "ok Correct digest of VK file was found in ${PARAMS_JSON}."
else
echo "not ok ERROR: Digest of VK file was *not* found/correct in ${PARAMS_JSON}."
exit 1
fi
PARAMS_HASH_SHORT=$(b2sum "${PARAMS_FILE}"|head -c 32)
if echo "${PARAMS_JSON_DATA}"|grep --silent "${PARAMS_HASH_SHORT} ${PARAMS_FILE}"; then
echo "ok Correct digest of params file was found in ${PARAMS_JSON}."
else
echo "not ok ERROR: Digest of params file was *not* found/correct in ${PARAMS_JSON}."
exit 1
fi
echo "# Verification successfully completed."
use std::fs::File;
use std::io;
use blake2b_simd::State as Blake2b;
use structopt::StructOpt;
#[derive(Debug, StructOpt)]
#[structopt(
name = "fakeipfsadd",
version = "0.1",
about = "This program is used to simulate the `ipfs add` command while testing. It accepts a \
path to a file and writes 32 characters of its hex-encoded BLAKE2b checksum to stdout. \
Note that the real `ipfs add` command computes and emits a CID."
)]
enum Cli {
Add {
#[structopt(help = "Positional argument for the path to the file to add.")]
file_path: String,
#[structopt(short = "Q", help = "Simulates the -Q argument to `ipfs add`.")]
quieter: bool,
},
}
impl Cli {
fn file_path(&self) -> &str {
match self {
Cli::Add { file_path, .. } => file_path,
}
}
}
pub fn main() {
let cli = Cli::from_args();
let mut src_file = File::open(cli.file_path())
.unwrap_or_else(|_| panic!("failed to open file: {}", cli.file_path()));
let mut hasher = Blake2b::new();
io::copy(&mut src_file, &mut hasher).expect("failed to write BLAKE2b bytes to hasher");
let hex_string: String = hasher.finalize().to_hex()[..32].into();
println!("{}", hex_string)
}
use std::env;
use std::process::exit;
use std::str::FromStr;
use dialoguer::{theme::ColorfulTheme, MultiSelect};
use filecoin_proofs::{
constants::{
DefaultPieceHasher, POREP_PARTITIONS, PUBLISHED_SECTOR_SIZES, WINDOW_POST_CHALLENGE_COUNT,
WINDOW_POST_SECTOR_COUNT, WINNING_POST_CHALLENGE_COUNT, WINNING_POST_SECTOR_COUNT,
},
parameters::{public_params, window_post_public_params, winning_post_public_params},
types::{PaddedBytesAmount, PoRepConfig, PoRepProofPartitions, PoStConfig, SectorSize},
with_shape, PoStType,
};
use humansize::{file_size_opts, FileSize};
use indicatif::ProgressBar;
use log::{error, info, warn};
use rand::rngs::OsRng;
use storage_proofs_core::{
api_version::ApiVersion, compound_proof::CompoundProof, merkle::MerkleTreeTrait,
parameter_cache::CacheableParameters,
};
use storage_proofs_porep::stacked::{StackedCircuit, StackedCompound, StackedDrg};
use storage_proofs_post::fallback::{FallbackPoSt, FallbackPoStCircuit, FallbackPoStCompound};
use structopt::StructOpt;
fn cache_porep_params<Tree: 'static + MerkleTreeTrait>(porep_config: PoRepConfig) {
info!("generating PoRep groth params");
let public_params = public_params(
PaddedBytesAmount::from(porep_config),
usize::from(PoRepProofPartitions::from(porep_config)),
porep_config.porep_id,
porep_config.api_version,
)
.expect("failed to get public params from config");
let circuit = <StackedCompound<Tree, DefaultPieceHasher> as CompoundProof<
StackedDrg<Tree, DefaultPieceHasher>,
StackedCircuit<Tree, DefaultPieceHasher>,
>>::blank_circuit(&public_params);
let _ = StackedCompound::<Tree, DefaultPieceHasher>::get_param_metadata(
circuit.clone(),
&public_params,
)
.expect("failed to get metadata");
let _ = StackedCompound::<Tree, DefaultPieceHasher>::get_groth_params(
Some(&mut OsRng),
circuit.clone(),
&public_params,
)
.expect("failed to get groth params");
let _ = StackedCompound::<Tree, DefaultPieceHasher>::get_verifying_key(
Some(&mut OsRng),
circuit,
&public_params,
)
.expect("failed to get verifying key");
}
fn cache_winning_post_params<Tree: 'static + MerkleTreeTrait>(post_config: &PoStConfig) {
info!("generating Winning-PoSt groth params");
let public_params = winning_post_public_params::<Tree>(post_config)
.expect("failed to get public params from config");
let circuit = <FallbackPoStCompound<Tree> as CompoundProof<
FallbackPoSt<Tree>,
FallbackPoStCircuit<Tree>,
>>::blank_circuit(&public_params);
let _ = <FallbackPoStCompound<Tree>>::get_param_metadata(circuit.clone(), &public_params)
.expect("failed to get metadata");
let _ = <FallbackPoStCompound<Tree>>::get_groth_params(
Some(&mut OsRng),
circuit.clone(),
&public_params,
)
.expect("failed to get groth params");
let _ =
<FallbackPoStCompound<Tree>>::get_verifying_key(Some(&mut OsRng), circuit, &public_params)
.expect("failed to get verifying key");
}
fn cache_window_post_params<Tree: 'static + MerkleTreeTrait>(post_config: &PoStConfig) {
info!("generating Window-PoSt groth params");
let public_params = window_post_public_params::<Tree>(post_config)
.expect("failed to get public params from config");
let circuit: FallbackPoStCircuit<Tree> = <FallbackPoStCompound<Tree> as CompoundProof<
FallbackPoSt<Tree>,
FallbackPoStCircuit<Tree>,
>>::blank_circuit(&public_params);
let _ = <FallbackPoStCompound<Tree>>::get_param_metadata(circuit.clone(), &public_params)
.expect("failed to get metadata");
let _ = <FallbackPoStCompound<Tree>>::get_groth_params(
Some(&mut OsRng),
circuit.clone(),
&public_params,
)
.expect("failed to get groth params");
let _ =
<FallbackPoStCompound<Tree>>::get_verifying_key(Some(&mut OsRng), circuit, &public_params)
.expect("failed to get verifying key");
}
#[derive(Debug, StructOpt)]
#[structopt(
name = "paramcache",
about = "generates and caches SDR PoRep, Winning-PoSt, and Window-PoSt groth params"
)]
struct Opt {
#[structopt(long, help = "Only cache PoSt groth params.")]
only_post: bool,
#[structopt(
short = "z",
long,
use_delimiter = true,
help = "A comma-separated list of sector sizes (in number of bytes)."
)]
sector_sizes: Vec<u64>,
#[structopt(
long = "api-version",
value_name = "SEMANTIC VERSION",
default_value = "1.1.0",
help = "Use a specific rust-fil-proofs API version."
)]
api_version: String,
}
fn generate_params_post(sector_size: u64, api_version: ApiVersion) {
with_shape!(
sector_size,
cache_winning_post_params,
&PoStConfig {
sector_size: SectorSize(sector_size),
challenge_count: WINNING_POST_CHALLENGE_COUNT,
sector_count: WINNING_POST_SECTOR_COUNT,
typ: PoStType::Winning,
priority: true,
api_version,
}
);
with_shape!(
sector_size,
cache_window_post_params,
&PoStConfig {
sector_size: SectorSize(sector_size),
challenge_count: WINDOW_POST_CHALLENGE_COUNT,
sector_count: *WINDOW_POST_SECTOR_COUNT
.read()
.expect("WINDOW_POST_SECTOR_COUNT poisoned")
.get(&sector_size)
.expect("unknown sector size"),
typ: PoStType::Window,
priority: true,
api_version,
}
);
}
fn generate_params_porep(sector_size: u64, api_version: ApiVersion) {
with_shape!(
sector_size,
cache_porep_params,
PoRepConfig {
sector_size: SectorSize(sector_size),
partitions: PoRepProofPartitions(
*POREP_PARTITIONS
.read()
.expect("POREP_PARTITIONS poisoned")
.get(&sector_size)
.expect("unknown sector size"),
),
porep_id: [0; 32],
api_version,
}
);
}
pub fn main() {
// Create a stderr logger for all log levels.
env::set_var("RUST_LOG", "paramcache");
fil_logger::init();
let mut opts = Opt::from_args();
    // If no sector sizes were provided via the CLI, display an interactive menu. Otherwise,
    // filter out invalid CLI sector-size arguments.
if opts.sector_sizes.is_empty() {
let sector_size_strings: Vec<String> = PUBLISHED_SECTOR_SIZES
.iter()
.map(|sector_size| {
let human_size = sector_size
.file_size(file_size_opts::BINARY)
.expect("failed to format sector size");
// Right align numbers for easier reading.
format!("{: >7}", human_size)
})
.collect();
opts.sector_sizes = MultiSelect::with_theme(&ColorfulTheme::default())
.with_prompt(
"Select the sizes that should be generated if not already cached [use space key to \
select, press return to finish]",
)
.items(&sector_size_strings)
.interact()
.expect("interaction failed")
.into_iter()
.map(|i| PUBLISHED_SECTOR_SIZES[i])
.collect();
} else {
opts.sector_sizes.retain(|size| {
if PUBLISHED_SECTOR_SIZES.contains(size) {
true
} else {
let human_size = size
.file_size(file_size_opts::BINARY)
.expect("failed to humansize sector size argument");
warn!("ignoring invalid sector size argument: {}", human_size);
false
}
});
}
if opts.sector_sizes.is_empty() {
error!("no valid sector sizes given, aborting");
exit(1);
}
let api_version = ApiVersion::from_str(&opts.api_version)
.expect("Cannot parse API version from semver string (e.g. 1.1.0)");
for sector_size in opts.sector_sizes {
let human_size = sector_size
.file_size(file_size_opts::BINARY)
.expect("failed to format sector size");
let message = format!("Generating sector size: {}", human_size);
info!("{}", &message);
let spinner = ProgressBar::new_spinner();
spinner.set_message(&message);
spinner.enable_steady_tick(100);
generate_params_post(sector_size, api_version);
if !opts.only_post {
generate_params_porep(sector_size, api_version);
}
spinner.finish_with_message(&format!("✔ {}", &message));
}
}
use std::collections::BTreeMap;
use std::env;
use std::fs::{read_dir, File};
use std::io::{stderr, Write};
use std::path::Path;
use std::process::{exit, Command};
use anyhow::{ensure, Context, Result};
use filecoin_proofs::param::{
get_digest_for_file_within_cache, get_full_path_for_file_within_cache, has_extension,
};
use lazy_static::lazy_static;
use log::{error, info, trace, warn};
use storage_proofs_core::parameter_cache::{
parameter_cache_dir, parameter_cache_dir_name, ParameterData, ParameterMap,
GROTH_PARAMETER_EXT, PARAMETER_METADATA_EXT, SRS_KEY_EXT, VERIFYING_KEY_EXT,
};
use structopt::StructOpt;
lazy_static! {
static ref CLI_ABOUT: String = format!(
"Publish srs file(s) found in the cache directory specified by the env-var \
$FIL_PROOFS_PARAMETER_CACHE (or if the env-var is not set, the dir: {}) to ipfs",
parameter_cache_dir_name(),
);
}
/// Returns `true` if a params filename has the srs extension and starts with a valid version string.
fn is_well_formed_filename(filename: &str) -> bool {
let ext_is_valid = has_extension(filename, SRS_KEY_EXT);
if !ext_is_valid {
if !has_extension(filename, GROTH_PARAMETER_EXT)
&& !has_extension(filename, VERIFYING_KEY_EXT)
&& !has_extension(filename, PARAMETER_METADATA_EXT)
{
warn!("file has invalid extension: {}, ignoring file", filename);
}
return false;
}
let version = filename.split('-').next().unwrap();
if version.len() < 2 {
return false;
}
let version_is_valid =
version.get(0..1).unwrap() == "v" && version[1..].chars().all(|c| c.is_digit(10));
if !version_is_valid {
warn!(
"filename does not start with version: {}, ignoring file",
filename
);
return false;
}
true
}
fn get_filenames_in_cache_dir() -> Vec<String> {
let path = parameter_cache_dir();
if !path.exists() {
warn!("param cache dir does not exist (no files to publish), exiting");
exit(1);
}
    // Ignore entries that are not files or have a non-UTF-8 filename.
read_dir(path)
.expect("failed to read param cache dir")
.filter_map(|entry_res| {
let path = entry_res.expect("failed to read directory entry").path();
if !path.is_file() {
return None;
}
path.file_name()
.and_then(|os_str| os_str.to_str())
.map(|s| s.to_string())
})
.collect()
}
fn publish_file(ipfs_bin: &str, filename: &str) -> Result<String> {
let path = get_full_path_for_file_within_cache(filename);
let output = Command::new(ipfs_bin)
.args(&["add", "-Q", path.to_str().unwrap()])
.output()
.expect("failed to run ipfs subprocess");
stderr()
.write_all(&output.stderr)
.with_context(|| "failed to write ipfs' stderr")?;
ensure!(output.status.success(), "failed to publish via ipfs");
let cid = String::from_utf8(output.stdout)
.with_context(|| "ipfs' stdout is not valid Utf8")?
.trim()
.to_string();
Ok(cid)
}
/// Write the srs-inner-product.json file (or the file specified by `json_path`) containing the
/// published params' IPFS CIDs.
fn write_param_map_to_disk(param_map: &ParameterMap, json_path: &str) -> Result<()> {
let mut file = File::create(json_path).with_context(|| "failed to create json file")?;
serde_json::to_writer_pretty(&mut file, &param_map).with_context(|| "failed to write json")?;
Ok(())
}
#[derive(Debug, StructOpt)]
#[structopt(name = "srspublish", version = "1.0", about = CLI_ABOUT.as_str())]
struct Cli {
#[structopt(
long = "list-all",
short = "a",
help = "The user will be prompted to select the files to publish from the set of all files \
found in the cache dir. Excluding the -a/--list-all flag will result in the user being \
prompted for a single param version number for filtering-in files in the cache dir."
)]
list_all_files: bool,
#[structopt(
long = "ipfs-bin",
value_name = "PATH TO IPFS BINARY",
default_value = "ipfs",
help = "Use a specific ipfs binary instead of searching for one in $PATH."
)]
ipfs_bin: String,
#[structopt(
long = "json",
short = "j",
value_name = "PATH",
default_value = "srs-inner-product.json",
help = "The path to write the srs-inner-product.json file."
)]
json_path: String,
}
pub fn main() {
// Log all levels to stderr.
env::set_var("RUST_LOG", "srspublish");
fil_logger::init();
let cli = Cli::from_args();
let cache_dir = match env::var("FIL_PROOFS_PARAMETER_CACHE") {
Ok(s) => s,
_ => format!("{}", parameter_cache_dir().display()),
};
info!("using param cache dir: {}", cache_dir);
if !Path::new(&cli.ipfs_bin).exists() {
error!("ipfs binary not found: `{}`, exiting", cli.ipfs_bin);
exit(1);
}
// Get the filenames in the cache dir (.srs).
let filenames: Vec<String> = get_filenames_in_cache_dir()
.into_iter()
.filter(|filename| is_well_formed_filename(filename))
.collect();
trace!("found {} param files in cache dir", filenames.len());
// Publish files to ipfs.
let mut param_map: ParameterMap = BTreeMap::new();
for filename in filenames {
trace!("publishing file to ipfs: {}", filename);
match publish_file(&cli.ipfs_bin, &filename) {
Ok(cid) => {
info!("successfully published file to ipfs, cid={}", cid);
let digest =
get_digest_for_file_within_cache(&filename).expect("failed to hash file");
trace!("successfully hashed file: {}", digest);
let param_data = ParameterData {
cid,
digest,
sector_size: 0,
};
param_map.insert(filename, param_data);
}
Err(e) => {
error!("failed to publish file to ipfs:\n{:?}\nexiting", e);
exit(1);
}
}
}
info!("finished publishing files");
    // Write the srs-inner-product.json file containing the published IPFS CIDs.
if let Err(e) = write_param_map_to_disk(&param_map, &cli.json_path) {
error!("failed to write json file:\n{:?}\nexiting", e);
exit(1);
}
info!("successfully wrote json file: {}", cli.json_path);
}
#![deny(clippy::all, clippy::perf, clippy::correctness)]
#![warn(clippy::unwrap_used)]
use std::collections::BTreeMap;
use std::fs::File;
use std::io::{self, BufReader, Write};
use std::path::PathBuf;
use blake2b_simd::State as Blake2b;
use failure::Error as FailureError;
use rand::{thread_rng, Rng};
use storage_proofs_core::parameter_cache::{ParameterData, ParameterMap};
use crate::support::tmp_manifest;
mod session;
use session::ParamFetchSessionBuilder;
/// Produce a random sequence of bytes and the first 32 characters of its
/// hex-encoded BLAKE2b checksum. This helper must be kept in sync with the
/// parampublish implementation.
fn rand_bytes_with_blake2b() -> Result<(Vec<u8>, String), FailureError> {
let bytes = thread_rng().gen::<[u8; 32]>();
let mut hasher = Blake2b::new();
let mut as_slice = &bytes[..];
io::copy(&mut as_slice, &mut hasher)?;
Ok((
        bytes.to_vec(),
hasher.finalize().to_hex()[..32].into(),
))
}
#[test]
fn nothing_to_fetch_if_cache_fully_hydrated() -> Result<(), FailureError> {
let mut manifest: BTreeMap<String, ParameterData> = BTreeMap::new();
let (aaa_bytes, aaa_checksum) = rand_bytes_with_blake2b()?;
let mut aaa_bytes: &[u8] = &aaa_bytes;
// manifest entry checksum matches the BLAKE2b we compute locally
manifest.insert(
"aaa.vk".to_string(),
ParameterData {
cid: "".to_string(),
digest: aaa_checksum,
sector_size: 1234,
},
);
let manifest_pbuf = tmp_manifest(Some(manifest))?;
let mut session = ParamFetchSessionBuilder::new(Some(manifest_pbuf))
.with_session_timeout_ms(1000)
.with_file_and_bytes("aaa.vk", &mut aaa_bytes)
.build();
session.exp_string("determining if file is out of date: aaa.vk")?;
session.exp_string("file is up to date")?;
session.exp_string("no outdated files, exiting")?;
Ok(())
}
#[test]
fn prompts_to_download_if_file_in_manifest_is_missing() -> Result<(), FailureError> {
let mut manifest: BTreeMap<String, ParameterData> = BTreeMap::new();
manifest.insert(
"aaa.vk".to_string(),
ParameterData {
cid: "".to_string(),
digest: "".to_string(),
sector_size: 1024,
},
);
let manifest_pbuf = tmp_manifest(Some(manifest))?;
let mut session = ParamFetchSessionBuilder::new(Some(manifest_pbuf))
.with_session_timeout_ms(1000)
.build();
session.exp_string("determining if file is out of date: aaa.vk")?;
session.exp_string("file not found, marking for download")?;
session.exp_string("Select files to be downloaded")?;
session.exp_string("aaa.vk (1 KiB)")?;
Ok(())
}
#[test]
fn prompts_to_download_if_file_checksum_does_not_match_manifest() -> Result<(), FailureError> {
let mut manifest: BTreeMap<String, ParameterData> = BTreeMap::new();
let (aaa_bytes, _) = rand_bytes_with_blake2b()?;
let mut aaa_bytes: &[u8] = &aaa_bytes;
manifest.insert(
"aaa.vk".to_string(),
ParameterData {
cid: "".to_string(),
digest: "obviouslywrong".to_string(),
sector_size: 1024,
},
);
let manifest_pbuf = tmp_manifest(Some(manifest))?;
let mut session = ParamFetchSessionBuilder::new(Some(manifest_pbuf))
.with_session_timeout_ms(1000)
.with_file_and_bytes("aaa.vk", &mut aaa_bytes)
.build();
session.exp_string("determining if file is out of date: aaa.vk")?;
session.exp_string("params file found")?;
session.exp_string("file has unexpected digest, marking for download")?;
session.exp_string("Select files to be downloaded")?;
session.exp_string("aaa.vk (1 KiB)")?;
Ok(())
}
#[test]
fn fetches_vk_even_if_sector_size_does_not_match() -> Result<(), FailureError> {
let mut manifest: BTreeMap<String, ParameterData> = BTreeMap::new();
manifest.insert(
"aaa.params".to_string(),
ParameterData {
cid: "".to_string(),
digest: "".to_string(),
sector_size: 1024,
},
);
manifest.insert(
"aaa.vk".to_string(),
ParameterData {
cid: "".to_string(),
digest: "".to_string(),
sector_size: 1024,
},
);
let manifest_pbuf = tmp_manifest(Some(manifest))?;
let mut session = ParamFetchSessionBuilder::new(Some(manifest_pbuf))
.with_session_timeout_ms(1000)
.whitelisted_sector_sizes(vec!["6666".to_string(), "4444".to_string()])
.build();
session.exp_string("json contains 2 files")?;
session.exp_string("ignoring file: aaa.params (1 KiB)")?;
session.exp_string("determining if file is out of date: aaa.vk")?;
session.exp_string("file not found, marking for download")?;
Ok(())
}
#[test]
fn invalid_json_path_produces_error() -> Result<(), FailureError> {
let mut session = ParamFetchSessionBuilder::new(Some(PathBuf::from("/invalid/path")))
.with_session_timeout_ms(1000)
.build();
session.exp_string("using json file: /invalid/path")?;
session.exp_string("failed to open json file, exiting")?;
Ok(())
}
#[test]
fn invalid_json_produces_error() -> Result<(), FailureError> {
let manifest_pbuf = tmp_manifest(None)?;
let mut file = File::create(&manifest_pbuf)?;
file.write_all(b"invalid json")?;
let mut session = ParamFetchSessionBuilder::new(Some(manifest_pbuf))
.with_session_timeout_ms(1000)
.build();
session.exp_string("failed to parse json file, exiting")?;
Ok(())
}
#[test]
fn no_json_path_uses_default_manifest() -> Result<(), FailureError> {
let file = File::open("../parameters.json")?;
let reader = BufReader::new(file);
let manifest: ParameterMap = serde_json::from_reader(reader)?;
let mut session = ParamFetchSessionBuilder::new(None)
.with_session_timeout_ms(1000)
.build();
session.exp_string("using built-in json")?;
for parameter in manifest.keys() {
session.exp_string(&format!(
"determining if file is out of date: {}",
parameter
))?;
}
Ok(())
}
use std::fs::File;
use std::io::{self, Read};
use std::panic::panic_any;
use std::path::{Path, PathBuf};
use failure::SyncFailure;
use rexpect::session::PtyReplSession;
use tempfile::{tempdir, TempDir};
use crate::support::{cargo_bin, spawn_bash_with_retries};
pub struct ParamFetchSessionBuilder {
cache_dir: TempDir,
session_timeout_ms: u64,
whitelisted_sector_sizes: Option<Vec<String>>,
manifest: Option<PathBuf>,
prompt_enabled: bool,
}
impl ParamFetchSessionBuilder {
pub fn new(manifest: Option<PathBuf>) -> ParamFetchSessionBuilder {
let temp_dir = tempdir().expect("could not create temp dir");
ParamFetchSessionBuilder {
cache_dir: temp_dir,
session_timeout_ms: 1000,
manifest,
prompt_enabled: true,
whitelisted_sector_sizes: None,
}
}
/// Configure the pty timeout (see documentation for `rexpect::spawn_bash`).
pub fn with_session_timeout_ms(mut self, timeout_ms: u64) -> ParamFetchSessionBuilder {
self.session_timeout_ms = timeout_ms;
self
}
    /// Configure the sector-size whitelist passed to paramfetch via `--sector-sizes`.
pub fn whitelisted_sector_sizes(
mut self,
sector_sizes: Vec<String>,
) -> ParamFetchSessionBuilder {
self.whitelisted_sector_sizes = Some(sector_sizes);
self
}
/// Create a file with the provided bytes in the cache directory.
pub fn with_file_and_bytes<P: AsRef<Path>, R: Read>(
self,
filename: P,
r: &mut R,
) -> ParamFetchSessionBuilder {
let mut pbuf = self.cache_dir.path().to_path_buf();
pbuf.push(filename.as_ref());
let mut file = File::create(&pbuf).expect("failed to create file in temp dir");
io::copy(r, &mut file).expect("failed to copy bytes to file");
self
}
/// Launch paramfetch in an environment configured by the builder.
pub fn build(self) -> ParamFetchSession {
let mut p = spawn_bash_with_retries(10, Some(self.session_timeout_ms))
.unwrap_or_else(|err| panic_any(err));
let cache_dir_path = format!("{:?}", self.cache_dir.path());
let paramfetch_path = cargo_bin("paramfetch");
let whitelist: String = self
.whitelisted_sector_sizes
.map(|wl| {
let mut s = "--sector-sizes=".to_string();
s.push_str(&wl.join(","));
s
})
.unwrap_or_else(|| "".to_string());
let json_argument = if self.manifest.is_some() {
format!("--json={:?}", self.manifest.expect("missing manifest"))
} else {
"".to_string()
};
let cmd = format!(
"{}={} {:?} {} {} {}",
"FIL_PROOFS_PARAMETER_CACHE", // related to var name in core/src/settings.rs
cache_dir_path,
paramfetch_path,
if self.prompt_enabled { "" } else { "--all" },
json_argument,
whitelist,
);
p.execute(&cmd, ".*").expect("could not execute paramfetch");
ParamFetchSession {
pty_session: p,
_cache_dir: self.cache_dir,
}
}
}
/// An active pseudoterminal (pty) used to interact with paramfetch.
pub struct ParamFetchSession {
pty_session: PtyReplSession,
_cache_dir: TempDir,
}
impl ParamFetchSession {
/// Block until provided string is seen on stdout from paramfetch and
/// return remaining output.
pub fn exp_string(
&mut self,
needle: &str,
) -> Result<String, SyncFailure<rexpect::errors::Error>> {
self.pty_session
.exp_string(needle)
.map_err(SyncFailure::new)
}
}
pub mod prompts_to_publish;
pub mod read_metadata_files;
pub mod support;
pub mod write_json_manifest;
use failure::Error as FailureError;
use storage_proofs_core::parameter_cache::CacheEntryMetadata;
use crate::parampublish::support::session::ParamPublishSessionBuilder;
#[test]
fn ignores_files_unrecognized_extensions() -> Result<(), FailureError> {
let to_create = vec!["v1-aaa.vk", "v1-aaa.params", "v1-bbb.txt", "ddd"];
let (mut session, _) = ParamPublishSessionBuilder::new()
.with_session_timeout_ms(1000)
.with_files(&to_create)
.with_metadata("v1-aaa.meta", &CacheEntryMetadata { sector_size: 1024 })
.list_all_files()
.build();
session.exp_string("found 3 param files in cache dir")?;
session.exp_string("found 1 file triples")?;
session.exp_string("Select files to publish")?;
session.exp_string("v1-aaa.params (1 KiB)")?;
session.exp_string("v1-aaa.vk (1 KiB)")?;
session.send_line("")?;
session.exp_string("no params selected, exiting")?;
Ok(())
}
#[test]
fn displays_sector_size_in_prompt() -> Result<(), FailureError> {
let to_create = vec!["v1-aaa.vk", "v1-aaa.params", "v1-xxx.vk", "v1-xxx.params"];
let (mut session, _) = ParamPublishSessionBuilder::new()
.with_session_timeout_ms(1000)
.with_files(&to_create)
.with_metadata("v1-aaa.meta", &CacheEntryMetadata { sector_size: 2048 })
.with_metadata("v1-xxx.meta", &CacheEntryMetadata { sector_size: 1024 })
.list_all_files()
.build();
session.exp_string("found 6 param files in cache dir")?;
session.exp_string("found 2 file triples")?;
session.exp_string("Select files to publish")?;
session.exp_string("v1-xxx.params (1 KiB)")?;
session.exp_string("v1-xxx.vk (1 KiB)")?;
session.exp_string("v1-aaa.params (2 KiB)")?;
session.exp_string("v1-aaa.vk (2 KiB)")?;
session.send_line("")?;
session.exp_string("no params selected, exiting")?;
Ok(())
}
#[test]
fn no_assets_no_prompt() -> Result<(), FailureError> {
let (mut session, _) = ParamPublishSessionBuilder::new()
.with_session_timeout_ms(1000)
.build();
session.exp_string("found 0 param files in cache dir")?;
session.exp_string("no file triples found, exiting")?;
Ok(())
}
use failure::Error as FailureError;
use crate::parampublish::support::session::ParamPublishSessionBuilder;
#[test]
fn fails_if_missing_metadata_file() -> Result<(), FailureError> {
// missing the corresponding .meta file
let filenames = vec!["v12-aaa.vk", "v12-aaa.params"];
let (mut session, _) = ParamPublishSessionBuilder::new()
.with_session_timeout_ms(1000)
.with_files(&filenames)
.build();
session.exp_string("found 2 param files in cache dir")?;
session.exp_string("no file triples found, exiting")?;
Ok(())
}
#[test]
fn fails_if_malformed_metadata_file() -> Result<(), FailureError> {
// A malformed v11-aaa.meta file.
let mut malformed: &[u8] = &[42];
let (mut session, _) = ParamPublishSessionBuilder::new()
.with_session_timeout_ms(1000)
.with_files(&["v11-aaa.vk", "v11-aaa.params"])
.with_file_and_bytes("v11-aaa.meta", &mut malformed)
.build();
session.exp_string("found 3 param files in cache dir")?;
session.exp_string("found 1 file triples")?;
session.exp_string("failed to parse .meta file")?;
session.exp_string("exiting")?;
Ok(())
}
use std::fs::{read_dir, File};
use std::io::{self, Read, Write};
use std::panic::panic_any;
use std::path::{Path, PathBuf};
use failure::SyncFailure;
use rand::{thread_rng, Rng};
use rexpect::session::PtyReplSession;
use storage_proofs_core::parameter_cache::CacheEntryMetadata;
use tempfile::{tempdir, TempDir};
use crate::support::{cargo_bin, spawn_bash_with_retries, FakeIpfsBin};
pub struct ParamPublishSessionBuilder {
cache_dir: TempDir,
cached_file_pbufs: Vec<PathBuf>,
session_timeout_ms: u64,
manifest: PathBuf,
ipfs_bin_path: PathBuf,
list_all_files: bool,
}
impl ParamPublishSessionBuilder {
pub fn new() -> ParamPublishSessionBuilder {
let temp_dir = tempdir().expect("could not create temp dir");
let mut pbuf = temp_dir.path().to_path_buf();
pbuf.push("parameters.json");
File::create(&pbuf).expect("failed to create file in temp dir");
ParamPublishSessionBuilder {
cache_dir: temp_dir,
cached_file_pbufs: vec![],
session_timeout_ms: 1000,
manifest: pbuf,
ipfs_bin_path: cargo_bin("fakeipfsadd"),
list_all_files: false,
}
}
    /// Configure the path used by `parampublish` to add files to the IPFS daemon.
pub fn with_ipfs_bin(mut self, ipfs_bin: &FakeIpfsBin) -> ParamPublishSessionBuilder {
let pbuf: PathBuf = PathBuf::from(&ipfs_bin.bin_path());
self.ipfs_bin_path = pbuf;
self
}
/// Create empty files with the given names in the cache directory.
pub fn with_files<P: AsRef<Path>>(self, filenames: &[P]) -> ParamPublishSessionBuilder {
filenames.iter().fold(self, |acc, item| acc.with_file(item))
}
/// Create a file containing 32 random bytes with the given name in the
/// cache directory.
pub fn with_file<P: AsRef<Path>>(mut self, filename: P) -> ParamPublishSessionBuilder {
let mut pbuf = self.cache_dir.path().to_path_buf();
pbuf.push(filename.as_ref());
let mut file = File::create(&pbuf).expect("failed to create file in temp dir");
let random_bytes = thread_rng().gen::<[u8; 32]>();
file.write_all(&random_bytes)
.expect("failed to write bytes");
self.cached_file_pbufs.push(pbuf);
self
}
/// Create a file with the provided bytes in the cache directory.
pub fn with_file_and_bytes<P: AsRef<Path>, R: Read>(
mut self,
filename: P,
r: &mut R,
) -> ParamPublishSessionBuilder {
let mut pbuf = self.cache_dir.path().to_path_buf();
pbuf.push(filename.as_ref());
let mut file = File::create(&pbuf).expect("failed to create file in temp dir");
io::copy(r, &mut file).expect("failed to copy bytes to file");
self.cached_file_pbufs.push(pbuf);
self
}
/// Create a metadata file with the provided name in the cache directory.
pub fn with_metadata<P: AsRef<Path>>(
self,
filename: P,
meta: &CacheEntryMetadata,
) -> ParamPublishSessionBuilder {
let mut meta_bytes: &[u8] = &serde_json::to_vec(meta)
.expect("failed to serialize CacheEntryMetadata to JSON byte array");
self.with_file_and_bytes(filename, &mut meta_bytes)
}
/// Configure the pty timeout (see documentation for `rexpect::spawn_bash`).
pub fn with_session_timeout_ms(mut self, timeout_ms: u64) -> ParamPublishSessionBuilder {
self.session_timeout_ms = timeout_ms;
self
}
    /// List all files in the cache dir rather than prompting the user to filter by param version.
pub fn list_all_files(mut self) -> ParamPublishSessionBuilder {
self.list_all_files = true;
self
}
/// When publishing, write JSON manifest to provided path.
pub fn write_manifest_to(mut self, manifest_dest: PathBuf) -> ParamPublishSessionBuilder {
self.manifest = manifest_dest;
self
}
/// Launch parampublish in an environment configured by the builder.
pub fn build(self) -> (ParamPublishSession, Vec<PathBuf>) {
let mut p = spawn_bash_with_retries(10, Some(self.session_timeout_ms))
.unwrap_or_else(|err| panic_any(err));
let cache_dir_path = format!("{:?}", self.cache_dir.path());
let cache_contents: Vec<PathBuf> = read_dir(&self.cache_dir)
.unwrap_or_else(|_| panic_any(format!("failed to read cache dir {:?}", self.cache_dir)))
.map(|x| x.expect("failed to get dir entry"))
.map(|x| x.path())
.collect();
let parampublish_path = cargo_bin("parampublish");
let cmd = format!(
"{}={} {:?} {} --ipfs-bin={:?} --json={:?}",
"FIL_PROOFS_PARAMETER_CACHE", // related to var name in core/src/settings.rs
cache_dir_path,
parampublish_path,
if self.list_all_files { "-a" } else { "" },
self.ipfs_bin_path,
self.manifest
);
p.execute(&cmd, ".*")
.expect("could not execute parampublish");
(
ParamPublishSession {
pty_session: p,
_cache_dir: self.cache_dir,
},
cache_contents,
)
}
}
/// An active pseudoterminal (pty) used to interact with parampublish.
pub struct ParamPublishSession {
pty_session: PtyReplSession,
_cache_dir: TempDir,
}
impl ParamPublishSession {
/// Send provided string and trailing newline to parampublish.
pub fn send_line(&mut self, line: &str) -> Result<usize, SyncFailure<rexpect::errors::Error>> {
self.pty_session.send_line(line).map_err(SyncFailure::new)
}
/// Block until provided string is seen on stdout from parampublish and
/// return remaining output.
pub fn exp_string(
&mut self,
needle: &str,
) -> Result<String, SyncFailure<rexpect::errors::Error>> {
self.pty_session
.exp_string(needle)
.map_err(SyncFailure::new)
}
}
use std::collections::BTreeMap;
use std::fs::File;
use std::path::Path;
use storage_proofs_core::parameter_cache::{CacheEntryMetadata, ParameterData};
use crate::{
parampublish::support::session::ParamPublishSessionBuilder,
support::{tmp_manifest, FakeIpfsBin},
};
#[test]
fn writes_json_manifest() -> Result<(), failure::Error> {
let filenames = vec!["v10-aaa.vk", "v10-aaa.params"];
let manifest_path = tmp_manifest(None)?;
let ipfs = FakeIpfsBin::new();
let (mut session, files_in_cache) = ParamPublishSessionBuilder::new()
.with_session_timeout_ms(1000)
.with_files(&filenames)
.with_metadata("v10-aaa.meta", &CacheEntryMetadata { sector_size: 1234 })
.write_manifest_to(manifest_path.clone())
.with_ipfs_bin(&ipfs)
.build();
// compute checksums from files added to cache to compare with
// manifest entries after publishing completes
let cache_checksums = filename_to_checksum(&ipfs, files_in_cache.as_ref());
session.exp_string("Select a version")?;
// There is only one version of parameters, accept that one
session.send_line("")?;
    session.exp_string("Select sizes to publish")?;
// There is only one size, accept that one
session.send_line(" ")?;
// wait for confirmation...
session.exp_string("2 files to publish")?;
session.exp_string("finished publishing files")?;
// read the manifest file from disk and verify that it is well
// formed and contains the expected keys
let manifest_file = File::open(&manifest_path)?;
let manifest_map: BTreeMap<String, ParameterData> = serde_json::from_reader(manifest_file)?;
// ensure that each filename exists in the manifest and that its
// cid matches that which was produced from the `ipfs add` command
for filename in filenames.iter().cloned() {
if let (Some(m_entry), Some(expected)) =
(manifest_map.get(filename), cache_checksums.get(filename))
{
assert_eq!(
&m_entry.cid, expected,
"manifest does not include digest produced by ipfs add for {}",
filename
);
} else {
panic!("{} must be present in both manifest and cache", filename);
}
}
Ok(())
}
/// Produce a map of filename (not path) to the checksum produced by the ipfs
/// binary.
fn filename_to_checksum<P: AsRef<Path>>(
ipfs_bin: &FakeIpfsBin,
paths: &[P],
) -> BTreeMap<String, String> {
paths.iter().fold(BTreeMap::new(), |mut acc, item| {
acc.insert(
item.as_ref()
.file_name()
.and_then(|os_str| os_str.to_str())
.map(|s| s.to_string())
.unwrap_or_else(|| "".to_string()),
ipfs_bin
.compute_checksum(item)
.expect("failed to compute checksum"),
);
acc
})
}
mod paramfetch;
mod parampublish;
mod support;
use std::collections::BTreeMap;
use std::env;
use std::fs::File;
use std::path::{Path, PathBuf};
use std::process::Command;
use std::thread;
use std::time::Duration;
use failure::format_err;
use rexpect::{session::PtyReplSession, spawn_bash};
use storage_proofs_core::parameter_cache::ParameterData;
use tempfile::tempdir;
pub struct FakeIpfsBin {
bin_path: PathBuf,
}
impl FakeIpfsBin {
pub fn new() -> FakeIpfsBin {
FakeIpfsBin {
bin_path: cargo_bin("fakeipfsadd"),
}
}
pub fn compute_checksum<P: AsRef<Path>>(&self, path: P) -> Result<String, failure::Error> {
let output = Command::new(&self.bin_path)
.arg("add")
.arg("-Q")
.arg(path.as_ref())
.output()?;
if !output.status.success() {
Err(format_err!(
"{:?} produced non-zero exit code",
&self.bin_path
))
} else {
Ok(String::from_utf8(output.stdout)?.trim().to_string())
}
}
pub fn bin_path(&self) -> &Path {
&self.bin_path
}
}
/// Get the path of the target directory.
pub fn target_dir() -> PathBuf {
env::current_exe()
.ok()
.map(|mut path| {
path.pop();
if path.ends_with("deps") {
path.pop();
}
path
})
.expect("failed to get current exe path")
}
/// Look up the path to a cargo-built binary within an integration test.
pub fn cargo_bin<S: AsRef<str>>(name: S) -> PathBuf {
target_dir().join(format!("{}{}", name.as_ref(), env::consts::EXE_SUFFIX))
}
/// Spawn a pty and, if an error is produced, retry with linear backoff (to 5s).
pub fn spawn_bash_with_retries(
retries: u8,
timeout: Option<u64>,
) -> Result<PtyReplSession, rexpect::errors::Error> {
let result = spawn_bash(timeout);
if result.is_ok() || retries == 0 {
result
} else {
let sleep_d = Duration::from_millis(5000 / u64::from(retries));
eprintln!(
"failed to spawn pty: {} retries remaining - sleeping {:?}",
retries, sleep_d
);
thread::sleep(sleep_d);
spawn_bash_with_retries(retries - 1, timeout)
}
}
/// Create a parameters.json manifest file in a temp directory and return its
/// path.
pub fn tmp_manifest(
opt_manifest: Option<BTreeMap<String, ParameterData>>,
) -> Result<PathBuf, failure::Error> {
let manifest_dir = tempdir()?;
let mut pbuf = manifest_dir.into_path();
pbuf.push("parameters.json");
let mut file = File::create(&pbuf)?;
if let Some(map) = opt_manifest {
// JSON encode the manifest and write bytes to temp file
serde_json::to_writer(&mut file, &map)?;
}
Ok(pbuf)
}
/target
**/*.rs.bk
Cargo.lock
.criterion
**/*.h
heaptrack*
.bencher
logging-toolkit
*.profile
*.heap
rust-fil-proofs.config.toml
[package]
name = "fil-proofs-tooling"
description = "Tooling for rust-fil-proofs"
version = "7.0.2"
authors = ["dignifiedquire <dignifiedquire@gmail.com>"]
license = "MIT OR Apache-2.0"
publish = false
edition = "2018"
repository = "https://github.com/filecoin-project/rust-fil-proofs"
readme = "README.md"
[dependencies]
storage-proofs-core = { path = "../storage-proofs-core", version = "^8.0.0", default-features = false}
storage-proofs-porep = { path = "../storage-proofs-porep", version = "^8.0.0", default-features = false }
storage-proofs-post = { path = "../storage-proofs-post", version = "^8.0.0", default-features = false }
filecoin-proofs = { path = "../filecoin-proofs", default-features = false }
filecoin-hashers = { path = "../filecoin-hashers", default-features = false, features = ["poseidon", "blake2s", "sha256"] }
clap = "2"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
toml = "0.5"
lazy_static = "1.2"
glob = "0.3"
regex = "1.3.7"
commandspec = "0.12.2"
chrono = { version = "0.4.7", features = ["serde"] }
memmap = "0.7.0"
bellperson = { version = "0.14", default-features = false }
rand = "0.7"
tempfile = "3.0.8"
cpu-time = "1.0.0"
git2 = "0.13.6"
heim = { git = "https://github.com/heim-rs/heim", rev = "e22e235", features = ["host", "memory", "cpu"] }
async-std = "1.6"
blake2s_simd = "0.5.6"
fil_logger = "0.1"
log = "0.4.8"
uom = "0.30"
merkletree = "0.21.0"
bincode = "1.1.2"
anyhow = "1.0.23"
ff = { version = "0.3.1", package = "fff" }
rand_xorshift = "0.2.0"
bytefmt = "0.1.7"
rayon = "1.3.0"
flexi_logger = "0.16.1"
typenum = "1.11.2"
generic-array = "0.14.4"
byte-unit = "4.0.9"
fdlimit = "0.2.0"
dialoguer = "0.8.0"
structopt = "0.3.12"
humansize = "1.1.0"
[features]
default = ["gpu", "measurements", "pairing"]
gpu = [
"storage-proofs-core/gpu",
"storage-proofs-porep/gpu",
"storage-proofs-post/gpu",
"filecoin-proofs/gpu",
"bellperson/gpu",
"filecoin-hashers/gpu",
]
measurements = ["storage-proofs-core/measurements"]
profile = ["storage-proofs-core/profile", "measurements"]
pairing = [
"storage-proofs-core/pairing",
"storage-proofs-porep/pairing",
"storage-proofs-post/pairing",
"filecoin-proofs/pairing",
"bellperson/pairing",
"filecoin-hashers/pairing",
]
blst = [
"storage-proofs-core/blst",
"storage-proofs-porep/blst",
"storage-proofs-post/blst",
"filecoin-proofs/blst",
"bellperson/blst",
"filecoin-hashers/blst",
]
[target.'cfg(target_arch = "x86_64")'.dependencies]
raw-cpuid = "8.1.2"
Copyright (c) 2018 Filecoin Project
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
# fil-proofs-tooling
This crate contains the following binaries:
- `benchy` - Captures Stacked performance metrics.
- `micro` - Runs the micro benchmarks written with criterion and parses their output.
## `benchy`
The `benchy` program can (currently) be used to capture Stacked performance
metrics. Metrics are printed to stdout.
```
$ ./target/release/benchy stacked --size=1024 | jq '.'
{
"inputs": {
"dataSize": 1048576,
"m": 5,
"expansionDegree": 8,
"slothIter": 0,
"partitions": 1,
"hasher": "pedersen",
"samples": 5,
"layers": 10
},
"outputs": {
"avgGrothVerifyingCpuTimeMs": null,
"avgGrothVerifyingWallTimeMs": null,
"circuitNumConstraints": null,
"circuitNumInputs": null,
"extractingCpuTimeMs": null,
"extractingWallTimeMs": null,
"replicationWallTimeMs": 4318,
"replicationCpuTimeMs": 32232,
"replicationWallTimeNsPerByte": 4117,
"replicationCpuTimeNsPerByte": 30739,
"totalProvingCpuTimeMs": 0,
"totalProvingWallTimeMs": 0,
"vanillaProvingCpuTimeUs": 378,
"vanillaProvingWallTimeUs": 377,
"vanillaVerificationWallTimeUs": 98435,
"vanillaVerificationCpuTimeUs": 98393,
"verifyingWallTimeAvg": 97,
"verifyingCpuTimeAvg": 97
}
}
```
To include information about RAM utilization during Stacked benchmarking, run
`benchy` via its wrapper script:
```
$ ./scripts/benchy.sh stacked --size=1024 | jq '.'
{
"inputs": {
"dataSize": 1048576,
"m": 5,
"expansionDegree": 8,
"slothIter": 0,
"partitions": 1,
"hasher": "pedersen",
"samples": 5,
"layers": 10
},
"outputs": {
"avgGrothVerifyingCpuTimeMs": null,
"avgGrothVerifyingWallTimeMs": null,
"circuitNumConstraints": null,
"circuitNumInputs": null,
"extractingCpuTimeMs": null,
"extractingWallTimeMs": null,
"replicationWallTimeMs": 4318,
"replicationCpuTimeMs": 32232,
"replicationWallTimeNsPerByte": 4117,
"replicationCpuTimeNsPerByte": 30739,
"totalProvingCpuTimeMs": 0,
"totalProvingWallTimeMs": 0,
"vanillaProvingCpuTimeUs": 378,
"vanillaProvingWallTimeUs": 377,
"vanillaVerificationWallTimeUs": 98435,
"vanillaVerificationCpuTimeUs": 98393,
"verifyingWallTimeAvg": 97,
"verifyingCpuTimeAvg": 97,
"maxResidentSetSizeKb": 45644
}
}
```
To run benchy on a remote server, provide SSH connection information to the
benchy-remote.sh script:
```shell
$ ./fil-proofs-tooling/scripts/benchy-remote.sh master foo@16.16.16.16 stacked --size=1 | jq '.'
{
"inputs": {
// ...
},
"outputs": {
// ...
}
}
```
Run benchy in "prodbench" mode with custom input and detailed metrics.
```shell
> echo '{
"porep_challenges": 50,
"porep_partitions": 10,
"post_challenged_nodes": 1,
"post_challenges": 20,
"stacked_layers": 11,
"sector_size": "2KiB",
"num_sectors": 1,
"api_version": "1.1.0"
}' > config.json
> cat config.json|RUST_LOG=info ./target/release/benchy prodbench|jq '.'
{
"git": {
"hash": "d751257b4f7339f6ec3de7b3fda1b1b8979ccf21",
"date": "2019-12-18T21:08:21Z"
},
"system": {
"system": "Linux",
"release": "5.2.0-3-amd64",
"version": "#1 SMP Debian 5.2.17-1 (2019-09-26)",
"architecture": "x86_64",
"processor": "Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz",
"processor-base-frequency-hz": 2000,
"processor-max-frequency-hz": 4000,
"processor-features": "FeatureInfo { eax: 526058, ebx: 101713920, edx_ecx: SSE3 | PCLMULQDQ | DTES64 | MONITOR | DSCPL | VMX | EIST | TM2 | SSSE3 | FMA | CMPXCHG16B | PDCM | PCID | SSE41 | SSE42 | X2APIC | MOVBE | POPCNT | TSC_DEADLINE | AESNI | XSAVE | OSXSAVE | AVX | F16C | RDRAND | FPU | VME | DE | PSE | TSC | MSR | PAE | MCE | CX8 | APIC | SEP | MTRR | PGE | MCA | CMOV | PAT | PSE36 | CLFSH | DS | ACPI | MMX | FXSR | SSE | SSE2 | SS | HTT | TM | PBE | 0x4800 }",
"processor-cores-logical": 8,
"processor-cores-physical": 4,
"memory-total-bytes": 32932844000
},
"benchmarks": {
"inputs": {
"window_size_bytes": 512,
"sector_size_bytes": 1024,
"drg_parents": 6,
"expander_parents": 8,
"porep_challenges": 50,
"porep_partitions": 10,
"post_challenges": 20,
"post_challenged_nodes": 1,
"stacked_layers": 4,
"wrapper_parents_all": 8
},
"outputs": {
"comm_d_cpu_time_ms": 0,
"comm_d_wall_time_ms": 0,
"encode_window_time_all_cpu_time_ms": 11,
"encode_window_time_all_wall_time_ms": 4,
"encoding_cpu_time_ms": 23,
"encoding_wall_time_ms": 18,
"epost_cpu_time_ms": 1,
"epost_wall_time_ms": 1,
"generate_tree_c_cpu_time_ms": 12,
"generate_tree_c_wall_time_ms": 6,
"porep_commit_time_cpu_time_ms": 83,
"porep_commit_time_wall_time_ms": 27,
"porep_proof_gen_cpu_time_ms": 6501654,
"porep_proof_gen_wall_time_ms": 972945,
"post_finalize_ticket_cpu_time_ms": 0,
"post_finalize_ticket_time_ms": 0,
"epost_inclusions_cpu_time_ms": 1,
"epost_inclusions_wall_time_ms": 0,
"post_partial_ticket_hash_cpu_time_ms": 1,
"post_partial_ticket_hash_time_ms": 1,
"post_proof_gen_cpu_time_ms": 61069,
"post_proof_gen_wall_time_ms": 9702,
"post_read_challenged_range_cpu_time_ms": 0,
"post_read_challenged_range_time_ms": 0,
"post_verify_cpu_time_ms": 37,
"post_verify_wall_time_ms": 31,
"tree_r_last_cpu_time_ms": 14,
"tree_r_last_wall_time_ms": 6,
"window_comm_leaves_time_cpu_time_ms": 20,
"window_comm_leaves_time_wall_time_ms": 3,
"porep_constraints": 67841707,
"post_constraints": 335127,
"kdf_constraints": 212428
}
}
}
```
## `micro`
All arguments passed to `micro` are forwarded to `cargo bench --all <your arguments> -- --verbose --color never`, with a few exceptions.
### Example
```sh
> cargo run --bin micro -- --bench blake2s hash-blake2s
```
disable-push = true
disable-publish = true
disable-tag = true
no-dev-version = true
#!/usr/bin/env bash
set -e
stacked_path=$1
micro_path=$2
hash_constraints_path=$3
window_post_path=$4
jq --sort-keys -s '{ benchmarks: { "stacked-benchmarks": { outputs: { "max-resident-set-size-kb": .[0] } } } } * .[1]' \
<(jq '.["max-resident-set-size-kb"]' $stacked_path) \
<(jq -s '.[0] * { benchmarks: { "hash-constraints": .[1], "stacked-benchmarks": .[2], "micro-benchmarks": .[3], "window-post-benchmarks": .[4] } }' \
<(jq 'del (.benchmarks)' $micro_path) \
<(jq '.benchmarks' $hash_constraints_path) \
<(jq '.benchmarks' $stacked_path) \
<(jq '.benchmarks' $micro_path) \
<(jq '.benchmarks' $window_post_path))
#!/usr/bin/env bash
which jq >/dev/null || { printf '%s\n' "error: jq not found in PATH" >&2; exit 1; }
BENCHY_STDOUT=$(mktemp)
GTIME_STDERR=$(mktemp)
JQ_STDERR=$(mktemp)
GTIME_BIN="env time"
GTIME_ARG="-f '{ \"max-resident-set-size-kb\": %M }' cargo run --quiet --bin benchy --release -- ${@}"
if [[ $(env time --version 2>&1) != *"GNU"* ]]; then
if [[ $(/usr/bin/time --version 2>&1) != *"GNU"* ]]; then
if [[ $(env gtime --version 2>&1) != *"GNU"* ]]; then
printf '%s\n' "error: GNU time not installed" >&2
exit 1
else
GTIME_BIN="gtime"
fi
else
GTIME_BIN="/usr/bin/time"
fi
fi
CMD="${GTIME_BIN} ${GTIME_ARG}"
eval "RUST_BACKTRACE=1 RUSTFLAGS=\"-Awarnings -C target-cpu=native\" ${CMD}" > $BENCHY_STDOUT 2> $GTIME_STDERR
GTIME_EXIT_CODE=$?
jq -s '.[0] * .[1]' $BENCHY_STDOUT $GTIME_STDERR 2> $JQ_STDERR
JQ_EXIT_CODE=$?
if [[ ! $GTIME_EXIT_CODE -eq 0 || ! $JQ_EXIT_CODE -eq 0 ]]; then
>&2 echo "*********************************************"
>&2 echo "* benchy failed - dumping debug information *"
>&2 echo "*********************************************"
>&2 echo ""
>&2 echo "<COMMAND>"
>&2 echo "${CMD}"
>&2 echo "</COMMAND>"
>&2 echo ""
>&2 echo "<GTIME_STDERR>"
>&2 echo "$(cat $GTIME_STDERR)"
>&2 echo "</GTIME_STDERR>"
>&2 echo ""
>&2 echo "<BENCHY_STDOUT>"
>&2 echo "$(cat $BENCHY_STDOUT)"
>&2 echo "</BENCHY_STDOUT>"
>&2 echo ""
>&2 echo "<JQ_STDERR>"
>&2 echo "$(cat $JQ_STDERR)"
>&2 echo "</JQ_STDERR>"
exit 1
fi
#!/usr/bin/env bash
MICRO_SDERR=$(mktemp)
MICRO_SDOUT=$(mktemp)
JQ_STDERR=$(mktemp)
CMD="cargo run --bin micro --release ${@}"
eval "RUST_BACKTRACE=1 RUSTFLAGS=\"-Awarnings -C target-cpu=native\" ${CMD}" 1> $MICRO_SDOUT 2> $MICRO_SDERR
MICRO_EXIT_CODE=$?
cat $MICRO_SDOUT | jq '.' 2> $JQ_STDERR
JQ_EXIT_CODE=$?
if [[ ! $MICRO_EXIT_CODE -eq 0 || ! $JQ_EXIT_CODE -eq 0 ]]; then
>&2 echo "********************************************"
>&2 echo "* micro failed - dumping debug information *"
>&2 echo "********************************************"
>&2 echo ""
>&2 echo "<COMMAND>"
>&2 echo "${CMD}"
>&2 echo "</COMMAND>"
>&2 echo ""
>&2 echo "<MICRO_SDERR>"
>&2 echo "$(cat $MICRO_SDERR)"
>&2 echo "</MICRO_SDERR>"
>&2 echo ""
>&2 echo "<MICRO_SDOUT>"
>&2 echo "$(cat $MICRO_SDOUT)"
>&2 echo "</MICRO_SDOUT>"
>&2 echo ""
>&2 echo "<JQ_STDERR>"
>&2 echo "$(cat $JQ_STDERR)"
>&2 echo "</JQ_STDERR>"
exit 1
fi
#!/usr/bin/env bash
# Inspired by https://gist.github.com/reacocard/28611bfaa2395072119464521d48729a
set -o errexit
set -o nounset
set -o pipefail
# Retry a command on a particular exit code, up to a max number of attempts,
# with exponential backoff.
# Invocation:
# err_retry exit_code attempts sleep_millis <command>
# exit_code: The exit code to retry on.
# attempts: The number of attempts to make.
# sleep_millis: Multiplier for sleep between attempts. Examples:
# If multiplier is 1000, sleep intervals are 1, 4, 9, 16, etc. seconds.
# If multiplier is 5000, sleep intervals are 5, 20, 45, 80, 125, etc. seconds.
exit_code=$1
attempts=$2
sleep_millis=$3
shift 3
for attempt in `seq 1 $attempts`; do
# This weird construction lets us capture return codes under -o errexit
"$@" && rc=$? || rc=$?
if [[ ! $rc -eq $exit_code ]]; then
exit $rc
fi
if [[ $attempt -eq $attempts ]]; then
exit $rc
fi
sleep_ms="$(($attempt * $attempt * $sleep_millis))"
sleep_seconds=$(echo "scale=2; ${sleep_ms}/1000" | bc)
(>&2 echo "sleeping ${sleep_seconds}s and then retrying ($((attempt + 1))/${attempts})")
sleep "${sleep_seconds}"
done
#!/usr/bin/env bash
CMDS=$(cat <<EOF
set -e
# Creates a temporary directory in which we build rust-fil-proofs and capture
# performance metrics. The name of the directory (today's UTC seconds plus 24
# hours) serves as a cleanup mechanism; before metrics are captured, any expired
# directories are removed.
_one_day_from_now=\$((\$(date +%s) + 86400))
_metrics_dir=/tmp/metrics/\$_one_day_from_now
# Find and prune any stale metrics directories.
find /tmp/metrics/ -maxdepth 1 -mindepth 1 -type d -printf "%f\n" \
| xargs -I {} bash -c 'if (({} < \$(date +%s))) ; then rm -rf /tmp/metrics/{} ; fi' 2> /dev/null
# Make sure hwloc library is available on the remote host.
apt-get -y -q install libhwloc-dev > /dev/null 2>&1
# Make sure rust is installed on the remote host.
curl https://sh.rustup.rs -sSf | sh -s -- -y > /dev/null 2>&1
source "$HOME/.cargo/env" > /dev/null 2>&1
git clone -b $1 --single-branch https://github.com/filecoin-project/rust-fil-proofs.git \$_metrics_dir || true
cd \$_metrics_dir
./fil-proofs-tooling/scripts/retry.sh 42 10 60000 \
./fil-proofs-tooling/scripts/with-lock.sh 42 /tmp/metrics.lock \
./fil-proofs-tooling/scripts/with-dots.sh \
${@:3}
EOF
)
ssh -q $2 "$CMDS"
#!/usr/bin/env bash
trap cleanup EXIT
cleanup() {
kill $DOT_PID
}
(
sleep 1
while true; do
(printf "." >&2)
sleep 1
done
) &
DOT_PID=$!
"$@"
#!/usr/bin/env bash
# Inspired by http://mywiki.wooledge.org/BashFAQ/045
failure_code=$1
lockdir=$2
shift 2
# Check to make sure that the process which owns the lock, if one exists, is
# still alive. If the process is not alive, release the lock.
for lockdir_pid in $(find "$lockdir" -type f -exec basename {} \; 2> /dev/null)
do
if ! ps -p "${lockdir_pid}" > /dev/null
then
(>&2 echo "cleaning up leaked lock (pid=${lockdir_pid}, path=${lockdir})")
rm -rf "${lockdir}"
fi
done
if mkdir "$lockdir" > /dev/null 2>&1
then
(>&2 echo "successfully acquired lock (pid=$$, path=${lockdir})")
# Create a file to track the process id that acquired the lock. This
# is used to prevent leaks if the lock isn't relinquished correctly.
touch "$lockdir/$$"
# Unlock (by removing dir and pid file) when the script finishes.
trap '(>&2 echo "relinquishing lock (${lockdir})"); rm -rf "$lockdir"' EXIT
# Execute command
"$@"
else
(>&2 echo "failed to acquire lock (path=${lockdir})")
exit "$failure_code"
fi
use std::path::Path;
use anyhow::Result;
use bellperson::bls::Bls12;
use bellperson::groth16::MappedParameters;
use clap::{value_t, App, Arg, SubCommand};
use storage_proofs_core::parameter_cache::read_cached_params;
fn run_map(parameter_file: &Path) -> Result<MappedParameters<Bls12>> {
read_cached_params(&parameter_file.to_path_buf())
}
fn main() {
fil_logger::init();
let map_cmd = SubCommand::with_name("map")
.about("build mapped parameters")
.arg(
Arg::with_name("param")
.long("parameter-file")
.help("The parameter file to map")
.required(true)
.takes_value(true),
);
let matches = App::new("check_parameters")
.version("0.1")
.subcommand(map_cmd)
.get_matches();
match matches.subcommand() {
("map", Some(m)) => {
let parameter_file_str = value_t!(m, "param", String).expect("param failed");
run_map(&Path::new(&parameter_file_str)).expect("run_map failed");
}
_ => panic!("Unrecognized subcommand"),
}
}
fn main() {
fil_logger::init();
let res = fdlimit::raise_fd_limit().expect("failed to raise fd limit");
println!("File descriptor limit was raised to {}", res);
}
use storage_proofs_core::settings::SETTINGS;
fn main() {
println!("{:#?}", *SETTINGS);
}
#![deny(clippy::all, clippy::perf, clippy::correctness, rust_2018_idioms)]
#![warn(clippy::unwrap_used)]
#![warn(clippy::needless_collect)]
pub mod measure;
pub mod metadata;
pub mod shared;
pub use measure::{measure, FuncMeasurement};
pub use metadata::Metadata;
pub use shared::{create_replica, create_replicas};