Introduction
Filecoin is a distributed storage network based on a blockchain mechanism. Filecoin miners can elect to provide storage capacity for the network, and thereby earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify that they are providing the capacity specified. In addition, Filecoin enables parties to exchange FIL currency through transactions recorded in a shared ledger on the Filecoin blockchain. Rather than using Nakamoto-style proof of work to maintain consensus on the chain, however, Filecoin uses proof of storage itself: a miner’s power in the consensus protocol is proportional to the amount of storage it provides.
The Filecoin blockchain not only maintains the ledger for FIL transactions and accounts, but also implements the Filecoin VM, a replicated state machine which executes a variety of cryptographic contracts and market mechanisms among participants on the network. These contracts include storage deals, in which clients pay FIL currency to miners in exchange for storing the specific file data that the clients request. Via the distributed implementation of the Filecoin VM, storage deals and other contract mechanisms recorded on the chain continue to be processed over time, without requiring further interaction from the original parties (such as the clients who requested the data storage).
Spec Status
Each section of the spec must be stable and audited before it is considered done. The state of each section is tracked below.
- The State column indicates the stability as defined in the legend.
- The Theory Audit column shows the date of the last theory audit with a link to the report.
- The Weight column is used to highlight the relative criticality of a section against the others.
Spec Status Legend
Spec state | Label |
---|---|
Unlikely to change in the foreseeable future. | Stable |
All content is correct. Important details are covered. | Reliable |
All content is correct. Details are being worked on. | Draft/WIP |
Do not follow. Important things have changed. | Incorrect |
No work has been done yet. | Missing |
Spec Status Overview
Spec Stabilization Progress
This progress bar shows what percentage of the spec sections are considered stable.
Implementations Status
Known implementations of the Filecoin spec are tracked below, with their current CI build status, their test coverage as reported by codecov.io, and a link to their last security audit report where one exists.
Repo | Language | CI | Test Coverage | Security Audit |
---|---|---|---|---|
lotus | go | Passed | 37% | WIP |
go-fil-markets | go | Passed | 61% | WIP |
specs-actors | go | Passed | 74% | WIP |
rust-fil-proofs | rust | Unknown | Unknown | [1] [2] |
go-filecoin | go | Failed | 48% | Missing |
forest | rust | Unknown | Unknown | Missing |
cpp-filecoin | c++ | Passed | 33% | Missing |
Architecture Diagrams
Overview Diagram
TODO
- cleanup / reorganize
- this diagram is accurate, and helps lots to navigate, but it’s still a bit confusing
- the arrows and lines make it a bit hard to follow. We should have a much cleaner version (maybe based on C4)
- reflect addition of Token system
- move data_transfers into Token
Protocol Flow Diagram
Parameter Calculation Dependency Graph
This is a diagram of the model for parameter calculation, made with Orient, our tool for modeling and solving constraints.
Key Concepts
For clarity, we use the following types of entities when describing implementations of the Filecoin protocol:
Data structures are collections of semantically-tagged data members (e.g., structs, interfaces, or enums).
Functions are computational procedures that do not depend on external state (i.e., mathematical functions, or programming language functions that do not refer to global variables).
Components are sets of functionality that are intended to be represented as single software units in the implementation structure. Depending on the choice of language and the particular component, this might correspond to a single software module, a thread or process running some main loop, a disk-backed database, or a variety of other design choices. For example, the ChainSync is a component: it could be implemented as a process or thread running a single specified main loop, which waits for network messages and responds accordingly by recording and/or forwarding block data.
APIs are the interfaces for delivering messages to components. A client’s view of a given sub-protocol, such as a request to a miner node’s Storage Provider component to store files in the storage market, may require the execution of a series of API requests.
Nodes are complete software and hardware systems that interact with the protocol. A node might be constantly running several of the above components, participating in several subsystems, and exposing APIs locally and/or over the network, depending on the node configuration. The term full node refers to a system that runs all of the above components and supports all of the APIs detailed in the spec.
Subsystems are conceptual divisions of the entire Filecoin protocol, either in terms of complete protocols (such as the Storage Market or Retrieval Market), or in terms of functionality (such as the VM - Virtual Machine). They do not necessarily correspond to any particular node or software component.
Actors are virtual entities embodied in the state of the Filecoin VM. Protocol actors are analogous to participants in smart contracts; an actor carries a FIL currency balance and can interact with other actors via the operations of the VM, but does not necessarily correspond to any particular node or software component.
Filecoin VM
The majority of Filecoin’s user-facing functionality (payments, storage market, power table, etc.) is managed through the Filecoin Virtual Machine (Filecoin VM). The network generates a series of blocks, and agrees which ‘chain’ of blocks is the correct one. Each block contains a series of state transitions called messages, and a checkpoint of the current global state after the application of those messages.
The global state here consists of a set of actors, each with their own private state.
An actor is the Filecoin equivalent of Ethereum’s smart contracts: it is essentially an ‘object’ in the Filecoin network with state and a set of methods that can be used to interact with it. Every actor has a Filecoin balance attributed to it, a state pointer, a code CID which tells the system what type of actor it is, and a nonce which tracks the number of messages sent by this actor.
There are two routes to calling a method on an actor. First, to call a method as an external participant of the system (i.e., a normal user with Filecoin), you must send a signed message to the network, and pay a fee to the miner that includes your message. The signature on the message must match the key associated with an account holding sufficient Filecoin to pay for the message’s execution. The fee here is equivalent to transaction fees in Bitcoin and Ethereum, where it is proportional to the work that is done to process the message (Bitcoin prices messages per byte; Ethereum uses the concept of ‘gas’, as does Filecoin).
Second, an actor may call a method on another actor during the invocation of one of its methods. However, this may only happen as a result of some actor being invoked by an external user’s message (note: an actor called by a user may call another actor that then calls another actor, as many layers deep as the execution can afford to run).
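As a rough illustration of the fee model just described, the fee an external message pays can be sketched as gas consumed times the offered gas price, capped by a gas limit. The struct fields and function below are illustrative assumptions, not the normative Filecoin message format:

```go
package main

import "fmt"

// SignedMessage is a simplified, illustrative message shape; the real
// Filecoin message format contains more fields (value, params, signature).
type SignedMessage struct {
	From     string // account actor paying the fee
	To       string // target actor
	Method   uint64 // method number to invoke on the target actor
	Nonce    uint64 // must match the sending actor's nonce
	GasPrice int64  // price the sender offers per unit of gas
	GasLimit int64  // maximum gas the sender is willing to pay for
}

// FeeFor computes the fee charged once execution has consumed gasUsed
// units, in the Ethereum-style gas accounting the text describes.
func FeeFor(m SignedMessage, gasUsed int64) int64 {
	if gasUsed > m.GasLimit {
		gasUsed = m.GasLimit // execution aborts once the limit is reached
	}
	return gasUsed * m.GasPrice
}

func main() {
	m := SignedMessage{From: "t1alice", To: "t1bob", GasPrice: 2, GasLimit: 1000}
	fmt.Println(FeeFor(m, 400)) // 400 gas at price 2 -> 800
}
```

The miner that includes the message collects this fee, which is why fees scale with the work required to process the message.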
For full implementation details, see the VM Subsystem.
System Decomposition
What are Systems? How do they work?
Filecoin decouples and modularizes functionality into loosely-joined systems.
Each system adds significant functionality, usually to achieve a set of important and tightly related goals.
For example, the Blockchain System provides structures like Block, Tipset, and Chain, and provides functionality like Block Sync, Block Propagation, Block Validation, Chain Selection, and Chain Access. This is separated from the Files, Pieces, Piece Preparation, and Data Transfer. Both of these systems are separated from the Markets, which provide Orders, Deals, Market Visibility, and Deal Settlement.
Why is System decoupling useful?
This decoupling is useful for:
- Implementation Boundaries: it is possible to build implementations of Filecoin that only implement a subset of systems. This is especially useful for Implementation Diversity: we want many implementations of security critical systems (eg Blockchain), but do not need many implementations of Systems that can be decoupled.
- Runtime Decoupling: system decoupling makes it easier to build and run Filecoin Nodes that isolate Systems into separate programs, and even separate physical computers.
- Security Isolation: some systems require higher operational security than others. System decoupling allows implementations to meet their security and functionality needs. A good example of this is separating Blockchain processing from Data Transfer.
- Scalability: systems, and various use cases, may drive different performance requirements for different operators. System decoupling makes it easier for operators to scale their deployments along system boundaries.
Filecoin Nodes don’t need all the systems
Filecoin Nodes vary significantly, and do not need all the systems. Most systems are only needed for a subset of use cases.
For example, the Blockchain System is required for synchronizing the chain, participating in secure consensus, storage mining, and chain validation. Many Filecoin Nodes do not need the chain and can perform their work by just fetching content from the latest StateTree, from a node they trust.
Note: Filecoin does not use the “full node” or “light client” terminology in wide use in Bitcoin and other blockchain networks. In Filecoin, these terms are not well defined. It is best to define nodes in terms of their capabilities, and therefore in terms of the Systems they run. For example:
- Chain Verifier Node: Runs the Blockchain system. Can sync and validate the chain. Cannot mine or produce blocks.
- Client Node: Runs the Blockchain, Market, and Data Transfer systems. Can sync and validate the chain. Cannot mine or produce blocks.
- Retrieval Miner Node: Runs the Market and Data Transfer systems. Does not need the chain. Can make Retrieval Deals (Retrieval Provider side). Can send Clients data, and get paid for it.
- Storage Miner Node: Runs the Blockchain, Storage Market, Storage Mining systems. Can sync and validate the chain. Can make Storage Deals (Storage Provider side). Can seal stored data into sectors. Can acquire storage consensus power. Can mine and produce blocks.
Separating Systems
How do we determine what functionality belongs in one system vs another?
Drawing boundaries between systems is the art of separating tightly related functionality from unrelated parts. In a sense, we seek to keep tightly integrated components in the same system, and away from other unrelated components. This is sometimes straightforward: the boundaries naturally spring from the data structures or functionality. For example, it is straightforward to observe that Clients and Miners negotiating a deal with each other is quite unrelated to VM Execution.
Sometimes this is harder, and it requires detangling, adding, or removing abstractions. For example, the StoragePowerActor and the StorageMarketActor were previously a single Actor. This caused a large coupling of functionality across StorageDeal making, the StorageMarket, and markets in general, with Storage Mining, Sector Sealing, PoSt Generation, and more. Detangling these two sets of related functionality required breaking apart the one actor into two.
Decomposing within a System
Systems themselves decompose into smaller subunits. These are sometimes called “subsystems” to avoid confusion with the much larger, first-class Systems. Subsystems themselves may break down further. The naming here is not strictly enforced, as these subdivisions are more related to protocol and implementation engineering concerns than to user capabilities.
Implementing Systems
System Requirements
In order to make it easier to decouple functionality into systems, the Filecoin Protocol assumes a set of functionality available to all systems. This functionality can be achieved by implementations in a variety of ways; implementations should take the guidance here as a recommendation (SHOULD).
All Systems, as defined in this document, require the following:
- Repository:
  - Local IpldStore. Some amount of persistent local storage for data structures (small structured objects). Systems expect to be initialized with an IpldStore in which to store data structures they expect to persist across crashes.
  - User Configuration Values. A small amount of user-editable configuration values. These should be easy for end-users to access, view, and edit.
  - Local, Secure KeyStore. A facility to generate and use cryptographic keys, which MUST remain secret to the Filecoin Node. Systems SHOULD NOT access the keys directly, and should do so over an abstraction (i.e., the KeyStore) which provides the ability to Encrypt, Decrypt, Sign, SigVerify, and more.
  - Local FileStore. Some amount of persistent local storage for files (large byte arrays). Systems expect to be initialized with a FileStore in which to store large files. Some systems (like Markets) may need to store and delete large volumes of smaller files (1MB - 10GB). Other systems (like Storage Mining) may need to store and delete large volumes of large files (1GB - 1TB).
- Network. Most systems need access to the network, to be able to connect to their counterparts in other Filecoin Nodes. Systems expect to be initialized with a libp2p.Node on which they can mount their own protocols.
- Clock. Some systems need access to current network time, some with low tolerance for drift. Systems expect to be initialized with a Clock from which to tell network time. Some systems (like Blockchain) require very little clock drift, and require secure time.
For this purpose, we use the FilecoinNode
data structure, which is passed into all systems at initialization:
import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import message_pool "github.com/filecoin-project/specs/systems/filecoin_blockchain/message_pool"
type FilecoinNode struct {
Node libp2p.Node
Repository repo.Repository
FileStore filestore.FileStore
Clock clock.UTCClock
MessagePool message_pool.MessagePoolSubsystem
}
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/key_store"
import config "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/config"
type Repository struct {
Config config.Config
KeyStore key_store.KeyStore
ChainStore ipld.GraphStore
StateStore ipld.GraphStore
}
System Limitations
Further, Systems MUST abide by the following limitations:
- Random crashes. A Filecoin Node may crash at any moment. Systems must be secure and consistent through crashes. This is primarily achieved by limiting the use of persistent state, persisting such state through IPLD data structures, and through the use of initialization routines that check state, and perhaps correct errors.
- Isolation. Systems must communicate over well-defined, isolated interfaces. They must not build their critical functionality over a shared memory space. (Note: for performance, shared memory abstractions can be used to power IpldStore, FileStore, and libp2p, but the systems themselves should not require it.) This is not just an operational concern; it also significantly simplifies the protocol and makes it easier to understand, analyze, debug, and change.
- No direct access to host OS Filesystem or Disk. Systems cannot access disks directly – they do so over the FileStore and IpldStore abstractions. This is to provide a high degree of portability and flexibility for end-users, especially storage miners and clients of large amounts of data, which need to be able to easily replace how their Filecoin Nodes access local storage.
- No direct access to host OS Network stack or TCP/IP. Systems cannot access the network directly – they do so over the libp2p library. There must not be any other kind of network access. This provides a high degree of portability across platforms and network protocols, enabling Filecoin Nodes (and all their critical systems) to run in a wide variety of settings, using all kinds of protocols (eg Bluetooth, LANs, etc).
Systems
In this section we detail all the system components one by one, in increasing order of complexity and/or interdependence with other system components. The interaction of the components with each other is only briefly discussed where appropriate; the overall workflow is given in the Introduction section. In particular, in this section we discuss:
- Filecoin Nodes: the different types of nodes that participate in the Filecoin Network, as well as important parts and processes that these nodes run, such as the key store and IPLD store, as well as the network interface to libp2p.
- Files & Data: the data units of Filecoin, such as the Sectors and the Pieces.
- Virtual Machine: the subcomponents of the Filecoin VM, such as the actors, i.e., the smart contracts that run on the Filecoin Blockchain, and the State Tree.
- Blockchain: the main building blocks of the Filecoin blockchain, such as the structure of the Transaction and Block messages, the message pool, as well as how nodes synchronise the blockchain when they first join the network.
- Token: the components needed for a wallet.
- Storage Mining: the details of storage mining, storage power consensus, and how storage miners prove storage (without going into details of proofs, which are discussed later).
- Markets: the storage and retrieval markets, which are primarily processes that take place off-chain, but are very important for the smooth operation of the decentralised storage market.
Filecoin Nodes
This section provides all the details needed in order to implement any of the different Filecoin Nodes. Although node types in the Filecoin blockchain are less strictly defined than in other blockchains, there are still a few different types of nodes, each with its own features and characteristics.
In this section we also discuss issues related to storage of system files in Filecoin nodes. Note that by storage in this section we do not refer to the storage that a node is committing for mining in the network, but rather the local storage that it needs to have available for keys and IPLD data, among other things.
The network interface and how nodes connect with each other and interact using libp2p, as well as how to set the node’s clock is also discussed here.
Node Types
Node Interface
import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import message_pool "github.com/filecoin-project/specs/systems/filecoin_blockchain/message_pool"
type FilecoinNode struct {
Node libp2p.Node
Repository repo.Repository
FileStore filestore.FileStore
Clock clock.UTCClock
MessagePool message_pool.MessagePoolSubsystem
}
Examples
There are many kinds of Filecoin Nodes …
This section should contain:
- what all nodes must have, and why
- examples of using different systems
Chain Verifier Node
type ChainVerifierNode interface {
FilecoinNode
systems.Blockchain
}
Client Node
type ClientNode struct {
FilecoinNode
systems.Blockchain
markets.StorageMarketClient
markets.RetrievalMarketClient
markets.MarketOrderBook
markets.DataTransfers
}
Storage Miner Node
type StorageMinerNode interface {
FilecoinNode
systems.Blockchain
systems.Mining
markets.StorageMarketProvider
markets.MarketOrderBook
markets.DataTransfers
}
Retrieval Miner Node
type RetrievalMinerNode interface {
FilecoinNode
blockchain.Blockchain
markets.RetrievalMarketProvider
markets.MarketOrderBook
markets.DataTransfers
}
Relayer Node
type RelayerNode interface {
FilecoinNode
blockchain.MessagePool
markets.MarketOrderBook
}
Repository - Local Storage for Chain Data and Systems
The Filecoin node repository is simply an abstraction denoting the data which any functional Filecoin node needs to store locally in order to run correctly.
The repo is accessible to the node’s systems and subsystems and acts as local storage compartmentalized from the node’s FileStore (for instance). It stores the node’s keys, the IPLD data structures of stateful objects, and node configs.
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/key_store"
import config "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/config"
type Repository struct {
Config config.Config
KeyStore key_store.KeyStore
ChainStore ipld.GraphStore
StateStore ipld.GraphStore
}
Config - Local Storage for Configuration Values
Filecoin Node configuration
type ConfigKey string
type ConfigVal Bytes
type Config struct {
Get(k ConfigKey) union {c ConfigVal, e error}
Put(k ConfigKey, v ConfigVal) error
Subconfig(k ConfigKey) Config
}
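As an illustration of the Config interface above, a minimal in-memory sketch might look like the following. This is purely illustrative: a real repository would persist values to disk, and the prefix-based modeling of Subconfig is our own assumption.

```go
package main

import "fmt"

type ConfigKey string
type ConfigVal []byte

// MapConfig is a toy in-memory Config. Subconfig is modeled here by
// prefixing keys with the subconfig's name, separated by "/".
type MapConfig struct {
	prefix string
	vals   map[ConfigKey]ConfigVal
}

func NewMapConfig() *MapConfig {
	return &MapConfig{vals: make(map[ConfigKey]ConfigVal)}
}

// Get returns the value for k, or an error if the key is absent.
func (c *MapConfig) Get(k ConfigKey) (ConfigVal, error) {
	v, ok := c.vals[ConfigKey(c.prefix)+k]
	if !ok {
		return nil, fmt.Errorf("config key not found: %s", k)
	}
	return v, nil
}

// Put stores v under k.
func (c *MapConfig) Put(k ConfigKey, v ConfigVal) error {
	c.vals[ConfigKey(c.prefix)+k] = v
	return nil
}

// Subconfig returns a view of the config scoped under k.
func (c *MapConfig) Subconfig(k ConfigKey) *MapConfig {
	return &MapConfig{prefix: c.prefix + string(k) + "/", vals: c.vals}
}

func main() {
	cfg := NewMapConfig()
	cfg.Subconfig("api").Put("port", ConfigVal("1234"))
	v, _ := cfg.Get("api/port")
	fmt.Println(string(v)) // prints "1234"
}
```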
Key Store
The Key Store is a fundamental abstraction in any full Filecoin node, used to store the keypairs associated with a given miner’s address and distinct workers (should the miner choose to run multiple workers).
Node security depends in large part on keeping these keys secure. To that end we recommend keeping keys separate from any given subsystem, using a separate key store to sign requests as required by subsystems, and keeping those keys not used as part of mining in cold storage.
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import address "github.com/filecoin-project/go-address"
type KeyStore struct {
MinerAddress address.Address
OwnerKey filcrypto.VRFKeyPair
WorkerKey filcrypto.VRFKeyPair
}
Filecoin storage miners rely on three main components:
- The miner address, uniquely assigned to a given storage miner actor upon calling registerMiner() in the Storage Power Consensus Subsystem. It is a unique identifier for a given storage miner to which its power and other keys will be associated.
- The owner keypair, provided by the miner ahead of registration, with its public key associated with the miner address. Block rewards and other payments are made to the ownerAddress.
- The worker keypair, which can be chosen and changed by the miner, with its public key associated with the miner address. It is used to sign transactions, signatures, etc. It must be a BLS keypair given its use as part of the Verifiable Random Function.
While miner addresses are unique, multiple storage miner actors can share an owner public key or likewise a worker public key.
The process for changing the worker keypairs on-chain (i.e. the workerKey associated with a storage miner) is specified in Storage Miner Actor. Note that this is a two-step process. First a miner stages a change by sending a message to the chain. When received, the key change is staged to occur in twice the randomness lookback parameter number of epochs, to prevent adaptive key selection attacks. Every time a worker key is queried, a pending change is lazily checked and state is potentially updated as needed.
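The two-step, delayed key change described above can be sketched as simple epoch arithmetic. The lookback value and all names below are illustrative assumptions, not the protocol's actual parameters:

```go
package main

import "fmt"

type ChainEpoch int64

// RandomnessLookback is an illustrative placeholder; the real value is
// a protocol parameter.
const RandomnessLookback = ChainEpoch(10)

// PendingKeyChange stages a worker key to become effective at a future epoch.
type PendingKeyChange struct {
	NewKey      string
	EffectiveAt ChainEpoch
}

type MinerKeyState struct {
	WorkerKey string
	Pending   *PendingKeyChange
}

// StageWorkerKeyChange records a change effective 2x the randomness
// lookback in the future, preventing adaptive key selection attacks.
func (s *MinerKeyState) StageWorkerKeyChange(newKey string, now ChainEpoch) {
	s.Pending = &PendingKeyChange{NewKey: newKey, EffectiveAt: now + 2*RandomnessLookback}
}

// WorkerKeyAt lazily applies a pending change on query, as the spec
// describes, then returns the current worker key.
func (s *MinerKeyState) WorkerKeyAt(now ChainEpoch) string {
	if s.Pending != nil && now >= s.Pending.EffectiveAt {
		s.WorkerKey = s.Pending.NewKey
		s.Pending = nil
	}
	return s.WorkerKey
}

func main() {
	st := &MinerKeyState{WorkerKey: "old"}
	st.StageWorkerKeyChange("new", 100)
	fmt.Println(st.WorkerKeyAt(110)) // still "old": change effective at epoch 120
	fmt.Println(st.WorkerKeyAt(120)) // now "new"
}
```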
TODO:
- potential recommendations or clear disclaimers with regard to consequences of failed key security
IPLD Store - Local Storage for hash-linked data
// imported as ipld.Object
import cid "github.com/ipfs/go-cid"
type Object interface {
CID() cid.Cid
// Populate(v interface{}) error
}
type GraphStore struct {
// Retrieves a serialized value from the store by CID. Returns the value and whether it was found.
Get(c cid.Cid) (util.Bytes, bool)
// Puts a serialized value in the store, returning the CID.
Put(value util.Bytes) (c cid.Cid)
}
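To illustrate the Get/Put contract above, here is a toy in-memory content-addressed store. It uses a SHA-256 hex digest as a stand-in for a real CID, which is an assumption for brevity; real stores key blocks by CID:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// MemGraphStore is a toy content-addressed block store: values are
// keyed by a hash of their contents, so the key doubles as a proof of
// what it refers to.
type MemGraphStore struct {
	blocks map[string][]byte
}

func NewMemGraphStore() *MemGraphStore {
	return &MemGraphStore{blocks: make(map[string][]byte)}
}

// Put stores a serialized value and returns its content address.
func (s *MemGraphStore) Put(value []byte) string {
	sum := sha256.Sum256(value)
	c := hex.EncodeToString(sum[:])
	s.blocks[c] = value
	return c
}

// Get retrieves a serialized value by content address, and reports
// whether it was found.
func (s *MemGraphStore) Get(c string) ([]byte, bool) {
	v, ok := s.blocks[c]
	return v, ok
}

func main() {
	st := NewMemGraphStore()
	c := st.Put([]byte("hello ipld"))
	v, found := st.Get(c)
	fmt.Println(found, string(v)) // true hello ipld
}
```

The key property, mirrored from real IPLD stores, is that the address is derived from the value: two identical values always land at the same address, and a value can be verified against the address it was fetched by.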
IPLD is a set of libraries which allow for the interoperability of content-addressed data structures across different distributed systems. It provides a fundamental ‘common language’ built on primitive cryptographic hashing, enabling data to be verifiably referenced and retrieved between two independent protocols. For example, a user can reference a git commit in a blockchain transaction to create an immutable copy and timestamp, or data from a DHT can be referenced and linked to in a smart contract.
The Data Model
At its core, IPLD defines a Data Model for representing data. The Data Model is designed for practical implementation across a wide variety of programming languages, while maintaining usability for content-addressed data and a broad range of generalized tools that interact with that data.
The Data Model includes a range of standard primitive types (or “kinds”), such as booleans, integers, strings, nulls and byte arrays, as well as two recursive types: lists and maps. Because IPLD is designed for content-addressed data, it also includes a “link” primitive in its Data Model. In practice, links use the CID specification. IPLD data is organized into “blocks”, where a block is represented by the raw, encoded data and its content-address, or CID. Every content-addressable chunk of data can be represented as a block, and together, blocks can form a coherent graph, or Merkle DAG.
Applications interact with IPLD via the Data Model, and IPLD handles marshalling and unmarshalling via a suite of codecs. IPLD codecs may support the complete Data Model or part of the Data Model. Two codecs that support the complete Data Model are DAG-CBOR and DAG-JSON. These codecs are respectively based on the CBOR and JSON serialization formats, but include formalizations that allow them to encapsulate the IPLD Data Model (including its link type) and additional rules that create a strict mapping between any set of data and its respective content address (or hash digest). These rules include mandating a particular ordering of keys when encoding maps, or the sizing of integer types when stored.
IPLD in Filecoin
On the Filecoin network, IPLD is used in two ways:
- All system datastructures are stored in IPLD format, a data format akin to JSON but designed for storage, retrieval and traversal of hash-linked data DAGs.
- Files and data stored on the Filecoin network may also be stored in IPLD format. While this is not required, it offers the benefit of supporting selectors to retrieve a smaller subset of the total stored data, as opposed to inefficiently downloading the data set entirely.
IPLD provides a consistent and coherent abstraction above data that allows Filecoin to build and interact with complex, multi-block data structures, such as HAMT and Sharray. Filecoin uses the DAG-CBOR codec for the serialization and deserialization of its data structures and interacts with that data using the IPLD Data Model, upon which various tools are built. IPLD Paths are also used to address specific nodes within a linked data structure.
IpldStores
The Filecoin network relies primarily on two distinct IPLD GraphStores:
- One ChainStore which stores the blockchain, including block headers, associated messages, etc.
- One StateStore which stores the payload state from a given blockchain, or the stateTree resulting from all block messages in a given chain being applied to the genesis state by the Filecoin VM.
The ChainStore is downloaded by a node from its peers during the bootstrapping phase of Chain Sync and is stored by the node thereafter. It is updated on every new block reception, or if the node syncs to a new best chain.
The StateStore is computed through the execution of all block messages in a given ChainStore and is stored by the node thereafter. It is updated with every new incoming block’s processing by the VM Interpreter, and referenced accordingly by new blocks produced atop it in the block header’s ParentState field.
Interface
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
type Node libp2p.Node
Filecoin nodes use the libp2p protocol for peer discovery, peer routing, message multicast, and so on. Libp2p is a set of modular protocols common to the peer-to-peer networking stack. Nodes open connections with one another and mount different protocols or streams over the same connection. In the initial handshake, nodes exchange the protocols that each of them supports, and all Filecoin related protocols will be mounted under /fil/... protocol identifiers.
Here is the list of libp2p protocols used by Filecoin.
- Graphsync:
  - Graphsync is used to transfer blockchain and user data
  - Draft spec
  - No Filecoin specific modifications to the protocol id
- Gossipsub:
  - Block headers and messages are broadcast through a Gossip PubSub protocol where nodes can subscribe to topics for blockchain data and receive messages in those topics. When receiving messages related to a topic, nodes process the message and forward it to peers who also subscribed to the same topic.
  - Spec is here
  - No Filecoin specific modifications to the protocol id. However, the topic identifiers MUST be of the form fil/blocks/<network-name> and fil/msgs/<network-name>
- KademliaDHT:
  - Kademlia DHT is a distributed hash table with a logarithmic bound on the maximum number of lookups for a particular node. Kad DHT is used primarily for peer routing as well as peer discovery in the Filecoin protocol.
  - Spec TODO reference implementation
  - The protocol id must be of the form fil/<network-name>/kad/1.0.0
- Bootstrap List:
  - Bootstrap is a list of nodes that a new node attempts to connect to upon joining the network. The list of bootstrap nodes and their addresses are defined by the users.
- Peer Exchange:
  - Peer Exchange is a discovery protocol enabling peers to create and issue queries for desired peers against their existing peers
  - Spec TODO
  - No Filecoin specific modifications to the protocol id.
- DNSDiscovery: Design and spec needed before implementing
- HTTPDiscovery: Design and spec needed before implementing
- Hello:
  - Hello protocol handles new connections to Filecoin nodes to facilitate discovery
  - The protocol string is fil/hello/1.0.0.
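The network-name-scoped identifiers required above can be derived mechanically; the helper names in this sketch are our own, but the format strings follow the rules stated for Gossipsub topics and the Kademlia protocol id:

```go
package main

import "fmt"

// BlocksTopic returns the Gossipsub topic for block headers on a network.
func BlocksTopic(network string) string { return fmt.Sprintf("fil/blocks/%s", network) }

// MsgsTopic returns the Gossipsub topic for messages on a network.
func MsgsTopic(network string) string { return fmt.Sprintf("fil/msgs/%s", network) }

// KadProtocolID returns the Kademlia DHT protocol id for a network.
func KadProtocolID(network string) string { return fmt.Sprintf("fil/%s/kad/1.0.0", network) }

func main() {
	fmt.Println(BlocksTopic("testnet"))   // fil/blocks/testnet
	fmt.Println(MsgsTopic("testnet"))     // fil/msgs/testnet
	fmt.Println(KadProtocolID("testnet")) // fil/testnet/kad/1.0.0
}
```

Scoping identifiers by network name keeps traffic from different Filecoin networks (e.g. a testnet and mainnet) from mixing on shared libp2p infrastructure.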
Hello Spec
Protocol Flow
fil/hello is a Filecoin specific protocol built on the libp2p stack. It consists of two conceptual procedures: hello_connect and hello_listen.
hello_listen: on new stream -> read peer hello msg from stream -> write latency message to stream -> close stream
hello_connect: on connected -> open stream -> write own hello msg to stream -> read peer latency msg from stream -> close stream
where stream and connection operations are all standard libp2p operations. Nodes running the Hello Protocol should consume the incoming Hello Message and use it to help manage peers and sync the chain.
Messages
import cid "github.com/ipfs/go-cid"
// HelloMessage shares information about a peer's chain head
type HelloMessage struct {
HeaviestTipSet [cid.Cid]
HeaviestTipSetWeight BigInt
HeaviestTipSetHeight Int
GenesisHash cid.Cid
}
// LatencyMessage shares information about a peer's network latency
type LatencyMessage struct {
// Measured in unix nanoseconds
TArrival Int
// Measured in unix nanoseconds
TSent Int
}
When writing the HelloMessage to the stream, the peer must inspect its current head to provide accurate information. When writing the LatencyMessage to the stream, the peer should set TArrival immediately upon receipt and TSent immediately before writing the message to the stream.
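One way a dialing peer might use the TArrival and TSent fields is to estimate round-trip latency net of the listener's processing time. This is a sketch of a plausible use, not a normative rule; note that remote and local clocks are never compared directly, only differences within each clock:

```go
package main

import "fmt"

// EstimateRTT estimates network round-trip time in nanoseconds.
// localSend/localRecv come from the dialer's clock; peerTArrival/peerTSent
// come from the listener's LatencyMessage (its own clock). Subtracting the
// listener's processing time (TSent - TArrival) from the locally observed
// round trip leaves just the time spent on the wire.
func EstimateRTT(localSend, localRecv, peerTArrival, peerTSent int64) int64 {
	roundTrip := localRecv - localSend
	processing := peerTSent - peerTArrival
	return roundTrip - processing
}

func main() {
	// Local clock: hello sent at 1000, latency reply received at 1900.
	// Peer clock: hello arrived at 5000, reply sent at 5300.
	fmt.Println(EstimateRTT(1000, 1900, 5000, 5300)) // 900 - 300 = 600
}
```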
Clock
type UnixTime int64 // unix timestamp
// UTCClock is a normal, system clock reporting UTC time.
// It should be kept in sync, with drift less than 1 second.
type UTCClock struct {
NowUTCUnix() UnixTime
}
// ChainEpoch represents a round of a blockchain protocol.
type ChainEpoch int64
// ChainEpochClock is a clock that represents epochs of the protocol.
type ChainEpochClock struct {
// GenesisTime is the time of the first block. EpochClock counts
// up from there.
GenesisTime UnixTime
EpochAtTime(t UnixTime) ChainEpoch
}
package clock
import "time"
// UTCSyncPeriod notes how often to sync the UTC clock with an authoritative
// source, such as NTP, or a very precise hardware clock.
var UTCSyncPeriod = time.Hour
// EpochDuration is a constant that represents the duration in seconds
// of a blockchain epoch.
var EpochDuration = UnixTime(15)
func (_ *UTCClock_I) NowUTCUnix() UnixTime {
return UnixTime(time.Now().Unix())
}
// EpochAtTime returns the ChainEpoch corresponding to time `t`.
// It first subtracts GenesisTime, then divides by EpochDuration
// and returns the resulting number of epochs.
func (c *ChainEpochClock_I) EpochAtTime(t UnixTime) ChainEpoch {
difference := t - c.GenesisTime()
epochs := difference / EpochDuration
return ChainEpoch(epochs)
}
Filecoin assumes weak clock synchrony amongst participants in the system. That is, the system relies on participants having access to a globally synchronized clock (tolerating some bounded drift).
Filecoin relies on this system clock in order to secure consensus. Specifically, the clock is necessary to support validation rules that prevent block producers from mining blocks with a future timestamp, and from running leader elections more frequently than the protocol allows.
Clock uses
The Filecoin system clock is used:
- by syncing nodes to validate that incoming blocks were mined in the appropriate epoch given their timestamp (see Block Validation). This is possible because the system clock maps all times to a unique epoch number totally determined by the start time in the genesis block.
- by syncing nodes to drop blocks coming from a future epoch
- by mining nodes to maintain protocol liveness by allowing participants to try leader election in the next round if no one has produced a block in the current round (see Storage Power Consensus).
In order to allow miners to do the above, the system clock must:
- Have low enough clock drift (sub 1s) relative to other nodes so that blocks are not mined in epochs considered future epochs from the perspective of other nodes (those blocks should not be validated until the proper epoch/time as per validation rules).
- Set epoch number on node initialization equal to
epoch = Floor[(current_time - genesis_time) / epoch_time]
It is expected that other subsystems will register to a NewRound() event from the clock subsystem.
Clock Requirements
Clocks used as part of the Filecoin protocol should be kept in sync, with drift less than 1 second so as to enable appropriate validation.
Computer-grade clock crystals can be expected to have drift rates on the order of 1ppm (i.e. 1 microsecond per second, or about 0.6 seconds per week). Therefore, in order to respect the above requirement:
- clients SHOULD query an NTP server on an hourly basis to adjust clock skew. We recommend one of the following:
  - pool.ntp.org (can be catered to a specific zone)
  - time.cloudflare.com:1234 (more on Cloudflare time services)
  - time.google.com (more on Google Public NTP)
  - ntp-b.nist.gov (NIST servers require registration)
- We further recommend making 3 measurements and using them to drop network-induced outliers
  - See how go-ethereum does this for inspiration
- clients MAY consider using cesium clocks instead for accurate synchrony within larger mining operations
Mining operations have a strong incentive to prevent their clock from drifting ahead more than one epoch to keep their block submissions from being rejected. Likewise they have an incentive to prevent their clocks from drifting behind more than one epoch to avoid partitioning themselves off from the synchronized nodes in the network.
Future work
If either of the above metrics show significant network skew over time, future versions of Filecoin may include potential timestamp/epoch correction periods at regular intervals.
When recovering from exceptional chain-halting outages (for example, all implementations panic on a given block), the network can potentially opt for per-outage “dead zone” rules banning the authoring of blocks during the outage epochs to prevent attack vectors related to unmined epochs during chain restart.
Future versions of the Filecoin protocol may use Verifiable Delay Functions (VDFs) to strongly enforce block time and fulfill this leader election requirement; we choose to explicitly assume clock synchrony until hardware VDF security has been proven more extensively.
Files & Data
Filecoin’s primary aim is to store clients’ files and data.
This section details data structures and tooling related to working with files, chunking, encoding, graph representations, Pieces, storage abstractions, and more.
File
// Path is an opaque locator for a file (e.g. in a unix-style filesystem).
type Path string
// File is a variable length data container.
// The File interface is modeled after a unix-style file, but abstracts the
// underlying storage system.
type File interface {
Path() Path
Size() int
Close() error
// Read reads from File into buf, starting at offset, and for size bytes.
Read(offset int, size int, buf Bytes) struct {size int, e error}
// Write writes from buf into File, starting at offset, and for size bytes.
Write(offset int, size int, buf Bytes) struct {size int, e error}
}
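A minimal in-memory realization of this interface might look as follows. This is an illustrative sketch with Go-idiomatic signatures, not the normative API; a production implementation would typically wrap os.File instead:

```go
package main

import "fmt"

// memFile is a toy in-memory file satisfying the spirit of the File interface.
type memFile struct {
	path string
	data []byte
}

func (f *memFile) Path() string { return f.path }
func (f *memFile) Size() int    { return len(f.data) }
func (f *memFile) Close() error { return nil }

// Read copies up to size bytes starting at offset into buf, returning the
// number of bytes copied.
func (f *memFile) Read(offset, size int, buf []byte) (int, error) {
	if offset < 0 || offset > len(f.data) {
		return 0, fmt.Errorf("offset %d out of range", offset)
	}
	end := offset + size
	if end > len(f.data) {
		end = len(f.data)
	}
	return copy(buf, f.data[offset:end]), nil
}

// Write copies size bytes from buf into the file at offset, growing the
// backing slice if the write extends past the current end.
func (f *memFile) Write(offset, size int, buf []byte) (int, error) {
	if offset < 0 {
		return 0, fmt.Errorf("offset %d out of range", offset)
	}
	if need := offset + size; need > len(f.data) {
		grown := make([]byte, need)
		copy(grown, f.data)
		f.data = grown
	}
	return copy(f.data[offset:offset+size], buf[:size]), nil
}

func main() {
	f := &memFile{path: "/tmp/example"}
	f.Write(0, 5, []byte("hello"))
	buf := make([]byte, 3)
	n, _ := f.Read(1, 3, buf)
	fmt.Println(n, string(buf)) // 3 ell
}
```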
FileStore - Local Storage for Files
The FileStore is an abstraction used to refer to any underlying system or device that Filecoin will store its data to. It is based on Unix filesystem semantics, and includes the notion of Paths. This abstraction is here in order to make sure Filecoin implementations make it easy for end-users to replace the underlying storage system with whatever suits their needs. The simplest version of FileStore is just the host operating system’s file system.
// FileStore is an object that can store and retrieve files by path.
type FileStore struct {
Open(p Path) union {f File, e error}
Create(p Path) union {f File, e error}
Store(p Path, f File) error
Delete(p Path) error
// maybe add:
// Copy(SrcPath, DstPath)
}
Varying user needs
Filecoin user needs vary significantly, and many users – especially miners – will implement complex storage architectures underneath and around Filecoin. The FileStore abstraction is here to make these varying needs easy to satisfy. All file and sector local data storage in the Filecoin Protocol is defined in terms of this FileStore interface, which makes implementations easy to swap, and lets end-users substitute their storage system of choice.
Implementation examples
The FileStore interface may be implemented by many kinds of backing data storage systems. For example:
- The host Operating System file system
- Any Unix/Posix file system
- RAID-backed file systems
- Networked or distributed file systems (NFS, HDFS, etc)
- IPFS
- Databases
- NAS systems
- Raw serial or block devices
- Raw hard drives (hdd sectors, etc)
Implementations SHOULD implement support for the host OS file system. Implementations MAY implement support for other storage systems.
Piece - Part of a file
A Piece is an object that represents a whole or part of a File, and is used by Clients and Miners in Deals. Clients hire Miners to store Pieces.
The piece data structure is designed for proving storage of arbitrary IPLD graphs and client data. This diagram shows the detailed composition of a piece and its proving tree, including both full and bandwidth-optimized piece data structures.
import abi "github.com/filecoin-project/specs-actors/actors/abi"
// PieceInfo is an object that describes details about a piece, and allows
// decoupling storage of this information from the piece itself.
type PieceInfo struct {
ID PieceID
Size abi.PieceSize
// TODO: store which algorithms were used to construct this piece.
}
// Piece represents the basic unit of tradeable data in Filecoin. Clients
// break files and data up into Pieces, maybe apply some transformations,
// and then hire Miners to store the Pieces.
//
// The kinds of transformations that may occur include erasure coding,
// encryption, and more.
//
// Note: pieces are well formed.
type Piece struct {
Info PieceInfo
// tree is the internal representation of Piece. It is a tree
// formed according to a sequence of algorithms, which make the
// piece able to be verified.
tree PieceTree
// Payload is the user's data.
Payload() Bytes
// Data returns the serialized representation of the Piece.
// It includes the payload data, and intermediate tree objects,
// formed according to relevant storage algorithms.
Data() Bytes
}
// // LocalPieceRef is an object used to refer to pieces in local storage.
// // This is used by subsystems to store and locate pieces.
// type LocalPieceRef struct {
// ID PieceID
// Path file.Path
// }
// PieceTree is a data structure used to form pieces. The algorithms involved
// in the storage proofs determine the shape of PieceTree and how it must be
// constructed.
//
// Usually, a node in PieceTree will include either Children or Data, but not
// both.
//
// TODO: move this into filproofs -- use a tree from there, as that's where
// the algorithms are defined. Or keep this as an interface, met by others.
type PieceTree struct {
Children [PieceTree]
Data Bytes
}
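To illustrate the relationship between a piece's Payload and its tree, here is a hypothetical sketch that recovers the payload bytes by concatenating leaf data depth-first. The actual traversal and node contents are determined by the proof algorithms, so this is purely illustrative:

```go
package main

import "fmt"

// PieceTree mirrors the structure above: an interior node has Children,
// a leaf carries Data.
type PieceTree struct {
	Children []PieceTree
	Data     []byte
}

// Payload concatenates leaf data in depth-first order, one plausible way to
// recover the user's bytes from a proving tree.
func (t PieceTree) Payload() []byte {
	if len(t.Children) == 0 {
		return t.Data
	}
	var out []byte
	for _, c := range t.Children {
		out = append(out, c.Payload()...)
	}
	return out
}

func main() {
	tree := PieceTree{Children: []PieceTree{
		{Data: []byte("foo")},
		{Children: []PieceTree{{Data: []byte("bar")}}},
	}}
	fmt.Println(string(tree.Payload())) // foobar
}
```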
PieceStore - Storing and indexing pieces
A PieceStore is an object that can store and retrieve pieces from some local storage. The PieceStore additionally keeps an index of pieces.
import ipld "github.com/filecoin-project/specs/libraries/ipld"
type PieceID UVarint
// PieceStore is an object that stores pieces into some local storage.
// it is internally backed by an IpldStore.
type PieceStore struct {
Store ipld.GraphStore
Index {PieceID: Piece}
Get(i PieceID) struct {p Piece, e error}
Put(p Piece) error
Delete(i PieceID) error
}
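A minimal map-backed sketch of the PieceStore operations follows. It is purely illustrative, with a simplified Piece carrying raw bytes; as noted above, a real implementation would be backed by an IpldStore:

```go
package main

import (
	"errors"
	"fmt"
)

type PieceID uint64

// Piece here is a simplified stand-in for the spec's Piece type.
type Piece struct {
	ID   PieceID
	Data []byte
}

// pieceStore keeps an in-memory index of pieces by ID.
type pieceStore struct {
	index map[PieceID]Piece
}

func newPieceStore() *pieceStore {
	return &pieceStore{index: map[PieceID]Piece{}}
}

func (s *pieceStore) Put(p Piece) error {
	s.index[p.ID] = p
	return nil
}

func (s *pieceStore) Get(id PieceID) (Piece, error) {
	p, ok := s.index[id]
	if !ok {
		return Piece{}, errors.New("piece not found")
	}
	return p, nil
}

func (s *pieceStore) Delete(id PieceID) error {
	delete(s.index, id)
	return nil
}

func main() {
	s := newPieceStore()
	s.Put(Piece{ID: 1, Data: []byte("piece data")})
	p, _ := s.Get(1)
	fmt.Println(string(p.Data)) // piece data
}
```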
Data Transfer in Filecoin
The Data Transfer Protocol is a protocol for transferring all or part of a Piece
across the network when a deal is made. The overall goal for the data transfer module is for it to be an abstraction of the underlying transport medium over which data is transferred between different parties in the Filecoin network. Currently, the underlying medium or protocol used to actually do the data transfer is GraphSync. As such, the Data Transfer Protocol can be thought of as a negotiation protocol.
The Data Transfer Protocol is used both for Storage and for Retrieval Deals. In both cases, the data transfer request is initiated by the client. The primary reason for this is that clients will more often than not be behind NATs and therefore it is more convenient to start any data transfer from their side. In the case of Storage Deals the data transfer request is initiated as a push request to send data to the storage provider. In the case of Retrieval Deals the data transfer request is initiated as a pull request to retrieve data from the storage provider.
The request to initiate a data transfer includes a voucher or token (not to be confused with the Payment Channel voucher) that points to a specific deal that the two parties have agreed to before. This is so that the storage provider can identify and link the request to a deal it has agreed to and not disregard the request. As described below, the case might be slightly different for retrieval deals, where both a deal proposal and a data transfer request can be sent at once.
Modules
This diagram shows how Data Transfer and its modules fit into the picture with the Storage and Retrieval Markets. In particular, note how the Data Transfer Request Validators from the markets are plugged into the Data Transfer module, but their code belongs in the Markets system.
Terminology
- Push Request: A request to send data to the other party - normally initiated by the client and primarily in case of a Storage Deal.
- Pull Request: A request to have the other party send data - normally initiated by the client and primarily in case of a Retrieval Deal.
- Requestor: The party that initiates the data transfer request (whether Push or Pull) - normally the client, at least as currently implemented in Filecoin, to overcome NAT-traversal problems.
- Responder: The party that receives the data transfer request - normally the storage provider.
- Data Transfer Voucher or Token: A wrapper around storage- or retrieval-related data that can identify and validate the transfer request to the other party.
- Request Validator: The data transfer module only initiates a transfer when the responder can validate that the request is tied directly to either an existing storage or retrieval deal. Validation is not performed by the data transfer module itself. Instead, a request validator inspects the data transfer voucher to determine whether to respond to the request or disregard the request.
- Transporter: Once a request is negotiated and validated, the actual transfer is managed by a transporter on both sides. The transporter is part of the data transfer module but is isolated from the negotiation process. It has access to an underlying verifiable transport protocol and uses it to send data and track progress.
- Subscriber: An external component that monitors progress of a data transfer by subscribing to data transfer events, such as progress or completion.
- GraphSync: The default underlying transport protocol used by the Transporter. The full graphsync specification can be found here
Request Phases
There are two basic phases to any data transfer:
- Negotiation: the requestor and responder agree to the transfer by validating it with the data transfer voucher.
- Transfer: once the negotiation phase is complete, the data is actually transferred. The default protocol used to do the transfer is Graphsync.
Note that the Negotiation and Transfer stages can occur in separate round trips,
or potentially the same round trip, where the requesting party implicitly agrees by sending the request, and the responding party can agree and immediately send or receive data. Whether the process is taking place in a single or multiple round-trips depends in part on whether the request is a push request (storage deal) or a pull request (retrieval deal), and on whether the data transfer negotiation process is able to piggy back on the underlying transport mechanism.
In the case of GraphSync as the transport mechanism, data transfer requests can piggyback as an extension to the GraphSync protocol using GraphSync’s built-in extensibility. So, only a single round trip is required for Pull Requests. However, because GraphSync is a request/response protocol with no direct support for push-type requests, in the Push case negotiation happens in a separate request over data transfer’s own libp2p protocol /fil/datatransfer/1.0.0. Other future transport mechanisms might handle both Push and Pull, either one, or neither as a single round trip.
Upon receiving a data transfer request, the data transfer module decodes the voucher and delivers it to the request validators. In storage deals, the request validator checks whether the included deal is one that the recipient has agreed to before. For retrieval deals the request includes the proposal for the retrieval deal itself. As long as the request validator accepts the deal proposal, everything is done at once as a single round trip.
It is worth noting that in the case of retrieval the provider can accept the deal and the data transfer request, but then pause the retrieval itself in order to carry out the unsealing process. The storage provider has to unseal all of the requested data before initiating the actual data transfer. Furthermore, the storage provider has the option of pausing the retrieval flow before starting the unsealing process in order to issue an unsealing payment request. Storage providers have the option to request this payment in order to cover unsealing computation costs and avoid falling victim to misbehaving clients.
Example Flows
Push Flow
- A requestor initiates a Push transfer when it wants to send data to another party.
- The requestor’s data transfer module will send a push request to the responder along with the data transfer voucher.
- The responder’s data transfer module validates the data transfer request via the Validator provided as a dependency by the responder.
- The responder’s data transfer module initiates the transfer by making a GraphSync request.
- The requestor receives the GraphSync request, verifies that it recognises the data transfer and begins sending data.
- The responder receives data and can produce an indication of progress.
- The responder completes receiving data, and notifies any listeners.
The push flow is ideal for storage deals, where the client initiates the data transfer straightaway once the provider indicates their intent to accept and publish the client’s deal proposal.
Pull Flow - Single Round Trip
- A requestor initiates a Pull transfer when it wants to receive data from another party.
- The requestor’s data transfer module initiates the transfer by making a pull request embedded in the GraphSync request to the responder. The request includes the data transfer voucher.
- The responder receives the GraphSync request, and forwards the data transfer request to the data transfer module.
- The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder.
- The responder accepts the GraphSync request and sends the accepted response along with the data transfer level acceptance response.
- The requestor receives data and can produce an indication of progress. This step comes later in time, after the storage provider has finished unsealing the data.
- The requestor completes receiving data, and notifies any listeners.
Protocol
A data transfer CAN be negotiated over the network via the Data Transfer Protocol, a libp2p protocol type.
Using the Data Transfer Protocol as an independent libp2p communication mechanism is not a hard requirement – as long as both parties have an implementation of the Data Transfer Subsystem that can talk to the other, any transport mechanism (including offline mechanisms) is acceptable.
Data Structures
Data Transfer Types
package datatransfer
import (
"fmt"
"github.com/ipfs/go-cid"
"github.com/ipld/go-ipld-prime"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/filecoin-project/go-data-transfer/encoding"
)
type errorString string
func (es errorString) Error() string {
return string(es)
}
//go:generate cbor-gen-for ChannelID
// TypeIdentifier is a unique string identifier for a type of encodable object in a
// registry
type TypeIdentifier string
// EmptyTypeIdentifier means there is no voucher present
const EmptyTypeIdentifier = TypeIdentifier("")
// Registerable is a type of object in a registry. It must be encodable and must
// have a single method that uniquely identifies its type
type Registerable interface {
encoding.Encodable
// Type is a unique string identifier for this voucher type
Type() TypeIdentifier
}
// Voucher is used to validate
// a data transfer request against the underlying storage or retrieval deal
// that precipitated it. The only requirement is a voucher can read and write
// from bytes, and has a string identifier type
type Voucher Registerable
// VoucherResult is used to provide optional additional information about a
// voucher being rejected or accepted
type VoucherResult Registerable
// TransferID is an identifier for a data transfer, shared between
// requester/responder and unique to the requester
type TransferID uint64
// ChannelID is a unique identifier for a channel, distinct by both the other
// party's peer ID + the transfer ID
type ChannelID struct {
Initiator peer.ID
Responder peer.ID
ID TransferID
}
func (c ChannelID) String() string {
return fmt.Sprintf("%s-%s-%d", c.Initiator, c.Responder, c.ID)
}
// OtherParty returns the peer on the other side of the request, depending
// on whether this peer is the initiator or responder
func (c ChannelID) OtherParty(thisPeer peer.ID) peer.ID {
if thisPeer == c.Initiator {
return c.Responder
}
return c.Initiator
}
// Channel represents all the parameters for a single data transfer
type Channel interface {
// TransferID returns the transfer id for this channel
TransferID() TransferID
// BaseCID returns the CID that is at the root of this data transfer
BaseCID() cid.Cid
// Selector returns the IPLD selector for this data transfer (represented as
// an IPLD node)
Selector() ipld.Node
// Voucher returns the voucher for this data transfer
Voucher() Voucher
// Sender returns the peer id for the node that is sending data
Sender() peer.ID
// Recipient returns the peer id for the node that is receiving data
Recipient() peer.ID
// TotalSize returns the total size for the data being transferred
TotalSize() uint64
// IsPull returns whether this is a pull request
IsPull() bool
// ChannelID returns the ChannelID for this request
ChannelID() ChannelID
// OtherParty returns the opposite party in the channel to the passed in party
OtherParty(thisParty peer.ID) peer.ID
}
// ChannelState is channel parameters plus its current state
type ChannelState interface {
Channel
// Status is the current status of this channel
Status() Status
// Sent returns the number of bytes sent
Sent() uint64
// Received returns the number of bytes received
Received() uint64
// Message offers additional information about the current status
Message() string
// Vouchers returns all vouchers sent on this channel
Vouchers() []Voucher
// VoucherResults are results of vouchers sent on the channel
VoucherResults() []VoucherResult
// LastVoucher returns the last voucher sent on the channel
LastVoucher() Voucher
// LastVoucherResult returns the last voucher result sent on the channel
LastVoucherResult() VoucherResult
}
Data Transfer Statuses
package datatransfer
// Status is the status of transfer for a given channel
type Status uint64
const (
// Requested means a data transfer was requested but has not yet been approved
Requested Status = iota
// Ongoing means the data transfer is in progress
Ongoing
// TransferFinished indicates the initiator is done sending/receiving
// data but is awaiting confirmation from the responder
TransferFinished
// ResponderCompleted indicates the initiator received a message from the
// responder that it's completed
ResponderCompleted
// Finalizing means the responder is awaiting a final message from the initiator to
// consider the transfer done
Finalizing
// Completing just means we have some final cleanup for a completed request
Completing
// Completed means the data transfer is completed successfully
Completed
// Failing just means we have some final cleanup for a failed request
Failing
// Failed means the data transfer failed
Failed
// Cancelling just means we have some final cleanup for a cancelled request
Cancelling
// Cancelled means the data transfer ended prematurely
Cancelled
// InitiatorPaused means the data sender has paused the channel (only the sender can unpause this)
InitiatorPaused
// ResponderPaused means the data receiver has paused the channel (only the receiver can unpause this)
ResponderPaused
// BothPaused means both sender and receiver have paused the channel separately (both must unpause)
BothPaused
// ResponderFinalizing is a unique state where the responder is awaiting a final voucher
ResponderFinalizing
// ResponderFinalizingTransferFinished is a unique state where the responder is awaiting a final voucher
// and we have received all data
ResponderFinalizingTransferFinished
// ChannelNotFoundError means the searched for data transfer does not exist
ChannelNotFoundError
)
// Statuses are human readable names for data transfer states
var Statuses = map[Status]string{
// Requested means a data transfer was requested but has not yet been approved
Requested: "Requested",
Ongoing: "Ongoing",
TransferFinished: "TransferFinished",
ResponderCompleted: "ResponderCompleted",
Finalizing: "Finalizing",
Completing: "Completing",
Completed: "Completed",
Failing: "Failing",
Failed: "Failed",
Cancelling: "Cancelling",
Cancelled: "Cancelled",
InitiatorPaused: "InitiatorPaused",
ResponderPaused: "ResponderPaused",
BothPaused: "BothPaused",
ResponderFinalizing: "ResponderFinalizing",
ResponderFinalizingTransferFinished: "ResponderFinalizingTransferFinished",
ChannelNotFoundError: "ChannelNotFoundError",
}
Data Transfer Manager
package datatransfer
import (
"context"
"github.com/ipfs/go-cid"
"github.com/ipld/go-ipld-prime"
"github.com/libp2p/go-libp2p-core/peer"
)
// RequestValidator is an interface implemented by the client of the
// data transfer module to validate requests
type RequestValidator interface {
// ValidatePush validates a push request received from the peer that will send data
ValidatePush(
sender peer.ID,
voucher Voucher,
baseCid cid.Cid,
selector ipld.Node) (VoucherResult, error)
// ValidatePull validates a pull request received from the peer that will receive data
ValidatePull(
receiver peer.ID,
voucher Voucher,
baseCid cid.Cid,
selector ipld.Node) (VoucherResult, error)
}
// Revalidator is a request validator that revalidates in-progress requests
// by requesting additional vouchers, and resuming when it receives them
type Revalidator interface {
// Revalidate revalidates a request with a new voucher
Revalidate(channelID ChannelID, voucher Voucher) (VoucherResult, error)
// OnPullDataSent is called on the responder side when more bytes are sent
// for a given pull request. It should return a VoucherResult + ErrPause to
// request revalidation or nil to continue uninterrupted,
// other errors will terminate the request
OnPullDataSent(chid ChannelID, additionalBytesSent uint64) (VoucherResult, error)
// OnPushDataReceived is called on the responder side when more bytes are received
// for a given push request. It should return a VoucherResult + ErrPause to
// request revalidation or nil to continue uninterrupted,
// other errors will terminate the request
OnPushDataReceived(chid ChannelID, additionalBytesReceived uint64) (VoucherResult, error)
// OnComplete is called to make a final request for revalidation -- often for the
// purpose of settlement.
// if VoucherResult is non nil, the request will enter a settlement phase awaiting
// a final update
OnComplete(chid ChannelID) (VoucherResult, error)
}
// TransportConfigurer provides a mechanism to provide transport specific configuration for a given voucher type
type TransportConfigurer func(chid ChannelID, voucher Voucher, transport Transport)
// Manager is the core interface presented by all implementations of
// of the data transfer sub system
type Manager interface {
// Start initializes data transfer processing
Start(ctx context.Context) error
// Stop terminates all data transfers and ends processing
Stop() error
// RegisterVoucherType registers a validator for the given voucher type
// will error if voucher type does not implement voucher
// or if there is a voucher type registered with an identical identifier
RegisterVoucherType(voucherType Voucher, validator RequestValidator) error
// RegisterRevalidator registers a revalidator for the given voucher type
// Note: this is the voucher type used to revalidate. It can share a name
// with the initial validator type and CAN be the same type, or a different type.
// The revalidator can simply be the same as the original request validator,
// or a different validator that satisfies the revalidator interface.
RegisterRevalidator(voucherType Voucher, revalidator Revalidator) error
// RegisterVoucherResultType allows deserialization of a voucher result,
// so that a listener can read the metadata
RegisterVoucherResultType(resultType VoucherResult) error
// RegisterTransportConfigurer registers the given transport configurer to be run on requests with the given voucher
// type
RegisterTransportConfigurer(voucherType Voucher, configurer TransportConfigurer) error
// open a data transfer that will send data to the recipient peer and
// transfer parts of the piece that match the selector
OpenPushDataChannel(ctx context.Context, to peer.ID, voucher Voucher, baseCid cid.Cid, selector ipld.Node) (ChannelID, error)
// open a data transfer that will request data from the sending peer and
// transfer parts of the piece that match the selector
OpenPullDataChannel(ctx context.Context, to peer.ID, voucher Voucher, baseCid cid.Cid, selector ipld.Node) (ChannelID, error)
// send an intermediate voucher as needed when the receiver sends a request for revalidation
SendVoucher(ctx context.Context, chid ChannelID, voucher Voucher) error
// close an open channel (effectively a cancel)
CloseDataTransferChannel(ctx context.Context, chid ChannelID) error
// pause a data transfer channel (only allowed if transport supports it)
PauseDataTransferChannel(ctx context.Context, chid ChannelID) error
// resume a data transfer channel (only allowed if transport supports it)
ResumeDataTransferChannel(ctx context.Context, chid ChannelID) error
// get status of a transfer
TransferChannelStatus(ctx context.Context, x ChannelID) Status
// get notified when certain types of events happen
SubscribeToEvents(subscriber Subscriber) Unsubscribe
// get all in progress transfers
InProgressChannels(ctx context.Context) (map[ChannelID]ChannelState, error)
}
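To make the RequestValidator contract above concrete, here is a deliberately simplified sketch in which a responder accepts a push request only when the voucher references a previously agreed deal. The types peerID, dealID, and storageVoucher are stand-ins for the real peer.ID / voucher types, and the return shape is simplified from (VoucherResult, error) to error, so this is an illustration of the idea rather than the actual interface:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in types so the example stays self-contained.
type peerID string
type dealID uint64

// storageVoucher is a hypothetical voucher referencing a storage deal.
type storageVoucher struct{ Deal dealID }

// dealValidator accepts a push request only if the voucher points to a deal
// this node has previously agreed to, mirroring the validator's role of tying
// transfers to existing deals.
type dealValidator struct {
	knownDeals map[dealID]bool
}

func (v dealValidator) ValidatePush(sender peerID, voucher storageVoucher) error {
	if !v.knownDeals[voucher.Deal] {
		return errors.New("voucher does not match a known deal")
	}
	return nil
}

func main() {
	v := dealValidator{knownDeals: map[dealID]bool{42: true}}
	fmt.Println(v.ValidatePush("peerA", storageVoucher{Deal: 42})) // <nil>: accepted
	fmt.Println(v.ValidatePush("peerA", storageVoucher{Deal: 7}))  // rejected with an error
}
```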
Data Formats and Serialization
Filecoin seeks to make use of as few data formats as needed, with well-specified serialization rules, to improve protocol security through simplicity and to enable interoperability amongst implementations of the Filecoin protocol.
Read more on design considerations here for CBOR-usage and here for int types in Filecoin.
Data Formats
Filecoin in-memory data types are mostly straightforward. Implementations should support two integer types: Int (meaning native 64-bit integer), and BigInt (meaning arbitrary length) and avoid dealing with floating-point numbers to minimize interoperability issues across programming languages and implementations.
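To illustrate why both integer types are needed, token amounts denominated in attoFIL (10^-18 FIL) quickly exceed the int64 range, while quantities like chain heights do not. The sketch below uses Go's math/big for the BigInt role; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"math/big"
)

// attoPerFIL is 10^18: FIL amounts are tracked at 18 decimal places
// of precision.
var attoPerFIL = new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)

// filToAtto converts a whole-FIL amount into attoFIL using BigInt, since the
// result easily exceeds what a native 64-bit Int can hold.
func filToAtto(fil int64) *big.Int {
	return new(big.Int).Mul(big.NewInt(fil), attoPerFIL)
}

func main() {
	supply := filToAtto(2_000_000_000) // 2 billion FIL in attoFIL
	fmt.Println(supply.IsInt64())      // false: needs BigInt, not Int
	var height int64 = 3_141_592       // chain heights fit comfortably in Int
	fmt.Println(height)
}
```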
You can also read more on data formats as part of randomness generation in the Filecoin protocol.
Serialization
Data Serialization in Filecoin ensures a consistent format for serializing in-memory data for transfer in-flight and in-storage. Serialization is critical to protocol security and interoperability across implementations of the Filecoin protocol, enabling consistent state updates across Filecoin nodes.
All data structures in Filecoin are CBOR-tuple encoded. That is, any data structures used in the Filecoin system (structs in this spec) should be serialized as CBOR-arrays with items corresponding to the data structure fields in their order of declaration.
You can find the encoding structure for major data types in CBOR here.
For illustration, an in-memory map would be represented as a CBOR-array of the keys and values listed in some pre-determined order. A near-term update to the serialization format will involve tagging fields appropriately to ensure appropriate serialization/deserialization as the protocol evolves.
VM - Virtual Machine
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
// VM is the object that controls execution.
// It is a stateless, pure function. It uses no local storage.
//
// TODO: make it just a function: VMExec(...) ?
type VM struct {
// Execute computes and returns outTree, a new StateTree which is the
// application of msgs to inTree.
//
// *Important:* Execute is intended to be a pure function, with no side-effects.
// however, storage of the new parts of the computed outTree may exist in
// local storage.
//
// *TODO:* define whether this should take 0, 1, or 2 IpldStores:
// - (): storage of IPLD datastructures is assumed implicit
// - (store): get and put to same IpldStore
// - (inStore, outStore): get from inStore, put new structures into outStore
//
// This decision impacts callers, and potentially impacts how we reason about
// local storage, and intermediate storage. It is definitely the case that
// implementations may want to operate on this differently, depending on
// how their IpldStores work.
Execute(inTree st.StateTree, msgs [msg.UnsignedMessage]) union {outTree st.StateTree, err error}
}
VM Actor Interface
// This contains actor things that are _outside_ of VM execution.
// The VM uses this to execute actors.
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import actor "github.com/filecoin-project/specs-actors/actors"
import cid "github.com/ipfs/go-cid"
// CallSeqNum is an invocation (Call) sequence (Seq) number (Num).
// This is a value used for securing against replay attacks:
// each AccountActor (user) invocation must have a unique CallSeqNum
// value. The sequentiality of the numbers is used to make it
// easy to verify, and to order messages.
//
// Q&A
// - > Does it have to be sequential?
// No, a random nonce could work against replay attacks, but
// making it sequential makes it much easier to verify.
// - > Can it be used to order events?
// Yes, a user may submit N separate messages with increasing
// sequence number, causing them to execute in order.
//
type CallSeqNum int64
// Actor is a base computation object in the Filecoin VM. Similar
// to Actors in the Actor Model (programming), or Objects in Object-
// Oriented Programming, or Ethereum Contracts in the EVM.
//
// ActorState represents the on-chain storage all actors keep.
type ActorState struct {
// Identifies the code this actor executes.
CodeID abi.ActorCodeID
// CID of the root of optional actor-specific sub-state.
State actor.ActorSubstateCID
// Balance of tokens held by this actor.
Balance abi.TokenAmount
// Expected sequence number of the next message sent by this actor.
// Initially zero, incremented when an account actor originates a top-level message.
// Always zero for other actors.
CallSeqNum
}
type ActorSystemStateCID cid.Cid
// ActorState represents the on-chain storage actors keep. This type is a
// union of concrete types, for each of the Actors:
// - InitActor
// - CronActor
// - AccountActor
// - PaymentChannelActor
// - StoragePowerActor
// - StorageMinerActor
// - StorageMarketActor
//
// TODO: move this into a directory inside the VM that patches in all
// the actors from across the system. this will be where we declare/mount
// all actors in the VM.
// type ActorState union {
// Init struct {
// AddressMap {addr.Address: ActorID}
// NextID ActorID
// }
// }
package actor
import (
util "github.com/filecoin-project/specs/util"
cid "github.com/ipfs/go-cid"
)
var IMPL_FINISH = util.IMPL_FINISH
var IMPL_TODO = util.IMPL_TODO
var TODO = util.TODO
type Serialization = util.Serialization
func (st *ActorState_I) CID() cid.Cid {
panic("TODO")
}
State Tree
The State Tree is the output of applying any operation on the Filecoin blockchain. The on-chain (i.e., VM) state data structure is a map (implemented as a Hash Array Mapped Trie, or HAMT) that binds addresses to actor states. The VM reads from and writes to the current State Tree upon every actor method invocation.
package state
import (
"context"
"fmt"
"github.com/filecoin-project/specs-actors/actors/builtin"
init_ "github.com/filecoin-project/specs-actors/actors/builtin/init"
"github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor"
logging "github.com/ipfs/go-log/v2"
"go.opencensus.io/trace"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/lotus/chain/types"
)
var log = logging.Logger("statetree")
// StateTree stores actors state by their ID.
type StateTree struct {
root *adt.Map
Store cbor.IpldStore
snaps *stateSnaps
}
type stateSnaps struct {
layers []*stateSnapLayer
}
type stateSnapLayer struct {
actors map[address.Address]streeOp
resolveCache map[address.Address]address.Address
}
func newStateSnapLayer() *stateSnapLayer {
return &stateSnapLayer{
actors: make(map[address.Address]streeOp),
resolveCache: make(map[address.Address]address.Address),
}
}
type streeOp struct {
Act types.Actor
Delete bool
}
func newStateSnaps() *stateSnaps {
ss := &stateSnaps{}
ss.addLayer()
return ss
}
func (ss *stateSnaps) addLayer() {
ss.layers = append(ss.layers, newStateSnapLayer())
}
func (ss *stateSnaps) dropLayer() {
ss.layers[len(ss.layers)-1] = nil // allow it to be GCed
ss.layers = ss.layers[:len(ss.layers)-1]
}
func (ss *stateSnaps) mergeLastLayer() {
last := ss.layers[len(ss.layers)-1]
nextLast := ss.layers[len(ss.layers)-2]
for k, v := range last.actors {
nextLast.actors[k] = v
}
for k, v := range last.resolveCache {
nextLast.resolveCache[k] = v
}
ss.dropLayer()
}
func (ss *stateSnaps) resolveAddress(addr address.Address) (address.Address, bool) {
for i := len(ss.layers) - 1; i >= 0; i-- {
resa, ok := ss.layers[i].resolveCache[addr]
if ok {
return resa, true
}
}
return address.Undef, false
}
func (ss *stateSnaps) cacheResolveAddress(addr, resa address.Address) {
ss.layers[len(ss.layers)-1].resolveCache[addr] = resa
}
func (ss *stateSnaps) getActor(addr address.Address) (*types.Actor, error) {
for i := len(ss.layers) - 1; i >= 0; i-- {
act, ok := ss.layers[i].actors[addr]
if ok {
if act.Delete {
return nil, types.ErrActorNotFound
}
return &act.Act, nil
}
}
return nil, nil
}
func (ss *stateSnaps) setActor(addr address.Address, act *types.Actor) {
ss.layers[len(ss.layers)-1].actors[addr] = streeOp{Act: *act}
}
func (ss *stateSnaps) deleteActor(addr address.Address) {
ss.layers[len(ss.layers)-1].actors[addr] = streeOp{Delete: true}
}
func NewStateTree(cst cbor.IpldStore) (*StateTree, error) {
return &StateTree{
root: adt.MakeEmptyMap(adt.WrapStore(context.TODO(), cst)),
Store: cst,
snaps: newStateSnaps(),
}, nil
}
func LoadStateTree(cst cbor.IpldStore, c cid.Cid) (*StateTree, error) {
nd, err := adt.AsMap(adt.WrapStore(context.TODO(), cst), c)
if err != nil {
log.Errorf("loading hamt node %s failed: %s", c, err)
return nil, err
}
return &StateTree{
root: nd,
Store: cst,
snaps: newStateSnaps(),
}, nil
}
func (st *StateTree) SetActor(addr address.Address, act *types.Actor) error {
iaddr, err := st.LookupID(addr)
if err != nil {
return xerrors.Errorf("ID lookup failed: %w", err)
}
addr = iaddr
st.snaps.setActor(addr, act)
return nil
}
// LookupID gets the ID address of this actor's `addr` stored in the `InitActor`.
func (st *StateTree) LookupID(addr address.Address) (address.Address, error) {
if addr.Protocol() == address.ID {
return addr, nil
}
resa, ok := st.snaps.resolveAddress(addr)
if ok {
return resa, nil
}
act, err := st.GetActor(builtin.InitActorAddr)
if err != nil {
return address.Undef, xerrors.Errorf("getting init actor: %w", err)
}
var ias init_.State
if err := st.Store.Get(context.TODO(), act.Head, &ias); err != nil {
return address.Undef, xerrors.Errorf("loading init actor state: %w", err)
}
a, found, err := ias.ResolveAddress(&AdtStore{st.Store}, addr)
if err == nil && !found {
err = types.ErrActorNotFound
}
if err != nil {
return address.Undef, xerrors.Errorf("resolve address %s: %w", addr, err)
}
st.snaps.cacheResolveAddress(addr, a)
return a, nil
}
// GetActor returns the actor from any type of `addr` provided.
func (st *StateTree) GetActor(addr address.Address) (*types.Actor, error) {
if addr == address.Undef {
return nil, fmt.Errorf("GetActor called on undefined address")
}
// Transform `addr` to its ID format.
iaddr, err := st.LookupID(addr)
if err != nil {
if xerrors.Is(err, types.ErrActorNotFound) {
return nil, xerrors.Errorf("resolution lookup failed (%s): %w", addr, err)
}
return nil, xerrors.Errorf("address resolution: %w", err)
}
addr = iaddr
snapAct, err := st.snaps.getActor(addr)
if err != nil {
return nil, err
}
if snapAct != nil {
return snapAct, nil
}
var act types.Actor
if found, err := st.root.Get(adt.AddrKey(addr), &act); err != nil {
return nil, xerrors.Errorf("hamt find failed: %w", err)
} else if !found {
return nil, types.ErrActorNotFound
}
st.snaps.setActor(addr, &act)
return &act, nil
}
func (st *StateTree) DeleteActor(addr address.Address) error {
if addr == address.Undef {
return xerrors.Errorf("DeleteActor called on undefined address")
}
iaddr, err := st.LookupID(addr)
if err != nil {
if xerrors.Is(err, types.ErrActorNotFound) {
return xerrors.Errorf("resolution lookup failed (%s): %w", addr, err)
}
return xerrors.Errorf("address resolution: %w", err)
}
addr = iaddr
_, err = st.GetActor(addr)
if err != nil {
return err
}
st.snaps.deleteActor(addr)
return nil
}
func (st *StateTree) Flush(ctx context.Context) (cid.Cid, error) {
ctx, span := trace.StartSpan(ctx, "stateTree.Flush") //nolint:staticcheck
defer span.End()
if len(st.snaps.layers) != 1 {
return cid.Undef, xerrors.Errorf("tried to flush state tree with snapshots on the stack")
}
for addr, sto := range st.snaps.layers[0].actors {
if sto.Delete {
if err := st.root.Delete(adt.AddrKey(addr)); err != nil {
return cid.Undef, err
}
} else {
if err := st.root.Put(adt.AddrKey(addr), &sto.Act); err != nil {
return cid.Undef, err
}
}
}
return st.root.Root()
}
func (st *StateTree) Snapshot(ctx context.Context) error {
ctx, span := trace.StartSpan(ctx, "stateTree.SnapShot") //nolint:staticcheck
defer span.End()
st.snaps.addLayer()
return nil
}
func (st *StateTree) ClearSnapshot() {
st.snaps.mergeLastLayer()
}
func (st *StateTree) RegisterNewAddress(addr address.Address) (address.Address, error) {
var out address.Address
err := st.MutateActor(builtin.InitActorAddr, func(initact *types.Actor) error {
var ias init_.State
if err := st.Store.Get(context.TODO(), initact.Head, &ias); err != nil {
return err
}
oaddr, err := ias.MapAddressToNewID(&AdtStore{st.Store}, addr)
if err != nil {
return err
}
out = oaddr
ncid, err := st.Store.Put(context.TODO(), &ias)
if err != nil {
return err
}
initact.Head = ncid
return nil
})
if err != nil {
return address.Undef, err
}
return out, nil
}
type AdtStore struct{ cbor.IpldStore }
func (a *AdtStore) Context() context.Context {
return context.TODO()
}
var _ adt.Store = (*AdtStore)(nil)
func (st *StateTree) Revert() error {
st.snaps.dropLayer()
st.snaps.addLayer()
return nil
}
func (st *StateTree) MutateActor(addr address.Address, f func(*types.Actor) error) error {
act, err := st.GetActor(addr)
if err != nil {
return err
}
if err := f(act); err != nil {
return err
}
return st.SetActor(addr, act)
}
func (st *StateTree) ForEach(f func(address.Address, *types.Actor) error) error {
var act types.Actor
return st.root.ForEach(&act, func(k string) error {
addr, err := address.NewFromBytes([]byte(k))
if err != nil {
return xerrors.Errorf("invalid address (%x) found in state tree key: %w", []byte(k), err)
}
return f(addr, &act)
})
}
VM Message - Actor Method Invocation
A message is the unit of communication between two actors, and thus the primitive cause of changes in state. A message combines:
- a token amount to be transferred from the sender to the receiver, and
- a method with parameters to be invoked on the receiver (optional/where applicable).
Actor code may send additional messages to other actors while processing a received message. Messages are processed synchronously, that is, an actor waits for a sent message to complete before resuming control.
The processing of a message consumes units of computation and storage, both of which are denominated in gas. A message’s gas limit provides an upper bound on the computation required to process it. The sender of a message pays for the gas units consumed by a message’s execution (including all nested messages) at a gas price they determine. A block producer chooses which messages to include in a block and is rewarded according to each message’s gas price and consumption, forming a market.
Message syntax validation
A syntactically invalid message must not be transmitted, retained in a message pool, or included in a block. If an invalid message is received, it should be dropped and not propagated further.
When transmitted individually (before inclusion in a block), a message is packaged as SignedMessage, regardless of signature scheme used. A valid signed message has a total serialized size no greater than message.MessageMaxSize.
type SignedMessage struct {
Message Message
Signature crypto.Signature
}
A syntactically valid UnsignedMessage:
- has a well-formed, non-empty To address,
- has a well-formed, non-empty From address,
- has Value no less than zero and no greater than the total token supply (2e9 * 1e18),
- has non-negative GasPrice,
- has GasLimit that is at least equal to the gas consumption associated with the message’s serialized bytes, and
- has GasLimit that is no greater than the block gas limit network parameter.
type Message struct {
// Version of this message (has to be non-negative)
Version uint64
// Address of the receiving actor.
To address.Address
// Address of the sending actor.
From address.Address
CallSeqNum uint64
// Value to transfer from sender's to receiver's balance.
Value BigInt
// GasPrice is a Gas-to-FIL cost
GasPrice BigInt
// Maximum Gas to be spent on the processing of this message
GasLimit int64
// Optional method to invoke on receiver, zero for a plain value transfer.
Method abi.MethodNum
//Serialized parameters to the method.
Params []byte
}
There should be several functions able to extract information from the Message struct, such as the sender and recipient addresses, the value to be transferred, the required funds to execute the message, and the CID of the message.
Given that transaction messages should eventually be included in a block and added to the blockchain, the validity of a message should be checked with regard to the sender and the receiver of the message, the value (which should be non-negative and always smaller than the circulating supply), the gas price (which again should be non-negative), and the GasLimit, which should not be greater than the block gas limit (BlockGasLimit).
Message semantic validation
Semantic validation refers to validation requiring information outside of the message itself.
A semantically valid SignedMessage must carry a signature that verifies the payload as having been signed with the public key of the account actor identified by the From address.
Note that when the From address is an ID-address, the public key must be looked up in the state of the sending account actor in the parent state identified by the block.
Note: the sending actor must exist in the parent state identified by the block that includes the message. This means that it is not valid for a single block to include a message that creates a new account actor and a message from that same actor. The first message from that actor must wait until a subsequent epoch. Message pools may exclude messages from an actor that is not yet present in the chain state.
There is no further semantic validation of a message that can cause a block including the message to be invalid. Every syntactically valid and correctly signed message can be included in a block and will produce a receipt from execution. The MessageReceipt struct includes the following:
type MessageReceipt struct {
ExitCode exitcode.ExitCode
Return []byte
GasUsed int64
}
However, a message may fail to execute to completion, in which case it will not trigger the desired state change.
The reason for this “no message semantic validation” policy is that the state that a message will be applied to cannot be known before the message is executed as part of a tipset. A block producer does not know whether another block will precede it in the tipset, thus altering the state to which the block’s messages will apply from the declared parent state.
package types
import (
"bytes"
"fmt"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/specs-actors/actors/abi"
"github.com/filecoin-project/specs-actors/actors/abi/big"
block "github.com/ipfs/go-block-format"
"github.com/ipfs/go-cid"
xerrors "golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
)
const MessageVersion = 0
type ChainMsg interface {
Cid() cid.Cid
VMMessage() *Message
ToStorageBlock() (block.Block, error)
// FIXME: This is the *message* length, this name is misleading.
ChainLength() int
}
type Message struct {
Version uint64
To address.Address
From address.Address
Nonce uint64
Value abi.TokenAmount
GasLimit int64
GasFeeCap abi.TokenAmount
GasPremium abi.TokenAmount
Method abi.MethodNum
Params []byte
}
func (m *Message) Caller() address.Address {
return m.From
}
func (m *Message) Receiver() address.Address {
return m.To
}
func (m *Message) ValueReceived() abi.TokenAmount {
return m.Value
}
func DecodeMessage(b []byte) (*Message, error) {
var msg Message
if err := msg.UnmarshalCBOR(bytes.NewReader(b)); err != nil {
return nil, err
}
if msg.Version != MessageVersion {
return nil, fmt.Errorf("decoded message had incorrect version (%d)", msg.Version)
}
return &msg, nil
}
func (m *Message) Serialize() ([]byte, error) {
buf := new(bytes.Buffer)
if err := m.MarshalCBOR(buf); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func (m *Message) ChainLength() int {
ser, err := m.Serialize()
if err != nil {
panic(err)
}
return len(ser)
}
func (m *Message) ToStorageBlock() (block.Block, error) {
data, err := m.Serialize()
if err != nil {
return nil, err
}
c, err := abi.CidBuilder.Sum(data)
if err != nil {
return nil, err
}
return block.NewBlockWithCid(data, c)
}
func (m *Message) Cid() cid.Cid {
b, err := m.ToStorageBlock()
if err != nil {
panic(fmt.Sprintf("failed to marshal message: %s", err)) // I think this is maybe sketchy, what happens if we try to serialize a message with an undefined address in it?
}
return b.Cid()
}
func (m *Message) RequiredFunds() BigInt {
return BigMul(m.GasFeeCap, NewInt(uint64(m.GasLimit)))
}
func (m *Message) VMMessage() *Message {
return m
}
func (m *Message) Equals(o *Message) bool {
return m.Cid() == o.Cid()
}
func (m *Message) EqualCall(o *Message) bool {
m1 := *m
m2 := *o
m1.GasLimit, m2.GasLimit = 0, 0
m1.GasFeeCap, m2.GasFeeCap = big.Zero(), big.Zero()
m1.GasPremium, m2.GasPremium = big.Zero(), big.Zero()
return (&m1).Equals(&m2)
}
func (m *Message) ValidForBlockInclusion(minGas int64) error {
if m.Version != 0 {
return xerrors.New("'Version' unsupported")
}
if m.To == address.Undef {
return xerrors.New("'To' address cannot be empty")
}
if m.From == address.Undef {
return xerrors.New("'From' address cannot be empty")
}
if m.Value.Int == nil {
return xerrors.New("'Value' cannot be nil")
}
if m.Value.LessThan(big.Zero()) {
return xerrors.New("'Value' field cannot be negative")
}
if m.Value.GreaterThan(TotalFilecoinInt) {
return xerrors.New("'Value' field cannot be greater than total filecoin supply")
}
if m.GasFeeCap.Int == nil {
return xerrors.New("'GasFeeCap' cannot be nil")
}
if m.GasFeeCap.LessThan(big.Zero()) {
return xerrors.New("'GasFeeCap' field cannot be negative")
}
if m.GasPremium.Int == nil {
return xerrors.New("'GasPremium' cannot be nil")
}
if m.GasPremium.LessThan(big.Zero()) {
return xerrors.New("'GasPremium' field cannot be negative")
}
if m.GasPremium.GreaterThan(m.GasFeeCap) {
return xerrors.New("'GasFeeCap' less than 'GasPremium'")
}
if m.GasLimit > build.BlockGasLimit {
return xerrors.New("'GasLimit' field cannot be greater than a block's gas limit")
}
// since prices might vary with time, this is technically semantic validation
if m.GasLimit < minGas {
return xerrors.New("'GasLimit' field cannot be less than the cost of storing a message on chain")
}
return nil
}
const TestGasLimit = 100e6
VM Runtime Environment (Inside the VM)
Receipts
A MessageReceipt contains the result of a top-level message execution.
A syntactically valid receipt has:
- a non-negative ExitCode,
- a non-empty ReturnValue only if the exit code is zero, and
- a non-negative GasUsed.
vm/runtime interface
package runtime
import (
"bytes"
"context"
"io"
"github.com/filecoin-project/go-address"
addr "github.com/filecoin-project/go-address"
cid "github.com/ipfs/go-cid"
abi "github.com/filecoin-project/specs-actors/actors/abi"
crypto "github.com/filecoin-project/specs-actors/actors/crypto"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
)
// Specifies importance of message, LogLevel numbering is consistent with the uber-go/zap package.
type LogLevel int
const (
// DebugLevel logs are typically voluminous, and are usually disabled in
// production.
DEBUG LogLevel = iota - 1
// InfoLevel is the default logging priority.
INFO
// WarnLevel logs are more important than Info, but don't need individual
// human review.
WARN
// ErrorLevel logs are high-priority. If an application is running smoothly,
// it shouldn't generate any error-level logs.
ERROR
)
// Runtime is the VM's internal runtime object.
// this is everything that is accessible to actors, beyond parameters.
type Runtime interface {
// Information related to the current message being executed.
// When an actor invokes a method on another actor as a sub-call, these values reflect
// the sub-call context, rather than the top-level context.
Message() Message
// The current chain epoch number. The genesis block has epoch zero.
CurrEpoch() abi.ChainEpoch
// Satisfies the requirement that every exported actor method must invoke at least one caller validation
// method before returning, without making any assertions about the caller.
ValidateImmediateCallerAcceptAny()
// Validates that the immediate caller's address exactly matches one of a set of expected addresses,
// aborting if it does not.
// The caller address is always normalized to an ID address, so expected addresses must be
// ID addresses to have any expectation of passing validation.
ValidateImmediateCallerIs(addrs ...addr.Address)
// Validates that the immediate caller is an actor with code CID matching one of a set of
// expected CIDs, aborting if it does not.
ValidateImmediateCallerType(types ...cid.Cid)
// The balance of the receiver. Always >= zero.
CurrentBalance() abi.TokenAmount
// Resolves an address of any protocol to an ID address (via the Init actor's table).
// This allows resolution of externally-provided SECP, BLS, or actor addresses to the canonical form.
// If the argument is an ID address it is returned directly.
ResolveAddress(address addr.Address) (addr.Address, bool)
// Look up the code ID at an actor address.
// The address will be resolved as if via ResolveAddress, if necessary, so need not be an ID-address.
GetActorCodeCID(addr addr.Address) (ret cid.Cid, ok bool)
// GetRandomnessFromBeacon returns a (pseudo)random byte array drawing from a random beacon at a prior epoch.
// The beacon value is combined with the personalization tag, epoch number, and explicitly provided entropy.
// The personalization tag may be any int64 value.
// The epoch must be less than the current epoch. The epoch may be negative, in which case
// it addresses the beacon value from genesis block.
// The entropy may be any byte array, or nil.
GetRandomnessFromBeacon(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness
// GetRandomnessFromTickets samples randomness from the ticket chain. Randomness
// sampled through this method is unique per potential fork, and as a
// result, processes relying on this randomness are tied to whichever fork
// they choose.
// See GetRandomnessFromBeacon for notes about the personalization tag, epoch, and entropy.
GetRandomnessFromTickets(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness
// Provides a handle for the actor's state object.
State() StateHandle
Store() Store
// Sends a message to another actor, returning the exit code and return value envelope.
// If the invoked method does not return successfully, its state changes (and that of any messages it sent in turn)
// will be rolled back.
// The result is never a bare nil, but may be (a wrapper of) adt.Empty.
Send(toAddr addr.Address, methodNum abi.MethodNum, params CBORMarshaler, value abi.TokenAmount) (SendReturn, exitcode.ExitCode)
// Halts execution upon an error from which the receiver cannot recover. The caller will receive the exitcode and
// an empty return value. State changes made within this call will be rolled back.
// This method does not return.
// The provided exit code must be >= exitcode.FirstActorExitCode.
// The message and args are for diagnostic purposes and do not persist on chain. They should be suitable for
// passing to fmt.Errorf(msg, args...).
Abortf(errExitCode exitcode.ExitCode, msg string, args ...interface{})
// Computes an address for a new actor. The returned address is intended to uniquely refer to
// the actor even in the event of a chain re-org (whereas an ID-address might refer to a
// different actor after messages are re-ordered).
// Always an ActorExec address.
NewActorAddress() addr.Address
// Creates an actor with code `codeID` and address `address`, with empty state.
// May only be called by Init actor.
// Aborts if the provided address has previously been created.
CreateActor(codeId cid.Cid, address addr.Address)
// Deletes the executing actor from the state tree, transferring any balance to beneficiary.
// Aborts if the beneficiary does not exist or is the calling actor.
// May only be called by the actor itself.
DeleteActor(beneficiary addr.Address)
// Provides the system call interface.
Syscalls() Syscalls
// Returns the total token supply in circulation at the beginning of the current epoch.
// The circulating supply is the sum of:
// - rewards emitted by the reward actor,
// - funds vested from lock-ups in the genesis state,
// less the sum of:
// - funds burnt,
// - pledge collateral locked in storage miner actors (recorded in the storage power actor)
// - deal collateral locked by the storage market actor
TotalFilCircSupply() abi.TokenAmount
// Provides a Go context for use by HAMT, etc.
// The VM is intended to provide an idealised machine abstraction, with infinite storage etc, so this context
// should not be used by actor code directly.
Context() context.Context
// Starts a new tracing span. The span must be End()ed explicitly, typically with a deferred invocation.
StartSpan(name string) TraceSpan
// ChargeGas charges specified amount of `gas` for execution.
// `name` provides information about gas charging point
// `virtual` sets virtual amount of gas to charge, this amount is not counted
// toward execution cost. This functionality is used for observing global changes
// in total gas charged if amount of gas charged was to be changed.
ChargeGas(name string, gas int64, virtual int64)
// Note events that may make debugging easier
Log(level LogLevel, msg string, args ...interface{})
}
// Store defines the storage module exposed to actors.
type Store interface {
// Retrieves and deserializes an object from the store into `o`. Returns whether successful.
Get(c cid.Cid, o CBORUnmarshaler) bool
// Serializes and stores an object, returning its CID.
Put(x CBORMarshaler) cid.Cid
}
// Message contains information available to the actor about the executing message.
// These values are fixed for the duration of an invocation.
type Message interface {
// The address of the immediate calling actor. Always an ID-address.
// If an actor invokes its own method, Caller() == Receiver().
Caller() addr.Address
// The address of the actor receiving the message. Always an ID-address.
Receiver() addr.Address
// The value attached to the message being processed, implicitly added to CurrentBalance()
// of Receiver() before method invocation.
// This value came from Caller().
ValueReceived() abi.TokenAmount
}
// Pure functions implemented as primitives by the runtime.
type Syscalls interface {
// Verifies that a signature is valid for an address and plaintext.
// If the address is a public-key type address, it is used directly.
// If it's an ID-address, the actor is looked up in state. It must be an account actor, and the
// public key is obtained from its state.
VerifySignature(signature crypto.Signature, signer addr.Address, plaintext []byte) error
// Hashes input data using blake2b with 256 bit output.
HashBlake2b(data []byte) [32]byte
// Computes an unsealed sector CID (CommD) from its constituent piece CIDs (CommPs) and sizes.
ComputeUnsealedSectorCID(reg abi.RegisteredSealProof, pieces []abi.PieceInfo) (cid.Cid, error)
// Verifies a sector seal proof.
VerifySeal(vi abi.SealVerifyInfo) error
BatchVerifySeals(vis map[address.Address][]abi.SealVerifyInfo) (map[address.Address][]bool, error)
// Verifies a proof of spacetime.
VerifyPoSt(vi abi.WindowPoStVerifyInfo) error
// Verifies that two block headers provide proof of a consensus fault:
// - both headers mined by the same actor
// - headers are different
// - first header is of the same or lower epoch as the second
// - the headers provide evidence of a fault (see the spec for the different fault types).
// The parameters are all serialized block headers. The third "extra" parameter is consulted only for
// the "parent grinding fault", in which case it must be the sibling of h1 (same parent tipset) and one of the
// blocks in an ancestor of h2.
// Returns nil and an error if the headers don't prove a fault.
VerifyConsensusFault(h1, h2, extra []byte) (*ConsensusFault, error)
}
// The return type from a message send from one actor to another. This abstracts over the internal representation of
// the return, in particular whether it has been serialized to bytes or just passed through.
// Production code is expected to de/serialize, but test and other code may pass the value straight through.
type SendReturn interface {
Into(CBORUnmarshaler) error
}
// Provides (minimal) tracing facilities to actor code.
type TraceSpan interface {
// Ends the span
End()
}
// StateHandle provides mutable, exclusive access to actor state.
type StateHandle interface {
// Create initializes the state object.
// This is only valid in a constructor function and when the state has not yet been initialized.
Create(obj CBORMarshaler)
// Readonly loads a readonly copy of the state into the argument.
//
// Any modification to the state is illegal and will result in an abort.
Readonly(obj CBORUnmarshaler)
// Transaction loads a mutable version of the state into the `obj` argument and protects
// the execution from side effects (including message send).
//
// The second argument is a function which allows the caller to mutate the state.
//
// If the state is modified after this function returns, execution will abort.
//
// The gas cost of this method is that of a Store.Put of the mutated state object.
//
// Note: the Go signature is not ideal due to lack of type system power.
//
// # Usage
// ```go
// var state SomeState
// rt.State().Transaction(&state, func() {
// 	// make some changes
// 	state.ImLoaded = true
// })
// // state.ImLoaded = false // BAD!! state is readonly outside the lambda, it will panic
// ```
Transaction(obj CBORer, f func())
}
// Result of checking two headers for a consensus fault.
type ConsensusFault struct {
// Address of the miner at fault (always an ID address).
Target addr.Address
// Epoch of the fault, which is the higher epoch of the two blocks causing it.
Epoch abi.ChainEpoch
// Type of fault.
Type ConsensusFaultType
}
type ConsensusFaultType int64
const (
//ConsensusFaultNone ConsensusFaultType = 0
ConsensusFaultDoubleForkMining ConsensusFaultType = 1
ConsensusFaultParentGrinding ConsensusFaultType = 2
ConsensusFaultTimeOffsetMining ConsensusFaultType = 3
)
// These interfaces are intended to match those from whyrusleeping/cbor-gen, such that code generated from that
// system is automatically usable here (but not mandatory).
type CBORMarshaler interface {
MarshalCBOR(w io.Writer) error
}
type CBORUnmarshaler interface {
UnmarshalCBOR(r io.Reader) error
}
type CBORer interface {
CBORMarshaler
CBORUnmarshaler
}
// Wraps already-serialized bytes as CBOR-marshalable.
type CBORBytes []byte
func (b CBORBytes) MarshalCBOR(w io.Writer) error {
_, err := w.Write(b)
return err
}
func (b *CBORBytes) UnmarshalCBOR(r io.Reader) error {
var c bytes.Buffer
_, err := c.ReadFrom(r)
*b = c.Bytes()
return err
}
vm/runtime implementation
package impl
import (
"bytes"
"encoding/binary"
"fmt"
addr "github.com/filecoin-project/go-address"
actor "github.com/filecoin-project/specs-actors/actors"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
acctact "github.com/filecoin-project/specs-actors/actors/builtin/account"
initact "github.com/filecoin-project/specs-actors/actors/builtin/init"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
ipld "github.com/filecoin-project/specs/libraries/ipld"
chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
actstate "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
gascost "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/gascost"
st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
util "github.com/filecoin-project/specs/util"
cid "github.com/ipfs/go-cid"
cbornode "github.com/ipfs/go-ipld-cbor"
mh "github.com/multiformats/go-multihash"
)
type ActorSubstateCID = actor.ActorSubstateCID
type ExitCode = exitcode.ExitCode
type CallerPattern = vmr.CallerPattern
type Runtime = vmr.Runtime
type InvocInput = vmr.InvocInput
type InvocOutput = vmr.InvocOutput
type ActorStateHandle = vmr.ActorStateHandle
var EnsureErrorCode = exitcode.EnsureErrorCode
type Bytes = util.Bytes
var Assert = util.Assert
var IMPL_FINISH = util.IMPL_FINISH
var IMPL_TODO = util.IMPL_TODO
var TODO = util.TODO
var EmptyCBOR cid.Cid
type RuntimeError struct {
ExitCode ExitCode
ErrMsg string
}
func init() {
n, err := cbornode.WrapObject(map[string]struct{}{}, mh.SHA2_256, -1)
Assert(err == nil)
EmptyCBOR = n.Cid()
}
func (x *RuntimeError) String() string {
ret := fmt.Sprintf("Runtime error: %v", x.ExitCode)
if x.ErrMsg != "" {
ret += fmt.Sprintf(" (\"%v\")", x.ErrMsg)
}
return ret
}
func RuntimeError_Make(exitCode ExitCode, errMsg string) *RuntimeError {
exitCode = EnsureErrorCode(exitCode)
return &RuntimeError{
ExitCode: exitCode,
ErrMsg: errMsg,
}
}
func ActorSubstateCID_Equals(x, y ActorSubstateCID) bool {
IMPL_FINISH()
panic("")
}
type ActorStateHandle_I struct {
_initValue *ActorSubstateCID
_rt *VMContext
}
func (h *ActorStateHandle_I) UpdateRelease(newStateCID ActorSubstateCID) {
h._rt._updateReleaseActorSubstate(newStateCID)
}
func (h *ActorStateHandle_I) Release(checkStateCID ActorSubstateCID) {
h._rt._releaseActorSubstate(checkStateCID)
}
func (h *ActorStateHandle_I) Take() ActorSubstateCID {
if h._initValue == nil {
h._rt._apiError("Must call Take() only once on actor substate object")
}
ret := *h._initValue
h._initValue = nil
return ret
}
// Concrete instantiation of the Runtime interface. This should be instantiated by the
// interpreter once per actor method invocation, and responds to that method's Runtime
// API calls.
type VMContext struct {
_store ipld.GraphStore
_globalStateInit st.StateTree
_globalStatePending st.StateTree
_running bool
_chain chain.Chain
_actorAddress addr.Address
_actorStateAcquired bool
// Tracks whether actor substate has changed in order to charge gas just once
// regardless of how many times it's written.
_actorSubstateUpdated bool
_immediateCaller addr.Address
// Note: This is the actor in the From field of the initial on-chain message.
// Not necessarily the immediate caller.
_toplevelSender addr.Address
_toplevelBlockWinner addr.Address
// call sequence number of the top level message that began this execution sequence
_toplevelMsgCallSeqNum actstate.CallSeqNum
// Sequence number representing the total number of calls (to any actor, any method)
// during the current top-level message execution.
// Note: resets with every top-level message, and therefore not necessarily monotonic.
_internalCallSeqNum actstate.CallSeqNum
_valueReceived abi.TokenAmount
_gasRemaining msg.GasAmount
_numValidateCalls int
_output vmr.InvocOutput
}
func VMContext_Make(
store ipld.GraphStore,
chain chain.Chain,
toplevelSender addr.Address,
toplevelBlockWinner addr.Address,
toplevelMsgCallSeqNum actstate.CallSeqNum,
internalCallSeqNum actstate.CallSeqNum,
globalState st.StateTree,
actorAddress addr.Address,
valueReceived abi.TokenAmount,
gasRemaining msg.GasAmount) *VMContext {
return &VMContext{
_store: store,
_chain: chain,
_globalStateInit: globalState,
_globalStatePending: globalState,
_running: false,
_actorAddress: actorAddress,
_actorStateAcquired: false,
_actorSubstateUpdated: false,
_toplevelSender: toplevelSender,
_toplevelBlockWinner: toplevelBlockWinner,
_toplevelMsgCallSeqNum: toplevelMsgCallSeqNum,
_internalCallSeqNum: internalCallSeqNum,
_valueReceived: valueReceived,
_gasRemaining: gasRemaining,
_numValidateCalls: 0,
_output: vmr.InvocOutput{},
}
}
func (rt *VMContext) AbortArgMsg(msg string) {
rt.Abort(exitcode.InvalidArguments_User, msg)
}
func (rt *VMContext) AbortArg() {
rt.AbortArgMsg("Invalid arguments")
}
func (rt *VMContext) AbortStateMsg(msg string) {
rt.Abort(exitcode.InconsistentState_User, msg)
}
func (rt *VMContext) AbortState() {
rt.AbortStateMsg("Inconsistent state")
}
func (rt *VMContext) AbortFundsMsg(msg string) {
rt.Abort(exitcode.InsufficientFunds_User, msg)
}
func (rt *VMContext) AbortFunds() {
rt.AbortFundsMsg("Insufficient funds")
}
func (rt *VMContext) AbortAPI(msg string) {
rt.Abort(exitcode.RuntimeAPIError, msg)
}
func (rt *VMContext) CreateActor(codeID abi.ActorCodeID, address addr.Address) {
if rt._actorAddress != builtin.InitActorAddr {
rt.AbortAPI("Only InitActor may call rt.CreateActor")
}
if address.Protocol() != addr.ID {
rt.AbortAPI("New actor address must be an ID-address")
}
rt._createActor(codeID, address)
}
func (rt *VMContext) _createActor(codeID abi.ActorCodeID, address addr.Address) {
// Create empty actor state.
actorState := &actstate.ActorState_I{
CodeID_: codeID,
State_: actor.ActorSubstateCID(EmptyCBOR),
Balance_: abi.TokenAmount(0),
CallSeqNum_: 0,
}
// Put it in the state tree.
actorStateCID := actstate.ActorSystemStateCID(rt.IpldPut(actorState))
rt._updateActorSystemStateInternal(address, actorStateCID)
rt._rtAllocGas(gascost.ExecNewActor)
}
func (rt *VMContext) DeleteActor(address addr.Address) {
// Only a given actor may delete itself.
if rt._actorAddress != address {
rt.AbortAPI("Invalid actor deletion request")
}
rt._deleteActor(address)
}
func (rt *VMContext) _deleteActor(address addr.Address) {
rt._globalStatePending = rt._globalStatePending.Impl().WithDeleteActorSystemState(address)
rt._rtAllocGas(gascost.DeleteActor)
}
func (rt *VMContext) _updateActorSystemStateInternal(actorAddress addr.Address, newStateCID actstate.ActorSystemStateCID) {
newGlobalStatePending, err := rt._globalStatePending.Impl().WithActorSystemState(rt._actorAddress, newStateCID)
if err != nil {
panic("Error in runtime implementation: failed to update actor system state")
}
rt._globalStatePending = newGlobalStatePending
}
func (rt *VMContext) _updateActorSubstateInternal(actorAddress addr.Address, newStateCID actor.ActorSubstateCID) {
newGlobalStatePending, err := rt._globalStatePending.Impl().WithActorSubstate(rt._actorAddress, newStateCID)
if err != nil {
panic("Error in runtime implementation: failed to update actor substate")
}
rt._globalStatePending = newGlobalStatePending
}
func (rt *VMContext) _updateReleaseActorSubstate(newStateCID ActorSubstateCID) {
rt._checkRunning()
rt._checkActorStateAcquired()
rt._updateActorSubstateInternal(rt._actorAddress, newStateCID)
rt._actorSubstateUpdated = true
rt._actorStateAcquired = false
}
func (rt *VMContext) _releaseActorSubstate(checkStateCID ActorSubstateCID) {
rt._checkRunning()
rt._checkActorStateAcquired()
prevState, ok := rt._globalStatePending.GetActor(rt._actorAddress)
util.Assert(ok)
prevStateCID := prevState.State()
if !ActorSubstateCID_Equals(prevStateCID, checkStateCID) {
rt.AbortAPI("State CID differs upon release call")
}
rt._actorStateAcquired = false
}
func (rt *VMContext) Assert(cond bool) {
if !cond {
rt.Abort(exitcode.RuntimeAssertFailure, "Runtime assertion failed")
}
}
func (rt *VMContext) _checkActorStateAcquiredFlag(expected bool) {
rt._checkRunning()
if rt._actorStateAcquired != expected {
rt._apiError("State updates and message sends must be disjoint")
}
}
func (rt *VMContext) _checkActorStateAcquired() {
rt._checkActorStateAcquiredFlag(true)
}
func (rt *VMContext) _checkActorStateNotAcquired() {
rt._checkActorStateAcquiredFlag(false)
}
func (rt *VMContext) Abort(errExitCode exitcode.ExitCode, errMsg string) {
errExitCode = exitcode.EnsureErrorCode(errExitCode)
rt._throwErrorFull(errExitCode, errMsg)
}
func (rt *VMContext) ImmediateCaller() addr.Address {
return rt._immediateCaller
}
func (rt *VMContext) CurrReceiver() addr.Address {
return rt._actorAddress
}
func (rt *VMContext) ToplevelBlockWinner() addr.Address {
return rt._toplevelBlockWinner
}
func (rt *VMContext) ValidateImmediateCallerMatches(
callerExpectedPattern CallerPattern) {
rt._checkRunning()
rt._checkNumValidateCalls(0)
caller := rt.ImmediateCaller()
if !callerExpectedPattern.Matches(caller) {
rt.AbortAPI("Method invoked by incorrect caller")
}
rt._numValidateCalls += 1
}
func CallerPattern_MakeAcceptAnyOfTypes(rt *VMContext, types []abi.ActorCodeID) CallerPattern {
return CallerPattern{
Matches: func(y addr.Address) bool {
codeID, ok := rt.GetActorCodeID(y)
if !ok {
panic("Internal runtime error: actor not found")
}
for _, type_ := range types {
if codeID == type_ {
return true
}
}
return false
},
}
}
func (rt *VMContext) ValidateImmediateCallerIs(callerExpected addr.Address) {
rt.ValidateImmediateCallerMatches(vmr.CallerPattern_MakeSingleton(callerExpected))
}
func (rt *VMContext) ValidateImmediateCallerInSet(callersExpected []addr.Address) {
rt.ValidateImmediateCallerMatches(vmr.CallerPattern_MakeSet(callersExpected))
}
func (rt *VMContext) ValidateImmediateCallerAcceptAnyOfType(type_ abi.ActorCodeID) {
rt.ValidateImmediateCallerAcceptAnyOfTypes([]abi.ActorCodeID{type_})
}
func (rt *VMContext) ValidateImmediateCallerAcceptAnyOfTypes(types []abi.ActorCodeID) {
rt.ValidateImmediateCallerMatches(CallerPattern_MakeAcceptAnyOfTypes(rt, types))
}
func (rt *VMContext) ValidateImmediateCallerAcceptAny() {
rt.ValidateImmediateCallerMatches(vmr.CallerPattern_MakeAcceptAny())
}
func (rt *VMContext) _checkNumValidateCalls(x int) {
if rt._numValidateCalls != x {
rt.AbortAPI("Method must validate caller identity exactly once")
}
}
func (rt *VMContext) _checkRunning() {
if !rt._running {
panic("Internal runtime error: actor API called with no actor code running")
}
}
func (rt *VMContext) SuccessReturn() InvocOutput {
return vmr.InvocOutput_Make(nil)
}
func (rt *VMContext) ValueReturn(value util.Bytes) InvocOutput {
return vmr.InvocOutput_Make(value)
}
func (rt *VMContext) _throwError(exitCode ExitCode) {
rt._throwErrorFull(exitCode, "")
}
func (rt *VMContext) _throwErrorFull(exitCode ExitCode, errMsg string) {
panic(RuntimeError_Make(exitCode, errMsg))
}
func (rt *VMContext) _apiError(errMsg string) {
rt._throwErrorFull(exitcode.RuntimeAPIError, errMsg)
}
func _gasAmountAssertValid(x msg.GasAmount) {
if x.LessThan(msg.GasAmount_Zero()) {
panic("Interpreter error: negative gas amount")
}
}
// Deduct an amount of gas corresponding to cost about to be incurred, but not necessarily
// incurred yet.
func (rt *VMContext) _rtAllocGas(x msg.GasAmount) {
_gasAmountAssertValid(x)
var ok bool
rt._gasRemaining, ok = rt._gasRemaining.SubtractIfNonnegative(x)
if !ok {
rt._throwError(exitcode.OutOfGas)
}
}
func (rt *VMContext) _transferFunds(from addr.Address, to addr.Address, amount abi.TokenAmount) error {
rt._checkRunning()
rt._checkActorStateNotAcquired()
newGlobalStatePending, err := rt._globalStatePending.Impl().WithFundsTransfer(from, to, amount)
if err != nil {
return err
}
rt._globalStatePending = newGlobalStatePending
return nil
}
func (rt *VMContext) GetActorCodeID(actorAddr addr.Address) (ret abi.ActorCodeID, ok bool) {
IMPL_FINISH()
panic("")
}
type ErrorHandlingSpec int
const (
PropagateErrors ErrorHandlingSpec = 1 + iota
CatchErrors
)
// TODO: This function should be private (not intended to be exposed to actors).
// (merging runtime and interpreter packages should solve this)
// TODO: this should not use the MessageReceipt return type, even though it needs the same triple
// of values. This method cannot compute the total gas cost and the returned receipt will never
// go on chain.
func (rt *VMContext) SendToplevelFromInterpreter(input InvocInput) (MessageReceipt, st.StateTree) {
rt._running = true
ret := rt._sendInternal(input, CatchErrors)
rt._running = false
return ret, rt._globalStatePending
}
func _catchRuntimeErrors(f func() InvocOutput) (output InvocOutput, exitCode exitcode.ExitCode) {
defer func() {
if r := recover(); r != nil {
switch e := r.(type) {
case *RuntimeError:
output = vmr.InvocOutput_Make(nil)
exitCode = e.ExitCode
default:
panic(r)
}
}
}()
output = f()
exitCode = exitcode.OK()
return
}
func _invokeMethodInternal(
rt *VMContext,
actorCode vmr.ActorCode,
method abi.MethodNum,
params abi.MethodParams) (
ret InvocOutput, exitCode exitcode.ExitCode, internalCallSeqNumFinal actstate.CallSeqNum) {
if method == builtin.MethodSend {
ret = vmr.InvocOutput_Make(nil)
return
}
rt._running = true
ret, exitCode = _catchRuntimeErrors(func() InvocOutput {
IMPL_TODO("dispatch to actor code")
var methodOutput vmr.InvocOutput // actorCode.InvokeMethod(rt, method, params)
if rt._actorSubstateUpdated {
rt._rtAllocGas(gascost.UpdateActorSubstate)
}
rt._checkActorStateNotAcquired()
rt._checkNumValidateCalls(1)
return methodOutput
})
rt._running = false
internalCallSeqNumFinal = rt._internalCallSeqNum
return
}
func (rtOuter *VMContext) _sendInternal(input InvocInput, errSpec ErrorHandlingSpec) MessageReceipt {
rtOuter._checkRunning()
rtOuter._checkActorStateNotAcquired()
initGasRemaining := rtOuter._gasRemaining
rtOuter._rtAllocGas(gascost.InvokeMethod(input.Value, input.Method))
receiver, receiverAddr := rtOuter._resolveReceiver(input.To)
receiverCode, err := loadActorCode(receiver.CodeID())
if err != nil {
rtOuter._throwError(exitcode.ActorCodeNotFound)
}
err = rtOuter._transferFunds(rtOuter._actorAddress, receiverAddr, input.Value)
if err != nil {
rtOuter._throwError(exitcode.InsufficientFunds_System)
}
rtInner := VMContext_Make(
rtOuter._store,
rtOuter._chain,
rtOuter._toplevelSender,
rtOuter._toplevelBlockWinner,
rtOuter._toplevelMsgCallSeqNum,
rtOuter._internalCallSeqNum+1,
rtOuter._globalStatePending,
receiverAddr,
input.Value,
rtOuter._gasRemaining,
)
invocOutput, exitCode, internalCallSeqNumFinal := _invokeMethodInternal(
rtInner,
receiverCode,
input.Method,
input.Params,
)
_gasAmountAssertValid(rtOuter._gasRemaining.Subtract(rtInner._gasRemaining))
rtOuter._gasRemaining = rtInner._gasRemaining
gasUsed := initGasRemaining.Subtract(rtOuter._gasRemaining)
_gasAmountAssertValid(gasUsed)
rtOuter._internalCallSeqNum = internalCallSeqNumFinal
if exitCode == exitcode.OutOfGas {
// OutOfGas error cannot be caught
rtOuter._throwError(exitCode)
}
if errSpec == PropagateErrors && exitCode.IsError() {
rtOuter._throwError(exitcode.MethodSubcallError)
}
if exitCode.AllowsStateUpdate() {
rtOuter._globalStatePending = rtInner._globalStatePending
}
return MessageReceipt_Make(invocOutput, exitCode, gasUsed)
}
// Loads a receiving actor state from the state tree, resolving non-ID addresses through the InitActor state.
// If it doesn't exist, and the message is a simple value send to a pubkey-style address,
// creates the receiver as an account actor in the returned state.
// Aborts otherwise.
func (rt *VMContext) _resolveReceiver(targetRaw addr.Address) (actstate.ActorState, addr.Address) {
// Resolve the target address via the InitActor, and attempt to load state.
initSubState := rt._loadInitActorState()
targetIdAddr := initSubState.ResolveAddress(targetRaw)
act, found := rt._globalStatePending.GetActor(targetIdAddr)
if found {
return act, targetIdAddr
}
if targetRaw.Protocol() != addr.SECP256K1 && targetRaw.Protocol() != addr.BLS {
// Don't implicitly create an account actor for an address without an associated key.
rt._throwError(exitcode.ActorNotFound)
}
// Allocate an ID address from the init actor and map the pubkey To address to it.
newIdAddr := initSubState.MapAddressToNewID(targetRaw)
rt._saveInitActorState(initSubState)
// Create new account actor (charges gas).
rt._createActor(builtin.AccountActorCodeID, newIdAddr)
// Initialize account actor substate with its pubkey address.
substate := &acctact.AccountActorState{
Address: targetRaw,
}
rt._saveAccountActorState(newIdAddr, *substate)
act, _ = rt._globalStatePending.GetActor(newIdAddr)
return act, newIdAddr
}
func (rt *VMContext) _loadInitActorState() initact.InitActorState {
initState, ok := rt._globalStatePending.GetActor(builtin.InitActorAddr)
util.Assert(ok)
var initSubState initact.InitActorState
ok = rt.IpldGet(cid.Cid(initState.State()), &initSubState)
util.Assert(ok)
return initSubState
}
func (rt *VMContext) _saveInitActorState(state initact.InitActorState) {
// Gas is charged here separately from _actorSubstateUpdated because this is a different actor
// than the receiver.
rt._rtAllocGas(gascost.UpdateActorSubstate)
rt._updateActorSubstateInternal(builtin.InitActorAddr, actor.ActorSubstateCID(rt.IpldPut(&state)))
}
func (rt *VMContext) _saveAccountActorState(address addr.Address, state acctact.AccountActorState) {
// Gas is charged here separately from _actorSubstateUpdated because this is a different actor
// than the receiver.
rt._rtAllocGas(gascost.UpdateActorSubstate)
rt._updateActorSubstateInternal(address, actor.ActorSubstateCID(rt.IpldPut(&state)))
}
func (rt *VMContext) _sendInternalOutputs(input InvocInput, errSpec ErrorHandlingSpec) (InvocOutput, exitcode.ExitCode) {
ret := rt._sendInternal(input, errSpec)
return vmr.InvocOutput_Make(ret.ReturnValue), ret.ExitCode
}
func (rt *VMContext) Send(
toAddr addr.Address, methodNum abi.MethodNum, params abi.MethodParams, value abi.TokenAmount) InvocOutput {
return rt.SendPropagatingErrors(vmr.InvocInput_Make(toAddr, methodNum, params, value))
}
func (rt *VMContext) SendQuery(toAddr addr.Address, methodNum abi.MethodNum, params abi.MethodParams) util.Serialization {
invocOutput := rt.Send(toAddr, methodNum, params, abi.TokenAmount(0))
ret := invocOutput.ReturnValue
Assert(ret != nil)
return ret
}
func (rt *VMContext) SendFunds(toAddr addr.Address, value abi.TokenAmount) {
rt.Send(toAddr, builtin.MethodSend, nil, value)
}
func (rt *VMContext) SendPropagatingErrors(input InvocInput) InvocOutput {
ret, _ := rt._sendInternalOutputs(input, PropagateErrors)
return ret
}
func (rt *VMContext) SendCatchingErrors(input InvocInput) (InvocOutput, exitcode.ExitCode) {
rt.ValidateImmediateCallerIs(builtin.CronActorAddr)
return rt._sendInternalOutputs(input, CatchErrors)
}
func (rt *VMContext) CurrentBalance() abi.TokenAmount {
IMPL_FINISH()
panic("")
}
func (rt *VMContext) ValueReceived() abi.TokenAmount {
return rt._valueReceived
}
func (rt *VMContext) GetRandomness(epoch abi.ChainEpoch) abi.RandomnessSeed {
return rt._chain.RandomnessSeedAtEpoch(epoch)
}
func (rt *VMContext) NewActorAddress() addr.Address {
addrBuf := new(bytes.Buffer)
senderState, ok := rt._globalStatePending.GetActor(rt._toplevelSender)
util.Assert(ok)
var aast acctact.AccountActorState
ok = rt.IpldGet(cid.Cid(senderState.State()), &aast)
util.Assert(ok)
err := aast.Address.MarshalCBOR(addrBuf)
util.Assert(err == nil)
err = binary.Write(addrBuf, binary.BigEndian, rt._toplevelMsgCallSeqNum)
util.Assert(err == nil)
err = binary.Write(addrBuf, binary.BigEndian, rt._internalCallSeqNum)
util.Assert(err == nil)
newAddr, err := addr.NewActorAddress(addrBuf.Bytes())
util.Assert(err == nil)
return newAddr
}
func (rt *VMContext) IpldPut(x ipld.Object) cid.Cid {
IMPL_FINISH() // Serialization
serialized := []byte{}
cid := rt._store.Put(serialized)
rt._rtAllocGas(gascost.IpldPut(len(serialized)))
return cid
}
func (rt *VMContext) IpldGet(c cid.Cid, o ipld.Object) bool {
serialized, ok := rt._store.Get(c)
if ok {
rt._rtAllocGas(gascost.IpldGet(len(serialized)))
}
IMPL_FINISH() // Deserialization into o
return ok
}
func (rt *VMContext) CurrEpoch() abi.ChainEpoch {
IMPL_FINISH()
panic("")
}
func (rt *VMContext) CurrIndices() indices.Indices {
// TODO: compute from state tree (rt._globalStatePending), using individual actor
// state helper functions when possible
TODO()
panic("")
}
func (rt *VMContext) AcquireState() ActorStateHandle {
rt._checkRunning()
rt._checkActorStateNotAcquired()
rt._actorStateAcquired = true
state, ok := rt._globalStatePending.GetActor(rt._actorAddress)
util.Assert(ok)
stateRef := state.State().Ref()
return &ActorStateHandle_I{
_initValue: &stateRef,
_rt: rt,
}
}
func (rt *VMContext) Compute(f ComputeFunctionID, args []util.Any) Any {
def, found := _computeFunctionDefs[f]
if !found {
rt.AbortAPI("Function definition in rt.Compute() not found")
}
gasCost := def.GasCostFn(args)
rt._rtAllocGas(gasCost)
return def.Body(args)
}
Code Loading
package impl
import (
abi "github.com/filecoin-project/specs-actors/actors/abi"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
)
func loadActorCode(codeID abi.ActorCodeID) (vmr.ActorCode, error) {
panic("TODO")
// TODO: resolve circular dependency
// // load the code from StateTree.
// // TODO: this is going to be enabled in the future.
// // code, err := loadCodeFromStateTree(input.InTree, codeCID)
// return staticActorCodeRegistry.LoadActor(codeCID)
}
Exit codes
package exitcode
// Common error codes that may be shared by different actors.
// Actors may also define their own codes, including redefining these values.
const (
// Indicates a method parameter is invalid.
ErrIllegalArgument = FirstActorErrorCode + iota
// Indicates a requested resource does not exist.
ErrNotFound
// Indicates an action is disallowed.
ErrForbidden
// Indicates a balance of funds is insufficient.
ErrInsufficientFunds
// Indicates an actor's internal state is invalid.
ErrIllegalState
// Indicates de/serialization failure within actor code.
ErrSerialization
// Common error codes stop here. If you define a common error code above
// this value, it will have conflicting interpretations.
FirstActorSpecificExitCode = ExitCode(32)
)
VM Gas Cost Constants
package runtime
import (
abi "github.com/filecoin-project/specs-actors/actors/abi"
actor "github.com/filecoin-project/specs-actors/actors/builtin"
msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
util "github.com/filecoin-project/specs/util"
)
type Bytes = util.Bytes
var TODO = util.TODO
var (
// TODO: assign all of these.
GasAmountPlaceholder = msg.GasAmount_FromInt(1)
GasAmountPlaceholder_UpdateStateTree = GasAmountPlaceholder
)
var (
///////////////////////////////////////////////////////////////////////////
// System operations
///////////////////////////////////////////////////////////////////////////
// Gas cost charged to the originator of an on-chain message (regardless of
// whether it succeeds or fails in application) is given by:
// OnChainMessageBase + len(serialized message)*OnChainMessagePerByte
// Together, these account for the cost of message propagation and validation,
// up to but excluding any actual processing by the VM.
// This is the cost a block producer burns when including an invalid message.
OnChainMessageBase = GasAmountPlaceholder
OnChainMessagePerByte = GasAmountPlaceholder
// Gas cost charged to the originator of a non-nil return value produced
// by an on-chain message is given by:
// len(return value)*OnChainReturnValuePerByte
OnChainReturnValuePerByte = GasAmountPlaceholder
// Gas cost for any message send execution (including the top-level one
// initiated by an on-chain message).
// This accounts for the cost of loading sender and receiver actors and
// (for top-level messages) incrementing the sender's sequence number.
// Load and store of actor sub-state is charged separately.
SendBase = GasAmountPlaceholder
// Gas cost charged, in addition to SendBase, if a message send
// is accompanied by any nonzero currency amount.
// Accounts for writing receiver's new balance (the sender's state is
// already accounted for).
SendTransferFunds = GasAmountPlaceholder
// Gas cost charged, in addition to SendBase, if a message invokes
// a method on the receiver.
// Accounts for the cost of loading receiver code and method dispatch.
SendInvokeMethod = GasAmountPlaceholder
// Gas cost (Base + len*PerByte) for any Get operation to the IPLD store
// in the runtime VM context.
IpldGetBase = GasAmountPlaceholder
IpldGetPerByte = GasAmountPlaceholder
// Gas cost (Base + len*PerByte) for any Put operation to the IPLD store
// in the runtime VM context.
//
// Note: these costs should be significantly higher than the costs for Get
// operations, since they reflect not only serialization/deserialization
// but also persistent storage of chain data.
IpldPutBase = GasAmountPlaceholder
IpldPutPerByte = GasAmountPlaceholder
// Gas cost for updating an actor's substate (i.e., UpdateRelease).
// This is in addition to a per-byte fee for the state as for IPLD Get/Put.
UpdateActorSubstate = GasAmountPlaceholder_UpdateStateTree
// Gas cost for creating a new actor (via InitActor's Exec method).
// Actor sub-state is charged separately.
ExecNewActor = GasAmountPlaceholder
// Gas cost for deleting an actor.
DeleteActor = GasAmountPlaceholder
///////////////////////////////////////////////////////////////////////////
// Pure functions (VM ABI)
///////////////////////////////////////////////////////////////////////////
// Gas cost charged per public-key cryptography operation (e.g., signature
// verification).
PublicKeyCryptoOp = GasAmountPlaceholder
)
func OnChainMessage(onChainMessageLen int) msg.GasAmount {
return msg.GasAmount_Affine(OnChainMessageBase, onChainMessageLen, OnChainMessagePerByte)
}
func OnChainReturnValue(returnValue Bytes) msg.GasAmount {
retLen := 0
if returnValue != nil {
retLen = len(returnValue)
}
return msg.GasAmount_Affine(msg.GasAmount_Zero(), retLen, OnChainReturnValuePerByte)
}
func IpldGet(dataSize int) msg.GasAmount {
return msg.GasAmount_Affine(IpldGetBase, dataSize, IpldGetPerByte)
}
func IpldPut(dataSize int) msg.GasAmount {
return msg.GasAmount_Affine(IpldPutBase, dataSize, IpldPutPerByte)
}
func InvokeMethod(value abi.TokenAmount, method abi.MethodNum) msg.GasAmount {
ret := SendBase
if value != abi.TokenAmount(0) {
ret = ret.Add(SendTransferFunds)
}
if method != actor.MethodSend {
ret = ret.Add(SendInvokeMethod)
}
return ret
}
System Actors
There are eleven (11) builtin System Actors in total, but not all of them interact with the VM. Each actor is identified by a Code ID (or CID).
There are two system actors required for VM processing:
- the InitActor, which initializes new actors and records the network name, and
- the CronActor, a scheduler actor that runs critical functions at every epoch.

Another two actors interact with the VM:
- the AccountActor, responsible for user accounts (non-singleton), and
- the RewardActor, responsible for block rewards and token vesting (singleton).
The remaining seven (7) builtin System Actors do not interact directly with the VM:
- StorageMarketActor: responsible for managing storage and retrieval deals [Market Actor Repo]
- StorageMinerActor: responsible for storage mining operations and collecting proofs [Storage Miner Actor Repo]
- MultisigActor (or Multi-Signature Wallet Actor): responsible for operations involving the Filecoin wallet [Multisig Actor Repo]
- PaymentChannelActor: responsible for setting up and settling funds related to payment channels [Paych Actor Repo]
- StoragePowerActor: responsible for keeping track of the storage power allocated to each storage miner [Storage Power Actor]
- VerifiedRegistryActor: responsible for managing verified clients [Verifreg Actor Repo]
- SystemActor: the general system actor [System Actor Repo]
CronActor
Built in to the genesis state, the CronActor's dispatch table invokes the StoragePowerActor and StorageMarketActor for them to maintain internal state and process deferred events. It could in principle invoke other actors after a network upgrade.
package cron
import (
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
// The cron actor is a built-in singleton that sends messages to other registered actors at the end of each epoch.
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.EpochTick,
}
}
var _ abi.Invokee = Actor{}
type ConstructorParams struct {
Entries []Entry
}
func (a Actor) Constructor(rt vmr.Runtime, params *ConstructorParams) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
rt.State().Create(ConstructState(params.Entries))
return nil
}
// Invoked by the system after all other messages in the epoch have been processed.
func (a Actor) EpochTick(rt vmr.Runtime, _ *adt.EmptyValue) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
var st State
rt.State().Readonly(&st)
for _, entry := range st.Entries {
_, _ = rt.Send(entry.Receiver, entry.MethodNum, nil, abi.NewTokenAmount(0))
// Any error and return value are ignored.
}
return nil
}
InitActor
The InitActor has the power to create new actors, e.g., those that enter the system. It maintains a table resolving public keys and temporary actor addresses to their canonical ID-addresses. Invalid CIDs should not get committed to the state tree.
Note that the canonical ID address does not persist across chain re-organizations; the actor address or public key does.
package init
import (
addr "github.com/filecoin-project/go-address"
cid "github.com/ipfs/go-cid"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
runtime "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
autil "github.com/filecoin-project/specs-actors/actors/util"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
// The init actor uniquely has the power to create new actors.
// It maintains a table resolving pubkey and temporary actor addresses to the canonical ID-addresses.
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.Exec,
}
}
var _ abi.Invokee = Actor{}
type ConstructorParams struct {
NetworkName string
}
func (a Actor) Constructor(rt runtime.Runtime, params *ConstructorParams) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
emptyMap, err := adt.MakeEmptyMap(adt.AsStore(rt)).Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
st := ConstructState(emptyMap, params.NetworkName)
rt.State().Create(st)
return nil
}
type ExecParams struct {
CodeCID cid.Cid `checked:"true"` // invalid CIDs won't get committed to the state tree
ConstructorParams []byte
}
type ExecReturn struct {
IDAddress addr.Address // The canonical ID-based address for the actor.
RobustAddress addr.Address // A more expensive but re-org-safe address for the newly created actor.
}
func (a Actor) Exec(rt runtime.Runtime, params *ExecParams) *ExecReturn {
rt.ValidateImmediateCallerAcceptAny()
callerCodeCID, ok := rt.GetActorCodeCID(rt.Message().Caller())
autil.AssertMsg(ok, "no code for actor at %s", rt.Message().Caller())
if !canExec(callerCodeCID, params.CodeCID) {
rt.Abortf(exitcode.ErrForbidden, "caller type %v cannot exec actor type %v", callerCodeCID, params.CodeCID)
}
// Compute a re-org-stable address.
// This address exists for use by messages coming from outside the system, in order to
// stably address the newly created actor even if a chain re-org causes it to end up with
// a different ID.
uniqueAddress := rt.NewActorAddress()
// Allocate an ID for this actor.
// Store mapping of pubkey or actor address to actor ID
var st State
var idAddr addr.Address
rt.State().Transaction(&st, func() {
var err error
idAddr, err = st.MapAddressToNewID(adt.AsStore(rt), uniqueAddress)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to allocate ID address")
})
// Create an empty actor.
rt.CreateActor(params.CodeCID, idAddr)
// Invoke constructor.
_, code := rt.Send(idAddr, builtin.MethodConstructor, runtime.CBORBytes(params.ConstructorParams), rt.Message().ValueReceived())
builtin.RequireSuccess(rt, code, "constructor failed")
return &ExecReturn{idAddr, uniqueAddress}
}
func canExec(callerCodeID cid.Cid, execCodeID cid.Cid) bool {
switch execCodeID {
case builtin.StorageMinerActorCodeID:
if callerCodeID == builtin.StoragePowerActorCodeID {
return true
}
return false
case builtin.PaymentChannelActorCodeID, builtin.MultisigActorCodeID:
return true
default:
return false
}
}
RewardActor
The RewardActor is where unminted Filecoin tokens are kept. The actor distributes rewards directly to miner actors, where they are locked for vesting. The reward value used for the current epoch is updated at the end of an epoch through a cron tick.
package reward
import (
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/specs-actors/actors/util/smoothing"
abi "github.com/filecoin-project/specs-actors/actors/abi"
big "github.com/filecoin-project/specs-actors/actors/abi/big"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
. "github.com/filecoin-project/specs-actors/actors/util"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.AwardBlockReward,
3: a.ThisEpochReward,
4: a.UpdateNetworkKPI,
}
}
var _ abi.Invokee = Actor{}
func (a Actor) Constructor(rt vmr.Runtime, currRealizedPower *abi.StoragePower) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
if currRealizedPower == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "argument should not be nil")
return nil // linter does not understand abort exiting
}
st := ConstructState(*currRealizedPower)
rt.State().Create(st)
return nil
}
type AwardBlockRewardParams struct {
Miner address.Address
Penalty abi.TokenAmount // penalty for including bad messages in a block, >= 0
GasReward abi.TokenAmount // gas reward from all gas fees in a block, >= 0
WinCount int64 // number of reward units won, > 0
}
// Awards a reward to a block producer.
// This method is called only by the system actor, implicitly, as the last message in the evaluation of a block.
// The system actor thus computes the parameters and attached value.
//
// The reward includes two components:
// - the epoch block reward, computed and paid from the reward actor's balance,
// - the block gas reward, expected to be transferred to the reward actor with this invocation.
//
// The reward is reduced before the residual is credited to the block producer, by:
// - a penalty amount, provided as a parameter, which is burnt.
func (a Actor) AwardBlockReward(rt vmr.Runtime, params *AwardBlockRewardParams) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
priorBalance := rt.CurrentBalance()
if params.Penalty.LessThan(big.Zero()) {
rt.Abortf(exitcode.ErrIllegalArgument, "negative penalty %v", params.Penalty)
}
if params.GasReward.LessThan(big.Zero()) {
rt.Abortf(exitcode.ErrIllegalArgument, "negative gas reward %v", params.GasReward)
}
if priorBalance.LessThan(params.GasReward) {
rt.Abortf(exitcode.ErrIllegalState, "actor current balance %v insufficient to pay gas reward %v",
priorBalance, params.GasReward)
}
if params.WinCount <= 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid win count %d", params.WinCount)
}
minerAddr, ok := rt.ResolveAddress(params.Miner)
if !ok {
rt.Abortf(exitcode.ErrNotFound, "failed to resolve given owner address")
}
penalty := abi.NewTokenAmount(0)
totalReward := big.Zero()
var st State
rt.State().Transaction(&st, func() {
blockReward := big.Mul(st.ThisEpochReward, big.NewInt(params.WinCount))
blockReward = big.Div(blockReward, big.NewInt(builtin.ExpectedLeadersPerEpoch))
totalReward = big.Add(blockReward, params.GasReward)
currBalance := rt.CurrentBalance()
if totalReward.GreaterThan(currBalance) {
rt.Log(vmr.WARN, "reward actor balance %d below totalReward expected %d, paying out rest of balance", currBalance, totalReward)
totalReward = currBalance
blockReward = big.Sub(totalReward, params.GasReward)
// Since we have already asserted the balance is greater than the gas reward, blockReward is >= 0.
AssertMsg(blockReward.GreaterThanEqual(big.Zero()), "programming error, block reward is %v below zero", blockReward)
}
st.TotalMined = big.Add(st.TotalMined, blockReward)
})
// Cap the penalty at the total reward value.
penalty = big.Min(params.Penalty, totalReward)
// Reduce the payable reward by the penalty.
rewardPayable := big.Sub(totalReward, penalty)
AssertMsg(big.Add(rewardPayable, penalty).LessThanEqual(priorBalance),
"reward payable %v + penalty %v exceeds balance %v", rewardPayable, penalty, priorBalance)
// if this fails, we can assume the miner is responsible and avoid failing here.
_, code := rt.Send(minerAddr, builtin.MethodsMiner.AddLockedFund, &rewardPayable, rewardPayable)
if !code.IsSuccess() {
rt.Log(vmr.ERROR, "failed to send AddLockedFund call to the miner actor with funds: %v, code: %v", rewardPayable, code)
_, code := rt.Send(builtin.BurntFundsActorAddr, builtin.MethodSend, nil, rewardPayable)
if !code.IsSuccess() {
rt.Log(vmr.ERROR, "failed to send unsent reward to the burnt funds actor, code: %v", code)
}
}
// Burn the penalty amount.
if penalty.GreaterThan(abi.NewTokenAmount(0)) {
_, code = rt.Send(builtin.BurntFundsActorAddr, builtin.MethodSend, nil, penalty)
builtin.RequireSuccess(rt, code, "failed to send penalty to burnt funds actor")
}
return nil
}
type ThisEpochRewardReturn struct {
ThisEpochReward abi.TokenAmount
ThisEpochRewardSmoothed *smoothing.FilterEstimate
ThisEpochBaselinePower abi.StoragePower
}
// The award value used for the current epoch, updated at the end of an epoch
// through cron tick. In the case previous epochs were null blocks this
// is the reward value as calculated at the last non-null epoch.
func (a Actor) ThisEpochReward(rt vmr.Runtime, _ *adt.EmptyValue) *ThisEpochRewardReturn {
rt.ValidateImmediateCallerAcceptAny()
var st State
rt.State().Readonly(&st)
return &ThisEpochRewardReturn{
ThisEpochReward: st.ThisEpochReward,
ThisEpochBaselinePower: st.ThisEpochBaselinePower,
ThisEpochRewardSmoothed: st.ThisEpochRewardSmoothed,
}
}
// Called at the end of each epoch by the power actor (in turn by its cron hook).
// This is only invoked for non-empty tipsets, but catches up any number of null
// epochs to compute the next epoch reward.
func (a Actor) UpdateNetworkKPI(rt vmr.Runtime, currRealizedPower *abi.StoragePower) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)
if currRealizedPower == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "argument should not be nil")
}
var st State
rt.State().Transaction(&st, func() {
prev := st.Epoch
// if there were null runs catch up the computation until
// st.Epoch == rt.CurrEpoch()
for st.Epoch < rt.CurrEpoch() {
// Update to next epoch to process null rounds
st.updateToNextEpoch(*currRealizedPower)
}
st.updateToNextEpochWithReward(*currRealizedPower)
// only update smoothed estimates after updating reward and epoch
st.updateSmoothedEstimates(st.Epoch - prev)
})
return nil
}
AccountActor
The AccountActor is responsible for user accounts. Account actors are not created by the InitActor; their constructor is invoked by the system. Account actors are created by sending a message to a public-key style address. The address must be a BLS or SECP256K1 address; otherwise the call aborts with an exit error. Creating the account actor updates the state tree with the new actor's address.
package account
import (
addr "github.com/filecoin-project/go-address"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
1: a.Constructor,
2: a.PubkeyAddress,
}
}
var _ abi.Invokee = Actor{}
type State struct {
Address addr.Address
}
func (a Actor) Constructor(rt vmr.Runtime, address *addr.Address) *adt.EmptyValue {
// Account actors are created implicitly by sending a message to a pubkey-style address.
// This constructor is not invoked by the InitActor, but by the system.
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
switch address.Protocol() {
case addr.SECP256K1:
case addr.BLS:
break // ok
default:
rt.Abortf(exitcode.ErrIllegalArgument, "address must use BLS or SECP protocol, got %v", address.Protocol())
}
st := State{Address: *address}
rt.State().Create(&st)
return nil
}
// Fetches the pubkey-type address from this actor.
func (a Actor) PubkeyAddress(rt vmr.Runtime, _ *adt.EmptyValue) *addr.Address {
rt.ValidateImmediateCallerAcceptAny()
var st State
rt.State().Readonly(&st)
return &st.Address
}
VM Interpreter - Message Invocation (Outside VM)
The VM interpreter orchestrates the execution of messages from a tipset on that tipset’s parent state, producing a new state and a sequence of message receipts. The CIDs of this new state and of the receipt collection are included in blocks from the subsequent epoch, which must agree about those CIDs in order to form a new tipset.
Every state change is driven by the execution of a message. The messages from all the blocks in a tipset must be executed in order to produce a next state. All messages from the first block are executed before those of second and subsequent blocks in the tipset. For each block, BLS-aggregated messages are executed first, then SECP signed messages.
Implicit messages
In addition to the messages explicitly included in each block, a few state changes at each epoch are made by implicit messages. Implicit messages are not transmitted between nodes, but constructed by the interpreter at evaluation time.
For each block in a tipset, an implicit message:
- invokes the block producer’s miner actor to process the (already-validated) election PoSt submission, as the first message in the block;
- invokes the reward actor to pay the block reward to the miner’s owner account, as the final message in the block;
For each tipset, an implicit message:
- invokes the cron actor to process automated checks and payments, as the final message in the tipset.
All implicit messages are constructed with a From address being the distinguished system account actor. They specify a gas price of zero, but must be included in the computation. They must succeed (have an exit code of zero) in order for the new state to be computed. Receipts for implicit messages are not included in the receipt list; only explicit messages have an explicit receipt.
Gas payments
In most cases, the sender of a message pays the miner which produced the block including that message a gas fee for its execution.
The gas payments for each message execution are paid to the miner owner account immediately after that message is executed. There are no encumbrances to either the block reward or gas fees earned: both may be spent immediately.
Duplicate messages
Since different miners produce blocks in the same epoch, multiple blocks in a single tipset may include the same message (identified by the same CID). When this happens, the message is processed only the first time it is encountered in the tipset's canonical order. Subsequent instances of the message are ignored: they do not mutate state, do not produce a receipt, and do not pay gas to the block producer.
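This dedup rule can be sketched with a set keyed by message identifier (plain strings stand in here for the message CIDs the interpreter actually uses):

```go
package main

import "fmt"

// executionOrder returns a tipset's messages in canonical execution order,
// skipping any message already seen in an earlier block of the same tipset.
func executionOrder(blocks [][]string) []string {
	seen := make(map[string]struct{})
	var executed []string
	for _, blk := range blocks {
		for _, m := range blk {
			if _, dup := seen[m]; dup {
				continue // already executed: no state change, no receipt, no gas
			}
			seen[m] = struct{}{}
			executed = append(executed, m)
		}
	}
	return executed
}

func main() {
	// Two blocks in one tipset; both include "msgB".
	fmt.Println(executionOrder([][]string{{"msgA", "msgB"}, {"msgB", "msgC"}})) // [msgA msgB msgC]
}
```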
The sequence of executions for a tipset is thus summarised:
- process election post for first block
- messages for first block (BLS before SECP)
- pay reward for first block
- process election post for second block
- messages for second block (BLS before SECP, skipping any already encountered)
- pay reward for second block
[... subsequent blocks ...]
- cron tick
Message validity and failure
Every message in a valid block can be processed and produce a receipt (note that block validity implies all messages are syntactically valid – see Message Syntax – and correctly signed). However, execution may or may not succeed, depending on the state to which the message is applied. If the execution of a message fails, the corresponding receipt will carry a non-zero exit code.
If a message fails due to a reason that can reasonably be attributed to the miner including a message that could never have succeeded in the parent state, or because the sender lacks funds to cover the maximum message cost, then the miner pays a penalty by burning the gas fee (rather than the sender paying fees to the block miner).
The only state changes resulting from a message failure are either:
- incrementing of the sending actor's CallSeqNum, and payment of gas fees from the sender to the owner of the miner of the block including the message; or
- a penalty equivalent to the gas fee for the failed message, burnt by the miner (sender's CallSeqNum unchanged).
A message execution will fail if, in the immediately preceding state:
- the From actor does not exist in the state (miner penalized),
- the From actor is not an account actor (miner penalized),
- the CallSeqNum of the message does not match the CallSeqNum of the From actor (miner penalized),
- the From actor does not have sufficient balance to cover the sum of the message Value plus the maximum gas cost, GasLimit * GasPrice (miner penalized),
- the To actor does not exist in state and the To address is not a pubkey-style address,
- the To actor exists (or is implicitly created as an account) but does not have a method corresponding to the non-zero MethodNum,
- the deserialized Params is not an array of length matching the arity of the To actor's MethodNum method,
- the deserialized Params are not valid for the types specified by the To actor's MethodNum method,
- the invoked method consumes more gas than the GasLimit allows,
- the invoked method exits with a non-zero code (via Runtime.Abort()), or
- any inner message sent by the receiver fails for any of the above reasons.
Note that if the To actor does not exist in state and the address is a valid H(pubkey) address, it will be created as an account actor.
(You can see the old VM interpreter here.)
vm/interpreter
interface
import addr "github.com/filecoin-project/go-address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import vmri "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/impl"
import node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import abi "github.com/filecoin-project/specs-actors/actors/abi"
type UInt64 UInt
// The messages from one block in a tipset.
type BlockMessages struct {
BLSMessages [msg.UnsignedMessage]
SECPMessages [msg.SignedMessage]
Miner addr.Address // The block miner's actor address
PoStProof Bytes // The miner's Election PoSt proof output
}
// The messages from a tipset, grouped by block.
type TipSetMessages struct {
Blocks [BlockMessages]
Epoch UInt64 // The chain epoch of the blocks
}
type VMInterpreter struct {
Node node_base.FilecoinNode
ApplyTipSetMessages(
inTree st.StateTree
tipset chain.Tipset
msgs TipSetMessages
) struct {outTree st.StateTree, ret [vmri.MessageReceipt]}
ApplyMessage(
inTree st.StateTree
chain chain.Chain
msg msg.UnsignedMessage
onChainMsgSize int
minerAddr addr.Address
) struct {
outTree st.StateTree
ret vmri.MessageReceipt
retMinerPenalty abi.TokenAmount
}
}
vm/interpreter
implementation
package interpreter
import (
addr "github.com/filecoin-project/go-address"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
initact "github.com/filecoin-project/specs-actors/actors/builtin/init"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
serde "github.com/filecoin-project/specs-actors/actors/serde"
ipld "github.com/filecoin-project/specs/libraries/ipld"
chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
actstate "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
gascost "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/gascost"
vmri "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/impl"
st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
util "github.com/filecoin-project/specs/util"
cid "github.com/ipfs/go-cid"
)
type Bytes = util.Bytes
var Assert = util.Assert
var TODO = util.TODO
var IMPL_FINISH = util.IMPL_FINISH
type SenderResolveSpec int
const (
SenderResolveSpec_OK SenderResolveSpec = 1 + iota
SenderResolveSpec_Invalid
)
// Applies all the message in a tipset, along with implicit block- and tipset-specific state
// transitions.
func (vmi *VMInterpreter_I) ApplyTipSetMessages(inTree st.StateTree, tipset chain.Tipset, msgs TipSetMessages) (outTree st.StateTree, receipts []vmri.MessageReceipt) {
outTree = inTree
seenMsgs := make(map[cid.Cid]struct{}) // CIDs of messages already seen once.
var receipt vmri.MessageReceipt
store := vmi.Node().Repository().StateStore()
// get chain from Tipset
chainRand := &chain.Chain_I{
HeadTipset_: tipset,
}
for _, blk := range msgs.Blocks() {
minerAddr := blk.Miner()
util.Assert(minerAddr.Protocol() == addr.ID) // Block syntactic validation requires this.
// Process block miner's Election PoSt.
epostMessage := _makeElectionPoStMessage(outTree, minerAddr)
outTree = _applyMessageBuiltinAssert(store, outTree, chainRand, epostMessage, minerAddr)
minerPenaltyTotal := abi.TokenAmount(0)
var minerPenaltyCurr abi.TokenAmount
minerGasRewardTotal := abi.TokenAmount(0)
var minerGasRewardCurr abi.TokenAmount
// Process BLS messages from the block.
for _, m := range blk.BLSMessages() {
_, found := seenMsgs[_msgCID(m)]
if found {
continue
}
onChainMessageLen := len(msg.Serialize_UnsignedMessage(m))
outTree, receipt, minerPenaltyCurr, minerGasRewardCurr = vmi.ApplyMessage(outTree, chainRand, m, onChainMessageLen, minerAddr)
minerPenaltyTotal += minerPenaltyCurr
minerGasRewardTotal += minerGasRewardCurr
receipts = append(receipts, receipt)
seenMsgs[_msgCID(m)] = struct{}{}
}
// Process SECP messages from the block.
for _, sm := range blk.SECPMessages() {
m := sm.Message()
_, found := seenMsgs[_msgCID(m)]
if found {
continue
}
onChainMessageLen := len(msg.Serialize_SignedMessage(sm))
outTree, receipt, minerPenaltyCurr, minerGasRewardCurr = vmi.ApplyMessage(outTree, chainRand, m, onChainMessageLen, minerAddr)
minerPenaltyTotal += minerPenaltyCurr
minerGasRewardTotal += minerGasRewardCurr
receipts = append(receipts, receipt)
seenMsgs[_msgCID(m)] = struct{}{}
}
// transfer gas reward from BurntFundsActor to RewardActor
_withTransferFundsAssert(outTree, builtin.BurntFundsActorAddr, builtin.RewardActorAddr, minerGasRewardTotal)
// Pay block reward.
rewardMessage := _makeBlockRewardMessage(outTree, minerAddr, minerPenaltyTotal, minerGasRewardTotal)
outTree = _applyMessageBuiltinAssert(store, outTree, chainRand, rewardMessage, minerAddr)
}
// Invoke cron tick.
// Since this is outside any block, the top level block winner is declared as the system actor.
cronMessage := _makeCronTickMessage(outTree)
outTree = _applyMessageBuiltinAssert(store, outTree, chainRand, cronMessage, builtin.SystemActorAddr)
return
}
func (vmi *VMInterpreter_I) ApplyMessage(inTree st.StateTree, chain chain.Chain, message msg.UnsignedMessage, onChainMessageSize int, minerAddr addr.Address) (
retTree st.StateTree, retReceipt vmri.MessageReceipt, retMinerPenalty abi.TokenAmount, retMinerGasReward abi.TokenAmount) {
store := vmi.Node().Repository().StateStore()
senderAddr := _resolveSender(store, inTree, message.From())
vmiGasRemaining := message.GasLimit()
vmiGasUsed := msg.GasAmount_Zero()
_applyReturn := func(
tree st.StateTree, invocOutput vmr.InvocOutput, exitCode exitcode.ExitCode,
senderResolveSpec SenderResolveSpec) {
vmiGasRemainingFIL := _gasToFIL(vmiGasRemaining, message.GasPrice())
vmiGasUsedFIL := _gasToFIL(vmiGasUsed, message.GasPrice())
switch senderResolveSpec {
case SenderResolveSpec_OK:
// In this case, the sender is valid and has already transferred funds to the burnt funds actor
// sufficient for the gas limit. Thus, we may refund the unused gas funds to the sender here.
Assert(!message.GasLimit().LessThan(vmiGasUsed))
Assert(message.GasLimit().Equals(vmiGasUsed.Add(vmiGasRemaining)))
tree = _withTransferFundsAssert(tree, builtin.BurntFundsActorAddr, senderAddr, vmiGasRemainingFIL)
retMinerGasReward = vmiGasUsedFIL
retMinerPenalty = abi.TokenAmount(0)
case SenderResolveSpec_Invalid:
retMinerPenalty = vmiGasUsedFIL
retMinerGasReward = abi.TokenAmount(0)
default:
Assert(false)
}
retTree = tree
retReceipt = vmri.MessageReceipt_Make(invocOutput, exitCode, vmiGasUsed)
}
// TODO move this to a package with a less redundant name
_applyError := func(tree st.StateTree, errExitCode exitcode.ExitCode, senderResolveSpec SenderResolveSpec) {
_applyReturn(tree, vmr.InvocOutput_Make(nil), errExitCode, senderResolveSpec)
}
// Deduct an amount of gas corresponding to cost about to be incurred, but not necessarily
// incurred yet.
_vmiAllocGas := func(amount msg.GasAmount) (vmiAllocGasOK bool) {
vmiGasRemaining, vmiAllocGasOK = vmiGasRemaining.SubtractIfNonnegative(amount)
vmiGasUsed = message.GasLimit().Subtract(vmiGasRemaining)
Assert(!vmiGasRemaining.LessThan(msg.GasAmount_Zero()))
Assert(!vmiGasUsed.LessThan(msg.GasAmount_Zero()))
return
}
// Deduct an amount of gas corresponding to costs already incurred, and for which the
// gas cost must be paid even if it would cause the gas used to exceed the limit.
_vmiBurnGas := func(amount msg.GasAmount) (vmiBurnGasOK bool) {
vmiGasUsedPre := vmiGasUsed
vmiBurnGasOK = _vmiAllocGas(amount)
if !vmiBurnGasOK {
vmiGasRemaining = msg.GasAmount_Zero()
vmiGasUsed = vmiGasUsedPre.Add(amount)
}
return
}
ok := _vmiBurnGas(gascost.OnChainMessage(onChainMessageSize))
if !ok {
// Invalid message; insufficient gas limit to pay for the on-chain message size.
_applyError(inTree, exitcode.OutOfGas, SenderResolveSpec_Invalid)
return
}
fromActor, ok := inTree.GetActor(senderAddr)
if !ok {
// Execution error; sender does not exist at time of message execution.
_applyError(inTree, exitcode.ActorNotFound, SenderResolveSpec_Invalid)
return
}
// make sure this is the right message order for fromActor
if message.CallSeqNum() != fromActor.CallSeqNum() {
_applyError(inTree, exitcode.InvalidCallSeqNum, SenderResolveSpec_Invalid)
return
}
// Check sender balance.
gasLimitCost := _gasToFIL(message.GasLimit(), message.GasPrice())
tidx := indicesFromStateTree(inTree)
networkTxnFee := tidx.NetworkTransactionFee(
inTree.GetActorCodeID_Assert(message.To()), message.Method())
totalCost := message.Value() + gasLimitCost + networkTxnFee
if fromActor.Balance() < totalCost {
// Execution error; sender does not have sufficient funds to pay for the gas limit.
_applyError(inTree, exitcode.InsufficientFunds_System, SenderResolveSpec_Invalid)
return
}
// At this point, construct compTreePreSend as a state snapshot which includes
// the sender paying gas, and the sender's CallSeqNum being incremented;
// at least that much state change will be persisted even if the
// method invocation subsequently fails.
compTreePreSend := _withTransferFundsAssert(inTree, senderAddr, builtin.BurntFundsActorAddr, gasLimitCost+networkTxnFee)
compTreePreSend = compTreePreSend.Impl().WithIncrementedCallSeqNum_Assert(senderAddr)
invoc := _makeInvocInput(message)
sendRet, compTreePostSend := _applyMessageInternal(store, compTreePreSend, chain, message.CallSeqNum(), senderAddr, invoc, vmiGasRemaining, minerAddr)
ok = _vmiBurnGas(sendRet.GasUsed)
if !ok {
panic("Interpreter error: runtime execution used more gas than provided")
}
ok = _vmiAllocGas(gascost.OnChainReturnValue(sendRet.ReturnValue))
if !ok {
// Insufficient gas remaining to cover the on-chain return value; proceed as in the case
// of method execution failure.
_applyError(compTreePreSend, exitcode.OutOfGas, SenderResolveSpec_OK)
return
}
compTreeRet := compTreePreSend
if sendRet.ExitCode.AllowsStateUpdate() {
compTreeRet = compTreePostSend
}
_applyReturn(
compTreeRet, vmr.InvocOutput_Make(sendRet.ReturnValue), sendRet.ExitCode, SenderResolveSpec_OK)
return
}
// Resolves an address through the InitActor's map.
// Returns the resolved address (which will be an ID address) if found, else the original address.
func _resolveSender(store ipld.GraphStore, tree st.StateTree, address addr.Address) addr.Address {
initState, ok := tree.GetActor(builtin.InitActorAddr)
util.Assert(ok)
serialized, ok := store.Get(cid.Cid(initState.State()))
util.Assert(ok)
var initSubState initact.InitActorState
serde.MustDeserialize(serialized, &initSubState)
return initSubState.ResolveAddress(address)
}
func _applyMessageBuiltinAssert(store ipld.GraphStore, tree st.StateTree, chain chain.Chain, message msg.UnsignedMessage, minerAddr addr.Address) st.StateTree {
senderAddr := message.From()
Assert(senderAddr == builtin.SystemActorAddr)
Assert(senderAddr.Protocol() == addr.ID)
// Note: this message CallSeqNum is never checked (b/c it's created in this file), but probably should be.
// Since it changes state, we should be sure about the state transition.
// Alternatively we could special-case the system actor and declare that its CallSeqNumber
// never changes (saving us the state-change overhead).
tree = tree.Impl().WithIncrementedCallSeqNum_Assert(senderAddr)
invoc := _makeInvocInput(message)
retReceipt, retTree := _applyMessageInternal(store, tree, chain, message.CallSeqNum(), senderAddr, invoc, message.GasLimit(), minerAddr)
if retReceipt.ExitCode != exitcode.OK() {
panic("internal message application failed")
}
return retTree
}
func _applyMessageInternal(store ipld.GraphStore, tree st.StateTree, chain chain.Chain, messageCallSequenceNumber actstate.CallSeqNum, senderAddr addr.Address, invoc vmr.InvocInput,
gasRemainingInit msg.GasAmount, topLevelBlockWinner addr.Address) (vmri.MessageReceipt, st.StateTree) {
rt := vmri.VMContext_Make(
store,
chain,
senderAddr,
topLevelBlockWinner,
messageCallSequenceNumber,
actstate.CallSeqNum(0),
tree,
senderAddr,
abi.TokenAmount(0),
gasRemainingInit,
)
return rt.SendToplevelFromInterpreter(invoc)
}
func _withTransferFundsAssert(tree st.StateTree, from addr.Address, to addr.Address, amount abi.TokenAmount) st.StateTree {
// TODO: assert amount nonnegative
retTree, err := tree.Impl().WithFundsTransfer(from, to, amount)
if err != nil {
panic("Interpreter error: insufficient funds (or transfer error) despite checks")
} else {
return retTree
}
}
func indicesFromStateTree(st st.StateTree) indices.Indices {
TODO()
panic("")
}
func _gasToFIL(gas msg.GasAmount, price abi.TokenAmount) abi.TokenAmount {
IMPL_FINISH()
panic("") // BigInt arithmetic
// return abi.TokenAmount(util.UVarint(gas) * util.UVarint(price))
}
func _makeInvocInput(message msg.UnsignedMessage) vmr.InvocInput {
return vmr.InvocInput{
To: message.To(), // Receiver address is resolved during execution.
Method: message.Method(),
Params: message.Params(),
Value: message.Value(),
}
}
// Builds a message for paying block reward to a miner's owner.
func _makeBlockRewardMessage(state st.StateTree, minerAddr addr.Address, penalty abi.TokenAmount, gasReward abi.TokenAmount) msg.UnsignedMessage {
params := serde.MustSerializeParams(minerAddr, penalty)
TODO() // serialize other inputs to BlockRewardMessage or get this from query in RewardActor
sysActor, ok := state.GetActor(builtin.SystemActorAddr)
Assert(ok)
return &msg.UnsignedMessage_I{
From_: builtin.SystemActorAddr,
To_: builtin.RewardActorAddr,
Method_: builtin.Method_RewardActor_AwardBlockReward,
Params_: params,
CallSeqNum_: sysActor.CallSeqNum(),
Value_: 0,
GasPrice_: 0,
GasLimit_: msg.GasAmount_SentinelUnlimited(),
}
}
// Builds a message for submitting ElectionPost on behalf of a miner actor.
func _makeElectionPoStMessage(state st.StateTree, minerActorAddr addr.Address) msg.UnsignedMessage {
sysActor, ok := state.GetActor(builtin.SystemActorAddr)
Assert(ok)
return &msg.UnsignedMessage_I{
From_: builtin.SystemActorAddr,
To_: minerActorAddr,
Method_: builtin.Method_StorageMinerActor_OnVerifiedElectionPoSt,
Params_: nil,
CallSeqNum_: sysActor.CallSeqNum(),
Value_: 0,
GasPrice_: 0,
GasLimit_: msg.GasAmount_SentinelUnlimited(),
}
}
// Builds a message for invoking the cron actor tick.
func _makeCronTickMessage(state st.StateTree) msg.UnsignedMessage {
sysActor, ok := state.GetActor(builtin.SystemActorAddr)
Assert(ok)
return &msg.UnsignedMessage_I{
From_: builtin.SystemActorAddr,
To_: builtin.CronActorAddr,
Method_: builtin.Method_CronActor_EpochTick,
Params_: nil,
CallSeqNum_: sysActor.CallSeqNum(),
Value_: 0,
GasPrice_: 0,
GasLimit_: msg.GasAmount_SentinelUnlimited(),
}
}
func _msgCID(msg msg.UnsignedMessage) cid.Cid {
panic("TODO")
}
vm/interpreter/registry
package interpreter
import (
"errors"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
accact "github.com/filecoin-project/specs-actors/actors/builtin/account"
cronact "github.com/filecoin-project/specs-actors/actors/builtin/cron"
initact "github.com/filecoin-project/specs-actors/actors/builtin/init"
smarkact "github.com/filecoin-project/specs-actors/actors/builtin/storage_market"
spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
)
var (
ErrActorNotFound = errors.New("Actor Not Found")
)
var staticActorCodeRegistry = &actorCodeRegistry{}
type actorCodeRegistry struct {
code map[abi.ActorCodeID]vmr.ActorCode
}
func (r *actorCodeRegistry) _registerActor(id abi.ActorCodeID, actor vmr.ActorCode) {
r.code[id] = actor
}
func (r *actorCodeRegistry) _loadActor(id abi.ActorCodeID) (vmr.ActorCode, error) {
a, ok := r.code[id]
if !ok {
return nil, ErrActorNotFound
}
return a, nil
}
func RegisterActor(id abi.ActorCodeID, actor vmr.ActorCode) {
staticActorCodeRegistry._registerActor(id, actor)
}
func LoadActor(id abi.ActorCodeID) (vmr.ActorCode, error) {
return staticActorCodeRegistry._loadActor(id)
}
// init is called in Go during initialization of a program.
// this is an idiomatic way to do this. Implementations should approach this
// however they wish. The point is to initialize a static registry with
// built in pure types that have the code for each actor. Once we have
// a way to load code from the StateTree, use that instead.
func init() {
_registerBuiltinActors()
}
func _registerBuiltinActors() {
// TODO
cron := &cronact.CronActor{}
RegisterActor(builtin.InitActorCodeID, &initact.InitActor{})
RegisterActor(builtin.CronActorCodeID, cron)
RegisterActor(builtin.AccountActorCodeID, &accact.AccountActor{})
RegisterActor(builtin.StoragePowerActorCodeID, &spowact.StoragePowerActor{})
RegisterActor(builtin.StorageMarketActorCodeID, &smarkact.StorageMarketActor{})
// wire in CRON actions.
// TODO: move this to CronActor's constructor method
cron.Entries = append(cron.Entries, cronact.CronTableEntry{
ToAddr: builtin.StoragePowerActorAddr,
MethodNum: builtin.Method_StoragePowerActor_OnEpochTickEnd,
})
cron.Entries = append(cron.Entries, cronact.CronTableEntry{
ToAddr: builtin.StorageMarketActorAddr,
MethodNum: builtin.Method_StorageMarketActor_OnEpochTickEnd,
})
}
Blockchain
The Filecoin Blockchain is a distributed virtual machine that achieves consensus, processes messages, accounts for storage, and maintains security in the Filecoin Protocol. It is the main interface linking various actors in the Filecoin system.
It includes:
- A Message Pool subsystem that nodes use to track and propagate messages related to the storage market throughout a gossip network.
- A Virtual Machine subsystem used to interpret and execute messages in order to update system state.
- A State Tree subsystem which manages the creation and maintenance of state trees (the system state) deterministically generated by the vm from a given subchain.
- A Chain Synchronisation (ChainSync) subsystem that tracks and propagates validated message blocks, maintaining sets of candidate chains on which the miner may mine and running syntactic validation on incoming blocks.
- A Storage Power Consensus subsystem which tracks storage state (i.e., the Storage Subsystem) for a given chain and helps the blockchain system choose subchains to extend and blocks to include in them.
And also:
- A Chain Manager – which maintains a given chain’s state, providing facilities to other blockchain subsystems which will query state about the latest chain in order to run, and ensuring incoming blocks are semantically validated before inclusion into the chain.
- A Block Producer – which is called in the event of a successful leader election in order to produce a new block that will extend the current heaviest chain before forwarding it to the syncer for propagation.
At a high-level, the Filecoin blockchain grows through successive rounds of leader election in which a number of miners are elected to generate a block, whose inclusion in the chain will earn them block rewards. Filecoin’s blockchain runs on storage power. That is, its consensus algorithm by which miners agree on which subchain to mine is predicated on the amount of storage backing that subchain. At a high-level, the Storage Power Consensus subsystem maintains a Power Table that tracks the amount of storage that storage miner actors have contributed to the network through Sector commitments and Proofs of Spacetime.
Most of the functions of the Filecoin blockchain system are detailed in the code below.
Blocks
The Block is the main unit of the Filecoin blockchain, as is also the case with most other blockchains. Block messages are directly linked with Tipsets, which are groups of Block messages as detailed later on in this section. In the following we discuss the main structure of a Block message and the process of validating Block messages in the Filecoin blockchain.
Block
A block header contains information relevant to a particular point in time over which the network may achieve consensus.
Note: A block is functionally the same as a block header in the Filecoin protocol. While a block header contains Merkle links to the full system state, messages, and message receipts, a block can be thought of as the full set of this information (not just the Merkle roots, but rather the full data of the state tree, message tree, receipts tree, etc.). Because a full block is quite large, our chain consists of block headers rather than full blocks. We often use the terms block and block header interchangeably.
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import addr "github.com/filecoin-project/go-address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
type ChainWeight UVarint
type MessageReceipt util.Bytes
// On-chain representation of a block header.
type BlockHeader struct {
// Chain linking
Parents [&BlockHeader]
ParentWeight ChainWeight
// State
ParentState &st.StateTree
ParentMessageReceipts &[&MessageReceipt] // array-mapped trie ref
// Consensus things
Epoch abi.ChainEpoch
Timestamp clock.UnixTime
Ticket Ticket
BeaconEntries [BeaconEntry]
Miner addr.Address
ElectionProof &ElectionProof
// Fork Signal bitfield with bits used to advertise support for
// proposed forks and reset if fork is executed.
ForkSignal uint64
// Proposed update
Messages &TxMeta
BLSAggregate filcrypto.Signature
// Signatures
Signature filcrypto.Signature
ForkSignaling uint64
// SerializeSigned() []byte
// ComputeUnsignedFingerprint() []
}
type ElectionProof struct {
VRFProof [byte]
}
type TxMeta struct {
BLSMessages &[&msg.UnsignedMessage] // array-mapped trie
SECPMessages &[&msg.SignedMessage] // array-mapped trie
}
// Internal representation of a full block, with all messages.
type Block struct {
Header BlockHeader
BLSMessages [msg.UnsignedMessage]
SECPMessages [msg.SignedMessage]
}
type BeaconEntry struct {
// Drand Round for the given randomness
Round uint64
// Drand Signature for the given Randomness
Data [byte]
}
// HACK: All of the below was duplicated from posting.id
// in order to get spec to compile. Check the actual source for details
type ElectionPoStVerifyInfo struct {
Candidates [PoStCandidate]
Proof PoStProof
Randomness PoStRandomness
}
type ChallengeTicketsCommitment struct {} // see sector
type PoStCandidate struct {} // see sector
type PoStRandomness struct {} // see sector
type PoStProof struct {}
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import addr "github.com/filecoin-project/go-address"
type Ticket struct {
VRFResult filcrypto.VRFResult
Output Bytes @(cached)
DrawRandomness(round abi.ChainEpoch) Bytes
ValidateSyntax() bool
Verify(
input Bytes
pk filcrypto.VRFPublicKey
minerActorAddr addr.Address
) bool
}
Block syntax validation
Syntax validation refers to validation that may be performed on a block and its messages without reference to outside information such as the parent state tree.
An invalid block must not be transmitted or referenced as a parent.
A syntactically valid block header must decode into fields matching the type definition above.
A syntactically valid header must have:
- between 1 and 5*ec.ExpectedLeaders Parents CIDs if Epoch is greater than zero (else empty Parents),
- a non-negative ParentWeight,
- a Miner address which is an ID-address,
- a non-negative Epoch,
- a positive Timestamp,
- a Ticket with non-empty VRFResult,
- an ElectionPoStOutput containing:
  - a Candidates array with between 1 and EC.ExpectedLeaders values (inclusive),
  - a non-empty PoStRandomness field,
  - a non-empty Proof field,
- a non-empty ForkSignal field.
A syntactically valid full block must have:
- all referenced messages syntactically valid,
- all referenced parent receipts syntactically valid,
- the sum of the serialized sizes of the block header and included messages no greater than block.BlockMaxSize,
- the sum of the gas limits of all explicit messages no greater than block.BlockGasLimit.
Note that validation of the block signature requires access to the miner worker address and public key from the parent tipset state, so signature validation forms part of semantic validation. Similarly, message signature validation requires lookup of the public key associated with each message's From account actor in the block's parent state.
Block semantic validation
Semantic validation refers to validation that requires reference to information outside the block header and messages themselves, in particular the parent tipset and state on which the block is built.
A semantically valid block must have:
- Parents listed in lexicographic order of their header's Ticket,
- Parents that all reference valid blocks and form a valid Tipset,
- ParentState matching the state tree produced by executing the parent tipset's messages (as defined by the VM interpreter) against that tipset's parent state,
- ParentMessageReceipts identifying the receipt list produced by parent tipset execution, with one receipt for each unique message from the parent tipset,
- ParentWeight matching the weight of the chain up to and including the parent tipset,
- an Epoch greater than that of its parents, and
  - not in the future according to the node's local clock reading of the current epoch,
    - blocks with future epochs should not be rejected, but should not be evaluated (validated or included in a tipset) until the appropriate epoch,
  - not farther in the past than the soft finality as defined by SPC Finality,
    - this rule applies only when receiving new gossip blocks (i.e. from the current chain head), not when syncing to the chain for the first time,
- a Miner that is active in the storage power table in the parent tipset state,
- a valid BeaconEntry array (which may be empty),
- a Ticket derived from the minimum ticket from the parent tipset's block headers, with Ticket.VRFResult validly signed by the Miner actor's worker account public key,
- an ElectionPoStOutput yielding winning partial tickets that were generated validly:
  - ElectionPoSt.Randomness is well formed and appropriately drawn from a past tipset according to the PoStLookback,
  - ElectionPoSt.Proof is a valid proof verifying the generation of the ElectionPoSt.Candidates from the Miner's eligible sectors,
  - ElectionPoSt.Candidates contains well-formed PoStCandidates, each of which has a PartialTicket yielding a winning ChallengeTicket in Expected Consensus,
- a Timestamp in seconds that must be
  - not in the future at time of reception,
  - of the precise value implied by the genesis block's timestamp, the network's block time, and the block's Epoch,
- all SECP messages correctly signed by their sending actor's worker account key,
- a BLSAggregate signature that signs the array of CIDs of the BLS messages referenced by the block with their sending actors' keys,
- a valid Signature over the block header's fields from the block's Miner actor's worker account public key.
There is no semantic validation of the messages included in a block beyond validation of their signatures. If all messages included in a block are syntactically valid then they may be executed and produce a receipt.
A chain sync system may perform syntactic and semantic validation in stages in order to minimize unnecessary resource expenditure.
Tipset
Expected Consensus probabilistically elects multiple leaders in each epoch meaning a Filecoin chain may contain zero or multiple blocks at each epoch (one per elected miner). Blocks from the same epoch are assembled into tipsets. The VM Interpreter modifies the Filecoin state tree by executing all messages in a tipset (after de-duplication of identical messages included in more than one block).
Each block references a parent tipset and validates that tipset’s state, while proposing messages to be included for the current epoch. The state to which a new block’s messages apply cannot be known until that block is incorporated into a tipset. It is thus not meaningful to execute the messages from a single block in isolation: a new state tree is only known once all messages in that block’s tipset are executed.
A valid tipset contains a non-empty collection of blocks that have distinct miners and all specify identical:
Epoch
Parents
ParentWeight
StateRoot
ReceiptsRoot
The blocks in a tipset are canonically ordered by the lexicographic ordering of the bytes in each block’s ticket, breaking ties with the bytes of the CID of the block itself.
Due to network propagation delay, it is possible for a miner in epoch N+1 to omit valid blocks mined at epoch N from their parent tipset. This does not make the newly generated block invalid, it does however reduce its weight and chances of being part of the canonical chain in the protocol as defined by EC’s Chain Selection function.
Block producers are expected to coordinate how they select messages for inclusion in blocks in order to avoid duplicates and thus maximize their expected earnings from transaction fees (see Message Pool).
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
type Tipset struct {
BlockCIDs [&block.BlockHeader]
Blocks [block.BlockHeader]
Has(b block.Block) bool @(cached)
Parents Tipset @(cached)
StateTree st.StateTree @(cached)
Weight block.ChainWeight @(cached)
Epoch abi.ChainEpoch @(cached)
// Returns the largest timestamp from the tipset's blocks.
LatestTimestamp() clock.UnixTime @(cached)
// Returns the lexicographically smallest ticket from the tipset's blocks.
MinTicket() block.Ticket @(cached)
}
Chain
A Chain is a sequence of tipsets, linked together. It is a single history of execution in the Filecoin blockchain.
package chain
import (
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
)
// Returns the tipset at or immediately prior to `epoch`.
// For negative epochs, it should return a tipset composed of the genesis block.
func (chain *Chain_I) TipsetAtEpoch(epoch abi.ChainEpoch) Tipset {
current := chain.HeadTipset()
genesisEpoch := abi.ChainEpoch(0)
// For epoch <= genesisEpoch, this walks back to the single-block tipset
// that contains the genesis block.
for current.Epoch() > epoch && current.Epoch() > genesisEpoch {
current = current.Parents()
}
return current
}
// Draws randomness from the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) GetRandomnessFromVRFChain(epoch abi.ChainEpoch) abi.RandomnessSeed {
ts := chain.TipsetAtEpoch(epoch)
// return ts.MinTicket().Digest()
return ts.MinTicket().DrawRandomness(epoch)
}
func (chain *Chain_I) GetTicketProductionRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
return chain.RandomnessSeedAtEpoch(epoch - node_base.SPC_LOOKBACK_TICKET)
}
func (chain *Chain_I) GetSealRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
return chain.RandomnessSeedAtEpoch(epoch - builtin.SPC_LOOKBACK_SEAL)
}
func (chain *Chain_I) GetPoStChallengeRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
return chain.RandomnessSeedAtEpoch(epoch - builtin.SPC_LOOKBACK_POST)
}
func (chain *Chain_I) RandomnessSeedAtEpoch(epoch abi.ChainEpoch) abi.RandomnessSeed {
panic("not implemented")
}
Chain Manager
The Chain Manager is a central component in the blockchain system. It tracks and updates competing subchains received by a given node in order to select the appropriate blockchain head: the latest block of the heaviest subchain it is aware of in the system.
In so doing, the chain manager is the central subsystem that handles bookkeeping for numerous other systems in a Filecoin node and exposes convenience methods for use by those systems, enabling systems to sample randomness from the chain for instance, or to see which block has been finalized most recently.
The chain manager interfaces and functions are included here, but we expand on important details below for clarity.
Chain Expansion
Incoming block reception
Once a block has been received and passes syntactic and semantic validation, it must be added to the local datastore, regardless of whether it is understood as the best tip at this point. Future blocks from other miners may be mined on top of it, and in that case we will want to have it around to avoid refetching.
NOTE: To make certain validation checks simpler, blocks should be indexed by height and by parent set. That way sets of blocks with a given height and common parents may be quickly queried. It may also be useful to compute and cache the resultant aggregate state of blocks in these sets, this saves extra state computation when checking which state root to start a block at when it has multiple parents.
Chain selection is a crucial component of how the Filecoin blockchain works. Every chain has an associated weight accounting for the number of blocks mined on it and so the power (storage) they track. It is always preferable to mine atop a heavier Tipset rather than a lighter one. While a miner may be foregoing block rewards earned in the past, this lighter chain is likely to be abandoned by other miners forfeiting any block reward earned as miners converge on a final chain. For more on this, see chain selection in the Expected Consensus spec.
However, ahead of finality, a given subchain may be abandoned in favor of another, heavier one mined in a given round. In order to rapidly adapt to this, the chain manager must maintain and update all subchains being considered up to finality.
That is, for every incoming block, even if the incoming block is not added to the current heaviest tipset, the chain manager should add it to the appropriate subchain it is tracking, or keep track of it independently until either:
- it is able to do so, through the reception of another block in that subchain
- it is able to discard it, as that block was mined before finality
We give an example of how this could work in the block reception algorithm.
ChainTipsManager
The Chain Tips Manager is a subcomponent of Filecoin consensus that is technically up to the implementer, but since the pseudocode in previous sections reference it, it is documented here for clarity.
The Chain Tips Manager is responsible for tracking all live tips of the Filecoin blockchain, and tracking what the current ‘best’ tipset is.
// Returns the ticket that is at round 'r' in the chain behind 'head'
func TicketFromRound(head Tipset, r Round) {}
// Returns the tipset that contains round r (Note: multiple rounds' worth of tickets may exist within a single block due to losing tickets being added to the eventually successfully generated block)
func TipsetFromRound(head Tipset, r Round) {}
// GetBestTipset returns the best known tipset. If the 'best' tipset hasn't changed, then this
// will return the previous best tipset.
func GetBestTipset()
// Adds the losing ticket to the chaintips manager so that blocks can be mined on top of it
func AddLosingTicket(parent Tipset, t Ticket)
Block Producer
Mining Blocks
A miner registered with the storage power actor may begin generating and checking election tickets if it has proven storage meeting the Minimum Miner Size threshold requirement.
In order to do so, the miner must be running chain validation, and be keeping track of the most recent blocks received. A miner’s new block will be based on parents from the previous epoch.
Block Creation
Producing a block for epoch H requires computing a tipset for epoch H-1 (or possibly a prior epoch, if no blocks were received for that epoch). Using the state produced by this tipset, a miner can scratch winning ElectionPoSt ticket(s). Armed with the requisite ElectionPoStOutput, as well as a new randomness ticket generated in this epoch, a miner can produce a new block.
See VM Interpreter for details of parent tipset evaluation, and Block for constraints on valid block header values.
To create a block, the eligible miner must compute a few fields:
- Parents - the CIDs of the parent tipset's blocks.
- ParentWeight - the parent chain's weight (see Chain Selection).
- ParentState - the CID of the state root from the parent tipset state evaluation (see the VM Interpreter).
- ParentMessageReceipts - the CID of the root of an AMT containing receipts produced while computing ParentState.
- Epoch - the block's epoch, derived from the Parents epoch and the number of epochs it took to generate this block.
- Timestamp - a Unix timestamp, in seconds, generated at block creation.
- BeaconEntries - a set of drand entries generated since the last block (see Beacon Entries).
- Ticket - a new ticket generated from that in the prior epoch (see Ticket Generation).
- Miner - the block producer's miner actor address.
- Messages - the CID of a TxMeta object containing messages proposed for inclusion in the new block:
  - Select a set of messages from the mempool to include in the block, satisfying block size and gas limits.
  - Separate the messages into BLS signed messages and secpk signed messages.
  - TxMeta.BLSMessages: the CID of the root of an AMT comprising the bare UnsignedMessages.
  - TxMeta.SECPMessages: the CID of the root of an AMT comprising the SignedMessages.
- BLSAggregate - the aggregated signature of all messages in the block that used BLS signing.
- Signature - a signature with the miner's worker account private key (must also match the ticket signature) over the block header's serialized representation (with empty signature).
- ForkSignaling - a uint64 flag used as part of signaling forks. Should be set to 0 by default.
Note that the messages to be included in a block need not be evaluated in order to produce a valid block. A miner may wish to speculatively evaluate the messages anyway in order to optimize for including messages which will succeed in execution and pay the most gas.
The block reward is not evaluated when producing a block. It is paid when the block is included in a tipset in the following epoch.
The block’s signature ensures integrity of the block after propagation, since unlike many PoW blockchains, a winning ticket is found independently of block generation.
Block Broadcast
An eligible miner broadcasts the completed block to the network and, assuming everything was done correctly, the network will accept it and other miners will mine on top of it, earning the miner a block reward!
Miners should output their valid block as soon as it is produced, otherwise they risk other miners receiving the block after the EPOCH_CUTOFF and not including it.
Block Rewards
TODO: Rework this. Over the entire lifetime of the protocol, 1,400,000,000 FIL (TotalIssuance) will be given out to miners. Each of the miners who produced a block in a tipset will receive a block reward.
Note: Due to jitter in EC, and the Gregorian calendar, there may be some error in the issuance schedule over time. This is expected to be small enough that it's not worth correcting for. Additionally, since the payout mechanism is transferring from the network account to the miner, there is no risk of minting too much FIL.
TODO: Ensure that if a miner earns a block reward while undercollateralized, then min(blockReward, requiredCollateral-availableBalance) is garnished (transferred to the miner actor instead of the owner).
Message Pool
The Message Pool, or mpool or mempool, is a pool of transaction messages in the Filecoin protocol. It acts as the interface between Filecoin nodes and the peer-to-peer network of other nodes used for off-chain message propagation. The message pool is used by nodes to maintain a set of messages they want to transmit to the Filecoin VM and add to the chain (i.e., add for "on-chain" execution).
In order for a transaction message to end up in the blockchain it first has to be in the message pool. In reality, at least in the Lotus implementation of Filecoin, there is no central pool of messages stored somewhere. Instead, the message pool is an abstraction and is realised as a list of messages kept by every node in the network. Therefore, when a node puts a new message in the message pool, this message is propagated to the rest of the network using libp2p’s pubsub protocol, GossipSub. Nodes need to subscribe to the corresponding pubsub topic in order to receive messages.
Message propagation using GossipSub does not happen immediately and therefore, there is some lag before message pools at different nodes can be in sync. In practice, and given continuous streams of messages being added to the message pool and the delay to propagate messages, the message pool is never synchronised across all nodes in the network. This is not a deficiency of the system, as the message pool does not need to be synchronized across the network.
The message pool should have a maximum size defined to avoid DoS attacks, where nodes are spammed and run out of memory. The recommended size for the message pool is 5000 Transaction messages.
Message Propagation
The message pool has to interface with the libp2p pubsub GossipSub protocol, because transaction messages are propagated over GossipSub on the corresponding /fil/msgs/ topic. Every message is announced in the /fil/msgs/ topic by any node participating in the network.
There are two main pubsub topics related to transactions and blocks: i) the /fil/msgs/ topic that carries transactions and ii) the /fil/blocks/ topic that carries blocks. The /fil/msgs/ topic is linked to the mpool. The process is as follows:
- When a client wants to carry out a transaction in the Filecoin network, they publish a transaction message to the /fil/msgs/ topic.
- The message propagates to all other nodes in the network using GossipSub and eventually ends up in the mpool of all miners.
- Depending on cryptoeconomic rules, some miner will eventually pick the transaction message from the mpool (together with other transaction messages) and include it in a block.
- The miner publishes the newly-mined block in the /fil/blocks/ pubsub topic and the block message propagates to all nodes in the network (including the nodes that published the transactions included in this block).
Nodes must check that incoming transaction messages are valid, that is, that they have a valid signature. If the message is not valid it should be dropped and must not be forwarded.
The updated, hardened version of the GossipSub protocol includes a number of attack mitigation strategies. For instance, when a node receives an invalid message it assigns a negative score to the sender peer. Peer scores are not shared with other nodes, but are rather kept locally by every peer for all other peers it is interacting with. If a peer’s score drops below a threshold it is excluded from the scoring peer’s mesh. We discuss more details on these settings in the GossipSub section. The full details can be found in the GossipSub Specification.
NOTES:
- Fund Checking: It is important to note that the mpool logic does not check whether there are enough funds in the account of the transaction message issuer. This is checked by the miner before including a transaction message in a block.
- Message Sorting: Transaction messages are sorted in the mpool of miners as they arrive, according to cryptoeconomic rules followed by the miner, in order for the miner to compose the next block.
TODO: discuss checking signatures and account balances, some tricky bits that need consideration. Does the fund check cause improper dropping? E.g. I have a message sending funds then use the newly constructed account to send funds, as long as the previous wasn’t executed the second will be considered “invalid” … though it won’t be at the time of execution.
Message Storage
As mentioned earlier, there is no central pool where messages are included. Instead, every node must have allocated memory for incoming transaction messages.
ChainSync - synchronizing the Blockchain
What is blockchain synchronization?
Blockchain synchronization (“sync”) is a key part of a blockchain system. It handles retrieval and propagation of blocks and transactions (messages), and thus is in charge of distributed state replication. This process is security critical – problems here can be catastrophic to the operation of a blockchain.
What is ChainSync?
ChainSync
is the protocol Filecoin uses to synchronize its blockchain. It is
specific to Filecoin’s choices in state representation and consensus rules,
but is general enough that it can serve other blockchains. ChainSync
is a
group of smaller protocols, which handle different parts of the sync process.
Terms and Concepts
- LastCheckpoint: the last hard social-consensus oriented checkpoint that ChainSync is aware of. This consensus checkpoint defines the minimum finality, and a minimum of history to build on. ChainSync takes LastCheckpoint on faith, and builds on it, never switching away from its history.
- TargetHeads: a list of BlockCIDs that represent blocks at the fringe of block production. These are the newest and best blocks ChainSync knows about. They are "target" heads because ChainSync will try to sync to them. This list is sorted by "likelihood of being the best chain" (e.g., for now, simply ChainWeight).
- BestTargetHead: the single best chain head BlockCID to try to sync to. This is the first element of TargetHeads.
ChainSync Summary
At a high level, ChainSync does the following:
- Part 1: Verify internal state (INIT state below)
  - SHOULD verify data structures and validate local chain
  - Resource-expensive verification MAY be skipped at nodes' own risk
- Part 2: Bootstrap to the network (BOOTSTRAP)
  - Step 1. Bootstrap to the network, and acquire a "secure enough" set of peers (more details below)
  - Step 2. Bootstrap to the BlockPubsub channels
  - Step 3. Listen and serve on Graphsync
- Part 3: Synchronize trusted checkpoint state (SYNC_CHECKPOINT)
  - Step 1. Start with a TrustedCheckpoint (defaults to GenesisCheckpoint)
  - Step 2. Get the block it points to, and that block's parents
  - Step 3. Graphsync the StateTree
- Part 4: Catch up to the chain (CHAIN_CATCHUP)
  - Step 1. Maintain a set of TargetHeads (BlockCIDs), and select the BestTargetHead from it
  - Step 2. Synchronize to the latest heads observed, validating blocks towards them (requesting intermediate points)
  - Step 3. As validation progresses, TargetHeads and BestTargetHead will likely change, as new blocks at the production fringe will arrive, and some target heads or paths to them may fail to validate
  - Step 4. Finish when node has "caught up" with BestTargetHead (retrieved all the state, linked to local chain, validated all the blocks, etc.)
- Part 5: Stay in sync, and participate in block propagation (CHAIN_FOLLOW)
  - Step 1. If security conditions change, go back to Part 4 (CHAIN_CATCHUP)
  - Step 2. Receive, validate, and propagate received Blocks
  - Step 3. Now with greater certainty of having the best chain, finalize Tipsets, and advance chain state
libp2p Network Protocols
As a networking-heavy protocol, ChainSync makes heavy use of libp2p. In particular, we use three sets of protocols:
- libp2p.PubSub: a family of publish/subscribe protocols to propagate recent Blocks. The concrete protocol choice impacts ChainSync's effectiveness, efficiency, and security dramatically. For Filecoin v1.0 we will use libp2p.Gossipsub, a recent libp2p protocol that combines features and learnings from many excellent PubSub systems. In the future, Filecoin may use other PubSub protocols. Important note: it is entirely possible for Filecoin nodes to run multiple versions simultaneously. That said, this specification requires that Filecoin nodes MUST connect and participate in the main channel, using libp2p.Gossipsub.
- libp2p.PeerDiscovery: a family of discovery protocols, to learn about peers in the network. This is especially important for security because network "bootstrap" is a difficult problem in peer-to-peer networks. The set of peers we initially connect to may completely dominate our awareness of other peers, and therefore all state. We use a union of PeerDiscovery protocols, as each by itself is not secure or appropriate for users' threat models. The union of these provides a pragmatic and effective solution. Discovery protocols marked as required MUST be included in implementations and will be provided by implementation teams. Protocols marked as optional MAY be provided by implementation teams but can be built independently by third parties to augment bootstrap security.
- libp2p.DataTransfer: a family of protocols for transferring data. Filecoin nodes must run libp2p.Graphsync.
More concretely, we use these protocols:
- `libp2p.PeerDiscovery`
  - (required) `libp2p.BootstrapList`: a protocol that uses a persistent and user-configurable list of semi-trusted bootstrap peers. The default list includes a set of peers semi-trusted by the Filecoin Community.
  - (optional) `libp2p.KademliaDHT`: a DHT protocol that enables random queries across the entire network.
  - (required) `libp2p.Gossipsub`: a pub/sub protocol that includes "prune peer exchange" by default, disseminating peer info as part of operation.
  - (optional) `libp2p.PersistentPeerstore`: a connectivity component that keeps persistent information about peers observed in the network throughout the lifetime of the node. This is useful because it lets a node resume and continually improve bootstrap security.
  - (optional) `libp2p.DNSDiscovery`: learns about peers via DNS lookups to semi-trusted peer aggregators.
  - (optional) `libp2p.HTTPDiscovery`: learns about peers via HTTP lookups to semi-trusted peer aggregators.
  - (optional) `libp2p.PEX`: a general-purpose peer exchange protocol, distinct from pubsub peer exchange, for 1:1 ad hoc peer exchange.
- `libp2p.PubSub`
  - (required) `libp2p.Gossipsub`: the concrete `libp2p.PubSub` protocol `ChainSync` uses.
- `libp2p.DataTransfer`
  - (required) `libp2p.Graphsync`: the data transfer protocol nodes must support for providing blockchain and user data.
  - (optional) `BlockSync`: a blockchain data transfer protocol that can be used by some nodes.
Subcomponents
Aside from `libp2p`, `ChainSync` uses or relies on the following components:
- Libraries:
  - `ipld` data structures, selectors, and protocols
    - `ipld.GraphStore`: local persistent storage for `chain` datastructures
    - `ipld.Selector`: a way to express requests for chain data structures
    - `ipfs.GraphSync`: a general-purpose `ipld` datastructure syncing protocol
- Data Structures:
  - Data structures in the `chain` package: `Block, Tipset, Chain, Checkpoint ...`
  - `chainsync.BlockCache`: a temporary cache of blocks, to constrain resources expended
  - `chainsync.AncestryGraph`: a datastructure to efficiently link `Blocks`, `Tipsets`, and `PartialChains`
  - `chainsync.ValidationGraph`: a datastructure for efficient and secure validation of `Blocks` and `Tipsets`
Graphsync in ChainSync
`ChainSync` is written in terms of `Graphsync`. `ChainSync` adds blockchain- and Filecoin-specific synchronization functionality that is critical for Filecoin security.
Rate Limiting Graphsync responses (SHOULD)
When running Graphsync, Filecoin nodes must respond to Graphsync queries. Filecoin requires nodes to provide critical data structures to others, otherwise the network will not function. During ChainSync, it is in operators' interests to provide the data structures critical to validating, following, and participating in the blockchain they are on. However, this has limitations, and some level of rate limiting is critical for maintaining security in the presence of attackers who might issue large Graphsync requests to cause denial of service (DOS).
We recommend the following:
- Set and enforce batch size rate limits. Force selectors to be shaped like `LimitedBlockIpldSelector(blockCID, BatchSize)` for a single constant `BatchSize = 1000`. Nodes may push for this equilibrium by only providing `BatchSize` objects in responses, even for pulls much larger than `BatchSize`. This forces subsequent pulls to be run, re-rooted appropriately, and hints to other parties that they should be requesting with that `BatchSize`.
- Force all Graphsync queries for blocks to be aligned along cacheable boundaries. In conjunction with a `BatchSize`, implementations should aim to cache the results of Graphsync queries, so that they may propagate them to others very efficiently. Aligning on certain boundaries (eg specific `ChainEpoch` limits) increases the likelihood that many parties in the network will request the same batches of content. Another good cacheable boundary is the entire contents of a `Block` (`BlockHeader`, `Messages`, `Signatures`, etc).
- Maintain per-peer rate limits. Use bandwidth usage to decide whether to respond, and how much, on a per-peer basis. Libp2p already tracks bandwidth usage in each connection. This information can be used to impose rate limits in Graphsync and other Filecoin protocols.
- Detect and react to DOS: restrict operation. The safest implementations will likely detect and react to DOS attacks. Reactions could include:
  - Smaller `Graphsync.BatchSize` limits
  - Fewer connections to other peers
  - Rate limiting total Graphsync bandwidth
  - Assigning Graphsync bandwidth based on a peer priority queue
  - Disconnecting from, and not accepting connections from, unknown peers
  - Introspecting Graphsync requests and filtering/denying/rate limiting suspicious ones
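The per-peer rate limiting recommendation above can be sketched with a simple token bucket. This is a minimal, illustrative sketch, not part of the spec: the `tokenBucket` type and its parameters are assumptions, and a real implementation would feed it libp2p's per-connection bandwidth metrics.

```go
package main

import "fmt"

// tokenBucket is a hypothetical per-peer rate limiter; the type and its
// parameters are illustrative, not defined by the spec.
type tokenBucket struct {
	capacity   float64 // max burst, in bytes
	tokens     float64 // currently available bytes
	refillRate float64 // bytes added per second of elapsed time
}

func newTokenBucket(capacity, refillRate float64) *tokenBucket {
	return &tokenBucket{capacity: capacity, tokens: capacity, refillRate: refillRate}
}

// allow reports whether a response of n bytes may be served, given that
// elapsedSec seconds passed since the last call, consuming tokens if so.
func (b *tokenBucket) allow(n, elapsedSec float64) bool {
	b.tokens += elapsedSec * b.refillRate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	if n > b.tokens {
		return false
	}
	b.tokens -= n
	return true
}

func main() {
	// Assumed limits: 1 MiB burst per peer, refilled at 256 KiB/s.
	bucket := newTokenBucket(1<<20, 256<<10)
	fmt.Println(bucket.allow(512<<10, 0)) // within burst: served
	fmt.Println(bucket.allow(1<<20, 0))   // exceeds remaining tokens: rejected
}
```

A responder would keep one bucket per peer and simply drop or defer Graphsync responses when `allow` returns false.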
Previous BlockSync protocol
Prior versions of this spec recommended a BlockSync
protocol. This protocol definition is
available here.
Filecoin nodes are libp2p nodes, and therefore may run a variety of other protocols, including this `BlockSync` protocol. As with anything else in Filecoin, nodes MAY opt to use additional protocols to achieve these results.
That said, Nodes MUST implement the version of ChainSync
as described in this spec in order to
be considered implementations of Filecoin. Test suites will assume this protocol.
ChainSync State Machine
ChainSync
uses the following conceptual state machine. Since this is a conceptual state machine,
implementations MAY deviate from implementing precisely these states, or dividing them strictly.
Implementations MAY blur the lines between the states. If so, implementations MUST ensure security
of the altered protocol.
ChainSync FSM: INIT
- beginning state. no network connections, not synchronizing.
  - local state is loaded: internal data structures (eg chain, cache) are loaded
  - `LastTrustedCheckpoint` is set to the latest network-wide accepted `TrustedCheckpoint`
  - `FinalityTipset` is set to the finality achieved in a prior protocol run.
    - Default: If no later `FinalityTipset` has been achieved, set `FinalityTipset` to `LastTrustedCheckpoint`
- Chain State and Finality:
  - In this state, the chain MUST NOT advance beyond whatever the node already has.
  - No new blocks are reported to consumers.
  - The chain state provided is whatever was loaded from prior executions (worst case is `LastTrustedCheckpoint`).
- security conditions to transition out:
  - local state and data structures SHOULD be verified to be correct
    - this means validating any parts of the chain or `StateTree` the node has, from `LastTrustedCheckpoint` on.
  - `LastTrustedCheckpoint` is well-known across the Filecoin Network to be a true `TrustedCheckpoint`
    - this SHOULD NOT be verified in software; it SHOULD be verified by operators
    - Note: we ALWAYS have at least one `TrustedCheckpoint`, the `GenesisCheckpoint`.
- transitions out:
  - once done verifying things: move to `BOOTSTRAP`
ChainSync FSM: BOOTSTRAP
- `network.Bootstrap()`: establish connections to peers until we satisfy the security requirement
  - for better security, use many different `libp2p.PeerDiscovery` protocols
- `BlockPubsub.Bootstrap()`: establish connections to `BlockPubsub` peers
  - The subscription is for both peer discovery and to start selecting best heads. Listening on pubsub from the start keeps the node informed about potential head changes.
- `Graphsync.Serve()`: set up a Graphsync service that responds to others' queries
- Chain State and Finality:
  - In this state, the chain MUST NOT advance beyond whatever the node already has.
  - No new blocks are reported to consumers.
  - The chain state provided is whatever was loaded from prior executions (worst case is `LastTrustedCheckpoint`).
- security conditions to transition out:
  - `Network` connectivity MUST have reached the security level acceptable for `ChainSync`
  - `BlockPubsub` connectivity MUST have reached the security level acceptable for `ChainSync`
  - "on time" blocks MUST be arriving through `BlockPubsub`
- transitions out:
  - once bootstrap is deemed secure enough:
    - if node does not have the `Blocks` or `StateTree` corresponding to `LastTrustedCheckpoint`: move to `SYNC_CHECKPOINT`
    - otherwise: move to `CHAIN_CATCHUP`
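The "secure enough" exit condition above can be sketched as a simple predicate over connected peers. The 32-peer threshold comes from the Security Parameters section later in this document; requiring multiple discovery sources is an illustrative hardening choice, not a spec mandate, and the function and field names are assumptions.

```go
package main

import "fmt"

// bootstrapSecure reports whether bootstrap may be considered complete:
// enough total direct connections, drawn from enough distinct
// PeerDiscovery sources (so no single source dominates our view).
func bootstrapSecure(peersBySource map[string]int, minPeers, minSources int) bool {
	total, sources := 0, 0
	for _, n := range peersBySource {
		if n > 0 {
			sources++
		}
		total += n
	}
	return total >= minPeers && sources >= minSources
}

func main() {
	peers := map[string]int{"BootstrapList": 4, "KademliaDHT": 20, "Gossipsub": 12}
	fmt.Println(bootstrapSecure(peers, 32, 2)) // 36 peers across 3 sources: secure
}
```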
ChainSync FSM: SYNC_CHECKPOINT
- While in this state:
  - `ChainSync` is well-bootstrapped, but does not yet have the `Blocks` or `StateTree` for `LastTrustedCheckpoint`
  - `ChainSync` issues `Graphsync` requests to its peers randomly for the `Blocks` and `StateTree` for `LastTrustedCheckpoint`:
    - `ChainSync`'s counterparts in other peers MUST provide the state tree.
    - It is only semi-rational to do so, so `ChainSync` may have to try many peers.
    - Some of these requests MAY fail.
- Chain State and Finality:
  - In this state, the chain MUST NOT advance beyond whatever the node already has.
  - No new blocks are reported to consumers.
  - The chain state provided is the available `Blocks` and `StateTree` for `LastTrustedCheckpoint`.
- Important Notes:
  - `ChainSync` needs to fetch several blocks: the `Block` pointed at by `LastTrustedCheckpoint`, and its direct `Block.Parents`.
  - Nodes only need hashing to validate these `Blocks` and `StateTrees`; no block validation or state machine computation is needed.
  - The initial value of `LastTrustedCheckpoint` is `GenesisCheckpoint`, but it MAY be a value later in Chain history.
  - `LastTrustedCheckpoint` enables efficient syncing by making the implicit economic consensus of chain history explicit.
    - By allowing fetching of the `StateTree` of `LastTrustedCheckpoint` via `Graphsync`, `ChainSync` can yield much more efficient syncing than comparable blockchain synchronization protocols, as syncing and validation can start there.
    - Nodes DO NOT need to validate the chain from `GenesisCheckpoint`. `LastTrustedCheckpoint` MAY be a value later in Chain history.
    - Nodes DO NOT need to, but MAY, sync `StateTrees` earlier than `LastTrustedCheckpoint` as well.
- Pseudocode 1: a basic version of `SYNC_CHECKPOINT`:

      func (c *ChainSync) SyncCheckpoint() {
          for !c.HasCompleteStateTreeFor(c.LastTrustedCheckpoint) {
              selector := ipldselector.SelectAll(c.LastTrustedCheckpoint)
              c.Graphsync.Pull(c.Peers, selector, c.IpldStore)
              // Pull SHOULD NOT pull what c.IpldStore already has (check first)
              // Pull SHOULD pull from different peers simultaneously
              // Pull SHOULD be efficient (try different parts of the tree from many peers)
              // Graphsync implementations may not offer these features. These features
              // can be implemented on top of a graphsync that only pulls from a single
              // peer and does not check the local store first.
          }
          c.ChainCatchup() // on to CHAIN_CATCHUP
      }
- security conditions to transition out:
  - the `StateTree` for `LastTrustedCheckpoint` MUST be stored locally and verified (hashing is enough)
- transitions out:
  - once node receives and verifies the complete `StateTree` for `LastTrustedCheckpoint`: move to `CHAIN_CATCHUP`
ChainSync FSM: CHAIN_CATCHUP
- While in this state:
  - `ChainSync` is well-bootstrapped, and has an initial trusted `StateTree` to start from.
  - `ChainSync` is receiving latest `Blocks` from `BlockPubsub`
  - `ChainSync` starts fetching and validating blocks
  - `ChainSync` has unvalidated blocks between `ChainSync.FinalityTipset` and `ChainSync.TargetHeads`
- Chain State and Finality:
  - In this state, the chain MUST NOT advance beyond whatever the node already has: `FinalityTipset` does not change.
  - No new blocks are reported to consumers/users of `ChainSync` yet.
  - The chain state provided is the available `Blocks` and `StateTree` for all available epochs, especially the `FinalityTipset`.
  - Finality must not move forward here because there are serious attack vectors where a node can be forced to end up on the wrong fork if finality advances before validation is complete up to the block production fringe.
  - Validation must advance, all the way to the block production fringe:
    - Validate the whole chain, from `FinalityTipset` to `BestTargetHead`
    - The node can reach `BestTargetHead` only to find out it was invalid, then has to update `BestTargetHead` with the next best one, and sync to it (without having advanced `FinalityTipset` yet, as otherwise we may end up on the wrong fork)
- security conditions to transition out:
  - Gaps between `ChainSync.FinalityTipset ... ChainSync.BestTargetHead` have been closed:
    - All `Blocks` and their content MUST be fetched, stored, linked, and validated locally. This includes `BlockHeaders`, `Messages`, etc.
    - Bad heads have been expunged from `ChainSync.TargetHeads`. Bad heads include heads that initially seemed good but turned out invalid, or heads that `ChainSync` has failed to connect (ie. cannot fetch ancestors connecting back to `ChainSync.FinalityTipset` within a reasonable amount of time).
    - All blocks between `ChainSync.FinalityTipset ... ChainSync.TargetHeads` have been validated. This means all blocks before the best heads.
  - Not under a temporary network partition
- transitions out:
  - once gaps between `ChainSync.FinalityTipset ... ChainSync.TargetHeads` are closed: move to `CHAIN_FOLLOW`
  - (Perhaps moving to `CHAIN_FOLLOW` when 1-2 blocks back in validation may be ok. We don't know we have the right head until we validate it, so if other heads of similar height are right/better, we won't know until then.)
ChainSync FSM: CHAIN_FOLLOW
- While in this state:
  - `ChainSync` is well-bootstrapped, and has an initial trusted `StateTree` to start from.
  - `ChainSync` fetches and validates blocks.
  - `ChainSync` is receiving and validating latest `Blocks` from `BlockPubsub`
  - `ChainSync` DOES NOT have unvalidated blocks between `ChainSync.FinalityTipset` and `ChainSync.TargetHeads`
  - `ChainSync` MUST drop back to another state if security conditions change.
  - Keep a set of gap measures:
    - `BlockGap` is the number of remaining blocks to validate between the Validated blocks and `BestTargetHead`.
      - (ie how many epochs we need to validate to have validated `BestTargetHead`; does not include null blocks)
    - `EpochGap` is the number of epochs between the latest validated block and `BestTargetHead` (includes null blocks).
    - `MaxBlockGap = 2`: how many blocks `ChainSync` may fall behind before switching back to `CHAIN_CATCHUP` (does not include null blocks)
    - `MaxEpochGap = 10`: how many epochs `ChainSync` may fall behind before switching back to `CHAIN_CATCHUP` (includes null blocks)
- Chain State and Finality:
  - In this state, the chain MUST advance as all the blocks up to `BestTargetHead` are validated.
  - New blocks are finalized as they cross the finality threshold (`ValidG.Heads[0].ChainEpoch - FinalityLookback`)
  - New finalized blocks are reported to consumers.
  - The chain state provided includes the `Blocks` and `StateTree` for the `Finality` epoch, as well as candidate `Blocks` and `StateTrees` for unfinalized epochs.
- security conditions to transition out:
  - Temporary network partitions (see Detecting Network Partitions).
  - Encountering gaps of `>MaxBlockGap` or `>MaxEpochGap` between the Validated set and a new `ChainSync.BestTargetHead`
- transitions out:
  - if a temporary network partition is detected: move to `CHAIN_CATCHUP`
  - if `BlockGap > MaxBlockGap`: move to `CHAIN_CATCHUP`
  - if `EpochGap > MaxEpochGap`: move to `CHAIN_CATCHUP`
  - if node is shut down: move to `INIT`
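The CHAIN_FOLLOW exit conditions above reduce to a small predicate over the gap measures. This is a sketch under the stated parameters (`MaxBlockGap = 2`, `MaxEpochGap = 10`); the function name and signature are illustrative, not spec-defined.

```go
package main

import "fmt"

// Gap limits as given in the CHAIN_FOLLOW description.
const (
	MaxBlockGap = 2
	MaxEpochGap = 10
)

// shouldFallBack reports whether ChainSync should drop from CHAIN_FOLLOW
// back to CHAIN_CATCHUP.
// blockGap: blocks remaining to validate up to BestTargetHead (no null blocks).
// epochGap: epochs between the latest validated block and BestTargetHead
// (null blocks included).
// partitioned: whether a temporary network partition was detected.
func shouldFallBack(blockGap, epochGap int, partitioned bool) bool {
	return partitioned || blockGap > MaxBlockGap || epochGap > MaxEpochGap
}

func main() {
	fmt.Println(shouldFallBack(1, 3, false))  // within limits: stay in CHAIN_FOLLOW
	fmt.Println(shouldFallBack(3, 3, false))  // BlockGap exceeded: fall back
	fmt.Println(shouldFallBack(0, 11, false)) // EpochGap exceeded: fall back
}
```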
Block Fetching, Validation, and Propagation
Notes on changing TargetHeads while syncing
- `TargetHeads` is constantly changing, as `ChainSync` must be aware of the best heads at any time. Reorgs happen, and our first set of peers could have been bad, so we keep discovering others.
- The Hello protocol is good, but it is polling. Unless a node is constantly polling, it won't see all the heads. `BlockPubsub` gives us the realtime view into what is actually going on.
- Weight can also be close between 2+ possible chains (long-forked), and `ChainSync` must select the right one (which we may not be able to distinguish until validating all the way).
- Fetching + validation are strictly faster per round on average than blocks produced per block time (if they're not, the node will always fall behind), so we definitely catch up eventually (and even quickly). The last couple of rounds can be close ("almost got it, almost got it, there").
General notes on fetching Blocks
- `ChainSync` selects and maintains a set of the most likely heads to be correct from among those received via `BlockPubsub`. As more blocks are received, the set of `TargetHeads` is reevaluated.
- `ChainSync` fetches `Blocks`, `Messages`, and `StateTree` through the `Graphsync` protocol.
- `ChainSync` maintains sets of `Blocks/Tipsets` in `Graphs` (see `ChainSync.id`)
- `ChainSync` gathers a list of `TargetHeads` from `BlockPubsub`, sorted by likelihood of being the best chain (see below).
- `ChainSync` makes requests for chains of `BlockHeaders` to close gaps between `TargetHeads`
- `ChainSync` forms partial unvalidated chains of `BlockHeaders`, from those received via `BlockPubsub`, and those requested via `Graphsync`.
- `ChainSync` attempts to form fully connected chains of `BlockHeaders`, parting from `StateTree`, toward observed `Heads`
- `ChainSync` minimizes resource expenditure to fetch and validate blocks, to protect against DOS attack vectors.
- `ChainSync` employs Progressive Block Validation, validating different facets at different stages of syncing.
- `ChainSync` delays syncing `Messages` until they are needed. Much of the structure of the partial chains can be checked and used to make syncing decisions without fetching the `Messages`.
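The "sorted by likelihood of being the best chain" step above can be sketched as ranking candidate heads by EC chain weight. This is a simplified sketch: the `head` type, its fields, and weight-only ranking are assumptions for illustration, and a real ranking would use the full chain weighting function.

```go
package main

import (
	"fmt"
	"sort"
)

// head is an illustrative stand-in for a candidate chain head observed on
// BlockPubsub; the field names are assumptions, not spec types.
type head struct {
	CID    string
	Weight int64 // models EC chain weight
}

// rankTargetHeads returns the candidate heads sorted by descending weight,
// so the first element plays the role of BestTargetHead under this ranking.
// The input slice is left unmodified.
func rankTargetHeads(heads []head) []head {
	sorted := append([]head(nil), heads...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Weight > sorted[j].Weight })
	return sorted
}

func main() {
	heads := []head{{"a", 90}, {"b", 120}, {"c", 100}}
	fmt.Println(rankTargetHeads(heads)[0].CID) // b
}
```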
Progressive Block Validation
Blocks may be validated in progressive stages, in order to minimize resource expenditure.
Validation computation is considerable, and a serious DOS attack vector.
Secure implementations must carefully schedule validation and minimize the work done by pruning blocks without validating them fully.
`ChainSync` SHOULD keep a cache of unvalidated blocks (ideally sorted by likelihood of belonging to the chain), and delete unvalidated blocks when they are passed by `FinalityTipset`, or when `ChainSync` is under significant resource load. These stages can be applied partially across many blocks in a candidate chain, in order to prune out clearly bad blocks long before actually doing the expensive validation work.
Progressive Stages of Block Validation
- BV0 - Syntax: Serialization, typing, value ranges.
- BV1 - Plausible Consensus: Plausible miner, weight, and epoch values (e.g. from chain state at `b.ChainEpoch - consensus.LookbackParameter`).
- BV3 - Beacon entries: Valid random beacon entries have been inserted in the block (see beacon entry validation).
- BV4 - ElectionProof: A valid election proof was generated.
- BV5 - WinningPoSt: Correct PoSt generated.
- BV6 - Chain ancestry and finality: Verify block links back to trusted chain, not prior to finality.
- BV7 - Message Signatures:
- BV8 - State tree: Parent tipset message execution produces the claimed state tree root and receipts.
Notes:
- In `CHAIN_CATCHUP`, if a node is receiving/fetching hundreds or thousands of `BlockHeaders`, validating signatures can be very expensive, and can be deferred in favor of other validation. (ie with lots of BlockHeaders coming in through the network pipe, we don't want to be bound by signature verification; other checks can help drop bad blocks on the floor faster (BV0, BV2).)
- In `CHAIN_FOLLOW`, we're not receiving thousands; we're receiving maybe a dozen or two dozen packets in a few seconds. We receive the CID with signature and address first (ideally fitting in one packet), and can afford to (a) check whether we already have the CID (if so, done; cheap), or (b) if not, check whether the signature is correct before fetching the header (an expensive computation, but checking one signature is far faster than checking a ton). In practice, which one to do likely depends on miner trade-offs. We'll recommend something but let miners decide, because one strategy or the other may be much more effective depending on their hardware, their bandwidth limitations, or their propensity to getting DOSed.
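The progressive stages above form a short-circuiting pipeline: cheap checks run first so bad blocks are dropped before expensive work. A minimal sketch follows; the `Block` type and the stage checks are illustrative stand-ins (the real BV0-BV8 checks are specified elsewhere in this document), and only two stages are shown.

```go
package main

import (
	"errors"
	"fmt"
)

// Block is an illustrative stand-in with just enough fields for the demo.
type Block struct {
	WellFormed bool
	SigValid   bool
}

type stage struct {
	name  string
	check func(Block) error
}

// defaultStages lists a prefix of the progressive stages; BV1 and BV3..BV8
// are omitted for brevity.
func defaultStages() []stage {
	return []stage{
		{"BV0-Syntax", func(b Block) error {
			if !b.WellFormed {
				return errors.New("malformed block")
			}
			return nil
		}},
		{"BV2-BlockSig", func(b Block) error {
			if !b.SigValid {
				return errors.New("bad block signature")
			}
			return nil
		}},
	}
}

// validate runs the stages in order, stopping at the first failure so that
// cheap checks (BV0) prune bad blocks before expensive ones (BV8) run.
func validate(b Block, stages []stage) (string, error) {
	for _, s := range stages {
		if err := s.check(b); err != nil {
			return s.name, err
		}
	}
	return "", nil
}

func main() {
	failedAt, err := validate(Block{WellFormed: true, SigValid: false}, defaultStages())
	fmt.Println(failedAt, err)
}
```

In CHAIN_CATCHUP an implementation might run only the cheap prefix of the stage list across many headers, deferring signature checks, as the notes above suggest.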
Progressive Block Propagation (or BlockSend)
- In order to make Block propagation more efficient, we trade off network round trips for bandwidth usage.
- Motivating observations:
  - Block propagation is one of the most security-critical points of the whole protocol.
  - Bandwidth usage during Block propagation is the biggest rate limiter for network scalability.
  - The time it takes for a Block to propagate to the whole network is a critical factor in determining a secure `BlockTime`.
  - Blocks propagating through the network should take as few sequential round trips as possible, as these round trips impose serious block time delays. However, interleaved round trips may be fine. Meaning that `block.CIDs` may be propagated on their own, without the header, then the header without the messages, then the messages.
  - `Blocks` will propagate over a `libp2p.PubSub`. `libp2p.PubSub.Messages` will most likely arrive multiple times at a node. Therefore, using only the `block.CID` here could make this very cheap in bandwidth (more expensive in round trips).
  - `Blocks` in a single epoch may include the same `Messages`, and duplicate transfers can be avoided.
  - `Messages` propagate through their own `MessagePubsub`, and nodes have a significant probability of already having a large fraction of the messages in a block. Since messages are the bulk of the size of a `Block`, this can present great bandwidth savings.
- Progressive Steps of Block Propagation
  - IMPORTANT NOTES:
    - These can be effectively pipelined. The `receiver` is in control of what to pull, and when. It is up to them to decide when to trade off RTTs for bandwidth.
    - If the `sender` is propagating the block at all to `receiver`, it is in their interest to provide the full content to `receiver` when asked. Otherwise the block may not get included at all.
    - Lots of security assumptions here; this needs to be hyper verified, in both spec and code.
    - `sender` is a filecoin node running `ChainSync`, propagating a block via Gossipsub (as the originator, as another peer in the network, or just a Gossipsub router).
    - `receiver` is the local filecoin node running `ChainSync`, trying to get the blocks.
    - For `receiver` to `Pull` things from `sender`, `receiver` must connect to `sender`. Usually `sender` is sending to `receiver` because of the Gossipsub propagation rules. `receiver` could choose to `Pull` from any other node they are connected to, but it is most likely `sender` will have the needed information, as they will usually be more well-connected in the network.
  - Step 1. (sender) Push `BlockHeader`:
    - `sender` sends `block.BlockHeader` to `receiver` via Gossipsub: `bh := Gossipsub.Send(h block.BlockHeader)`
    - This is a light-ish object (<4KB).
    - `receiver` receives `bh`.
      - This has many fields that can be validated before pulling the messages. (See Progressive Block Validation.)
      - BV0, BV1, BV2, and BV3 validation takes place before propagating `bh` to other nodes.
    - `receiver` MAY receive many advertisements for each winning block in an epoch in quick succession. This is because (a) many want propagation as fast as possible, (b) many want to make those network advertisements as light as reasonable, (c) we want to enable `receiver` to choose who to ask it from (usually the first party to advertise it, and that's what the spec will recommend), and (d) we want to be able to fall back to asking others if that fails (fail = don't get it in 1s or so).
  - Step 2. (receiver) Pull `MessageCids`:
    - upon receiving `bh`, `receiver` checks whether it already has the full block for `bh.BlockCID`. if not:
      - `receiver` requests `bh.MessageCids` from `sender`: `bm := Graphsync.Pull(sender, SelectAMTCIDs(b.Messages))`
  - Step 3. (receiver) Pull `Messages`:
    - If `receiver` DOES NOT already have all the messages for `b.BlockCID`, then:
      - If `receiver` has some of the messages:
        - `receiver` requests the missing `Messages` from `sender`: `Graphsync.Pull(sender, SelectAll(bm[3], bm[10], bm[50], ...))` or `for m in bm { Graphsync.Pull(sender, SelectAll(m)) }`
      - If `receiver` does not have any of the messages (the default safe but expensive thing to do):
        - `receiver` requests all `Messages` from `sender`: `Graphsync.Pull(sender, SelectAll(bh.Messages))`
        - (This is the largest amount of stuff.)
  - Step 4. (receiver) Validate `Block`:
    - The only remaining thing to do is to complete Block Validation.
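The receiver side of Steps 2-3 amounts to "pull only what you don't already have". A minimal sketch follows; the `header` and `store` types are hypothetical stand-ins for the node's local IPLD store and the Graphsync client, not spec types.

```go
package main

import "fmt"

// header stands in for the received BlockHeader advertisement.
type header struct {
	BlockCID    string
	MessageCids []string
}

// store models "is this CID already held locally?".
type store map[string]bool

// pullMessages returns the message CIDs the receiver still needs to fetch
// from the sender, trading one extra round trip for bandwidth savings when
// some messages already arrived via MessagePubsub.
func pullMessages(bh header, local store) []string {
	if local[bh.BlockCID] {
		return nil // full block already held; nothing to pull
	}
	var missing []string
	for _, c := range bh.MessageCids {
		if !local[c] {
			missing = append(missing, c)
		}
	}
	return missing
}

func main() {
	local := store{"msg1": true} // msg1 already seen on MessagePubsub
	bh := header{BlockCID: "blkA", MessageCids: []string{"msg1", "msg2", "msg3"}}
	fmt.Println(pullMessages(bh, local)) // only msg2 and msg3 need fetching
}
```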
Calculations
Security Parameters
- `Peers >= 32`: direct connections
  - ideally `Peers >= {64, 128}`
Pubsub Bandwidth
These bandwidth calculations are used to motivate choices in `ChainSync`.
If you imagine that you will receive the header once per gossipsub peer (or, if lucky, half of them), and that there are `EC.E_LEADERS = 10` blocks per round, then we're talking the difference between:
16 peers, 1 pkt  -- 1 * 16 * 10 =   160 dup pkts (256KB) in <5s
16 peers, 4 pkts -- 4 * 16 * 10 =   640 dup pkts (1MB)   in <5s
32 peers, 1 pkt  -- 1 * 32 * 10 =   320 dup pkts (512KB) in <5s
32 peers, 4 pkts -- 4 * 32 * 10 = 1,280 dup pkts (2MB)   in <5s
64 peers, 1 pkt  -- 1 * 64 * 10 =   640 dup pkts (1MB)   in <5s
64 peers, 4 pkts -- 4 * 64 * 10 = 2,560 dup pkts (4MB)   in <5s
2MB in <5s may not be worth saving; and maybe gossipsub can be much better about suppressing dups.
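The arithmetic in the table above is peers × packets-per-header × blocks-per-round. A small helper makes the estimate reproducible; the 1600-byte packet size is an assumption chosen to match the table's ~256KB-per-160-packets figure, not a spec constant.

```go
package main

import "fmt"

// dupTraffic estimates duplicate BlockHeader advertisement traffic per
// round: peers x packets-per-header x expected blocks per round
// (EC.E_LEADERS). pktSize is an assumed per-packet byte count.
func dupTraffic(peers, pkts, leaders, pktSize int) (pktsTotal, bytes int) {
	pktsTotal = peers * pkts * leaders
	return pktsTotal, pktsTotal * pktSize
}

func main() {
	n, b := dupTraffic(32, 4, 10, 1600)
	fmt.Printf("%d dup pkts, ~%d KB\n", n, b/1024) // the 32-peer, 4-pkt row
}
```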
Notes (TODO: move elsewhere)
Checkpoints
- A checkpoint is the CID of a block (not a tipset list of CIDs, or a StateTree).
- The reason a block is OK is that it uniquely identifies a tipset.
- Using tipsets directly would make Checkpoints harder to communicate. We want to make checkpoints a single hash, as short as we can have it. They will be shared in tweets, URLs, emails, printed into newspapers, etc. Compactness, ease of copy-paste, etc matter.
- We'll make human-readable lists of checkpoints, and making "lists of lists" is more annoying.
- When we have `EC.E_PARENTS > 5` or `= 10`, tipsets will get annoyingly large.
- The big quirk/weirdness with blocks is that a checkpoint block must also be in the chain. (If you relaxed that constraint, you could end up in a weird case where a checkpoint isn't in the chain, and that's weird/violates assumptions.)
Bootstrap chain stub
- The mainnet filecoin chain will need to start with a small chain stub of blocks.
- We must include some data in different blocks.
- We do need a genesis block; we derive randomness from the ticket there. Rather than special-casing, it is easier/less complex to ensure a well-formed chain always, including at the beginning.
- A lot of code expects lookbacks, especially actor code. Rather than introducing a bunch of special-case logic for what happens ostensibly once in network history (special-case logic which adds complexity and likelihood of problems), it is easiest to assume the chain is always at least X blocks long, so that the system lookback parameters are all fine and don't need to be scaled at the beginning of the network's history.
PartialGraph
The `PartialGraph` of blocks.
Is a graph necessarily connected, or is this just a bag of blocks, with each disconnected subgraph being reported in heads/tails?
The latter. The partial graph is a DAG fragment, including disconnected components. [Figure: 4 example PartialGraphs, with Heads and Tails. Note they aren't tipsets.]
Storage Power Consensus
TODO: remove all stale .id, .go files referenced
The Storage Power Consensus subsystem is the main interface which enables Filecoin nodes to agree on the state of the system. SPC accounts for individual storage miners’ effective power over consensus in given chains in its Power Table. It also runs Expected Consensus (the underlying consensus algorithm in use by Filecoin), enabling storage miners to run leader election and generate new blocks updating the state of the Filecoin system.
Succinctly, the SPC subsystem offers the following services:
Access to the Power Table for every subchain, accounting for individual storage miner power and total power on-chain.
Access to Expected Consensus for individual storage miners, enabling:
- Access to verifiable randomness Tickets as provided by drand for the rest of the protocol.
- Running Leader Election to produce new blocks.
- Running Chain Selection across subchains using EC’s weighting function.
- Identification of the most recently finalized tipset, for use by all protocol participants.
Much of the Storage Power Consensus’ subsystem functionality is detailed in the code below but we touch upon some of its behaviors in more detail.
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import addr "github.com/filecoin-project/go-address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"
import spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
import node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
type StoragePowerConsensusSubsystem struct {//(@mutable)
ChooseTipsetToMine(tipsets [chain.Tipset]) [chain.Tipset]
node node_base.FilecoinNode
ec ExpectedConsensus
blockchain blockchain.BlockchainSubsystem
// call by BlockchainSubsystem during block reception
ValidateBlock(block block.Block) error
IsWinningPartialTicket(
st st.StateTree
partialTicket abi.PartialTicket
sectorUtilization abi.StoragePower
numSectors util.UVarint
) bool
_getStoragePowerActorState(stateTree st.StateTree) spowact.StoragePowerActorState
validateTicket(
tix block.Ticket
pk filcrypto.VRFPublicKey
minerActorAddr addr.Address
) bool
computeChainWeight(tipset chain.Tipset) block.ChainWeight
StoragePowerConsensusError() StoragePowerConsensusError
GetFinalizedEpoch(currentEpoch abi.ChainEpoch) abi.ChainEpoch
}
type StoragePowerConsensusError struct {}
Distinguishing between storage miners and block miners
There are two ways to earn Filecoin tokens in the Filecoin network:
- By participating in the Storage Market as a storage provider and being paid by clients for file storage deals.
- By mining new blocks on the network, helping modify system state and secure the Filecoin consensus mechanism.
We must distinguish between both types of “miners” (storage and block miners). Leader Election in Filecoin is predicated on a miner’s storage power. Thus, while all block miners will be storage miners, the reverse is not necessarily true.
However, given Filecoin’s “useful Proof-of-Work” is achieved through file storage (PoRep and PoSt), there is little overhead cost for storage miners to participate in leader election. Such a Storage Miner Actor need only register with the Storage Power Actor in order to participate in Expected Consensus and mine blocks.
On Power
Claimed power is assigned to every sector as a static function of its `SectorStorageWeightDesc`, which includes `SectorSize`, `Duration`, and `DealWeight`. DealWeight is a measure that maps the size and duration of active deals in a sector during its lifetime to its impact on power and reward distribution. A CommittedCapacity Sector (see Sector Types in Storage Mining Subsystem) will have a DealWeight of zero, but all sectors have an explicit Duration, which is defined from the ChainEpoch that the sector comes online in a ProveCommit message to the Expiration ChainEpoch of the sector. In principle, power is the number of votes a miner has in leader election, and it is a point-in-time concept of storage. However, the exact function that maps `SectorStorageWeightDesc` to claimed `StoragePower` and `BlockReward` will be announced soon.
More precisely,
- Claimed power = power from ProveCommit sectors minus sectors in TemporaryFault effective duration.
- Nominal power = claimed power, unless the miner is in DetectedFault or Challenged state. Nominal power is used to determine total network storage power for purposes of consensus minimum.
- Consensus power = nominal power, unless the miner fails to meet consensus minimum, or is undercollateralized.
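To make these definitions concrete, here is a minimal sketch of the three derived power levels; the type and helper names are illustrative assumptions, not the normative implementation:

```go
package main

import "fmt"

// Hypothetical miner summary, for illustration only.
type minerPower struct {
	provenPower       int64 // power from ProveCommitted sectors
	faultedPower      int64 // power of sectors in TemporaryFault effective duration
	detectedFault     bool  // miner is in DetectedFault or Challenged state
	meetsConsensusMin bool  // miner meets the consensus minimum
	collateralOK      bool  // miner is fully collateralized
}

// Claimed power = ProveCommit power minus temporarily faulted power.
func claimed(m minerPower) int64 { return m.provenPower - m.faultedPower }

// Nominal power = claimed power, zeroed if the miner is faulted or challenged.
func nominal(m minerPower) int64 {
	if m.detectedFault {
		return 0
	}
	return claimed(m)
}

// Consensus power = nominal power, zeroed if the miner misses the
// consensus minimum or is undercollateralized.
func consensus(m minerPower) int64 {
	if !m.meetsConsensusMin || !m.collateralOK {
		return 0
	}
	return nominal(m)
}

func main() {
	m := minerPower{provenPower: 100, faultedPower: 30, meetsConsensusMin: true, collateralOK: true}
	fmt.Println(claimed(m), nominal(m), consensus(m)) // 70 70 70
}
```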
Beacon Entries
The Filecoin protocol uses randomness produced by a drand beacon as a source of unbiasable randomness for use in the chain (see randomness).
In turn these random seeds are used by:
- The sector_sealer as SealSeeds to bind sector commitments to a given subchain.
- The post_generator as PoStChallenges to prove sectors remain committed as of a given block.
- The Storage Power subsystem as randomness in leader_election to determine a miner’s eligibility to mine a block.
This randomness may be drawn from various Filecoin chain epochs by the respective protocols that use them according to their security requirements.
It is important to note that a given Filecoin network and a given drand network need not have the same round time; that is, blocks may be generated faster or slower by Filecoin than randomness is generated by drand. For instance, if the drand beacon produces randomness twice as fast as Filecoin produces blocks, we might expect two random values to be produced in a Filecoin epoch; conversely, if the Filecoin network is twice as fast as drand, we might expect a random value every other Filecoin epoch. Accordingly, depending on both networks’ configurations, certain Filecoin blocks could contain multiple drand entries, or none.
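As an illustrative sketch of the epoch-to-round mapping this implies, the following assumes that drand round r is scheduled at drandGenesisTime + (r-1)*drandPeriod. The function name mirrors the MaxBeaconRoundForEpoch used in the pseudocode below, but the constants and exact arithmetic here are assumptions:

```go
package main

import "fmt"

// Illustrative network parameters in seconds (assumptions).
const (
	filGenesisTime   = 1000
	filEpochDuration = 30 // Filecoin epoch duration
	drandGenesisTime = 1000
	drandPeriod      = 15 // here drand runs twice as fast as Filecoin
)

// maxBeaconRoundForEpoch returns the latest drand round whose scheduled
// time is not after the start of the given Filecoin epoch.
func maxBeaconRoundForEpoch(epoch int64) int64 {
	epochTime := filGenesisTime + epoch*filEpochDuration
	// drand round r is scheduled at drandGenesisTime + (r-1)*drandPeriod
	return (epochTime-drandGenesisTime)/drandPeriod + 1
}

func main() {
	// With drand twice as fast as Filecoin, the max round advances by 2 per epoch.
	fmt.Println(maxBeaconRoundForEpoch(0), maxBeaconRoundForEpoch(1), maxBeaconRoundForEpoch(2)) // 1 3 5
}
```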
Furthermore, any call to the drand network for a new randomness entry during an outage must be blocking, as noted with the drand.Public() calls below.
In all cases, Filecoin blocks must include all drand beacon outputs generated since the last epoch in the BeaconEntries field of the block header. Any use of randomness from a given Filecoin epoch should use the last valid drand entry included in a Filecoin block. This is shown below.
Get drand randomness for VM
For operations such as PoRep creation, proof validation, or anything else that requires randomness for the Filecoin VM, the following method shows how to extract the drand entry from the chain. Note that the round may span multiple Filecoin epochs if drand is slower; the block with the lowest epoch number will contain the requested beacon entry. Likewise, if there have been null rounds where the beacon entry should have been inserted, we need to iterate over the chain to find the block in which the entry was inserted.
func GetRandomnessFromBeacon(e ChainEpoch, head ChainEpoch) (DrandEntry, error) {
// get the drand round associated with the timestamp of this epoch
drandRound := MaxBeaconRoundForEpoch(e)
// get the minimum drand timestamp associated with the drand round
drandTs := drandGenesisTime + (drandRound-1)*drandPeriod
// get the minimum filecoin epoch associated with this timestamp
minEpoch := (drandTs - filGenesisTime) / filEpochDuration
for minEpoch <= head {
// if this is not a null block, then it must have the entry we want:
// the requested drand entry must be in the list of drand entries
// included in this block. If it is not the case,
// it means the block is invalid - but this condition is caught by the
// block validation logic.
if !chain.IsNullBlock(minEpoch) {
return getDrandEntryFromBlockHeader(chain.Block(minEpoch), drandRound)
}
// otherwise, we need to continue progressing on the chain, i.e. maybe no
// miner was elected or there was a filecoin / drand outage
minEpoch++
}
return DrandEntry{}, errors.New("no block containing the drand entry found before head")
}
func getDrandEntryFromBlockHeader(block Block, round Round) (DrandEntry, error) {
for _, dr := range block.DrandEntries {
if dr.Round == round {
return dr, nil
}
}
return DrandEntry{}, errors.New("drand entry not found in block")
}
Fetch randomness from drand network
When mining, a miner can fetch entries from the drand network to include them in
the new block by calling the method GetBeaconEntriesForEpoch
.
func GetBeaconEntriesForEpoch(epoch ChainEpoch) []BeaconEntry {
// special case genesis: the genesis block is pre-generated and so cannot include a beacon entry
// (since it will not have been generated). Hence, we only start checking beacon entries at the first block after genesis.
// If that block includes a wrong beacon entry, we assume that a majority of honest miners at network birth
// will simply fork.
entries := []BeaconEntry{}
if epoch == 0 {
return entries
}
maxDrandRound := MaxBeaconRoundForEpoch(epoch)
// if checking the first post-genesis block, simply fetch the latest entry.
if epoch == 1 {
rand := drand.Public(maxDrandRound)
return append(entries, rand)
}
// for the rest, fetch all drand entries generated between this epoch and last
prevMaxDrandRound := MaxBeaconRoundForEpoch(epoch - 1)
if (maxDrandRound == prevMaxDrandRound) {
// no new beacon randomness
return entries
}
curr := maxDrandRound
for curr > prevMaxDrandRound {
rand := drand.Public(curr)
entries = append(entries, rand)
curr -= 1
}
// return entries in increasing order
reverse(entries)
return entries
}
Validating Beacon Entries on block reception
Per the above, a Filecoin chain will contain the entirety of the beacon’s output from the Filecoin genesis to the current block.
Given their role in leader election and other critical protocols in Filecoin, a block’s beacon entries must be validated for every block. See drand for details. This can be done by ensuring every beacon entry is a valid signature over the prior one in the chain, using drand’s Verify endpoint as follows:
// This need not be done for the genesis block
// We assume that blockHeader and priorBlockHeader are two valid subsequent headers where block was mined atop priorBlock
func ValidateBeaconEntries(blockHeader, priorBlockHeader BlockHeader) error {
currEntries := blockHeader.BeaconEntries
prevEntries := priorBlockHeader.BeaconEntries
// special case for genesis block (it has no beacon entry and so the first
// verifiable value comes at height 2, as with GetBeaconEntriesForEpoch())
if priorBlockHeader.Epoch == 0 {
return nil
}
maxRoundForEntry := MaxBeaconRoundForEpoch(blockHeader.Epoch)
// ensure entries are not repeated in blocks
lastBlocksLastEntry := prevEntries[len(prevEntries)-1]
if lastBlocksLastEntry.Round == maxRoundForEntry && len(currEntries) != 0 {
return errors.New("did not expect a new entry in this round")
}
// preparing to check that entries properly follow one another
var entries []BeaconEntry
// at currIdx == 0, must fetch last Fil block's last BeaconEntry
entries = append(entries, lastBlocksLastEntry)
entries = append(entries, currEntries...)
currIdx := len(entries) - 1
// ensure that the last entry in the header is not in the future (i.e. that this is not a Filecoin
// block being mined with a future known drand entry).
if entries[currIdx].Round != maxRoundForEntry {
return fmt.Errorf("expected final beacon entry in block to be at round %d, got %d", maxRoundForEntry, entries[currIdx].Round)
}
for currIdx > 0 {
// walking back the entries to ensure they follow one another
currEntry := entries[currIdx]
prevEntry := entries[currIdx - 1]
err := drand.Verify(node.drandPubKey, prevEntry.Data, currEntry.Data, currEntry.Round)
if err != nil {
return err
}
currIdx -= 1
}
return nil
}
Tickets
Filecoin block headers also contain a single “ticket”, generated from the block epoch’s beacon entry. Tickets are used to break ties in the Fork Choice Rule, for forks of equal weight.
You can find the Ticket data structure here
Whenever comparing tickets in Filecoin, the comparison is of the bytes of the tickets’ VRFDigests.
Randomness Ticket generation
At a Filecoin epoch n, a new ticket is generated using the appropriate beacon entry for epoch n.
The miner runs the beacon entry through a Verifiable Random Function (VRF) to get a new unique ticket. The beacon entry is prepended with the ticket domain separation tag and concatenated with the miner actor address (to ensure miners using the same worker keys get different tickets).
To generate a ticket for a given epoch n:
randSeed = GetRandomnessFromBeacon(n)
newTicketRandomness = VRF_miner(H(TicketProdDST || index || Serialization(randSeed, minerActorAddress)))
We use the VRF from
Verifiable Random Functions for ticket generation (see the PrepareNewTicket
method below).
package storage_mining
import (
addr "github.com/filecoin-project/go-address"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
smarkact "github.com/filecoin-project/specs-actors/actors/builtin/storage_market"
sminact "github.com/filecoin-project/specs-actors/actors/builtin/storage_miner"
spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
acrypto "github.com/filecoin-project/specs-actors/actors/crypto"
indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
serde "github.com/filecoin-project/specs-actors/actors/serde"
filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
filproofs "github.com/filecoin-project/specs/libraries/filcrypto/filproofs"
block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
util "github.com/filecoin-project/specs/util"
cid "github.com/ipfs/go-cid"
peer "github.com/libp2p/go-libp2p-core/peer"
)
type Serialization = util.Serialization
var Assert = util.Assert
var TODO = util.TODO
// Note that implementations may choose to provide default generation methods for miners created
// without miner/owner keypairs. We omit these details from the spec.
// Also note that the pledge amount should be available in the ownerAddr in order for this call
// to succeed.
func (sms *StorageMiningSubsystem_I) CreateMiner(
state stateTree.StateTree,
ownerAddr addr.Address,
workerAddr addr.Address,
sectorSize util.UInt,
peerId peer.ID,
pledgeAmt abi.TokenAmount,
) (addr.Address, error) {
ownerActor, ok := state.GetActor(ownerAddr)
Assert(ok)
unsignedCreationMessage := &msg.UnsignedMessage_I{
From_: ownerAddr,
To_: builtin.StoragePowerActorAddr,
Method_: builtin.Method_StoragePowerActor_CreateMiner,
Params_: serde.MustSerializeParams(ownerAddr, workerAddr, peerId),
CallSeqNum_: ownerActor.CallSeqNum(),
Value_: pledgeAmt,
GasPrice_: 0,
GasLimit_: msg.GasAmount_SentinelUnlimited(),
}
var workerKey filcrypto.SigKeyPair // sms._keyStore().Worker()
signedMessage, err := msg.Sign(unsignedCreationMessage, workerKey)
if err != nil {
return addr.Undef, err
}
err = sms.Node().MessagePool().Syncer().SubmitMessage(signedMessage)
if err != nil {
return addr.Undef, err
}
// WAIT for block reception with appropriate response from SPA
util.IMPL_TODO()
// harvest address from that block
var storageMinerAddr addr.Address
// and set in key store appropriately
return storageMinerAddr, nil
}
func (sms *StorageMiningSubsystem_I) HandleStorageDeal(deal smarkact.StorageDeal) {
sms.SectorIndex().AddNewDeal(deal)
// stagedDealResponse := sms.SectorIndex().AddNewDeal(deal)
// TODO: way within a node to notify different components
// market.StorageProvider().NotifyStorageDealStaged(&storage_provider.StorageDealStagedNotification_I{
// Deal_: deal,
// SectorID_: stagedDealResponse.SectorID(),
// })
}
func (sms *StorageMiningSubsystem_I) CommitSectorError() smarkact.StorageDeal {
panic("TODO")
}
// triggered by new block reception and tipset assembly
func (sms *StorageMiningSubsystem_I) OnNewBestChain() {
sms._runMiningCycle()
}
// triggered by wall clock
func (sms *StorageMiningSubsystem_I) OnNewRound() {
sms._runMiningCycle()
}
func (sms *StorageMiningSubsystem_I) _runMiningCycle() {
chainHead := sms._blockchain().BestChain().HeadTipset()
sma := sms._getStorageMinerActorState(chainHead.StateTree(), sms.Node().Repository().KeyStore().MinerAddress())
if sma.PoStState.Is_OK() {
ePoSt := sms._tryLeaderElection(chainHead.StateTree(), sma)
if ePoSt != nil {
// Randomness for ticket generation in block production
randomness1 := sms._blockchain().BestChain().GetTicketProductionRandSeed(sms._blockchain().LatestEpoch())
newTicket := sms.PrepareNewTicket(randomness1, sms.Node().Repository().KeyStore().MinerAddress())
sms._blockProducer().GenerateBlock(*ePoSt, newTicket, chainHead, sms.Node().Repository().KeyStore().MinerAddress())
}
} else if sma.PoStState.Is_Challenged() {
sPoSt := sms._trySurprisePoSt(chainHead.StateTree(), sma)
var gasLimit msg.GasAmount
var gasPrice = abi.TokenAmount(0)
util.IMPL_FINISH("read from consts (in this case user set param)")
sms._submitSurprisePoStMessage(chainHead.StateTree(), *sPoSt, gasPrice, gasLimit)
}
}
func (sms *StorageMiningSubsystem_I) _tryLeaderElection(currState stateTree.StateTree, sma sminact.StorageMinerActorState) *abi.OnChainElectionPoStVerifyInfo {
// Randomness for ElectionPoSt
randomnessK := sms._blockchain().BestChain().GetPoStChallengeRandSeed(sms._blockchain().LatestEpoch())
input := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_ElectionPoStChallengeSeed, randomnessK, sms.Node().Repository().KeyStore().MinerAddress())
// Use VRF to generate secret randomness
postRandomness := sms.Node().Repository().KeyStore().WorkerKey().Impl().Generate(input).Output()
// TODO: add how sectors are actually stored in the SMS proving set
util.TODO()
provingSet := make([]abi.SectorID, 0)
candidates := sms.StorageProving().Impl().GenerateElectionPoStCandidates(postRandomness, provingSet)
if len(candidates) <= 0 {
return nil // fail to generate post candidates
}
winningCandidates := make([]abi.PoStCandidate, 0)
var numMinerSectors uint64
TODO() // update
// numMinerSectors := uint64(len(sma.SectorTable().Impl().ActiveSectors_.SectorsOn()))
for _, candidate := range candidates {
sectorNum := candidate.SectorID.Number
sectorWeightDesc, ok := sma.GetStorageWeightDescForSectorMaybe(sectorNum)
if !ok {
return nil
}
sectorPower := indices.ConsensusPowerForStorageWeight(sectorWeightDesc)
if sms._consensus().IsWinningPartialTicket(currState, candidate.PartialTicket, sectorPower, numMinerSectors) {
winningCandidates = append(winningCandidates, candidate)
}
}
if len(winningCandidates) <= 0 {
return nil
}
postProofs := sms.StorageProving().Impl().CreateElectionPoStProof(postRandomness, winningCandidates)
electionPoSt := &abi.OnChainElectionPoStVerifyInfo{
Candidates: winningCandidates,
Randomness: postRandomness,
Proofs: postProofs,
}
return electionPoSt
}
func (sms *StorageMiningSubsystem_I) PrepareNewTicket(randomness abi.RandomnessSeed, minerActorAddr addr.Address) block.Ticket {
// run it through the VRF and get deterministic output
// take the VRFResult of that ticket as input, specifying the personalization (see data structures)
// append the miner actor address for the miner generating this in order to prevent miners with the same
// worker keys from generating the same randomness (given the VRF)
input := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_TicketProduction, randomness, minerActorAddr)
// run through VRF
vrfRes := sms.Node().Repository().KeyStore().WorkerKey().Impl().Generate(input)
newTicket := &block.Ticket_I{
VRFResult_: vrfRes,
Output_: vrfRes.Output(),
}
return newTicket
}
func (sms *StorageMiningSubsystem_I) _getStorageMinerActorState(stateTree stateTree.StateTree, minerAddr addr.Address) sminact.StorageMinerActorState {
actorState, ok := stateTree.GetActor(minerAddr)
util.Assert(ok)
substateCID := actorState.State()
substate, ok := sms.Node().Repository().StateStore().Get(cid.Cid(substateCID))
if !ok {
panic("Couldn't find sma state")
}
// fix conversion to bytes
util.IMPL_TODO(substate)
var serializedSubstate Serialization
var st sminact.StorageMinerActorState
serde.MustDeserialize(serializedSubstate, &st)
return st
}
func (sms *StorageMiningSubsystem_I) _getStoragePowerActorState(stateTree stateTree.StateTree) spowact.StoragePowerActorState {
powerAddr := builtin.StoragePowerActorAddr
actorState, ok := stateTree.GetActor(powerAddr)
util.Assert(ok)
substateCID := actorState.State()
substate, ok := sms.Node().Repository().StateStore().Get(cid.Cid(substateCID))
if !ok {
panic("Couldn't find spa state")
}
// fix conversion to bytes
util.IMPL_TODO(substate)
var serializedSubstate util.Serialization
var st spowact.StoragePowerActorState
serde.MustDeserialize(serializedSubstate, &st)
return st
}
func (sms *StorageMiningSubsystem_I) VerifyElectionPoSt(inds indices.Indices, header block.BlockHeader, onChainInfo abi.OnChainElectionPoStVerifyInfo) bool {
sma := sms._getStorageMinerActorState(header.ParentState(), header.Miner())
spa := sms._getStoragePowerActorState(header.ParentState())
pow, found := spa.PowerTable[header.Miner()]
if !found {
return false
}
// 1. Verify miner has enough power (includes implicit checks on min miner size
// and challenge status via SPA's power table).
if pow == abi.StoragePower(0) {
return false
}
// 2. verify no duplicate tickets included
challengeIndices := make(map[int64]bool)
for _, tix := range onChainInfo.Candidates {
if _, ok := challengeIndices[tix.ChallengeIndex]; ok {
return false
}
challengeIndices[tix.ChallengeIndex] = true
}
// 3. Verify partialTicket values are appropriate
if !sms._verifyElection(header, onChainInfo) {
return false
}
// verify the partialTickets themselves
// 4. Verify appropriate randomness
// TODO: fix away from BestChain()... every block should track its own chain up to its own production.
randomness := sms._blockchain().BestChain().GetPoStChallengeRandSeed(header.Epoch())
input := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_ElectionPoStChallengeSeed, randomness, header.Miner())
postRand := &filcrypto.VRFResult_I{
Output_: onChainInfo.Randomness,
}
// TODO if the workerAddress is secp then payload will be the blake2b hash of its public key
// and we will need to recover the entire public key from worker before handing off to Verify
// example of recover code: https://github.com/ipsn/go-secp256k1/blob/master/secp256.go#L93
workerKey := sma.Info.Worker.Payload()
// Verify VRF output from appropriate input corresponds to randomness used
if !postRand.Verify(input, filcrypto.VRFPublicKey(workerKey)) {
return false
}
// A proof must be a valid snark proof with the correct public inputs
// 5. Get public inputs
info := sma.Info
sectorSize := info.SectorSize
postCfg := filproofs.ElectionPoStCfg(sectorSize)
pvInfo := abi.PoStVerifyInfo{
Candidates: onChainInfo.Candidates,
Proofs: onChainInfo.Proofs,
Randomness: onChainInfo.Randomness,
}
pv := filproofs.MakeElectionPoStVerifier(postCfg)
// 5. Verify the PoSt Proof
isPoStVerified := pv.VerifyElectionPoSt(pvInfo)
return isPoStVerified
}
func (sms *StorageMiningSubsystem_I) _verifyElection(header block.BlockHeader, onChainInfo abi.OnChainElectionPoStVerifyInfo) bool {
st := sms._getStorageMinerActorState(header.ParentState(), header.Miner())
var numMinerSectors uint64
TODO()
// TODO: Decide whether to sample sectors uniformly for EPoSt (the cleanest),
// or to sample weighted by nominal power.
for _, info := range onChainInfo.Candidates {
sectorNum := info.SectorID.Number
sectorWeightDesc, ok := st.GetStorageWeightDescForSectorMaybe(sectorNum)
if !ok {
return false
}
sectorPower := indices.ConsensusPowerForStorageWeight(sectorWeightDesc)
if !sms._consensus().IsWinningPartialTicket(header.ParentState(), info.PartialTicket, sectorPower, numMinerSectors) {
return false
}
}
return true
}
func (sms *StorageMiningSubsystem_I) _trySurprisePoSt(currState stateTree.StateTree, sma sminact.StorageMinerActorState) *abi.OnChainSurprisePoStVerifyInfo {
if !sma.PoStState.Is_Challenged() {
return nil
}
// get randomness for SurprisePoSt
challEpoch := sma.PoStState.SurpriseChallengeEpoch
randomnessK := sms._blockchain().BestChain().GetPoStChallengeRandSeed(challEpoch)
// unlike with ElectionPoSt no need to use a VRF
postRandomness := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_SurprisePoStChallengeSeed, randomnessK, sms.Node().Repository().KeyStore().MinerAddress())
// TODO: add how sectors are actually stored in the SMS proving set
util.TODO()
provingSet := make([]abi.SectorID, 0)
candidates := sms.StorageProving().Impl().GenerateSurprisePoStCandidates(abi.PoStRandomness(postRandomness), provingSet)
if len(candidates) <= 0 {
// Error. Will fail this surprise post and must then redeclare faults
return nil // fail to generate post candidates
}
winningCandidates := make([]abi.PoStCandidate, 0)
for _, candidate := range candidates {
if sma.VerifySurprisePoStMeetsTargetReq(candidate) {
winningCandidates = append(winningCandidates, candidate)
}
}
postProofs := sms.StorageProving().Impl().CreateSurprisePoStProof(abi.PoStRandomness(postRandomness), winningCandidates)
// var ctc sector.ChallengeTicketsCommitment // TODO: proofs to fix when complete
surprisePoSt := &abi.OnChainSurprisePoStVerifyInfo{
// CommT_: ctc,
Candidates: winningCandidates,
Proofs: postProofs,
}
return surprisePoSt
}
func (sms *StorageMiningSubsystem_I) _submitSurprisePoStMessage(state stateTree.StateTree, sPoSt abi.OnChainSurprisePoStVerifyInfo, gasPrice abi.TokenAmount, gasLimit msg.GasAmount) error {
// TODO if workerAddr is not a secp key (e.g. BLS) then this will need to be handled differently
workerAddr, err := addr.NewSecp256k1Address(sms.Node().Repository().KeyStore().WorkerKey().VRFPublicKey())
if err != nil {
return err
}
worker, ok := state.GetActor(workerAddr)
Assert(ok)
unsignedCreationMessage := &msg.UnsignedMessage_I{
From_: sms.Node().Repository().KeyStore().MinerAddress(),
To_: sms.Node().Repository().KeyStore().MinerAddress(),
Method_: builtin.Method_StorageMinerActor_SubmitSurprisePoStResponse,
Params_: serde.MustSerializeParams(sPoSt),
CallSeqNum_: worker.CallSeqNum(),
Value_: abi.TokenAmount(0),
GasPrice_: gasPrice,
GasLimit_: gasLimit,
}
var workerKey filcrypto.SigKeyPair // sms.Node().Repository().KeyStore().Worker()
signedMessage, err := msg.Sign(unsignedCreationMessage, workerKey)
if err != nil {
return err
}
err = sms.Node().MessagePool().Syncer().SubmitMessage(signedMessage)
if err != nil {
return err
}
return nil
}
Ticket Validation
Each Ticket should be generated from the prior one in the VRF-chain and verified accordingly as shown in validateTicket
below.
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import addr "github.com/filecoin-project/go-address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"
import spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
import node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
type StoragePowerConsensusSubsystem struct {//(@mutable)
ChooseTipsetToMine(tipsets [chain.Tipset]) [chain.Tipset]
node node_base.FilecoinNode
ec ExpectedConsensus
blockchain blockchain.BlockchainSubsystem
// call by BlockchainSubsystem during block reception
ValidateBlock(block block.Block) error
IsWinningPartialTicket(
st st.StateTree
partialTicket abi.PartialTicket
sectorUtilization abi.StoragePower
numSectors util.UVarint
) bool
_getStoragePowerActorState(stateTree st.StateTree) spowact.StoragePowerActorState
validateTicket(
tix block.Ticket
pk filcrypto.VRFPublicKey
minerActorAddr addr.Address
) bool
computeChainWeight(tipset chain.Tipset) block.ChainWeight
StoragePowerConsensusError() StoragePowerConsensusError
GetFinalizedEpoch(currentEpoch abi.ChainEpoch) abi.ChainEpoch
}
type StoragePowerConsensusError struct {}
package storage_power_consensus
import (
"math"
addr "github.com/filecoin-project/go-address"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
acrypto "github.com/filecoin-project/specs-actors/actors/crypto"
inds "github.com/filecoin-project/specs-actors/actors/runtime/indices"
serde "github.com/filecoin-project/specs-actors/actors/serde"
filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
util "github.com/filecoin-project/specs/util"
cid "github.com/ipfs/go-cid"
)
// Storage Power Consensus Subsystem
func (spc *StoragePowerConsensusSubsystem_I) ValidateBlock(block block.Block_I) error {
util.IMPL_FINISH()
return nil
}
func (spc *StoragePowerConsensusSubsystem_I) validateTicket(ticket block.Ticket, pk filcrypto.VRFPublicKey, minerActorAddr addr.Address) bool {
randomness1 := spc.blockchain().BestChain().GetTicketProductionRandSeed(spc.blockchain().LatestEpoch())
return ticket.Verify(randomness1, pk, minerActorAddr)
}
func (spc *StoragePowerConsensusSubsystem_I) ComputeChainWeight(tipset chain.Tipset) block.ChainWeight {
return spc.ec().ComputeChainWeight(tipset)
}
func (spc *StoragePowerConsensusSubsystem_I) IsWinningPartialTicket(stateTree stateTree.StateTree, inds inds.Indices, partialTicket abi.PartialTicket, sectorUtilization abi.StoragePower, numSectors util.UVarint) bool {
// finalize the partial ticket
challengeTicket := acrypto.SHA256(abi.Bytes(partialTicket))
networkPower := inds.TotalNetworkEffectivePower()
sectorsSampled := uint64(math.Ceil(float64(node_base.EPOST_SAMPLE_RATE_NUM) / float64(node_base.EPOST_SAMPLE_RATE_DENOM) * float64(numSectors)))
return spc.ec().IsWinningChallengeTicket(challengeTicket, sectorUtilization, networkPower, sectorsSampled, numSectors)
}
func (spc *StoragePowerConsensusSubsystem_I) _getStoragePowerActorState(stateTree stateTree.StateTree) spowact.StoragePowerActorState {
powerAddr := builtin.StoragePowerActorAddr
actorState, ok := stateTree.GetActor(powerAddr)
util.Assert(ok)
substateCID := actorState.State()
substate, ok := spc.node().Repository().StateStore().Get(cid.Cid(substateCID))
util.Assert(ok)
// fix conversion to bytes
util.IMPL_FINISH(substate)
var serializedSubstate util.Serialization
var st spowact.StoragePowerActorState
serde.MustDeserialize(serializedSubstate, &st)
return st
}
func (spc *StoragePowerConsensusSubsystem_I) GetFinalizedEpoch(currentEpoch abi.ChainEpoch) abi.ChainEpoch {
return currentEpoch - node_base.FINALITY
}
Minimum Miner Size
In order to secure Storage Power Consensus, the system defines a minimum miner size required to participate in consensus.
Specifically, miners must have at least MIN_MINER_SIZE_STOR of power (i.e. storage power currently used in storage deals) in order to participate in leader election. If no miner has MIN_MINER_SIZE_STOR or more power, miners with at least as much power as the smallest miner in the top MIN_MINER_SIZE_TARG of miners (sorted by storage power) will be able to participate in leader election. In plain English, taking MIN_MINER_SIZE_TARG = 3 for instance, this means that miners with at least as much power as the 3rd largest miner will be eligible to participate in consensus.
Miners smaller than this cannot mine blocks and earn block rewards in the network. Their power will still be counted in the total network (raw or claimed) storage power, even though their power will not be counted as votes for leader election. However, it is important to note that such miners can still have their power faulted and be penalized accordingly.
Accordingly, to bootstrap the network, the genesis block must include miners, potentially just CommittedCapacity sectors, to initiate the network.
The MIN_MINER_SIZE_TARG condition will not be used in a network in which any miner has more than MIN_MINER_SIZE_STOR power. It is nonetheless defined to ensure liveness in small networks (e.g. close to genesis or after large power drops).
NOTE: The below values are currently placeholders.
We currently set:
- MIN_MINER_SIZE_STOR = 100 * (1 << 40) Bytes (100 TiB)
- MIN_MINER_SIZE_TARG = 3
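The eligibility rule above can be sketched as follows, using the placeholder values quoted in this section; the helper name and the exact tie-handling are assumptions for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// Placeholder constants from the spec text.
const (
	minMinerSizeStor = 100 * (1 << 40) // 100 TiB
	minMinerSizeTarg = 3
)

// eligibleForElection reports whether a miner with the given power may
// participate in leader election, given all miners' powers on the network.
func eligibleForElection(power uint64, allPowers []uint64) bool {
	// Case 1: the miner itself meets the absolute size threshold.
	if power >= minMinerSizeStor {
		return true
	}
	// If any miner meets the absolute threshold, smaller miners are excluded.
	for _, p := range allPowers {
		if p >= minMinerSizeStor {
			return false
		}
	}
	// Case 2: no miner meets the threshold; compare against the
	// MIN_MINER_SIZE_TARG-th largest miner instead.
	sorted := append([]uint64(nil), allPowers...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })
	idx := minMinerSizeTarg - 1
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return power >= sorted[idx]
}

func main() {
	// All miners below 100 TiB: eligibility keys off the 3rd largest (30 GiB).
	powers := []uint64{50 << 30, 40 << 30, 30 << 30, 10 << 30}
	fmt.Println(eligibleForElection(30<<30, powers), eligibleForElection(10<<30, powers)) // true false
}
```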
Network recovery after halting
Placeholder where we will define a means of rebooting network liveness after it halts catastrophically (i.e. empty power table).
Storage Power Actor
StoragePowerActorState
implementation
package power
import (
"fmt"
"reflect"
addr "github.com/filecoin-project/go-address"
cid "github.com/ipfs/go-cid"
errors "github.com/pkg/errors"
"golang.org/x/xerrors"
abi "github.com/filecoin-project/specs-actors/actors/abi"
big "github.com/filecoin-project/specs-actors/actors/abi/big"
"github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
. "github.com/filecoin-project/specs-actors/actors/util"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/filecoin-project/specs-actors/actors/util/smoothing"
)
// genesis power in bytes = 750,000 GiB
var InitialQAPowerEstimatePosition = big.Mul(big.NewInt(750_000), big.NewInt(1<<30))
// max chain throughput in bytes per epoch = 120 ProveCommits / epoch = 3,840 GiB
var InitialQAPowerEstimateVelocity = big.Mul(big.NewInt(3_840), big.NewInt(1<<30))
type State struct {
TotalRawBytePower abi.StoragePower
// TotalBytesCommitted includes claims from miners below min power threshold
TotalBytesCommitted abi.StoragePower
TotalQualityAdjPower abi.StoragePower
// TotalQABytesCommitted includes claims from miners below min power threshold
TotalQABytesCommitted abi.StoragePower
TotalPledgeCollateral abi.TokenAmount
// These fields are set once per epoch in the previous cron tick and used
// for consistent values across a single epoch's state transition.
ThisEpochRawBytePower abi.StoragePower
ThisEpochQualityAdjPower abi.StoragePower
ThisEpochPledgeCollateral abi.TokenAmount
ThisEpochQAPowerSmoothed *smoothing.FilterEstimate
MinerCount int64
// Number of miners having proven the minimum consensus power.
MinerAboveMinPowerCount int64
// A queue of events to be triggered by cron, indexed by epoch.
CronEventQueue cid.Cid // Multimap, HAMT[ChainEpoch]AMT[CronEvent]
// First epoch in which a cron task may be stored.
// Cron will iterate every epoch between this and the current epoch inclusively to find tasks to execute.
FirstCronEpoch abi.ChainEpoch
// Last epoch power cron tick has been processed.
LastProcessedCronEpoch abi.ChainEpoch
// Claimed power for each miner.
Claims cid.Cid // Map, HAMT[address]Claim
ProofValidationBatch *cid.Cid
}
type Claim struct {
// Sum of raw byte power for a miner's sectors.
RawBytePower abi.StoragePower
// Sum of quality adjusted power for a miner's sectors.
QualityAdjPower abi.StoragePower
}
type CronEvent struct {
MinerAddr addr.Address
CallbackPayload []byte
}
type AddrKey = adt.AddrKey
func ConstructState(emptyMapCid, emptyMMapCid cid.Cid) *State {
return &State{
TotalRawBytePower: abi.NewStoragePower(0),
TotalBytesCommitted: abi.NewStoragePower(0),
TotalQualityAdjPower: abi.NewStoragePower(0),
TotalQABytesCommitted: abi.NewStoragePower(0),
TotalPledgeCollateral: abi.NewTokenAmount(0),
ThisEpochRawBytePower: abi.NewStoragePower(0),
ThisEpochQualityAdjPower: abi.NewStoragePower(0),
ThisEpochPledgeCollateral: abi.NewTokenAmount(0),
ThisEpochQAPowerSmoothed: smoothing.NewEstimate(InitialQAPowerEstimatePosition, InitialQAPowerEstimateVelocity),
FirstCronEpoch: 0,
LastProcessedCronEpoch: abi.ChainEpoch(-1),
CronEventQueue: emptyMMapCid,
Claims: emptyMapCid,
MinerCount: 0,
MinerAboveMinPowerCount: 0,
}
}
// MinerNominalPowerMeetsConsensusMinimum is used to validate Election PoSt
// winners outside the chain state. If the miner has over a threshold of power
// the miner meets the minimum. If the network is below a threshold of
// miners and the miner has power > zero, the miner meets the minimum.
func (st *State) MinerNominalPowerMeetsConsensusMinimum(s adt.Store, miner addr.Address) (bool, error) { //nolint:deadcode,unused
claims, err := adt.AsMap(s, st.Claims)
if err != nil {
return false, xerrors.Errorf("failed to load claims: %w", err)
}
claim, ok, err := getClaim(claims, miner)
if err != nil {
return false, err
}
if !ok {
return false, errors.Errorf("no claim for actor %v", miner)
}
minerNominalPower := claim.QualityAdjPower
// if miner is larger than min power requirement, we're set
if minerNominalPower.GreaterThanEqual(ConsensusMinerMinPower) {
return true, nil
}
// otherwise, if ConsensusMinerMinMiners miners meet min power requirement, return false
if st.MinerAboveMinPowerCount >= ConsensusMinerMinMiners {
return false, nil
}
// If fewer than ConsensusMinerMinMiners are over the threshold, any miner with non-zero power can win a block.
return minerNominalPower.GreaterThan(abi.NewStoragePower(0)), nil
}
// Parameters may be negative to subtract.
func (st *State) AddToClaim(s adt.Store, miner addr.Address, power abi.StoragePower, qapower abi.StoragePower) error {
claims, err := adt.AsMap(s, st.Claims)
if err != nil {
return xerrors.Errorf("failed to load claims: %w", err)
}
if err := st.addToClaim(claims, miner, power, qapower); err != nil {
return xerrors.Errorf("failed to add claim: %w", err)
}
st.Claims, err = claims.Root()
if err != nil {
return xerrors.Errorf("failed to flush claims: %w", err)
}
return nil
}
func (st *State) addToClaim(claims *adt.Map, miner addr.Address, power abi.StoragePower, qapower abi.StoragePower) error {
oldClaim, ok, err := getClaim(claims, miner)
if err != nil {
return fmt.Errorf("failed to get claim: %w", err)
}
if !ok {
return exitcode.ErrNotFound.Wrapf("no claim for actor %v", miner)
}
// The committed-bytes totals are always updated directly.
st.TotalQABytesCommitted = big.Add(st.TotalQABytesCommitted, qapower)
st.TotalBytesCommitted = big.Add(st.TotalBytesCommitted, power)
newClaim := Claim{
RawBytePower: big.Add(oldClaim.RawBytePower, power),
QualityAdjPower: big.Add(oldClaim.QualityAdjPower, qapower),
}
prevBelow := oldClaim.QualityAdjPower.LessThan(ConsensusMinerMinPower)
stillBelow := newClaim.QualityAdjPower.LessThan(ConsensusMinerMinPower)
if prevBelow && !stillBelow {
// just passed min miner size
st.MinerAboveMinPowerCount++
st.TotalQualityAdjPower = big.Add(st.TotalQualityAdjPower, newClaim.QualityAdjPower)
st.TotalRawBytePower = big.Add(st.TotalRawBytePower, newClaim.RawBytePower)
} else if !prevBelow && stillBelow {
// just went below min miner size
st.MinerAboveMinPowerCount--
st.TotalQualityAdjPower = big.Sub(st.TotalQualityAdjPower, oldClaim.QualityAdjPower)
st.TotalRawBytePower = big.Sub(st.TotalRawBytePower, oldClaim.RawBytePower)
} else if !prevBelow && !stillBelow {
// Was above the threshold, still above
st.TotalQualityAdjPower = big.Add(st.TotalQualityAdjPower, qapower)
st.TotalRawBytePower = big.Add(st.TotalRawBytePower, power)
}
AssertMsg(newClaim.RawBytePower.GreaterThanEqual(big.Zero()), "negative claimed raw byte power: %v", newClaim.RawBytePower)
AssertMsg(newClaim.QualityAdjPower.GreaterThanEqual(big.Zero()), "negative claimed quality adjusted power: %v", newClaim.QualityAdjPower)
AssertMsg(st.MinerAboveMinPowerCount >= 0, "negative number of miners larger than min: %v", st.MinerAboveMinPowerCount)
return setClaim(claims, miner, &newClaim)
}
func getClaim(claims *adt.Map, a addr.Address) (*Claim, bool, error) {
var out Claim
found, err := claims.Get(AddrKey(a), &out)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to get claim for address %v", a)
}
if !found {
return nil, false, nil
}
return &out, true, nil
}
func (st *State) addPledgeTotal(amount abi.TokenAmount) {
st.TotalPledgeCollateral = big.Add(st.TotalPledgeCollateral, amount)
AssertMsg(st.TotalPledgeCollateral.GreaterThanEqual(big.Zero()), "pledged amount cannot be negative")
}
func (st *State) appendCronEvent(events *adt.Multimap, epoch abi.ChainEpoch, event *CronEvent) error {
// If the event is scheduled before FirstCronEpoch, lower FirstCronEpoch so the event will be found.
if epoch < st.FirstCronEpoch {
st.FirstCronEpoch = epoch
}
if err := events.Add(epochKey(epoch), event); err != nil {
return xerrors.Errorf("failed to store cron event at epoch %v for miner %v: %w", epoch, event.MinerAddr, err)
}
return nil
}
func (st *State) updateSmoothedEstimate(delta abi.ChainEpoch) {
filterQAPower := smoothing.LoadFilter(st.ThisEpochQAPowerSmoothed, smoothing.DefaultAlpha, smoothing.DefaultBeta)
st.ThisEpochQAPowerSmoothed = filterQAPower.NextEstimate(st.ThisEpochQualityAdjPower, delta)
}
func loadCronEvents(mmap *adt.Multimap, epoch abi.ChainEpoch) ([]CronEvent, error) {
var events []CronEvent
var ev CronEvent
err := mmap.ForEach(epochKey(epoch), &ev, func(i int64) error {
events = append(events, ev)
return nil
})
return events, err
}
func setClaim(claims *adt.Map, a addr.Address, claim *Claim) error {
Assert(claim.RawBytePower.GreaterThanEqual(big.Zero()))
Assert(claim.QualityAdjPower.GreaterThanEqual(big.Zero()))
if err := claims.Put(AddrKey(a), claim); err != nil {
return xerrors.Errorf("failed to put claim with address %s power %v: %w", a, claim, err)
}
return nil
}
// CurrentTotalPower returns current power values accounting for minimum miner
// and minimum power
func CurrentTotalPower(st *State) (abi.StoragePower, abi.StoragePower) {
if st.MinerAboveMinPowerCount < ConsensusMinerMinMiners {
return st.TotalBytesCommitted, st.TotalQABytesCommitted
}
return st.TotalRawBytePower, st.TotalQualityAdjPower
}
func epochKey(e abi.ChainEpoch) adt.Keyer {
return adt.IntKey(int64(e))
}
func init() {
// Check that ChainEpoch is indeed a signed integer to confirm that epochKey is making the right interpretation.
var e abi.ChainEpoch
if reflect.TypeOf(e).Kind() != reflect.Int64 {
panic("incorrect chain epoch encoding")
}
}
StoragePowerActor implementation
package power
import (
"bytes"
"github.com/filecoin-project/go-address"
addr "github.com/filecoin-project/go-address"
abi "github.com/filecoin-project/specs-actors/actors/abi"
big "github.com/filecoin-project/specs-actors/actors/abi/big"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
initact "github.com/filecoin-project/specs-actors/actors/builtin/init"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
. "github.com/filecoin-project/specs-actors/actors/util"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/filecoin-project/specs-actors/actors/util/smoothing"
)
type Runtime = vmr.Runtime
type SectorTermination int64
const (
ErrTooManyProveCommits = exitcode.FirstActorSpecificExitCode + iota
)
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.CreateMiner,
3: a.UpdateClaimedPower,
4: a.EnrollCronEvent,
5: a.OnEpochTickEnd,
6: a.UpdatePledgeTotal,
7: a.OnConsensusFault,
8: a.SubmitPoRepForBulkVerify,
9: a.CurrentTotalPower,
}
}
var _ abi.Invokee = Actor{}
// Storage miner actor constructor params are defined here so the power actor can send them to the init actor
// to instantiate miners.
type MinerConstructorParams struct {
OwnerAddr addr.Address
WorkerAddr addr.Address
ControlAddrs []addr.Address
SealProofType abi.RegisteredSealProof
PeerId abi.PeerID
Multiaddrs []abi.Multiaddrs
}
type SectorStorageWeightDesc struct {
SectorSize abi.SectorSize
Duration abi.ChainEpoch
DealWeight abi.DealWeight
VerifiedDealWeight abi.DealWeight
}
////////////////////////////////////////////////////////////////////////////////
// Actor methods
////////////////////////////////////////////////////////////////////////////////
func (a Actor) Constructor(rt Runtime, _ *adt.EmptyValue) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
emptyMap, err := adt.MakeEmptyMap(adt.AsStore(rt)).Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
emptyMMapCid, err := adt.MakeEmptyMultimap(adt.AsStore(rt)).Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
st := ConstructState(emptyMap, emptyMMapCid)
rt.State().Create(st)
return nil
}
type CreateMinerParams struct {
Owner addr.Address
Worker addr.Address
SealProofType abi.RegisteredSealProof
Peer abi.PeerID
Multiaddrs []abi.Multiaddrs
}
type CreateMinerReturn struct {
IDAddress addr.Address // The canonical ID-based address for the actor.
RobustAddress addr.Address // A more expensive but re-org-safe address for the newly created actor.
}
func (a Actor) CreateMiner(rt Runtime, params *CreateMinerParams) *CreateMinerReturn {
rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
ctorParams := MinerConstructorParams{
OwnerAddr: params.Owner,
WorkerAddr: params.Worker,
SealProofType: params.SealProofType,
PeerId: params.Peer,
Multiaddrs: params.Multiaddrs,
}
ctorParamBuf := new(bytes.Buffer)
err := ctorParams.MarshalCBOR(ctorParamBuf)
builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to serialize miner constructor params %v", ctorParams)
ret, code := rt.Send(
builtin.InitActorAddr,
builtin.MethodsInit.Exec,
&initact.ExecParams{
CodeCID: builtin.StorageMinerActorCodeID,
ConstructorParams: ctorParamBuf.Bytes(),
},
rt.Message().ValueReceived(), // Pass on any value to the new actor.
)
builtin.RequireSuccess(rt, code, "failed to init new actor")
var addresses initact.ExecReturn
err = ret.Into(&addresses)
builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to unmarshal exec return value %v", ret)
var st State
rt.State().Transaction(&st, func() {
claims, err := adt.AsMap(adt.AsStore(rt), st.Claims)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load claims")
err = setClaim(claims, addresses.IDAddress, &Claim{abi.NewStoragePower(0), abi.NewStoragePower(0)})
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to put power in claimed table while creating miner")
st.MinerCount += 1
st.Claims, err = claims.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush claims")
})
return &CreateMinerReturn{
IDAddress: addresses.IDAddress,
RobustAddress: addresses.RobustAddress,
}
}
type UpdateClaimedPowerParams struct {
RawByteDelta abi.StoragePower
QualityAdjustedDelta abi.StoragePower
}
// Adds or removes claimed power for the calling actor.
// May only be invoked by a miner actor.
func (a Actor) UpdateClaimedPower(rt Runtime, params *UpdateClaimedPowerParams) *adt.EmptyValue {
rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
minerAddr := rt.Message().Caller()
var st State
rt.State().Transaction(&st, func() {
claims, err := adt.AsMap(adt.AsStore(rt), st.Claims)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load claims")
err = st.addToClaim(claims, minerAddr, params.RawByteDelta, params.QualityAdjustedDelta)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update power raw %s, qa %s", params.RawByteDelta, params.QualityAdjustedDelta)
st.Claims, err = claims.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush claims")
})
return nil
}
type EnrollCronEventParams struct {
EventEpoch abi.ChainEpoch
Payload []byte
}
func (a Actor) EnrollCronEvent(rt Runtime, params *EnrollCronEventParams) *adt.EmptyValue {
rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
minerAddr := rt.Message().Caller()
minerEvent := CronEvent{
MinerAddr: minerAddr,
CallbackPayload: params.Payload,
}
// Ensure it is not possible to enter a large negative number which would cause problems in cron processing.
if params.EventEpoch < 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "cron event epoch %d cannot be less than zero", params.EventEpoch)
}
var st State
rt.State().Transaction(&st, func() {
events, err := adt.AsMultimap(adt.AsStore(rt), st.CronEventQueue)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load cron events")
err = st.appendCronEvent(events, params.EventEpoch, &minerEvent)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to enroll cron event")
st.CronEventQueue, err = events.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush cron events")
})
return nil
}
// Called by Cron.
func (a Actor) OnEpochTickEnd(rt Runtime, _ *adt.EmptyValue) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.CronActorAddr)
a.processDeferredCronEvents(rt)
a.processBatchProofVerifies(rt)
var st State
rt.State().Transaction(&st, func() {
// update next epoch's power and pledge values
// this must come before the next epoch's rewards are calculated
// so that next epoch reward reflects power added this epoch
rawBytePower, qaPower := CurrentTotalPower(&st)
st.ThisEpochPledgeCollateral = st.TotalPledgeCollateral
st.ThisEpochQualityAdjPower = qaPower
st.ThisEpochRawBytePower = rawBytePower
delta := rt.CurrEpoch() - st.LastProcessedCronEpoch
st.updateSmoothedEstimate(delta)
st.LastProcessedCronEpoch = rt.CurrEpoch()
})
// update network KPI in RewardActor
_, code := rt.Send(
builtin.RewardActorAddr,
builtin.MethodsReward.UpdateNetworkKPI,
&st.ThisEpochRawBytePower,
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed to update network KPI with Reward Actor")
return nil
}
func (a Actor) UpdatePledgeTotal(rt Runtime, pledgeDelta *abi.TokenAmount) *adt.EmptyValue {
rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
var st State
rt.State().Transaction(&st, func() {
st.addPledgeTotal(*pledgeDelta)
})
return nil
}
func (a Actor) OnConsensusFault(rt Runtime, pledgeAmount *abi.TokenAmount) *adt.EmptyValue {
rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
minerAddr := rt.Message().Caller()
var st State
rt.State().Transaction(&st, func() {
claims, err := adt.AsMap(adt.AsStore(rt), st.Claims)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load claims")
claim, powerOk, err := getClaim(claims, minerAddr)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to read claimed power for fault")
if !powerOk {
rt.Abortf(exitcode.ErrNotFound, "miner %v not registered (already slashed?)", minerAddr)
}
Assert(claim.RawBytePower.GreaterThanEqual(big.Zero()))
Assert(claim.QualityAdjPower.GreaterThanEqual(big.Zero()))
err = st.addToClaim(claims, minerAddr, claim.RawBytePower.Neg(), claim.QualityAdjPower.Neg())
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not add to claim for %s after loading existing claim for this address", minerAddr)
st.addPledgeTotal(pledgeAmount.Neg())
// delete miner actor claims
err = claims.Delete(AddrKey(minerAddr))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to remove miner %v", minerAddr)
st.MinerCount -= 1
st.Claims, err = claims.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush claims")
})
return nil
}
// GasOnSubmitVerifySeal is amount of gas charged for SubmitPoRepForBulkVerify
// This number is empirically determined
const GasOnSubmitVerifySeal = 34721049
func (a Actor) SubmitPoRepForBulkVerify(rt Runtime, sealInfo *abi.SealVerifyInfo) *adt.EmptyValue {
rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
minerAddr := rt.Message().Caller()
var st State
rt.State().Transaction(&st, func() {
store := adt.AsStore(rt)
var mmap *adt.Multimap
if st.ProofValidationBatch == nil {
mmap = adt.MakeEmptyMultimap(store)
} else {
var err error
mmap, err = adt.AsMultimap(adt.AsStore(rt), *st.ProofValidationBatch)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load proof batch set")
}
arr, found, err := mmap.Get(adt.AddrKey(minerAddr))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get seal verify infos at addr %s", minerAddr)
if found && arr.Length() >= MaxMinerProveCommitsPerEpoch {
rt.Abortf(ErrTooManyProveCommits, "miner %s attempting to prove commit over %d sectors in epoch", minerAddr, MaxMinerProveCommitsPerEpoch)
}
err = mmap.Add(adt.AddrKey(minerAddr), sealInfo)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to insert proof into batch")
mmrc, err := mmap.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush proofs batch")
rt.ChargeGas("OnSubmitVerifySeal", GasOnSubmitVerifySeal, 0)
st.ProofValidationBatch = &mmrc
})
return nil
}
type CurrentTotalPowerReturn struct {
RawBytePower abi.StoragePower
QualityAdjPower abi.StoragePower
PledgeCollateral abi.TokenAmount
QualityAdjPowerSmoothed *smoothing.FilterEstimate
}
// Returns the total power and pledge recorded by the power actor.
// The returned values are frozen during the cron tick before this epoch
// so that this method returns consistent values while processing all messages
// of an epoch.
func (a Actor) CurrentTotalPower(rt Runtime, _ *adt.EmptyValue) *CurrentTotalPowerReturn {
rt.ValidateImmediateCallerAcceptAny()
var st State
rt.State().Readonly(&st)
return &CurrentTotalPowerReturn{
RawBytePower: st.ThisEpochRawBytePower,
QualityAdjPower: st.ThisEpochQualityAdjPower,
PledgeCollateral: st.ThisEpochPledgeCollateral,
QualityAdjPowerSmoothed: st.ThisEpochQAPowerSmoothed,
}
}
////////////////////////////////////////////////////////////////////////////////
// Method utility functions
////////////////////////////////////////////////////////////////////////////////
func (a Actor) processBatchProofVerifies(rt Runtime) {
var st State
var miners []address.Address
verifies := make(map[address.Address][]abi.SealVerifyInfo)
rt.State().Transaction(&st, func() {
store := adt.AsStore(rt)
if st.ProofValidationBatch == nil {
return
}
mmap, err := adt.AsMultimap(store, *st.ProofValidationBatch)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load proofs validation batch")
err = mmap.ForAll(func(k string, arr *adt.Array) error {
a, err := address.NewFromBytes([]byte(k))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to parse address key")
miners = append(miners, a)
var infos []abi.SealVerifyInfo
var svi abi.SealVerifyInfo
err = arr.ForEach(&svi, func(i int64) error {
infos = append(infos, svi)
return nil
})
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to iterate over proof verify array for miner %s", a)
verifies[a] = infos
return nil
})
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to iterate proof batch")
st.ProofValidationBatch = nil
})
res, err := rt.Syscalls().BatchVerifySeals(verifies)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to batch verify")
for _, m := range miners {
vres, ok := res[m]
if !ok {
rt.Abortf(exitcode.ErrNotFound, "batch verify seals syscall implemented incorrectly")
}
verifs := verifies[m]
seen := map[abi.SectorNumber]struct{}{}
var successful []abi.SectorNumber
for i, r := range vres {
if r {
snum := verifs[i].SectorID.Number
if _, exists := seen[snum]; exists {
// filter-out duplicates
continue
}
seen[snum] = struct{}{}
successful = append(successful, snum)
}
}
// The exit code is explicitly ignored
_, _ = rt.Send(
m,
builtin.MethodsMiner.ConfirmSectorProofsValid,
&builtin.ConfirmSectorProofsParams{Sectors: successful},
abi.NewTokenAmount(0),
)
}
}
func (a Actor) processDeferredCronEvents(rt Runtime) {
rtEpoch := rt.CurrEpoch()
var cronEvents []CronEvent
var st State
rt.State().Transaction(&st, func() {
events, err := adt.AsMultimap(adt.AsStore(rt), st.CronEventQueue)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load cron events")
for epoch := st.FirstCronEpoch; epoch <= rtEpoch; epoch++ {
epochEvents, err := loadCronEvents(events, epoch)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load cron events at %v", epoch)
cronEvents = append(cronEvents, epochEvents...)
if len(epochEvents) > 0 {
err = events.RemoveAll(epochKey(epoch))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to clear cron events at %v", epoch)
}
}
st.FirstCronEpoch = rtEpoch + 1
st.CronEventQueue, err = events.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush events")
})
failedMinerCrons := make([]addr.Address, 0)
for _, event := range cronEvents {
_, code := rt.Send(
event.MinerAddr,
builtin.MethodsMiner.OnDeferredCronEvent,
vmr.CBORBytes(event.CallbackPayload),
abi.NewTokenAmount(0),
)
// If a callback fails, this actor continues to invoke other callbacks
// and persists state removing the failed event from the event queue. It won't be tried again.
// Failures are unexpected here but will result in removal of miner power.
if code != exitcode.Ok {
rt.Log(vmr.WARN, "OnDeferredCronEvent failed for miner %s: exitcode %d", event.MinerAddr, code)
failedMinerCrons = append(failedMinerCrons, event.MinerAddr)
}
}
rt.State().Transaction(&st, func() {
claims, err := adt.AsMap(adt.AsStore(rt), st.Claims)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load claims")
// Remove power and leave miner frozen
for _, minerAddr := range failedMinerCrons {
claim, found, err := getClaim(claims, minerAddr)
if err != nil {
rt.Log(vmr.ERROR, "failed to get claim for miner %s after failing OnDeferredCronEvent: %s", minerAddr, err)
continue
}
if !found {
rt.Log(vmr.WARN, "miner OnDeferredCronEvent failed for miner %s with no power", minerAddr)
continue
}
// zero out miner power
err = st.addToClaim(claims, minerAddr, claim.RawBytePower.Neg(), claim.QualityAdjPower.Neg())
if err != nil {
rt.Log(vmr.WARN, "failed to remove (%v, %v) power for miner %s after failed cron", claim.RawBytePower, claim.QualityAdjPower, minerAddr)
continue
}
}
st.Claims, err = claims.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush claims")
})
}
The Power Table
The portion of blocks a given miner generates through leader election in EC (and so the block rewards they earn) is proportional to their Power Fraction
over time. That is, a miner whose storage represents 1% of total storage on the network should mine 1% of blocks in expectation.
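This expectation can be sketched numerically. The sketch below is illustrative only: the function name is made up, and on-chain consensus uses big-integer quality-adjusted power, not floats.

```go
package main

import "fmt"

// expectedBlockShare sketches the Power Fraction rule described above: a
// miner's expected share of blocks equals its share of total network power.
func expectedBlockShare(minerPower, networkPower float64) float64 {
	return minerPower / networkPower
}

func main() {
	// A miner holding 1% of network storage mines ~1% of blocks in expectation.
	fmt.Println(expectedBlockShare(1, 100))
}
```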
SPC provides a power table abstraction which tracks miner power (i.e. miner storage in relation to network storage) over time. The power table is updated for new sector commitments (incrementing miner power), for failed PoSts (decrementing miner power) or for other storage and consensus faults.
Sector ProveCommit is the first time power is proven to the network and hence power is first added upon successful sector ProveCommit. Power is also added when a sector’s TemporaryFault period has ended. Miners are expected to prove over all their sectors that contribute to their power.
Power is decremented when a sector expires, when a sector enters TemporaryFault, or when miners invoke Sector Termination. Miners can also extend the lifetime of a sector through ExtendSectorExpiration, thus modifying its SectorStorageWeightDesc. This may or may not have an impact on power, but the machinery is in place to preserve the flexibility.
The Miner lifecycle in the power table should be roughly as follows:
- MinerRegistration: A new miner with an associated worker public key and address is registered on the power table by the storage mining subsystem, along with their associated sector size (there is only one per worker).
- UpdatePower: These power increments and decrements are called by various storage actors (and must thus be verified by every full node on the network). Specifically:
- Power is incremented at SectorProveCommit
- All Power of a particular miner is decremented immediately after a missed SurprisePoSt (DetectedFault).
- A particular sector’s power is decremented when its TemporaryFault begins.
- A particular sector’s power is added back when its TemporaryFault ends and miner is expected to prove over this sector.
- A particular sector’s power is removed when the sector is terminated through sector expiration or miner invocation.
To summarize, only sectors in the Active state command power. A sector becomes Active when it is added upon ProveCommit. Power is immediately decremented when TemporaryFault begins on an Active sector or when the miner is in the Challenged or DetectedFault state. Power is restored when TemporaryFault ends and when the miner successfully responds to a SurprisePoSt challenge. A sector's power is removed when it is terminated through either miner invocation or normal expiration.
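The lifecycle above can be condensed as signed deltas applied to a miner's claim. In this sketch the event names are illustrative labels, not actual actor method names:

```go
package main

import "fmt"

// powerDelta sketches the power-table lifecycle: events that make a sector
// Active add its power, and events that take it out of Active remove it.
func powerDelta(event string, sectorPower int64) int64 {
	switch event {
	case "ProveCommit", "TemporaryFaultEnd":
		return +sectorPower // power added when a sector becomes Active
	case "TemporaryFaultBegin", "Expiration", "Termination":
		return -sectorPower // power removed when a sector leaves Active
	default:
		return 0
	}
}

func main() {
	claimed := int64(0)
	for _, ev := range []string{"ProveCommit", "TemporaryFaultBegin", "TemporaryFaultEnd"} {
		claimed += powerDelta(ev, 32)
	}
	fmt.Println(claimed) // net power after a fault and its recovery
}
```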
Pledge Collateral
Consensus in Filecoin is secured in part by economic incentives enforced by Pledge Collateral.
Pledge collateral is committed in proportion to the power pledged to the system (i.e., proportional to the number of sectors committed and the sector size for a miner). It is a system-wide parameter and is committed to the StoragePowerActor. Pledge collateral can be posted by the StorageMinerActor at any time, and the required amount depends on the miner's power. Details around pledge collateral will be announced soon.
Pledge Collateral will be slashed when
Consensus Faults are reported to the StoragePowerActor
's ReportConsensusFault
method, when a miner fails a SurprisePoSt (DetectedFault), or when a miner terminates a sector earlier than its duration.
Pledge Collateral is slashed for any fault affecting storage-power consensus. These include:
- Faults to expected consensus in particular (see Consensus Faults), which will be reported by a slasher to the StoragePowerActor in exchange for a reward.
- Faults affecting consensus power more generally, specifically uncommitted power faults (i.e., Storage Faults), which will be reported by the CronActor automatically, or when a miner terminates a sector earlier than its promised duration.
Token
FIL Wallet
Payment Channels
Payment channels are generally used as a mechanism to increase the scalability of blockchains and enable users to transact without involving (i.e., publishing their transactions on) the blockchain, which: i) increases the load of the system, and ii) incurs gas costs for the user. Payment channels generally use a smart contract as an agreement between the two participants. In the Filecoin blockchain Payment Channels are realised by the paychActor
.
The goal of the Payment Channel Actor specified here is to enable a series of off-chain microtransactions for applications built on top of Filecoin to be reconciled on-chain at a later time with fewer messages that involve the blockchain. Payment channels are already used in the Retrieval Market of the Filecoin Network, but their applicability is not constrained to this use case. Hence, we provide a detailed description of Payment Channels in the Filecoin network and then describe how Payment Channels are used in the specific case of the Filecoin Retrieval Market.
The payment channel actor can be used to open long-lived, flexible payment channels between users. Filecoin payment channels are uni-directional and can be funded by adding to their balance. Given the context of uni-directional payment channels, we define the payment channel sender as the party that receives some service, creates the channel, deposits funds and sends payments (hence the term payment channel sender). The payment channel recipient, on the other hand, is defined as the party that provides services and receives payment for the services delivered (hence the term payment channel recipient). The fact that payment channels are uni-directional means that only the payment channel sender can add funds and only the recipient can receive funds. Payment channels are identified by a unique address, as is the case with all Filecoin actors.
The payment channel state structure looks like this:
// A given payment channel actor is established by From (the recipient of a service)
// to enable off-chain microtransactions to To (the provider of a service) to be reconciled
// and tallied on chain.
type State struct {
// Channel owner, who has created and funded the actor - the channel sender
From addr.Address
// Recipient of payouts from channel
To addr.Address
// Amount successfully redeemed through the payment channel, paid out on `Collect()`
ToSend abi.TokenAmount
// Height at which the channel can be `Collected`
SettlingAt abi.ChainEpoch
// Height before which the channel `ToSend` cannot be collected
MinSettleHeight abi.ChainEpoch
// Collections of lane states for the channel, maintained in ID order.
LaneStates []*LaneState
}
Before continuing with the details of the Payment Channel and its components and features, it is worth defining a few terms.
- Voucher: a signed message created by either of the two channel parties that updates the channel balance. To distinguish them from the payment channel sender/recipient, we refer to the voucher parties as the voucher sender/recipient, who may or may not be the same as the payment channel ones (i.e., the voucher sender might be either the payment channel recipient or the payment channel sender).
- Redeeming a voucher: the voucher MUST be submitted on-chain by the opposite party from the one that created it. Redeeming a voucher does not trigger movement of funds from the channel to the recipient's account, but it does incur transaction/gas costs. Vouchers can be redeemed at any time up to Collect (see below), as long as the voucher has a higher Nonce than a previously submitted one.
- UpdateChannelState: the process by which a voucher is redeemed, i.e., submitted (but not cashed out) on-chain.
- Settle: the process that starts closing the channel. It can be called by either the channel creator (sender) or the channel recipient.
- Collect: the process by which funds are eventually transferred from the payment channel sender to the payment channel recipient. This process incurs transaction/gas costs.
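A toy model of this lifecycle, with method names mirroring the terms above (this is not the paych actor's API, just an illustration of the ordering of operations):

```go
package main

import (
	"errors"
	"fmt"
)

// channel is a toy model of the lifecycle defined above.
type channel struct {
	toSend  int64 // amount redeemed so far, paid out on collect
	settled bool
}

// updateChannelState redeems a voucher: funds become payable but do not move.
func (c *channel) updateChannelState(voucherValue int64) { c.toSend = voucherValue }

// settle starts closing the channel; either party may call it.
func (c *channel) settle() { c.settled = true }

// collect transfers the payable balance once the channel has been settled.
func (c *channel) collect() (int64, error) {
	if !c.settled {
		return 0, errors.New("channel not settled")
	}
	paid := c.toSend
	c.toSend = 0
	return paid, nil
}

func main() {
	ch := &channel{}
	ch.updateChannelState(30) // redeem a voucher for 30
	ch.settle()               // start closing the channel
	paid, _ := ch.collect()   // funds move only now
	fmt.Println(paid)
}
```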
Vouchers
Traditionally, in order to transact through a Payment Channel, the payment channel parties send to each other signed messages that update the balance of the channel. In Filecoin, these signed messages are called vouchers.
Throughout the interaction between the two parties, the channel sender (From
address) is sending vouchers to the recipient (To
address). The Value
included in the voucher indicates the value available for the receiving party to redeem. The Value
is based on the service that the payment channel recipient has provided to the payment channel sender. Either the payment channel recipient or the payment channel sender can Update
the balance of the channel and the balance ToSend
to the payment channel recipient (using a voucher), but the Update
(i.e., the voucher) has to be accepted by the other party before funds can be collected. Furthermore, the voucher has to be redeemed by the opposite party from the one that issued the voucher. The payment channel recipient can choose to Collect
this balance at any time incurring the corresponding gas cost.
Redeeming a voucher does not transfer funds from the payment channel to the recipient's account. Instead, redeeming a voucher denotes that some service worth Value has been provided by the payment channel recipient to the payment channel sender. It is not until the whole payment channel is collected that the funds are dispatched to the provider's account.
This is the structure of the voucher:
// A voucher can be created and sent by either of the two parties. The `To` payment channel address can redeem the voucher and then `Collect` the funds.
type SignedVoucher struct {
// ChannelAddr is the address of the payment channel this signed voucher is valid for
ChannelAddr addr.Address
// TimeLockMin sets a min epoch before which the voucher cannot be redeemed
TimeLockMin abi.ChainEpoch
// TimeLockMax sets a max epoch beyond which the voucher cannot be redeemed
// TimeLockMax set to 0 means no timeout
TimeLockMax abi.ChainEpoch
// (optional) The SecretPreImage is used by `To` to validate
SecretPreimage []byte
// (optional) Extra can be specified by `From` to add a verification method to the voucher
Extra *ModVerifyParams
// Specifies which lane the Voucher is added to (will be created if does not exist)
Lane uint64
// Nonce is set by `From` to prevent redemption of stale vouchers on a lane
Nonce uint64
// Amount voucher can be redeemed for
Amount big.Int
// (optional) MinSettleHeight can extend channel MinSettleHeight if needed
MinSettleHeight abi.ChainEpoch
// (optional) Set of lanes to be merged into `Lane`
Merges []Merge
// Sender's signature over the voucher
Signature *crypto.Signature
}
Over the course of a transaction cycle, each participant in the payment channel can send Vouchers to the other participant.
For instance, if the payment channel sender (the From address) has sent the payment channel recipient (the To address) the following three vouchers (voucher_val, voucher_nonce) for a lane with 100 FIL to be redeemed: (10, 1), (20, 2), (30, 3), then the recipient could choose to redeem (30, 3), bringing the lane’s value to 70 (100 - 30) and cancelling the preceding vouchers, i.e., they would no longer be able to redeem (10, 1) or (20, 2). However, they could redeem (20, 2), that is, 20 FIL, and then follow up with (30, 3) to redeem the remaining 10 FIL later.
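The ordering rules above can be sketched as a minimal Go program. Voucher and Lane here are simplified stand-ins for the on-chain SignedVoucher and LaneState, keeping only the fields relevant to redemption ordering; the real actor additionally verifies signatures and channel balances.

```go
package main

import "fmt"

// Voucher is a simplified stand-in for the on-chain SignedVoucher.
type Voucher struct {
	Nonce  uint64
	Amount int64 // FIL, simplified to an integer
}

// Lane tracks the highest-nonce voucher redeemed so far.
type Lane struct {
	Nonce    uint64
	Redeemed int64
}

// Redeem applies a voucher to the lane. A voucher with a nonce at or
// below the lane's current nonce is stale and rejected; otherwise the
// lane's redeemed amount is replaced (not added to), so the payer is
// only ever charged the delta over what was already redeemed.
func (l *Lane) Redeem(v Voucher) (delta int64, ok bool) {
	if v.Nonce <= l.Nonce {
		return 0, false // stale voucher
	}
	delta = v.Amount - l.Redeemed
	l.Nonce = v.Nonce
	l.Redeemed = v.Amount
	return delta, true
}

func main() {
	lane := &Lane{}
	// Redeeming (30, 3) first supersedes (10, 1) and (20, 2)...
	d, ok := lane.Redeem(Voucher{Nonce: 3, Amount: 30})
	fmt.Println(d, ok) // 30 true
	// ...so the earlier vouchers are now stale.
	_, ok = lane.Redeem(Voucher{Nonce: 2, Amount: 20})
	fmt.Println(ok) // false
}
```

Redeeming (20, 2) first and then (30, 3) instead yields deltas of 20 and 10, matching the example in the text.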
It is worth highlighting that while the Nonce is a strictly increasing value denoting the sequence of vouchers issued within a payment channel lane, the Value is not necessarily increasing. A decreasing Value (although expected rarely) can occur when a refund needs to flow from the payment channel recipient back to the payment channel sender. This can be the case when, for instance, some bits arrive corrupted during a file retrieval.
Vouchers are signed by the party that creates them and can additionally be authenticated using a (Secret, PreImage) pair provided by the paying party (channel sender). If hashing the Secret with some given algorithm (typically a one-way function like a hash) yields the value stored in the voucher’s SecretPreimage field, the Voucher is valid. The Voucher itself contains the PreImage but not the Secret (which is communicated separately to the receiving party). This enables multi-hop payments, since an intermediary cannot redeem a voucher on their own. Vouchers can also be used to update the minimum height at which a channel will be settled (i.e., closed), or carry TimeLocks to prevent voucher recipients from redeeming them too early. A channel can also have a MinSettleHeight to prevent it being closed prematurely (e.g., before the payment channel recipient has collected funds) by the payment channel creator/sender.
Once their transactions have completed, either party can choose to Settle (i.e., close) the channel. There is a 12-hour period after Settle during which either party can submit any outstanding vouchers. Once the vouchers are submitted, either party can then call Collect. This sends the payment channel recipient the ToSend amount from the channel, and refunds the remaining balance (if any) to the channel sender (the From address).
Lanes
In addition, payment channels in Filecoin can be split into lanes, created as part of updating the channel state with a payment voucher. Each lane has an associated nonce and an amount of tokens it can be redeemed for. Lanes can be thought of as transactions for several different services provided by the channel recipient to the channel sender. The nonce plays the role of a sequence number for vouchers within a given lane, where a voucher with a higher nonce replaces a voucher with a lower nonce.
Payment channel lanes allow for a lot of accounting between parties to be done off-chain and reconciled via single updates to the payment channel. The multiple lanes enable two parties to use a single payment channel to adjudicate multiple independent sets of payments.
One example of such accounting is the merging of lanes. When a sender-recipient pair has a payment channel with many lanes, the channel recipient has to pay the gas cost for each one of the lanes in order to Collect funds. Merging lanes allows the channel recipient to send a “merge” request to the channel sender, asking to merge (some of) the lanes and consolidate the funds, thereby reducing the overall gas cost. As an incentive for the channel sender to accept the merge request, the channel recipient can ask for a lower total value to balance out the gas cost. For instance, if the recipient has collected vouchers worth 10 FIL across two lanes, say 5 from each, and the gas cost of submitting the vouchers for these funds is 2, then the recipient can ask for 9 from the sender in exchange for merging the two lanes. This way, the channel sender pays less overall for the services it received, and the channel recipient pays less gas to submit the vouchers for the services they provided.
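The economics of the example above can be sketched as simple arithmetic. The gas figures (one unit per separately submitted voucher, one unit for the merged voucher) and the choice to pass the full gas saving on as a discount are illustrative assumptions, not protocol rules; mergeOffer is a hypothetical helper.

```go
package main

import "fmt"

// mergeOffer computes the discounted voucher value a recipient might
// propose when asking the sender to merge lanes: the combined redeemed
// value across lanes, minus the gas the merge saves (passed on to the
// sender as the incentive to accept).
func mergeOffer(laneValues []int64, gasPerVoucher, mergedGas int64) (total, offer int64) {
	for _, v := range laneValues {
		total += v
	}
	separateGas := gasPerVoucher * int64(len(laneValues)) // cost of collecting each lane on its own
	saving := separateGas - mergedGas                     // gas avoided by merging
	return total, total - saving
}

func main() {
	// Two lanes of 5 FIL each, gas 1 per voucher, gas 1 for the merged voucher:
	// the recipient offers 9 instead of 10, matching the example in the text.
	total, offer := mergeOffer([]int64{5, 5}, 1, 1)
	fmt.Println(total, offer) // 10 9
}
```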
Lifecycle of a Payment Channel
Summarising, we have the following sequence:
- Two parties agree to a series of transactions (for instance as part of file retrieval) with one party paying the other party up to some total sum of Filecoin over time. This is part of the deal-phase, it takes place off-chain and does not (at this stage) involve payment channels.
- The payment channel sender (who is the recipient of some service, e.g., a file in the case of file retrieval) calls the Payment Channel Actor to create the payment channel and deposit funds.
- Either of the two parties can create vouchers to send to the other party.
- The voucher recipient saves the voucher locally. Each voucher has to be submitted by the party opposite the one that created it.
- Either immediately or later, the voucher recipient “redeems” the voucher by submitting it to the chain, calling UpdateChannelState.
- The channel sender or the channel recipient Settles the payment channel.
- A 12-hour period to close the channel begins.
- If either party has outstanding (i.e., non-redeemed) vouchers, they should now submit them to the chain (there should be the option of this being done automatically). If the channel recipient so desires, they should send a “merge lanes” request to the sender.
- The 12-hour period ends.
- Either the channel sender or the channel recipient calls Collect.
- Funds are transferred to the channel recipient’s account and any unclaimed balance goes back to the channel sender.
Payment Channels as part of the Filecoin Retrieval
Payment Channels are used in the Filecoin Retrieval Market to enable efficient off-chain payments and accounting between parties for what is expected to be a series of microtransactions, as these occur during data retrieval.
In particular, given that there is no proving method for the act of sending data from a provider (miner) to a client, there is no trust anchor between the two. Therefore, in order to discourage misbehaviour, Filecoin uses payment channels to realise a step-wise “data transfer <-> payment” relationship between the data provider and the client (data receiver). Clients issue requests for data, which miners respond to. The miner is entitled to ask for interim payments, at a volume-oriented interval agreed in the deal phase. To facilitate this process, the Filecoin client creates a payment channel once the provider has agreed to the proposed deal. The client should also lock in the payment channel an amount equal to the cost of retrieving the entire block of data requested. Every time a provider completes the transfer of the pre-specified amount of data, they can request a payment. The client responds to this request with a voucher, which the provider can redeem (immediately or later), as per the process described earlier.
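The incremental “data transfer <-> payment” loop can be sketched as follows. The voucher struct loosely mirrors SignedVoucher; retrievalVouchers, pricePerByte, and interval are illustrative names standing in for the terms agreed in the deal phase.

```go
package main

import "fmt"

// voucher loosely mirrors the on-chain SignedVoucher, keeping only the
// fields needed to illustrate incremental retrieval payments.
type voucher struct {
	Lane   uint64
	Nonce  uint64
	Amount int64
}

// retrievalVouchers models the client side of a retrieval: after each
// agreed data interval arrives, the client issues a voucher for the
// cumulative amount owed so far, with an increasing nonce. Because each
// voucher covers everything transferred to date, the provider only ever
// needs to redeem the latest one.
func retrievalVouchers(totalBytes, interval, pricePerByte int64) []voucher {
	var vs []voucher
	var sent int64
	var nonce uint64
	for sent < totalBytes {
		chunk := interval
		if totalBytes-sent < chunk {
			chunk = totalBytes - sent // final, smaller interval
		}
		sent += chunk
		nonce++
		vs = append(vs, voucher{Lane: 0, Nonce: nonce, Amount: sent * pricePerByte})
	}
	return vs
}

func main() {
	// 2500 bytes at 2 attoFIL/byte, paid every 1000 bytes.
	for _, v := range retrievalVouchers(2500, 1000, 2) {
		fmt.Printf("nonce=%d amount=%d\n", v.Nonce, v.Amount)
	}
}
```

The cumulative amounts (2000, 4000, 5000) show why a higher-nonce voucher supersedes the earlier ones rather than adding to them.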
Payment Channel Implementation
package paychmgr
import (
"bytes"
"context"
"fmt"
"github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/go-address"
cborutil "github.com/filecoin-project/go-cbor-util"
"github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/lib/sigs"
"github.com/filecoin-project/specs-actors/actors/abi/big"
"github.com/filecoin-project/specs-actors/actors/builtin"
"github.com/filecoin-project/specs-actors/actors/builtin/account"
"github.com/filecoin-project/specs-actors/actors/builtin/paych"
xerrors "golang.org/x/xerrors"
)
// channelAccessor is used to simplify locking when accessing a channel
type channelAccessor struct {
// waitCtx is used by processes that wait for things to be confirmed
// on chain
waitCtx context.Context
sa *stateAccessor
api managerAPI
store *Store
lk *channelLock
fundsReqQueue []*fundsReq
msgListeners msgListeners
}
func newChannelAccessor(pm *Manager) *channelAccessor {
return &channelAccessor{
lk: &channelLock{globalLock: &pm.lk},
sa: pm.sa,
api: pm.pchapi,
store: pm.store,
msgListeners: newMsgListeners(),
waitCtx: pm.ctx,
}
}
func (ca *channelAccessor) getChannelInfo(addr address.Address) (*ChannelInfo, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
return ca.store.ByAddress(addr)
}
func (ca *channelAccessor) checkVoucherValid(ctx context.Context, ch address.Address, sv *paych.SignedVoucher) (map[uint64]*paych.LaneState, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
return ca.checkVoucherValidUnlocked(ctx, ch, sv)
}
func (ca *channelAccessor) checkVoucherValidUnlocked(ctx context.Context, ch address.Address, sv *paych.SignedVoucher) (map[uint64]*paych.LaneState, error) {
if sv.ChannelAddr != ch {
return nil, xerrors.Errorf("voucher ChannelAddr doesn't match channel address, got %s, expected %s", sv.ChannelAddr, ch)
}
// Load payment channel actor state
act, pchState, err := ca.sa.loadPaychActorState(ctx, ch)
if err != nil {
return nil, err
}
// Load channel "From" account actor state
var actState account.State
_, err = ca.api.LoadActorState(ctx, pchState.From, &actState, nil)
if err != nil {
return nil, err
}
from := actState.Address
// verify voucher signature
vb, err := sv.SigningBytes()
if err != nil {
return nil, err
}
// TODO: technically, either party may create and sign a voucher.
// However, for now, we only accept them from the channel creator.
// More complex handling logic can be added later
if err := sigs.Verify(sv.Signature, from, vb); err != nil {
return nil, err
}
// Check the voucher against the highest known voucher nonce / value
laneStates, err := ca.laneState(ctx, pchState, ch)
if err != nil {
return nil, err
}
// If the new voucher nonce value is less than the highest known
// nonce for the lane
ls, lsExists := laneStates[sv.Lane]
if lsExists && sv.Nonce <= ls.Nonce {
return nil, fmt.Errorf("nonce too low")
}
// If the voucher amount is less than the highest known voucher amount
if lsExists && sv.Amount.LessThanEqual(ls.Redeemed) {
return nil, fmt.Errorf("voucher amount is lower than amount for voucher with lower nonce")
}
// Total redeemed is the total redeemed amount for all lanes, including
// the new voucher
// eg
//
// lane 1 redeemed: 3
// lane 2 redeemed: 2
// voucher for lane 1: 5
//
// Voucher supersedes lane 1 redeemed, therefore
// effective lane 1 redeemed: 5
//
// lane 1: 5
// lane 2: 2
// -
// total: 7
totalRedeemed, err := ca.totalRedeemedWithVoucher(laneStates, sv)
if err != nil {
return nil, err
}
// Total required balance = total redeemed + toSend
// Must not exceed actor balance
newTotal := types.BigAdd(totalRedeemed, pchState.ToSend)
if act.Balance.LessThan(newTotal) {
return nil, fmt.Errorf("not enough funds in channel to cover voucher")
}
if len(sv.Merges) != 0 {
return nil, fmt.Errorf("dont currently support paych lane merges")
}
return laneStates, nil
}
func (ca *channelAccessor) checkVoucherSpendable(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, secret []byte, proof []byte) (bool, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
recipient, err := ca.getPaychRecipient(ctx, ch)
if err != nil {
return false, err
}
if sv.Extra != nil && proof == nil {
known, err := ca.store.VouchersForPaych(ch)
if err != nil {
return false, err
}
for _, v := range known {
eq, err := cborutil.Equals(v.Voucher, sv)
if err != nil {
return false, err
}
if v.Proof != nil && eq {
log.Info("CheckVoucherSpendable: using stored proof")
proof = v.Proof
break
}
}
if proof == nil {
log.Warn("CheckVoucherSpendable: nil proof for voucher with validation")
}
}
enc, err := actors.SerializeParams(&paych.UpdateChannelStateParams{
Sv: *sv,
Secret: secret,
Proof: proof,
})
if err != nil {
return false, err
}
ret, err := ca.api.Call(ctx, &types.Message{
From: recipient,
To: ch,
Method: builtin.MethodsPaych.UpdateChannelState,
Params: enc,
}, nil)
if err != nil {
return false, err
}
if ret.MsgRct.ExitCode != 0 {
return false, nil
}
return true, nil
}
func (ca *channelAccessor) getPaychRecipient(ctx context.Context, ch address.Address) (address.Address, error) {
var state paych.State
if _, err := ca.api.LoadActorState(ctx, ch, &state, nil); err != nil {
return address.Address{}, err
}
return state.To, nil
}
func (ca *channelAccessor) addVoucher(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, proof []byte, minDelta types.BigInt) (types.BigInt, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
ci, err := ca.store.ByAddress(ch)
if err != nil {
return types.BigInt{}, err
}
// Check if the voucher has already been added
for i, v := range ci.Vouchers {
eq, err := cborutil.Equals(sv, v.Voucher)
if err != nil {
return types.BigInt{}, err
}
if !eq {
continue
}
// This is a duplicate voucher.
// Update the proof on the existing voucher
if len(proof) > 0 && !bytes.Equal(v.Proof, proof) {
log.Warnf("AddVoucher: adding proof to stored voucher")
ci.Vouchers[i] = &VoucherInfo{
Voucher: v.Voucher,
Proof: proof,
}
return types.NewInt(0), ca.store.putChannelInfo(ci)
}
// Otherwise just ignore the duplicate voucher
log.Warnf("AddVoucher: voucher re-added with matching proof")
return types.NewInt(0), nil
}
// Check voucher validity
laneStates, err := ca.checkVoucherValidUnlocked(ctx, ch, sv)
if err != nil {
return types.NewInt(0), err
}
// The change in value is the delta between the voucher amount and
// the highest previous voucher amount for the lane
laneState, exists := laneStates[sv.Lane]
redeemed := big.NewInt(0)
if exists {
redeemed = laneState.Redeemed
}
delta := types.BigSub(sv.Amount, redeemed)
if minDelta.GreaterThan(delta) {
return delta, xerrors.Errorf("addVoucher: supplied token amount too low; minD=%s, D=%s; laneAmt=%s; v.Amt=%s", minDelta, delta, redeemed, sv.Amount)
}
ci.Vouchers = append(ci.Vouchers, &VoucherInfo{
Voucher: sv,
Proof: proof,
})
if ci.NextLane <= sv.Lane {
ci.NextLane = sv.Lane + 1
}
return delta, ca.store.putChannelInfo(ci)
}
func (ca *channelAccessor) allocateLane(ch address.Address) (uint64, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
// TODO: should this take into account lane state?
return ca.store.AllocateLane(ch)
}
func (ca *channelAccessor) listVouchers(ctx context.Context, ch address.Address) ([]*VoucherInfo, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
// TODO: just having a passthrough method like this feels odd. Seems like
// there should be some filtering we're doing here
return ca.store.VouchersForPaych(ch)
}
func (ca *channelAccessor) nextNonceForLane(ctx context.Context, ch address.Address, lane uint64) (uint64, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
// TODO: should this take into account lane state?
vouchers, err := ca.store.VouchersForPaych(ch)
if err != nil {
return 0, err
}
var maxnonce uint64
for _, v := range vouchers {
if v.Voucher.Lane == lane {
if v.Voucher.Nonce > maxnonce {
maxnonce = v.Voucher.Nonce
}
}
}
return maxnonce + 1, nil
}
// laneState gets the LaneStates from chain, then applies all vouchers in
// the data store over the chain state
func (ca *channelAccessor) laneState(ctx context.Context, state *paych.State, ch address.Address) (map[uint64]*paych.LaneState, error) {
// TODO: we probably want to call UpdateChannelState with all vouchers to be fully correct
// (but technically don't need to)
// Get the lane state from the chain
store := ca.api.AdtStore(ctx)
lsamt, err := adt.AsArray(store, state.LaneStates)
if err != nil {
return nil, err
}
// Note: we use a map instead of an array to store laneStates because the
// client sets the lane ID (the index) and potentially they could use a
// very large index.
var ls paych.LaneState
laneStates := make(map[uint64]*paych.LaneState, lsamt.Length())
err = lsamt.ForEach(&ls, func(i int64) error {
current := ls
laneStates[uint64(i)] = &current
return nil
})
if err != nil {
return nil, err
}
// Apply locally stored vouchers
vouchers, err := ca.store.VouchersForPaych(ch)
if err != nil && err != ErrChannelNotTracked {
return nil, err
}
for _, v := range vouchers {
for range v.Voucher.Merges {
return nil, xerrors.Errorf("paych merges not handled yet")
}
// If there's a voucher for a lane that isn't in chain state just
// create it
ls, ok := laneStates[v.Voucher.Lane]
if !ok {
ls = &paych.LaneState{
Redeemed: types.NewInt(0),
Nonce: 0,
}
laneStates[v.Voucher.Lane] = ls
}
if v.Voucher.Nonce < ls.Nonce {
continue
}
ls.Nonce = v.Voucher.Nonce
ls.Redeemed = v.Voucher.Amount
}
return laneStates, nil
}
// Get the total redeemed amount across all lanes, after applying the voucher
func (ca *channelAccessor) totalRedeemedWithVoucher(laneStates map[uint64]*paych.LaneState, sv *paych.SignedVoucher) (big.Int, error) {
// TODO: merges
if len(sv.Merges) != 0 {
return big.Int{}, xerrors.Errorf("dont currently support paych lane merges")
}
total := big.NewInt(0)
for _, ls := range laneStates {
total = big.Add(total, ls.Redeemed)
}
lane, ok := laneStates[sv.Lane]
if ok {
// If the voucher is for an existing lane, and the voucher nonce
// is higher than the lane nonce
if sv.Nonce > lane.Nonce {
// Add the delta between the redeemed amount and the voucher
// amount to the total
delta := big.Sub(sv.Amount, lane.Redeemed)
total = big.Add(total, delta)
}
} else {
// If the voucher is *not* for an existing lane, just add its
// value (implicitly a new lane will be created for the voucher)
total = big.Add(total, sv.Amount)
}
return total, nil
}
func (ca *channelAccessor) settle(ctx context.Context, ch address.Address) (cid.Cid, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
ci, err := ca.store.ByAddress(ch)
if err != nil {
return cid.Undef, err
}
msg := &types.Message{
To: ch,
From: ci.Control,
Value: types.NewInt(0),
Method: builtin.MethodsPaych.Settle,
}
smgs, err := ca.api.MpoolPushMessage(ctx, msg, nil)
if err != nil {
return cid.Undef, err
}
ci.Settling = true
err = ca.store.putChannelInfo(ci)
if err != nil {
log.Errorf("Error marking channel as settled: %s", err)
}
return smgs.Cid(), err
}
func (ca *channelAccessor) collect(ctx context.Context, ch address.Address) (cid.Cid, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
ci, err := ca.store.ByAddress(ch)
if err != nil {
return cid.Undef, err
}
msg := &types.Message{
To: ch,
From: ci.Control,
Value: types.NewInt(0),
Method: builtin.MethodsPaych.Collect,
}
smsg, err := ca.api.MpoolPushMessage(ctx, msg, nil)
if err != nil {
return cid.Undef, err
}
return smsg.Cid(), nil
}
Payment Channel Voucher
package types
import (
"encoding/base64"
"github.com/filecoin-project/specs-actors/actors/builtin/paych"
cbor "github.com/ipfs/go-ipld-cbor"
)
func DecodeSignedVoucher(s string) (*paych.SignedVoucher, error) {
data, err := base64.RawURLEncoding.DecodeString(s)
if err != nil {
return nil, err
}
var sv paych.SignedVoucher
if err := cbor.DecodeInto(data, &sv); err != nil {
return nil, err
}
return &sv, nil
}
Payment Channel Actor
package paych
import (
"bytes"
"math"
addr "github.com/filecoin-project/go-address"
abi "github.com/filecoin-project/specs-actors/actors/abi"
big "github.com/filecoin-project/specs-actors/actors/abi/big"
"github.com/filecoin-project/specs-actors/actors/builtin"
crypto "github.com/filecoin-project/specs-actors/actors/crypto"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
"github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
// Maximum number of lanes in a channel.
const MaxLane = math.MaxInt64
const SettleDelay = builtin.EpochsInHour * 12
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.UpdateChannelState,
3: a.Settle,
4: a.Collect,
}
}
var _ abi.Invokee = Actor{}
type ConstructorParams struct {
From addr.Address // Payer
To addr.Address // Payee
}
// Constructor creates a payment channel actor. See State for meaning of params.
func (pca *Actor) Constructor(rt vmr.Runtime, params *ConstructorParams) *adt.EmptyValue {
// Only InitActor can create a payment channel actor. It creates the actor on
// behalf of the payer/payee.
rt.ValidateImmediateCallerType(builtin.InitActorCodeID)
// check that both parties are capable of signing vouchers
to, err := pca.resolveAccount(rt, params.To)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to resolve to address: %s", params.To)
from, err := pca.resolveAccount(rt, params.From)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to resolve from address: %s", params.From)
emptyArrCid, err := adt.MakeEmptyArray(adt.AsStore(rt)).Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to create empty array")
st := ConstructState(from, to, emptyArrCid)
rt.State().Create(st)
return nil
}
// Resolves an address to a canonical ID address and requires it to address an account actor.
// The account actor constructor checks that the embedded address is associated with an appropriate key.
// An alternative (more expensive) would be to send a message to the actor to fetch its key.
func (pca *Actor) resolveAccount(rt vmr.Runtime, raw addr.Address) (addr.Address, error) {
resolved, ok := rt.ResolveAddress(raw)
if !ok {
return addr.Undef, exitcode.ErrNotFound.Wrapf("failed to resolve address %v", raw)
}
codeCID, ok := rt.GetActorCodeCID(resolved)
if !ok {
return addr.Undef, exitcode.ErrForbidden.Wrapf("no code for address %v", resolved)
}
if codeCID != builtin.AccountActorCodeID {
return addr.Undef, exitcode.ErrForbidden.Wrapf("actor %v must be an account (%v), was %v", raw,
builtin.AccountActorCodeID, codeCID)
}
return resolved, nil
}
////////////////////////////////////////////////////////////////////////////////
// Payment Channel state operations
////////////////////////////////////////////////////////////////////////////////
type UpdateChannelStateParams struct {
Sv SignedVoucher
Secret []byte
Proof []byte
}
// A voucher is sent by `From` to `To` off-chain in order to enable
// `To` to redeem payments on-chain in the future
type SignedVoucher struct {
// ChannelAddr is the address of the payment channel this signed voucher is valid for
ChannelAddr addr.Address
// TimeLockMin sets a min epoch before which the voucher cannot be redeemed
TimeLockMin abi.ChainEpoch
// TimeLockMax sets a max epoch beyond which the voucher cannot be redeemed
// TimeLockMax set to 0 means no timeout
TimeLockMax abi.ChainEpoch
// (optional) The SecretPreImage is used by `To` to validate
SecretPreimage []byte
// (optional) Extra can be specified by `From` to add a verification method to the voucher
Extra *ModVerifyParams
// Specifies which lane the Voucher merges into (will be created if does not exist)
Lane uint64
// Nonce is set by `From` to prevent redemption of stale vouchers on a lane
Nonce uint64
// Amount voucher can be redeemed for
Amount big.Int
// (optional) MinSettleHeight can extend channel MinSettleHeight if needed
MinSettleHeight abi.ChainEpoch
// (optional) Set of lanes to be merged into `Lane`
Merges []Merge
// Sender's signature over the voucher
Signature *crypto.Signature
}
// Modular Verification method
type ModVerifyParams struct {
Actor addr.Address
Method abi.MethodNum
Data []byte
}
type PaymentVerifyParams struct {
Extra []byte
Proof []byte
}
func (pca Actor) UpdateChannelState(rt vmr.Runtime, params *UpdateChannelStateParams) *adt.EmptyValue {
var st State
rt.State().Readonly(&st)
// both parties must sign voucher: one who submits it, the other explicitly signs it
rt.ValidateImmediateCallerIs(st.From, st.To)
var signer addr.Address
if rt.Message().Caller() == st.From {
signer = st.To
} else {
signer = st.From
}
sv := params.Sv
if sv.Signature == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher has no signature")
}
vb, err := sv.SigningBytes()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to serialize signedvoucher")
err = rt.Syscalls().VerifySignature(*sv.Signature, signer, vb)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "voucher signature invalid")
pchAddr := rt.Message().Receiver()
if pchAddr != sv.ChannelAddr {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher payment channel address %s does not match receiver %s", sv.ChannelAddr, pchAddr)
}
if rt.CurrEpoch() < sv.TimeLockMin {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot use this voucher yet!")
}
if sv.TimeLockMax != 0 && rt.CurrEpoch() > sv.TimeLockMax {
rt.Abortf(exitcode.ErrIllegalArgument, "this voucher has expired!")
}
if sv.Amount.Sign() < 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher amount must be non-negative, was %v", sv.Amount)
}
if len(sv.SecretPreimage) > 0 {
hashedSecret := rt.Syscalls().HashBlake2b(params.Secret)
if !bytes.Equal(hashedSecret[:], sv.SecretPreimage) {
rt.Abortf(exitcode.ErrIllegalArgument, "incorrect secret!")
}
}
if sv.Extra != nil {
_, code := rt.Send(
sv.Extra.Actor,
sv.Extra.Method,
&PaymentVerifyParams{
sv.Extra.Data,
params.Proof,
},
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "spend voucher verification failed")
}
rt.State().Transaction(&st, func() {
laneFound := true
lstates, err := adt.AsArray(adt.AsStore(rt), st.LaneStates)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load lanes")
// Find the voucher lane, creating if necessary.
laneId := sv.Lane
laneState := findLane(rt, lstates, sv.Lane)
if laneState == nil {
laneState = &LaneState{
Redeemed: big.Zero(),
Nonce: 0,
}
laneFound = false
}
if laneFound {
if laneState.Nonce >= sv.Nonce {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher has an outdated nonce, existing nonce: %d, voucher nonce: %d, cannot redeem",
laneState.Nonce, sv.Nonce)
}
}
// The next section actually calculates the payment amounts to update the payment channel state
// 1. (optional) sum already redeemed value of all merging lanes
redeemedFromOthers := big.Zero()
for _, merge := range sv.Merges {
if merge.Lane == sv.Lane {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher cannot merge lanes into its own lane")
}
otherls := findLane(rt, lstates, merge.Lane)
if otherls == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher specifies invalid merge lane %v", merge.Lane)
return // makes linters happy
}
if otherls.Nonce >= merge.Nonce {
rt.Abortf(exitcode.ErrIllegalArgument, "merged lane in voucher has outdated nonce, cannot redeem")
}
redeemedFromOthers = big.Add(redeemedFromOthers, otherls.Redeemed)
otherls.Nonce = merge.Nonce
err = lstates.Set(merge.Lane, otherls)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store lane %d", merge.Lane)
}
// 2. To prevent double counting, remove already redeemed amounts (from
// voucher or other lanes) from the voucher amount
laneState.Nonce = sv.Nonce
balanceDelta := big.Sub(sv.Amount, big.Add(redeemedFromOthers, laneState.Redeemed))
// 3. set new redeemed value for merged-into lane
laneState.Redeemed = sv.Amount
newSendBalance := big.Add(st.ToSend, balanceDelta)
// 4. check operation validity
if newSendBalance.LessThan(big.Zero()) {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher would leave channel balance negative")
}
if newSendBalance.GreaterThan(rt.CurrentBalance()) {
rt.Abortf(exitcode.ErrIllegalArgument, "not enough funds in channel to cover voucher")
}
// 5. add new redemption ToSend
st.ToSend = newSendBalance
// update channel settlingAt and MinSettleHeight if delayed by voucher
if sv.MinSettleHeight != 0 {
if st.SettlingAt != 0 && st.SettlingAt < sv.MinSettleHeight {
st.SettlingAt = sv.MinSettleHeight
}
if st.MinSettleHeight < sv.MinSettleHeight {
st.MinSettleHeight = sv.MinSettleHeight
}
}
err = lstates.Set(laneId, laneState)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store lane %d", laneId)
st.LaneStates, err = lstates.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save lanes")
})
return nil
}
func (pca Actor) Settle(rt vmr.Runtime, _ *adt.EmptyValue) *adt.EmptyValue {
var st State
rt.State().Transaction(&st, func() {
rt.ValidateImmediateCallerIs(st.From, st.To)
if st.SettlingAt != 0 {
rt.Abortf(exitcode.ErrIllegalState, "channel already settling")
}
st.SettlingAt = rt.CurrEpoch() + SettleDelay
if st.SettlingAt < st.MinSettleHeight {
st.SettlingAt = st.MinSettleHeight
}
})
return nil
}
func (pca Actor) Collect(rt vmr.Runtime, _ *adt.EmptyValue) *adt.EmptyValue {
var st State
rt.State().Readonly(&st)
rt.ValidateImmediateCallerIs(st.From, st.To)
if st.SettlingAt == 0 || rt.CurrEpoch() < st.SettlingAt {
rt.Abortf(exitcode.ErrForbidden, "payment channel not settling or settled")
}
// send ToSend to "To"
_, codeTo := rt.Send(
st.To,
builtin.MethodSend,
nil,
st.ToSend,
)
builtin.RequireSuccess(rt, codeTo, "Failed to send funds to `To`")
// the remaining balance will be returned to "From" upon deletion.
rt.DeleteActor(st.From)
return nil
}
func (t *SignedVoucher) SigningBytes() ([]byte, error) {
osv := *t
osv.Signature = nil
buf := new(bytes.Buffer)
if err := osv.MarshalCBOR(buf); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
// Returns the state for a lane ID, or nil if the lane has not been created.
func findLane(rt vmr.Runtime, ls *adt.Array, id uint64) *LaneState {
if id > MaxLane {
rt.Abortf(exitcode.ErrIllegalArgument, "maximum lane ID is 2^63-1")
}
var out LaneState
found, err := ls.Get(id, &out)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load lane %d", id)
if !found {
return nil
}
return &out
}
Multisig Wallet
Multisig Actor
package multisig
import (
"bytes"
"encoding/binary"
"fmt"
addr "github.com/filecoin-project/go-address"
abi "github.com/filecoin-project/specs-actors/actors/abi"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
. "github.com/filecoin-project/specs-actors/actors/util"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
type TxnID int64
func (t TxnID) Key() string {
// convert a TxnID to a HAMT key.
txnKey := make([]byte, binary.MaxVarintLen64)
n := binary.PutVarint(txnKey, int64(t))
return string(txnKey[:n])
}
type Transaction struct {
To addr.Address
Value abi.TokenAmount
Method abi.MethodNum
Params []byte
// This address at index 0 is the transaction proposer, order of this slice must be preserved.
Approved []addr.Address
}
// Data for a BLAKE2B-256 to be attached to methods referencing proposals via TXIDs.
// Ensures the existence of a cryptographic reference to the original proposal. Useful
// for offline signers and for protection when reorgs change a multisig TXID.
//
// Requester - The requesting multisig wallet member.
// All other fields - From the "Transaction" struct.
type ProposalHashData struct {
Requester addr.Address
To addr.Address
Value abi.TokenAmount
Method abi.MethodNum
Params []byte
}
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.Propose,
3: a.Approve,
4: a.Cancel,
5: a.AddSigner,
6: a.RemoveSigner,
7: a.SwapSigner,
8: a.ChangeNumApprovalsThreshold,
}
}
var _ abi.Invokee = Actor{}
type ConstructorParams struct {
Signers []addr.Address
NumApprovalsThreshold uint64
UnlockDuration abi.ChainEpoch
}
func (a Actor) Constructor(rt vmr.Runtime, params *ConstructorParams) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.InitActorAddr)
if len(params.Signers) < 1 {
rt.Abortf(exitcode.ErrIllegalArgument, "must have at least one signer")
}
// resolve signer addresses and do not allow duplicate signers
resolvedSigners := make([]addr.Address, 0, len(params.Signers))
deDupSigners := make(map[addr.Address]struct{}, len(params.Signers))
for _, signer := range params.Signers {
resolved, err := builtin.ResolveToIDAddr(rt, signer)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to resolve addr %v to ID addr", signer)
if _, ok := deDupSigners[resolved]; ok {
rt.Abortf(exitcode.ErrIllegalArgument, "duplicate signer not allowed: %s", signer)
}
resolvedSigners = append(resolvedSigners, resolved)
deDupSigners[resolved] = struct{}{}
}
if params.NumApprovalsThreshold > uint64(len(params.Signers)) {
rt.Abortf(exitcode.ErrIllegalArgument, "must not require more approvals than signers")
}
if params.NumApprovalsThreshold < 1 {
rt.Abortf(exitcode.ErrIllegalArgument, "must require at least one approval")
}
if params.UnlockDuration < 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "negative unlock duration disallowed")
}
pending, err := adt.MakeEmptyMap(adt.AsStore(rt)).Root()
if err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to create empty map: %v", err)
}
var st State
st.Signers = resolvedSigners
st.NumApprovalsThreshold = params.NumApprovalsThreshold
st.PendingTxns = pending
st.InitialBalance = abi.NewTokenAmount(0)
if params.UnlockDuration != 0 {
st.InitialBalance = rt.Message().ValueReceived()
st.UnlockDuration = params.UnlockDuration
st.StartEpoch = rt.CurrEpoch()
}
rt.State().Create(&st)
return nil
}
type ProposeParams struct {
To addr.Address
Value abi.TokenAmount
Method abi.MethodNum
Params []byte
}
type ProposeReturn struct {
// TxnID is the ID of the proposed transaction
TxnID TxnID
// Applied indicates whether the transaction was applied, as opposed to proposed but not applied due to lack of approvals.
Applied bool
// Code is the exit code of the transaction; if Applied is false, this field should be ignored.
Code exitcode.ExitCode
// Ret is the return value of the transaction; if Applied is false, this field should be ignored.
Ret []byte
}
func (a Actor) Propose(rt vmr.Runtime, params *ProposeParams) *ProposeReturn {
rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
proposer := rt.Message().Caller()
if params.Value.Sign() < 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "proposed value must be non-negative, was %v", params.Value)
}
var txnID TxnID
var st State
var txn *Transaction
rt.State().Transaction(&st, func() {
if !isSigner(proposer, st.Signers) {
rt.Abortf(exitcode.ErrForbidden, "%s is not a signer", proposer)
}
ptx, err := adt.AsMap(adt.AsStore(rt), st.PendingTxns)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pending transactions")
txnID = st.NextTxnID
st.NextTxnID += 1
txn = &Transaction{
To: params.To,
Value: params.Value,
Method: params.Method,
Params: params.Params,
Approved: []addr.Address{},
}
if err := ptx.Put(txnID, txn); err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to put transaction for propose: %v", err)
}
st.PendingTxns, err = ptx.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush pending transactions")
})
applied, ret, code := a.approveTransaction(rt, txnID, txn)
// Note: this transaction ID may not be stable across chain re-orgs.
// The proposal hash may be provided as a stability check when approving.
return &ProposeReturn{
TxnID: txnID,
Applied: applied,
Code: code,
Ret: ret,
}
}
type TxnIDParams struct {
ID TxnID
// Optional hash of proposal to ensure an operation can only apply to a
// specific proposal.
ProposalHash []byte
}
type ApproveReturn struct {
// Applied indicates whether the transaction was applied, as opposed to proposed but not applied due to lack of approvals.
Applied bool
// Code is the exit code of the transaction; if Applied is false, this field should be ignored.
Code exitcode.ExitCode
// Ret is the return value of the transaction; if Applied is false, this field should be ignored.
Ret []byte
}
func (a Actor) Approve(rt vmr.Runtime, params *TxnIDParams) *ApproveReturn {
rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
callerAddr := rt.Message().Caller()
var st State
var txn *Transaction
rt.State().Transaction(&st, func() {
callerIsSigner := isSigner(callerAddr, st.Signers)
if !callerIsSigner {
rt.Abortf(exitcode.ErrForbidden, "%s is not a signer", callerAddr)
}
ptx, err := adt.AsMap(adt.AsStore(rt), st.PendingTxns)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pending transactions")
txn = getTransaction(rt, ptx, params.ID, params.ProposalHash, true)
})
// if the transaction already has enough approvers, execute it without "processing" this approval.
approved, ret, code := executeTransactionIfApproved(rt, st, params.ID, txn)
if !approved {
// if the transaction hasn't already been approved, let's "process" this approval
// and see if we can execute the transaction
approved, ret, code = a.approveTransaction(rt, params.ID, txn)
}
return &ApproveReturn{
Applied: approved,
Code: code,
Ret: ret,
}
}
func (a Actor) Cancel(rt vmr.Runtime, params *TxnIDParams) *adt.EmptyValue {
rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
callerAddr := rt.Message().Caller()
var st State
rt.State().Transaction(&st, func() {
callerIsSigner := isSigner(callerAddr, st.Signers)
if !callerIsSigner {
rt.Abortf(exitcode.ErrForbidden, "%s is not a signer", callerAddr)
}
ptx, err := adt.AsMap(adt.AsStore(rt), st.PendingTxns)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pending txns")
txn, err := getPendingTransaction(ptx, params.ID)
if err != nil {
rt.Abortf(exitcode.ErrNotFound, "failed to get transaction for cancel: %v", err)
}
proposer := txn.Approved[0]
if proposer != callerAddr {
rt.Abortf(exitcode.ErrForbidden, "cannot cancel another signer's transaction")
}
// confirm the hashes match
calculatedHash, err := ComputeProposalHash(&txn, rt.Syscalls().HashBlake2b)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to compute proposal hash for %v", params.ID)
if params.ProposalHash != nil && !bytes.Equal(params.ProposalHash, calculatedHash[:]) {
rt.Abortf(exitcode.ErrIllegalState, "hash does not match proposal params (ensure requester is an ID address)")
}
err = ptx.Delete(params.ID)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete pending transaction")
st.PendingTxns, err = ptx.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush pending transactions")
})
return nil
}
type AddSignerParams struct {
Signer addr.Address
Increase bool
}
func (a Actor) AddSigner(rt vmr.Runtime, params *AddSignerParams) *adt.EmptyValue {
// Can only be called by the multisig wallet itself.
rt.ValidateImmediateCallerIs(rt.Message().Receiver())
resolvedNewSigner, err := builtin.ResolveToIDAddr(rt, params.Signer)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to resolve address %v", params.Signer)
var st State
rt.State().Transaction(&st, func() {
isSigner := isSigner(resolvedNewSigner, st.Signers)
if isSigner {
rt.Abortf(exitcode.ErrForbidden, "%s is already a signer", resolvedNewSigner)
}
st.Signers = append(st.Signers, resolvedNewSigner)
if params.Increase {
st.NumApprovalsThreshold = st.NumApprovalsThreshold + 1
}
})
return nil
}
type RemoveSignerParams struct {
Signer addr.Address
Decrease bool
}
func (a Actor) RemoveSigner(rt vmr.Runtime, params *RemoveSignerParams) *adt.EmptyValue {
// Can only be called by the multisig wallet itself.
rt.ValidateImmediateCallerIs(rt.Message().Receiver())
resolvedOldSigner, err := builtin.ResolveToIDAddr(rt, params.Signer)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to resolve address %v", params.Signer)
var st State
rt.State().Transaction(&st, func() {
isSigner := isSigner(resolvedOldSigner, st.Signers)
if !isSigner {
rt.Abortf(exitcode.ErrForbidden, "%s is not a signer", resolvedOldSigner)
}
if len(st.Signers) == 1 {
rt.Abortf(exitcode.ErrForbidden, "cannot remove only signer")
}
newSigners := make([]addr.Address, 0, len(st.Signers))
// signers have already been resolved
for _, s := range st.Signers {
if resolvedOldSigner != s {
newSigners = append(newSigners, s)
}
}
// If removing the given signer would leave fewer signers than the approval
// threshold, the threshold must be decreased alongside the removal; i.e.
// Decrease must be set to true in that scenario.
if !params.Decrease && uint64(len(st.Signers)-1) < st.NumApprovalsThreshold {
rt.Abortf(exitcode.ErrIllegalArgument, "can't reduce signers to %d below threshold %d with decrease=false", len(st.Signers)-1, st.NumApprovalsThreshold)
}
if params.Decrease {
st.NumApprovalsThreshold = st.NumApprovalsThreshold - 1
}
st.Signers = newSigners
})
return nil
}
type SwapSignerParams struct {
From addr.Address
To addr.Address
}
func (a Actor) SwapSigner(rt vmr.Runtime, params *SwapSignerParams) *adt.EmptyValue {
// Can only be called by the multisig wallet itself.
rt.ValidateImmediateCallerIs(rt.Message().Receiver())
fromResolved, err := builtin.ResolveToIDAddr(rt, params.From)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to resolve from address %v", params.From)
toResolved, err := builtin.ResolveToIDAddr(rt, params.To)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to resolve to address %v", params.To)
var st State
rt.State().Transaction(&st, func() {
fromIsSigner := isSigner(fromResolved, st.Signers)
if !fromIsSigner {
rt.Abortf(exitcode.ErrForbidden, "from addr %s is not a signer", fromResolved)
}
toIsSigner := isSigner(toResolved, st.Signers)
if toIsSigner {
rt.Abortf(exitcode.ErrIllegalArgument, "%s already a signer", toResolved)
}
newSigners := make([]addr.Address, 0, len(st.Signers))
for _, s := range st.Signers {
if s != fromResolved {
newSigners = append(newSigners, s)
}
}
newSigners = append(newSigners, toResolved)
st.Signers = newSigners
})
return nil
}
type ChangeNumApprovalsThresholdParams struct {
NewThreshold uint64
}
func (a Actor) ChangeNumApprovalsThreshold(rt vmr.Runtime, params *ChangeNumApprovalsThresholdParams) *adt.EmptyValue {
// Can only be called by the multisig wallet itself.
rt.ValidateImmediateCallerIs(rt.Message().Receiver())
var st State
rt.State().Transaction(&st, func() {
if params.NewThreshold == 0 || params.NewThreshold > uint64(len(st.Signers)) {
rt.Abortf(exitcode.ErrIllegalArgument, "New threshold value not supported")
}
st.NumApprovalsThreshold = params.NewThreshold
})
return nil
}
func (a Actor) approveTransaction(rt vmr.Runtime, txnID TxnID, txn *Transaction) (bool, []byte, exitcode.ExitCode) {
caller := rt.Message().Caller()
var st State
// abort duplicate approval
for _, previousApprover := range txn.Approved {
if previousApprover == caller {
rt.Abortf(exitcode.ErrForbidden, "%s already approved this message", previousApprover)
}
}
// add the caller to the list of approvers
rt.State().Transaction(&st, func() {
ptx, err := adt.AsMap(adt.AsStore(rt), st.PendingTxns)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pending transactions")
// update approved on the transaction
txn.Approved = append(txn.Approved, caller)
err = ptx.Put(txnID, txn)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to put transaction %v for approval", txnID)
st.PendingTxns, err = ptx.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush pending transactions")
})
return executeTransactionIfApproved(rt, st, txnID, txn)
}
func getTransaction(rt vmr.Runtime, ptx *adt.Map, txnID TxnID, proposalHash []byte, checkHash bool) *Transaction {
var txn Transaction
// get transaction from the state trie
var err error
txn, err = getPendingTransaction(ptx, txnID)
if err != nil {
rt.Abortf(exitcode.ErrNotFound, "failed to get transaction for approval: %v", err)
}
// confirm the hashes match
if checkHash {
calculatedHash, err := ComputeProposalHash(&txn, rt.Syscalls().HashBlake2b)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to compute proposal hash for %v", txnID)
if proposalHash != nil && !bytes.Equal(proposalHash, calculatedHash[:]) {
rt.Abortf(exitcode.ErrIllegalArgument, "hash does not match proposal params (ensure requester is an ID address)")
}
}
return &txn
}
func executeTransactionIfApproved(rt vmr.Runtime, st State, txnID TxnID, txn *Transaction) (bool, []byte, exitcode.ExitCode) {
var out vmr.CBORBytes
var code exitcode.ExitCode
applied := false
thresholdMet := uint64(len(txn.Approved)) >= st.NumApprovalsThreshold
if thresholdMet {
if err := st.assertAvailable(rt.CurrentBalance(), txn.Value, rt.CurrEpoch()); err != nil {
rt.Abortf(exitcode.ErrInsufficientFunds, "insufficient funds unlocked: %v", err)
}
var ret vmr.SendReturn
// A sufficient number of approvals have arrived and sufficient funds have been unlocked: relay the message and delete from pending queue.
ret, code = rt.Send(
txn.To,
txn.Method,
vmr.CBORBytes(txn.Params),
txn.Value,
)
applied = true
// Pass the return value through uninterpreted with the expectation that serializing into a CBORBytes never fails
// since it just copies the bytes.
err := ret.Into(&out)
builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to deserialize result")
// This could be rearranged to happen inside the first state transaction, before the send().
rt.State().Transaction(&st, func() {
ptx, err := adt.AsMap(adt.AsStore(rt), st.PendingTxns)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pending transactions")
if err := ptx.Delete(txnID); err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to delete transaction for cleanup: %v", err)
}
st.PendingTxns, err = ptx.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush pending transactions")
})
}
return applied, out, code
}
func isSigner(address addr.Address, signers []addr.Address) bool {
AssertMsg(address.Protocol() == addr.ID, "address %v passed to isSigner must be a resolved address", address)
// signer addresses have already been resolved
for _, signer := range signers {
if signer == address {
return true
}
}
return false
}
// Computes a digest of a proposed transaction. This digest is used to confirm identity of the transaction
// associated with an ID, which might change under chain re-orgs.
func ComputeProposalHash(txn *Transaction, hash func([]byte) [32]byte) ([]byte, error) {
hashData := ProposalHashData{
Requester: txn.Approved[0],
To: txn.To,
Value: txn.Value,
Method: txn.Method,
Params: txn.Params,
}
data, err := hashData.Serialize()
if err != nil {
return nil, fmt.Errorf("failed to construct multisig approval hash: %w", err)
}
hashResult := hash(data)
return hashResult[:], nil
}
func (phd *ProposalHashData) Serialize() ([]byte, error) {
buf := new(bytes.Buffer)
if err := phd.MarshalCBOR(buf); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
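The propose/approve flow implemented by the actor can be illustrated with a self-contained, in-memory model. The types below (`wallet` and its methods) are hypothetical simplifications, not the actor's real state handling; they show only the core rule that a transaction executes once its approvals reach the threshold, and that duplicate approvals are rejected:

```go
package main

import (
	"errors"
	"fmt"
)

// wallet is a toy stand-in for the multisig actor state: a signer set,
// an approval threshold, and pending transactions keyed by ID.
type wallet struct {
	signers   map[string]bool
	threshold int
	nextID    int
	pending   map[int][]string // txn ID -> approvers (index 0 is the proposer)
}

func newWallet(signers []string, threshold int) *wallet {
	set := make(map[string]bool, len(signers))
	for _, s := range signers {
		set[s] = true
	}
	return &wallet{signers: set, threshold: threshold, pending: map[int][]string{}}
}

// propose records a new transaction with the proposer's implicit approval,
// returning the transaction ID and whether it executed immediately.
func (w *wallet) propose(proposer string) (int, bool, error) {
	if !w.signers[proposer] {
		return 0, false, errors.New("not a signer")
	}
	id := w.nextID
	w.nextID++
	w.pending[id] = []string{proposer}
	return id, w.maybeExecute(id), nil
}

// approve adds one approval, rejecting duplicates, and executes the
// transaction if the threshold is now met.
func (w *wallet) approve(id int, signer string) (bool, error) {
	if !w.signers[signer] {
		return false, errors.New("not a signer")
	}
	approvers, ok := w.pending[id]
	if !ok {
		return false, errors.New("no such pending transaction")
	}
	for _, a := range approvers {
		if a == signer {
			return false, errors.New("duplicate approval")
		}
	}
	w.pending[id] = append(approvers, signer)
	return w.maybeExecute(id), nil
}

// maybeExecute "sends" the transaction and removes it from the pending
// queue once enough approvals have accumulated.
func (w *wallet) maybeExecute(id int) bool {
	if len(w.pending[id]) >= w.threshold {
		delete(w.pending, id)
		return true
	}
	return false
}

func main() {
	w := newWallet([]string{"alice", "bob", "carol"}, 2)
	id, applied, _ := w.propose("alice")
	fmt.Println("applied on propose:", applied) // one approval, threshold two
	applied, _ = w.approve(id, "bob")
	fmt.Println("applied on approve:", applied)
}
```

Note how `propose` counts as the proposer's approval, mirroring the actor's behavior where `Propose` immediately calls `approveTransaction` on behalf of the caller.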
package multisig
import (
address "github.com/filecoin-project/go-address"
cid "github.com/ipfs/go-cid"
"golang.org/x/xerrors"
abi "github.com/filecoin-project/specs-actors/actors/abi"
big "github.com/filecoin-project/specs-actors/actors/abi/big"
"github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
type State struct {
// Signers may be either public-key or actor ID-addresses. The ID address is canonical, but doesn't exist
// for a public key that has not yet received a message on chain.
// If any signer address is a public-key address, it will be resolved to an ID address and persisted
// in this state when the address is used.
Signers []address.Address
NumApprovalsThreshold uint64
NextTxnID TxnID
// Linear unlock
InitialBalance abi.TokenAmount
StartEpoch abi.ChainEpoch
UnlockDuration abi.ChainEpoch
PendingTxns cid.Cid
}
func (st *State) AmountLocked(elapsedEpoch abi.ChainEpoch) abi.TokenAmount {
if elapsedEpoch >= st.UnlockDuration {
return abi.NewTokenAmount(0)
}
unitLocked := big.Div(st.InitialBalance, big.NewInt(int64(st.UnlockDuration)))
return big.Mul(unitLocked, big.Sub(big.NewInt(int64(st.UnlockDuration)), big.NewInt(int64(elapsedEpoch))))
}
// Returns nil if the multisig retains the required locked balance after spending the given amount; otherwise returns an error.
func (st *State) assertAvailable(currBalance abi.TokenAmount, amountToSpend abi.TokenAmount, currEpoch abi.ChainEpoch) error {
if amountToSpend.LessThan(big.Zero()) {
return xerrors.Errorf("amount to spend %s less than zero", amountToSpend.String())
}
if currBalance.LessThan(amountToSpend) {
return xerrors.Errorf("current balance %s less than amount to spend %s", currBalance.String(), amountToSpend.String())
}
remainingBalance := big.Sub(currBalance, amountToSpend)
amountLocked := st.AmountLocked(currEpoch - st.StartEpoch)
if remainingBalance.LessThan(amountLocked) {
return xerrors.Errorf("actor balance if spent %s would be less than required locked amount %s", remainingBalance.String(), amountLocked.String())
}
return nil
}
func getPendingTransaction(ptx *adt.Map, txnID TxnID) (Transaction, error) {
var out Transaction
found, err := ptx.Get(txnID, &out)
if err != nil {
return Transaction{}, xerrors.Errorf("failed to read transaction: %w", err)
}
if !found {
return Transaction{}, exitcode.ErrNotFound.Wrapf("failed to find transaction %v", txnID)
}
return out, nil
}
Storage Mining System - proving storage for producing blocks
The Storage Mining System is the part of the Filecoin Protocol that deals with storing clients' data, producing proof artifacts that demonstrate correct storage behavior, and managing the work involved.
Storing data and producing proofs is a complex, highly optimizable process, with lots of tunable
choices. Miners should explore the design space to arrive at something that (a) satisfies protocol
and network-wide constraints, (b) satisfies clients’ requests and expectations (as expressed in
Deals
), and (c) gives them the most cost-effective operation. This part of the Filecoin Spec
primarily describes in detail what MUST and SHOULD happen here, and leaves ample room for
various optimizations for implementers, miners, and users to make. In some parts, we describe
algorithms that could be replaced by other, more optimized versions, but in those cases it is
important that the protocol constraints are satisfied. The protocol constraints are
spelled out in clear detail (an unclear, unmentioned constraint is a “spec error”). It is up
to implementers who deviate from the algorithms presented here to ensure their modifications
satisfy those constraints, especially those relating to protocol security.
Storage Miner
Filecoin Storage Mining Subsystem
The Filecoin Storage Mining Subsystem ensures a storage miner can effectively commit storage to the Filecoin protocol in order to both:
- Participate in the Filecoin Storage Market by taking on client data and participating in storage deals.
- Participate in Filecoin Storage Power Consensus, verifying and generating blocks to grow the Filecoin blockchain and earning block rewards and fees for doing so.
This involves a number of steps for bringing storage online and maintaining it, such as:
- Committing new storage (see Sealing and PoRep)
- Continuously proving storage (see Election PoSt)
- Declaring storage faults and recovering from them.
Sector Types
There are two types of sectors: Regular Sectors, which contain storage deals, and Committed Capacity (CC) Sectors, which contain no deals. All sectors require an expiration epoch, declared upon PreCommit, and sectors are assigned a StartEpoch at ProveCommit. The start and expiration epochs collectively define the lifetime of a sector. The length and size of active deals over a sector's lifetime determine the sector's DealWeight. SectorSize, Duration, and DealWeight statically determine the power assigned to a sector, which remains constant throughout its lifetime. More details on cost and reward for different sector types will be announced soon.
Sector States
When managing their sectors as part of Filecoin mining, storage providers must account for where in the Mining Cycle their sectors are. For instance, has a sector been committed? Does it need a new PoSt? Most of these operations happen as part of cycles of chain epochs called Proving Periods, each of which yields high confidence that every miner in the chain has proven its power (see Election PoSt).
There are three states that an individual sector can be in:
- PreCommit: the sector has been added through a PreCommit message.
- Active: the sector has been proven through a ProveCommit message, or the sector's TemporaryFault period has ended.
- TemporaryFault: the miner has declared a fault on the sector.
Sectors enter Active from PreCommit through a ProveCommit message that serves as the first proof for the sector. PreCommit requires a PreCommit deposit, which is returned upon a successful and timely ProveCommit. If there is no matching ProveCommit for a particular PreCommit message, however, the deposit is burned at PreCommit expiration.
A sector enters TemporaryFault from Active through DeclareTemporaryFault with a specified period. Power associated with the sector is lost immediately, and the miner must pay a TemporaryFaultFee determined by the power suspended and the duration of suspension. At the end of the declared duration, faulted sectors automatically regain power and re-enter Active. Miners are expected to prove over the recovered sector; failure to do so may result in failing ElectionPoSt, or in DetectedFault from failing SurprisePoSt.
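The sector lifecycle described above can be sketched as a small state machine. The enum and `transition` function below are hypothetical, purely illustrative types; the real actor tracks these transitions through on-chain state and messages rather than an explicit enum:

```go
package main

import (
	"errors"
	"fmt"
)

// SectorState is a toy enum for the three sector states.
type SectorState int

const (
	PreCommit SectorState = iota
	Active
	TemporaryFault
)

type event string

const (
	ProveCommit           event = "ProveCommit"
	DeclareTemporaryFault event = "DeclareTemporaryFault"
	FaultPeriodEnded      event = "FaultPeriodEnded"
)

// transition applies one event, rejecting moves the spec text does not
// allow (e.g. declaring a temporary fault on a sector never proven).
func transition(s SectorState, e event) (SectorState, error) {
	switch {
	case s == PreCommit && e == ProveCommit:
		return Active, nil
	case s == Active && e == DeclareTemporaryFault:
		return TemporaryFault, nil
	case s == TemporaryFault && e == FaultPeriodEnded:
		return Active, nil
	}
	return s, errors.New("invalid transition")
}

func main() {
	s := PreCommit
	for _, e := range []event{ProveCommit, DeclareTemporaryFault, FaultPeriodEnded} {
		next, err := transition(s, e)
		if err != nil {
			panic(err)
		}
		s = next
	}
	fmt.Println("final state is Active:", s == Active)
}
```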
Miner PoSt State
MinerPoStState keeps track of a miner's state in responding to PoSt. There are three states in MinerPoStState:
- OK: the miner has passed either an ElectionPoSt or a SurprisePoSt sufficiently recently.
- Challenged: the miner has been selected to prove its storage via SurprisePoSt and is currently in the Challenged state.
- DetectedFault: the miner has failed at least one SurprisePoSt, indicating that its claimed storage may not all be provable. The miner has lost power on its sectors, and recovery can only proceed through a successful response to a subsequent SurprisePoSt challenge, up to the limit on the number of consecutive failures.
DetectedFault is a miner-wide PoSt state in which all sectors are considered inactive. All power is lost immediately and pledge collateral is slashed. If a miner remains in DetectedFault for more than MaxConsecutiveFailures, all sectors are terminated, and both the power and market actors are notified for slashing and the return of client deal collateral.
ProvingSet consists of the sectors against which a miner is required to generate proofs, and is what counts towards the miner's power. In other words, ProvingSet is the set of all Active sectors for a particular miner. ProvingSet is only relevant when the miner is in the OK stage of its MinerPoStState. When a miner is in the Challenged state, ChallengedSectors specifies the list of sectors to be challenged, which is the ProvingSet as it stood before the challenge was issued, thus allowing more sectors to be added while the miner is in the Challenged state.
Miners can call ProveCommit to commit a sector and add to their Claimed Power. However, a miner's Nominal Power and Consensus Power are zero while it is in either the Challenged or the DetectedFault state. Note also that miners can call DeclareTemporaryFault while in the Challenged or DetectedFault state; this does not change the list of sectors currently challenged, which is a snapshot of all active sectors (the ProvingSet) at the time of the challenge.
Storage Miner Actor
StorageMinerActorState
implementation
package miner
import (
"fmt"
"reflect"
"sort"
addr "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-bitfield"
cid "github.com/ipfs/go-cid"
errors "github.com/pkg/errors"
xerrors "golang.org/x/xerrors"
. "github.com/filecoin-project/specs-actors/actors/util"
abi "github.com/filecoin-project/specs-actors/actors/abi"
big "github.com/filecoin-project/specs-actors/actors/abi/big"
xc "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
)
// Balance of Miner Actor should be greater than or equal to
// the sum of PreCommitDeposits and LockedFunds.
// It is possible for balance to fall below the sum of
// PCD, LF and InitialPledgeRequirements, and this is a bad
// state (IP Debt) that limits a miner actor's behavior (i.e. no balance withdrawals)
// Excess balance as computed by st.GetAvailableBalance will be
// withdrawable or usable for pre-commit deposit or pledge lock-up.
type State struct {
// Information not related to sectors.
Info cid.Cid
PreCommitDeposits abi.TokenAmount // Total funds locked as PreCommitDeposits
LockedFunds abi.TokenAmount // Total rewards and added funds locked in vesting table
VestingFunds cid.Cid // VestingFunds (Vesting Funds schedule for the miner).
InitialPledgeRequirement abi.TokenAmount // Sum of initial pledge requirements of all active sectors
// Sectors that have been pre-committed but not yet proven.
PreCommittedSectors cid.Cid // Map, HAMT[SectorNumber]SectorPreCommitOnChainInfo
// PreCommittedSectorsExpiry maintains the state required to expire PreCommittedSectors.
PreCommittedSectorsExpiry cid.Cid // BitFieldQueue (AMT[Epoch]*BitField)
// Allocated sector IDs. Sector IDs can never be reused once allocated.
AllocatedSectors cid.Cid // BitField
// Information for all proven and not-yet-garbage-collected sectors.
//
// Sectors are removed from this AMT when the partition to which the
// sector belongs is compacted.
Sectors cid.Cid // Array, AMT[SectorNumber]SectorOnChainInfo (sparse)
// The first epoch in this miner's current proving period. This is the first epoch in which a PoSt for a
// partition at the miner's first deadline may arrive. Alternatively, it is after the last epoch at which
// a PoSt for the previous window is valid.
// Always greater than zero, this may be greater than the current epoch for genesis miners in the first
// WPoStProvingPeriod epochs of the chain; the epochs before the first proving period starts are exempt from Window
// PoSt requirements.
// Updated at the end of every period by a cron callback.
ProvingPeriodStart abi.ChainEpoch
// Index of the deadline within the proving period beginning at ProvingPeriodStart that has not yet been
// finalized.
// Updated at the end of each deadline window by a cron callback.
CurrentDeadline uint64
// The sector numbers due for PoSt at each deadline in the current proving period, frozen at period start.
// New sectors are added and expired ones removed at proving period boundary.
// Faults are not subtracted from this in state, but on the fly.
Deadlines cid.Cid
// Deadlines with outstanding fees for early sector termination.
EarlyTerminations bitfield.BitField
}
type MinerInfo struct {
// Account that owns this miner.
// - Income and returned collateral are paid to this address.
// - This address is also allowed to change the worker address for the miner.
Owner addr.Address // Must be an ID-address.
// Worker account for this miner.
// The associated pubkey-type address is used to sign blocks and messages on behalf of this miner.
Worker addr.Address // Must be an ID-address.
// Additional addresses that are permitted to submit messages controlling this actor (optional).
ControlAddresses []addr.Address // Must all be ID addresses.
PendingWorkerKey *WorkerKeyChange
// Byte array representing a Libp2p identity that should be used when connecting to this miner.
PeerId abi.PeerID
// Slice of byte arrays representing Libp2p multi-addresses used for establishing a connection with this miner.
Multiaddrs []abi.Multiaddrs
// The proof type used by this miner for sealing sectors.
SealProofType abi.RegisteredSealProof
// Amount of space in each sector committed by this miner.
// This is computed from the proof type and represented here redundantly.
SectorSize abi.SectorSize
// The number of sectors in each Window PoSt partition (proof).
// This is computed from the proof type and represented here redundantly.
WindowPoStPartitionSectors uint64
}
type WorkerKeyChange struct {
NewWorker addr.Address // Must be an ID address
EffectiveAt abi.ChainEpoch
}
// Information provided by a miner when pre-committing a sector.
type SectorPreCommitInfo struct {
SealProof abi.RegisteredSealProof
SectorNumber abi.SectorNumber
SealedCID cid.Cid `checked:"true"` // CommR
SealRandEpoch abi.ChainEpoch
DealIDs []abi.DealID
Expiration abi.ChainEpoch
ReplaceCapacity bool // Whether to replace a "committed capacity" no-deal sector (requires non-empty DealIDs)
// The committed capacity sector to replace, and its deadline/partition location
ReplaceSectorDeadline uint64
ReplaceSectorPartition uint64
ReplaceSectorNumber abi.SectorNumber
}
// Information stored on-chain for a pre-committed sector.
type SectorPreCommitOnChainInfo struct {
Info SectorPreCommitInfo
PreCommitDeposit abi.TokenAmount
PreCommitEpoch abi.ChainEpoch
DealWeight abi.DealWeight // Integral of active deals over sector lifetime
VerifiedDealWeight abi.DealWeight // Integral of active verified deals over sector lifetime
}
// Information stored on-chain for a proven sector.
type SectorOnChainInfo struct {
SectorNumber abi.SectorNumber
SealProof abi.RegisteredSealProof // The seal proof type implies the PoSt proof/s
SealedCID cid.Cid // CommR
DealIDs []abi.DealID
Activation abi.ChainEpoch // Epoch during which the sector proof was accepted
Expiration abi.ChainEpoch // Epoch during which the sector expires
DealWeight abi.DealWeight // Integral of active deals over sector lifetime
VerifiedDealWeight abi.DealWeight // Integral of active verified deals over sector lifetime
InitialPledge abi.TokenAmount // Pledge collected to commit this sector
ExpectedDayReward abi.TokenAmount // Expected one day projection of reward for sector computed at activation time
ExpectedStoragePledge abi.TokenAmount // Expected twenty day projection of reward for sector computed at activation time
}
func ConstructState(infoCid cid.Cid, periodStart abi.ChainEpoch, emptyBitfieldCid, emptyArrayCid, emptyMapCid, emptyDeadlinesCid cid.Cid,
emptyVestingFundsCid cid.Cid) (*State, error) {
return &State{
Info: infoCid,
PreCommitDeposits: abi.NewTokenAmount(0),
LockedFunds: abi.NewTokenAmount(0),
VestingFunds: emptyVestingFundsCid,
InitialPledgeRequirement: abi.NewTokenAmount(0),
PreCommittedSectors: emptyMapCid,
PreCommittedSectorsExpiry: emptyArrayCid,
AllocatedSectors: emptyBitfieldCid,
Sectors: emptyArrayCid,
ProvingPeriodStart: periodStart,
CurrentDeadline: 0,
Deadlines: emptyDeadlinesCid,
EarlyTerminations: bitfield.New(),
}, nil
}
func ConstructMinerInfo(owner addr.Address, worker addr.Address, controlAddrs []addr.Address, pid []byte,
multiAddrs [][]byte, sealProofType abi.RegisteredSealProof) (*MinerInfo, error) {
sectorSize, err := sealProofType.SectorSize()
if err != nil {
return nil, err
}
partitionSectors, err := sealProofType.WindowPoStPartitionSectors()
if err != nil {
return nil, err
}
return &MinerInfo{
Owner: owner,
Worker: worker,
ControlAddresses: controlAddrs,
PendingWorkerKey: nil,
PeerId: pid,
Multiaddrs: multiAddrs,
SealProofType: sealProofType,
SectorSize: sectorSize,
WindowPoStPartitionSectors: partitionSectors,
}, nil
}
func (st *State) GetInfo(store adt.Store) (*MinerInfo, error) {
var info MinerInfo
if err := store.Get(store.Context(), st.Info, &info); err != nil {
return nil, xerrors.Errorf("failed to get miner info: %w", err)
}
return &info, nil
}
func (st *State) SaveInfo(store adt.Store, info *MinerInfo) error {
c, err := store.Put(store.Context(), info)
if err != nil {
return err
}
st.Info = c
return nil
}
// Returns deadline calculations for the current (according to state) proving period.
func (st *State) DeadlineInfo(currEpoch abi.ChainEpoch) *DeadlineInfo {
return NewDeadlineInfo(st.ProvingPeriodStart, st.CurrentDeadline, currEpoch)
}
// Returns the quantization spec for the deadline at the given index within the current proving period.
func (st *State) QuantSpecForDeadline(dlIdx uint64) QuantSpec {
return NewDeadlineInfo(st.ProvingPeriodStart, dlIdx, 0).QuantSpec()
}
func (st *State) AllocateSectorNumber(store adt.Store, sectorNo abi.SectorNumber) error {
// This will likely already have been checked, but this is a good place
// to catch any mistakes.
if sectorNo > abi.MaxSectorNumber {
return xc.ErrIllegalArgument.Wrapf("sector number out of range: %d", sectorNo)
}
var allocatedSectors bitfield.BitField
if err := store.Get(store.Context(), st.AllocatedSectors, &allocatedSectors); err != nil {
return xc.ErrIllegalState.Wrapf("failed to load allocated sectors bitfield: %w", err)
}
if allocated, err := allocatedSectors.IsSet(uint64(sectorNo)); err != nil {
return xc.ErrIllegalState.Wrapf("failed to lookup sector number in allocated sectors bitfield: %w", err)
} else if allocated {
return xc.ErrIllegalArgument.Wrapf("sector number %d has already been allocated", sectorNo)
}
allocatedSectors.Set(uint64(sectorNo))
if root, err := store.Put(store.Context(), allocatedSectors); err != nil {
return xc.ErrIllegalArgument.Wrapf("failed to store allocated sectors bitfield after adding sector %d: %w", sectorNo, err)
} else {
st.AllocatedSectors = root
}
return nil
}
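The allocation check above can be pictured with a toy version in which a map stands in for the on-chain AllocatedSectors bitfield; `allocate` and `maxSectorNumber` are illustrative names, not part of the actor:

```go
package main

import "fmt"

// allocate records a sector number as used, rejecting out-of-range and
// duplicate numbers, just as AllocateSectorNumber does against the bitfield.
func allocate(allocated map[uint64]bool, sectorNo, maxSectorNumber uint64) error {
	if sectorNo > maxSectorNumber {
		return fmt.Errorf("sector number out of range: %d", sectorNo)
	}
	if allocated[sectorNo] {
		return fmt.Errorf("sector number %d has already been allocated", sectorNo)
	}
	allocated[sectorNo] = true
	return nil
}

func main() {
	allocated := map[uint64]bool{}
	fmt.Println(allocate(allocated, 7, 1000)) // first allocation succeeds
	fmt.Println(allocate(allocated, 7, 1000)) // duplicate is rejected
}
```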
func (st *State) MaskSectorNumbers(store adt.Store, sectorNos bitfield.BitField) error {
lastSectorNo, err := sectorNos.Last()
if err != nil {
return xc.ErrIllegalArgument.Wrapf("invalid mask bitfield: %w", err)
}
if lastSectorNo > abi.MaxSectorNumber {
return xc.ErrIllegalArgument.Wrapf("masked sector number %d exceeded max sector number", lastSectorNo)
}
var allocatedSectors bitfield.BitField
if err := store.Get(store.Context(), st.AllocatedSectors, &allocatedSectors); err != nil {
return xc.ErrIllegalState.Wrapf("failed to load allocated sectors bitfield: %w", err)
}
allocatedSectors, err = bitfield.MergeBitFields(allocatedSectors, sectorNos)
if err != nil {
return xc.ErrIllegalState.Wrapf("failed to merge allocated bitfield with mask: %w", err)
}
if root, err := store.Put(store.Context(), allocatedSectors); err != nil {
return xc.ErrIllegalArgument.Wrapf("failed to mask allocated sectors bitfield: %w", err)
} else {
st.AllocatedSectors = root
}
return nil
}
func (st *State) PutPrecommittedSector(store adt.Store, info *SectorPreCommitOnChainInfo) error {
precommitted, err := adt.AsMap(store, st.PreCommittedSectors)
if err != nil {
return err
}
err = precommitted.Put(SectorKey(info.Info.SectorNumber), info)
if err != nil {
return errors.Wrapf(err, "failed to store precommitment for %v", info)
}
st.PreCommittedSectors, err = precommitted.Root()
return err
}
func (st *State) GetPrecommittedSector(store adt.Store, sectorNo abi.SectorNumber) (*SectorPreCommitOnChainInfo, bool, error) {
precommitted, err := adt.AsMap(store, st.PreCommittedSectors)
if err != nil {
return nil, false, err
}
var info SectorPreCommitOnChainInfo
found, err := precommitted.Get(SectorKey(sectorNo), &info)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to load precommitment for %v", sectorNo)
}
return &info, found, nil
}
// This method gets and returns the requested pre-committed sectors, skipping
// missing sectors.
func (st *State) FindPrecommittedSectors(store adt.Store, sectorNos ...abi.SectorNumber) ([]*SectorPreCommitOnChainInfo, error) {
precommitted, err := adt.AsMap(store, st.PreCommittedSectors)
if err != nil {
return nil, err
}
result := make([]*SectorPreCommitOnChainInfo, 0, len(sectorNos))
for _, sectorNo := range sectorNos {
var info SectorPreCommitOnChainInfo
found, err := precommitted.Get(SectorKey(sectorNo), &info)
if err != nil {
return nil, errors.Wrapf(err, "failed to load precommitment for %v", sectorNo)
}
if !found {
// TODO #564 log: "failed to get precommitted sector on sector %d, dropping from prove commit set"
continue
}
result = append(result, &info)
}
return result, nil
}
func (st *State) DeletePrecommittedSectors(store adt.Store, sectorNos ...abi.SectorNumber) error {
precommitted, err := adt.AsMap(store, st.PreCommittedSectors)
if err != nil {
return err
}
for _, sectorNo := range sectorNos {
err = precommitted.Delete(SectorKey(sectorNo))
if err != nil {
return xerrors.Errorf("failed to delete precommitment for %v: %w", sectorNo, err)
}
}
st.PreCommittedSectors, err = precommitted.Root()
return err
}
func (st *State) HasSectorNo(store adt.Store, sectorNo abi.SectorNumber) (bool, error) {
sectors, err := LoadSectors(store, st.Sectors)
if err != nil {
return false, err
}
_, found, err := sectors.Get(sectorNo)
if err != nil {
return false, xerrors.Errorf("failed to get sector %v: %w", sectorNo, err)
}
return found, nil
}
func (st *State) PutSectors(store adt.Store, newSectors ...*SectorOnChainInfo) error {
sectors, err := LoadSectors(store, st.Sectors)
if err != nil {
return xerrors.Errorf("failed to load sectors: %w", err)
}
err = sectors.Store(newSectors...)
if err != nil {
return err
}
st.Sectors, err = sectors.Root()
if err != nil {
return xerrors.Errorf("failed to persist sectors: %w", err)
}
return nil
}
func (st *State) GetSector(store adt.Store, sectorNo abi.SectorNumber) (*SectorOnChainInfo, bool, error) {
sectors, err := LoadSectors(store, st.Sectors)
if err != nil {
return nil, false, err
}
return sectors.Get(sectorNo)
}
func (st *State) DeleteSectors(store adt.Store, sectorNos bitfield.BitField) error {
sectors, err := LoadSectors(store, st.Sectors)
if err != nil {
return err
}
err = sectorNos.ForEach(func(sectorNo uint64) error {
if err = sectors.Delete(sectorNo); err != nil {
return xerrors.Errorf("failed to delete sector %v: %w", sectorNo, err)
}
return nil
})
if err != nil {
return err
}
st.Sectors, err = sectors.Root()
return err
}
// Iterates sectors.
// The pointer provided to the callback is not safe for re-use. Copy the pointed-to value in full to hold a reference.
func (st *State) ForEachSector(store adt.Store, f func(*SectorOnChainInfo)) error {
sectors, err := LoadSectors(store, st.Sectors)
if err != nil {
return err
}
var sector SectorOnChainInfo
return sectors.ForEach(&sector, func(idx int64) error {
f(&sector)
return nil
})
}
func (st *State) FindSector(store adt.Store, sno abi.SectorNumber) (uint64, uint64, error) {
deadlines, err := st.LoadDeadlines(store)
if err != nil {
return 0, 0, err
}
return FindSector(store, deadlines, sno)
}
// Schedules each sector to expire at its next deadline end. If it can't find
// any given sector, it skips it.
//
// This method assumes that each sector's power has not changed, despite the rescheduling.
//
// Note: this method is used to "upgrade" sectors, rescheduling the now-replaced
// sectors to expire at the end of the next deadline. Given the expense of
// sealing a sector, this function skips missing/faulty/terminated "upgraded"
// sectors instead of failing. That way, the new sectors can still be proved.
func (st *State) RescheduleSectorExpirations(
store adt.Store, currEpoch abi.ChainEpoch, ssize abi.SectorSize,
deadlineSectors DeadlineSectorMap,
) error {
deadlines, err := st.LoadDeadlines(store)
if err != nil {
return err
}
sectors, err := LoadSectors(store, st.Sectors)
if err != nil {
return err
}
if err = deadlineSectors.ForEach(func(dlIdx uint64, pm PartitionSectorMap) error {
dlInfo := NewDeadlineInfo(st.ProvingPeriodStart, dlIdx, currEpoch).NextNotElapsed()
newExpiration := dlInfo.Last()
dl, err := deadlines.LoadDeadline(store, dlIdx)
if err != nil {
return err
}
if err := dl.RescheduleSectorExpirations(store, sectors, newExpiration, pm, ssize, dlInfo.QuantSpec()); err != nil {
return err
}
if err := deadlines.UpdateDeadline(store, dlIdx, dl); err != nil {
return err
}
return nil
}); err != nil {
return err
}
return st.SaveDeadlines(store, deadlines)
}
// Assign new sectors to deadlines.
func (st *State) AssignSectorsToDeadlines(
store adt.Store,
currentEpoch abi.ChainEpoch,
sectors []*SectorOnChainInfo,
partitionSize uint64,
sectorSize abi.SectorSize,
) (PowerPair, error) {
deadlines, err := st.LoadDeadlines(store)
if err != nil {
return NewPowerPairZero(), err
}
// Sort sectors by number to get better runs in partition bitfields.
sort.Slice(sectors, func(i, j int) bool {
return sectors[i].SectorNumber < sectors[j].SectorNumber
})
var deadlineArr [WPoStPeriodDeadlines]*Deadline
err = deadlines.ForEach(store, func(idx uint64, dl *Deadline) error {
// Skip deadlines that aren't currently mutable.
if deadlineIsMutable(st.ProvingPeriodStart, idx, currentEpoch) {
deadlineArr[int(idx)] = dl
}
return nil
})
if err != nil {
return NewPowerPairZero(), err
}
newPower := NewPowerPairZero()
for dlIdx, deadlineSectors := range assignDeadlines(partitionSize, &deadlineArr, sectors) {
if len(deadlineSectors) == 0 {
continue
}
quant := st.QuantSpecForDeadline(uint64(dlIdx))
dl := deadlineArr[dlIdx]
deadlineNewPower, err := dl.AddSectors(store, partitionSize, deadlineSectors, sectorSize, quant)
if err != nil {
return NewPowerPairZero(), err
}
newPower = newPower.Add(deadlineNewPower)
err = deadlines.UpdateDeadline(store, uint64(dlIdx), dl)
if err != nil {
return NewPowerPairZero(), err
}
}
err = st.SaveDeadlines(store, deadlines)
if err != nil {
return NewPowerPairZero(), err
}
return newPower, nil
}
// Pops up to max early terminated sectors from all deadlines.
//
// Returns hasMore if we still have more early terminations to process.
func (st *State) PopEarlyTerminations(store adt.Store, maxPartitions, maxSectors uint64) (result TerminationResult, hasMore bool, err error) {
stopErr := errors.New("stop error")
// Anything to do? This lets us avoid loading the deadlines if there's nothing to do.
noEarlyTerminations, err := st.EarlyTerminations.IsEmpty()
if err != nil {
return TerminationResult{}, false, xerrors.Errorf("failed to count deadlines with early terminations: %w", err)
} else if noEarlyTerminations {
return TerminationResult{}, false, nil
}
// Load deadlines
deadlines, err := st.LoadDeadlines(store)
if err != nil {
return TerminationResult{}, false, xerrors.Errorf("failed to load deadlines: %w", err)
}
// Process early terminations.
if err = st.EarlyTerminations.ForEach(func(dlIdx uint64) error {
// Load deadline + partitions.
dl, err := deadlines.LoadDeadline(store, dlIdx)
if err != nil {
return xerrors.Errorf("failed to load deadline %d: %w", dlIdx, err)
}
deadlineResult, more, err := dl.PopEarlyTerminations(store, maxPartitions-result.PartitionsProcessed, maxSectors-result.SectorsProcessed)
if err != nil {
return xerrors.Errorf("failed to pop early terminations for deadline %d: %w", dlIdx, err)
}
err = result.Add(deadlineResult)
if err != nil {
return xerrors.Errorf("failed to merge result from popping early terminations from deadline: %w", err)
}
if !more {
// safe to do while iterating.
st.EarlyTerminations.Unset(dlIdx)
}
// Save the deadline
err = deadlines.UpdateDeadline(store, dlIdx, dl)
if err != nil {
return xerrors.Errorf("failed to store deadline %d: %w", dlIdx, err)
}
if result.BelowLimit(maxPartitions, maxSectors) {
return nil
}
return stopErr
}); err != nil && err != stopErr {
return TerminationResult{}, false, xerrors.Errorf("failed to walk early terminations bitfield for deadlines: %w", err)
}
// Save back the deadlines.
err = st.SaveDeadlines(store, deadlines)
if err != nil {
return TerminationResult{}, false, xerrors.Errorf("failed to save deadlines: %w", err)
}
// Ok, check to see if we've handled all early terminations.
noEarlyTerminations, err = st.EarlyTerminations.IsEmpty()
if err != nil {
return TerminationResult{}, false, xerrors.Errorf("failed to count remaining early termination deadlines: %w", err)
}
return result, !noEarlyTerminations, nil
}
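PopEarlyTerminations walks the per-deadline queues and stops once the partition/sector limits are hit, reporting whether more work remains for a later call. The pagination pattern can be sketched in simplified form (`popUpTo` is hypothetical; the real method tracks partitions and sectors as separate limits):

```go
package main

import "fmt"

// popUpTo drains items from a sequence of queues until max items have been
// taken, and reports whether any queue still has items left to process.
func popUpTo(queues [][]int, max int) (popped []int, hasMore bool) {
	for qi := range queues {
		for len(queues[qi]) > 0 && len(popped) < max {
			popped = append(popped, queues[qi][0])
			queues[qi] = queues[qi][1:]
		}
		if len(queues[qi]) > 0 {
			hasMore = true
		}
	}
	return popped, hasMore
}

func main() {
	got, more := popUpTo([][]int{{1, 2}, {3, 4, 5}}, 3)
	fmt.Println(got, more) // [1 2 3] true: items 4 and 5 await the next call
}
```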
// Returns an error if the target sector cannot be found and/or is faulty/terminated.
func (st *State) CheckSectorHealth(store adt.Store, dlIdx, pIdx uint64, sector abi.SectorNumber) error {
dls, err := st.LoadDeadlines(store)
if err != nil {
return err
}
dl, err := dls.LoadDeadline(store, dlIdx)
if err != nil {
return err
}
partition, err := dl.LoadPartition(store, pIdx)
if err != nil {
return err
}
if exists, err := partition.Sectors.IsSet(uint64(sector)); err != nil {
return xc.ErrIllegalState.Wrapf("failed to decode sectors bitfield (deadline %d, partition %d): %w", dlIdx, pIdx, err)
} else if !exists {
return xc.ErrNotFound.Wrapf("sector %d not a member of partition %d, deadline %d", sector, pIdx, dlIdx)
}
if faulty, err := partition.Faults.IsSet(uint64(sector)); err != nil {
return xc.ErrIllegalState.Wrapf("failed to decode faults bitfield (deadline %d, partition %d): %w", dlIdx, pIdx, err)
} else if faulty {
return xc.ErrForbidden.Wrapf("sector %d of partition %d, deadline %d is faulty", sector, pIdx, dlIdx)
}
if terminated, err := partition.Terminated.IsSet(uint64(sector)); err != nil {
return xc.ErrIllegalState.Wrapf("failed to decode terminated bitfield (deadline %d, partition %d): %w", dlIdx, pIdx, err)
} else if terminated {
return xc.ErrNotFound.Wrapf("sector %d of partition %d, deadline %d is terminated", sector, pIdx, dlIdx)
}
return nil
}
// Loads sector info for a sequence of sectors.
func (st *State) LoadSectorInfos(store adt.Store, sectors bitfield.BitField) ([]*SectorOnChainInfo, error) {
sectorsArr, err := LoadSectors(store, st.Sectors)
if err != nil {
return nil, err
}
return sectorsArr.Load(sectors)
}
// Loads info for a set of sectors to be proven.
// If any of the sectors are declared faulty and not to be recovered, info for the first non-faulty sector is substituted instead.
// If any of the sectors are declared recovered, they are returned from this method.
func (st *State) LoadSectorInfosForProof(store adt.Store, provenSectors, expectedFaults bitfield.BitField) ([]*SectorOnChainInfo, error) {
nonFaults, err := bitfield.SubtractBitField(provenSectors, expectedFaults)
if err != nil {
return nil, xerrors.Errorf("failed to diff bitfields: %w", err)
}
// Return empty if no non-faults
if empty, err := nonFaults.IsEmpty(); err != nil {
return nil, xerrors.Errorf("failed to check if bitfield was empty: %w", err)
} else if empty {
return nil, nil
}
// Select a non-faulty sector as a substitute for faulty ones.
goodSectorNo, err := nonFaults.First()
if err != nil {
return nil, xerrors.Errorf("failed to get first good sector: %w", err)
}
// Load sector infos
sectorInfos, err := st.LoadSectorInfosWithFaultMask(store, provenSectors, expectedFaults, abi.SectorNumber(goodSectorNo))
if err != nil {
return nil, xerrors.Errorf("failed to load sector infos: %w", err)
}
return sectorInfos, nil
}
// Loads sector info for a sequence of sectors, substituting info for a stand-in sector for any that are faulty.
func (st *State) LoadSectorInfosWithFaultMask(store adt.Store, sectors bitfield.BitField, faults bitfield.BitField, faultStandIn abi.SectorNumber) ([]*SectorOnChainInfo, error) {
sectorArr, err := LoadSectors(store, st.Sectors)
if err != nil {
return nil, xerrors.Errorf("failed to load sectors array: %w", err)
}
standInInfo, err := sectorArr.MustGet(faultStandIn)
if err != nil {
return nil, fmt.Errorf("failed to load stand-in sector %d: %w", faultStandIn, err)
}
// Expand faults into a map for quick lookups.
// The faults bitfield should already be a subset of the sectors bitfield.
sectorCount, err := sectors.Count()
if err != nil {
return nil, err
}
faultSet, err := faults.AllMap(sectorCount)
if err != nil {
return nil, fmt.Errorf("failed to expand faults: %w", err)
}
// Load the sector infos, masking out fault sectors with a good one.
sectorInfos := make([]*SectorOnChainInfo, 0, sectorCount)
err = sectors.ForEach(func(i uint64) error {
sector := standInInfo
faulty := faultSet[i]
if !faulty {
sectorOnChain, err := sectorArr.MustGet(abi.SectorNumber(i))
if err != nil {
return xerrors.Errorf("failed to load sector %d: %w", i, err)
}
sector = sectorOnChain
}
sectorInfos = append(sectorInfos, sector)
return nil
})
return sectorInfos, err
}
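The fault-mask substitution above can be sketched as follows, with a stand-in `sectorInfo` type in place of SectorOnChainInfo: every faulty sector's info is replaced by the chosen good sector's info, so the proof set keeps its expected length and ordering:

```go
package main

import "fmt"

// sectorInfo is an illustrative stand-in for the on-chain sector record.
type sectorInfo struct{ Number uint64 }

// withFaultMask keeps each sector's own info unless it is faulty, in which
// case the stand-in (a known-good sector) is substituted.
func withFaultMask(sectors []uint64, faults map[uint64]bool, standIn sectorInfo) []sectorInfo {
	out := make([]sectorInfo, 0, len(sectors))
	for _, n := range sectors {
		if faults[n] {
			out = append(out, standIn) // mask the faulty sector with the good one
		} else {
			out = append(out, sectorInfo{Number: n})
		}
	}
	return out
}

func main() {
	infos := withFaultMask([]uint64{1, 2, 3}, map[uint64]bool{2: true}, sectorInfo{Number: 1})
	for _, s := range infos {
		fmt.Println(s.Number) // 1, then 1 (substituted), then 3
	}
}
```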
func (st *State) LoadDeadlines(store adt.Store) (*Deadlines, error) {
var deadlines Deadlines
if err := store.Get(store.Context(), st.Deadlines, &deadlines); err != nil {
return nil, xc.ErrIllegalState.Wrapf("failed to load deadlines (%s): %w", st.Deadlines, err)
}
return &deadlines, nil
}
func (st *State) SaveDeadlines(store adt.Store, deadlines *Deadlines) error {
c, err := store.Put(store.Context(), deadlines)
if err != nil {
return err
}
st.Deadlines = c
return nil
}
// LoadVestingFunds loads the vesting funds table from the store
func (st *State) LoadVestingFunds(store adt.Store) (*VestingFunds, error) {
var funds VestingFunds
if err := store.Get(store.Context(), st.VestingFunds, &funds); err != nil {
return nil, xerrors.Errorf("failed to load vesting funds (%s): %w", st.VestingFunds, err)
}
return &funds, nil
}
// SaveVestingFunds saves the vesting table to the store
func (st *State) SaveVestingFunds(store adt.Store, funds *VestingFunds) error {
c, err := store.Put(store.Context(), funds)
if err != nil {
return err
}
st.VestingFunds = c
return nil
}
//
// Funds and vesting
//
func (st *State) AddPreCommitDeposit(amount abi.TokenAmount) {
newTotal := big.Add(st.PreCommitDeposits, amount)
AssertMsg(newTotal.GreaterThanEqual(big.Zero()), "negative pre-commit deposit %s after adding %s to prior %s",
newTotal, amount, st.PreCommitDeposits)
st.PreCommitDeposits = newTotal
}
func (st *State) AddInitialPledgeRequirement(amount abi.TokenAmount) {
newTotal := big.Add(st.InitialPledgeRequirement, amount)
AssertMsg(newTotal.GreaterThanEqual(big.Zero()), "negative initial pledge requirement %s after adding %s to prior %s",
newTotal, amount, st.InitialPledgeRequirement)
st.InitialPledgeRequirement = newTotal
}
// AddLockedFunds first vests and unlocks the vested funds AND then locks the given funds in the vesting table.
func (st *State) AddLockedFunds(store adt.Store, currEpoch abi.ChainEpoch, vestingSum abi.TokenAmount, spec *VestSpec) (vested abi.TokenAmount, err error) {
AssertMsg(vestingSum.GreaterThanEqual(big.Zero()), "negative vesting sum %s", vestingSum)
vestingFunds, err := st.LoadVestingFunds(store)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load vesting funds: %w", err)
}
// unlock vested funds first
amountUnlocked := vestingFunds.unlockVestedFunds(currEpoch)
st.LockedFunds = big.Sub(st.LockedFunds, amountUnlocked)
Assert(st.LockedFunds.GreaterThanEqual(big.Zero()))
// add locked funds now
vestingFunds.addLockedFunds(currEpoch, vestingSum, st.ProvingPeriodStart, spec)
st.LockedFunds = big.Add(st.LockedFunds, vestingSum)
// save the updated vesting table state
if err := st.SaveVestingFunds(store, vestingFunds); err != nil {
return big.Zero(), xerrors.Errorf("failed to save vesting funds: %w", err)
}
return amountUnlocked, nil
}
// PenalizeFundsInPriorityOrder first unlocks unvested funds from the vesting table.
// If the target is not yet hit it deducts funds from the (new) available balance.
// Returns the amount unlocked from the vesting table and the amount taken from current balance.
// If the penalty exceeds the total amount available in the vesting table and unlocked funds
// the penalty is reduced to match. This must be fixed when handling bankruptcy:
// https://github.com/filecoin-project/specs-actors/issues/627
func (st *State) PenalizeFundsInPriorityOrder(store adt.Store, currEpoch abi.ChainEpoch, target, unlockedBalance abi.TokenAmount) (fromVesting abi.TokenAmount, fromBalance abi.TokenAmount, err error) {
fromVesting, err = st.UnlockUnvestedFunds(store, currEpoch, target)
if err != nil {
return abi.NewTokenAmount(0), abi.NewTokenAmount(0), err
}
if fromVesting.Equals(target) {
return fromVesting, abi.NewTokenAmount(0), nil
}
// unlocked funds were just deducted from available, so track that
remaining := big.Sub(target, fromVesting)
fromBalance = big.Min(unlockedBalance, remaining)
return fromVesting, fromBalance, nil
}
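With token amounts modeled as plain integers, the priority ordering reduces to simple arithmetic. `penalize` below is a hypothetical helper mirroring the vesting-first rule, including the implicit reduction of the penalty when both sources together fall short:

```go
package main

import "fmt"

// penalize takes as much as possible of target from unvested funds first,
// then covers the remainder (up to the target) from the unlocked balance.
func penalize(target, unvested, unlockedBalance int64) (fromVesting, fromBalance int64) {
	fromVesting = min64(target, unvested)
	remaining := target - fromVesting
	fromBalance = min64(unlockedBalance, remaining)
	return fromVesting, fromBalance
}

func min64(a, b int64) int64 {
	if a < b {
		return a
	}
	return b
}

func main() {
	v, b := penalize(100, 60, 30)
	fmt.Println(v, b) // 60 30: the penalty is effectively reduced to 90
}
```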
// Unlocks an amount of funds that have *not yet vested*, if possible.
// The soonest-vesting entries are unlocked first.
// Returns the amount actually unlocked.
func (st *State) UnlockUnvestedFunds(store adt.Store, currEpoch abi.ChainEpoch, target abi.TokenAmount) (abi.TokenAmount, error) {
vestingFunds, err := st.LoadVestingFunds(store)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load vesting funds: %w", err)
}
amountUnlocked := vestingFunds.unlockUnvestedFunds(currEpoch, target)
st.LockedFunds = big.Sub(st.LockedFunds, amountUnlocked)
Assert(st.LockedFunds.GreaterThanEqual(big.Zero()))
if err := st.SaveVestingFunds(store, vestingFunds); err != nil {
return big.Zero(), xerrors.Errorf("failed to save vesting funds: %w", err)
}
return amountUnlocked, nil
}
// Unlocks all vesting funds that have vested before the provided epoch.
// Returns the amount unlocked.
func (st *State) UnlockVestedFunds(store adt.Store, currEpoch abi.ChainEpoch) (abi.TokenAmount, error) {
vestingFunds, err := st.LoadVestingFunds(store)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load vesting funds: %w", err)
}
amountUnlocked := vestingFunds.unlockVestedFunds(currEpoch)
st.LockedFunds = big.Sub(st.LockedFunds, amountUnlocked)
Assert(st.LockedFunds.GreaterThanEqual(big.Zero()))
err = st.SaveVestingFunds(store, vestingFunds)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to save vesting funds: %w", err)
}
return amountUnlocked, nil
}
// CheckVestedFunds returns the amount of vested funds that have vested before the provided epoch.
func (st *State) CheckVestedFunds(store adt.Store, currEpoch abi.ChainEpoch) (abi.TokenAmount, error) {
vestingFunds, err := st.LoadVestingFunds(store)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load vesting funds: %w", err)
}
amountVested := abi.NewTokenAmount(0)
for i := range vestingFunds.Funds {
vf := vestingFunds.Funds[i]
epoch := vf.Epoch
amount := vf.Amount
if epoch >= currEpoch {
break
}
amountVested = big.Add(amountVested, amount)
}
return amountVested, nil
}
// Unclaimed funds that are not locked -- includes funds used to cover initial pledge requirement
func (st *State) GetUnlockedBalance(actorBalance abi.TokenAmount) abi.TokenAmount {
unlockedBalance := big.Subtract(actorBalance, st.LockedFunds, st.PreCommitDeposits)
Assert(unlockedBalance.GreaterThanEqual(big.Zero()))
return unlockedBalance
}
// Unclaimed funds. Actor balance - (locked funds, precommit deposit, ip requirement)
// Can go negative if the miner is in IP debt
func (st *State) GetAvailableBalance(actorBalance abi.TokenAmount) abi.TokenAmount {
availableBalance := st.GetUnlockedBalance(actorBalance)
return big.Sub(availableBalance, st.InitialPledgeRequirement)
}
func (st *State) AssertBalanceInvariants(balance abi.TokenAmount) {
Assert(st.PreCommitDeposits.GreaterThanEqual(big.Zero()))
Assert(st.LockedFunds.GreaterThanEqual(big.Zero()))
Assert(balance.GreaterThanEqual(big.Sum(st.PreCommitDeposits, st.LockedFunds)))
}
func (st *State) MeetsInitialPledgeCondition(balance abi.TokenAmount) bool {
available := st.GetUnlockedBalance(balance)
return available.GreaterThanEqual(st.InitialPledgeRequirement)
}
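The balance relationships in the helpers above can be summarized with plain integers (hypothetical helper names; real amounts are arbitrary-precision big.Int values):

```go
package main

import "fmt"

// unlockedBalance = actor balance - locked funds - pre-commit deposits.
func unlockedBalance(actorBalance, locked, preCommitDeposits int64) int64 {
	return actorBalance - locked - preCommitDeposits
}

// availableBalance further subtracts the initial pledge requirement and may
// go negative when the miner is in initial-pledge (IP) debt.
func availableBalance(actorBalance, locked, preCommitDeposits, pledgeRequirement int64) int64 {
	return unlockedBalance(actorBalance, locked, preCommitDeposits) - pledgeRequirement
}

func main() {
	fmt.Println(unlockedBalance(1000, 300, 200))       // 500
	fmt.Println(availableBalance(1000, 300, 200, 600)) // -100: in IP debt
}
```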
// pre-commit expiry
func (st *State) QuantSpecEveryDeadline() QuantSpec {
return NewQuantSpec(WPoStChallengeWindow, st.ProvingPeriodStart)
}
func (st *State) AddPreCommitExpiry(store adt.Store, expireEpoch abi.ChainEpoch, sectorNum abi.SectorNumber) error {
// Load BitField Queue for sector expiry
quant := st.QuantSpecEveryDeadline()
queue, err := LoadBitfieldQueue(store, st.PreCommittedSectorsExpiry, quant)
if err != nil {
return xerrors.Errorf("failed to load pre-commit expiry queue: %w", err)
}
// add entry for this sector to the queue
if err := queue.AddToQueueValues(expireEpoch, uint64(sectorNum)); err != nil {
return xerrors.Errorf("failed to add pre-commit sector expiry to queue: %w", err)
}
st.PreCommittedSectorsExpiry, err = queue.Root()
if err != nil {
return xerrors.Errorf("failed to save pre-commit sector queue: %w", err)
}
return nil
}
func (st *State) checkPrecommitExpiry(store adt.Store, sectors abi.BitField) (depositToBurn abi.TokenAmount, err error) {
depositToBurn = abi.NewTokenAmount(0)
var precommitsToDelete []abi.SectorNumber
if err = sectors.ForEach(func(i uint64) error {
sectorNo := abi.SectorNumber(i)
sector, found, err := st.GetPrecommittedSector(store, sectorNo)
if err != nil {
return err
}
if !found {
// already committed/deleted
return nil
}
// mark it for deletion
precommitsToDelete = append(precommitsToDelete, sectorNo)
// increment deposit to burn
depositToBurn = big.Add(depositToBurn, sector.PreCommitDeposit)
return nil
}); err != nil {
return big.Zero(), xerrors.Errorf("failed to check pre-commit expiries: %w", err)
}
// Actually delete it.
if len(precommitsToDelete) > 0 {
if err := st.DeletePrecommittedSectors(store, precommitsToDelete...); err != nil {
return big.Zero(), fmt.Errorf("failed to delete pre-commits: %w", err)
}
}
st.PreCommitDeposits = big.Sub(st.PreCommitDeposits, depositToBurn)
Assert(st.PreCommitDeposits.GreaterThanEqual(big.Zero()))
// This deposit was locked separately to pledge collateral so there's no pledge change here.
return depositToBurn, nil
}
//
// Misc helpers
//
func SectorKey(e abi.SectorNumber) adt.Keyer {
return adt.UIntKey(uint64(e))
}
func init() {
// Check that SectorNumber is indeed an unsigned integer to confirm that SectorKey is making the right interpretation.
var e abi.SectorNumber
if reflect.TypeOf(e).Kind() != reflect.Uint64 {
panic("incorrect sector number encoding")
}
}
StorageMinerActorCode implementation
package miner
import (
"bytes"
"encoding/binary"
"fmt"
"math"
addr "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-bitfield"
cid "github.com/ipfs/go-cid"
cbg "github.com/whyrusleeping/cbor-gen"
"golang.org/x/xerrors"
abi "github.com/filecoin-project/specs-actors/actors/abi"
big "github.com/filecoin-project/specs-actors/actors/abi/big"
builtin "github.com/filecoin-project/specs-actors/actors/builtin"
market "github.com/filecoin-project/specs-actors/actors/builtin/market"
power "github.com/filecoin-project/specs-actors/actors/builtin/power"
"github.com/filecoin-project/specs-actors/actors/builtin/reward"
crypto "github.com/filecoin-project/specs-actors/actors/crypto"
vmr "github.com/filecoin-project/specs-actors/actors/runtime"
exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
. "github.com/filecoin-project/specs-actors/actors/util"
adt "github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/filecoin-project/specs-actors/actors/util/smoothing"
)
type Runtime = vmr.Runtime
type CronEventType int64
const (
CronEventWorkerKeyChange CronEventType = iota
CronEventProvingDeadline
CronEventProcessEarlyTerminations
)
type CronEventPayload struct {
EventType CronEventType
}
// Identifier for a single partition within a miner.
type PartitionKey struct {
Deadline uint64
Partition uint64
}
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.ControlAddresses,
3: a.ChangeWorkerAddress,
4: a.ChangePeerID,
5: a.SubmitWindowedPoSt,
6: a.PreCommitSector,
7: a.ProveCommitSector,
8: a.ExtendSectorExpiration,
9: a.TerminateSectors,
10: a.DeclareFaults,
11: a.DeclareFaultsRecovered,
12: a.OnDeferredCronEvent,
13: a.CheckSectorProven,
14: a.AddLockedFund,
15: a.ReportConsensusFault,
16: a.WithdrawBalance,
17: a.ConfirmSectorProofsValid,
18: a.ChangeMultiaddrs,
19: a.CompactPartitions,
20: a.CompactSectorNumbers,
}
}
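The exports table is indexed by method number (the Go composite literal `[]interface{}{1: ..., 2: ...}` builds a sparse slice), and the VM dispatches each incoming message by that number. A toy sketch of the dispatch idea, using a map and hypothetical names for brevity:

```go
package main

import "fmt"

// handler stands in for an exported actor method.
type handler func() string

// dispatch looks up the handler for a method number; an unknown number
// means the message targets an unexported method and is rejected.
func dispatch(exports map[uint64]handler, method uint64) (string, bool) {
	h, ok := exports[method]
	if !ok {
		return "", false // unrecognized method number
	}
	return h(), true
}

func main() {
	exports := map[uint64]handler{
		2: func() string { return "ControlAddresses" },
		5: func() string { return "SubmitWindowedPoSt" },
	}
	out, ok := dispatch(exports, 5)
	fmt.Println(out, ok) // SubmitWindowedPoSt true
}
```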
var _ abi.Invokee = Actor{}
/////////////////
// Constructor //
/////////////////
// Storage miner actors are created exclusively by the storage power actor. In order to break a circular dependency
// between the two, the construction parameters are defined in the power actor.
type ConstructorParams = power.MinerConstructorParams
func (a Actor) Constructor(rt Runtime, params *ConstructorParams) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.InitActorAddr)
_, ok := SupportedProofTypes[params.SealProofType]
if !ok {
rt.Abortf(exitcode.ErrIllegalArgument, "proof type %d not allowed for new miner actors", params.SealProofType)
}
owner := resolveControlAddress(rt, params.OwnerAddr)
worker := resolveWorkerAddress(rt, params.WorkerAddr)
controlAddrs := make([]addr.Address, 0, len(params.ControlAddrs))
for _, ca := range params.ControlAddrs {
resolved := resolveControlAddress(rt, ca)
controlAddrs = append(controlAddrs, resolved)
}
emptyMap, err := adt.MakeEmptyMap(adt.AsStore(rt)).Root()
if err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to construct initial state: %v", err)
}
emptyArray, err := adt.MakeEmptyArray(adt.AsStore(rt)).Root()
if err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to construct initial state: %v", err)
}
emptyBitfield := bitfield.NewFromSet(nil)
emptyBitfieldCid := rt.Store().Put(emptyBitfield)
emptyDeadline := ConstructDeadline(emptyArray)
emptyDeadlineCid := rt.Store().Put(emptyDeadline)
emptyDeadlines := ConstructDeadlines(emptyDeadlineCid)
emptyVestingFunds := ConstructVestingFunds()
emptyDeadlinesCid := rt.Store().Put(emptyDeadlines)
emptyVestingFundsCid := rt.Store().Put(emptyVestingFunds)
currEpoch := rt.CurrEpoch()
offset, err := assignProvingPeriodOffset(rt.Message().Receiver(), currEpoch, rt.Syscalls().HashBlake2b)
builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to assign proving period offset")
periodStart := nextProvingPeriodStart(currEpoch, offset)
Assert(periodStart > currEpoch)
info, err := ConstructMinerInfo(owner, worker, controlAddrs, params.PeerId, params.Multiaddrs, params.SealProofType)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to construct initial miner info")
infoCid := rt.Store().Put(info)
state, err := ConstructState(infoCid, periodStart, emptyBitfieldCid, emptyArray, emptyMap, emptyDeadlinesCid, emptyVestingFundsCid)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to construct state")
rt.State().Create(state)
// Register first cron callback for epoch before the first proving period starts.
enrollCronEvent(rt, periodStart-1, &CronEventPayload{
EventType: CronEventProvingDeadline,
})
return nil
}
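The constructor derives a per-miner proving period offset from its address, then picks the first period boundary strictly after the current epoch, so the miner's first deadline never falls in the past. A sketch of that scheduling rule, assuming a proving period of 2880 epochs and a hypothetical `nextPeriodStart` helper:

```go
package main

import "fmt"

const provingPeriod = 2880 // assumed period length in epochs

// nextPeriodStart returns the smallest epoch strictly greater than currEpoch
// that is congruent to offset modulo provingPeriod.
func nextPeriodStart(currEpoch, offset int64) int64 {
	currModulus := currEpoch % provingPeriod
	var periodProgress int64
	if currModulus >= offset {
		periodProgress = currModulus - offset
	} else {
		periodProgress = provingPeriod - (offset - currModulus)
	}
	return currEpoch - periodProgress + provingPeriod
}

func main() {
	fmt.Println(nextPeriodStart(100, 50))  // 2930: boundary at 50 already passed
	fmt.Println(nextPeriodStart(100, 150)) // 150: boundary still ahead this period
}
```

Note that when the current epoch sits exactly on a boundary, the result is one full period later, preserving the strict `periodStart > currEpoch` invariant asserted in the constructor.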
/////////////
// Control //
/////////////
type GetControlAddressesReturn struct {
Owner addr.Address
Worker addr.Address
ControlAddrs []addr.Address
}
func (a Actor) ControlAddresses(rt Runtime, _ *adt.EmptyValue) *GetControlAddressesReturn {
rt.ValidateImmediateCallerAcceptAny()
var st State
rt.State().Readonly(&st)
info := getMinerInfo(rt, &st)
return &GetControlAddressesReturn{
Owner: info.Owner,
Worker: info.Worker,
ControlAddrs: info.ControlAddresses,
}
}
type ChangeWorkerAddressParams struct {
NewWorker addr.Address
NewControlAddrs []addr.Address
}
// ChangeWorkerAddress will ALWAYS overwrite the existing control addresses with the control addresses passed in the params.
// If a nil addresses slice is passed, the control addresses will be cleared.
// A worker change will be scheduled if the worker passed in the params is different from the existing worker.
func (a Actor) ChangeWorkerAddress(rt Runtime, params *ChangeWorkerAddressParams) *adt.EmptyValue {
var effectiveEpoch abi.ChainEpoch
newWorker := resolveWorkerAddress(rt, params.NewWorker)
var controlAddrs []addr.Address
for _, ca := range params.NewControlAddrs {
resolved := resolveControlAddress(rt, ca)
controlAddrs = append(controlAddrs, resolved)
}
var st State
isWorkerChange := false
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
// Only the Owner is allowed to change the newWorker and control addresses.
rt.ValidateImmediateCallerIs(info.Owner)
{
// save the new control addresses
info.ControlAddresses = controlAddrs
}
{
// save newWorker addr key change request
// This may replace another pending key change.
if newWorker != info.Worker {
isWorkerChange = true
effectiveEpoch = rt.CurrEpoch() + WorkerKeyChangeDelay
info.PendingWorkerKey = &WorkerKeyChange{
NewWorker: newWorker,
EffectiveAt: effectiveEpoch,
}
}
}
err := st.SaveInfo(adt.AsStore(rt), info)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not save miner info")
})
// we only need to enroll the cron event for newWorker key change as we change the control
// addresses immediately
if isWorkerChange {
cronPayload := CronEventPayload{
EventType: CronEventWorkerKeyChange,
}
enrollCronEvent(rt, effectiveEpoch, &cronPayload)
}
return nil
}
type ChangePeerIDParams struct {
NewID abi.PeerID
}
func (a Actor) ChangePeerID(rt Runtime, params *ChangePeerIDParams) *adt.EmptyValue {
// TODO: Consider limiting the maximum number of bytes used by the peer ID on-chain.
// https://github.com/filecoin-project/specs-actors/issues/712
var st State
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
info.PeerId = params.NewID
err := st.SaveInfo(adt.AsStore(rt), info)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not save miner info")
})
return nil
}
type ChangeMultiaddrsParams struct {
NewMultiaddrs []abi.Multiaddrs
}
func (a Actor) ChangeMultiaddrs(rt Runtime, params *ChangeMultiaddrsParams) *adt.EmptyValue {
// TODO: Consider limiting the maximum number of bytes used by multiaddrs on-chain.
// https://github.com/filecoin-project/specs-actors/issues/712
var st State
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
info.Multiaddrs = params.NewMultiaddrs
err := st.SaveInfo(adt.AsStore(rt), info)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not save miner info")
})
return nil
}
//////////////////
// WindowedPoSt //
//////////////////
type PoStPartition struct {
// Partitions are numbered per-deadline, from zero.
Index uint64
// Sectors skipped while proving that weren't already declared faulty
Skipped bitfield.BitField
}
// Information submitted by a miner to provide a Window PoSt.
type SubmitWindowedPoStParams struct {
// The deadline index which the submission targets.
Deadline uint64
// The partitions being proven.
Partitions []PoStPartition
// Array of proofs, one per distinct registered proof type present in the sectors being proven.
// In the usual case of a single proof type, this array will always have a single element (independent of number of partitions).
Proofs []abi.PoStProof
// The epoch at which these proofs are being committed to a particular chain.
ChainCommitEpoch abi.ChainEpoch
// The ticket randomness at ChainCommitEpoch on the chain this PoSt is committed to.
ChainCommitRand abi.Randomness
}
// Invoked by a miner's worker address to submit a Window PoSt.
func (a Actor) SubmitWindowedPoSt(rt Runtime, params *SubmitWindowedPoStParams) *adt.EmptyValue {
currEpoch := rt.CurrEpoch()
store := adt.AsStore(rt)
var st State
if params.Deadline >= WPoStPeriodDeadlines {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %d of %d", params.Deadline, WPoStPeriodDeadlines)
}
if params.ChainCommitEpoch >= currEpoch {
rt.Abortf(exitcode.ErrIllegalArgument, "PoSt chain commitment %d must be in the past", params.ChainCommitEpoch)
}
if params.ChainCommitEpoch < currEpoch-WPoStMaxChainCommitAge {
rt.Abortf(exitcode.ErrIllegalArgument, "PoSt chain commitment %d too far in the past, must be after %d", params.ChainCommitEpoch, currEpoch-WPoStMaxChainCommitAge)
}
commRand := rt.GetRandomnessFromTickets(crypto.DomainSeparationTag_PoStChainCommit, params.ChainCommitEpoch, nil)
if !bytes.Equal(commRand, params.ChainCommitRand) {
rt.Abortf(exitcode.ErrIllegalArgument, "post commit randomness mismatched")
}
// TODO: limit the length of proofs array https://github.com/filecoin-project/specs-actors/issues/416
// Get the total power/reward. We need these to compute penalties.
rewardStats := requestCurrentEpochBlockReward(rt)
pwrTotal := requestCurrentTotalPower(rt)
penaltyTotal := abi.NewTokenAmount(0)
pledgeDelta := abi.NewTokenAmount(0)
var postResult *PoStResult
var info *MinerInfo
rt.State().Transaction(&st, func() {
info = getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
// Validate that the miner didn't try to prove too many partitions at once.
submissionPartitionLimit := loadPartitionsSectorsMax(info.WindowPoStPartitionSectors)
if uint64(len(params.Partitions)) > submissionPartitionLimit {
rt.Abortf(exitcode.ErrIllegalArgument, "too many partitions %d, limit %d", len(params.Partitions), submissionPartitionLimit)
}
// Load and check deadline.
currDeadline := st.DeadlineInfo(currEpoch)
deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")
// Check that the miner state indicates that the current proving deadline has started.
// This should only fail if the cron actor wasn't invoked, and matters only in case that it hasn't been
// invoked for a whole proving period, and hence the missed PoSt submissions from the prior occurrence
// of this deadline haven't been processed yet.
if !currDeadline.IsOpen() {
rt.Abortf(exitcode.ErrIllegalState, "proving period %d not yet open at %d", currDeadline.PeriodStart, currEpoch)
}
// The miner may only submit a proof for the current deadline.
if params.Deadline != currDeadline.Index {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %d at epoch %d, expected %d",
params.Deadline, currEpoch, currDeadline.Index)
}
sectors, err := LoadSectors(store, st.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors")
deadline, err := deadlines.LoadDeadline(store, params.Deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", params.Deadline)
// Record proven sectors/partitions, returning updates to power and the final set of sectors
// proven/skipped.
//
// NOTE: This function does not actually check the proofs but does assume that they'll be
// successfully validated. The actual proof verification is done below in verifyWindowedPost.
//
// If proof verification fails, this deadline MUST NOT be saved and this function should
// be aborted.
faultExpiration := currDeadline.Last() + FaultMaxAge
postResult, err = deadline.RecordProvenSectors(store, sectors, info.SectorSize, currDeadline.QuantSpec(), faultExpiration, params.Partitions)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to process post submission for deadline %d", params.Deadline)
// Validate proofs
// Load sector infos for proof, substituting a known-good sector for known-faulty sectors.
// Note: this is slightly sub-optimal, loading info for the recovering sectors again after they were already
// loaded above.
sectorInfos, err := st.LoadSectorInfosForProof(store, postResult.Sectors, postResult.IgnoredSectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load proven sector info")
// Skip verification if all sectors are faults.
// We still need to allow this call to succeed so the miner can declare a whole partition as skipped.
if len(sectorInfos) > 0 {
// Verify the proof.
// A failed verification doesn't immediately cause a penalty; the miner can try again.
//
// This function aborts on failure.
verifyWindowedPost(rt, currDeadline.Challenge, sectorInfos, params.Proofs)
}
// Penalize new skipped faults and retracted recoveries as undeclared faults.
// These pay a higher fee than faults declared before the deadline challenge window opened.
undeclaredPenaltyPower := postResult.PenaltyPower()
undeclaredPenaltyTarget := PledgePenaltyForUndeclaredFault(
rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, undeclaredPenaltyPower.QA,
)
// Subtract the "ongoing" fault fee from the amount charged now, since it will be charged at
// the end-of-deadline cron.
undeclaredPenaltyTarget = big.Sub(undeclaredPenaltyTarget, PledgePenaltyForDeclaredFault(
rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, undeclaredPenaltyPower.QA,
))
// Penalize recoveries as declared faults (a lower fee than the undeclared, above).
// It sounds odd, but because faults are penalized in arrears, at the _end_ of the faulty period, we must
// penalize recovered sectors here because they won't be penalized by the end-of-deadline cron for the
// immediately-prior faulty period.
declaredPenaltyTarget := PledgePenaltyForDeclaredFault(
rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, postResult.RecoveredPower.QA,
)
// Note: We could delay this charge until end of deadline, but that would require more accounting state.
totalPenaltyTarget := big.Add(undeclaredPenaltyTarget, declaredPenaltyTarget)
unlockedBalance := st.GetUnlockedBalance(rt.CurrentBalance())
vestingPenaltyTotal, balancePenaltyTotal, err := st.PenalizeFundsInPriorityOrder(store, currEpoch, totalPenaltyTarget, unlockedBalance)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock penalty for %v", undeclaredPenaltyPower)
penaltyTotal = big.Add(vestingPenaltyTotal, balancePenaltyTotal)
pledgeDelta = big.Sub(pledgeDelta, vestingPenaltyTotal)
err = deadlines.UpdateDeadline(store, params.Deadline, deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update deadline %d", params.Deadline)
err = st.SaveDeadlines(store, deadlines)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
})
// Restore power for recovered sectors. Remove power for new faults.
// NOTE: It would be permissible to delay the power loss until the deadline closes, but that would require
// additional accounting state.
// https://github.com/filecoin-project/specs-actors/issues/414
requestUpdatePower(rt, postResult.PowerDelta())
// Burn penalties.
burnFunds(rt, penaltyTotal)
notifyPledgeChanged(rt, pledgeDelta)
return nil
}
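The chain-commitment window validated at the top of `SubmitWindowedPoSt` can be stated compactly: the commitment epoch must be strictly in the past, but no older than `WPoStMaxChainCommitAge` epochs. The sketch below is illustrative only; the constant's value here is a placeholder, not the network parameter.

```go
package main

import "fmt"

// Illustrative stand-in for WPoStMaxChainCommitAge.
const wPoStMaxChainCommitAge = 4

// chainCommitEpochOK reproduces the two bounds checked in SubmitWindowedPoSt:
// the epoch is in the past and within the maximum commitment age.
func chainCommitEpochOK(commitEpoch, currEpoch int64) bool {
	return commitEpoch < currEpoch && commitEpoch >= currEpoch-wPoStMaxChainCommitAge
}

func main() {
	fmt.Println(chainCommitEpochOK(99, 100))  // in the window
	fmt.Println(chainCommitEpochOK(100, 100)) // not strictly in the past
	fmt.Println(chainCommitEpochOK(95, 100))  // too far in the past
}
```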
///////////////////////
// Sector Commitment //
///////////////////////
// Proposals must be posted on chain via sma.PublishStorageDeals before PreCommitSector.
// Optimization: PreCommitSector could contain a list of deals that are not published yet.
func (a Actor) PreCommitSector(rt Runtime, params *SectorPreCommitInfo) *adt.EmptyValue {
if _, ok := SupportedProofTypes[params.SealProof]; !ok {
rt.Abortf(exitcode.ErrIllegalArgument, "unsupported seal proof type: %s", params.SealProof)
}
if params.SectorNumber > abi.MaxSectorNumber {
rt.Abortf(exitcode.ErrIllegalArgument, "sector number %d out of range 0..(2^63-1)", params.SectorNumber)
}
if !params.SealedCID.Defined() {
rt.Abortf(exitcode.ErrIllegalArgument, "sealed CID undefined")
}
if params.SealedCID.Prefix() != SealedCIDPrefix {
rt.Abortf(exitcode.ErrIllegalArgument, "sealed CID had wrong prefix")
}
if params.SealRandEpoch >= rt.CurrEpoch() {
rt.Abortf(exitcode.ErrIllegalArgument, "seal challenge epoch %v must be before now %v", params.SealRandEpoch, rt.CurrEpoch())
}
challengeEarliest := sealChallengeEarliest(rt.CurrEpoch(), params.SealProof)
if params.SealRandEpoch < challengeEarliest {
// The subsequent commitment proof can't possibly be accepted because the seal challenge will be deemed
// too old. Note that passing this check doesn't guarantee the proof will be soon enough, depending on
// when it arrives.
rt.Abortf(exitcode.ErrIllegalArgument, "seal challenge epoch %v too old, must be after %v", params.SealRandEpoch, challengeEarliest)
}
if params.Expiration <= rt.CurrEpoch() {
rt.Abortf(exitcode.ErrIllegalArgument, "sector expiration %v must be after now (%v)", params.Expiration, rt.CurrEpoch())
}
if params.ReplaceCapacity && len(params.DealIDs) == 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot replace sector without committing deals")
}
if params.ReplaceSectorDeadline >= WPoStPeriodDeadlines {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %d", params.ReplaceSectorDeadline)
}
if params.ReplaceSectorNumber > abi.MaxSectorNumber {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid sector number %d", params.ReplaceSectorNumber)
}
// gather information from other actors
rewardStats := requestCurrentEpochBlockReward(rt)
pwrTotal := requestCurrentTotalPower(rt)
dealWeight := requestDealWeight(rt, params.DealIDs, rt.CurrEpoch(), params.Expiration)
store := adt.AsStore(rt)
var st State
newlyVested := big.Zero()
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
if params.SealProof != info.SealProofType {
rt.Abortf(exitcode.ErrIllegalArgument, "sector seal proof %v must match miner seal proof type %d", params.SealProof, info.SealProofType)
}
maxDealLimit := dealPerSectorLimit(info.SectorSize)
if uint64(len(params.DealIDs)) > maxDealLimit {
rt.Abortf(exitcode.ErrIllegalArgument, "too many deals for sector %d > %d", len(params.DealIDs), maxDealLimit)
}
err := st.AllocateSectorNumber(store, params.SectorNumber)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to allocate sector id %d", params.SectorNumber)
// The following two checks shouldn't be necessary, but it can't
// hurt to double-check (unless it's really just too
// expensive?).
_, preCommitFound, err := st.GetPrecommittedSector(store, params.SectorNumber)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to check pre-commit %v", params.SectorNumber)
if preCommitFound {
rt.Abortf(exitcode.ErrIllegalState, "sector %v already pre-committed", params.SectorNumber)
}
sectorFound, err := st.HasSectorNo(store, params.SectorNumber)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to check sector %v", params.SectorNumber)
if sectorFound {
rt.Abortf(exitcode.ErrIllegalState, "sector %v already committed", params.SectorNumber)
}
// Require sector lifetime meets minimum by assuming activation happens at last epoch permitted for seal proof.
// This could make sector maximum lifetime validation more lenient if the maximum sector limit isn't hit first.
maxActivation := rt.CurrEpoch() + MaxSealDuration[params.SealProof]
validateExpiration(rt, maxActivation, params.Expiration, params.SealProof)
depositMinimum := big.Zero()
if params.ReplaceCapacity {
replaceSector := validateReplaceSector(rt, &st, store, params)
// Note the replaced sector's initial pledge as a lower bound for the new sector's deposit
depositMinimum = replaceSector.InitialPledge
}
newlyVested, err = st.UnlockVestedFunds(store, rt.CurrEpoch())
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to vest funds")
availableBalance := st.GetAvailableBalance(rt.CurrentBalance())
duration := params.Expiration - rt.CurrEpoch()
sectorWeight := QAPowerForWeight(info.SectorSize, duration, dealWeight.DealWeight, dealWeight.VerifiedDealWeight)
depositReq := big.Max(
PreCommitDepositForPower(rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, sectorWeight),
depositMinimum,
)
if availableBalance.LessThan(depositReq) {
rt.Abortf(exitcode.ErrInsufficientFunds, "insufficient funds for pre-commit deposit: %v", depositReq)
}
st.AddPreCommitDeposit(depositReq)
st.AssertBalanceInvariants(rt.CurrentBalance())
if err := st.PutPrecommittedSector(store, &SectorPreCommitOnChainInfo{
Info: *params,
PreCommitDeposit: depositReq,
PreCommitEpoch: rt.CurrEpoch(),
DealWeight: dealWeight.DealWeight,
VerifiedDealWeight: dealWeight.VerifiedDealWeight,
}); err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to write pre-committed sector %v: %v", params.SectorNumber, err)
}
// add precommit expiry to the queue
msd, ok := MaxSealDuration[params.SealProof]
if !ok {
rt.Abortf(exitcode.ErrIllegalArgument, "no max seal duration set for proof type: %d", params.SealProof)
}
// The +1 here is critical for the batch verification of proofs. Without it, if a proof arrived exactly on the
// due epoch, ProveCommitSector would accept it, then the expiry event would remove it, and then
// ConfirmSectorProofsValid would fail to find it.
expiryBound := rt.CurrEpoch() + msd + 1
err = st.AddPreCommitExpiry(store, expiryBound, params.SectorNumber)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add pre-commit expiry to queue")
})
notifyPledgeChanged(rt, newlyVested.Neg())
return nil
}
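The deposit computation in `PreCommitSector` applies a floor: when replacing a committed-capacity sector, the new sector's pre-commit deposit is at least the replaced sector's initial pledge, otherwise the power-derived deposit applies. A minimal sketch, using plain integers in place of token amounts:

```go
package main

import "fmt"

// preCommitDeposit mirrors the big.Max in PreCommitSector: the deposit is
// the larger of the power-based requirement and the replaced sector's
// initial pledge (zero when no sector is being replaced).
func preCommitDeposit(depositForPower, replacedInitialPledge int64) int64 {
	if replacedInitialPledge > depositForPower {
		return replacedInitialPledge
	}
	return depositForPower
}

func main() {
	fmt.Println(preCommitDeposit(100, 250)) // floor raised by the replaced sector's pledge
	fmt.Println(preCommitDeposit(300, 250)) // power-based deposit dominates
	fmt.Println(preCommitDeposit(300, 0))   // no replacement: power-based deposit
}
```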
type ProveCommitSectorParams struct {
SectorNumber abi.SectorNumber
Proof []byte
}
// Checks state of the corresponding sector pre-commitment, then schedules the proof to be verified in bulk
// by the power actor.
// If valid, the power actor will call ConfirmSectorProofsValid at the end of the same epoch as this message.
func (a Actor) ProveCommitSector(rt Runtime, params *ProveCommitSectorParams) *adt.EmptyValue {
rt.ValidateImmediateCallerAcceptAny()
store := adt.AsStore(rt)
var st State
rt.State().Readonly(&st)
// Verify locked funds are at least the sum of sector initial pledges.
// Note that this call does not actually compute recent vesting, so the reported locked funds may be
// slightly higher than the true amount (i.e. slightly in the miner's favour).
// Computing vesting here would be almost always redundant since vesting is quantized to ~daily units.
// Vesting will be at most one proving period old if computed in the cron callback.
verifyPledgeMeetsInitialRequirements(rt, &st)
sectorNo := params.SectorNumber
precommit, found, err := st.GetPrecommittedSector(store, sectorNo)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pre-committed sector %v", sectorNo)
if !found {
rt.Abortf(exitcode.ErrNotFound, "no pre-committed sector %v", sectorNo)
}
msd, ok := MaxSealDuration[precommit.Info.SealProof]
if !ok {
rt.Abortf(exitcode.ErrIllegalState, "no max seal duration for proof type: %d", precommit.Info.SealProof)
}
proveCommitDue := precommit.PreCommitEpoch + msd
if rt.CurrEpoch() > proveCommitDue {
rt.Abortf(exitcode.ErrIllegalArgument, "commitment proof for %d too late at %d, due %d", sectorNo, rt.CurrEpoch(), proveCommitDue)
}
svi := getVerifyInfo(rt, &SealVerifyStuff{
SealedCID: precommit.Info.SealedCID,
InteractiveEpoch: precommit.PreCommitEpoch + PreCommitChallengeDelay,
SealRandEpoch: precommit.Info.SealRandEpoch,
Proof: params.Proof,
DealIDs: precommit.Info.DealIDs,
SectorNumber: precommit.Info.SectorNumber,
RegisteredSealProof: precommit.Info.SealProof,
})
_, code := rt.Send(
builtin.StoragePowerActorAddr,
builtin.MethodsPower.SubmitPoRepForBulkVerify,
svi,
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed to submit proof for bulk verification")
return nil
}
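The lateness check in `ProveCommitSector` accepts a proof up to and including the due epoch, `PreCommitEpoch + MaxSealDuration`; one epoch later it is rejected (and the pre-commit will eventually expire via the `+1` queue entry added in `PreCommitSector`). A sketch under an assumed duration value:

```go
package main

import "fmt"

// Illustrative stand-in for MaxSealDuration of some proof type.
const maxSealDuration = 10000

// proofOnTime mirrors the check rt.CurrEpoch() > proveCommitDue, inverted:
// a proof is on time while currEpoch <= preCommitEpoch + maxSealDuration.
func proofOnTime(preCommitEpoch, currEpoch int64) bool {
	return currEpoch <= preCommitEpoch+maxSealDuration
}

func main() {
	fmt.Println(proofOnTime(500, 10500)) // exactly at the due epoch: accepted
	fmt.Println(proofOnTime(500, 10501)) // one epoch late: rejected
}
```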
func (a Actor) ConfirmSectorProofsValid(rt Runtime, params *builtin.ConfirmSectorProofsParams) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)
// get network stats from other actors
rewardStats := requestCurrentEpochBlockReward(rt)
pwrTotal := requestCurrentTotalPower(rt)
circulatingSupply := rt.TotalFilCircSupply()
// 1. Activate deals, skipping pre-commits with invalid deals.
// - calls the market actor.
// 2. Reschedule replacement sector expiration.
// - loads and saves sectors
// - loads and saves deadlines/partitions
// 3. Add new sectors.
// - loads and saves sectors.
// - loads and saves deadlines/partitions
//
// Ideally, we'd combine some of these operations, but at least we have
// a constant number of them.
var st State
rt.State().Readonly(&st)
store := adt.AsStore(rt)
info := getMinerInfo(rt, &st)
//
// Activate storage deals.
//
// This skips missing pre-commits.
precommittedSectors, err := st.FindPrecommittedSectors(store, params.Sectors...)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pre-committed sectors")
// Committed-capacity sectors licensed for early removal by new sectors being proven.
replaceSectors := make(DeadlineSectorMap)
// Pre-commits for new sectors.
var preCommits []*SectorPreCommitOnChainInfo
for _, precommit := range precommittedSectors {
// Check (and activate) storage deals associated to sector. Abort if checks failed.
// TODO: we should batch these calls...
// https://github.com/filecoin-project/specs-actors/issues/474
_, code := rt.Send(
builtin.StorageMarketActorAddr,
builtin.MethodsMarket.ActivateDeals,
&market.ActivateDealsParams{
DealIDs: precommit.Info.DealIDs,
SectorExpiry: precommit.Info.Expiration,
},
abi.NewTokenAmount(0),
)
if code != exitcode.Ok {
rt.Log(vmr.INFO, "failed to activate deals on sector %d, dropping from prove commit set", precommit.Info.SectorNumber)
continue
}
preCommits = append(preCommits, precommit)
if precommit.Info.ReplaceCapacity {
err := replaceSectors.AddValues(
precommit.Info.ReplaceSectorDeadline,
precommit.Info.ReplaceSectorPartition,
uint64(precommit.Info.ReplaceSectorNumber),
)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to record sectors for replacement")
}
}
// Abort early if all prove commits failed to validate.
if len(preCommits) == 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "all prove commits failed to validate")
}
var newPower PowerPair
totalPledge := big.Zero()
totalPrecommitDeposit := big.Zero()
newSectors := make([]*SectorOnChainInfo, 0)
newlyVested := big.Zero()
rt.State().Transaction(&st, func() {
// Schedule expiration for replaced sectors to the end of their next deadline window.
// They can't be removed right now because we want to challenge them immediately before termination.
err = st.RescheduleSectorExpirations(store, rt.CurrEpoch(), info.SectorSize, replaceSectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to replace sector expirations")
newSectorNos := make([]abi.SectorNumber, 0, len(preCommits))
for _, precommit := range preCommits {
// compute initial pledge
activation := rt.CurrEpoch()
duration := precommit.Info.Expiration - activation
// This should have been caught in precommit, but don't let other sectors fail because of it.
if duration < MinSectorExpiration {
rt.Log(vmr.WARN, "precommit %d has lifetime %d less than minimum %d, ignoring", precommit.Info.SectorNumber, duration, MinSectorExpiration)
continue
}
power := QAPowerForWeight(info.SectorSize, duration, precommit.DealWeight, precommit.VerifiedDealWeight)
dayReward := ExpectedRewardForPower(rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, power, builtin.EpochsInDay)
// The storage pledge is recorded for use in computing the penalty if this sector is terminated
// before its declared expiration.
// It's not capped to 1 FIL for Space Race, so likely exceeds the actual initial pledge requirement.
storagePledge := ExpectedRewardForPower(rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, power, InitialPledgeProjectionPeriod)
initialPledge := InitialPledgeForPower(power, rewardStats.ThisEpochBaselinePower, pwrTotal.PledgeCollateral,
rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, circulatingSupply)
totalPrecommitDeposit = big.Add(totalPrecommitDeposit, precommit.PreCommitDeposit)
totalPledge = big.Add(totalPledge, initialPledge)
newSectorInfo := SectorOnChainInfo{
SectorNumber: precommit.Info.SectorNumber,
SealProof: precommit.Info.SealProof,
SealedCID: precommit.Info.SealedCID,
DealIDs: precommit.Info.DealIDs,
Expiration: precommit.Info.Expiration,
Activation: activation,
DealWeight: precommit.DealWeight,
VerifiedDealWeight: precommit.VerifiedDealWeight,
InitialPledge: initialPledge,
ExpectedDayReward: dayReward,
ExpectedStoragePledge: storagePledge,
}
newSectors = append(newSectors, &newSectorInfo)
newSectorNos = append(newSectorNos, newSectorInfo.SectorNumber)
}
err = st.PutSectors(store, newSectors...)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to put new sectors")
err = st.DeletePrecommittedSectors(store, newSectorNos...)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete precommitted sectors")
newPower, err = st.AssignSectorsToDeadlines(store, rt.CurrEpoch(), newSectors, info.WindowPoStPartitionSectors, info.SectorSize)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to assign new sectors to deadlines")
// Add sector and pledge lock-up to miner state
newlyVested, err = st.UnlockVestedFunds(store, rt.CurrEpoch())
if err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to vest new funds: %s", err)
}
// Unlock deposit for successful proofs, make it available for lock-up as initial pledge.
st.AddPreCommitDeposit(totalPrecommitDeposit.Neg())
availableBalance := st.GetAvailableBalance(rt.CurrentBalance())
if availableBalance.LessThan(totalPledge) {
rt.Abortf(exitcode.ErrInsufficientFunds, "insufficient funds for aggregate initial pledge requirement %s, available: %s", totalPledge, availableBalance)
}
st.AddInitialPledgeRequirement(totalPledge)
st.AssertBalanceInvariants(rt.CurrentBalance())
})
// Request power and pledge update for activated sector.
requestUpdatePower(rt, newPower)
notifyPledgeChanged(rt, big.Sub(totalPledge, newlyVested))
return nil
}
type CheckSectorProvenParams struct {
SectorNumber abi.SectorNumber
}
func (a Actor) CheckSectorProven(rt Runtime, params *CheckSectorProvenParams) *adt.EmptyValue {
rt.ValidateImmediateCallerAcceptAny()
var st State
rt.State().Readonly(&st)
store := adt.AsStore(rt)
sectorNo := params.SectorNumber
if _, found, err := st.GetSector(store, sectorNo); err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to load proven sector %v", sectorNo)
} else if !found {
rt.Abortf(exitcode.ErrNotFound, "sector %v not proven", sectorNo)
}
return nil
}
/////////////////////////
// Sector Modification //
/////////////////////////
type ExtendSectorExpirationParams struct {
Extensions []ExpirationExtension
}
type ExpirationExtension struct {
Deadline uint64
Partition uint64
Sectors bitfield.BitField
NewExpiration abi.ChainEpoch
}
// Changes the expiration epoch for a sector to a new, later one.
// The sector must not be terminated or faulty.
// The sector's power is recomputed for the new expiration.
func (a Actor) ExtendSectorExpiration(rt Runtime, params *ExtendSectorExpirationParams) *adt.EmptyValue {
if uint64(len(params.Extensions)) > AddressedPartitionsMax {
rt.Abortf(exitcode.ErrIllegalArgument, "too many declarations %d, max %d", len(params.Extensions), AddressedPartitionsMax)
}
// limit the number of sectors declared at once
// https://github.com/filecoin-project/specs-actors/issues/416
var sectorCount uint64
for _, decl := range params.Extensions {
if decl.Deadline >= WPoStPeriodDeadlines {
rt.Abortf(exitcode.ErrIllegalArgument, "deadline %d not in range 0..%d", decl.Deadline, WPoStPeriodDeadlines)
}
count, err := decl.Sectors.Count()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
"failed to count sectors for deadline %d, partition %d",
decl.Deadline, decl.Partition,
)
if sectorCount > math.MaxUint64-count {
rt.Abortf(exitcode.ErrIllegalArgument, "sector bitfield integer overflow")
}
sectorCount += count
}
if sectorCount > AddressedSectorsMax {
rt.Abortf(exitcode.ErrIllegalArgument,
"too many sectors for declaration %d, max %d",
sectorCount, AddressedSectorsMax,
)
}
powerDelta := NewPowerPairZero()
pledgeDelta := big.Zero()
store := adt.AsStore(rt)
var st State
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")
// Group declarations by deadline, and remember iteration order.
declsByDeadline := map[uint64][]*ExpirationExtension{}
var deadlinesToLoad []uint64
for i := range params.Extensions {
// Take a pointer to the value inside the slice, don't
// take a reference to the temporary loop variable as it
// will be overwritten every iteration.
decl := &params.Extensions[i]
if _, ok := declsByDeadline[decl.Deadline]; !ok {
deadlinesToLoad = append(deadlinesToLoad, decl.Deadline)
}
declsByDeadline[decl.Deadline] = append(declsByDeadline[decl.Deadline], decl)
}
sectors, err := LoadSectors(store, st.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")
for _, dlIdx := range deadlinesToLoad {
deadline, err := deadlines.LoadDeadline(store, dlIdx)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)
partitions, err := deadline.PartitionsArray(store)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load partitions for deadline %d", dlIdx)
quant := st.QuantSpecForDeadline(dlIdx)
for _, decl := range declsByDeadline[dlIdx] {
key := PartitionKey{dlIdx, decl.Partition}
var partition Partition
found, err := partitions.Get(decl.Partition, &partition)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load partition %v", key)
if !found {
rt.Abortf(exitcode.ErrNotFound, "no such partition %v", key)
}
oldSectors, err := sectors.Load(decl.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors")
newSectors := make([]*SectorOnChainInfo, len(oldSectors))
for i, sector := range oldSectors {
if decl.NewExpiration < sector.Expiration {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot reduce sector expiration to %d from %d",
decl.NewExpiration, sector.Expiration)
}
validateExpiration(rt, sector.Activation, decl.NewExpiration, sector.SealProof)
newSector := *sector
newSector.Expiration = decl.NewExpiration
newSectors[i] = &newSector
}
// Overwrite sector infos.
err = sectors.Store(newSectors...)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update sectors %v", decl.Sectors)
// Remove old sectors from partition and assign new sectors.
partitionPowerDelta, partitionPledgeDelta, err := partition.ReplaceSectors(store, oldSectors, newSectors, info.SectorSize, quant)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to replace sector expirations at %v", key)
powerDelta = powerDelta.Add(partitionPowerDelta)
pledgeDelta = big.Add(pledgeDelta, partitionPledgeDelta) // expected to be zero, see note below.
err = partitions.Set(decl.Partition, &partition)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save partition %v", key)
}
deadline.Partitions, err = partitions.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save partitions for deadline %d", dlIdx)
err = deadlines.UpdateDeadline(store, dlIdx, deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadline %d", dlIdx)
}
st.Sectors, err = sectors.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save sectors")
err = st.SaveDeadlines(store, deadlines)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
})
requestUpdatePower(rt, powerDelta)
// Note: the pledge delta is expected to be zero, since pledge is not re-calculated for the extension.
// But in case that ever changes, we can do the right thing here.
notifyPledgeChanged(rt, pledgeDelta)
return nil
}
type TerminateSectorsParams struct {
Terminations []TerminationDeclaration
}
type TerminationDeclaration struct {
Deadline uint64
Partition uint64
Sectors bitfield.BitField
}
type TerminateSectorsReturn struct {
// Set to true if all early termination work has been completed. When
// false, the miner may choose to repeatedly invoke TerminateSectors
// with no new sectors to process the remainder of the pending
// terminations. While pending terminations are outstanding, the miner
// will not be able to withdraw funds.
Done bool
}
// Marks some sectors as terminated at the present epoch, earlier than their
// scheduled termination, and adds these sectors to the early termination queue.
// This method then processes up to AddressedSectorsMax sectors and
// AddressedPartitionsMax partitions from the early termination queue,
// terminating deals, paying fines, and returning pledge collateral. While
// sectors remain in this queue:
//
// 1. The miner will be unable to withdraw funds.
// 2. The chain will process up to AddressedSectorsMax sectors and
//    AddressedPartitionsMax partitions per epoch until the queue is empty.
//
// The sectors are immediately ignored for Window PoSt proofs, and should be
// masked in the same way as faulty sectors. A miner terminating sectors in the
// current deadline must be careful to compute an appropriate Window PoSt proof
// for the sectors that will be active at the time the PoSt is submitted.
//
// This function may be invoked with no new sectors to explicitly process the
// next batch of sectors.
func (a Actor) TerminateSectors(rt Runtime, params *TerminateSectorsParams) *TerminateSectorsReturn {
// Note: this cannot terminate pre-committed but un-proven sectors.
// They must be allowed to expire (and deposit burnt).
toProcess := make(DeadlineSectorMap)
for _, term := range params.Terminations {
err := toProcess.Add(term.Deadline, term.Partition, term.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
"failed to process deadline %d, partition %d", term.Deadline, term.Partition,
)
}
err := toProcess.Check(AddressedPartitionsMax, AddressedSectorsMax)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "cannot process requested parameters")
var hadEarlyTerminations bool
var st State
store := adt.AsStore(rt)
currEpoch := rt.CurrEpoch()
powerDelta := NewPowerPairZero()
rt.State().Transaction(&st, func() {
hadEarlyTerminations = havePendingEarlyTerminations(rt, &st)
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")
// We're only reading the sectors, so there's no need to save this back.
// However, we still want to avoid re-loading this array per-partition.
sectors, err := LoadSectors(store, st.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors")
err = toProcess.ForEach(func(dlIdx uint64, partitionSectors PartitionSectorMap) error {
quant := st.QuantSpecForDeadline(dlIdx)
deadline, err := deadlines.LoadDeadline(store, dlIdx)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)
removedPower, err := deadline.TerminateSectors(store, sectors, currEpoch, partitionSectors, info.SectorSize, quant)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to terminate sectors in deadline %d", dlIdx)
st.EarlyTerminations.Set(dlIdx)
powerDelta = powerDelta.Sub(removedPower)
err = deadlines.UpdateDeadline(store, dlIdx, deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update deadline %d", dlIdx)
return nil
})
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to walk sectors")
err = st.SaveDeadlines(store, deadlines)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
})
// Now, try to process these sectors.
more := processEarlyTerminations(rt)
if more && !hadEarlyTerminations {
// We have remaining terminations, and we didn't _previously_
// have early terminations to process, so schedule a cron job.
// NOTE: This isn't quite correct. If we repeatedly fill, empty,
// fill, and empty the queue, we'll keep scheduling new cron
// jobs. However, in practice, that shouldn't be all that bad.
scheduleEarlyTerminationWork(rt)
}
requestUpdatePower(rt, powerDelta)
return &TerminateSectorsReturn{Done: !more}
}
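The `Done` flag means a caller may need to invoke `TerminateSectors` repeatedly, with no new sectors, until the early-termination queue drains. The loop below is a minimal sketch of that interaction; `terminateSectors` and `addressedSectorsMax` are simplified stand-ins for the actor method and the `AddressedSectorsMax` limit (the real actor also bounds partitions per invocation).

```go
package main

import "fmt"

// addressedSectorsMax is an illustrative per-invocation cap, standing in
// for AddressedSectorsMax.
const addressedSectorsMax = 5

// terminateSectors drains up to addressedSectorsMax entries from the
// pending queue and reports whether the queue is now empty (Done).
func terminateSectors(queue []uint64) (remaining []uint64, done bool) {
	n := len(queue)
	if n > addressedSectorsMax {
		n = addressedSectorsMax
	}
	remaining = queue[n:]
	return remaining, len(remaining) == 0
}

func main() {
	// 12 pending sector terminations; each call processes at most 5,
	// so the caller loops until Done is reported.
	queue := make([]uint64, 12)
	calls := 0
	done := false
	for !done {
		queue, done = terminateSectors(queue)
		calls++
	}
	fmt.Println("calls:", calls) // three invocations drain the queue
}
```

While the queue is non-empty the miner cannot withdraw funds, which gives the caller an incentive to keep invoking until `Done` is true.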
////////////
// Faults //
////////////
type DeclareFaultsParams struct {
Faults []FaultDeclaration
}
type FaultDeclaration struct {
// The deadline to which the faulty sectors are assigned, in range [0..WPoStPeriodDeadlines)
Deadline uint64
// Partition index within the deadline containing the faulty sectors.
Partition uint64
// Sectors in the partition being declared faulty.
Sectors bitfield.BitField
}
func (a Actor) DeclareFaults(rt Runtime, params *DeclareFaultsParams) *adt.EmptyValue {
toProcess := make(DeadlineSectorMap)
for _, term := range params.Faults {
err := toProcess.Add(term.Deadline, term.Partition, term.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
"failed to process deadline %d, partition %d", term.Deadline, term.Partition,
)
}
err := toProcess.Check(AddressedPartitionsMax, AddressedSectorsMax)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "cannot process requested parameters")
store := adt.AsStore(rt)
var st State
newFaultPowerTotal := NewPowerPairZero()
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
deadlines, err := st.LoadDeadlines(store)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")
sectors, err := LoadSectors(store, st.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")
err = toProcess.ForEach(func(dlIdx uint64, pm PartitionSectorMap) error {
targetDeadline, err := declarationDeadlineInfo(st.ProvingPeriodStart, dlIdx, rt.CurrEpoch())
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "invalid fault declaration deadline %d", dlIdx)
err = validateFRDeclarationDeadline(targetDeadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed fault declaration at deadline %d", dlIdx)
deadline, err := deadlines.LoadDeadline(store, dlIdx)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)
faultExpirationEpoch := targetDeadline.Last() + FaultMaxAge
newFaultyPower, err := deadline.DeclareFaults(store, sectors, info.SectorSize, targetDeadline.QuantSpec(), faultExpirationEpoch, pm)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to declare faults for deadline %d", dlIdx)
err = deadlines.UpdateDeadline(store, dlIdx, deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store deadline %d partitions", dlIdx)
newFaultPowerTotal = newFaultPowerTotal.Add(newFaultyPower)
return nil
})
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to iterate deadlines")
err = st.SaveDeadlines(store, deadlines)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
})
// Remove power for new faulty sectors.
// NOTE: It would be permissible to delay the power loss until the deadline closes, but that would require
// additional accounting state.
// https://github.com/filecoin-project/specs-actors/issues/414
requestUpdatePower(rt, newFaultPowerTotal.Neg())
// Payment of penalty for declared faults is deferred to the deadline cron.
return nil
}
type DeclareFaultsRecoveredParams struct {
Recoveries []RecoveryDeclaration
}
type RecoveryDeclaration struct {
// The deadline to which the recovered sectors are assigned, in range [0..WPoStPeriodDeadlines)
Deadline uint64
// Partition index within the deadline containing the recovered sectors.
Partition uint64
// Sectors in the partition being declared recovered.
Sectors bitfield.BitField
}
func (a Actor) DeclareFaultsRecovered(rt Runtime, params *DeclareFaultsRecoveredParams) *adt.EmptyValue {
toProcess := make(DeadlineSectorMap)
for _, term := range params.Recoveries {
err := toProcess.Add(term.Deadline, term.Partition, term.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
"failed to process deadline %d, partition %d", term.Deadline, term.Partition,
)
}
err := toProcess.Check(AddressedPartitionsMax, AddressedSectorsMax)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "cannot process requested parameters")
store := adt.AsStore(rt)
var st State
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")
sectors, err := LoadSectors(store, st.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")
err = toProcess.ForEach(func(dlIdx uint64, pm PartitionSectorMap) error {
targetDeadline, err := declarationDeadlineInfo(st.ProvingPeriodStart, dlIdx, rt.CurrEpoch())
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "invalid recovery declaration deadline %d", dlIdx)
err = validateFRDeclarationDeadline(targetDeadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed recovery declaration at deadline %d", dlIdx)
deadline, err := deadlines.LoadDeadline(store, dlIdx)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)
err = deadline.DeclareFaultsRecovered(store, sectors, info.SectorSize, pm)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to declare recoveries for deadline %d", dlIdx)
err = deadlines.UpdateDeadline(store, dlIdx, deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store deadline %d", dlIdx)
return nil
})
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to walk sectors")
err = st.SaveDeadlines(store, deadlines)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
})
// Power is not restored yet; it is restored when the recovered sectors are successfully proven in a subsequent Window PoSt.
return nil
}
/////////////////
// Maintenance //
/////////////////
type CompactPartitionsParams struct {
Deadline uint64
Partitions bitfield.BitField
}
// Compacts a number of partitions at one deadline by removing terminated sectors, re-ordering the remaining sectors,
// and assigning them to new partitions so as to completely fill all but one partition with live sectors.
// The addressed partitions are removed from the deadline, and new ones appended.
// The final partition in the deadline is always included in the compaction, whether or not explicitly requested.
// Removed sectors are removed from state entirely.
// May not be invoked if the deadline has any un-processed early terminations.
func (a Actor) CompactPartitions(rt Runtime, params *CompactPartitionsParams) *adt.EmptyValue {
if params.Deadline >= WPoStPeriodDeadlines {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %v", params.Deadline)
}
partitionCount, err := params.Partitions.Count()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to parse partitions bitfield")
store := adt.AsStore(rt)
var st State
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
if !deadlineIsMutable(st.ProvingPeriodStart, params.Deadline, rt.CurrEpoch()) {
rt.Abortf(exitcode.ErrForbidden,
"cannot compact deadline %d during its challenge window or the prior challenge window", params.Deadline)
}
submissionPartitionLimit := loadPartitionsSectorsMax(info.WindowPoStPartitionSectors)
if partitionCount > submissionPartitionLimit {
rt.Abortf(exitcode.ErrIllegalArgument, "too many partitions %d, limit %d", partitionCount, submissionPartitionLimit)
}
quant := st.QuantSpecForDeadline(params.Deadline)
deadlines, err := st.LoadDeadlines(store)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")
deadline, err := deadlines.LoadDeadline(store, params.Deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", params.Deadline)
live, dead, removedPower, err := deadline.RemovePartitions(store, params.Partitions, quant)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to remove partitions from deadline %d", params.Deadline)
err = st.DeleteSectors(store, dead)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete dead sectors")
sectors, err := st.LoadSectorInfos(store, live)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load moved sectors")
newPower, err := deadline.AddSectors(store, info.WindowPoStPartitionSectors, sectors, info.SectorSize, quant)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add back moved sectors")
if !removedPower.Equals(newPower) {
rt.Abortf(exitcode.ErrIllegalState, "power changed when compacting partitions: was %v, is now %v", removedPower, newPower)
}
})
return nil
}
type CompactSectorNumbersParams struct {
MaskSectorNumbers bitfield.BitField
}
// Compacts sector number allocations to reduce the size of the allocated sector
// number bitfield.
//
// When allocating sector numbers sequentially, or in sequential groups, this
// bitfield should remain fairly small. However, if the bitfield grows large
// enough such that PreCommitSector fails (or becomes expensive), this method
// can be called to mask out (throw away) entire ranges of unused sector IDs.
// For example, if sectors 1-99 and 101-200 have been allocated, sector number
// 100 can be masked out to collapse these two ranges into one.
func (a Actor) CompactSectorNumbers(rt Runtime, params *CompactSectorNumbersParams) *adt.EmptyValue {
lastSectorNo, err := params.MaskSectorNumbers.Last()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "invalid mask bitfield")
if lastSectorNo > abi.MaxSectorNumber {
rt.Abortf(exitcode.ErrIllegalArgument, "masked sector number %d exceeds max sector number", lastSectorNo)
}
store := adt.AsStore(rt)
var st State
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
err := st.MaskSectorNumbers(store, params.MaskSectorNumbers)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to mask sector numbers")
})
return nil
}
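The effect of masking on the run-length-encoded allocation bitfield can be illustrated with a simplified model. Here the bitfield is modeled as a plain set and `countRuns` counts contiguous allocated ranges; the real actor uses an RLE+ bitfield whose encoded size shrinks as runs merge.

```go
package main

import "fmt"

// countRuns counts contiguous runs of allocated sector numbers in
// [0, max]. Fewer runs means a smaller run-length encoding.
func countRuns(allocated map[uint64]bool, max uint64) int {
	runs := 0
	inRun := false
	for n := uint64(0); n <= max; n++ {
		if allocated[n] && !inRun {
			runs++
		}
		inRun = allocated[n]
	}
	return runs
}

func main() {
	allocated := map[uint64]bool{}
	for n := uint64(1); n <= 200; n++ {
		if n != 100 {
			allocated[n] = true // sectors 1-99 and 101-200 allocated
		}
	}
	fmt.Println("runs before mask:", countRuns(allocated, 200)) // 2
	allocated[100] = true // mask out the unused sector number 100
	fmt.Println("runs after mask:", countRuns(allocated, 200)) // 1
}
```

Masking permanently discards the masked numbers: sector 100 can never be allocated afterwards, which is the cost of the smaller encoding.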
///////////////////////
// Pledge Collateral //
///////////////////////
// Locks up some amount of the miner's unlocked balance (including funds received alongside the invoking message).
func (a Actor) AddLockedFund(rt Runtime, amountToLock *abi.TokenAmount) *adt.EmptyValue {
if amountToLock.Sign() < 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot lock up a negative amount of funds")
}
var st State
newlyVested := big.Zero()
rt.State().Transaction(&st, func() {
var err error
info := getMinerInfo(rt, &st)
rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker, builtin.RewardActorAddr)...)
// This may lock up unlocked balance that was covering InitialPledgeRequirements
// This ensures that the amountToLock is always locked up if the miner account
// can cover it.
unlockedBalance := st.GetUnlockedBalance(rt.CurrentBalance())
if unlockedBalance.LessThan(*amountToLock) {
rt.Abortf(exitcode.ErrInsufficientFunds, "insufficient funds to lock, available: %v, requested: %v", unlockedBalance, *amountToLock)
}
newlyVested, err = st.AddLockedFunds(adt.AsStore(rt), rt.CurrEpoch(), *amountToLock, &RewardVestingSpec)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to lock funds in vesting table")
})
notifyPledgeChanged(rt, big.Sub(*amountToLock, newlyVested))
return nil
}
type ReportConsensusFaultParams struct {
BlockHeader1 []byte
BlockHeader2 []byte
BlockHeaderExtra []byte
}
func (a Actor) ReportConsensusFault(rt Runtime, params *ReportConsensusFaultParams) *adt.EmptyValue {
// Note: only the first reporter of any fault is rewarded.
// Subsequent invocations fail because the target miner has been removed.
rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
reporter := rt.Message().Caller()
fault, err := rt.Syscalls().VerifyConsensusFault(params.BlockHeader1, params.BlockHeader2, params.BlockHeaderExtra)
if err != nil {
rt.Abortf(exitcode.ErrIllegalArgument, "fault not verified: %s", err)
}
// Elapsed epochs since the fault (i.e. since the later of the two blocks).
faultAge := rt.CurrEpoch() - fault.Epoch
if faultAge <= 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid fault epoch %v ahead of current %v", fault.Epoch, rt.CurrEpoch())
}
// Reward reporter with a share of the miner's current balance.
slasherReward := RewardForConsensusSlashReport(faultAge, rt.CurrentBalance())
_, code := rt.Send(reporter, builtin.MethodSend, nil, slasherReward)
builtin.RequireSuccess(rt, code, "failed to reward reporter")
var st State
rt.State().Readonly(&st)
// Notify power actor with lock-up total being removed.
_, code = rt.Send(
builtin.StoragePowerActorAddr,
builtin.MethodsPower.OnConsensusFault,
&st.LockedFunds,
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed to notify power actor on consensus fault")
// close deals and burn funds
terminateMiner(rt)
return nil
}
type WithdrawBalanceParams struct {
AmountRequested abi.TokenAmount
}
func (a Actor) WithdrawBalance(rt Runtime, params *WithdrawBalanceParams) *adt.EmptyValue {
var st State
if params.AmountRequested.LessThan(big.Zero()) {
rt.Abortf(exitcode.ErrIllegalArgument, "negative fund requested for withdrawal: %s", params.AmountRequested)
}
var info *MinerInfo
newlyVested := big.Zero()
rt.State().Transaction(&st, func() {
var err error
info = getMinerInfo(rt, &st)
// Only the owner is allowed to withdraw the balance as it belongs to/is controlled by the owner
// and not the worker.
rt.ValidateImmediateCallerIs(info.Owner)
// Ensure we don't have any pending terminations.
if count, err := st.EarlyTerminations.Count(); err != nil {
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count early terminations")
} else if count > 0 {
rt.Abortf(exitcode.ErrForbidden,
"cannot withdraw funds while %d deadlines have terminated sectors with outstanding fees",
count,
)
}
// Unlock vested funds so we can spend them.
newlyVested, err = st.UnlockVestedFunds(adt.AsStore(rt), rt.CurrEpoch())
if err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to vest funds: %v", err)
}
// Verify InitialPledgeRequirement does not exceed unlocked funds
verifyPledgeMeetsInitialRequirements(rt, &st)
})
currBalance := rt.CurrentBalance()
amountWithdrawn := big.Min(st.GetAvailableBalance(currBalance), params.AmountRequested)
Assert(amountWithdrawn.GreaterThanEqual(big.Zero()))
Assert(amountWithdrawn.LessThanEqual(currBalance))
_, code := rt.Send(info.Owner, builtin.MethodSend, nil, amountWithdrawn)
builtin.RequireSuccess(rt, code, "failed to withdraw balance")
pledgeDelta := newlyVested.Neg()
notifyPledgeChanged(rt, pledgeDelta)
st.AssertBalanceInvariants(rt.CurrentBalance())
return nil
}
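The amount the owner actually receives is the lesser of the requested amount and the balance not held against the miner's obligations. The sketch below is a simplified model of that computation; `withdrawable` and its integer arguments are illustrative stand-ins for `st.GetAvailableBalance` over the actor's locked vesting funds, pre-commit deposits, and initial pledge requirement.

```go
package main

import "fmt"

// withdrawable returns min(requested, available), where available is the
// balance remaining after the miner's locked obligations are deducted
// (clamped at zero, mirroring the non-negative assertion on the result).
func withdrawable(balance, locked, precommitDeposit, initialPledge, requested int64) int64 {
	available := balance - locked - precommitDeposit - initialPledge
	if available < 0 {
		available = 0
	}
	if requested < available {
		return requested
	}
	return available
}

func main() {
	// 1000 attoFIL balance with 600 held across the three obligations:
	// at most 400 can leave the actor, regardless of the request.
	fmt.Println(withdrawable(1000, 300, 100, 200, 900)) // 400
	fmt.Println(withdrawable(1000, 300, 100, 200, 250)) // 250
}
```

Clamping at the available balance is what makes the two asserts on `amountWithdrawn` above hold: it can be neither negative nor larger than the current balance.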
//////////
// Cron //
//////////
func (a Actor) OnDeferredCronEvent(rt Runtime, payload *CronEventPayload) *adt.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)
switch payload.EventType {
case CronEventProvingDeadline:
handleProvingDeadline(rt)
case CronEventWorkerKeyChange:
commitWorkerKeyChange(rt)
case CronEventProcessEarlyTerminations:
if processEarlyTerminations(rt) {
scheduleEarlyTerminationWork(rt)
}
}
return nil
}
////////////////////////////////////////////////////////////////////////////////
// Utility functions & helpers
////////////////////////////////////////////////////////////////////////////////
func processEarlyTerminations(rt Runtime) (more bool) {
store := adt.AsStore(rt)
// TODO: We're using the current power+epoch reward. Technically, we
// should use the power/reward at the time of termination.
// https://github.com/filecoin-project/specs-actors/pull/648
rewardStats := requestCurrentEpochBlockReward(rt)
pwrTotal := requestCurrentTotalPower(rt)
var (
result TerminationResult
dealsToTerminate []market.OnMinerSectorsTerminateParams
penalty = big.Zero()
pledgeDelta = big.Zero()
)
var st State
rt.State().Transaction(&st, func() {
var err error
result, more, err = st.PopEarlyTerminations(store, AddressedPartitionsMax, AddressedSectorsMax)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to pop early terminations")
// Nothing to do, don't waste any time.
// This can happen if we end up processing early terminations
// before the cron callback fires.
if result.IsEmpty() {
return
}
info := getMinerInfo(rt, &st)
sectors, err := LoadSectors(store, st.Sectors)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")
totalInitialPledge := big.Zero()
dealsToTerminate = make([]market.OnMinerSectorsTerminateParams, 0, len(result.Sectors))
err = result.ForEach(func(epoch abi.ChainEpoch, sectorNos bitfield.BitField) error {
sectors, err := sectors.Load(sectorNos)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sector infos")
params := market.OnMinerSectorsTerminateParams{
Epoch: epoch,
DealIDs: make([]abi.DealID, 0, len(sectors)), // estimate ~one deal per sector.
}
for _, sector := range sectors {
params.DealIDs = append(params.DealIDs, sector.DealIDs...)
totalInitialPledge = big.Add(totalInitialPledge, sector.InitialPledge)
}
penalty = big.Add(penalty, terminationPenalty(info.SectorSize, epoch, rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, sectors))
dealsToTerminate = append(dealsToTerminate, params)
return nil
})
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to process terminations")
// Unlock funds for penalties.
// TODO: handle bankrupt miner: https://github.com/filecoin-project/specs-actors/issues/627
// We're intentionally reducing the penalty paid to what we have.
unlockedBalance := st.GetUnlockedBalance(rt.CurrentBalance())
penaltyFromVesting, penaltyFromBalance, err := st.PenalizeFundsInPriorityOrder(store, rt.CurrEpoch(), penalty, unlockedBalance)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock unvested funds")
penalty = big.Add(penaltyFromVesting, penaltyFromBalance)
// Remove pledge requirement.
st.AddInitialPledgeRequirement(totalInitialPledge.Neg())
pledgeDelta = big.Add(totalInitialPledge, penaltyFromVesting).Neg()
})
// We didn't do anything, abort.
if result.IsEmpty() {
return more
}
// Burn penalty.
burnFunds(rt, penalty)
// Return pledge.
notifyPledgeChanged(rt, pledgeDelta)
// Terminate deals.
for _, params := range dealsToTerminate {
requestTerminateDeals(rt, params.Epoch, params.DealIDs)
}
// reschedule cron worker, if necessary.
return more
}
// Invoked at the end of the last epoch for each proving deadline.
func handleProvingDeadline(rt Runtime) {
currEpoch := rt.CurrEpoch()
store := adt.AsStore(rt)
epochReward := requestCurrentEpochBlockReward(rt)
pwrTotal := requestCurrentTotalPower(rt)
hadEarlyTerminations := false
powerDelta := PowerPair{big.Zero(), big.Zero()}
penaltyTotal := abi.NewTokenAmount(0)
pledgeDelta := abi.NewTokenAmount(0)
var st State
rt.State().Transaction(&st, func() {
var err error
{
// Vest locked funds.
// This happens first so that any subsequent penalties are taken
// from locked vesting funds before funds freed this epoch.
newlyVested, err := st.UnlockVestedFunds(store, rt.CurrEpoch())
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to vest funds")
pledgeDelta = big.Add(pledgeDelta, newlyVested.Neg())
}
{
// expire pre-committed sectors
expiryQ, err := LoadBitfieldQueue(store, st.PreCommittedSectorsExpiry, st.QuantSpecEveryDeadline())
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sector expiry queue")
bf, modified, err := expiryQ.PopUntil(currEpoch)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to pop expired sectors")
if modified {
st.PreCommittedSectorsExpiry, err = expiryQ.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save expiry queue")
}
depositToBurn, err := st.checkPrecommitExpiry(store, bf)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to expire pre-committed sectors")
penaltyTotal = big.Add(penaltyTotal, depositToBurn)
}
// Record whether or not we _had_ early terminations in the queue before this method.
// That way, we don't re-schedule a cron callback if one is already scheduled.
hadEarlyTerminations = havePendingEarlyTerminations(rt, &st)
// Note: because the cron actor is not invoked on epochs with empty tipsets, the current epoch is not necessarily
// exactly the final epoch of the deadline; it may be slightly later (i.e. in the subsequent deadline/period).
// Further, this method is invoked once *before* the first proving period starts, after the actor is first
// constructed; this is detected by !dlInfo.PeriodStarted().
// Use dlInfo.PeriodEnd() rather than rt.CurrEpoch unless certain of the desired semantics.
dlInfo := st.DeadlineInfo(currEpoch)
if !dlInfo.PeriodStarted() {
return // Skip checking faults on the first, incomplete period.
}
deadlines, err := st.LoadDeadlines(store)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")
deadline, err := deadlines.LoadDeadline(store, dlInfo.Index)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlInfo.Index)
quant := dlInfo.QuantSpec()
unlockedBalance := st.GetUnlockedBalance(rt.CurrentBalance())
{
// Detect and penalize missing proofs.
faultExpiration := dlInfo.Last() + FaultMaxAge
penalizePowerTotal := big.Zero()
newFaultyPower, failedRecoveryPower, err := deadline.ProcessDeadlineEnd(store, quant, faultExpiration)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to process end of deadline %d", dlInfo.Index)
powerDelta = powerDelta.Sub(newFaultyPower)
penalizePowerTotal = big.Sum(penalizePowerTotal, newFaultyPower.QA, failedRecoveryPower.QA)
// Unlock sector penalty for all undeclared faults.
penaltyTarget := PledgePenaltyForUndeclaredFault(epochReward.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, penalizePowerTotal)
// Subtract the "ongoing" fault fee from the amount charged now, since it will be added on just below.
penaltyTarget = big.Sub(penaltyTarget, PledgePenaltyForDeclaredFault(epochReward.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, penalizePowerTotal))
penaltyFromVesting, penaltyFromBalance, err := st.PenalizeFundsInPriorityOrder(store, currEpoch, penaltyTarget, unlockedBalance)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock penalty")
unlockedBalance = big.Sub(unlockedBalance, penaltyFromBalance)
penaltyTotal = big.Sum(penaltyTotal, penaltyFromVesting, penaltyFromBalance)
pledgeDelta = big.Sub(pledgeDelta, penaltyFromVesting)
}
{
// Record faulty power for penalisation of ongoing faults, before popping expirations.
// This includes any power that was just faulted from missing a PoSt.
penaltyTarget := PledgePenaltyForDeclaredFault(epochReward.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, deadline.FaultyPower.QA)
penaltyFromVesting, penaltyFromBalance, err := st.PenalizeFundsInPriorityOrder(store, currEpoch, penaltyTarget, unlockedBalance)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock penalty")
unlockedBalance = big.Sub(unlockedBalance, penaltyFromBalance) //nolint:ineffassign
penaltyTotal = big.Sum(penaltyTotal, penaltyFromVesting, penaltyFromBalance)
pledgeDelta = big.Sub(pledgeDelta, penaltyFromVesting)
}
{
// Expire sectors that are due, either for on-time expiration or "early" faulty-for-too-long.
expired, err := deadline.PopExpiredSectors(store, dlInfo.Last(), quant)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load expired sectors")
// Release pledge requirements for the sectors expiring on-time.
// Pledge for the sectors expiring early is retained to support the termination fee that will be assessed
// when the early termination is processed.
pledgeDelta = big.Sub(pledgeDelta, expired.OnTimePledge)
st.AddInitialPledgeRequirement(expired.OnTimePledge.Neg())
// Record reduction in power of the amount of expiring active power.
// Faulty power has already been lost, so the amount expiring can be excluded from the delta.
powerDelta = powerDelta.Sub(expired.ActivePower)
// Record deadlines with early terminations. While this
// bitfield is non-empty, the miner is locked until they
// pay the fee.
noEarlyTerminations, err := expired.EarlySectors.IsEmpty()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count early terminations")
if !noEarlyTerminations {
st.EarlyTerminations.Set(dlInfo.Index)
}
// The termination fee is paid later, in early-termination queue processing.
// We could charge at least the undeclared fault fee here, which is a lower bound on the penalty.
// https://github.com/filecoin-project/specs-actors/issues/674
// The deals are not terminated yet, that is left for processing of the early termination queue.
}
// Save new deadline state.
err = deadlines.UpdateDeadline(store, dlInfo.Index, deadline)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update deadline %d", dlInfo.Index)
err = st.SaveDeadlines(store, deadlines)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
// Increment current deadline, and proving period if necessary.
if dlInfo.PeriodStarted() {
st.CurrentDeadline = (st.CurrentDeadline + 1) % WPoStPeriodDeadlines
if st.CurrentDeadline == 0 {
st.ProvingPeriodStart = st.ProvingPeriodStart + WPoStProvingPeriod
}
}
})
// Remove power for new faults, and burn penalties.
requestUpdatePower(rt, powerDelta)
burnFunds(rt, penaltyTotal)
notifyPledgeChanged(rt, pledgeDelta)
// Schedule cron callback for next deadline's last epoch.
newDlInfo := st.DeadlineInfo(currEpoch)
enrollCronEvent(rt, newDlInfo.Last(), &CronEventPayload{
EventType: CronEventProvingDeadline,
})
// Record whether or not we _have_ early terminations now.
hasEarlyTerminations := havePendingEarlyTerminations(rt, &st)
// If we didn't have pending early terminations before, but we do now,
// handle them at the next epoch.
if !hadEarlyTerminations && hasEarlyTerminations {
// First, try to process some of these terminations.
if processEarlyTerminations(rt) {
// If some remain, defer the rest to the next epoch.
scheduleEarlyTerminationWork(rt)
}
// Note: _don't_ process early terminations if we had a cron
// callback already scheduled. In that case, we'll already have
// processed AddressedSectorsMax terminations this epoch.
}
}
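The deadline/proving-period advance at the end of the transaction is simple modular arithmetic: the current deadline index wraps around, and when it does, the proving period start moves forward by one full period. A minimal sketch, assuming the mainnet parameters of 48 deadlines per 2880-epoch proving period (the constants here are stand-ins for `WPoStPeriodDeadlines` and `WPoStProvingPeriod`):

```go
package main

import "fmt"

const (
	wPoStPeriodDeadlines = 48   // deadlines per proving period (mainnet)
	wPoStProvingPeriod   = 2880 // epochs per proving period (mainnet)
)

// advance moves to the next deadline, rolling the proving period start
// forward when the deadline index wraps back to zero.
func advance(currentDeadline uint64, periodStart int64) (uint64, int64) {
	currentDeadline = (currentDeadline + 1) % wPoStPeriodDeadlines
	if currentDeadline == 0 {
		// Wrapped around: the next proving period begins.
		periodStart += wPoStProvingPeriod
	}
	return currentDeadline, periodStart
}

func main() {
	// Advancing from the last deadline (47) wraps to 0 and starts a new period.
	dl, start := advance(47, 10000)
	fmt.Println(dl, start) // 0 12880
}
```

Because the cron callback is re-enrolled for the new deadline's last epoch each time, this advance is what keeps the 48 challenge windows rotating across every 24-hour proving period.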
// Validates that a sector's expiration satisfies the minimum and maximum lifetime constraints.
func validateExpiration(rt Runtime, activation, expiration abi.ChainEpoch, sealProof abi.RegisteredSealProof) {
// expiration cannot be less than minimum after activation
if expiration-activation < MinSectorExpiration {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid expiration %d, total sector lifetime (%d) must exceed %d after activation %d",
expiration, expiration-activation, MinSectorExpiration, activation)
}
// expiration cannot exceed MaxSectorExpirationExtension from now
if expiration > rt.CurrEpoch()+MaxSectorExpirationExtension {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid expiration %d, cannot be more than %d past current epoch %d",
expiration, MaxSectorExpirationExtension, rt.CurrEpoch())
}
// total sector lifetime cannot exceed SectorMaximumLifetime for the sector's seal proof
if expiration-activation > sealProof.SectorMaximumLifetime() {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid expiration %d, total sector lifetime (%d) cannot exceed %d after activation %d",
expiration, expiration-activation, sealProof.SectorMaximumLifetime(), activation)
}
}
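The three bounds above can be expressed as a standalone predicate. The sketch below mirrors the checks with plain integers; the constants are illustrative stand-ins for `MinSectorExpiration`, `MaxSectorExpirationExtension`, and the per-proof `SectorMaximumLifetime`, not the protocol's authoritative values.

```go
package main

import "fmt"

// Illustrative bounds in epochs (2880 epochs ≈ one day at 30s epochs).
const (
	minSectorExpiration          = 180 * 2880     // minimum commitment
	maxSectorExpirationExtension = 540 * 2880     // max distance from "now"
	sectorMaximumLifetime        = 5 * 365 * 2880 // max total lifetime
)

// checkExpiration returns nil iff the proposed expiration satisfies the
// same three constraints enforced by validateExpiration.
func checkExpiration(curr, activation, expiration int64) error {
	if expiration-activation < minSectorExpiration {
		return fmt.Errorf("lifetime %d below minimum %d", expiration-activation, minSectorExpiration)
	}
	if expiration > curr+maxSectorExpirationExtension {
		return fmt.Errorf("expiration %d more than %d past current epoch %d", expiration, maxSectorExpirationExtension, curr)
	}
	if expiration-activation > sectorMaximumLifetime {
		return fmt.Errorf("lifetime %d exceeds maximum %d", expiration-activation, sectorMaximumLifetime)
	}
	return nil
}

func main() {
	// A sector activated at epoch 0, expiring after ~one year, passes all checks.
	fmt.Println(checkExpiration(0, 0, 365*2880)) // <nil>
}
```

Note that the minimum and maximum lifetime bounds are measured from activation, while the extension bound is measured from the current epoch, so extending an old sector is limited by how far its new expiration sits beyond "now".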
func validateReplaceSector(rt Runtime, st *State, store adt.Store, params *SectorPreCommitInfo) *SectorOnChainInfo {
replaceSector, found, err := st.GetSector(store, params.ReplaceSectorNumber)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sector %v", params.ReplaceSectorNumber)
if !found {
rt.Abortf(exitcode.ErrNotFound, "no such sector %v to replace", params.ReplaceSectorNumber)
}
if len(replaceSector.DealIDs) > 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot replace sector %v which has deals", params.ReplaceSectorNumber)
}
if params.SealProof != replaceSector.SealProof {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot replace sector %v seal proof %v with seal proof %v",
params.ReplaceSectorNumber, replaceSector.SealProof, params.SealProof)
}
if params.Expiration < replaceSector.Expiration {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot replace sector %v expiration %v with sooner expiration %v",
params.ReplaceSectorNumber, replaceSector.Expiration, params.Expiration)
}
err = st.CheckSectorHealth(store, params.ReplaceSectorDeadline, params.ReplaceSectorPartition, params.ReplaceSectorNumber)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to replace sector %v", params.ReplaceSectorNumber)
return replaceSector
}
func enrollCronEvent(rt Runtime, eventEpoch abi.ChainEpoch, callbackPayload *CronEventPayload) {
payload := new(bytes.Buffer)
err := callbackPayload.MarshalCBOR(payload)
if err != nil {
rt.Abortf(exitcode.ErrIllegalArgument, "failed to serialize payload: %v", err)
}
_, code := rt.Send(
builtin.StoragePowerActorAddr,
builtin.MethodsPower.EnrollCronEvent,
&power.EnrollCronEventParams{
EventEpoch: eventEpoch,
Payload: payload.Bytes(),
},
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed to enroll cron event")
}
func requestUpdatePower(rt Runtime, delta PowerPair) {
if delta.IsZero() {
return
}
_, code := rt.Send(
builtin.StoragePowerActorAddr,
builtin.MethodsPower.UpdateClaimedPower,
&power.UpdateClaimedPowerParams{
RawByteDelta: delta.Raw,
QualityAdjustedDelta: delta.QA,
},
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed to update power with %v", delta)
}
func requestTerminateDeals(rt Runtime, epoch abi.ChainEpoch, dealIDs []abi.DealID) {
for len(dealIDs) > 0 {
size := min64(cbg.MaxLength, uint64(len(dealIDs)))
_, code := rt.Send(
builtin.StorageMarketActorAddr,
builtin.MethodsMarket.OnMinerSectorsTerminate,
&market.OnMinerSectorsTerminateParams{
Epoch: epoch,
DealIDs: dealIDs[:size],
},
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed to terminate deals, exit code %v", code)
dealIDs = dealIDs[size:]
}
}
func requestTerminateAllDeals(rt Runtime, st *State) { //nolint:deadcode,unused
// TODO: red flag this is an ~unbounded computation.
// Transform into an idempotent partial computation that can be progressed on each invocation.
// https://github.com/filecoin-project/specs-actors/issues/675
dealIds := []abi.DealID{}
if err := st.ForEachSector(adt.AsStore(rt), func(sector *SectorOnChainInfo) {
dealIds = append(dealIds, sector.DealIDs...)
}); err != nil {
rt.Abortf(exitcode.ErrIllegalState, "failed to traverse sectors for termination: %v", err)
}
requestTerminateDeals(rt, rt.CurrEpoch(), dealIds)
}
func scheduleEarlyTerminationWork(rt Runtime) {
enrollCronEvent(rt, rt.CurrEpoch()+1, &CronEventPayload{
EventType: CronEventProcessEarlyTerminations,
})
}
func havePendingEarlyTerminations(rt Runtime, st *State) bool {
// Record this up-front
noEarlyTerminations, err := st.EarlyTerminations.IsEmpty()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count early terminations")
return !noEarlyTerminations
}
func verifyWindowedPost(rt Runtime, challengeEpoch abi.ChainEpoch, sectors []*SectorOnChainInfo, proofs []abi.PoStProof) {
minerActorID, err := addr.IDFromAddress(rt.Message().Receiver())
AssertNoError(err) // Runtime always provides ID-addresses
// Regenerate challenge randomness, which must match that generated for the proof.
var addrBuf bytes.Buffer
receiver := rt.Message().Receiver()
err = receiver.MarshalCBOR(&addrBuf)
AssertNoError(err)
postRandomness := rt.GetRandomnessFromBeacon(crypto.DomainSeparationTag_WindowedPoStChallengeSeed, challengeEpoch, addrBuf.Bytes())
sectorProofInfo := make([]abi.SectorInfo, len(sectors))
for i, s := range sectors {
sectorProofInfo[i] = abi.SectorInfo{
SealProof: s.SealProof,
SectorNumber: s.SectorNumber,
SealedCID: s.SealedCID,
}
}
// Get public inputs
pvInfo := abi.WindowPoStVerifyInfo{
Randomness: abi.PoStRandomness(postRandomness),
Proofs: proofs,
ChallengedSectors: sectorProofInfo,
Prover: abi.ActorID(minerActorID),
}
// Verify the PoSt Proof
if err = rt.Syscalls().VerifyPoSt(pvInfo); err != nil {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid PoSt %+v: %s", pvInfo, err)
}
}
// SealVerifyParams is the structure of information that must be sent with a
// message to commit a sector. Most of this information is not needed in the
// state tree but will be verified in sm.CommitSector. See SealCommitment for
// data stored on the state tree for each sector.
type SealVerifyStuff struct {
SealedCID cid.Cid // CommR
InteractiveEpoch abi.ChainEpoch // Used to derive the interactive PoRep challenge.
abi.RegisteredSealProof
Proof []byte
DealIDs []abi.DealID
abi.SectorNumber
SealRandEpoch abi.ChainEpoch // Used to tie the seal to a chain.
}
func getVerifyInfo(rt Runtime, params *SealVerifyStuff) *abi.SealVerifyInfo {
if rt.CurrEpoch() <= params.InteractiveEpoch {
rt.Abortf(exitcode.ErrForbidden, "too early to prove sector")
}
// Check randomness.
challengeEarliest := sealChallengeEarliest(rt.CurrEpoch(), params.RegisteredSealProof)
if params.SealRandEpoch < challengeEarliest {
rt.Abortf(exitcode.ErrIllegalArgument, "seal epoch %v too old, expected >= %v", params.SealRandEpoch, challengeEarliest)
}
commD := requestUnsealedSectorCID(rt, params.RegisteredSealProof, params.DealIDs)
minerActorID, err := addr.IDFromAddress(rt.Message().Receiver())
AssertNoError(err) // Runtime always provides ID-addresses
buf := new(bytes.Buffer)
receiver := rt.Message().Receiver()
err = receiver.MarshalCBOR(buf)
AssertNoError(err)
svInfoRandomness := rt.GetRandomnessFromTickets(crypto.DomainSeparationTag_SealRandomness, params.SealRandEpoch, buf.Bytes())
svInfoInteractiveRandomness := rt.GetRandomnessFromBeacon(crypto.DomainSeparationTag_InteractiveSealChallengeSeed, params.InteractiveEpoch, buf.Bytes())
return &abi.SealVerifyInfo{
SealProof: params.RegisteredSealProof,
SectorID: abi.SectorID{
Miner: abi.ActorID(minerActorID),
Number: params.SectorNumber,
},
DealIDs: params.DealIDs,
InteractiveRandomness: abi.InteractiveSealRandomness(svInfoInteractiveRandomness),
Proof: params.Proof,
Randomness: abi.SealRandomness(svInfoRandomness),
SealedCID: params.SealedCID,
UnsealedCID: commD,
}
}
// Closes down this miner by erasing its power, terminating all its deals and burning its funds
func terminateMiner(rt Runtime) {
var st State
rt.State().Readonly(&st)
requestTerminateAllDeals(rt, &st)
// Delete the actor and burn all remaining funds
rt.DeleteActor(builtin.BurntFundsActorAddr)
}
// Requests the storage market actor compute the unsealed sector CID from a sector's deals.
func requestUnsealedSectorCID(rt Runtime, proofType abi.RegisteredSealProof, dealIDs []abi.DealID) cid.Cid {
ret, code := rt.Send(
builtin.StorageMarketActorAddr,
builtin.MethodsMarket.ComputeDataCommitment,
&market.ComputeDataCommitmentParams{
SectorType: proofType,
DealIDs: dealIDs,
},
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed request for unsealed sector CID for deals %v", dealIDs)
var unsealedCID cbg.CborCid
AssertNoError(ret.Into(&unsealedCID))
return cid.Cid(unsealedCID)
}
func requestDealWeight(rt Runtime, dealIDs []abi.DealID, sectorStart, sectorExpiry abi.ChainEpoch) market.VerifyDealsForActivationReturn {
var dealWeights market.VerifyDealsForActivationReturn
ret, code := rt.Send(
builtin.StorageMarketActorAddr,
builtin.MethodsMarket.VerifyDealsForActivation,
&market.VerifyDealsForActivationParams{
DealIDs: dealIDs,
SectorStart: sectorStart,
SectorExpiry: sectorExpiry,
},
abi.NewTokenAmount(0),
)
builtin.RequireSuccess(rt, code, "failed to verify deals and get deal weight")
AssertNoError(ret.Into(&dealWeights))
return dealWeights
}
func commitWorkerKeyChange(rt Runtime) *adt.EmptyValue {
var st State
rt.State().Transaction(&st, func() {
info := getMinerInfo(rt, &st)
// A previously scheduled key change could have been replaced with a new key change request
// scheduled in the future. This case should be treated as a no-op.
if info.PendingWorkerKey == nil || info.PendingWorkerKey.EffectiveAt > rt.CurrEpoch() {
return
}
info.Worker = info.PendingWorkerKey.NewWorker
info.PendingWorkerKey = nil
err := st.SaveInfo(adt.AsStore(rt), info)
builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to save miner info")
})
return nil
}
// Requests the current epoch target block reward from the reward actor.
// return value includes reward, smoothed estimate of reward, and baseline power
func requestCurrentEpochBlockReward(rt Runtime) reward.ThisEpochRewardReturn {
rwret, code := rt.Send(builtin.RewardActorAddr, builtin.MethodsReward.ThisEpochReward, nil, big.Zero())
builtin.RequireSuccess(rt, code, "failed to check epoch baseline power")
var ret reward.ThisEpochRewardReturn
err := rwret.Into(&ret)
builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to unmarshal target power value")
return ret
}
// Requests the current network total power and pledge from the power actor.
func requestCurrentTotalPower(rt Runtime) *power.CurrentTotalPowerReturn {
pwret, code := rt.Send(builtin.StoragePowerActorAddr, builtin.MethodsPower.CurrentTotalPower, nil, big.Zero())
builtin.RequireSuccess(rt, code, "failed to check current power")
var pwr power.CurrentTotalPowerReturn
err := pwret.Into(&pwr)
builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to unmarshal power total value")
return &pwr
}
// Verifies that the miner's unlocked balance covers the sum of sector initial pledge requirements.
func verifyPledgeMeetsInitialRequirements(rt Runtime, st *State) {
if !st.MeetsInitialPledgeCondition(rt.CurrentBalance()) {
rt.Abortf(exitcode.ErrInsufficientFunds,
"unlocked balance does not cover pledge requirements (%v < %v)",
st.GetUnlockedBalance(rt.CurrentBalance()), st.InitialPledgeRequirement)
}
}
// Resolves an address to an ID address and verifies that it is address of an account or multisig actor.
func resolveControlAddress(rt Runtime, raw addr.Address) addr.Address {
resolved, ok := rt.ResolveAddress(raw)
if !ok {
rt.Abortf(exitcode.ErrIllegalArgument, "unable to resolve address %v", raw)
}
Assert(resolved.Protocol() == addr.ID)
ownerCode, ok := rt.GetActorCodeCID(resolved)
if !ok {
rt.Abortf(exitcode.ErrIllegalArgument, "no code for address %v", resolved)
}
if !builtin.IsPrincipal(ownerCode) {
rt.Abortf(exitcode.ErrIllegalArgument, "owner actor type must be a principal, was %v", ownerCode)
}
return resolved
}
// Resolves an address to an ID address and verifies that it is address of an account actor with an associated BLS key.
// The worker must be BLS since the worker key will be used alongside a BLS-VRF.
func resolveWorkerAddress(rt Runtime, raw addr.Address) addr.Address {
resolved, ok := rt.ResolveAddress(raw)
if !ok {
rt.Abortf(exitcode.ErrIllegalArgument, "unable to resolve address %v", raw)
}
Assert(resolved.Protocol() == addr.ID)
ownerCode, ok := rt.GetActorCodeCID(resolved)
if !ok {
rt.Abortf(exitcode.ErrIllegalArgument, "no code for address %v", resolved)
}
if ownerCode != builtin.AccountActorCodeID {
rt.Abortf(exitcode.ErrIllegalArgument, "worker actor type must be an account, was %v", ownerCode)
}
if raw.Protocol() != addr.BLS {
ret, code := rt.Send(resolved, builtin.MethodsAccount.PubkeyAddress, nil, big.Zero())
builtin.RequireSuccess(rt, code, "failed to fetch account pubkey from %v", resolved)
var pubkey addr.Address
err := ret.Into(&pubkey)
if err != nil {
rt.Abortf(exitcode.ErrSerialization, "failed to deserialize address result: %v", ret)
}
if pubkey.Protocol() != addr.BLS {
rt.Abortf(exitcode.ErrIllegalArgument, "worker account %v must have BLS pubkey, was %v", resolved, pubkey.Protocol())
}
}
return resolved
}
func burnFunds(rt Runtime, amt abi.TokenAmount) {
if amt.GreaterThan(big.Zero()) {
_, code := rt.Send(builtin.BurntFundsActorAddr, builtin.MethodSend, nil, amt)
builtin.RequireSuccess(rt, code, "failed to burn funds")
}
}
func notifyPledgeChanged(rt Runtime, pledgeDelta abi.TokenAmount) {
if !pledgeDelta.IsZero() {
_, code := rt.Send(builtin.StoragePowerActorAddr, builtin.MethodsPower.UpdatePledgeTotal, &pledgeDelta, big.Zero())
builtin.RequireSuccess(rt, code, "failed to update total pledge")
}
}
// Assigns proving period offset randomly in the range [0, WPoStProvingPeriod) by hashing
// the actor's address and current epoch.
func assignProvingPeriodOffset(myAddr addr.Address, currEpoch abi.ChainEpoch, hash func(data []byte) [32]byte) (abi.ChainEpoch, error) {
offsetSeed := bytes.Buffer{}
err := myAddr.MarshalCBOR(&offsetSeed)
if err != nil {
return 0, fmt.Errorf("failed to serialize address: %w", err)
}
err = binary.Write(&offsetSeed, binary.BigEndian, currEpoch)
if err != nil {
return 0, fmt.Errorf("failed to serialize epoch: %w", err)
}
digest := hash(offsetSeed.Bytes())
var offset uint64
err = binary.Read(bytes.NewBuffer(digest[:]), binary.BigEndian, &offset)
if err != nil {
return 0, fmt.Errorf("failed to interpret digest: %w", err)
}
offset = offset % uint64(WPoStProvingPeriod)
return abi.ChainEpoch(offset), nil
}
// Computes the epoch at which a proving period should start such that it is greater than the current epoch, and
// has a defined offset from being an exact multiple of WPoStProvingPeriod.
// A miner is exempt from Window PoSt until the first full proving period starts.
func nextProvingPeriodStart(currEpoch abi.ChainEpoch, offset abi.ChainEpoch) abi.ChainEpoch {
currModulus := currEpoch % WPoStProvingPeriod
var periodProgress abi.ChainEpoch // How far ahead is currEpoch from previous offset boundary.
if currModulus >= offset {
periodProgress = currModulus - offset
} else {
periodProgress = WPoStProvingPeriod - (offset - currModulus)
}
periodStart := currEpoch - periodProgress + WPoStProvingPeriod
Assert(periodStart > currEpoch)
return periodStart
}
// Computes deadline information for a fault or recovery declaration.
// If the deadline has not yet elapsed, the declaration is taken as being for the current proving period.
// If the deadline has elapsed, it's instead taken as being for the next proving period after the current epoch.
func declarationDeadlineInfo(periodStart abi.ChainEpoch, deadlineIdx uint64, currEpoch abi.ChainEpoch) (*DeadlineInfo, error) {
if deadlineIdx >= WPoStPeriodDeadlines {
return nil, fmt.Errorf("invalid deadline %d, must be < %d", deadlineIdx, WPoStPeriodDeadlines)
}
deadline := NewDeadlineInfo(periodStart, deadlineIdx, currEpoch).NextNotElapsed()
return deadline, nil
}
// Checks that a fault or recovery declaration at a specific deadline is outside the exclusion window for the deadline.
func validateFRDeclarationDeadline(deadline *DeadlineInfo) error {
if deadline.FaultCutoffPassed() {
return fmt.Errorf("late fault or recovery declaration at %v", deadline)
}
return nil
}
// Validates that a partition contains the given sectors.
func validatePartitionContainsSectors(partition *Partition, sectors bitfield.BitField) error {
// Check that the declared sectors are actually assigned to the partition.
contains, err := abi.BitFieldContainsAll(partition.Sectors, sectors)
if err != nil {
return xerrors.Errorf("failed to check sectors: %w", err)
}
if !contains {
return xerrors.Errorf("not all sectors are assigned to the partition")
}
return nil
}
func terminationPenalty(sectorSize abi.SectorSize, currEpoch abi.ChainEpoch, rewardEstimate, networkQAPowerEstimate *smoothing.FilterEstimate, sectors []*SectorOnChainInfo) abi.TokenAmount {
totalFee := big.Zero()
for _, s := range sectors {
sectorPower := QAPowerForSector(sectorSize, s)
fee := PledgePenaltyForTermination(s.ExpectedDayReward, s.ExpectedStoragePledge, currEpoch-s.Activation, rewardEstimate, networkQAPowerEstimate, sectorPower)
totalFee = big.Add(fee, totalFee)
}
return totalFee
}
func PowerForSector(sectorSize abi.SectorSize, sector *SectorOnChainInfo) PowerPair {
return PowerPair{
Raw: big.NewIntUnsigned(uint64(sectorSize)),
QA: QAPowerForSector(sectorSize, sector),
}
}
// Returns the sum of the raw byte and quality-adjusted power for sectors.
func PowerForSectors(ssize abi.SectorSize, sectors []*SectorOnChainInfo) PowerPair {
qa := big.Zero()
for _, s := range sectors {
qa = big.Add(qa, QAPowerForSector(ssize, s))
}
return PowerPair{
Raw: big.Mul(big.NewIntUnsigned(uint64(ssize)), big.NewIntUnsigned(uint64(len(sectors)))),
QA: qa,
}
}
// The oldest seal challenge epoch that will be accepted in the current epoch.
func sealChallengeEarliest(currEpoch abi.ChainEpoch, proof abi.RegisteredSealProof) abi.ChainEpoch {
return currEpoch - ChainFinality - MaxSealDuration[proof]
}
func getMinerInfo(rt Runtime, st *State) *MinerInfo {
info, err := st.GetInfo(adt.AsStore(rt))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not read miner info")
return info
}
func min64(a, b uint64) uint64 {
if a < b {
return a
}
return b
}
func max64(a, b uint64) uint64 {
if a > b {
return a
}
return b
}
func minEpoch(a, b abi.ChainEpoch) abi.ChainEpoch {
if a < b {
return a
}
return b
}
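The proving-period scheduling above (`assignProvingPeriodOffset` and `nextProvingPeriodStart`) can be sketched as a standalone program. This is an illustrative sketch, not the actor code: it assumes WPoStProvingPeriod = 2880 epochs (24 hours of 30-second epochs, the current mainnet value), uses SHA-256 as the hash, and hashes raw address bytes where the actor hashes the CBOR-serialized address.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const wPoStProvingPeriod = 2880 // assumed: 24 hours of 30-second epochs

// assignOffset mirrors assignProvingPeriodOffset: hash the miner's address
// bytes together with the current epoch and reduce the digest modulo the
// proving period, giving a deterministic offset in [0, wPoStProvingPeriod).
func assignOffset(addrBytes []byte, currEpoch int64) int64 {
	buf := append([]byte{}, addrBytes...)
	var epoch [8]byte
	binary.BigEndian.PutUint64(epoch[:], uint64(currEpoch))
	buf = append(buf, epoch[:]...)
	digest := sha256.Sum256(buf)
	return int64(binary.BigEndian.Uint64(digest[:8]) % wPoStProvingPeriod)
}

// nextStart mirrors nextProvingPeriodStart: the first epoch strictly greater
// than currEpoch that is congruent to offset modulo the proving period.
func nextStart(currEpoch, offset int64) int64 {
	currModulus := currEpoch % wPoStProvingPeriod
	var progress int64
	if currModulus >= offset {
		progress = currModulus - offset
	} else {
		progress = wPoStProvingPeriod - (offset - currModulus)
	}
	return currEpoch - progress + wPoStProvingPeriod
}

func main() {
	offset := assignOffset([]byte("t01000"), 100)
	start := nextStart(100, offset)
	// start lies in (100, 100+2880] and start % 2880 == offset.
	fmt.Println(offset, start, start%wPoStProvingPeriod == offset)
}
```

For example, nextStart(100, 50) = 100 - 50 + 2880 = 2930, while nextStart(100, 200) = 200: the offset boundary has not yet passed, so the very next occurrence qualifies.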
Storage Mining Cycle
Block miners should constantly be performing Proofs of SpaceTime using Election PoSt, and checking the output partial tickets to run Leader Election and determine whether they can propose a block at each epoch. Epochs are currently set to take around X seconds, in order to account for Election PoSt and network propagation around the world. The details of the mining cycle are defined here.
Active Miner Mining Cycle
In order to mine blocks on the Filecoin blockchain a miner must be running Block Validation at all times, keeping track of recent blocks received and the heaviest current chain (based on Expected Consensus).
With every new tipset, the miner can use their committed power to attempt to craft a new block.
For additional details around how consensus works in Filecoin, see Expected Consensus. For the purposes of this section, there is a consensus protocol (Expected Consensus) that guarantees a fair process for determining what blocks have been generated in a round, whether a miner is eligible to mine a block itself, and other rules pertaining to the production of some artifacts required of valid blocks (e.g. Tickets, ElectionPoSt).
Once the chain has caught up to the current head using ChainSync, the miner can begin the mining process. At a high level it is as follows (we go into more detail on epoch timing below):
- The node receives and transmits messages using the Message Syncer.
- At the same time it receives blocks:
  - Each block has an associated timestamp and epoch (the quantized time window in which it was crafted).
  - Blocks are validated as they come in, per block validation.
- After an epoch’s “cutoff”, the miner should take all the valid blocks received for this epoch and assemble them into tipsets according to Tipset validation rules.
- The miner then attempts to mine atop the heaviest tipset (as calculated with EC’s weight function) using its smallest ticket to run leader election:
  - The miner runs Leader Election using the most recent random output by a drand beacon.
  - If this yields a valid ElectionProof, the miner generates a new ticket and winning PoSt for inclusion in the block.
  - The miner then assembles a new block (see “block creation” below) and waits until this epoch’s quantized timestamp to broadcast it.
This process is repeated until either the Leader Election process yields a winning ticket (in EC) and the miner publishes a block or a new valid block comes in from the network.
At any height H, there are three possible situations:
- The miner is eligible to mine a block: they produce their block and propagate it. They then resume mining at the next height H+1.
- The miner is not eligible to mine a block but has received blocks: they form a Tipset with them and resume mining at the next height H+1.
- The miner is not eligible to mine a block and has received no blocks: prompted by their clock they run leader election again, incrementing the epoch number.
Anytime a miner receives new valid blocks, it should evaluate what is the heaviest Tipset it knows about and mine atop it.
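A minimal sketch of this per-epoch loop, modeling the three situations at a height H described above. All types and helpers here are stand-ins (not the actual node API), and the election check is a stub for Leader Election against the drand beacon output:

```go
package main

import "fmt"

// Stand-in types; the real node works with full blocks, tipsets and EC weights.
type block struct{ epoch int64 }
type tipset struct{ blocks []block }

// wonElection is a stub for Leader Election against the drand beacon output;
// here it simply lets every third epoch win.
func wonElection(epoch int64) bool { return epoch%3 == 0 }

// stepEpoch models one height H: eligible (mine and propagate), not eligible
// with blocks received (form a tipset from them), or not eligible with none
// (just advance the epoch on the clock).
func stepEpoch(epoch int64, received []block) (mined bool, head tipset) {
	if wonElection(epoch) {
		b := block{epoch: epoch} // assemble ticket, winning PoSt, block body
		return true, tipset{blocks: append(received, b)}
	}
	return false, tipset{blocks: received} // may be empty: resume at H+1
}

func main() {
	var minedCount int
	for epoch := int64(1); epoch <= 6; epoch++ {
		mined, head := stepEpoch(epoch, nil)
		if mined {
			minedCount++
		}
		fmt.Printf("epoch %d: mined=%v headBlocks=%d\n", epoch, mined, len(head.blocks))
	}
	fmt.Println("mined", minedCount, "blocks")
}
```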
The timing diagram above describes the sequence of block creation (“mining”), propagation and reception.
This sequence of events applies only when the node is in the CHAIN_FOLLOW syncing mode. Nodes in other syncing modes do not mine blocks.
The upper row represents the conceptual consumption channel, consisting of successive receiving periods Rx during which nodes validate incoming blocks.
The lower row is the conceptual production channel, made up of a period of mining M followed by a period of transmission Tx (which lasts long enough for blocks to propagate throughout the network). The lengths of the periods are not to scale.
The above diagram represents the important events within an epoch:
- Epoch boundary: change of the current epoch. Newly mined blocks belong to the new epoch and are timestamped accordingly.
- Epoch cutoff: blocks from the prior epoch propagated on the network are no longer accepted. Miners can form a new tipset to mine on.
In an epoch, blocks are received and validated during Rx up to the prior epoch’s cutoff. At the cutoff, the miner computes the heaviest tipset from the blocks received during Rx, and uses i