mod.rs
//! This module implements Fedimint's custom atomic broadcast abstraction. As
//! such, it is responsible for ordering serialized items in the form of byte
//! vectors. The broadcast is able to recover from a crash at any time via a
//! backup that it maintains in the server's [fedimint_core::db::Database]. In
//! addition, it stores the history of accepted items in the form of signed
//! session outcomes in the database as well, in order to catch up fellow
//! guardians that have been offline for a prolonged period of time.
//!
//! Though the broadcast depends on [fedimint_core] for [fedimint_core::PeerId],
//! [fedimint_core::encoding::Encodable] and [fedimint_core::db::Database],
//! it implements no consensus logic specific to Fedimint, which we will refer
//! to as Fedimint Consensus going forward. To the broadcast, a consensus item
//! is merely a vector of bytes without any further structure.
//!
//! # The journey of a ConsensusItem
//!
//! Let us sketch the journey of a [fedimint_core::epoch::ConsensusItem] into a
//! signed session outcome; a code sketch of the full loop follows the list.
//!
//! * The node that wants to order the item calls consensus_encode to serialize
//!   it and sends the resulting serialization to its running atomic broadcast
//!   instance via the mempool item sender.
//! * Every 250ms the broadcast's currently running session instance creates a
//!   new batch from its mempool and attaches it to a unit in the form of a
//!   UnitData::Batch. The size of a batch, and therefore the size of a single
//!   serialization, is limited to 10kB.
//! * The unit is then included in a [Message] and sent to the network layer via
//!   the outgoing message sender.
//! * The network layer receives the message, serializes it via consensus_encode
//!   and sends it to its peers, which in turn deserialize it via
//!   consensus_decode and relay it to their broadcast instances via their
//!   incoming message senders.
//! * The unit is added to the local subgraph of a common directed acyclic graph
//!   of units generated cooperatively by all peers for every session.
//! * As the local subgraph grows, the units within it are ordered, and so are
//!   the attached batches. As soon as our batch is ordered, the broadcast
//!   instance unpacks it and sends the serialization to Fedimint Consensus in
//!   the form of an ordered item.
//! * Fedimint Consensus then deserializes the item and either accepts it based
//!   on its current consensus state or discards it otherwise. Fedimint
//!   Consensus transmits its decision to its broadcast instance via the
//!   decision_sender and processes the next item.
//! * Assuming our item has been accepted, the broadcast instance appends its
//!   deserialization to the session outcome corresponding to the current
//!   session.
//! * Roughly every five minutes the session completes. Then the broadcast
//!   creates a threshold signature for the session outcome's header and saves
//!   both in the form of a signed session outcome in the local database.
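//!
//! A minimal sketch of this loop from the perspective of Fedimint Consensus,
//! assuming hypothetical channel handles `mempool_item_sender`,
//! `ordered_item_receiver` and `decision_sender` as well as hypothetical
//! `apply_to_state` and `Decision` items; the actual names and types in this
//! crate may differ:
//!
//! ```ignore
//! use fedimint_core::encoding::Encodable;
//!
//! // Submit side: serialize the item once and hand it to the running
//! // broadcast instance, which treats it as an opaque byte vector.
//! let mut serialization = Vec::new();
//! item.consensus_encode(&mut serialization)?;
//! mempool_item_sender.send(serialization).await?;
//!
//! // Consume side: receive the ordered items one by one, check whether
//! // they cause a state transition and report the decision back so the
//! // broadcast knows whether to record them in the session outcome.
//! while let Some(ordered_item) = ordered_item_receiver.recv().await {
//!     let decision = if apply_to_state(&ordered_item) {
//!         Decision::Accept
//!     } else {
//!         Decision::Discard
//!     };
//!     decision_sender.send(decision).await?;
//! }
//! ```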
//!
//! # Interplay with Fedimint Consensus
//!
//! As an item is only recorded in a session outcome if it has been accepted,
//! the decision has to be consistent across all correct nodes in order for
//! them to create identical session outcomes for every session. We introduce
//! this complexity in order to prevent a critical DoS vector where a client
//! submits conflicting items, like a double spend of an ecash note, to
//! different peers. If Fedimint Consensus were unable to discard the
//! conflicting items in such a way that they do not become part of the
//! broadcast's history, all of those items would need to be maintained on
//! disk indefinitely.
//!
//! Therefore it cannot be guaranteed that all broadcast instances return the
//! exact same stream of ordered items. However, if two correct peers obtain
//! two ordered items from their broadcast instances, they are guaranteed to
//! be in the same order. Furthermore, an ordered item is guaranteed to be
//! seen by all correct nodes if a correct peer accepts it. Those two
//! guarantees are sufficient to build consistent replicated state machines
//! like Fedimint Consensus on top of the broadcast. Such a state machine has
//! to accept an item if it changes the machine's state and should discard it
//! otherwise (see the sketch below). Let us consider the case of an ecash
//! note being double spent via the items A and B while one peer is offline.
//! First, item A is ordered and all correct peers include the note as spent
//! in their state. Therefore they also accept item A. Then, item B is ordered
//! and all correct nodes notice the double spend and make no changes to their
//! state. Now they can safely discard item B as it did not cause a state
//! transition. When the session completes, only item A is part of the
//! corresponding session outcome. When the offline peer comes back online, it
//! downloads the session outcome. The recovering peer will therefore only see
//! item A, but it arrives at the same state as its peers at the end of the
//! session regardless. However, it did so by processing one less ordered item
//! and without realizing that a double spend had occurred.
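//!
//! The following self-contained sketch captures this accept-if-state-changed
//! rule for the double spend example; the `SpentNotes` type and the note
//! representation are illustrative rather than Fedimint's actual data
//! structures:
//!
//! ```
//! use std::collections::BTreeSet;
//!
//! enum Decision {
//!     Accept,
//!     Discard,
//! }
//!
//! // An illustrative replicated state: the set of already spent notes.
//! struct SpentNotes(BTreeSet<[u8; 32]>);
//!
//! impl SpentNotes {
//!     fn process(&mut self, note: [u8; 32]) -> Decision {
//!         // `insert` returns true only if the note was not yet spent,
//!         // i.e. only if this item causes a state transition.
//!         if self.0.insert(note) {
//!             Decision::Accept
//!         } else {
//!             Decision::Discard
//!         }
//!     }
//! }
//!
//! let mut state = SpentNotes(BTreeSet::new());
//! let note = [0; 32];
//!
//! // Item A spends the note and is accepted ...
//! assert!(matches!(state.process(note), Decision::Accept));
//! // ... while the conflicting item B causes no state transition and is
//! // safely discarded by all correct nodes.
//! assert!(matches!(state.process(note), Decision::Discard));
//! ```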

pub mod backup;
pub mod data_provider;
pub mod finalization_handler;
pub mod keychain;
pub mod network;
pub mod spawner;

use aleph_bft::NodeIndex;
use fedimint_core::encoding::{Decodable, Encodable};
use fedimint_core::PeerId;
/// This keychain implements naive threshold Schnorr signatures over secp256k1.
/// The broadcast uses this keychain to sign messages for peers and to create
/// the threshold signatures for the signed session outcome.
pub use keychain::Keychain;
use serde::{Deserialize, Serialize};

/// The majority of these messages need to be delivered to the intended
/// [Recipient] in order for the broadcast to make progress. However, the
/// broadcast does not assume a reliable network layer and implements all
/// necessary retry logic. Therefore, the caller can discard a message
/// immediately if its intended recipient is offline.
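///
/// A minimal sketch of a conforming network layer send loop; the
/// `outgoing_message_receiver` handle and the `broadcast` and `try_send_to`
/// helpers are hypothetical:
///
/// ```ignore
/// while let Some((message, recipient)) = outgoing_message_receiver.recv().await {
///     match recipient {
///         // Deliver to all peers; an unreachable peer can be skipped,
///         // as the broadcast itself resends the information via fresh
///         // messages until it makes progress.
///         Recipient::Everyone => broadcast(&message).await,
///         // Deliver to a single peer; if the peer is offline, the
///         // message can be dropped immediately instead of being buffered.
///         Recipient::Peer(peer_id) => try_send_to(peer_id, &message).await,
///     }
/// }
/// ```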
#[derive(Clone, Debug, Encodable, Decodable, Serialize, Deserialize)]
pub struct Message(Vec<u8>);

/// This enum defines the intended destination of a [Message].
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Recipient {
    Everyone,
    Peer(PeerId),
}

/// Converts an aleph-bft [NodeIndex] into the corresponding [PeerId].
pub fn to_peer_id(node_index: NodeIndex) -> PeerId {
    u16::try_from(usize::from(node_index))
        .expect("The node index corresponds to a valid PeerId")
        .into()
}

/// Converts a [PeerId] into the corresponding aleph-bft [NodeIndex].
pub fn to_node_index(peer_id: PeerId) -> NodeIndex {
    usize::from(u16::from(peer_id)).into()
}
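
// A round-trip sketch for the two conversions above: converting a PeerId to a
// NodeIndex and back is lossless, as both types wrap the same peer index.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn peer_id_node_index_roundtrip() {
        for raw in [0u16, 1, 7, u16::MAX] {
            let peer_id = PeerId::from(raw);
            assert_eq!(to_peer_id(to_node_index(peer_id)), peer_id);
        }
    }
}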