Introduction
Welcome to the Miden node documentation.
This book provides two separate guides aimed at node operators and developers looking to contribute to the node respectively. Each guide is standalone, but developers should also read through the operator guide as it provides some additional context.
At present, the Miden node is the central hub responsible for receiving user transactions and forming them into new blocks for a Miden network. As Miden decentralizes, the node will morph into the official reference implementation(s) of the various components required by a fully p2p network.
Each Miden network therefore has exactly one node receiving transactions and creating blocks. The node provides a gRPC interface for users, dApps, wallets and other clients to submit transactions and query the state.
Feedback
Please report any issues, ask questions or leave feedback in the node repository here.
This includes outdated, misleading, incorrect or just plain confusing information :)
Operator Guide
Welcome to the Miden
node operator guide which should cover everything you need to successfully run and maintain a
Miden node.
You can report any issues, ask questions or leave feedback at our project repo here.
Node architecture
The node itself consists of four distributed components: store, block-producer, network transaction builder, and RPC.
The components can be run on separate instances when optimised for performance, but can also be run as a single process for convenience. The exception to this is the network transaction builder which can currently only be run as part of the single process. At the moment both of Miden's public networks (testnet and devnet) are operating in single process mode.
The inter-component communication is done using a gRPC API which is assumed trusted; in other words, it must not be publicly exposed. External communication is handled by the RPC component with a separate, external-only gRPC API.
RPC
The RPC component provides a public gRPC API with which users can submit transactions and query chain state. Queries are validated and then proxied to the store. Similarly, transaction proofs are verified before submitting them to the block-producer. This takes a non-trivial amount of load off the block-producer.
This is the only external facing component and it essentially acts as a shielding proxy that prevents bad requests from impacting block production.
It can be trivially scaled horizontally, e.g. by placing a load-balancer in front of multiple RPC instances.
Store
The store is responsible for persisting the chain state. It is effectively a database which holds the current state of the chain, wrapped in a gRPC interface which allows querying this state and submitting new blocks.
It expects that this gRPC interface is only accessible internally i.e. there is an implicit assumption of trust.
Block-producer
The block-producer is responsible for aggregating received transactions into blocks and submitting them to the store.
Transactions are placed in a mempool and are periodically sampled to form batches of transactions. These batches are proved, and then periodically aggregated into a block. This block is then proved and committed to the store.
Proof generation in production is typically outsourced to a remote machine with appropriate resources. For convenience, it is also possible to perform proving in-process. This is useful when running a local node for test purposes.
Network transaction builder
The network transaction builder monitors the mempool for network notes, and creates transactions consuming them. We call these network transactions, and at present this is the only entity allowed to create such transactions. This restriction will be lifted in the future, but for now this component must be enabled in order to support network transactions.
The mempool is monitored via a gRPC event stream served by the block-producer.
Installation
We provide Debian packages for official releases of the node software. Alternatively, it can also be installed from source on most systems using the Rust package manager cargo.
Debian package
Official Debian packages are available on our releases page. Both amd64 and arm64 packages are available.
Note that the packages include a systemd service which is disabled by default.
To install, download the desired release's .deb package and checksum files. Install using
sudo dpkg -i $package_name.deb
You can (and should) verify the checksum prior to installation using a SHA256 utility. This differs from platform to platform, but on most Linux distros:
sha256sum --check $checksum_file.deb.checksum
can be used so long as the checksum file and the package file are in the same folder.
Install using cargo
Install Rust version 1.88 or greater using the official Rust installation instructions.
Depending on the platform, you may need to install additional libraries. For example, on Ubuntu 22.04 the following command ensures that all required libraries are installed.
sudo apt install llvm clang bindgen pkg-config libssl-dev libsqlite3-dev
Install the latest node binary:
cargo install miden-node --locked
This will install the latest official version of the node. You can install a specific version x.y.z using
cargo install miden-node --locked --version x.y.z
You can also use cargo to compile the node from source if for some reason you need a specific git revision. Note that since these aren't official releases we cannot provide much support for any issues you run into, so consider this for advanced use only. The incantation is a little different as you'll be targeting our repo instead:
# Install from a specific branch
cargo install --locked --git https://github.com/0xMiden/miden-node miden-node --branch <branch>
# Install a specific tag
cargo install --locked --git https://github.com/0xMiden/miden-node miden-node --tag <tag>
# Install a specific git revision
cargo install --locked --git https://github.com/0xMiden/miden-node miden-node --rev <git-sha>
More information on the various cargo install options can be found here.
Updating
warning
We currently have no backwards compatibility guarantees. This means updating your node is destructive - your existing chain will not work with the new version. This will change as our protocol and database schema mature and settle.
Updating the node to a new version is as simple as re-running the install process and repeating the bootstrapping instructions.
Configuration and Usage
As outlined in the Architecture chapter, the node consists of several components which can be run separately or as a single bundled process. At present, the recommended way to operate a node is in bundled mode, which is what this guide focuses on. Operating the components separately is very similar and should be relatively straightforward to derive from these instructions.
This guide focuses on basic usage. To discover more advanced options we recommend exploring the various help menus, which can be accessed by appending --help to any of the commands.
Bootstrapping
The first step in starting a new Miden network is to initialize the genesis block data. This is a one-off operation using the bootstrap command, and by default the genesis block will contain a single faucet account.
# Create a folder to store the node's data.
mkdir data
# Bootstrap the node.
#
# This creates the node's database and initializes it with the genesis data.
#
# The genesis block currently contains a single public faucet account. The
# secret for this account is stored in the `<accounts-directory/account.mac>`
# file. This file is not used by the node and should instead be used wherever
# you intend to operate this faucet account.
#
# For example, you could operate a public faucet using our faucet reference
# implementation whose operation is described in a later section.
miden-node bundled bootstrap \
--data-directory data \
--accounts-directory .
You can also configure the account and asset data in the genesis block by passing in a TOML configuration file. This is particularly useful for setting up test scenarios without requiring multiple rounds of transactions to achieve the desired state. Any account secrets will be written to disk inside the provided --accounts-directory path in the process.
miden-node bundled bootstrap \
--data-directory data \
--accounts-directory . \
--genesis-config-file genesis.toml
The genesis configuration file should contain at least one faucet, and optionally, wallet definitions with assets, for example:
# The UNIX timestamp of the genesis block. It will influence the hash of the genesis block.
timestamp = 1717344256
# Defines the format of the block protocol to use for the genesis block.
version = 1
[[fungible_faucet]]
# The token symbol to use for the token
symbol = "FUZZY"
# Number of decimals your token will have; this effectively defines the fixed-point accuracy.
decimals = 6
# Total supply, in _base units_
#
# e.g. a max supply of `1e15` _base units_ and decimals set to `6`, will yield you a total supply
# of `1e15/1e6 = 1e9` `FUZZY`s.
max_supply = 1_000_000_000_000_000
# Storage mode of the faucet account.
storage_mode = "public"
[[wallet]]
# List of all assets the account should hold. Each token type _must_ have a corresponding faucet.
# The number is in _base units_, e.g. specifying `999 FUZZY` at 6 decimals would become
# `999_000_000`.
assets = [{ amount = 999_000_000, symbol = "FUZZY" }]
# Storage mode of the wallet account.
storage_mode = "private"
# The code of the account can be updated or not.
# has_updatable_code = false # default value
Operation
Start the node with the desired public gRPC server address.
miden-node bundled start \
--data-directory data \
--rpc.url http://0.0.0.0:57291
Systemd
Our Debian packages install a systemd service which operates the node in bundled mode. You'll still need to run the bootstrapping process before the node can be started.
You can inspect the service file with systemctl cat miden-node, or alternatively you can see it in our repository in the packaging folder. For the bootstrapping process, be sure to specify the data-directory expected by the systemd file.
Environment variables
Most configuration options can also be set using environment variables as an alternative to providing the values via the command-line. This is useful for certain deployment options like docker or systemd, where they can be easier to define or inject than changing the underlying command-line options.
These are especially convenient where multiple different configuration profiles are used. Write the environment variables to some specific profile.env file and load it as part of the node command:
source profile.env && miden-node <...>
This works well on Linux and MacOS, but Windows requires some additional scripting unfortunately.
See the .env files in each of the binary crates' directories for a list of all available environment variables.
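For example, a minimal profile.env could look like the following; it is purely illustrative and only uses variables mentioned elsewhere in this guide, so consult the .env files above for the authoritative list:
# profile.env - example configuration profile (illustrative)
MIDEN_NODE_ENABLE_OTEL=true
RUST_LOG=info,block-producer=debug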
Monitoring & telemetry
We provide logging to stdout and an optional OpenTelemetry exporter for our traces.
OpenTelemetry exporting can be enabled by specifying --enable-otel via the command-line or the MIDEN_NODE_ENABLE_OTEL environment variable when operating the node.
We do not export OpenTelemetry logs or metrics. Our end goal is to derive these from our tracing information. This approach is known as wide events, structured logs, or Observability 2.0.
What we're exporting are traces, which consist of spans (covering a period of time) and events (something that happened at a specific instant in time). These are extremely useful for debugging distributed systems - even though miden is still centralized, the node components are distributed.
OpenTelemetry provides a Span Metrics Converter which can be used to convert our traces into more conventional metrics.
What gets traced
We assign a unique trace (aka root span) to each RPC request, batch build, and block build process.
Span and attribute naming is unstable and should not be relied upon. This also means changes here will not be considered breaking; however, we will do our best to document them.
RPC request/response
Not yet implemented.
Block building
This trace covers the building, proving and submission of a block.
Span tree
block_builder.build_block
┝━ block_builder.select_block
│ ┝━ mempool.lock
│ ┕━ mempool.select_block
┝━ block_builder.get_block_inputs
│ ┝━ block_builder.summarize_batches
│ ┕━ store.client.get_block_inputs
│ ┕━ store.rpc/GetBlockInputs
│ ┕━ store.server.get_block_inputs
│ ┝━ validate_nullifiers
│ ┝━ read_account_ids
│ ┝━ validate_notes
│ ┝━ select_block_header_by_block_num
│ ┝━ select_note_inclusion_proofs
│ ┕━ select_block_headers
┝━ block_builder.prove_block
│ ┝━ execute_program
│ ┕━ block_builder.simulate_proving
┝━ block_builder.inject_failure
┕━ block_builder.commit_block
┝━ store.client.apply_block
│ ┕━ store.rpc/ApplyBlock
│ ┕━ store.server.apply_block
│ ┕━ apply_block
│ ┝━ select_block_header_by_block_num
│ ┕━ update_in_memory_structs
┝━ mempool.lock
┕━ mempool.commit_block
┕━ mempool.revert_expired_transactions
┕━ mempool.revert_transactions
Batch building
This trace covers the building and proving of a batch.
Span tree
batch_builder.build_batch
┝━ batch_builder.wait_for_available_worker
┝━ batch_builder.select_batch
│ ┝━ mempool.lock
│ ┕━ mempool.select_batch
┝━ batch_builder.get_batch_inputs
│ ┕━ store.client.get_batch_inputs
┝━ batch_builder.propose_batch
┝━ batch_builder.prove_batch
┝━ batch_builder.inject_failure
┕━ batch_builder.commit_batch
┝━ mempool.lock
┕━ mempool.commit_batch
Verbosity
We log important spans and events at info level or higher, which is also the default log level. Changing this level should rarely be required - let us know if you're missing information that should be at info.
The available log levels are trace, debug, info (default), warn, and error, which can be configured using the RUST_LOG environment variable e.g.
export RUST_LOG=debug
The verbosity can also be specified by component (when running them as a single process):
export RUST_LOG=warn,block-producer=debug,rpc=error
The above would set the general level to warn, and the block-producer and rpc components would be overridden to debug and error respectively. Though as mentioned, it should be unusual to do this.
Configuration
The OpenTelemetry trace exporter is enabled by adding the --enable-otel flag to the node's start command:
miden-node bundled start --enable-otel
The exporter can be configured using environment variables as specified in the official documentation.
Note: we only support gRPC as the export protocol.
Example: Honeycomb configuration
This is based off Honeycomb's OpenTelemetry setup guide.
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io:443 \
OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key" \
miden-node bundled start --enable-otel
Honeycomb queries, triggers and board examples
Example Queries
Here are some useful Honeycomb queries to help monitor your Miden node:
Block building performance:
VISUALIZE
HEATMAP(duration_ms) AVG(duration_ms)
WHERE
name = "block_builder.build_block"
GROUP BY block.number
ORDER BY block.number DESC
LIMIT 100
Batch processing latency:
VISUALIZE
HEATMAP(duration_ms) AVG(duration_ms) P95(duration_ms)
WHERE
name = "batch_builder.build_batch"
GROUP BY batch.id
LIMIT 100
Block proving failures:
VISUALIZE
COUNT
WHERE
name = "block_builder.build_block"
AND status = "error"
CALCULATE RATE
Transaction volume by block:
VISUALIZE
MAX(transactions.count)
WHERE
name = "block_builder.build_block"
GROUP BY block.number
ORDER BY block.number DESC
LIMIT 100
RPC request rate by endpoint:
VISUALIZE
COUNT
WHERE
name contains "rpc"
GROUP BY name
RPC latency by endpoint:
VISUALIZE
AVG(duration_ms) P95(duration_ms)
WHERE
name contains "rpc"
GROUP BY name
RPC errors by status code:
VISUALIZE
COUNT
WHERE
name contains "rpc"
GROUP BY status_code
Example Triggers
Create triggers in Honeycomb to alert you when important thresholds are crossed:
Slow block building:
- Query:
VISUALIZE
AVG(duration_ms)
WHERE
name = "block_builder.build_block"
- Trigger condition:
AVG(duration_ms) > 30000
(adjust based on your expected block time)
- Description: Alert when blocks take too long to build (more than 30 seconds on average)
High failure rate:
- Query:
VISUALIZE
COUNT
WHERE
name = "block_builder.build_block" AND error = true
- Trigger condition:
COUNT > 100 WHERE error = true
- Description: Alert when more than 100 block builds are failing
Advanced investigation with BubbleUp
To identify the root cause of performance issues or errors, use Honeycomb's BubbleUp feature:
- Create a query for a specific issue (e.g., high latency for block building)
- Click on a specific high-latency point in the visualization
- Use BubbleUp to see which attributes differ significantly between normal and slow operations
- Inspect the related spans in the trace to pinpoint the exact step causing problems
This approach helps identify patterns like:
- Which types of transactions are causing slow blocks
- Which specific operations within block/batch processing take the most time
- Correlations between resource usage and performance
- Common patterns in error cases
Versioning
We follow the semver standard for versioning.
The following are considered part of the node's public API; changes to them will therefore be considered breaking.
- RPC gRPC specification (note that this excludes internal inter-component gRPC schemas).
- Node configuration options.
- Database schema changes which cannot be reverted.
- Large protocol and behavioral changes.
We intend to include our OpenTelemetry trace specification in this once it stabilizes.
We will also call out non-breaking behavioral changes in our changelog and release notes.
Developer Guide
Welcome to the developer guide for the miden node :)
This is intended to serve as a basic introduction to the codebase as well as covering relevant concepts and recording architectural decisions.
This is not intended for dApp developers or users of the node, but for development of the node itself.
It is also a good idea to familiarise yourself with the operator manual.
Living documents go stale - the code is the final arbiter of truth.
If you encounter any outdated, incorrect or misleading information, please open an issue.
Contributing to Miden Node
First off, thanks for taking the time to contribute!
We want to make contributing to this project as easy and transparent as possible.
Before you begin..
Start by commenting your interest in the issue you want to address - this lets us assign the issue to you and prevents multiple people from repeating the same work. This also lets us add any additional information or context you may need.
We use the next branch as our active development branch. This means your work should fork off the next branch (and not main).
Typos and low-effort contributions
We don't accept PRs for typo fixes as these are frequently low-effort submissions from AI "contributors". If you find typos please open an issue instead.
Commits
Try to keep your commit names and messages related to the content. This provides reviewers with context if they need to step through your changes by commit.
This does not need to be perfect because we generally squash merge a PR - the commit naming is therefore only relevant for the review process.
Pre-PR checklist
Before submitting a PR, ensure that you're up to date by rebasing onto next, and that tests and lints pass by running:
# Runs the various lints
make lint
# Runs the test suite
make test
Post-PR
Please don't rebase your branch once the PR has been opened. In other words - only append new commits. This lets reviewers have a consistent view of your changes for follow-up reviews. Reviewers may request a rebase once they're ready in order to merge your changes in.
Any contributions you make will be under the MIT Software License
In short, when you submit code changes, your submissions are understood to be under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
Navigating the codebase
The code is organised using a Rust workspace with separate crates for the node and remote prover binaries, a crate for each node component, a couple of gRPC-related codegen crates, and a catch-all utilities crate.
The primary artifacts are the node and remote prover binaries. The library crates are not intended for external usage, but instead simply serve to enforce code organisation and decoupling.
Crate | Description |
---|---|
node | The node executable. Configure and run the node and its components. |
remote-prover | Remote prover executables. Includes workers and proxies. |
remote-prover-client | Remote prover client implementation. |
block-producer | Block-producer component implementation. |
store | Store component implementation. |
ntx-builder | Network transaction builder component implementation. |
rpc | RPC component implementation. |
proto | Contains and exports all protobuf definitions. |
rpc-proto | Contains the RPC protobuf definitions. Currently this is an awkward clone of proto because we re-use the definitions from the internal protobuf types. |
utils | Variety of utility functionality. |
test-macro | Provides a procedural macro to enable tracing in tests. |
Note: miden-base is an important dependency which contains the core Miden protocol definitions, e.g. accounts, notes, transactions etc.
Monitoring
Developer-level overview of how we aim to use tracing and open-telemetry to provide monitoring and telemetry for the node.
Please begin by reading through the monitoring operator guide as this will provide some much needed context.
Approach and philosophy
We want to trace important information such that we can quickly recognise issues (monitoring & alerting) and identify their cause. Conventionally this has been achieved via metrics and logs respectively; a more modern approach is to use wide events/traces and post-process these instead. We're using the OpenTelemetry standard for this; however, we only use the trace pillar and avoid metrics and logs.
We wish to emit these traces without compromising on code quality and readability. This is also a downside of including metrics - these are usually emitted inline with the code, causing noise and obscuring the business logic. Ideally we want to rely almost entirely on tracing::#[instrument] to create spans, as these live outside the function body.
There are of course exceptions to the rule - usually the root span itself is created manually, e.g. a new root span for each block building iteration. Inner spans should ideally keep to #[instrument] where possible.
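As a rough sketch of the preferred style (the function and field names below are illustrative, not taken from the node's codebase):
use tracing::instrument;

// The span is declared on the function signature, keeping the body free of telemetry noise.
// `skip_all` avoids recording every argument; `fields` records only what we care about.
#[instrument(name = "example.apply_block", skip_all, fields(block.number = block_num))]
fn apply_block(block_num: u32, block_data: &[u8]) -> Result<(), std::io::Error> {
    // ... business logic only ...
    let _ = block_data;
    Ok(())
}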
Relevant crates
We've attempted to lock most of the OpenTelemetry crates behind our own abstractions in the utils crate. There are a lot of these crates and it can be difficult to keep them all separate when writing new code. We also hope this will provide a more consistent result as we build out our monitoring.
tracing is the de facto standard for logging and tracing within the Rust ecosystem. OpenTelemetry has decided to avoid fracturing the ecosystem and instead attempts to bridge between tracing and the OpenTelemetry standard insofar as possible. All this to say that there are some rough edges where the two combine - this should improve over time.
crate | description |
---|---|
tracing | Emits tracing spans and events. |
tracing-subscriber | Provides the conventional tracing stdout logger (no interaction with OpenTelemetry). |
tracing-forest | Logs span trees to stdout. Useful to visualize span relations, but cannot trace across RPC boundaries as it doesn't understand remote tracing context. |
tracing-opentelemetry | Bridges the gaps between tracing and the OpenTelemetry standard. |
opentelemetry | Defines core types and concepts for OpenTelemetry. |
opentelemetry-otlp | gRPC exporter for OpenTelemetry traces. |
opentelemetry_sdk | Provides the OpenTelemetry abstractions for metrics, logs and traces. |
opentelemetry-semantic-conventions | Constants for naming conventions as per OpenTelemetry standard. |
Important concepts
OpenTelemetry standards & documentation
There is a lot. You don't need all of it - look things up as and when you stumble into confusion.
It is probably worth reading through the naming conventions to get a sense of style.
Footguns and common issues
tracing requires data to be known statically, e.g. you cannot add span attributes dynamically. tracing-opentelemetry provides a span extension trait which works around this limitation - however, this dynamic information is only visible to the OpenTelemetry processing, i.e. tracing_subscriber won't see it at all.
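For illustration, recording a dynamic attribute via the extension trait looks roughly like this (assuming the tracing-opentelemetry layer is installed; the attribute name is made up):
use tracing_opentelemetry::OpenTelemetrySpanExt;

fn record_block_number(block_num: i64) {
    // Only the OpenTelemetry layer sees this attribute; plain tracing subscribers
    // (e.g. the stdout logger) will not.
    tracing::Span::current().set_attribute("block.number", block_num);
}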
In general, you'll find that tracing subscribers are blind to any extensions or OpenTelemetry-specific concepts. The reverse is of course not true because OpenTelemetry is integrating with tracing.
Another pain point is error stacks - or rather the lack thereof. #[tracing::instrument(err)] correctly marks the span as an error, but unfortunately the macro only uses the Display or Debug implementation of the error. This means you are missing the error reports entirely. tracing_opentelemetry reuses the stringified error data provided by tracing, so currently there is no work-around for this. Using Debug via ?err at least shows some information, but one still misses the actual error messages, which is quite bad.
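A minimal example of the limitation (illustrative function; the behaviour is the same for any error type):
use tracing::instrument;

// `err` marks the span as an error when the function returns `Err`, but only the
// error's Display output is recorded (or Debug, via `err(Debug)`) - the underlying
// error chain / report is lost.
#[instrument(err)]
fn load_state(path: &str) -> Result<Vec<u8>, std::io::Error> {
    std::fs::read(path)
}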
Manually instrumenting code (i.e. without #[instrument]) can be rather error prone because async calls must be manually instrumented each time, and non-async code also requires holding the span.
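For comparison, a manual-instrumentation sketch (names are illustrative) showing both cases:
use tracing::{info_span, Instrument};

async fn process() {
    // Async: each future has to be explicitly instrumented, which is easy to forget.
    fetch_inputs()
        .instrument(info_span!("example.fetch_inputs"))
        .await;

    // Sync: the guard must be held for the duration of the work the span should cover
    // (and must not be held across an .await).
    let _guard = info_span!("example.compute").entered();
    // ... work covered by the span ...
}

async fn fetch_inputs() {}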
Distributed context
We track traces across our components by injecting the parent span ID into the gRPC client's request metadata. The server side then extracts this and uses this as the parent span ID for its processing.
This is an OpenTelemetry concept - conventional tracing cannot follow these relations.
Read more in the official OpenTelemetry documentation.
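As an illustrative sketch of the client-side injection (assuming a global text-map propagator has been configured; the real node code differs in its details):
use std::collections::HashMap;

use opentelemetry::global;
use tracing_opentelemetry::OpenTelemetrySpanExt;

// Serialize the current span's OpenTelemetry context into key/value pairs which can
// then be copied into the outgoing gRPC request's metadata. The server side does the
// reverse: it extracts the context and uses it as the parent of its own span.
fn current_trace_context() -> HashMap<String, String> {
    let mut carrier = HashMap::new();
    let context = tracing::Span::current().context();
    global::get_text_map_propagator(|propagator| {
        propagator.inject_context(&context, &mut carrier);
    });
    carrier
}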
Choosing spans
A root span should represent a set of operations that belong together. It also shouldn't live forever as span information is usually only sent once the span closes i.e. a root span around the entire node makes no sense as the operation runs forever.
A good convention to follow is creating child spans for timing information you may want when debugging a failure or slow operation. As an example, it may make sense to instrument a mutex locking function to visualize the contention on it, or to separate the database file IO from the sqlite statement creation. Essentially, operations whose timings you would otherwise consider logging should be separate spans. While you may find this changes the code you might otherwise write, we've found this actually results in fairly good structure since it follows your business logic sense.
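A sketch of the mutex example (the types and names are illustrative, not the node's actual mempool):
use std::sync::{Mutex, MutexGuard};

use tracing::instrument;

struct Mempool;

struct SharedMempool(Mutex<Mempool>);

impl SharedMempool {
    // A dedicated span around lock acquisition makes contention on the mutex visible
    // in traces without adding any logging at the call sites.
    #[instrument(name = "example.mempool.lock", skip_all)]
    fn lock(&self) -> MutexGuard<'_, Mempool> {
        self.0.lock().expect("mempool mutex poisoned")
    }
}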
Inclusions and naming conventions
Where possible, attempt to find and use the naming conventions specified by the standard, ideally via the opentelemetry-semantic-conventions crate.
Include information you'd want to see when debugging - make life easy for your future self looking at data at 3AM on a Saturday. Also consider what information may be useful when correlating data e.g. client IP.
Node components
The node is split into four distinct components that communicate via gRPC. See the Architecture chapter of the Operator guide for an overview of each component.
The following sections will describe the inner architecture of each component.
RPC Component
This is by far the simplest component. Essentially this is a thin gRPC server which proxies all requests to the store and block-producer components.
Its main function is to pre-validate all requests before sending them on. This means malformed or nonsensical requests get rejected before reaching the store and block-producer, reducing their load. Notably this also includes verifying the proofs of submitted transactions. This allows the block-producer to skip proof verification (it trusts the RPC component), reducing the load on this critical component.
RPC Versioning
The RPC server enforces version requirements against connecting clients that provide the HTTP ACCEPT header. When this header is provided, its value must follow this format: application/vnd.miden.0.9.0+grpc.
If there is a mismatch in version, clients will encounter an error while executing gRPC requests against the RPC server with the following details:
- gRPC status code: 3 (Invalid Argument)
- gRPC message: Missing required ACCEPT header
The server will reject any version that does not match its own major and minor version. This behaviour will change after v1.0.0, at which point only the major version will be taken into account.
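For illustration, a Rust client built on the tonic gRPC library could attach the header like this (the function is a hypothetical helper; the version string must match the server's major.minor version):
use tonic::Request;

// Wrap a request payload and attach the version header expected by the RPC server.
fn with_accept_header<T>(payload: T) -> Request<T> {
    let mut request = Request::new(payload);
    request.metadata_mut().insert(
        "accept",
        "application/vnd.miden.0.9.0+grpc".parse().expect("valid header value"),
    );
    request
}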
Store component
This component persists the chain state in a sqlite database. It also stores each block's raw data as a file.
Merkle data structures are kept in-memory and are rebuilt on startup. Other data like account, note and nullifier information is always read from disk. We will need to revisit this in the future, but for now this is performant enough.
Migrations
We have database migration support in place but don't actively use it yet. There is only the latest schema, and we reset chain state (aka nuke the existing database) on each release.
Note that the migration logic includes both a schema number and a hash based on the SQL schema. These are both checked on node startup to ensure that any existing database matches the expected schema. If you're seeing database failures on startup, it's likely that you created the database before making schema changes, resulting in different schema hashes.
Architecture
The store consists mainly of a gRPC server which answers requests from the RPC and block-producer components, as well as new block submissions from the block-producer.
A lightweight background process performs database query optimisation by analysing database queries and statistics.
Block Producer Component
The block-producer is responsible for ordering transactions into batches, and batches into blocks, and creating the proofs for these. Proving is usually outsourced to a remote prover but can be done locally if throughput isn't essential, e.g. for test purposes on a local node.
It hosts a single gRPC endpoint to which the RPC component can forward new transactions.
The core of the block-producer revolves around the mempool which forms a DAG of all in-flight transactions and batches. It also ensures all transaction invariants are upheld, e.g. that the account's current state matches the transaction's initial state, that all input notes are valid and unconsumed, and that the transaction hasn't expired.
Batch production
Transactions are selected from the mempool periodically to form batches. This batch is then proven and submitted back to the mempool where it can be included in a block.
Block production
Proven batches are selected from the mempool periodically to form the next block. The block is then proven and committed to the store. At this point all transactions and batches in the block are removed from the mempool as committed.
Transaction lifecycle
- Transaction arrives at RPC component
- Transaction proof is verified
- Transaction arrives at block-producer
- Transaction delta is verified
- Does the account state match
- Do all input notes exist and are unconsumed
- Output notes are unique
- Transaction is not expired
- Wait until all parent transactions are in a batch
- Be selected as part of a batch
- Proven as part of a batch
- Wait until all parent batches are in a block
- Be selected as part of a block
- Committed
Note that it's possible for transactions to be rejected/dropped even after they've been accepted, at any point in the above lifecycle (which effectively shows the happy path). This can occur if:
- The transaction expires before being included in a block.
- Any parent transaction is dropped (which will revert the state, invalidating child transactions).
- It causes proving or any part of block/batch creation to fail. This is a fail-safe against unforeseen bugs, removing problematic (but potentially valid) transactions from the mempool to prevent outages.
Network Transaction Builder Component
The network transaction builder (NTB) is responsible for driving the state of network accounts.
What is a network account
Network accounts are a special type of fully public account which contains no authentication and whose state can therefore be updated by anyone (in theory). Such accounts are required when publicly mutable state is needed.
The issue with publicly mutable state is that transactions against an account must be sequential and require the previous account commitment in order to create the transaction proof. This conflicts with Miden's client side proving and concurrency model since users would race each other to submit transactions against such an account.
Instead the solution is to have the network be responsible for driving the account state forward, and users can interact with the account using notes. Notes don't require a specific ordering and can be created concurrently without worrying about conflicts. We call these network notes and they always target a specific network account.
A network transaction is a transaction which consumes and applies a set of network notes to a network account. There is nothing special about the transaction itself - it can only be identified by the fact that it updates the state of a network account.
Limitations
At present, we artificially limit this such that only this component may create transactions against network accounts. This is enforced at the RPC layer by disallowing network transactions entirely in that component. The NTB skirts around this by submitting its transactions directly to the block-producer.
This limitation is there to avoid complicating the NTB's implementation while the protocol and definitions of network accounts, notes and transactions mature.
Implementation
On startup the NTB loads all unconsumed network notes from the store. From there it monitors the mempool for events which would impact network account state. This communication takes the form of a gRPC event stream.
The NTB periodically selects an arbitrary network account with available network notes and creates a network transaction for it.
The block-producer remains blissfully unaware of network transactions. From its perspective a network transaction is simply the same as any other.
Oddities and FAQs
Common questions and head scratchers.
Chain MMR
The chain MMR always lags behind the blockchain by one block because otherwise there would be a cyclic dependency between the chain MMR and the block hash:
- chain MMR contains each block's hash as a leaf
- block hash calculation includes the chain MMR's root
To work around this, the inclusion of a block hash in the chain MMR is delayed by one block. Or put differently, block N is responsible for inserting block N-1 into the chain MMR. This does not break blockchain linkage because the block header (and therefore its hash) still includes the previous block's hash.
gRPC Reference
This is a reference of the Node's public RPC interface. It consists of a gRPC API which may be used to submit transactions and query the state of the blockchain.
The gRPC service definition can be found in the Miden node's proto directory in the rpc.proto file.
- CheckNullifiers
- CheckNullifiersByPrefix
- GetAccountDetails
- GetAccountProofs
- GetBlockByNumber
- GetBlockHeaderByNumber
- GetNotesById
- SubmitProvenTransaction
- SyncNotes
- SyncState
- Status
CheckNullifiers
Request proofs for a set of nullifiers.
CheckNullifiersByPrefix
Request nullifiers filtered by prefix and created after some block number.
The prefix is used to obscure the caller's interest in a specific nullifier. Currently only 16-bit prefixes are supported.
GetAccountDetails
Request the latest state of an account.
GetAccountProofs
Request state proofs for accounts, including specific storage slots.
GetBlockByNumber
Request the raw data for a specific block.
GetBlockHeaderByNumber
Request a specific block header and its inclusion proof.
GetNotesById
Request a set of notes.
SubmitProvenTransaction
Submit a transaction to the network.
SyncNotes
Iteratively sync data for a given set of note tags.
Clients specify the note tags of interest and the block height from which to search. The response returns the next block containing notes matching the provided tags.
The response includes each note's metadata and inclusion proof.
A basic note sync can be implemented by repeatedly requesting the previous response's block until reaching the tip of the chain.
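A rough sketch of that loop (the response shape and client trait here are hypothetical stand-ins, not the generated gRPC types):
// Hypothetical types, purely to illustrate the sync loop.
struct SyncNotesResponse {
    block_num: u32,
    chain_tip: u32,
    // note metadata and inclusion proofs omitted
}

trait NoteSyncClient {
    fn sync_notes(&mut self, note_tags: &[u32], from_block: u32) -> SyncNotesResponse;
}

fn sync_to_tip(client: &mut impl NoteSyncClient, note_tags: &[u32]) {
    let mut from_block = 0;
    loop {
        let response = client.sync_notes(note_tags, from_block);
        // ... persist the returned note metadata and inclusion proofs here ...
        if response.block_num >= response.chain_tip {
            break; // reached the tip of the chain
        }
        from_block = response.block_num;
    }
}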
SyncState
Iteratively sync data for specific notes and accounts.
This request returns the next block containing data of interest. The client is expected to repeat these requests in a loop until the response reaches the head of the chain, at which point the data is fully synced.
Each update response also contains info about new notes, accounts, etc. that were created. It also returns a chain MMR delta that can be used to update the local chain MMR state; this includes both chain MMR peaks and chain MMR nodes.
The low part of note tags is redacted to preserve some degree of privacy. Returned data therefore contains additional notes which should be filtered out by the client.
Status
Request the status of the node components. The response contains the current version of the RPC component and the connection status of the other components, including their versions and the number of the most recent block in the chain (chain tip).