single

package module

README

Single Sequencer

The single sequencer is a component of the Evolve framework that handles transaction ordering and batch submission to a Data Availability (DA) layer. It provides a reliable way to sequence transactions for applications via a designated node called the sequencer.

Overview

The sequencer receives transactions from clients, batches them together, and submits these batches to a Data Availability layer. It maintains transaction and batch queues, handles recovery from crashes, and provides verification mechanisms for batches.

flowchart LR
    Client["Client"] --> Sequencer
    Sequencer --> DA["DA Layer"]
    Sequencer <--> Database

Components

Sequencer

The main component that orchestrates the entire sequencing process. It:

  • Receives transactions from clients
  • Maintains transaction and batch queues
  • Periodically creates and submits batches to the DA layer
  • Handles recovery from crashes
  • Provides verification mechanisms for batches
TransactionQueue

Manages the queue of pending transactions:

  • Stores transactions in memory and in the database
  • Provides methods to add transactions and extract batches
  • Handles recovery of transactions from the database after a crash
BatchQueue

Manages the queue of batches:

  • Stores batches in memory and in the database
  • Provides methods to add and retrieve batches
  • Handles recovery of batches from the database after a crash
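
Although the sequencer drives these queues internally, BatchQueue is exported (see the API reference below). A minimal sketch of using it directly, assuming database is a ds.Batching datastore and ctx a context as in the usage examples; the "batches" prefix and size limit of 100 are arbitrary:

bq := NewBatchQueue(database, "batches", 100)

// Enqueue a batch; AddBatch returns ErrQueueFull once the queue holds 100 batches.
err := bq.AddBatch(ctx, coresequencer.Batch{
    Transactions: [][]byte{transaction},
})

// Dequeue the next batch, which marks it as processed in the WAL.
batch, err := bq.Next(ctx)
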
DAClient

Handles communication with the Data Availability layer:

  • Submits batches to the DA layer
  • Retrieves batch status from the DA layer

Flow of Calls

Initialization Flow
flowchart TD
    A["NewSequencer()"] --> B["LoadLastBatchHashFromDB()"]
    B --> C["LoadSeenBatchesFromDB()"]
    C --> D["Load Transaction Queue from DB"]
    D --> E["Load BatchQueue from DB"]
    E --> F["Start batch submission loop"]
Transaction Submission Flow
flowchart TD
    A["SubmitBatchTxs()"] --> B["Validate ID"]
    B --> C["AddTransaction to Queue"]
    C --> D["Store in DB"]
Batch Creation and Submission Flow
flowchart TD
    A["batchSubmissionLoop()"] --> B["publishBatch()"]
    B --> C["GetNextBatch from TxQueue"]
    C --> D["If batch not empty"]
    D --> E["submitBatchToDA"]
    E --> F["Add to BatchQueue"]
Batch Retrieval Flow
flowchart TD
    A["GetNextBatch()"] --> B["Validate ID"]
    B --> C["Check batch hash match"]
    C --> D["If match or both nil"]
    D --> E["Get batch from BatchQueue"]
    E --> F["Update last batch hash"]
Batch Verification Flow
flowchart TD
    A["VerifyBatch()"] --> B["Validate ID"]
    B --> C["Check if batch hash in seen batches map"]
    C --> D["Return status"]

Database Layout

The single sequencer uses a key-value database to store transactions, batches, and metadata. Here's the layout of the database:

Keys
Key Pattern               Description
l                         Last batch hash
seen:<hex_encoded_hash>   Marker for seen batch hashes
<hex_encoded_hash>        Batch data (key is the hex-encoded batch hash)
tx:<tx_hash>              Transaction data (hash is the SHA-256 of the transaction bytes)
Key Details
Last Batch Hash Key (l)
  • Stores the hash of the last processed batch
  • Used for recovery after a crash
  • Value: Raw bytes of the hash
Seen Batch Hash Keys (seen:<hex_encoded_hash>)
  • Marks batches that have been seen and processed
  • Used for batch verification
  • Value: 1 (presence indicates the batch has been seen)
Batch Keys (<hex_encoded_hash>)
  • Stores the actual batch data
  • Key is the hex-encoded hash of the batch
  • Value: Protobuf-encoded batch data
Transaction Keys (tx:<tx_hash>)
  • Stores individual transactions
  • Key is prefixed with tx: followed by the SHA-256 hash of the transaction bytes
  • Value: Raw transaction bytes
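
The exact key construction is internal to the package. As a rough sketch of the shapes described above, using crypto/sha256 and encoding/hex (the helper names and the hex encoding of the transaction hash are assumptions, not the package's actual code):

import (
    "crypto/sha256"
    "encoding/hex"
)

// batchKey and txKey are hypothetical helpers that mirror the key
// layout described above; they are not part of this package's API.
func batchKey(batchHash []byte) string {
    // Batch data is stored under the hex-encoded batch hash.
    return hex.EncodeToString(batchHash)
}

func txKey(tx []byte) string {
    // Transaction data is stored under "tx:" followed by the SHA-256
    // hash of the raw transaction bytes (hex encoding assumed here).
    sum := sha256.Sum256(tx)
    return "tx:" + hex.EncodeToString(sum[:])
}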

Recovery Mechanism

The single sequencer implements a robust recovery mechanism to handle crashes:

  1. On startup, it loads the last batch hash from the database
  2. It loads all seen batch hashes into memory
  3. It loads all pending transactions from the database into the transaction queue
  4. It loads all pending batches from the database into the batch queue
  5. It resumes normal operation, continuing from where it left off

This ensures that no transactions are lost in case of a crash, and the sequencer can continue operating seamlessly.
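
NewSequencer performs these steps itself. A caller using the exported BatchQueue directly would follow the same pattern after a restart (a sketch reusing the assumptions from the BatchQueue example above):

bq := NewBatchQueue(database, "batches", 100)

// Reload persisted batches from the database into the in-memory queue;
// processing then resumes with the first unprocessed batch via Next.
err := bq.Load(ctx)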

Metrics

The sequencer exposes the following metrics:

Metric                  Description
gas_price               The gas price used for DA submissions
last_blob_size          The size in bytes of the last submitted DA blob
transaction_status      Count of transaction statuses for DA submissions
num_pending_blocks      The number of pending blocks for DA submission
included_block_height   The height of the last DA-included block

These metrics can be used to monitor the health and performance of the sequencer.
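
A sketch of wiring these up with the constructors documented in the API reference below; the chain ID and the values passed to RecordMetrics are placeholders:

// Prometheus-backed metrics when enabled, no-op metrics otherwise.
metricsProvider := DefaultMetricsProvider(true)
metrics, err := metricsProvider("my-chain")

// The block manager reports values after each DA submission;
// statusCode is the coreda.StatusCode returned for that submission.
seq.RecordMetrics(gasPrice, blobSize, statusCode, numPendingBlocks, includedBlockHeight)

The resulting metrics value is the one passed to NewSequencer in the usage example below.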

Usage

To create a new single sequencer:

seq, err := NewSequencer(
    context.Background(),
    logger,
    database,
    daLayer,
    id,
    batchTime,
    metrics,
    proposer,
)

To submit transactions:

response, err := seq.SubmitBatchTxs(
    context.Background(),
    coresequencer.SubmitBatchTxsRequest{
        Id: id,
        Batch: &coresequencer.Batch{
            Transactions: [][]byte{transaction},
        },
    },
)

To get the next batch:

response, err := seq.GetNextBatch(
    context.Background(),
    coresequencer.GetNextBatchRequest{
        Id: id,
        LastBatchHash: lastHash,
    },
)

To verify a batch:

response, err := seq.VerifyBatch(
    context.Background(),
    coresequencer.VerifyBatchRequest{
        Id: id,
        BatchHash: batchHash,
    },
)

Documentation

Overview

This package implements a single sequencer.

Constants

const (
	// MetricsSubsystem is a subsystem shared by all metrics exposed by this
	// package.
	MetricsSubsystem = "sequencer"
)

Variables

var (
	ErrInvalidId = errors.New("invalid chain id")
)

ErrInvalidId is returned when the chain id is invalid

var ErrQueueFull = errors.New("batch queue is full")

ErrQueueFull is returned when the batch queue has reached its maximum size
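
For example, a caller adding batches directly might check for this sentinel with the standard errors package (a sketch; bq and batch are placeholders):

if err := bq.AddBatch(ctx, batch); err != nil {
	if errors.Is(err, ErrQueueFull) {
		// The queue is at capacity: back off and retry later, or reject the submission.
	}
	return err
}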

Functions

This section is empty.

Types

type BatchQueue

type BatchQueue struct {
	// contains filtered or unexported fields
}

BatchQueue implements a persistent queue for transaction batches

func NewBatchQueue

func NewBatchQueue(db ds.Batching, prefix string, maxSize int) *BatchQueue

NewBatchQueue creates a new BatchQueue with the specified maximum size. If maxSize is 0, the queue will be unlimited.

func (*BatchQueue) AddBatch

func (bq *BatchQueue) AddBatch(ctx context.Context, batch coresequencer.Batch) error

AddBatch adds a new batch to the queue and writes it to the WAL. It returns ErrQueueFull if the queue has reached its maximum size.

func (*BatchQueue) Load

func (bq *BatchQueue) Load(ctx context.Context) error

Load reloads all batches from the WAL into the in-memory queue after a crash or restart

func (*BatchQueue) Next

func (bq *BatchQueue) Next(ctx context.Context) (*coresequencer.Batch, error)

Next extracts a batch of transactions from the queue and marks it as processed in the WAL

func (*BatchQueue) Size

func (bq *BatchQueue) Size() int

Size returns the effective number of batches in the queue. This method is primarily for testing and monitoring purposes.

type Metrics

type Metrics struct {
	// GasPrice
	GasPrice metrics.Gauge
	// Last submitted blob size
	LastBlobSize metrics.Gauge
	// cost / byte
	// CostPerByte metrics.Gauge
	// Wallet Balance
	// WalletBalance metrics.Gauge
	// Transaction Status
	TransactionStatus metrics.Counter
	// Number of pending blocks.
	NumPendingBlocks metrics.Gauge
	// Last included block height
	IncludedBlockHeight metrics.Gauge
}

Metrics contains metrics exposed by this package.

func NopMetrics

func NopMetrics() (*Metrics, error)

NopMetrics returns no-op Metrics.

func PrometheusMetrics

func PrometheusMetrics(labelsAndValues ...string) (*Metrics, error)

PrometheusMetrics returns Metrics built using the Prometheus client library. Optionally, labels can be provided along with their values ("foo", "fooValue").

type MetricsProvider

type MetricsProvider func(chainID string) (*Metrics, error)

MetricsProvider returns sequencing Metrics.

func DefaultMetricsProvider

func DefaultMetricsProvider(enabled bool) MetricsProvider

DefaultMetricsProvider returns Metrics built using the Prometheus client library if Prometheus is enabled. Otherwise, it returns no-op Metrics.

type Sequencer

type Sequencer struct {
	Id []byte
	// contains filtered or unexported fields
}

Sequencer implements the core sequencing interface

func NewSequencer

func NewSequencer(
	ctx context.Context,
	logger zerolog.Logger,
	db ds.Batching,
	da coreda.DA,
	id []byte,
	batchTime time.Duration,
	metrics *Metrics,
	proposer bool,
) (*Sequencer, error)

NewSequencer creates a new Single Sequencer

func NewSequencerWithQueueSize

func NewSequencerWithQueueSize(
	ctx context.Context,
	logger zerolog.Logger,
	db ds.Batching,
	da coreda.DA,
	id []byte,
	batchTime time.Duration,
	metrics *Metrics,
	proposer bool,
	maxQueueSize int,
) (*Sequencer, error)

NewSequencerWithQueueSize creates a new Single Sequencer with configurable queue size
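
For example, mirroring the README usage snippet (the queue size of 1000 is arbitrary):

seq, err := NewSequencerWithQueueSize(
	context.Background(),
	logger,
	database,
	daLayer,
	id,
	batchTime,
	metrics,
	proposer,
	1000, // maxQueueSize
)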

func (*Sequencer) GetNextBatch

GetNextBatch implements sequencing.Sequencer.

func (*Sequencer) RecordMetrics

func (c *Sequencer) RecordMetrics(gasPrice float64, blobSize uint64, statusCode coreda.StatusCode, numPendingBlocks uint64, includedBlockHeight uint64)

RecordMetrics updates the metrics with the given values. This method is intended to be called by the block manager after submitting data to the DA layer.

func (*Sequencer) SubmitBatchTxs

SubmitBatchTxs implements sequencing.Sequencer.

func (*Sequencer) VerifyBatch

VerifyBatch implements sequencing.Sequencer.
