mach

package module
v1.0.1
Published: Jul 14, 2025 License: Apache-2.0 Imports: 25 Imported by: 0

README

Mach - Embedded Object Storage Database for Go


Mach is a high-performance embedded object storage database for Go applications with an S3-compatible API. It provides a lightweight, self-contained solution for applications that need reliable object storage without running external services.

Why Mach?

  • 🚀 Embedded: No external services required - embed directly in your Go application
  • 📦 Lightweight: Pure Go implementation with minimal external dependencies
  • 🔄 S3 Compatible: Drop-in replacement for S3 operations in development and testing
  • ⚡ High Performance: Optimized for concurrent operations with intelligent caching
  • 🛡️ Data Integrity: Built-in checksums and atomic operations ensure data safety
  • 📊 Observable: Built-in metrics and health monitoring
  • 🔧 Configurable: Flexible configuration for different use cases

Use Cases

  • Development & Testing: S3-compatible local storage for development environments
  • Edge Computing: Embedded storage for edge applications and IoT devices
  • Microservices: Local object storage for containerized applications
  • Backup Systems: Reliable local storage with S3-compatible interface
  • Content Management: File storage for web applications and CMS systems
  • Data Processing: Temporary storage for data processing pipelines

Quick Start

Installation
go get github.com/elastic-io/mach
Basic Usage
package main

import (
    "fmt"
    "log"
    
    "github.com/elastic-io/mach"
)

func main() {
    // Create embedded storage instance
    storage, err := mach.New("./data")
    if err != nil {
        log.Fatal(err)
    }
    defer storage.Close()
    
    // Create a bucket
    err = storage.CreateBucket("my-app-data")
    if err != nil {
        log.Fatal(err)
    }
    
    // Store an object
    obj := &mach.ObjectData{
        Key:         "config/app.json",
        Data:        []byte(`{"version": "1.0", "debug": true}`),
        ContentType: "application/json",
        Metadata:    map[string]string{"app": "myapp"},
    }
    
    err = storage.PutObject("my-app-data", obj)
    if err != nil {
        log.Fatal(err)
    }
    
    // Retrieve the object
    retrievedObj, err := storage.GetObject("my-app-data", "config/app.json")
    if err != nil {
        log.Fatal(err)
    }
    
    fmt.Printf("Retrieved config: %s\n", string(retrievedObj.Data))
    fmt.Printf("Metadata: %v\n", retrievedObj.Metadata)
}
Web Application Example
package main

import (
    "io"
    "net/http"
    
    "github.com/elastic-io/mach"
)

func main() {
    // Initialize embedded storage
    storage, err := mach.New("./uploads")
    if err != nil {
        panic(err)
    }
    defer storage.Close()
    
    if err := storage.CreateBucket("user-uploads"); err != nil {
        panic(err)
    }
    
    // File upload handler
    http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
        file, header, err := r.FormFile("file")
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        defer file.Close()
        
        data, err := io.ReadAll(file)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        
        obj := &mach.ObjectData{
            Key:         header.Filename,
            Data:        data,
            ContentType: header.Header.Get("Content-Type"),
        }
        
        err = storage.PutObject("user-uploads", obj)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        
        w.WriteHeader(http.StatusCreated)
        w.Write([]byte("File uploaded successfully"))
    })
    
    // File download handler
    http.HandleFunc("/download/", func(w http.ResponseWriter, r *http.Request) {
        filename := r.URL.Path[10:] // Remove "/download/"
        
        obj, err := storage.GetObject("user-uploads", filename)
        if err != nil {
            http.Error(w, "File not found", http.StatusNotFound)
            return
        }
        
        w.Header().Set("Content-Type", obj.ContentType)
        w.Header().Set("Content-Disposition", `attachment; filename="`+filename+`"`)
        w.Write(obj.Data)
    })
    
    if err := http.ListenAndServe(":8080", nil); err != nil {
        panic(err)
    }
}

Features

Core Database Features
  • Embedded Architecture: Runs directly in your Go process
  • ACID Compliance: Atomic operations with consistency guarantees
  • Concurrent Access: Thread-safe operations with fine-grained locking
  • Data Integrity: MD5 checksums and verification
  • Efficient Storage: Hash-based sharding and optimized file organization
S3-Compatible API
  • Bucket Operations: Create, delete, list buckets
  • Object Operations: Put, get, delete, list objects with metadata
  • Multipart Uploads: Large file uploads with resumable transfers
  • Range Requests: Partial content delivery
  • Streaming Operations: Memory-efficient handling of large files
Performance & Reliability
  • High Throughput: Optimized for concurrent operations
  • Memory Efficient: Buffer pooling and streaming operations
  • Background Cleanup: Automatic maintenance and garbage collection
  • Health Monitoring: Built-in metrics and health checks
  • Configurable: Tunable for different workloads

API Reference

Database Initialization
// Basic initialization
storage, err := mach.New("/path/to/data")

// With a custom logger
storage, err := mach.NewWithLogger("/path/to/data", logger)

// With custom configuration
config := &mach.Config{
    MaxConcurrentUploads:   16,
    MaxConcurrentDownloads: 32,
    BufferSize:             64 * 1024,
    EnableChecksumVerify:   true,
    CleanupInterval:        30 * time.Minute,
}

storage, err := mach.New("/path/to/data")
storage.SetConfig(config)
Bucket Operations
// Create bucket
err := storage.CreateBucket("my-bucket")

// List buckets
buckets, err := storage.ListBuckets()

// Check bucket existence
exists, err := storage.BucketExists("my-bucket")

// Delete bucket (must be empty)
err := storage.DeleteBucket("my-bucket")
Object Operations
// Store object
obj := &mach.ObjectData{
    Key:         "documents/readme.txt",
    Data:        []byte("Hello World"),
    ContentType: "text/plain",
    Metadata:    map[string]string{"author": "user1"},
}
err := storage.PutObject("my-bucket", obj)

// Retrieve object
obj, err := storage.GetObject("my-bucket", "documents/readme.txt")

// Delete object
err := storage.DeleteObject("my-bucket", "documents/readme.txt")

// List objects with prefix
objects, prefixes, err := storage.ListObjects("my-bucket", "documents/", "", "", 100)
Large File Handling
// Streaming upload for large files
reader := bytes.NewReader(largeData)
etag, err := storage.PutObjectStream("bucket", "large-file.zip", 
    reader, int64(len(largeData)), "application/zip", nil)

// Streaming download
stream, metadata, err := storage.GetObjectStream("bucket", "large-file.zip")
defer stream.Close()

// Multipart upload for very large files
uploadID, err := storage.CreateMultipartUpload("bucket", "huge-file.dat", 
    "application/octet-stream", nil)

// Upload parts (5MB minimum per part)
var parts []mach.MultipartPart
for i, chunk := range fileChunks {
    etag, err := storage.UploadPart("bucket", "huge-file.dat", uploadID, i+1, chunk)
    if err != nil {
        log.Fatal(err)
    }
    parts = append(parts, mach.MultipartPart{PartNumber: i + 1, ETag: etag})
}

// Complete upload
finalETag, err := storage.CompleteMultipartUpload("bucket", "huge-file.dat", uploadID, parts)
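The part loop above assumes the file has already been split into chunks. A minimal, self-contained helper for producing fixed-size parts (the 5MB size follows the minimum noted above; the name splitIntoParts is illustrative, not part of the mach API):

```go
package main

import "fmt"

// splitIntoParts splits data into chunks of at most partSize bytes.
// Every part except possibly the last is exactly partSize bytes.
func splitIntoParts(data []byte, partSize int) [][]byte {
	var parts [][]byte
	for len(data) > 0 {
		n := partSize
		if len(data) < n {
			n = len(data)
		}
		parts = append(parts, data[:n])
		data = data[n:]
	}
	return parts
}

func main() {
	const partSize = 5 * 1024 * 1024   // 5MB minimum per part
	data := make([]byte, 12*1024*1024) // a 12MB payload
	parts := splitIntoParts(data, partSize)
	fmt.Println(len(parts)) // 3 parts: 5MB, 5MB, 2MB
}
```

Each element of the returned slice can then be passed directly to UploadPart with a 1-based part number.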
Monitoring & Health
// Get storage statistics
stats, err := storage.GetStats()
fmt.Printf("Buckets: %d, Objects: %d, Size: %d bytes\n", 
    stats.BucketCount, stats.ObjectCount, stats.TotalSize)

// Performance metrics
metrics := storage.GetMetrics()
fmt.Printf("Operations: R:%d W:%d D:%d, Errors: %d\n",
    metrics.ReadOps, metrics.WriteOps, metrics.DeleteOps, metrics.ErrorCount)

// Health check
if err := storage.HealthCheck(); err != nil {
    log.Printf("Storage health issue: %v", err)
}

// Advanced monitoring
monitor := mach.NewPerformanceMonitor(storage)
monitor.AddAlertCallback(func(alert mach.Alert) {
    log.Printf("ALERT: %s - %s", alert.Type, alert.Message)
})
monitor.Start(1 * time.Minute)

Configuration

Storage Configuration
config := &mach.Config{
    // Concurrency limits
    MaxConcurrentUploads:   runtime.NumCPU() * 4,
    MaxConcurrentDownloads: runtime.NumCPU() * 8,
    
    // Performance tuning
    BufferSize:           64 * 1024,  // 64KB buffer
    UseDirectIO:          false,      // Direct I/O bypass
    UseMmap:              true,       // Memory mapping
    
    // Data integrity
    EnableChecksumVerify: true,       // MD5 verification
    ChecksumAlgorithm:    "md5",      // Checksum algorithm
    
    // Maintenance
    CleanupInterval:      30 * time.Minute,
    TempFileMaxAge:       2 * time.Hour,
}
Directory Structure

The embedded database creates the following structure:

data-directory/
├── buckets/             # Object data storage
│   └── bucket-name/
│       └── ab/cd/       # Hash-based sharding (ab/cd/object-key)
├── .db.sys/             # System directory
│   ├── buckets/         # Bucket metadata
│   ├── multipart/       # Multipart upload state
│   └── tmp/             # Temporary files

Performance

Benchmarks

Performance on Apple M2 Pro (ARM64, 16GB RAM, NVMe SSD):

Single-threaded Operations

Operation  Object Size  Throughput   Latency  Ops/sec
Put        1KB          0.13 MB/s    8.0ms    124
Put        64KB         8.08 MB/s    8.1ms    123
Put        1MB          85.32 MB/s   12.3ms   81
Put        10MB         253.06 MB/s  41.4ms   24
Put        100MB        303.52 MB/s  345.5ms  3
Get        1KB          5.44 MB/s    188μs    5,312
Get        64KB         229.69 MB/s  285μs    3,504
Get        1MB          490.47 MB/s  2.1ms    502
Get        10MB         627.13 MB/s  16.7ms   60
Get        100MB        615.08 MB/s  170.5ms  6

Concurrent Operations (1MB objects)

Concurrency  Put Throughput  Get Throughput
1            88.81 MB/s      443.13 MB/s
2            158.58 MB/s     762.40 MB/s
4            185.07 MB/s     1,270.31 MB/s
8            167.52 MB/s     2,470.38 MB/s
16           139.81 MB/s     2,841.09 MB/s
32           133.75 MB/s     3,248.64 MB/s

Streaming Operations

Operation   Object Size  Throughput      Latency
Stream Put  1MB          85.48 MB/s      12.3ms
Stream Put  10MB         250.89 MB/s     41.8ms
Stream Put  100MB        307.55 MB/s     341ms
Stream Get  1MB          6,338.74 MB/s   165μs
Stream Get  10MB         17,023.44 MB/s  616μs
Stream Get  100MB        13,891.45 MB/s  7.5ms

Multipart Upload

Operation         File Size              Throughput   Latency
Multipart Upload  50MB file (5MB parts)  162.24 MB/s  323ms
Performance Characteristics
Excellent Read Performance
  • Streaming reads show exceptional performance (up to 17GB/s for 10MB files)
  • Concurrent reads scale very well with increased concurrency
  • Small object reads achieve over 5,000 ops/sec
Optimized Write Performance
  • Large file writes achieve 300+ MB/s throughput
  • Concurrent writes show good scaling up to 4-8 threads
  • Write performance optimized for larger objects
Memory Efficiency
  • Streaming operations maintain constant memory usage
  • Buffer pooling reduces allocation overhead
  • Memory usage scales predictably with concurrent operations
Platform-Specific Notes
Apple Silicon (M2/M3) Performance
  • Exceptional read performance due to unified memory architecture
  • Good write performance with NVMe storage
  • Excellent concurrent scaling for read operations
Intel x86_64 Performance
  • Balanced read/write performance
  • Good scaling across different workloads
  • Consistent performance across object sizes
Memory Usage
  • Base overhead: ~10MB for the storage engine
  • Per object metadata: ~200 bytes
  • Buffer pools: Configurable (default 64KB × CPU cores)
  • Streaming operations: Constant memory usage regardless of file size
  • Concurrent operations: Linear memory scaling with active operations
Optimization Tips
  1. For Read-Heavy Workloads:

    config.MaxConcurrentDownloads = runtime.NumCPU() * 16  // High concurrency
    config.UseMmap = true                                   // Enable memory mapping
    config.BufferSize = 256 * 1024                         // Larger buffers
    
  2. For Write-Heavy Workloads:

    config.MaxConcurrentUploads = runtime.NumCPU() * 4     // Moderate concurrency
    config.UseDirectIO = true                               // Bypass OS cache
    config.EnableChecksumVerify = false                     // Disable for speed
    
  3. For Large Files:

    // Use streaming operations
    etag, err := storage.PutObjectStream(bucket, key, reader, size, contentType, metadata)
    
    // Use multipart for files > 100MB
    if size > 100*1024*1024 {
        uploadID, err := storage.CreateMultipartUpload(bucket, key, contentType, metadata)
        // Upload in 5-10MB parts
    }
    
  4. For Small Files:

    config.BufferSize = 32 * 1024                          // Smaller buffers
    config.MaxConcurrentUploads = runtime.NumCPU() * 8     // Higher concurrency
    
  5. Memory Optimization:

    config.UseMmap = false                                  // Disable mmap for memory-constrained environments
    config.BufferSize = 16 * 1024                          // Smaller buffers
    config.MetadataCacheSize = 1000                         // Smaller cache
    
Benchmark Environment

The benchmarks were run on:

  • CPU: Apple M2 Pro (12-core)
  • Memory: 16GB unified memory
  • Storage: NVMe SSD
  • OS: macOS (darwin/arm64)
  • Go: 1.21+

For different hardware configurations, performance may vary. Run the included benchmark suite to measure performance on your specific environment:

make benchmark

Comparison with Alternatives

Mach is compared against SQLite + BLOBs, BadgerDB, and the raw file system on the following capabilities: S3 API, embedded operation, large files, metadata, transactions, streaming, and multipart uploads.

Examples

See the examples directory for complete, runnable examples.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Documentation

Index

Constants

const (
	// Multipart upload constants
	MaxMultipartLifetime = 7 * 24 * time.Hour // 7 days, matching S3
	CleanupInterval      = 1 * time.Hour      // interval between cleanups of expired uploads
)
const (
	// Base unit: byte
	Byte = 1

	// iota starts at 0 and increments by 1 per line;
	// shifting left by 10 bits multiplies by 2^10 = 1024
	KB = 1 << (10 * iota) // 1 << (10 * 1) = 1024
	MB                    // 1 << (10 * 2) = 1,048,576
	GB                    // 1 << (10 * 3) = 1,073,741,824
	TB                    // 1 << (10 * 4) = 1,099,511,627,776
	PB                    // 1 << (10 * 5) = 1,125,899,906,842,624
)

Data size unit constants.
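The iota expansion above can be verified with a standalone snippet; the constants below mirror the package's declarations:

```go
package main

import "fmt"

// Mirrors the package's size constants: each step of iota shifts
// left by another 10 bits, i.e. multiplies by 2^10 = 1024.
const (
	Byte = 1
	KB   = 1 << (10 * iota) // iota == 1 on this line
	MB
	GB
	TB
	PB
)

func main() {
	fmt.Println(KB, MB)        // 1024 1048576
	fmt.Println(GB == 1024*MB) // true
}
```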

const (
	AK     = "ak"
	SK     = "sk"
	Token  = "token"
	Region = "region"
)

Variables

This section is empty.

Functions

func CalculateETag

func CalculateETag(data []byte) string

CalculateETag computes the MD5 hash of the data and returns it as the ETag.

func CalculateMultipartETag

func CalculateMultipartETag(partETags []string) string

CalculateMultipartETag computes the final ETag of a multipart upload, in the S3-compatible format "{md5-of-all-etags}-{number-of-parts}".
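The documented "{md5-of-all-etags}-{number-of-parts}" format can be sketched as follows. This follows the S3 convention of hashing the concatenated binary digests of the part ETags; whether Mach decodes the hex ETags to binary first is an assumption, and the function multipartETag is illustrative, not the package's implementation:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// multipartETag computes an S3-style multipart ETag: MD5 over the
// concatenated binary digests of the part ETags, followed by
// "-<number-of-parts>".
func multipartETag(partETags []string) string {
	var all []byte
	for _, e := range partETags {
		b, err := hex.DecodeString(e)
		if err != nil {
			b = []byte(e) // fall back to hashing the raw string
		}
		all = append(all, b...)
	}
	sum := md5.Sum(all)
	return fmt.Sprintf("%s-%d", hex.EncodeToString(sum[:]), len(partETags))
}

func main() {
	// Part ETags are themselves MD5 hex digests of each part's data.
	p1 := fmt.Sprintf("%x", md5.Sum([]byte("part-1")))
	p2 := fmt.Sprintf("%x", md5.Sum([]byte("part-2")))
	fmt.Println(multipartETag([]string{p1, p2}))
}
```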

func GenerateUploadID

func GenerateUploadID(bucket, key string) string

GenerateUploadID generates a unique upload ID for the given bucket and key.

Types

type Alert

type Alert struct {
	Type        string
	Severity    string
	Message     string
	Timestamp   time.Time
	MetricValue interface{}
	Threshold   interface{}
}

type AlertCallback

type AlertCallback func(alert Alert)

type AlertThresholds

type AlertThresholds struct {
	MaxErrorRate     float64       // maximum error rate (%)
	MaxAvgLatency    time.Duration // maximum average latency
	MaxDiskUsage     float64       // maximum disk usage (%)
	MaxMemoryUsage   int64         // maximum memory usage (bytes)
	MaxConcurrentOps int64         // maximum number of concurrent operations
}

type BucketInfo

type BucketInfo struct {
	Name         string
	CreationDate time.Time
}

BucketInfo represents a bucket's metadata.

func (BucketInfo) MarshalEasyJSON

func (v BucketInfo) MarshalEasyJSON(w *jwriter.Writer)

MarshalEasyJSON supports easyjson.Marshaler interface

func (BucketInfo) MarshalJSON

func (v BucketInfo) MarshalJSON() ([]byte, error)

MarshalJSON supports json.Marshaler interface

func (*BucketInfo) UnmarshalEasyJSON

func (v *BucketInfo) UnmarshalEasyJSON(l *jlexer.Lexer)

UnmarshalEasyJSON supports easyjson.Unmarshaler interface

func (*BucketInfo) UnmarshalJSON

func (v *BucketInfo) UnmarshalJSON(data []byte) error

UnmarshalJSON supports json.Unmarshaler interface

type Config

type Config struct {
	// Concurrency control
	MaxConcurrentUploads   int
	MaxConcurrentDownloads int

	// Cache configuration
	MetadataCacheSize int
	MetadataCacheTTL  time.Duration

	// Performance tuning
	UseDirectIO bool
	UseMmap     bool
	BufferSize  int

	// Data integrity
	EnableChecksumVerify bool
	ChecksumAlgorithm    string

	// Cleanup configuration
	CleanupInterval time.Duration
	TempFileMaxAge  time.Duration
}

Storage configuration.

func DefaultConfig

func DefaultConfig() *Config

DefaultConfig returns the default configuration.

type DB added in v0.0.2

type DB struct {
	// contains filtered or unexported fields
}

DB implements S3-compatible storage backed by the local filesystem.

func New

func New(path string) (*DB, error)

func NewWithLogger added in v1.0.0

func NewWithLogger(path string, log Logger) (*DB, error)

New creates a new filesystem-backed storage instance.

func (*DB) AbortMultipartUpload added in v0.0.2

func (s *DB) AbortMultipartUpload(bucket, key, uploadID string) error

AbortMultipartUpload aborts a multipart upload.

func (*DB) Backup added in v0.0.2

func (s *DB) Backup(backupPath string) error

Backup backs up the storage (simple implementation).

func (*DB) BucketExists added in v0.0.2

func (s *DB) BucketExists(bucket string) (bool, error)

BucketExists checks whether the bucket exists.

func (*DB) Close added in v0.0.2

func (s *DB) Close() error

Close shuts down the storage and releases its resources.

func (*DB) Compact added in v0.0.2

func (s *DB) Compact() error

Compact compacts the storage (reclaims fragmented space).

func (*DB) CompleteMultipartUpload added in v0.0.2

func (s *DB) CompleteMultipartUpload(bucket, key, uploadID string, parts []MultipartPart) (string, error)

CompleteMultipartUpload completes a multipart upload (optimized version).

func (*DB) CreateBucket added in v0.0.2

func (s *DB) CreateBucket(bucket string) error

CreateBucket creates a bucket.

func (*DB) CreateMultipartUpload added in v0.0.2

func (s *DB) CreateMultipartUpload(bucket, key, contentType string, metadata map[string]string) (string, error)

CreateMultipartUpload starts a multipart upload.

func (*DB) DeleteBucket added in v0.0.2

func (s *DB) DeleteBucket(bucket string) error

DeleteBucket deletes a bucket.

func (*DB) DeleteObject added in v0.0.2

func (s *DB) DeleteObject(bucket, key string) error

DeleteObject deletes an object.

func (*DB) GetConfig added in v0.0.2

func (s *DB) GetConfig() *Config

GetConfig returns the current configuration.

func (*DB) GetDiskUsage added in v0.0.2

func (s *DB) GetDiskUsage() (total, free, used uint64, err error)

GetDiskUsage reports disk usage.

func (*DB) GetMetrics added in v0.0.2

func (s *DB) GetMetrics() *Metrics

GetMetrics returns performance metrics.

func (*DB) GetObject added in v0.0.2

func (s *DB) GetObject(bucket, key string) (*ObjectData, error)

GetObject retrieves an object (optimized version).

func (*DB) GetObjectRange added in v0.0.2

func (s *DB) GetObjectRange(bucket, key string, start, end int64) ([]byte, *ObjectData, error)

GetObjectRange retrieves the specified byte range of an object (supports HTTP Range requests).

func (*DB) GetObjectStream added in v0.0.2

func (s *DB) GetObjectStream(bucket, key string) (io.ReadCloser, *ObjectData, error)

GetObjectStream streams an object for reading (for large files).

func (*DB) GetStats added in v0.0.2

func (s *DB) GetStats() (*Stats, error)

GetStats returns storage statistics.

func (*DB) HealthCheck added in v0.0.2

func (s *DB) HealthCheck() error

HealthCheck performs a health check.

func (*DB) ListBuckets added in v0.0.2

func (s *DB) ListBuckets() ([]BucketInfo, error)

Bucket operations. ListBuckets lists all buckets.

func (*DB) ListMultipartUploads added in v0.0.2

func (s *DB) ListMultipartUploads(bucket string) ([]MultipartUploadInfo, error)

ListMultipartUploads lists all in-progress multipart uploads.

func (*DB) ListObjects added in v0.0.2

func (s *DB) ListObjects(bucket, prefix, marker, delimiter string, maxKeys int) ([]ObjectInfo, []string, error)

ListObjects lists the objects in a bucket.

func (*DB) ListParts added in v0.0.2

func (s *DB) ListParts(bucket, key, uploadID string) ([]*PartInfo, error)

ListParts lists all uploaded parts of a multipart upload.

func (*DB) PutObject added in v0.0.2

func (s *DB) PutObject(bucket string, object *ObjectData) error

PutObject stores an object (optimized version).

func (*DB) PutObjectStream added in v0.0.2

func (s *DB) PutObjectStream(bucket, key string, reader io.Reader, size int64, contentType string, metadata map[string]string) (string, error)

PutObjectStream stores an object from a stream (for large files).

func (*DB) ResetMetrics added in v0.0.2

func (s *DB) ResetMetrics()

ResetMetrics resets the metrics.

func (*DB) Restore added in v0.0.2

func (s *DB) Restore(backupPath string) error

Restore restores the storage from a backup.

func (*DB) SetConfig added in v0.0.2

func (s *DB) SetConfig(config *Config)

SetConfig updates the configuration.

func (*DB) UploadPart added in v0.0.2

func (s *DB) UploadPart(bucket, key, uploadID string, partNumber int, data []byte) (string, error)

UploadPart uploads one part of a multipart upload (optimized version).

func (*DB) Validate added in v0.0.2

func (s *DB) Validate() error

Validate verifies storage integrity.

type DiskUsageInfo

type DiskUsageInfo struct {
	Total     uint64
	Free      uint64
	Used      uint64
	UsageRate float64
}

type Logger

type Logger interface {
	// Debug logs the provided arguments at [DebugLevel].
	// Spaces are added between arguments when neither is a string.
	Debug(args ...interface{})
	// Info logs the provided arguments at [InfoLevel].
	// Spaces are added between arguments when neither is a string.
	Info(args ...interface{})

	// Warn logs the provided arguments at [WarnLevel].
	// Spaces are added between arguments when neither is a string.
	Warn(args ...interface{})

	// Error logs the provided arguments at [ErrorLevel].
	// Spaces are added between arguments when neither is a string.
	Error(args ...interface{})

	// Fatal constructs a message with the provided arguments and calls os.Exit.
	// Spaces are added between arguments when neither is a string.
	Fatal(args ...interface{})

	// Debugf formats the message according to the format specifier
	// and logs it at [DebugLevel].
	Debugf(template string, args ...interface{})

	// Infof formats the message according to the format specifier
	// and logs it at [InfoLevel].
	Infof(template string, args ...interface{})

	// Warnf formats the message according to the format specifier
	// and logs it at [WarnLevel].
	Warnf(template string, args ...interface{})

	// Errorf formats the message according to the format specifier
	// and logs it at [ErrorLevel].
	Errorf(template string, args ...interface{})
}

type Metrics

type Metrics struct {
	// Operation counts
	ReadOps   int64
	WriteOps  int64
	DeleteOps int64
	ListOps   int64

	// Byte counters
	ReadBytes  int64
	WriteBytes int64

	// Error counters
	ErrorCount int64

	// Performance statistics
	AvgReadLatency  int64 // nanoseconds
	AvgWriteLatency int64 // nanoseconds

	// Concurrency statistics
	ActiveReads  int64
	ActiveWrites int64
}

Performance metrics.

type MetricsSnapshot

type MetricsSnapshot struct {
	Timestamp time.Time
	Metrics   Metrics
	DiskUsage DiskUsageInfo
}

type MultipartPart

type MultipartPart struct {
	PartNumber int
	ETag       string
}

MultipartPart represents a single part of a multipart upload.

func (MultipartPart) MarshalEasyJSON

func (v MultipartPart) MarshalEasyJSON(w *jwriter.Writer)

MarshalEasyJSON supports easyjson.Marshaler interface

func (MultipartPart) MarshalJSON

func (v MultipartPart) MarshalJSON() ([]byte, error)

MarshalJSON supports json.Marshaler interface

func (*MultipartPart) UnmarshalEasyJSON

func (v *MultipartPart) UnmarshalEasyJSON(l *jlexer.Lexer)

UnmarshalEasyJSON supports easyjson.Unmarshaler interface

func (*MultipartPart) UnmarshalJSON

func (v *MultipartPart) UnmarshalJSON(data []byte) error

UnmarshalJSON supports json.Unmarshaler interface

type MultipartUploadInfo

type MultipartUploadInfo struct {
	Bucket      string
	Key         string
	UploadID    string
	ContentType string
	Metadata    map[string]string
	CreatedAt   time.Time
}

MultipartUploadInfo stores the metadata of a multipart upload.

func (MultipartUploadInfo) MarshalEasyJSON

func (v MultipartUploadInfo) MarshalEasyJSON(w *jwriter.Writer)

MarshalEasyJSON supports easyjson.Marshaler interface

func (MultipartUploadInfo) MarshalJSON

func (v MultipartUploadInfo) MarshalJSON() ([]byte, error)

MarshalJSON supports json.Marshaler interface

func (*MultipartUploadInfo) UnmarshalEasyJSON

func (v *MultipartUploadInfo) UnmarshalEasyJSON(l *jlexer.Lexer)

UnmarshalEasyJSON supports easyjson.Unmarshaler interface

func (*MultipartUploadInfo) UnmarshalJSON

func (v *MultipartUploadInfo) UnmarshalJSON(data []byte) error

UnmarshalJSON supports json.Unmarshaler interface

type ObjectData

type ObjectData struct {
	Key          string
	Data         []byte
	ContentType  string
	LastModified time.Time
	ETag         string
	Metadata     map[string]string
	Size         int64
}

ObjectData represents the complete data of an object.

func (ObjectData) MarshalEasyJSON

func (v ObjectData) MarshalEasyJSON(w *jwriter.Writer)

MarshalEasyJSON supports easyjson.Marshaler interface

func (ObjectData) MarshalJSON

func (v ObjectData) MarshalJSON() ([]byte, error)

MarshalJSON supports json.Marshaler interface

func (*ObjectData) UnmarshalEasyJSON

func (v *ObjectData) UnmarshalEasyJSON(l *jlexer.Lexer)

UnmarshalEasyJSON supports easyjson.Unmarshaler interface

func (*ObjectData) UnmarshalJSON

func (v *ObjectData) UnmarshalJSON(data []byte) error

UnmarshalJSON supports json.Unmarshaler interface

type ObjectInfo

type ObjectInfo struct {
	Key          string
	Size         int64
	LastModified time.Time
	ETag         string
}

ObjectInfo represents an object's metadata.

func (ObjectInfo) MarshalEasyJSON

func (v ObjectInfo) MarshalEasyJSON(w *jwriter.Writer)

MarshalEasyJSON supports easyjson.Marshaler interface

func (ObjectInfo) MarshalJSON

func (v ObjectInfo) MarshalJSON() ([]byte, error)

MarshalJSON supports json.Marshaler interface

func (*ObjectInfo) UnmarshalEasyJSON

func (v *ObjectInfo) UnmarshalEasyJSON(l *jlexer.Lexer)

UnmarshalEasyJSON supports easyjson.Unmarshaler interface

func (*ObjectInfo) UnmarshalJSON

func (v *ObjectInfo) UnmarshalJSON(data []byte) error

UnmarshalJSON supports json.Unmarshaler interface

type PartInfo

type PartInfo struct {
	PartNumber   int
	ETag         string
	Size         int
	LastModified time.Time
}

func (PartInfo) MarshalEasyJSON

func (v PartInfo) MarshalEasyJSON(w *jwriter.Writer)

MarshalEasyJSON supports easyjson.Marshaler interface

func (PartInfo) MarshalJSON

func (v PartInfo) MarshalJSON() ([]byte, error)

MarshalJSON supports json.Marshaler interface

func (*PartInfo) UnmarshalEasyJSON

func (v *PartInfo) UnmarshalEasyJSON(l *jlexer.Lexer)

UnmarshalEasyJSON supports easyjson.Unmarshaler interface

func (*PartInfo) UnmarshalJSON

func (v *PartInfo) UnmarshalJSON(data []byte) error

UnmarshalJSON supports json.Unmarshaler interface

type PerformanceMonitor

type PerformanceMonitor struct {
	// contains filtered or unexported fields
}

PerformanceMonitor is a performance monitor for a DB instance.

func NewPerformanceMonitor

func NewPerformanceMonitor(db *DB) *PerformanceMonitor

NewPerformanceMonitor creates a performance monitor.

func (*PerformanceMonitor) AddAlertCallback

func (pm *PerformanceMonitor) AddAlertCallback(callback AlertCallback)

AddAlertCallback registers an alert callback.

func (*PerformanceMonitor) GenerateReport

func (pm *PerformanceMonitor) GenerateReport(duration time.Duration) *PerformanceReport

GenerateReport generates a performance report.

func (*PerformanceMonitor) GetCurrentMetrics

func (pm *PerformanceMonitor) GetCurrentMetrics() *Metrics

GetCurrentMetrics returns the current metrics.

func (*PerformanceMonitor) GetMetricsHistory

func (pm *PerformanceMonitor) GetMetricsHistory(duration time.Duration) []MetricsSnapshot

GetMetricsHistory returns historical metrics.

func (*PerformanceMonitor) SetAlertThresholds

func (pm *PerformanceMonitor) SetAlertThresholds(thresholds *AlertThresholds)

SetAlertThresholds sets the alert thresholds.

func (*PerformanceMonitor) Start

func (pm *PerformanceMonitor) Start(interval time.Duration)

Start starts the monitor.

func (*PerformanceMonitor) Stop

func (pm *PerformanceMonitor) Stop()

Stop stops the monitor.

type PerformanceReport

type PerformanceReport struct {
	StartTime           time.Time
	EndTime             time.Time
	Duration            time.Duration
	TotalOperations     int64
	TotalErrors         int64
	ErrorRate           float64
	AvgLatency          time.Duration
	MaxLatency          time.Duration
	MinLatency          time.Duration
	MaxDiskUsage        float64
	OperationsPerSecond float64
}

func (*PerformanceReport) String

func (pr *PerformanceReport) String() string

String formats the report for output.

type Stats

type Stats struct {
	BucketCount int64
	ObjectCount int64
	TotalSize   int64
}

func (Stats) MarshalEasyJSON

func (v Stats) MarshalEasyJSON(w *jwriter.Writer)

MarshalEasyJSON supports easyjson.Marshaler interface

func (Stats) MarshalJSON

func (v Stats) MarshalJSON() ([]byte, error)

MarshalJSON supports json.Marshaler interface

func (*Stats) UnmarshalEasyJSON

func (v *Stats) UnmarshalEasyJSON(l *jlexer.Lexer)

UnmarshalEasyJSON supports easyjson.Unmarshaler interface

func (*Stats) UnmarshalJSON

func (v *Stats) UnmarshalJSON(data []byte) error

UnmarshalJSON supports json.Unmarshaler interface

Directories

Path Synopsis
examples
basic command
performance command
webserver command
