stash

Published: Feb 7, 2026 License: MIT Imports: 5 Imported by: 0

README

stash


Generic cache for Go with TTL, eviction policies, and pluggable storage.

Features

  • Generic Types — Type-safe caching with Cache[K, V]
  • Eviction Policies — LRU, LFU, and FIFO out of the box
  • TTL Support — Per-cache default and per-entry overrides
  • Cost-Based Eviction — Limit by total cost, not just entry count
  • Automatic Loading — Loader functions with single-flight deduplication
  • Pluggable Storage — Store interface for Redis, Valkey, or custom backends
  • Lifecycle Hooks — OnHit, OnMiss, OnEvict for observability
  • Injectable Clock — Control time in tests without real sleeps
  • Zero Dependencies — Only the Go standard library

Installation

go get github.com/bjaus/stash

Requires Go 1.25 or later.

Quick Start

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/bjaus/stash"
)

func main() {
    ctx := context.Background()

    cache := stash.New[string, int](
        stash.WithCapacity[string, int](1000),
        stash.WithTTL[string, int](5*time.Minute),
    )

    cache.Set(ctx, "answer", 42)

    if v, ok, _ := cache.Get(ctx, "answer"); ok {
        fmt.Println(v) // 42
    }
}

Usage

Basic Operations
ctx := context.Background()
cache := stash.New[string, *User]()

// Set a value
cache.Set(ctx, "user:123", user)

// Set with custom TTL
cache.SetWithTTL(ctx, "session:abc", session, 30*time.Minute)

// Get a value
user, ok, err := cache.Get(ctx, "user:123")

// Check existence (in-memory only)
if cache.Has("user:123") {
    // ...
}

// Delete
cache.Delete(ctx, "user:123")

// Clear all entries
cache.Clear()

Eviction Policies

Three policies control which entries are evicted when the cache is full:

// LRU - Least Recently Used (default)
cache := stash.New[string, int](
    stash.WithCapacity[string, int](100),
    stash.WithPolicy[string, int](stash.LRU),
)

// LFU - Least Frequently Used
cache := stash.New[string, int](
    stash.WithCapacity[string, int](100),
    stash.WithPolicy[string, int](stash.LFU),
)

// FIFO - First In, First Out
cache := stash.New[string, int](
    stash.WithCapacity[string, int](100),
    stash.WithPolicy[string, int](stash.FIFO),
)
Policy  Evicts                      Best For
LRU     Least recently accessed     General purpose, temporal locality
LFU     Least frequently accessed   Hot/cold data, popularity-based
FIFO    Oldest entry                Time-ordered data, simple queues

Automatic Loading

Use a loader function to automatically fetch missing entries:

cache := stash.New[string, *User](
    stash.WithLoader(func(id string) (*User, error) {
        return db.GetUser(id)
    }),
)

// GetOrLoad checks cache first, then calls loader on miss
user, err := cache.GetOrLoad(ctx, "user:123")

The loader uses single-flight deduplication — concurrent requests for the same key share one load call, preventing thundering herd.

Cost-Based Eviction

Limit cache by total cost instead of entry count:

cache := stash.New[string, []byte](
    stash.WithMaxCost[string, []byte](100*1024*1024), // 100MB
    stash.WithCost[string, []byte](func(v []byte) int64 {
        return int64(len(v))
    }),
)

cache.Set(ctx, "small", make([]byte, 1024))      // 1KB
cache.Set(ctx, "large", make([]byte, 50*1024*1024)) // 50MB
// Evicts entries to stay under 100MB

External Storage

Plug in Redis, Valkey, or any custom backend:

// Implement the Store interface (batch methods GetMany, SetMany,
// and DeleteMany are omitted here; see the full interface in the package docs)
type Store[K comparable, V any] interface {
    Get(ctx context.Context, key K) (V, bool, error)
    Set(ctx context.Context, key K, value V, ttl time.Duration) error
    Delete(ctx context.Context, key K) error
}

// Use with cache
cache := stash.New[string, *User](
    stash.WithStore(redisStore),
)

// Get checks memory first, then store
user, ok, err := cache.Get(ctx, "user:123")

// Set writes to both memory and store
cache.Set(ctx, "user:123", user)

Lifecycle Hooks

Monitor cache behavior for observability:

cache := stash.New[string, int](
    stash.OnHit(func(key string, value int) {
        metrics.Increment("cache.hit")
    }),
    stash.OnMiss(func(key string) {
        metrics.Increment("cache.miss")
    }),
    stash.OnEvict(func(key string, value int) {
        logger.Debug("evicted", "key", key)
    }),
)

Statistics

Track cache performance:

stats := cache.Stats()

fmt.Printf("Hits: %d\n", stats.Hits)
fmt.Printf("Misses: %d\n", stats.Misses)
fmt.Printf("Evictions: %d\n", stats.Evictions)
fmt.Printf("Hit Rate: %.2f%%\n", stats.HitRate()*100)

Testing

Inject a fake clock to control time:

type fakeClock struct {
    now time.Time
}

func (c *fakeClock) Now() time.Time { return c.now }
func (c *fakeClock) Advance(d time.Duration) { c.now = c.now.Add(d) }

func TestTTL(t *testing.T) {
    clock := &fakeClock{now: time.Now()}
    cache := stash.New[string, int](
        stash.WithTTL[string, int](time.Minute),
        stash.WithClock[string, int](clock),
    )

    ctx := context.Background()
    cache.Set(ctx, "key", 42)

    // Value exists
    if _, ok, _ := cache.Get(ctx, "key"); !ok {
        t.Fatal("expected value before TTL")
    }

    // Advance past TTL
    clock.Advance(2 * time.Minute)

    // Value expired
    if _, ok, _ := cache.Get(ctx, "key"); ok {
        t.Fatal("expected value to expire after TTL")
    }
}

API Reference

Constructor Options
Option                       Description
WithCapacity(n)              Maximum number of entries (default: 1000)
WithTTL(d)                   Default TTL for entries (default: no expiry)
WithPolicy(p)                Eviction policy: LRU, LFU, FIFO (default: LRU)
WithLoader(fn)               Function to load missing entries
WithCost(fn)                 Function to compute entry cost
WithMaxCost(n)               Maximum total cost
WithStore(s)                 External storage backend
WithStoreErrorHandler(fn)    Custom handler for store errors
WithClock(c)                 Clock for time operations (testing)
OnHit(fn)                    Callback on cache hit
OnMiss(fn)                   Callback on cache miss
OnEvict(fn)                  Callback on entry eviction

Cache Methods
Method                              Description
Get(ctx, key)                       Get value, checking store on miss
GetMany(ctx, keys)                  Get multiple values at once
Set(ctx, key, value)                Set value with default TTL
SetMany(ctx, entries)               Set multiple values at once
SetWithTTL(ctx, key, value, ttl)    Set value with custom TTL
Delete(ctx, key)                    Remove entry from cache and store
DeleteMany(ctx, keys)               Remove multiple entries at once
GetOrLoad(ctx, key, loader?)        Get or load via loader (per-call override)
GetManyOrLoad(ctx, keys, loader)    Get or batch-load missing values
Peek(key)                           Get value without affecting stats/eviction
Has(key)                            Check if key exists in memory
Clear()                             Remove all entries from memory
Len()                               Number of entries in memory
Stats()                             Cache statistics snapshot

Design Philosophy

This package separates concerns into layers:

Cache Configuration (set at creation):

  • Capacity limits
  • Eviction policy
  • TTL defaults
  • Storage backend

Per-Operation (set at each call):

  • Custom TTL via SetWithTTL
  • Context for cancellation/timeout

Observability (callbacks):

  • Metrics via OnHit/OnMiss
  • Logging via OnEvict

This separation enables clean dependency injection and consistent cache behavior across an application.

License

MIT License - see LICENSE for details.

Documentation

Overview

Package stash provides a generic in-memory cache with TTL, eviction policies, and pluggable external storage.

Overview

Stash is a type-safe, concurrent cache for Go applications. It supports multiple eviction policies (LRU, LFU, FIFO), time-to-live expiration, cost-based eviction, automatic loading with single-flight deduplication, and optional external storage backends like Redis.

Basic Usage

Create a cache and perform basic operations:

ctx := context.Background()

cache := stash.New[string, int](
	stash.WithCapacity[string, int](1000),
	stash.WithTTL[string, int](5 * time.Minute),
)

// Set a value
cache.Set(ctx, "key", 42)

// Get a value
value, ok, err := cache.Get(ctx, "key")
if err != nil {
	return err
}
if ok {
	fmt.Println(value)
}

// Delete a value
cache.Delete(ctx, "key")

Eviction Policies

Three eviction policies determine which entries are removed when the cache reaches capacity:

// LRU - Least Recently Used (default)
cache := stash.New[string, int](stash.WithPolicy[string, int](stash.LRU))

// LFU - Least Frequently Used
cache := stash.New[string, int](stash.WithPolicy[string, int](stash.LFU))

// FIFO - First In, First Out
cache := stash.New[string, int](stash.WithPolicy[string, int](stash.FIFO))

Automatic Loading

Use a loader function to automatically fetch missing entries. The loader uses single-flight deduplication to prevent thundering herd:

cache := stash.New[string, *User](
	stash.WithLoader(func(id string) (*User, error) {
		return db.GetUser(id)
	}),
)

// GetOrLoad checks cache, then calls loader on miss
user, err := cache.GetOrLoad(ctx, "user:123")

Cost-Based Eviction

Limit the cache by total cost rather than entry count. Useful for caching variable-size data like images or documents:

cache := stash.New[string, []byte](
	stash.WithMaxCost[string, []byte](100 * 1024 * 1024), // 100MB
	stash.WithCost[string, []byte](func(v []byte) int64 {
		return int64(len(v))
	}),
)

External Storage

Plug in external storage backends by implementing the Store interface:

type Store[K comparable, V any] interface {
	Get(ctx context.Context, key K) (V, bool, error)
	GetMany(ctx context.Context, keys []K) (map[K]V, error)
	Set(ctx context.Context, key K, value V, ttl time.Duration) error
	SetMany(ctx context.Context, entries map[K]V, ttl time.Duration) error
	Delete(ctx context.Context, key K) error
	DeleteMany(ctx context.Context, keys []K) error
}

When a store is configured, Get checks memory first then the store, and Set writes to both memory and the store:

cache := stash.New[string, *User](stash.WithStore(redisStore))

Use WithStoreErrorHandler to control how store errors are handled:

cache := stash.New[string, *User](
	stash.WithStore(redisStore),
	stash.WithStoreErrorHandler(func(err error) error {
		log.Printf("store error: %v", err)
		return nil // swallow error, fall back to memory-only
	}),
)

Lifecycle Hooks

Monitor cache behavior for metrics and logging:

cache := stash.New[string, int](
	stash.OnHit(func(key string, value int) {
		metrics.Increment("cache.hit")
	}),
	stash.OnMiss(func(key string) {
		metrics.Increment("cache.miss")
	}),
	stash.OnEvict(func(key string, value int) {
		logger.Debug("evicted", "key", key)
	}),
)

Testing

Inject a custom clock to control time in tests:

type fakeClock struct{ now time.Time }
func (c *fakeClock) Now() time.Time { return c.now }

clock := &fakeClock{now: time.Now()}
cache := stash.New[string, int](
	stash.WithTTL[string, int](time.Minute),
	stash.WithClock[string, int](clock),
)

cache.Set(ctx, "key", 42)
clock.now = clock.now.Add(2 * time.Minute) // TTL expired
_, ok, _ := cache.Get(ctx, "key")          // ok == false

Thread Safety

All Cache methods are safe for concurrent use. The cache uses a sync.RWMutex internally to protect shared state.


Constants

const (
	// DefaultCapacity is the default maximum number of entries.
	DefaultCapacity = 1000
)

Variables

This section is empty.

Functions

This section is empty.

Types

type BatchLoaderFunc

type BatchLoaderFunc[K comparable, V any] func(context.Context, []K) (map[K]V, error)

BatchLoaderFunc is a function that loads multiple values.

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is a generic in-memory cache with TTL and eviction policies.

Example
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/bjaus/stash"
)

func main() {
	ctx := context.Background()
	cache := stash.New[string, int](
		stash.WithCapacity[string, int](100),
		stash.WithTTL[string, int](5*time.Minute),
	)

	cache.Set(ctx, "answer", 42)

	if v, ok, _ := cache.Get(ctx, "answer"); ok {
		fmt.Println(v)
	}
}
Output:

42
Example (Policies)
package main

import (
	"context"
	"fmt"

	"github.com/bjaus/stash"
)

func main() {
	ctx := context.Background()

	// LRU evicts least recently used
	lru := stash.New[string, int](
		stash.WithCapacity[string, int](2),
		stash.WithPolicy[string, int](stash.LRU),
	)
	lru.Set(ctx, "a", 1)
	lru.Set(ctx, "b", 2)
	lru.Get(ctx, "a")    // a is now most recently used
	lru.Set(ctx, "c", 3) // evicts b
	_, hasB, _ := lru.Get(ctx, "b")
	fmt.Println("LRU has b:", hasB)

	// LFU evicts least frequently used
	lfu := stash.New[string, int](
		stash.WithCapacity[string, int](2),
		stash.WithPolicy[string, int](stash.LFU),
	)
	lfu.Set(ctx, "a", 1)
	lfu.Set(ctx, "b", 2)
	lfu.Get(ctx, "a")
	lfu.Get(ctx, "a")    // a has higher frequency
	lfu.Set(ctx, "c", 3) // evicts b
	_, hasB, _ = lfu.Get(ctx, "b")
	fmt.Println("LFU has b:", hasB)

}
Output:

LRU has b: false
LFU has b: false

func New

func New[K comparable, V any](opts ...Option[K, V]) *Cache[K, V]

New creates a new Cache with the given options.

func (*Cache[K, V]) Clear

func (c *Cache[K, V]) Clear()

Clear removes all entries from the in-memory cache. Does not affect the external store.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(ctx context.Context, key K) error

Delete removes a key from the cache. If a store is configured, deletes from both memory and store.

func (*Cache[K, V]) DeleteMany

func (c *Cache[K, V]) DeleteMany(ctx context.Context, keys []K) error

DeleteMany removes multiple keys from the cache.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get retrieves a value from the cache. If a store is configured and the key is not in memory, it checks the store. Returns the value and true if found, zero value and false otherwise.

func (*Cache[K, V]) GetMany

func (c *Cache[K, V]) GetMany(ctx context.Context, keys []K) (found map[K]V, missing []K, err error)

GetMany retrieves multiple values from the cache. Returns found values and the keys that were not found.

func (*Cache[K, V]) GetManyOrLoad

func (c *Cache[K, V]) GetManyOrLoad(ctx context.Context, keys []K, loader BatchLoaderFunc[K, V]) (map[K]V, error)

GetManyOrLoad retrieves multiple values, loading missing ones via the loader. The loader receives only the keys not found in cache/store.

func (*Cache[K, V]) GetOrLoad

func (c *Cache[K, V]) GetOrLoad(ctx context.Context, key K, loader ...LoaderFunc[K, V]) (V, error)

GetOrLoad retrieves a value from the cache, loading it if not present. Uses single-flight to prevent thundering herd. If loader is provided, it overrides the default loader for this call.

func (*Cache[K, V]) Has

func (c *Cache[K, V]) Has(key K) bool

Has checks if a key exists in memory and is not expired.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of entries in the in-memory cache. May include expired entries that haven't been cleaned up yet.

func (*Cache[K, V]) Peek

func (c *Cache[K, V]) Peek(key K) (V, bool)

Peek retrieves a value without updating access stats or triggering callbacks. Useful for debugging or when you don't want to affect eviction order.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(ctx context.Context, key K, value V) error

Set adds or updates a value in the cache using the default TTL. If a store is configured, writes to both memory and store.

func (*Cache[K, V]) SetMany

func (c *Cache[K, V]) SetMany(ctx context.Context, entries map[K]V) error

SetMany adds or updates multiple values using the default TTL.

func (*Cache[K, V]) SetWithTTL

func (c *Cache[K, V]) SetWithTTL(ctx context.Context, key K, value V, ttl time.Duration) error

SetWithTTL adds or updates a value with a specific TTL. If a store is configured, writes to both memory and store.

func (*Cache[K, V]) Stats

func (c *Cache[K, V]) Stats() Snapshot

Stats returns a snapshot of cache statistics. The returned Snapshot is a point-in-time copy safe for concurrent use.

Example
package main

import (
	"context"
	"fmt"

	"github.com/bjaus/stash"
)

func main() {
	ctx := context.Background()
	cache := stash.New[string, int]()

	cache.Set(ctx, "a", 1)
	cache.Get(ctx, "a") // hit
	cache.Get(ctx, "b") // miss

	stats := cache.Stats()
	fmt.Printf("hits: %d, misses: %d, rate: %.0f%%\n",
		stats.Hits, stats.Misses, stats.HitRate()*100)

}
Output:

hits: 1, misses: 1, rate: 50%

type Clock

type Clock interface {
	Now() time.Time
}

Clock provides time operations for the cache. The default implementation uses time.Now().

type LoaderFunc

type LoaderFunc[K comparable, V any] func(context.Context, K) (V, error)

LoaderFunc is a function that loads a single value.

type Option

type Option[K comparable, V any] func(*config[K, V])

Option configures a Cache.

func OnEvict

func OnEvict[K comparable, V any](fn func(K, V)) Option[K, V]

OnEvict sets a callback invoked when an entry is evicted.

Example
package main

import (
	"context"
	"fmt"

	"github.com/bjaus/stash"
)

func main() {
	ctx := context.Background()
	cache := stash.New[string, int](
		stash.WithCapacity[string, int](2),
		stash.OnEvict(func(key string, value int) {
			fmt.Printf("evicted: %s=%d\n", key, value)
		}),
	)

	cache.Set(ctx, "a", 1)
	cache.Set(ctx, "b", 2)
	cache.Set(ctx, "c", 3) // triggers eviction of a

}
Output:

evicted: a=1

func OnHit

func OnHit[K comparable, V any](fn func(K, V)) Option[K, V]

OnHit sets a callback invoked on cache hits.

func OnMiss

func OnMiss[K comparable, V any](fn func(K)) Option[K, V]

OnMiss sets a callback invoked on cache misses.

func WithCapacity

func WithCapacity[K comparable, V any](n int) Option[K, V]

WithCapacity sets the maximum number of entries in the cache.

func WithClock

func WithClock[K comparable, V any](clk Clock) Option[K, V]

WithClock sets a custom clock for time operations. Useful for testing TTL behavior.

func WithCost

func WithCost[K comparable, V any](fn func(V) int64) Option[K, V]

WithCost sets a function to compute the cost of a value. Used with WithMaxCost for cost-based eviction.

Example
package main

import (
	"context"
	"fmt"

	"github.com/bjaus/stash"
)

func main() {
	ctx := context.Background()
	cache := stash.New[string, []byte](
		stash.WithMaxCost[string, []byte](100),
		stash.WithCost[string, []byte](func(v []byte) int64 {
			return int64(len(v))
		}),
	)

	cache.Set(ctx, "small", make([]byte, 10))  // cost 10, total 10
	cache.Set(ctx, "medium", make([]byte, 50)) // cost 50, total 60
	cache.Set(ctx, "large", make([]byte, 60))  // cost 60, total 120 -> evicts until <= 100

	fmt.Println("entries:", cache.Len())
}
Output:

entries: 1

func WithLoader

func WithLoader[K comparable, V any](fn func(K) (V, error)) Option[K, V]

WithLoader sets a function to load values on cache miss.

Example
package main

import (
	"context"
	"fmt"

	"github.com/bjaus/stash"
)

func main() {
	ctx := context.Background()
	cache := stash.New[string, string](
		stash.WithLoader(func(key string) (string, error) {
			// simulate loading from database
			return "loaded:" + key, nil
		}),
	)

	// first call loads and caches
	v1, _ := cache.GetOrLoad(ctx, "user-123")
	fmt.Println(v1)

	// second call returns cached value
	v2, _ := cache.GetOrLoad(ctx, "user-123")
	fmt.Println(v2)

}
Output:

loaded:user-123
loaded:user-123

func WithMaxCost

func WithMaxCost[K comparable, V any](n int64) Option[K, V]

WithMaxCost sets the maximum total cost of all entries. Requires WithCost to be set.

func WithPolicy

func WithPolicy[K comparable, V any](p Policy) Option[K, V]

WithPolicy sets the eviction policy.

func WithStore

func WithStore[K comparable, V any](s Store[K, V]) Option[K, V]

WithStore sets an external backing store for the cache. When set, the cache will read from/write to both the in-memory cache and the external store.

func WithStoreErrorHandler

func WithStoreErrorHandler[K comparable, V any](fn func(error) error) Option[K, V]

WithStoreErrorHandler sets a function to handle store errors. The handler receives the error and returns the error to propagate (or nil to swallow). Default behavior propagates all errors.

func WithTTL

func WithTTL[K comparable, V any](d time.Duration) Option[K, V]

WithTTL sets the default time-to-live for cache entries.

type Policy

type Policy int

Policy defines the eviction policy for the cache.

const (
	// LRU evicts the least recently used entry.
	LRU Policy = iota
	// LFU evicts the least frequently used entry.
	LFU
	// FIFO evicts the oldest entry.
	FIFO
)

type Snapshot

type Snapshot struct {
	Hits      int64
	Misses    int64
	Evictions int64
}

Snapshot is a point-in-time copy of cache statistics.

func (Snapshot) HitRate

func (s Snapshot) HitRate() float64

HitRate returns the cache hit rate as a value between 0 and 1. Returns 0 if there have been no accesses.

type Stats

type Stats struct {
	// contains filtered or unexported fields
}

Stats holds cache statistics using atomic counters for lock-free updates.

func (*Stats) Evictions

func (s *Stats) Evictions() int64

Evictions returns the number of evictions.

func (*Stats) HitRate

func (s *Stats) HitRate() float64

HitRate returns the cache hit rate as a value between 0 and 1. Returns 0 if there have been no accesses.

func (*Stats) Hits

func (s *Stats) Hits() int64

Hits returns the number of cache hits.

func (*Stats) Misses

func (s *Stats) Misses() int64

Misses returns the number of cache misses.

func (*Stats) Snapshot

func (s *Stats) Snapshot() Snapshot

Snapshot returns a point-in-time copy of the stats.

type Store

type Store[K comparable, V any] interface {
	// Get retrieves a value from the store.
	// Returns the value and true if found, zero value and false otherwise.
	Get(ctx context.Context, key K) (V, bool, error)

	// GetMany retrieves multiple values from the store.
	// Returns found values mapped by key. Missing keys are not in the map.
	GetMany(ctx context.Context, keys []K) (map[K]V, error)

	// Set stores a value with an optional TTL.
	// If ttl is zero, the entry should not expire.
	Set(ctx context.Context, key K, value V, ttl time.Duration) error

	// SetMany stores multiple values with an optional TTL.
	SetMany(ctx context.Context, entries map[K]V, ttl time.Duration) error

	// Delete removes a key from the store.
	Delete(ctx context.Context, key K) error

	// DeleteMany removes multiple keys from the store.
	DeleteMany(ctx context.Context, keys []K) error
}

Store defines an external backing store for the cache. Implementations can persist cache entries to Redis, disk, or other storage.
