26 Commits

Author SHA1 Message Date
Karl Seguin
0901f94888 Merge pull request #93 from chenyijun266846/perf_setnx
perf: add setnx2
2024-10-01 10:38:43 +08:00
chenyijun.266846
a47156de7d feat: add existing for setnx2 2024-09-29 15:07:02 +08:00
chenyijun.266846
c0806d27fe fix(setnx2): promotables 2024-09-29 14:46:58 +08:00
chenyijun.266846
d59160ba1c perf: add setnx2 2024-09-29 11:18:20 +08:00
Karl Seguin
e9a80ae138 Merge pull request #91 from miparnisari/bench-cpus
ensure benchmarks are compared if the CPUs are the same
2023-11-27 07:58:24 +08:00
Maria Ines Parnisari
964f899bf4 ensure benchmarks are compared if the CPUs are the same 2023-11-23 11:29:35 -08:00
Karl Seguin
3aa6a053b7 Make test more robust
Attempt to fix: https://github.com/karlseguin/ccache/issues/90
2023-11-23 09:48:42 +08:00
Maria Ines Parnisari
6d135b03a9 feat: add benchmarks 2023-11-23 09:12:34 +08:00
Karl Seguin
bf904fff3c fix make c on macos 2023-11-21 11:42:34 +08:00
Maria Ines Parnisari
de3e573a65 run tests and linter on every PR 2023-11-20 13:44:20 -08:00
Karl Seguin
7b5dfadcde Merge pull request #87 from craigpastro/patch-1
Update readme.md
2023-10-24 08:34:04 +08:00
Craig Pastro
2ad5f8fe86 Fix typo 2023-10-23 10:23:25 -07:00
Craig Pastro
c56472f9b5 Update readme.md
Generic version has landed and that branch no longer exists.
2023-10-23 10:20:28 -07:00
Karl Seguin
378b8b039e Merge pull request #86 from rfyiamcool/feat/public_extend
feat: public extend
2023-10-23 10:51:45 +08:00
rfyiamcool
1594fc55bc feat: public extend
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-22 22:19:27 +08:00
rfyiamcool
2977b36b74 feat: public extend
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-22 22:14:28 +08:00
Karl Seguin
62cd8cc8c3 Merge pull request #85 from rfyiamcool/feat/add_setnx
feat: add setnx (if not exists, set kv)
2023-10-22 20:19:26 +08:00
rfyiamcool
b26c342793 feat: add setnx (if not exists, set kv)
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-22 19:23:23 +08:00
rfyiamcool
dd0671989b feat: add setnx (if not exists, set kv)
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-20 10:36:04 +08:00
Karl Seguin
0f8575167d Merge pull request #84 from idsulik/added-key-method-to-item
Added Key() method to Item
2023-10-20 06:51:10 +08:00
Suleiman Dibirov
fd8f81fe86 Added Key() method to Item 2023-10-19 12:16:13 +03:00
Karl Seguin
a25552af28 Attempt to make Clear concurrency-safe
This is an attempt at fixing #81 without imposing a performance hit on the
cache's "normal" (get/set/fetch) activity. Calling "Clear" is now considerably
more expensive.
2023-04-14 15:27:39 +08:00
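The approach in that commit (visible in the cache.go diff below) is to have Clear lock every bucket and drain the pending promote/delete buffers before resetting state. A rough, self-contained sketch of that pattern, not ccache's actual code, with made-up names (`shard`, `store`, `pending`):

```go
// Toy version: a sharded map whose Clear locks every shard and drains a
// pending-work channel before resetting. Illustrative only; ccache's real
// Clear additionally runs on the worker goroutine (see cache.go below).
package main

import "sync"

type shard struct {
	sync.Mutex
	m map[string]int
}

type store struct {
	shards  []*shard
	pending chan string // async work, like ccache's promotables/deletables
}

func (s *store) Clear() {
	for _, sh := range s.shards { // every normal operation now blocks...
		sh.Lock()
	}
	defer func() {
		for _, sh := range s.shards {
			sh.Unlock()
		}
	}()
	for len(s.pending) > 0 { // ...while buffered work referring to old entries is dropped
		<-s.pending
	}
	for _, sh := range s.shards {
		sh.m = map[string]int{}
	}
}

func main() {
	s := &store{shards: []*shard{{m: map[string]int{"a": 1}}}, pending: make(chan string, 8)}
	s.pending <- "a"
	s.Clear()
	println(len(s.shards[0].m), len(s.pending)) // 0 0
}
```

Taking every bucket lock is exactly why Clear is now "considerably more expensive" than the regular get/set path, which only ever locks one bucket.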
Karl Seguin
35052434f3 Merge pull request #78 from karlseguin/control_stop
Refactor control messages + Stop handling
2023-01-07 12:12:27 +08:00
Karl Seguin
22776be1ee Refactor control messages + Stop handling
Move the control API shared between Cache and LayeredCache into its own struct.
But keep the control logic handling separate - it requires access to the local
values, like dropped and deleteItem.

Stop is now a control message. Channels are no longer closed as part of the stop
process.
2023-01-04 10:40:19 +08:00
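In the new control.go (shown below), the shared API ends up as a named channel type whose methods are promoted into Cache and LayeredCache by embedding. A stripped-down, self-contained sketch of that pattern with only one command and simplified names; not the library's actual code:

```go
package main

// one control message, mirroring the style of control.go
type getSize struct{ res chan int64 }

// the shared control API is a named channel type with methods
type control chan interface{}

func (c control) GetSize() int64 {
	res := make(chan int64)
	c <- getSize{res: res}
	return <-res
}

type cache struct {
	control // embedded: cache.GetSize() is promoted from the control type
	size    int64
}

func newCache() *cache {
	c := &cache{control: make(control, 5), size: 42}
	go func() { // the worker owns the local state and answers control messages
		for msg := range c.control {
			if m, ok := msg.(getSize); ok {
				m.res <- c.size
			}
		}
	}()
	return c
}

func main() {
	println(newCache().GetSize()) // 42
}
```

This is why the commit keeps the handling logic in each worker loop: the embedded type only sends messages, while the worker goroutine still needs access to cache-local values such as `dropped` and the delete handler.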
Karl Seguin
ece93bf87d On delete, always set promotions == -2 and node == nil
Also, item.promotions doesn't need to be loaded/stored using atomic. Once upon a
time it did. Cache was updated long ago to not use atomic operations on it, but
LayeredCache wasn't. They are both consistent now (they don't use atomic
operations).

Fixes: https://github.com/karlseguin/ccache/issues/76
2022-12-13 20:33:58 +08:00
Karl Seguin
3452e4e261 Fix memory leak
As documented in https://github.com/karlseguin/ccache/issues/76, an entry which
is both GC'd and deleted (either via a delete or an update) will result in the
internal link list having a nil tail (because removing the same node multiple times
from the linked list does that).

doDelete was already aware of "invalid" nodes (where item.node == nil), so the
solution seems to be as simple as setting item.node = nil during GC.
2022-11-19 08:15:48 +08:00
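The failure mode described above is easy to reproduce with a toy doubly linked list whose Remove clears the node's pointers after unlinking; removing the same node a second time then wipes the head and tail. A minimal sketch, not ccache's actual List type:

```go
package main

import "fmt"

type node struct {
	prev, next *node
	value      string
}

type list struct{ head, tail *node }

func (l *list) remove(n *node) {
	if n.next == nil {
		l.tail = n.prev
	} else {
		n.next.prev = n.prev
	}
	if n.prev == nil {
		l.head = n.next
	} else {
		n.prev.next = n.next
	}
	n.next, n.prev = nil, nil // unlink and clear the node's pointers
}

func main() {
	a := &node{value: "a"}
	b := &node{value: "b"}
	a.next, b.prev = b, a
	l := &list{head: a, tail: b}

	l.remove(b) // first removal, e.g. during a GC pass
	l.remove(b) // second removal, e.g. a later delete/update: b's pointers are
	// already nil, so both head and tail become nil even though "a" remains
	fmt.Println(l.head == nil, l.tail == nil) // true true
}
```

Setting item.node = nil during GC makes the second removal hit the existing "invalid node" guard instead of corrupting the list.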
19 changed files with 834 additions and 280 deletions

.github/workflows/master.yaml (new file, 50 lines)

@@ -0,0 +1,50 @@
name: Master
on:
push:
branches:
- master
permissions:
contents: read
jobs:
bench:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: './go.mod'
- name: Run benchmark and store the output to a file
run: |
set -o pipefail
make bench | tee bench_output.txt
- name: Get benchmark as JSON
uses: benchmark-action/github-action-benchmark@v1
with:
# What benchmark tool the output.txt came from
tool: 'go'
# Where the output from the benchmark tool is stored
output-file-path: bench_output.txt
# Write benchmarks to this file
external-data-json-path: ./cache/benchmark-data.json
# Workflow will fail when an alert happens
fail-on-alert: true
github-token: ${{ secrets.GITHUB_TOKEN }}
comment-on-alert: true
- name: Get CPU information
uses: kenchan0130/actions-system-info@master
id: system-info
- name: Save benchmark JSON to cache
uses: actions/cache/save@v3
with:
path: ./cache/benchmark-data.json
# Save with commit hash to avoid "cache already exists"
# Save with OS & CPU info to prevent comparing against results from different CPUs
key: ${{ github.sha }}-${{ runner.os }}-${{ steps.system-info.outputs.cpu-model }}-go-benchmark

.github/workflows/pull_request.yaml (new file, 111 lines)

@@ -0,0 +1,111 @@
name: Pull Request
on:
merge_group:
pull_request:
branches:
- master
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version-file: './go.mod'
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: latest
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version-file: './go.mod'
- name: Unit Tests
run: make t
bench:
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
fetch-depth: 0 # to be able to retrieve the last commit in master branch
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version-file: './go.mod'
cache-dependency-path: './go.sum'
check-latest: true
- name: Run benchmark and store the output to a file
run: |
set -o pipefail
make bench | tee ${{ github.sha }}_bench_output.txt
- name: Get CPU information
uses: kenchan0130/actions-system-info@master
id: system-info
- name: Get Master branch SHA
id: get-master-branch-sha
run: |
SHA=$(git rev-parse origin/master)
echo "sha=$SHA" >> $GITHUB_OUTPUT
- name: Try to get benchmark JSON from master branch
uses: actions/cache/restore@v3
id: cache
with:
path: ./cache/benchmark-data.json
key: ${{ steps.get-master-branch-sha.outputs.sha }}-${{ runner.os }}-${{ steps.system-info.outputs.cpu-model }}-go-benchmark
- name: Compare benchmarks with master
uses: benchmark-action/github-action-benchmark@v1
if: steps.cache.outputs.cache-hit == 'true'
with:
# What benchmark tool the output.txt came from
tool: 'go'
# Where the output from the benchmark tool is stored
output-file-path: ${{ github.sha }}_bench_output.txt
# Where the benchmarks in master are (to compare)
external-data-json-path: ./cache/benchmark-data.json
# Do not save the data
save-data-file: false
# Workflow will fail when an alert happens
fail-on-alert: true
github-token: ${{ secrets.GITHUB_TOKEN }}
# Enable Job Summary for PRs
summary-always: true
- name: Run benchmarks
uses: benchmark-action/github-action-benchmark@v1
if: steps.cache.outputs.cache-hit != 'true'
with:
# What benchmark tool the output.txt came from
tool: 'go'
# Where the output from the benchmark tool is stored
output-file-path: ${{ github.sha }}_bench_output.txt
# Write benchmarks to this file, do not publish to Github Pages
save-data-file: false
external-data-json-path: ./cache/benchmark-data.json
# Workflow will fail when an alert happens
fail-on-alert: true
# Enable alert commit comment
github-token: ${{ secrets.GITHUB_TOKEN }}
comment-on-alert: true
# Enable Job Summary for PRs
summary-always: true

.gitignore (2 lines changed)

@@ -1 +1,3 @@
vendor/
.idea/
*.out

.golangci.yaml (new file, 35 lines)

@@ -0,0 +1,35 @@
run:
timeout: 3m
modules-download-mode: readonly
linters:
enable:
- errname
- gofmt
- goimports
- stylecheck
- importas
- errcheck
- gosimple
- govet
- ineffassign
- mirror
- staticcheck
- tagalign
- testifylint
- typecheck
- unused
- unconvert
- unparam
- wastedassign
- whitespace
- exhaustive
- noctx
- promlinter
linters-settings:
govet:
enable-all: true
disable:
- shadow
- fieldalignment

Makefile

@@ -1,18 +1,25 @@
.DEFAULT_GOAL := help
.PHONY: help
help:
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
.PHONY: bench
bench: ## Run benchmarks
go test ./... -bench . -benchtime 5s -timeout 0 -run=XXX -benchmem
.PHONY: l
l: ## Lint Go source files
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest && golangci-lint run
.PHONY: t
-t:
+t: ## Run unit tests
go test -race -count=1 ./...
.PHONY: f
-f:
+f: ## Format code
go fmt ./...
.PHONY: c
-c:
+c: ## Measure code coverage
-go test -race -covermode=atomic ./... -coverprofile=cover.out && \
+go test -race -covermode=atomic ./... -coverprofile=cover.out
# go tool cover -html=cover.out && \
go tool cover -func cover.out \
| grep -vP '[89]\d\.\d%' | grep -v '100.0%' \
|| true
rm cover.out

bucket.go

@@ -35,6 +35,54 @@ func (b *bucket[T]) get(key string) *Item[T] {
return b.lookup[key]
}
func (b *bucket[T]) setnx(key string, value T, duration time.Duration, track bool) *Item[T] {
b.RLock()
item := b.lookup[key]
b.RUnlock()
if item != nil {
return item
}
expires := time.Now().Add(duration).UnixNano()
newItem := newItem(key, value, expires, track)
b.Lock()
defer b.Unlock()
// check again under write lock
item = b.lookup[key]
if item != nil {
return item
}
b.lookup[key] = newItem
return newItem
}
func (b *bucket[T]) setnx2(key string, f func() T, duration time.Duration, track bool) (*Item[T], bool) {
b.RLock()
item := b.lookup[key]
b.RUnlock()
if item != nil {
return item, true
}
b.Lock()
defer b.Unlock()
// check again under write lock
item = b.lookup[key]
if item != nil {
return item, true
}
expires := time.Now().Add(duration).UnixNano()
newItem := newItem(key, f(), expires, track)
b.lookup[key] = newItem
return newItem, false
}
func (b *bucket[T]) set(key string, value T, duration time.Duration, track bool) (*Item[T], *Item[T]) {
expires := time.Now().Add(duration).UnixNano()
item := newItem(key, value, expires, track)
@@ -98,8 +146,10 @@ func (b *bucket[T]) deletePrefix(prefix string, deletables chan *Item[T]) int {
}, deletables)
}
// we expect the caller to have acquired a write lock
func (b *bucket[T]) clear() {
-b.Lock()
+for _, item := range b.lookup {
+item.promotions = -2
+}
b.lookup = make(map[string]*Item[T])
-b.Unlock()
}

cache.go (238 lines changed)

@@ -7,42 +7,15 @@ import (
"time" "time"
) )
// The cache has a generic 'control' channel that is used to send
// messages to the worker. These are the messages that can be sent to it
type getDropped struct {
res chan int
}
type getSize struct {
res chan int64
}
type setMaxSize struct {
size int64
done chan struct{}
}
type clear struct {
done chan struct{}
}
type syncWorker struct {
done chan struct{}
}
type gc struct {
done chan struct{}
}
type Cache[T any] struct {
*Configuration[T]
+control
list *List[*Item[T]]
size int64
buckets []*bucket[T]
bucketMask uint32
deletables chan *Item[T]
promotables chan *Item[T]
-control chan interface{}
}
// Create a new cache with the specified configuration
@@ -51,16 +24,18 @@ func New[T any](config *Configuration[T]) *Cache[T] {
c := &Cache[T]{ c := &Cache[T]{
list: NewList[*Item[T]](), list: NewList[*Item[T]](),
Configuration: config, Configuration: config,
control: newControl(),
bucketMask: uint32(config.buckets) - 1, bucketMask: uint32(config.buckets) - 1,
buckets: make([]*bucket[T], config.buckets), buckets: make([]*bucket[T], config.buckets),
control: make(chan interface{}), deletables: make(chan *Item[T], config.deleteBuffer),
promotables: make(chan *Item[T], config.promoteBuffer),
} }
for i := 0; i < config.buckets; i++ { for i := 0; i < config.buckets; i++ {
c.buckets[i] = &bucket[T]{ c.buckets[i] = &bucket[T]{
lookup: make(map[string]*Item[T]), lookup: make(map[string]*Item[T]),
} }
} }
c.restart() go c.worker()
return c return c
} }
@@ -144,6 +119,27 @@ func (c *Cache[T]) Set(key string, value T, duration time.Duration) {
c.set(key, value, duration, false)
}
// Setnx set the value in the cache for the specified duration if not exists
func (c *Cache[T]) Setnx(key string, value T, duration time.Duration) {
c.bucket(key).setnx(key, value, duration, false)
}
// Setnx2 set the value in the cache for the specified duration if not exists
func (c *Cache[T]) Setnx2(key string, f func() T, duration time.Duration) *Item[T] {
item, existing := c.bucket(key).setnx2(key, f, duration, false)
// consistent with Get
if existing && !item.Expired() {
select {
case c.promotables <- item:
default:
}
// consistent with set
} else if !existing {
c.promotables <- item
}
return item
}
// Replace the value if it exists, does not set if it doesn't.
// Returns true if the item existed an was replaced, false otherwise.
// Replace does not reset item's TTL
@@ -156,6 +152,18 @@ func (c *Cache[T]) Replace(key string, value T) bool {
return true
}
// Extend the value if it exists, does not set if it doesn't exists.
// Returns true if the expire time of the item an was extended, false otherwise.
func (c *Cache[T]) Extend(key string, duration time.Duration) bool {
item := c.bucket(key).get(key)
if item == nil {
return false
}
item.Extend(duration)
return true
}
// Attempts to get the value from the cache and calles fetch on a miss (missing
// or stale item). If fetch returns an error, no value is cached and the error
// is returned back to the caller.
@@ -184,99 +192,6 @@ func (c *Cache[T]) Delete(key string) bool {
return false
}
// Clears the cache
// This is a control command.
func (c *Cache[T]) Clear() {
done := make(chan struct{})
c.control <- clear{done: done}
<-done
}
// Stops the background worker. Operations performed on the cache after Stop
// is called are likely to panic
// This is a control command.
func (c *Cache[T]) Stop() {
close(c.promotables)
<-c.control
}
// Gets the number of items removed from the cache due to memory pressure since
// the last time GetDropped was called
// This is a control command.
func (c *Cache[T]) GetDropped() int {
return doGetDropped(c.control)
}
func doGetDropped(controlCh chan<- interface{}) int {
res := make(chan int)
controlCh <- getDropped{res: res}
return <-res
}
// SyncUpdates waits until the cache has finished asynchronous state updates for any operations
// that were done by the current goroutine up to now.
//
// For efficiency, the cache's implementation of LRU behavior is partly managed by a worker
// goroutine that updates its internal data structures asynchronously. This means that the
// cache's state in terms of (for instance) eviction of LRU items is only eventually consistent;
// there is no guarantee that it happens before a Get or Set call has returned. Most of the time
// application code will not care about this, but especially in a test scenario you may want to
// be able to know when the worker has caught up.
//
// This applies only to cache methods that were previously called by the same goroutine that is
// now calling SyncUpdates. If other goroutines are using the cache at the same time, there is
// no way to know whether any of them still have pending state updates when SyncUpdates returns.
// This is a control command.
func (c *Cache[T]) SyncUpdates() {
doSyncUpdates(c.control)
}
func doSyncUpdates(controlCh chan<- interface{}) {
done := make(chan struct{})
controlCh <- syncWorker{done: done}
<-done
}
// Sets a new max size. That can result in a GC being run if the new maxium size
// is smaller than the cached size
// This is a control command.
func (c *Cache[T]) SetMaxSize(size int64) {
done := make(chan struct{})
c.control <- setMaxSize{size: size, done: done}
<-done
}
// Forces GC. There should be no reason to call this function, except from tests
// which require synchronous GC.
// This is a control command.
func (c *Cache[T]) GC() {
done := make(chan struct{})
c.control <- gc{done: done}
<-done
}
// Gets the size of the cache. This is an O(1) call to make, but it is handled
// by the worker goroutine. It's meant to be called periodically for metrics, or
// from tests.
// This is a control command.
func (c *Cache[T]) GetSize() int64 {
res := make(chan int64)
c.control <- getSize{res}
return <-res
}
func (c *Cache[T]) restart() {
c.deletables = make(chan *Item[T], c.deleteBuffer)
c.promotables = make(chan *Item[T], c.promoteBuffer)
c.control = make(chan interface{})
go c.worker()
}
func (c *Cache[T]) deleteItem(bucket *bucket[T], item *Item[T]) {
bucket.delete(item.key) //stop other GETs from getting it
c.deletables <- item
}
func (c *Cache[T]) set(key string, value T, duration time.Duration, track bool) *Item[T] {
item, existing := c.bucket(key).set(key, value, duration, track)
if existing != nil {
@@ -292,49 +207,78 @@ func (c *Cache[T]) bucket(key string) *bucket[T] {
return c.buckets[h.Sum32()&c.bucketMask] return c.buckets[h.Sum32()&c.bucketMask]
} }
func (c *Cache[T]) halted(fn func()) {
c.halt()
defer c.unhalt()
fn()
}
func (c *Cache[T]) halt() {
for _, bucket := range c.buckets {
bucket.Lock()
}
}
func (c *Cache[T]) unhalt() {
for _, bucket := range c.buckets {
bucket.Unlock()
}
}
func (c *Cache[T]) worker() {
-defer close(c.control)
dropped := 0
+cc := c.control
promoteItem := func(item *Item[T]) {
if c.doPromote(item) && c.size > c.maxSize {
dropped += c.gc()
}
}
for {
select {
-case item, ok := <-c.promotables:
-if ok == false {
-goto drain
-}
+case item := <-c.promotables:
promoteItem(item)
case item := <-c.deletables:
c.doDelete(item)
-case control := <-c.control:
+case control := <-cc:
switch msg := control.(type) {
-case getDropped:
+case controlStop:
+goto drain
+case controlGetDropped:
msg.res <- dropped
dropped = 0
-case setMaxSize:
+case controlSetMaxSize:
c.maxSize = msg.size
if c.size > c.maxSize {
dropped += c.gc()
}
msg.done <- struct{}{}
-case clear:
-for _, bucket := range c.buckets {
-bucket.clear()
-}
-c.size = 0
-c.list = NewList[*Item[T]]()
+case controlClear:
+c.halted(func() {
+promotables := c.promotables
+for len(promotables) > 0 {
+<-promotables
+}
+deletables := c.deletables
+for len(deletables) > 0 {
+<-deletables
+}
+for _, bucket := range c.buckets {
+bucket.clear()
+}
+c.size = 0
+c.list = NewList[*Item[T]]()
+})
msg.done <- struct{}{}
-case getSize:
+case controlGetSize:
msg.res <- c.size
-case gc:
+case controlGC:
dropped += c.gc()
msg.done <- struct{}{}
-case syncWorker:
-doAllPendingPromotesAndDeletes(c.promotables, promoteItem,
-c.deletables, c.doDelete)
+case controlSyncUpdates:
+doAllPendingPromotesAndDeletes(c.promotables, promoteItem, c.deletables, c.doDelete)
msg.done <- struct{}{}
}
}
}
@@ -346,7 +290,6 @@ drain:
case item := <-c.deletables:
c.doDelete(item)
default:
-close(c.deletables)
return
}
}
@@ -367,9 +310,7 @@ doAllPromotes:
for {
select {
case item := <-promotables:
-if item != nil {
-promoteFn(item)
-}
+promoteFn(item)
default:
break doAllPromotes
}
@@ -394,6 +335,8 @@ func (c *Cache[T]) doDelete(item *Item[T]) {
c.onDelete(item)
}
c.list.Remove(item.node)
+item.node = nil
+item.promotions = -2
}
}
@@ -430,7 +373,7 @@ func (c *Cache[T]) gc() int {
}
prev := node.Prev
item := node.Value
-if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
+if !c.tracking || atomic.LoadInt32(&item.refCount) == 0 {
c.bucket(item.key).delete(item.key)
c.size -= item.size
c.list.Remove(node)
@@ -438,6 +381,7 @@ func (c *Cache[T]) gc() int {
c.onDelete(item)
}
dropped += 1
+item.node = nil
item.promotions = -2
}
node = prev

cache_test.go

@@ -1,8 +1,10 @@
package ccache
import (
+"math/rand"
"sort"
"strconv"
+"sync"
"sync/atomic"
"testing"
"time"
@@ -10,6 +12,52 @@ import (
"github.com/karlseguin/ccache/v3/assert" "github.com/karlseguin/ccache/v3/assert"
) )
func Test_Setnx(t *testing.T) {
cache := New(Configure[string]())
defer cache.Stop()
assert.Equal(t, cache.ItemCount(), 0)
cache.Set("spice", "flow", time.Minute)
assert.Equal(t, cache.ItemCount(), 1)
// set if exists
cache.Setnx("spice", "worm", time.Minute)
assert.Equal(t, cache.ItemCount(), 1)
assert.Equal(t, cache.Get("spice").Value(), "flow")
// set if not exists
cache.Delete("spice")
cache.Setnx("spice", "worm", time.Minute)
assert.Equal(t, cache.Get("spice").Value(), "worm")
assert.Equal(t, cache.ItemCount(), 1)
}
func Test_Extend(t *testing.T) {
cache := New(Configure[string]())
defer cache.Stop()
assert.Equal(t, cache.ItemCount(), 0)
// non exist
ok := cache.Extend("spice", time.Minute*10)
assert.Equal(t, false, ok)
// exist
cache.Set("spice", "flow", time.Minute)
assert.Equal(t, cache.ItemCount(), 1)
ok = cache.Extend("spice", time.Minute*10) // 10 + 10
assert.Equal(t, true, ok)
item := cache.Get("spice")
less := time.Minute*22 < time.Duration(item.expires)
assert.Equal(t, true, less)
more := time.Minute*18 < time.Duration(item.expires)
assert.Equal(t, true, more)
assert.Equal(t, cache.ItemCount(), 1)
}
func Test_CacheDeletesAValue(t *testing.T) {
cache := New(Configure[string]())
defer cache.Stop()
@@ -76,7 +124,6 @@ func Test_CacheDeletesAFunc(t *testing.T) {
return key == "d" return key == "d"
}), 1) }), 1)
assert.Equal(t, cache.ItemCount(), 2) assert.Equal(t, cache.ItemCount(), 2)
} }
func Test_CacheOnDeleteCallbackCalled(t *testing.T) { func Test_CacheOnDeleteCallbackCalled(t *testing.T) {
@@ -313,6 +360,151 @@ func Test_CacheForEachFunc(t *testing.T) {
assert.DoesNotContain(t, forEachKeys(cache), "stop") assert.DoesNotContain(t, forEachKeys(cache), "stop")
} }
func Test_CachePrune(t *testing.T) {
maxSize := int64(500)
cache := New(Configure[string]().MaxSize(maxSize).ItemsToPrune(50))
epoch := 0
for i := 0; i < 10000; i++ {
epoch += 1
expired := make([]string, 0)
for i := 0; i < 50; i += 1 {
key := strconv.FormatInt(rand.Int63n(maxSize*20), 10)
item := cache.Get(key)
if item == nil || item.TTL() > 1*time.Minute {
expired = append(expired, key)
}
}
for _, key := range expired {
cache.Set(key, key, 5*time.Minute)
}
if epoch%500 == 0 {
assert.True(t, cache.GetSize() <= 500)
}
}
}
func Test_ConcurrentStop(t *testing.T) {
for i := 0; i < 100; i++ {
cache := New(Configure[string]())
r := func() {
for {
key := strconv.Itoa(int(rand.Int31n(100)))
switch rand.Int31n(3) {
case 0:
cache.Get(key)
case 1:
cache.Set(key, key, time.Minute)
case 2:
cache.Delete(key)
}
}
}
go r()
go r()
go r()
time.Sleep(time.Millisecond * 10)
cache.Stop()
}
}
func Test_ConcurrentClearAndSet(t *testing.T) {
for i := 0; i < 1000000; i++ {
var stop atomic.Bool
var wg sync.WaitGroup
cache := New(Configure[string]())
r := func() {
for !stop.Load() {
cache.Set("a", "a", time.Minute)
}
wg.Done()
}
go r()
wg.Add(1)
cache.Clear()
stop.Store(true)
wg.Wait()
cache.SyncUpdates()
// The point of this test is to make sure that the cache's lookup and its
// recency list are in sync. But the two aren't written to atomically:
// the lookup is written to directly from the call to Set, whereas the
// list is maintained by the background worker. This can create a period
// where the two are out of sync. Even SyncUpdate is helpless here, since
// it can only sync what's been written to the buffers.
for i := 0; i < 10; i++ {
expectedCount := 0
if cache.list.Head != nil {
expectedCount = 1
}
actualCount := cache.ItemCount()
if expectedCount == actualCount {
return
}
time.Sleep(time.Millisecond)
}
t.Errorf("cache list and lookup are not consistent")
t.FailNow()
}
}
func BenchmarkFrequentSets(b *testing.B) {
cache := New(Configure[int]())
defer cache.Stop()
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.Itoa(n)
cache.Set(key, n, time.Minute)
}
}
func BenchmarkFrequentGets(b *testing.B) {
cache := New(Configure[int]())
defer cache.Stop()
numKeys := 500
for i := 0; i < numKeys; i++ {
key := strconv.Itoa(i)
cache.Set(key, i, time.Minute)
}
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.FormatInt(rand.Int63n(int64(numKeys)), 10)
cache.Get(key)
}
}
func BenchmarkGetWithPromoteSmall(b *testing.B) {
getsPerPromotes := 5
cache := New(Configure[int]().GetsPerPromote(int32(getsPerPromotes)))
defer cache.Stop()
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.Itoa(n)
cache.Set(key, n, time.Minute)
for i := 0; i < getsPerPromotes; i++ {
cache.Get(key)
}
}
}
func BenchmarkGetWithPromoteLarge(b *testing.B) {
getsPerPromotes := 100
cache := New(Configure[int]().GetsPerPromote(int32(getsPerPromotes)))
defer cache.Stop()
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.Itoa(n)
cache.Set(key, n, time.Minute)
for i := 0; i < getsPerPromotes; i++ {
cache.Get(key)
}
}
}
type SizedItem struct {
id int
s int64

configuration.go

@@ -37,7 +37,7 @@ func (c *Configuration[T]) MaxSize(max int64) *Configuration[T] {
// requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...)
// [16]
func (c *Configuration[T]) Buckets(count uint32) *Configuration[T] {
-if count == 0 || ((count&(^count+1)) == count) == false {
+if count == 0 || !((count & (^count + 1)) == count) {
count = 16
}
c.buckets = int(count)

configuration_test.go

@@ -16,3 +16,8 @@ func Test_Configuration_BucketsPowerOf2(t *testing.T) {
}
}
}
func Test_Configuration_Buffers(t *testing.T) {
assert.Equal(t, Configure[int]().DeleteBuffer(24).deleteBuffer, 24)
assert.Equal(t, Configure[int]().PromoteBuffer(95).promoteBuffer, 95)
}

control.go (new file, 110 lines)

@@ -0,0 +1,110 @@
package ccache
type controlGC struct {
done chan struct{}
}
type controlClear struct {
done chan struct{}
}
type controlStop struct {
}
type controlGetSize struct {
res chan int64
}
type controlGetDropped struct {
res chan int
}
type controlSetMaxSize struct {
size int64
done chan struct{}
}
type controlSyncUpdates struct {
done chan struct{}
}
type control chan interface{}
func newControl() chan interface{} {
return make(chan interface{}, 5)
}
// Forces GC. There should be no reason to call this function, except from tests
// which require synchronous GC.
// This is a control command.
func (c control) GC() {
done := make(chan struct{})
c <- controlGC{done: done}
<-done
}
// Sends a stop signal to the worker thread. The worker thread will shut down
// 5 seconds after the last message is received. The cache should not be used
// after Stop is called, but concurrently executing requests should properly finish
// executing.
// This is a control command.
func (c control) Stop() {
c.SyncUpdates()
c <- controlStop{}
}
// Clears the cache
// This is a control command.
func (c control) Clear() {
done := make(chan struct{})
c <- controlClear{done: done}
<-done
}
// Gets the size of the cache. This is an O(1) call to make, but it is handled
// by the worker goroutine. It's meant to be called periodically for metrics, or
// from tests.
// This is a control command.
func (c control) GetSize() int64 {
res := make(chan int64)
c <- controlGetSize{res: res}
return <-res
}
// Gets the number of items removed from the cache due to memory pressure since
// the last time GetDropped was called
// This is a control command.
func (c control) GetDropped() int {
res := make(chan int)
c <- controlGetDropped{res: res}
return <-res
}
// Sets a new max size. That can result in a GC being run if the new maxium size
// is smaller than the cached size
// This is a control command.
func (c control) SetMaxSize(size int64) {
done := make(chan struct{})
c <- controlSetMaxSize{size: size, done: done}
<-done
}
// SyncUpdates waits until the cache has finished asynchronous state updates for any operations
// that were done by the current goroutine up to now.
//
// For efficiency, the cache's implementation of LRU behavior is partly managed by a worker
// goroutine that updates its internal data structures asynchronously. This means that the
// cache's state in terms of (for instance) eviction of LRU items is only eventually consistent;
// there is no guarantee that it happens before a Get or Set call has returned. Most of the time
// application code will not care about this, but especially in a test scenario you may want to
// be able to know when the worker has caught up.
//
// This applies only to cache methods that were previously called by the same goroutine that is
// now calling SyncUpdates. If other goroutines are using the cache at the same time, there is
// no way to know whether any of them still have pending state updates when SyncUpdates returns.
// This is a control command.
func (c control) SyncUpdates() {
done := make(chan struct{})
c <- controlSyncUpdates{done: done}
<-done
}

go.mod (2 lines changed)

@@ -1,3 +1,3 @@
module github.com/karlseguin/ccache/v3
-go 1.18
+go 1.19

item.go

@@ -55,6 +55,10 @@ func (i *Item[T]) shouldPromote(getsPerPromote int32) bool {
return i.promotions == getsPerPromote
}
func (i *Item[T]) Key() string {
return i.key
}
func (i *Item[T]) Value() T {
return i.value
}

item_test.go

@@ -8,6 +8,11 @@ import (
"github.com/karlseguin/ccache/v3/assert" "github.com/karlseguin/ccache/v3/assert"
) )
func Test_Item_Key(t *testing.T) {
item := &Item[int]{key: "foo"}
assert.Equal(t, item.Key(), "foo")
}
func Test_Item_Promotability(t *testing.T) {
item := &Item[int]{promotions: 4}
assert.Equal(t, item.shouldPromote(5), true)

layeredbucket.go

@@ -32,7 +32,7 @@ func (b *layeredBucket[T]) getSecondaryBucket(primary string) *bucket[T] {
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
-if exists == false {
+if !exists {
return nil
}
return bucket
@@ -41,7 +41,7 @@ func (b *layeredBucket[T]) getSecondaryBucket(primary string) *bucket[T] {
func (b *layeredBucket[T]) set(primary, secondary string, value T, duration time.Duration, track bool) (*Item[T], *Item[T]) {
b.Lock()
bkt, exists := b.buckets[primary]
-if exists == false {
+if !exists {
bkt = &bucket[T]{lookup: make(map[string]*Item[T])}
b.buckets[primary] = bkt
}
@@ -55,7 +55,7 @@ func (b *layeredBucket[T]) delete(primary, secondary string) *Item[T] {
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
-if exists == false {
+if !exists {
return nil
}
return bucket.delete(secondary)
@@ -65,7 +65,7 @@ func (b *layeredBucket[T]) deletePrefix(primary, prefix string, deletables chan
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
-if exists == false {
+if !exists {
return 0
}
return bucket.deletePrefix(prefix, deletables)
@@ -75,7 +75,7 @@ func (b *layeredBucket[T]) deleteFunc(primary string, matches func(key string, i
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
-if exists == false {
+if !exists {
return 0
}
return bucket.deleteFunc(matches, deletables)
@@ -85,7 +85,7 @@ func (b *layeredBucket[T]) deleteAll(primary string, deletables chan *Item[T]) b
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
-if exists == false {
+if !exists {
return false
}
@@ -111,9 +111,8 @@ func (b *layeredBucket[T]) forEachFunc(primary string, matches func(key string,
}
}
+// we expect the caller to have acquired a write lock
func (b *layeredBucket[T]) clear() {
-b.Lock()
-defer b.Unlock()
for _, bucket := range b.buckets {
bucket.clear()
}

layeredcache.go

@@ -9,13 +9,13 @@ import (
type LayeredCache[T any] struct {
*Configuration[T]
+control
list *List[*Item[T]]
buckets []*layeredBucket[T]
bucketMask uint32
size int64
deletables chan *Item[T]
promotables chan *Item[T]
-control chan interface{}
}
// Create a new layered cache with the specified configuration.
@@ -35,17 +35,18 @@ func Layered[T any](config *Configuration[T]) *LayeredCache[T] {
c := &LayeredCache[T]{ c := &LayeredCache[T]{
list: NewList[*Item[T]](), list: NewList[*Item[T]](),
Configuration: config, Configuration: config,
control: newControl(),
bucketMask: uint32(config.buckets) - 1, bucketMask: uint32(config.buckets) - 1,
buckets: make([]*layeredBucket[T], config.buckets), buckets: make([]*layeredBucket[T], config.buckets),
deletables: make(chan *Item[T], config.deleteBuffer), deletables: make(chan *Item[T], config.deleteBuffer),
control: make(chan interface{}), promotables: make(chan *Item[T], config.promoteBuffer),
} }
for i := 0; i < int(config.buckets); i++ { for i := 0; i < config.buckets; i++ {
c.buckets[i] = &layeredBucket[T]{ c.buckets[i] = &layeredBucket[T]{
buckets: make(map[string]*bucket[T]), buckets: make(map[string]*bucket[T]),
} }
} }
c.restart() go c.worker()
return c return c
} }
@@ -180,63 +181,6 @@ func (c *LayeredCache[T]) DeleteFunc(primary string, matches func(key string, it
return c.bucket(primary).deleteFunc(primary, matches, c.deletables)
}
// Clears the cache
func (c *LayeredCache[T]) Clear() {
done := make(chan struct{})
c.control <- clear{done: done}
<-done
}
func (c *LayeredCache[T]) Stop() {
close(c.promotables)
<-c.control
}
// Gets the number of items removed from the cache due to memory pressure since
// the last time GetDropped was called
func (c *LayeredCache[T]) GetDropped() int {
return doGetDropped(c.control)
}
// SyncUpdates waits until the cache has finished asynchronous state updates for any operations
// that were done by the current goroutine up to now. See Cache.SyncUpdates for details.
func (c *LayeredCache[T]) SyncUpdates() {
doSyncUpdates(c.control)
}
// Sets a new max size. That can result in a GC being run if the new maxium size
// is smaller than the cached size
func (c *LayeredCache[T]) SetMaxSize(size int64) {
done := make(chan struct{})
c.control <- setMaxSize{size: size, done: done}
<-done
}
// Forces GC. There should be no reason to call this function, except from tests
// which require synchronous GC.
// This is a control command.
func (c *LayeredCache[T]) GC() {
done := make(chan struct{})
c.control <- gc{done: done}
<-done
}
// Gets the size of the cache. This is an O(1) call to make, but it is handled
// by the worker goroutine. It's meant to be called periodically for metrics, or
// from tests.
// This is a control command.
func (c *LayeredCache[T]) GetSize() int64 {
res := make(chan int64)
c.control <- getSize{res}
return <-res
}
func (c *LayeredCache[T]) restart() {
c.promotables = make(chan *Item[T], c.promoteBuffer)
c.control = make(chan interface{})
go c.worker()
}
func (c *LayeredCache[T]) set(primary, secondary string, value T, duration time.Duration, track bool) *Item[T] {
item, existing := c.bucket(primary).set(primary, secondary, value, duration, track)
if existing != nil {
@@ -252,79 +196,121 @@ func (c *LayeredCache[T]) bucket(key string) *layeredBucket[T] {
return c.buckets[h.Sum32()&c.bucketMask] return c.buckets[h.Sum32()&c.bucketMask]
} }
func (c *LayeredCache[T]) halted(fn func()) {
c.halt()
defer c.unhalt()
fn()
}
func (c *LayeredCache[T]) halt() {
for _, bucket := range c.buckets {
bucket.Lock()
}
}
func (c *LayeredCache[T]) unhalt() {
for _, bucket := range c.buckets {
bucket.Unlock()
}
}
func (c *LayeredCache[T]) promote(item *Item[T]) {
c.promotables <- item
}
func (c *LayeredCache[T]) worker() {
-defer close(c.control)
dropped := 0
+cc := c.control
promoteItem := func(item *Item[T]) {
if c.doPromote(item) && c.size > c.maxSize {
dropped += c.gc()
}
}
-deleteItem := func(item *Item[T]) {
-if item.node == nil {
-atomic.StoreInt32(&item.promotions, -2)
-} else {
-c.size -= item.size
-if c.onDelete != nil {
-c.onDelete(item)
-}
-c.list.Remove(item.node)
-}
-}
for {
select {
-case item, ok := <-c.promotables:
-if ok == false {
-return
-}
+case item := <-c.promotables:
promoteItem(item)
case item := <-c.deletables:
-deleteItem(item)
+c.doDelete(item)
-case control := <-c.control:
+case control := <-cc:
switch msg := control.(type) {
-case getDropped:
+case controlStop:
+goto drain
+case controlGetDropped:
msg.res <- dropped
dropped = 0
-case setMaxSize:
+case controlSetMaxSize:
c.maxSize = msg.size
if c.size > c.maxSize {
dropped += c.gc()
}
msg.done <- struct{}{}
-case clear:
-for _, bucket := range c.buckets {
-bucket.clear()
-}
-c.size = 0
-c.list = NewList[*Item[T]]()
+case controlClear:
+promotables := c.promotables
+for len(promotables) > 0 {
+<-promotables
+}
+deletables := c.deletables
+for len(deletables) > 0 {
+<-deletables
+}
+c.halted(func() {
+for _, bucket := range c.buckets {
+bucket.clear()
+}
+c.size = 0
+c.list = NewList[*Item[T]]()
+})
msg.done <- struct{}{}
-case getSize:
+case controlGetSize:
msg.res <- c.size
-case gc:
+case controlGC:
dropped += c.gc()
msg.done <- struct{}{}
-case syncWorker:
-doAllPendingPromotesAndDeletes(c.promotables, promoteItem,
-c.deletables, deleteItem)
+case controlSyncUpdates:
+doAllPendingPromotesAndDeletes(c.promotables, promoteItem, c.deletables, c.doDelete)
msg.done <- struct{}{}
}
}
}
}
drain:
for {
select {
case item := <-c.deletables:
c.doDelete(item)
default:
return
}
}
}
func (c *LayeredCache[T]) doDelete(item *Item[T]) {
if item.node == nil {
item.promotions = -2
} else {
c.size -= item.size
if c.onDelete != nil {
c.onDelete(item)
}
c.list.Remove(item.node)
item.node = nil
item.promotions = -2
}
}
func (c *LayeredCache[T]) doPromote(item *Item[T]) bool {
// deleted before it ever got promoted
-if atomic.LoadInt32(&item.promotions) == -2 {
+if item.promotions == -2 {
return false
}
if item.node != nil { //not a new item
if item.shouldPromote(c.getsPerPromote) {
c.list.MoveToFront(item.node)
-atomic.StoreInt32(&item.promotions, 0)
+item.promotions = 0
}
return false
}
@@ -348,13 +334,14 @@ func (c *LayeredCache[T]) gc() int {
}
prev := node.Prev
item := node.Value
-if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
+if !c.tracking || atomic.LoadInt32(&item.refCount) == 0 {
c.bucket(item.group).delete(item.group, item.key)
c.size -= item.size
c.list.Remove(node)
if c.onDelete != nil {
c.onDelete(item)
}
+item.node = nil
item.promotions = -2
dropped += 1
}

layeredcache_test.go

@@ -1,6 +1,7 @@
package ccache
import (
+"math/rand"
"sort"
"strconv"
"sync/atomic"
@@ -117,7 +118,6 @@ func Test_LayedCache_DeletesAFunc(t *testing.T) {
return key == "d" return key == "d"
}), 1) }), 1)
assert.Equal(t, cache.ItemCount(), 3) assert.Equal(t, cache.ItemCount(), 3)
} }
func Test_LayedCache_OnDeleteCallbackCalled(t *testing.T) { func Test_LayedCache_OnDeleteCallbackCalled(t *testing.T) {
@@ -372,6 +372,52 @@ func Test_LayeredCache_EachFunc(t *testing.T) {
assert.DoesNotContain(t, forEachKeysLayered[int](cache, "1"), "stop") assert.DoesNotContain(t, forEachKeysLayered[int](cache, "1"), "stop")
} }
func Test_LayeredCachePrune(t *testing.T) {
maxSize := int64(500)
cache := Layered(Configure[string]().MaxSize(maxSize).ItemsToPrune(50))
epoch := 0
for i := 0; i < 10000; i++ {
epoch += 1
expired := make([]string, 0)
for i := 0; i < 50; i += 1 {
key := strconv.FormatInt(rand.Int63n(maxSize*20), 10)
item := cache.Get(key, key)
if item == nil || item.TTL() > 1*time.Minute {
expired = append(expired, key)
}
}
for _, key := range expired {
cache.Set(key, key, key, 5*time.Minute)
}
if epoch%500 == 0 {
assert.True(t, cache.GetSize() <= 500)
}
}
}
func Test_LayeredConcurrentStop(t *testing.T) {
for i := 0; i < 100; i++ {
cache := Layered(Configure[string]())
r := func() {
for {
key := strconv.Itoa(int(rand.Int31n(100)))
switch rand.Int31n(3) {
case 0:
cache.Get(key, key)
case 1:
cache.Set(key, key, key, time.Minute)
case 2:
cache.Delete(key, key)
}
}
}
go r()
go r()
go r()
time.Sleep(time.Millisecond * 10)
cache.Stop()
}
}
func newLayered[T any]() *LayeredCache[T] {
c := Layered[T](Configure[T]())
c.Clear()

list_test.go

@@ -85,11 +85,3 @@ func assertList(t *testing.T, list *List[int], expected ...int) {
node = node.Prev
}
}
func listFromInts(ints ...int) *List[int] {
l := NewList[int]()
for i := len(ints) - 1; i >= 0; i-- {
l.Insert(ints[i])
}
return l
}

readme.md

@@ -1,9 +1,5 @@
# CCache
Generic version is on the way:
https://github.com/karlseguin/ccache/tree/generic
CCache is an LRU Cache, written in Go, focused on supporting high concurrency.
Lock contention on the list is reduced by:
@@ -21,7 +17,7 @@ Import and create a `Cache` instance:
```go
import (
-github.com/karlseguin/ccache/v3
+"github.com/karlseguin/ccache/v3"
)
// create a cache with string values
@@ -111,8 +107,19 @@ cache.Delete("user:4")
`Clear` clears the cache. If the cache's gc is running, `Clear` waits for it to finish.
### Extend
The life of an item can be changed via the `Extend` method. This will change the expiry of the item by the specified duration relative to the current time.
```go
cache.Extend("user:4", time.Minute * 10)
// or
item := cache.Get("user:4")
if item != nil {
item.Extend(time.Minute * 10)
}
```
### Replace
The value of an item can be updated to a new value without renewing the item's TTL or it's position in the LRU:
@@ -122,6 +129,14 @@ cache.Replace("user:4", user)
`Replace` returns true if the item existed (and thus was replaced). In the case where the key was not in the cache, the value *is not* inserted and false is returned.
### Setnx
`Setnx` sets the value only if the key does not already exist. It first checks for the key and, if the key is missing, stores the value; the check-and-set is atomic.
```go
cache.Setnx("user:4", user, time.Minute * 10)
```
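The diff above also adds a `Setnx2` variant that takes a value-producing function, so the value is only built when the key is actually missing. A small usage sketch based on the signatures in cache.go, not part of the original readme; `buildSession` is a hypothetical loader returning the cache's value type:

```go
// Setnx2 invokes the callback only when the key is not already cached
item := cache.Setnx2("session:9", func() string {
	return buildSession() // hypothetical; must return the cache's value type
}, time.Minute * 10)
fmt.Println(item.Value())
```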
### GetDropped
You can get the number of keys evicted due to memory pressure by calling `GetDropped`:
@@ -198,4 +213,4 @@ By default, items added to a cache have a size of 1. This means that if you conf
However, if the values you set into the cache have a method `Size() int64`, this size will be used. Note that ccache has an overhead of ~350 bytes per entry, which isn't taken into account. In other words, given a filled up cache, with `MaxSize(4096000)` and items that return a `Size() int64` of 2048, we can expect to find 2000 items (4096000/2048) taking a total space of 4796000 bytes.
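For instance, a value type can report its own cost like this; a sketch only, where the `Entry` type and the surrounding setup are illustrative and only the `Size() int64` method comes from the paragraph above:

```go
type Entry struct {
	payload []byte
}

// ccache uses this to weigh the item against MaxSize
func (e *Entry) Size() int64 {
	return int64(len(e.payload))
}

cache := ccache.New(ccache.Configure[*Entry]().MaxSize(4096000))
// counts as 2048 against the configured max, rather than 1
cache.Set("doc:1", &Entry{payload: make([]byte, 2048)}, time.Minute)
```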
## Want Something Simpler?
For a simpler cache, check out [rcache](https://github.com/karlseguin/rcache).