26 Commits

Author SHA1 Message Date
Karl Seguin
f9779b45fc use delete instead of remove where possible 2024-11-12 10:22:32 +08:00
Karl Seguin
61f506609d change to an intrinsic linked list for less memory usage 2024-11-12 09:52:42 +08:00
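Context for the change above: the item.go and list.go diffs below drop the separate `Node[T]` wrapper and store the `next`/`prev` links directly on `Item[T]`, saving one allocation and one pointer indirection per cached entry. A minimal before/after sketch using simplified stand-in types, not the actual structs:

```go
package main

// Before: the list allocates a separate Node wrapper for every cached entry.
type Node[T any] struct {
	Next, Prev *Node[T]
	Value      T
}

// After: the links live directly on the item (an intrusive list), so no
// wrapper allocation is needed and next/prev can be followed without going
// through a Node.
type Item[T any] struct {
	next, prev *Item[T]
	value      T
}

func main() {
	_ = &Node[int]{Value: 1}
	_ = &Item[int]{value: 1}
}
```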
Karl Seguin
0901f94888 Merge pull request #93 from chenyijun266846/perf_setnx
perf: add setnx2
2024-10-01 10:38:43 +08:00
chenyijun.266846
a47156de7d feat: add existing for setnx2 2024-09-29 15:07:02 +08:00
chenyijun.266846
c0806d27fe fix(setnx2): promotables 2024-09-29 14:46:58 +08:00
chenyijun.266846
d59160ba1c perf: add setnx2 2024-09-29 11:18:20 +08:00
Karl Seguin
e9a80ae138 Merge pull request #91 from miparnisari/bench-cpus
ensure benchmarks are compared if the CPUs are the same
2023-11-27 07:58:24 +08:00
Maria Ines Parnisari
964f899bf4 ensure benchmarks are compared if the CPUs are the same 2023-11-23 11:29:35 -08:00
Karl Seguin
3aa6a053b7 Make test more robust
Attempt to fix: https://github.com/karlseguin/ccache/issues/90
2023-11-23 09:48:42 +08:00
Maria Ines Parnisari
6d135b03a9 feat: add benchmarks 2023-11-23 09:12:34 +08:00
Karl Seguin
bf904fff3c fix make c on macos 2023-11-21 11:42:34 +08:00
Maria Ines Parnisari
de3e573a65 run tests and linter on every PR 2023-11-20 13:44:20 -08:00
Karl Seguin
7b5dfadcde Merge pull request #87 from craigpastro/patch-1
Update readme.md
2023-10-24 08:34:04 +08:00
Craig Pastro
2ad5f8fe86 Fix typo 2023-10-23 10:23:25 -07:00
Craig Pastro
c56472f9b5 Update readme.md
Generic version has landed and that branch no longer exists.
2023-10-23 10:20:28 -07:00
Karl Seguin
378b8b039e Merge pull request #86 from rfyiamcool/feat/public_extend
feat: public extend
2023-10-23 10:51:45 +08:00
rfyiamcool
1594fc55bc feat: public extend
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-22 22:19:27 +08:00
rfyiamcool
2977b36b74 feat: public extend
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-22 22:14:28 +08:00
Karl Seguin
62cd8cc8c3 Merge pull request #85 from rfyiamcool/feat/add_setnx
feat: add setnx (if not exists, set kv)
2023-10-22 20:19:26 +08:00
rfyiamcool
b26c342793 feat: add setnx (if not exists, set kv)
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-22 19:23:23 +08:00
rfyiamcool
dd0671989b feat: add setnx (if not exists, set kv)
Signed-off-by: rfyiamcool <rfyiamcool@163.com>
2023-10-20 10:36:04 +08:00
Karl Seguin
0f8575167d Merge pull request #84 from idsulik/added-key-method-to-item
Added Key() method to Item
2023-10-20 06:51:10 +08:00
Suleiman Dibirov
fd8f81fe86 Added Key() method to Item 2023-10-19 12:16:13 +03:00
Karl Seguin
a25552af28 Attempt to make Clear concurrency-safe
This is an attempt at fixing #81 without imposing a performance hit on the
cache's "normal" (get/set/fetch) activity. Calling "Clear" is now considerably
more expensive.
2023-04-14 15:27:39 +08:00
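Context for the approach: the worker's Clear handling in the diffs below write-locks every bucket ("halts" the cache) and drains the pending promote/delete buffers before resetting state, which is why Clear became more expensive while normal get/set/fetch activity is untouched. A standalone sketch of that pattern with simplified stand-in types, not the actual ccache structs:

```go
package main

import (
	"fmt"
	"sync"
)

type bucket struct {
	sync.RWMutex
	lookup map[string]int
}

type cache struct {
	buckets    []*bucket
	deletables chan string
}

// halted runs fn while holding the write lock of every bucket, so no
// concurrent Set/Get can observe a half-cleared cache.
func (c *cache) halted(fn func()) {
	for _, b := range c.buckets {
		b.Lock()
	}
	defer func() {
		for _, b := range c.buckets {
			b.Unlock()
		}
	}()
	fn()
}

// clear drains anything already queued for the background worker, then
// resets every bucket while the cache is halted.
func (c *cache) clear() {
	c.halted(func() {
		for len(c.deletables) > 0 {
			<-c.deletables
		}
		for _, b := range c.buckets {
			b.lookup = make(map[string]int)
		}
	})
}

func main() {
	c := &cache{
		buckets:    []*bucket{{lookup: map[string]int{"a": 1}}},
		deletables: make(chan string, 8),
	}
	c.deletables <- "a"
	c.clear()
	fmt.Println(len(c.buckets[0].lookup), len(c.deletables)) // 0 0
}
```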
Karl Seguin
35052434f3 Merge pull request #78 from karlseguin/control_stop
Refactor control messages + Stop handling
2023-01-07 12:12:27 +08:00
Karl Seguin
22776be1ee Refactor control messages + Stop handling
Move the control API shared between Cache and LayeredCache into its own struct.
But keep the control logic handling separate - it requires access to the local
values, like dropped and deleteItem.

Stop is now a control message. Channels are no longer closed as part of the stop
process.
2023-01-04 10:40:19 +08:00
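Context for the pattern this commit introduces: each control command is a small struct carrying its reply channel, sent over a single channel to the worker goroutine that owns the cache's state (see control.go in the diff below). A minimal standalone sketch with a reduced message set, not the full API:

```go
package main

import "fmt"

// Each command is a struct with a reply channel; the worker owns the state
// and answers requests one at a time.
type controlGetSize struct{ res chan int64 }
type controlStop struct{}

func worker(control chan interface{}, size int64) {
	for msg := range control {
		switch m := msg.(type) {
		case controlGetSize:
			m.res <- size
		case controlStop:
			return
		}
	}
}

func main() {
	control := make(chan interface{}, 5)
	go worker(control, 42)

	res := make(chan int64)
	control <- controlGetSize{res: res}
	fmt.Println(<-res) // 42

	control <- controlStop{} // worker exits; the channel is not closed
}
```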
21 changed files with 875 additions and 376 deletions

.github/workflows/master.yaml (new file, 50 lines)

@@ -0,0 +1,50 @@
name: Master
on:
  push:
    branches:
      - master

permissions:
  contents: read

jobs:
  bench:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
      - name: Run benchmark and store the output to a file
        run: |
          set -o pipefail
          make bench | tee bench_output.txt
      - name: Get benchmark as JSON
        uses: benchmark-action/github-action-benchmark@v1
        with:
          # What benchmark tool the output.txt came from
          tool: 'go'
          # Where the output from the benchmark tool is stored
          output-file-path: bench_output.txt
          # Write benchmarks to this file
          external-data-json-path: ./cache/benchmark-data.json
          # Workflow will fail when an alert happens
          fail-on-alert: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
          comment-on-alert: true
      - name: Get CPU information
        uses: kenchan0130/actions-system-info@master
        id: system-info
      - name: Save benchmark JSON to cache
        uses: actions/cache/save@v3
        with:
          path: ./cache/benchmark-data.json
          # Save with commit hash to avoid "cache already exists"
          # Save with OS & CPU info to prevent comparing against results from different CPUs
          key: ${{ github.sha }}-${{ runner.os }}-${{ steps.system-info.outputs.cpu-model }}-go-benchmark

.github/workflows/pull_request.yaml (new file, 111 lines)

@@ -0,0 +1,111 @@
name: Pull Request
on:
  merge_group:
  pull_request:
    branches:
      - master

permissions:
  contents: read

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          version: latest

  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
      - name: Unit Tests
        run: make t

  bench:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # to be able to retrieve the last commit in master branch
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
          cache-dependency-path: './go.sum'
          check-latest: true
      - name: Run benchmark and store the output to a file
        run: |
          set -o pipefail
          make bench | tee ${{ github.sha }}_bench_output.txt
      - name: Get CPU information
        uses: kenchan0130/actions-system-info@master
        id: system-info
      - name: Get Master branch SHA
        id: get-master-branch-sha
        run: |
          SHA=$(git rev-parse origin/master)
          echo "sha=$SHA" >> $GITHUB_OUTPUT
      - name: Try to get benchmark JSON from master branch
        uses: actions/cache/restore@v3
        id: cache
        with:
          path: ./cache/benchmark-data.json
          key: ${{ steps.get-master-branch-sha.outputs.sha }}-${{ runner.os }}-${{ steps.system-info.outputs.cpu-model }}-go-benchmark
      - name: Compare benchmarks with master
        uses: benchmark-action/github-action-benchmark@v1
        if: steps.cache.outputs.cache-hit == 'true'
        with:
          # What benchmark tool the output.txt came from
          tool: 'go'
          # Where the output from the benchmark tool is stored
          output-file-path: ${{ github.sha }}_bench_output.txt
          # Where the benchmarks in master are (to compare)
          external-data-json-path: ./cache/benchmark-data.json
          # Do not save the data
          save-data-file: false
          # Workflow will fail when an alert happens
          fail-on-alert: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Enable Job Summary for PRs
          summary-always: true
      - name: Run benchmarks
        uses: benchmark-action/github-action-benchmark@v1
        if: steps.cache.outputs.cache-hit != 'true'
        with:
          # What benchmark tool the output.txt came from
          tool: 'go'
          # Where the output from the benchmark tool is stored
          output-file-path: ${{ github.sha }}_bench_output.txt
          # Write benchmarks to this file, do not publish to Github Pages
          save-data-file: false
          external-data-json-path: ./cache/benchmark-data.json
          # Workflow will fail when an alert happens
          fail-on-alert: true
          # Enable alert commit comment
          github-token: ${{ secrets.GITHUB_TOKEN }}
          comment-on-alert: true
          # Enable Job Summary for PRs
          summary-always: true

.gitignore (2 lines changed)

@@ -1 +1,3 @@
vendor/
.idea/
*.out

.golangci.yaml (new file, 35 lines)

@@ -0,0 +1,35 @@
run:
  timeout: 3m
  modules-download-mode: readonly

linters:
  enable:
    - errname
    - gofmt
    - goimports
    - stylecheck
    - importas
    - errcheck
    - gosimple
    - govet
    - ineffassign
    - mirror
    - staticcheck
    - tagalign
    - testifylint
    - typecheck
    - unused
    - unconvert
    - unparam
    - wastedassign
    - whitespace
    - exhaustive
    - noctx
    - promlinter

linters-settings:
  govet:
    enable-all: true
    disable:
      - shadow
      - fieldalignment

Makefile

@@ -1,18 +1,25 @@
.DEFAULT_GOAL := help
.PHONY: help
help:
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
.PHONY: bench
bench: ## Run benchmarks
go test ./... -bench . -benchtime 5s -timeout 0 -run=XXX -benchmem
.PHONY: l
l: ## Lint Go source files
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest && golangci-lint run
.PHONY: t
t:
t: ## Run unit tests
go test -race -count=1 ./...
.PHONY: f
f:
f: ## Format code
go fmt ./...
.PHONY: c
c:
go test -race -covermode=atomic ./... -coverprofile=cover.out && \
# go tool cover -html=cover.out && \
go tool cover -func cover.out \
| grep -vP '[89]\d\.\d%' | grep -v '100.0%' \
|| true
rm cover.out
c: ## Measure code coverage
go test -race -covermode=atomic ./... -coverprofile=cover.out

bucket.go

@@ -35,6 +35,54 @@ func (b *bucket[T]) get(key string) *Item[T] {
return b.lookup[key]
}
func (b *bucket[T]) setnx(key string, value T, duration time.Duration, track bool) *Item[T] {
b.RLock()
item := b.lookup[key]
b.RUnlock()
if item != nil {
return item
}
expires := time.Now().Add(duration).UnixNano()
newItem := newItem(key, value, expires, track)
b.Lock()
defer b.Unlock()
// check again under write lock
item = b.lookup[key]
if item != nil {
return item
}
b.lookup[key] = newItem
return newItem
}
func (b *bucket[T]) setnx2(key string, f func() T, duration time.Duration, track bool) (*Item[T], bool) {
b.RLock()
item := b.lookup[key]
b.RUnlock()
if item != nil {
return item, true
}
b.Lock()
defer b.Unlock()
// check again under write lock
item = b.lookup[key]
if item != nil {
return item, true
}
expires := time.Now().Add(duration).UnixNano()
newItem := newItem(key, f(), expires, track)
b.lookup[key] = newItem
return newItem, false
}
func (b *bucket[T]) set(key string, value T, duration time.Duration, track bool) (*Item[T], *Item[T]) {
expires := time.Now().Add(duration).UnixNano()
item := newItem(key, value, expires, track)
@@ -45,7 +93,7 @@ func (b *bucket[T]) set(key string, value T, duration time.Duration, track bool)
return item, existing
}
func (b *bucket[T]) delete(key string) *Item[T] {
func (b *bucket[T]) remove(key string) *Item[T] {
b.Lock()
item := b.lookup[key]
delete(b.lookup, key)
@@ -53,6 +101,12 @@ func (b *bucket[T]) delete(key string) *Item[T] {
return item
}
func (b *bucket[T]) delete(key string) {
b.Lock()
delete(b.lookup, key)
b.Unlock()
}
// This is an expensive operation, so we do what we can to optimize it and limit
// the impact it has on concurrent operations. Specifically, we:
// 1 - Do an initial iteration to collect matches. This allows us to do the
@@ -98,8 +152,10 @@ func (b *bucket[T]) deletePrefix(prefix string, deletables chan *Item[T]) int {
}, deletables)
}
// we expect the caller to have acquired a write lock
func (b *bucket[T]) clear() {
b.Lock()
for _, item := range b.lookup {
item.promotions = -2
}
b.lookup = make(map[string]*Item[T])
b.Unlock()
}
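The setnx/setnx2 additions above use a read-locked fast path followed by a re-check under the write lock (double-checked locking), so the common "key already present" case never takes the exclusive lock. A standalone sketch of that pattern with a simplified bucket type, not the actual ccache code:

```go
package main

import (
	"fmt"
	"sync"
)

type bucket struct {
	sync.RWMutex
	lookup map[string]string
}

// setnx stores value only if key is absent. The read lock covers the common
// "already present" case cheaply; the lookup is re-checked under the write
// lock because another goroutine may have inserted the key in between.
func (b *bucket) setnx(key, value string) string {
	b.RLock()
	existing, ok := b.lookup[key]
	b.RUnlock()
	if ok {
		return existing
	}

	b.Lock()
	defer b.Unlock()
	if existing, ok := b.lookup[key]; ok {
		return existing
	}
	b.lookup[key] = value
	return value
}

func main() {
	b := &bucket{lookup: map[string]string{}}
	fmt.Println(b.setnx("spice", "flow")) // flow
	fmt.Println(b.setnx("spice", "worm")) // flow (already set)
}
```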

cache.go (265 lines changed)

@@ -7,60 +7,35 @@ import (
"time"
)
// The cache has a generic 'control' channel that is used to send
// messages to the worker. These are the messages that can be sent to it
type getDropped struct {
res chan int
}
type getSize struct {
res chan int64
}
type setMaxSize struct {
size int64
done chan struct{}
}
type clear struct {
done chan struct{}
}
type syncWorker struct {
done chan struct{}
}
type gc struct {
done chan struct{}
}
type Cache[T any] struct {
*Configuration[T]
list *List[*Item[T]]
control
list *List[T]
size int64
buckets []*bucket[T]
bucketMask uint32
deletables chan *Item[T]
promotables chan *Item[T]
control chan interface{}
}
// Create a new cache with the specified configuration
// See ccache.Configure() for creating a configuration
func New[T any](config *Configuration[T]) *Cache[T] {
c := &Cache[T]{
list: NewList[*Item[T]](),
list: NewList[T](),
Configuration: config,
control: newControl(),
bucketMask: uint32(config.buckets) - 1,
buckets: make([]*bucket[T], config.buckets),
control: make(chan interface{}),
deletables: make(chan *Item[T], config.deleteBuffer),
promotables: make(chan *Item[T], config.promoteBuffer),
}
for i := 0; i < config.buckets; i++ {
c.buckets[i] = &bucket[T]{
lookup: make(map[string]*Item[T]),
}
}
c.restart()
go c.worker()
return c
}
@@ -144,6 +119,27 @@ func (c *Cache[T]) Set(key string, value T, duration time.Duration) {
c.set(key, value, duration, false)
}
// Setnx sets the value in the cache for the specified duration if the key does not already exist
func (c *Cache[T]) Setnx(key string, value T, duration time.Duration) {
c.bucket(key).setnx(key, value, duration, false)
}
// Setnx2 sets the value produced by f in the cache for the specified duration if the key does not already exist
func (c *Cache[T]) Setnx2(key string, f func() T, duration time.Duration) *Item[T] {
item, existing := c.bucket(key).setnx2(key, f, duration, false)
// consistent with Get
if existing && !item.Expired() {
select {
case c.promotables <- item:
default:
}
// consistent with set
} else if !existing {
c.promotables <- item
}
return item
}
// Replace the value if it exists; does not set it if it doesn't.
// Returns true if the item existed and was replaced, false otherwise.
// Replace does not reset the item's TTL
@@ -156,6 +152,18 @@ func (c *Cache[T]) Replace(key string, value T) bool {
return true
}
// Extend updates the expiry of the item if it exists; it does not set a value if the key doesn't exist.
// Returns true if the item's expiry was extended, false otherwise.
func (c *Cache[T]) Extend(key string, duration time.Duration) bool {
item := c.bucket(key).get(key)
if item == nil {
return false
}
item.Extend(duration)
return true
}
// Attempts to get the value from the cache and calls fetch on a miss (missing
// or stale item). If fetch returns an error, no value is cached and the error
// is returned back to the caller.
@@ -176,7 +184,7 @@ func (c *Cache[T]) Fetch(key string, duration time.Duration, fetch func() (T, er
// Remove the item from the cache, return true if the item was present, false otherwise.
func (c *Cache[T]) Delete(key string) bool {
item := c.bucket(key).delete(key)
item := c.bucket(key).remove(key)
if item != nil {
c.deletables <- item
return true
@@ -184,99 +192,6 @@ func (c *Cache[T]) Delete(key string) bool {
return false
}
// Clears the cache
// This is a control command.
func (c *Cache[T]) Clear() {
done := make(chan struct{})
c.control <- clear{done: done}
<-done
}
// Stops the background worker. Operations performed on the cache after Stop
// is called are likely to panic
// This is a control command.
func (c *Cache[T]) Stop() {
close(c.promotables)
<-c.control
}
// Gets the number of items removed from the cache due to memory pressure since
// the last time GetDropped was called
// This is a control command.
func (c *Cache[T]) GetDropped() int {
return doGetDropped(c.control)
}
func doGetDropped(controlCh chan<- interface{}) int {
res := make(chan int)
controlCh <- getDropped{res: res}
return <-res
}
// SyncUpdates waits until the cache has finished asynchronous state updates for any operations
// that were done by the current goroutine up to now.
//
// For efficiency, the cache's implementation of LRU behavior is partly managed by a worker
// goroutine that updates its internal data structures asynchronously. This means that the
// cache's state in terms of (for instance) eviction of LRU items is only eventually consistent;
// there is no guarantee that it happens before a Get or Set call has returned. Most of the time
// application code will not care about this, but especially in a test scenario you may want to
// be able to know when the worker has caught up.
//
// This applies only to cache methods that were previously called by the same goroutine that is
// now calling SyncUpdates. If other goroutines are using the cache at the same time, there is
// no way to know whether any of them still have pending state updates when SyncUpdates returns.
// This is a control command.
func (c *Cache[T]) SyncUpdates() {
doSyncUpdates(c.control)
}
func doSyncUpdates(controlCh chan<- interface{}) {
done := make(chan struct{})
controlCh <- syncWorker{done: done}
<-done
}
// Sets a new max size. That can result in a GC being run if the new maximum size
// is smaller than the cached size
// This is a control command.
func (c *Cache[T]) SetMaxSize(size int64) {
done := make(chan struct{})
c.control <- setMaxSize{size: size, done: done}
<-done
}
// Forces GC. There should be no reason to call this function, except from tests
// which require synchronous GC.
// This is a control command.
func (c *Cache[T]) GC() {
done := make(chan struct{})
c.control <- gc{done: done}
<-done
}
// Gets the size of the cache. This is an O(1) call to make, but it is handled
// by the worker goroutine. It's meant to be called periodically for metrics, or
// from tests.
// This is a control command.
func (c *Cache[T]) GetSize() int64 {
res := make(chan int64)
c.control <- getSize{res}
return <-res
}
func (c *Cache[T]) restart() {
c.deletables = make(chan *Item[T], c.deleteBuffer)
c.promotables = make(chan *Item[T], c.promoteBuffer)
c.control = make(chan interface{})
go c.worker()
}
func (c *Cache[T]) deleteItem(bucket *bucket[T], item *Item[T]) {
bucket.delete(item.key) //stop other GETs from getting it
c.deletables <- item
}
func (c *Cache[T]) set(key string, value T, duration time.Duration, track bool) *Item[T] {
item, existing := c.bucket(key).set(key, value, duration, track)
if existing != nil {
@@ -292,49 +207,78 @@ func (c *Cache[T]) bucket(key string) *bucket[T] {
return c.buckets[h.Sum32()&c.bucketMask]
}
func (c *Cache[T]) halted(fn func()) {
c.halt()
defer c.unhalt()
fn()
}
func (c *Cache[T]) halt() {
for _, bucket := range c.buckets {
bucket.Lock()
}
}
func (c *Cache[T]) unhalt() {
for _, bucket := range c.buckets {
bucket.Unlock()
}
}
func (c *Cache[T]) worker() {
defer close(c.control)
dropped := 0
cc := c.control
promoteItem := func(item *Item[T]) {
if c.doPromote(item) && c.size > c.maxSize {
dropped += c.gc()
}
}
for {
select {
case item, ok := <-c.promotables:
if ok == false {
goto drain
}
case item := <-c.promotables:
promoteItem(item)
case item := <-c.deletables:
c.doDelete(item)
case control := <-c.control:
case control := <-cc:
switch msg := control.(type) {
case getDropped:
case controlStop:
goto drain
case controlGetDropped:
msg.res <- dropped
dropped = 0
case setMaxSize:
case controlSetMaxSize:
c.maxSize = msg.size
if c.size > c.maxSize {
dropped += c.gc()
}
msg.done <- struct{}{}
case clear:
for _, bucket := range c.buckets {
bucket.clear()
}
c.size = 0
c.list = NewList[*Item[T]]()
case controlClear:
c.halted(func() {
promotables := c.promotables
for len(promotables) > 0 {
<-promotables
}
deletables := c.deletables
for len(deletables) > 0 {
<-deletables
}
for _, bucket := range c.buckets {
bucket.clear()
}
c.size = 0
c.list = NewList[T]()
})
msg.done <- struct{}{}
case getSize:
case controlGetSize:
msg.res <- c.size
case gc:
case controlGC:
dropped += c.gc()
msg.done <- struct{}{}
case syncWorker:
doAllPendingPromotesAndDeletes(c.promotables, promoteItem,
c.deletables, c.doDelete)
case controlSyncUpdates:
doAllPendingPromotesAndDeletes(c.promotables, promoteItem, c.deletables, c.doDelete)
msg.done <- struct{}{}
}
}
@@ -346,7 +290,6 @@ drain:
case item := <-c.deletables:
c.doDelete(item)
default:
close(c.deletables)
return
}
}
@@ -367,9 +310,7 @@ doAllPromotes:
for {
select {
case item := <-promotables:
if item != nil {
promoteFn(item)
}
promoteFn(item)
default:
break doAllPromotes
}
@@ -386,15 +327,14 @@ doAllDeletes:
}
func (c *Cache[T]) doDelete(item *Item[T]) {
if item.node == nil {
if item.next == nil && item.prev == nil {
item.promotions = -2
} else {
c.size -= item.size
if c.onDelete != nil {
c.onDelete(item)
}
c.list.Remove(item.node)
item.node = nil
c.list.Remove(item)
item.promotions = -2
}
}
@@ -404,22 +344,23 @@ func (c *Cache[T]) doPromote(item *Item[T]) bool {
if item.promotions == -2 {
return false
}
if item.node != nil { //not a new item
if item.next != nil || item.prev != nil { // not a new item
if item.shouldPromote(c.getsPerPromote) {
c.list.MoveToFront(item.node)
c.list.MoveToFront(item)
item.promotions = 0
}
return false
}
c.size += item.size
item.node = c.list.Insert(item)
c.list.Insert(item)
return true
}
func (c *Cache[T]) gc() int {
dropped := 0
node := c.list.Tail
item := c.list.Tail
itemsToPrune := int64(c.itemsToPrune)
if min := c.size - c.maxSize; min > itemsToPrune {
@@ -427,23 +368,21 @@ func (c *Cache[T]) gc() int {
}
for i := int64(0); i < itemsToPrune; i++ {
if node == nil {
if item == nil {
return dropped
}
prev := node.Prev
item := node.Value
if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
prev := item.prev
if !c.tracking || atomic.LoadInt32(&item.refCount) == 0 {
c.bucket(item.key).delete(item.key)
c.size -= item.size
c.list.Remove(node)
c.list.Remove(item)
if c.onDelete != nil {
c.onDelete(item)
}
dropped += 1
item.node = nil
item.promotions = -2
}
node = prev
item = prev
}
return dropped
}

cache_test.go

@@ -4,6 +4,7 @@ import (
"math/rand"
"sort"
"strconv"
"sync"
"sync/atomic"
"testing"
"time"
@@ -11,6 +12,52 @@ import (
"github.com/karlseguin/ccache/v3/assert"
)
func Test_Setnx(t *testing.T) {
cache := New(Configure[string]())
defer cache.Stop()
assert.Equal(t, cache.ItemCount(), 0)
cache.Set("spice", "flow", time.Minute)
assert.Equal(t, cache.ItemCount(), 1)
// set if exists
cache.Setnx("spice", "worm", time.Minute)
assert.Equal(t, cache.ItemCount(), 1)
assert.Equal(t, cache.Get("spice").Value(), "flow")
// set if not exists
cache.Delete("spice")
cache.Setnx("spice", "worm", time.Minute)
assert.Equal(t, cache.Get("spice").Value(), "worm")
assert.Equal(t, cache.ItemCount(), 1)
}
func Test_Extend(t *testing.T) {
cache := New(Configure[string]())
defer cache.Stop()
assert.Equal(t, cache.ItemCount(), 0)
// non exist
ok := cache.Extend("spice", time.Minute*10)
assert.Equal(t, false, ok)
// exist
cache.Set("spice", "flow", time.Minute)
assert.Equal(t, cache.ItemCount(), 1)
ok = cache.Extend("spice", time.Minute*10) // 10 + 10
assert.Equal(t, true, ok)
item := cache.Get("spice")
less := time.Minute*22 < time.Duration(item.expires)
assert.Equal(t, true, less)
more := time.Minute*18 < time.Duration(item.expires)
assert.Equal(t, true, more)
assert.Equal(t, cache.ItemCount(), 1)
}
func Test_CacheDeletesAValue(t *testing.T) {
cache := New(Configure[string]())
defer cache.Stop()
@@ -77,7 +124,6 @@ func Test_CacheDeletesAFunc(t *testing.T) {
return key == "d"
}), 1)
assert.Equal(t, cache.ItemCount(), 2)
}
func Test_CacheOnDeleteCallbackCalled(t *testing.T) {
@@ -337,6 +383,128 @@ func Test_CachePrune(t *testing.T) {
}
}
func Test_ConcurrentStop(t *testing.T) {
for i := 0; i < 100; i++ {
cache := New(Configure[string]())
r := func() {
for {
key := strconv.Itoa(int(rand.Int31n(100)))
switch rand.Int31n(3) {
case 0:
cache.Get(key)
case 1:
cache.Set(key, key, time.Minute)
case 2:
cache.Delete(key)
}
}
}
go r()
go r()
go r()
time.Sleep(time.Millisecond * 10)
cache.Stop()
}
}
func Test_ConcurrentClearAndSet(t *testing.T) {
for i := 0; i < 1000000; i++ {
var stop atomic.Bool
var wg sync.WaitGroup
cache := New(Configure[string]())
r := func() {
for !stop.Load() {
cache.Set("a", "a", time.Minute)
}
wg.Done()
}
go r()
wg.Add(1)
cache.Clear()
stop.Store(true)
wg.Wait()
cache.SyncUpdates()
// The point of this test is to make sure that the cache's lookup and its
// recency list are in sync. But the two aren't written to atomically:
// the lookup is written to directly from the call to Set, whereas the
// list is maintained by the background worker. This can create a period
// where the two are out of sync. Even SyncUpdate is helpless here, since
// it can only sync what's been written to the buffers.
for i := 0; i < 10; i++ {
expectedCount := 0
if cache.list.Head != nil {
expectedCount = 1
}
actualCount := cache.ItemCount()
if expectedCount == actualCount {
return
}
time.Sleep(time.Millisecond)
}
t.Errorf("cache list and lookup are not consistent")
t.FailNow()
}
}
func BenchmarkFrequentSets(b *testing.B) {
cache := New(Configure[int]())
defer cache.Stop()
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.Itoa(n)
cache.Set(key, n, time.Minute)
}
}
func BenchmarkFrequentGets(b *testing.B) {
cache := New(Configure[int]())
defer cache.Stop()
numKeys := 500
for i := 0; i < numKeys; i++ {
key := strconv.Itoa(i)
cache.Set(key, i, time.Minute)
}
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.FormatInt(rand.Int63n(int64(numKeys)), 10)
cache.Get(key)
}
}
func BenchmarkGetWithPromoteSmall(b *testing.B) {
getsPerPromotes := 5
cache := New(Configure[int]().GetsPerPromote(int32(getsPerPromotes)))
defer cache.Stop()
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.Itoa(n)
cache.Set(key, n, time.Minute)
for i := 0; i < getsPerPromotes; i++ {
cache.Get(key)
}
}
}
func BenchmarkGetWithPromoteLarge(b *testing.B) {
getsPerPromotes := 100
cache := New(Configure[int]().GetsPerPromote(int32(getsPerPromotes)))
defer cache.Stop()
b.ResetTimer()
for n := 0; n < b.N; n++ {
key := strconv.Itoa(n)
cache.Set(key, n, time.Minute)
for i := 0; i < getsPerPromotes; i++ {
cache.Get(key)
}
}
}
type SizedItem struct {
id int
s int64

configuration.go

@@ -37,7 +37,7 @@ func (c *Configuration[T]) MaxSize(max int64) *Configuration[T] {
// requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...)
// [16]
func (c *Configuration[T]) Buckets(count uint32) *Configuration[T] {
if count == 0 || ((count&(^count+1)) == count) == false {
if count == 0 || !((count & (^count + 1)) == count) {
count = 16
}
c.buckets = int(count)
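The power-of-two check above relies on a standard bit trick: for unsigned integers, `^count + 1` equals `-count`, and `count & -count` isolates the lowest set bit, which equals `count` only when exactly one bit is set. A small standalone illustration (the helper name is ours, not part of the diff):

```go
package main

import "fmt"

// isPowerOfTwo mirrors the Buckets validation: n&(^n+1) keeps only the
// lowest set bit of n, so it equals n exactly when n is a power of two.
func isPowerOfTwo(n uint32) bool {
	return n != 0 && n&(^n+1) == n
}

func main() {
	for _, n := range []uint32{0, 1, 3, 16, 24, 64} {
		fmt.Println(n, isPowerOfTwo(n)) // true only for 1, 16, 64
	}
}
```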

configuration_test.go

@@ -16,3 +16,8 @@ func Test_Configuration_BucketsPowerOf2(t *testing.T) {
}
}
}
func Test_Configuration_Buffers(t *testing.T) {
assert.Equal(t, Configure[int]().DeleteBuffer(24).deleteBuffer, 24)
assert.Equal(t, Configure[int]().PromoteBuffer(95).promoteBuffer, 95)
}

control.go (new file, 110 lines)

@@ -0,0 +1,110 @@
package ccache
type controlGC struct {
done chan struct{}
}
type controlClear struct {
done chan struct{}
}
type controlStop struct {
}
type controlGetSize struct {
res chan int64
}
type controlGetDropped struct {
res chan int
}
type controlSetMaxSize struct {
size int64
done chan struct{}
}
type controlSyncUpdates struct {
done chan struct{}
}
type control chan interface{}
func newControl() chan interface{} {
return make(chan interface{}, 5)
}
// Forces GC. There should be no reason to call this function, except from tests
// which require synchronous GC.
// This is a control command.
func (c control) GC() {
done := make(chan struct{})
c <- controlGC{done: done}
<-done
}
// Sends a stop signal to the worker thread. The worker thread will shut down
// 5 seconds after the last message is received. The cache should not be used
// after Stop is called, but concurrently executing requests should properly finish
// executing.
// This is a control command.
func (c control) Stop() {
c.SyncUpdates()
c <- controlStop{}
}
// Clears the cache
// This is a control command.
func (c control) Clear() {
done := make(chan struct{})
c <- controlClear{done: done}
<-done
}
// Gets the size of the cache. This is an O(1) call to make, but it is handled
// by the worker goroutine. It's meant to be called periodically for metrics, or
// from tests.
// This is a control command.
func (c control) GetSize() int64 {
res := make(chan int64)
c <- controlGetSize{res: res}
return <-res
}
// Gets the number of items removed from the cache due to memory pressure since
// the last time GetDropped was called
// This is a control command.
func (c control) GetDropped() int {
res := make(chan int)
c <- controlGetDropped{res: res}
return <-res
}
// Sets a new max size. That can result in a GC being run if the new maximum size
// is smaller than the cached size
// This is a control command.
func (c control) SetMaxSize(size int64) {
done := make(chan struct{})
c <- controlSetMaxSize{size: size, done: done}
<-done
}
// SyncUpdates waits until the cache has finished asynchronous state updates for any operations
// that were done by the current goroutine up to now.
//
// For efficiency, the cache's implementation of LRU behavior is partly managed by a worker
// goroutine that updates its internal data structures asynchronously. This means that the
// cache's state in terms of (for instance) eviction of LRU items is only eventually consistent;
// there is no guarantee that it happens before a Get or Set call has returned. Most of the time
// application code will not care about this, but especially in a test scenario you may want to
// be able to know when the worker has caught up.
//
// This applies only to cache methods that were previously called by the same goroutine that is
// now calling SyncUpdates. If other goroutines are using the cache at the same time, there is
// no way to know whether any of them still have pending state updates when SyncUpdates returns.
// This is a control command.
func (c control) SyncUpdates() {
done := make(chan struct{})
c <- controlSyncUpdates{done: done}
<-done
}
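A minimal sketch of calling SyncUpdates from application code, per the comment above (module path taken from go.mod below; the printed size assumes the default item size of 1):

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache/v3"
)

func main() {
	cache := ccache.New(ccache.Configure[string]())
	defer cache.Stop()

	cache.Set("power", "9001", time.Minute)
	// The recency list and size are updated by the background worker, so they
	// are only eventually consistent; SyncUpdates waits for it to catch up.
	cache.SyncUpdates()
	fmt.Println(cache.GetSize()) // 1
}
```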

go.mod (2 lines changed)

@@ -1,3 +1,3 @@
module github.com/karlseguin/ccache/v3
go 1.18
go 1.19

item.go (14 lines changed)

@@ -27,7 +27,8 @@ type Item[T any] struct {
expires int64
size int64
value T
node *Node[*Item[T]]
next *Item[T]
prev *Item[T]
}
func newItem[T any](key string, value T, expires int64, track bool) *Item[T] {
@@ -37,6 +38,7 @@ func newItem[T any](key string, value T, expires int64, track bool) *Item[T] {
if sized, ok := (interface{})(value).(Sized); ok {
size = sized.Size()
}
item := &Item[T]{
key: key,
value: value,
@@ -55,6 +57,10 @@ func (i *Item[T]) shouldPromote(getsPerPromote int32) bool {
return i.promotions == getsPerPromote
}
func (i *Item[T]) Key() string {
return i.key
}
func (i *Item[T]) Value() T {
return i.value
}
@@ -93,5 +99,9 @@ func (i *Item[T]) Extend(duration time.Duration) {
// fmt.Sprintf expression could cause fields of the Item to be read in a non-thread-safe
// way.
func (i *Item[T]) String() string {
return fmt.Sprintf("Item(%v)", i.value)
group := i.group
if group == "" {
return fmt.Sprintf("Item(%s:%v)", i.key, i.value)
}
return fmt.Sprintf("Item(%s:%s:%v)", group, i.key, i.value)
}

item_test.go

@@ -8,6 +8,11 @@ import (
"github.com/karlseguin/ccache/v3/assert"
)
func Test_Item_Key(t *testing.T) {
item := &Item[int]{key: "foo"}
assert.Equal(t, item.Key(), "foo")
}
func Test_Item_Promotability(t *testing.T) {
item := &Item[int]{promotions: 4}
assert.Equal(t, item.shouldPromote(5), true)

layeredbucket.go

@@ -32,7 +32,7 @@ func (b *layeredBucket[T]) getSecondaryBucket(primary string) *bucket[T] {
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
if exists == false {
if !exists {
return nil
}
return bucket
@@ -41,7 +41,7 @@ func (b *layeredBucket[T]) getSecondaryBucket(primary string) *bucket[T] {
func (b *layeredBucket[T]) set(primary, secondary string, value T, duration time.Duration, track bool) (*Item[T], *Item[T]) {
b.Lock()
bkt, exists := b.buckets[primary]
if exists == false {
if !exists {
bkt = &bucket[T]{lookup: make(map[string]*Item[T])}
b.buckets[primary] = bkt
}
@@ -51,21 +51,31 @@ func (b *layeredBucket[T]) set(primary, secondary string, value T, duration time
return item, existing
}
func (b *layeredBucket[T]) delete(primary, secondary string) *Item[T] {
func (b *layeredBucket[T]) remove(primary, secondary string) *Item[T] {
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
if exists == false {
if !exists {
return nil
}
return bucket.delete(secondary)
return bucket.remove(secondary)
}
func (b *layeredBucket[T]) delete(primary, secondary string) {
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
if !exists {
return
}
bucket.delete(secondary)
}
func (b *layeredBucket[T]) deletePrefix(primary, prefix string, deletables chan *Item[T]) int {
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
if exists == false {
if !exists {
return 0
}
return bucket.deletePrefix(prefix, deletables)
@@ -75,7 +85,7 @@ func (b *layeredBucket[T]) deleteFunc(primary string, matches func(key string, i
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
if exists == false {
if !exists {
return 0
}
return bucket.deleteFunc(matches, deletables)
@@ -85,7 +95,7 @@ func (b *layeredBucket[T]) deleteAll(primary string, deletables chan *Item[T]) b
b.RLock()
bucket, exists := b.buckets[primary]
b.RUnlock()
if exists == false {
if !exists {
return false
}
@@ -111,9 +121,8 @@ func (b *layeredBucket[T]) forEachFunc(primary string, matches func(key string,
}
}
// we expect the caller to have acquired a write lock
func (b *layeredBucket[T]) clear() {
b.Lock()
defer b.Unlock()
for _, bucket := range b.buckets {
bucket.clear()
}

layeredcache.go

@@ -9,13 +9,13 @@ import (
type LayeredCache[T any] struct {
*Configuration[T]
list *List[*Item[T]]
control
list *List[T]
buckets []*layeredBucket[T]
bucketMask uint32
size int64
deletables chan *Item[T]
promotables chan *Item[T]
control chan interface{}
}
// Create a new layered cache with the specified configuration.
@@ -33,19 +33,20 @@ type LayeredCache[T any] struct {
// See ccache.Configure() for creating a configuration
func Layered[T any](config *Configuration[T]) *LayeredCache[T] {
c := &LayeredCache[T]{
list: NewList[*Item[T]](),
list: NewList[T](),
Configuration: config,
control: newControl(),
bucketMask: uint32(config.buckets) - 1,
buckets: make([]*layeredBucket[T], config.buckets),
deletables: make(chan *Item[T], config.deleteBuffer),
control: make(chan interface{}),
promotables: make(chan *Item[T], config.promoteBuffer),
}
for i := 0; i < int(config.buckets); i++ {
for i := 0; i < config.buckets; i++ {
c.buckets[i] = &layeredBucket[T]{
buckets: make(map[string]*bucket[T]),
}
}
c.restart()
go c.worker()
return c
}
@@ -157,7 +158,7 @@ func (c *LayeredCache[T]) Fetch(primary, secondary string, duration time.Duratio
// Remove the item from the cache, return true if the item was present, false otherwise.
func (c *LayeredCache[T]) Delete(primary, secondary string) bool {
item := c.bucket(primary).delete(primary, secondary)
item := c.bucket(primary).remove(primary, secondary)
if item != nil {
c.deletables <- item
return true
@@ -180,63 +181,6 @@ func (c *LayeredCache[T]) DeleteFunc(primary string, matches func(key string, it
return c.bucket(primary).deleteFunc(primary, matches, c.deletables)
}
// Clears the cache
func (c *LayeredCache[T]) Clear() {
done := make(chan struct{})
c.control <- clear{done: done}
<-done
}
func (c *LayeredCache[T]) Stop() {
close(c.promotables)
<-c.control
}
// Gets the number of items removed from the cache due to memory pressure since
// the last time GetDropped was called
func (c *LayeredCache[T]) GetDropped() int {
return doGetDropped(c.control)
}
// SyncUpdates waits until the cache has finished asynchronous state updates for any operations
// that were done by the current goroutine up to now. See Cache.SyncUpdates for details.
func (c *LayeredCache[T]) SyncUpdates() {
doSyncUpdates(c.control)
}
// Sets a new max size. That can result in a GC being run if the new maximum size
// is smaller than the cached size
func (c *LayeredCache[T]) SetMaxSize(size int64) {
done := make(chan struct{})
c.control <- setMaxSize{size: size, done: done}
<-done
}
// Forces GC. There should be no reason to call this function, except from tests
// which require synchronous GC.
// This is a control command.
func (c *LayeredCache[T]) GC() {
done := make(chan struct{})
c.control <- gc{done: done}
<-done
}
// Gets the size of the cache. This is an O(1) call to make, but it is handled
// by the worker goroutine. It's meant to be called periodically for metrics, or
// from tests.
// This is a control command.
func (c *LayeredCache[T]) GetSize() int64 {
res := make(chan int64)
c.control <- getSize{res}
return <-res
}
func (c *LayeredCache[T]) restart() {
c.promotables = make(chan *Item[T], c.promoteBuffer)
c.control = make(chan interface{})
go c.worker()
}
func (c *LayeredCache[T]) set(primary, secondary string, value T, duration time.Duration, track bool) *Item[T] {
item, existing := c.bucket(primary).set(primary, secondary, value, duration, track)
if existing != nil {
@@ -252,70 +196,109 @@ func (c *LayeredCache[T]) bucket(key string) *layeredBucket[T] {
return c.buckets[h.Sum32()&c.bucketMask]
}
func (c *LayeredCache[T]) halted(fn func()) {
c.halt()
defer c.unhalt()
fn()
}
func (c *LayeredCache[T]) halt() {
for _, bucket := range c.buckets {
bucket.Lock()
}
}
func (c *LayeredCache[T]) unhalt() {
for _, bucket := range c.buckets {
bucket.Unlock()
}
}
func (c *LayeredCache[T]) promote(item *Item[T]) {
c.promotables <- item
}
func (c *LayeredCache[T]) worker() {
defer close(c.control)
dropped := 0
cc := c.control
promoteItem := func(item *Item[T]) {
if c.doPromote(item) && c.size > c.maxSize {
dropped += c.gc()
}
}
deleteItem := func(item *Item[T]) {
if item.node == nil {
item.promotions = -2
} else {
c.size -= item.size
if c.onDelete != nil {
c.onDelete(item)
}
c.list.Remove(item.node)
item.node = nil
item.promotions = -2
}
}
for {
select {
case item, ok := <-c.promotables:
if ok == false {
return
}
case item := <-c.promotables:
promoteItem(item)
case item := <-c.deletables:
deleteItem(item)
case control := <-c.control:
c.doDelete(item)
case control := <-cc:
switch msg := control.(type) {
case getDropped:
case controlStop:
goto drain
case controlGetDropped:
msg.res <- dropped
dropped = 0
case setMaxSize:
case controlSetMaxSize:
c.maxSize = msg.size
if c.size > c.maxSize {
dropped += c.gc()
}
msg.done <- struct{}{}
case clear:
for _, bucket := range c.buckets {
bucket.clear()
case controlClear:
promotables := c.promotables
for len(promotables) > 0 {
<-promotables
}
c.size = 0
c.list = NewList[*Item[T]]()
deletables := c.deletables
for len(deletables) > 0 {
<-deletables
}
c.halted(func() {
for _, bucket := range c.buckets {
bucket.clear()
}
c.size = 0
c.list = NewList[T]()
})
msg.done <- struct{}{}
case getSize:
case controlGetSize:
msg.res <- c.size
case gc:
case controlGC:
dropped += c.gc()
msg.done <- struct{}{}
case syncWorker:
doAllPendingPromotesAndDeletes(c.promotables, promoteItem,
c.deletables, deleteItem)
case controlSyncUpdates:
doAllPendingPromotesAndDeletes(c.promotables, promoteItem, c.deletables, c.doDelete)
msg.done <- struct{}{}
}
}
}
drain:
for {
select {
case item := <-c.deletables:
c.doDelete(item)
default:
return
}
}
}
func (c *LayeredCache[T]) doDelete(item *Item[T]) {
if item.prev == nil && item.next == nil {
item.promotions = -2
} else {
c.size -= item.size
if c.onDelete != nil {
c.onDelete(item)
}
c.list.Remove(item)
item.promotions = -2
}
}
func (c *LayeredCache[T]) doPromote(item *Item[T]) bool {
@@ -323,45 +306,45 @@ func (c *LayeredCache[T]) doPromote(item *Item[T]) bool {
if item.promotions == -2 {
return false
}
if item.node != nil { //not a new item
if item.next != nil || item.prev != nil { // not a new item
if item.shouldPromote(c.getsPerPromote) {
c.list.MoveToFront(item.node)
c.list.MoveToFront(item)
item.promotions = 0
}
return false
}
c.size += item.size
item.node = c.list.Insert(item)
c.list.Insert(item)
return true
}
func (c *LayeredCache[T]) gc() int {
node := c.list.Tail
dropped := 0
itemsToPrune := int64(c.itemsToPrune)
item := c.list.Tail
itemsToPrune := int64(c.itemsToPrune)
if min := c.size - c.maxSize; min > itemsToPrune {
itemsToPrune = min
}
for i := int64(0); i < itemsToPrune; i++ {
if node == nil {
if item == nil {
return dropped
}
prev := node.Prev
item := node.Value
if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
prev := item.prev
if !c.tracking || atomic.LoadInt32(&item.refCount) == 0 {
c.bucket(item.group).delete(item.group, item.key)
c.size -= item.size
c.list.Remove(node)
c.list.Remove(item)
if c.onDelete != nil {
c.onDelete(item)
}
item.node = nil
item.promotions = -2
dropped += 1
item.promotions = -2
}
node = prev
item = prev
}
return dropped
}

layeredcache_test.go

@@ -118,7 +118,6 @@ func Test_LayedCache_DeletesAFunc(t *testing.T) {
return key == "d"
}), 1)
assert.Equal(t, cache.ItemCount(), 3)
}
func Test_LayedCache_OnDeleteCallbackCalled(t *testing.T) {
@@ -396,6 +395,29 @@ func Test_LayeredCachePrune(t *testing.T) {
}
}
func Test_LayeredConcurrentStop(t *testing.T) {
for i := 0; i < 100; i++ {
cache := Layered(Configure[string]())
r := func() {
for {
key := strconv.Itoa(int(rand.Int31n(100)))
switch rand.Int31n(3) {
case 0:
cache.Get(key, key)
case 1:
cache.Set(key, key, key, time.Minute)
case 2:
cache.Delete(key, key)
}
}
}
go r()
go r()
go r()
time.Sleep(time.Millisecond * 10)
cache.Stop()
}
}
func newLayered[T any]() *LayeredCache[T] {
c := Layered[T](Configure[T]())
c.Clear()

list.go (50 lines changed)

@@ -1,57 +1,45 @@
package ccache
type List[T any] struct {
Head *Node[T]
Tail *Node[T]
Head *Item[T]
Tail *Item[T]
}
func NewList[T any]() *List[T] {
return &List[T]{}
}
func (l *List[T]) Remove(node *Node[T]) {
next := node.Next
prev := node.Prev
func (l *List[T]) Remove(item *Item[T]) {
next := item.next
prev := item.prev
if next == nil {
l.Tail = node.Prev
l.Tail = prev
} else {
next.Prev = prev
next.prev = prev
}
if prev == nil {
l.Head = node.Next
l.Head = next
} else {
prev.Next = next
prev.next = next
}
node.Next = nil
node.Prev = nil
item.next = nil
item.prev = nil
}
func (l *List[T]) MoveToFront(node *Node[T]) {
l.Remove(node)
l.nodeToFront(node)
func (l *List[T]) MoveToFront(item *Item[T]) {
l.Remove(item)
l.Insert(item)
}
func (l *List[T]) Insert(value T) *Node[T] {
node := &Node[T]{Value: value}
l.nodeToFront(node)
return node
}
func (l *List[T]) nodeToFront(node *Node[T]) {
func (l *List[T]) Insert(item *Item[T]) {
head := l.Head
l.Head = node
l.Head = item
if head == nil {
l.Tail = node
l.Tail = item
return
}
node.Next = head
head.Prev = node
}
type Node[T any] struct {
Next *Node[T]
Prev *Node[T]
Value T
item.next = head
head.prev = item
}

list_test.go

@@ -10,13 +10,13 @@ func Test_List_Insert(t *testing.T) {
l := NewList[int]()
assertList(t, l)
l.Insert(1)
l.Insert(newItem("a", 1, 0, false))
assertList(t, l, 1)
l.Insert(2)
l.Insert(newItem("b", 2, 0, false))
assertList(t, l, 2, 1)
l.Insert(3)
l.Insert(newItem("c", 3, 0, false))
assertList(t, l, 3, 2, 1)
}
@@ -24,15 +24,21 @@ func Test_List_Remove(t *testing.T) {
l := NewList[int]()
assertList(t, l)
node := l.Insert(1)
l.Remove(node)
item := newItem("a", 1, 0, false)
l.Insert(item)
l.Remove(item)
assertList(t, l)
n5 := l.Insert(5)
n4 := l.Insert(4)
n3 := l.Insert(3)
n2 := l.Insert(2)
n1 := l.Insert(1)
n5 := newItem("e", 5, 0, false)
l.Insert(n5)
n4 := newItem("d", 4, 0, false)
l.Insert(n4)
n3 := newItem("c", 3, 0, false)
l.Insert(n3)
n2 := newItem("b", 2, 0, false)
l.Insert(n2)
n1 := newItem("a", 1, 0, false)
l.Insert(n1)
l.Remove(n5)
assertList(t, l, 1, 2, 3, 4)
@@ -50,20 +56,6 @@ func Test_List_Remove(t *testing.T) {
assertList(t, l)
}
func Test_List_MoveToFront(t *testing.T) {
l := NewList[int]()
n1 := l.Insert(1)
l.MoveToFront(n1)
assertList(t, l, 1)
n2 := l.Insert(2)
l.MoveToFront(n1)
assertList(t, l, 1, 2)
l.MoveToFront(n2)
assertList(t, l, 2, 1)
}
func assertList(t *testing.T, list *List[int], expected ...int) {
t.Helper()
@@ -75,21 +67,13 @@ func assertList(t *testing.T, list *List[int], expected ...int) {
node := list.Head
for _, expected := range expected {
assert.Equal(t, node.Value, expected)
node = node.Next
assert.Equal(t, node.value, expected)
node = node.next
}
node = list.Tail
for i := len(expected) - 1; i >= 0; i-- {
assert.Equal(t, node.Value, expected[i])
node = node.Prev
assert.Equal(t, node.value, expected[i])
node = node.prev
}
}
func listFromInts(ints ...int) *List[int] {
l := NewList[int]()
for i := len(ints) - 1; i >= 0; i-- {
l.Insert(ints[i])
}
return l
}

readme.md

@@ -1,9 +1,5 @@
# CCache
Generic version is on the way:
https://github.com/karlseguin/ccache/tree/generic
CCache is an LRU Cache, written in Go, focused on supporting high concurrency.
Lock contention on the list is reduced by:
@@ -21,7 +17,7 @@ Import and create a `Cache` instance:
```go
import (
github.com/karlseguin/ccache/v3
"github.com/karlseguin/ccache/v3"
)
// create a cache with string values
@@ -111,8 +107,19 @@ cache.Delete("user:4")
`Clear` clears the cache. If the cache's gc is running, `Clear` waits for it to finish.
### Extend
The life of an item can be changed via the `Extend` method. This will change the expiry of the item by the specified duration relative to the current time.
```go
cache.Extend("user:4", time.Minute * 10)
// or
item := cache.Get("user:4")
if item != nil {
item.Extend(time.Minute * 10)
}
```
### Replace
The value of an item can be updated to a new value without renewing the item's TTL or its position in the LRU:
@@ -122,6 +129,14 @@ cache.Replace("user:4", user)
`Replace` returns true if the item existed (and thus was replaced). In the case where the key was not in the cache, the value *is not* inserted and false is returned.
### Setnx
Set the value only if the key does not already exist. `Setnx` first checks whether the key is present; if it is not, the key/value is stored in the cache. The check-and-set is atomic.
```go
cache.Setnx("user:4", user, time.Minute * 10)
```
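`Setnx2` (added alongside `Setnx`) takes a value-producing function instead of a value, so the value is only built when the key is absent, and it returns the stored item. A sketch (the `loadUser` helper is hypothetical):

```go
item := cache.Setnx2("user:4", func() *User {
	return loadUser(4) // only called if "user:4" is not already cached
}, time.Minute * 10)
```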
### GetDropped
You can get the number of keys evicted due to memory pressure by calling `GetDropped`:
@@ -198,4 +213,4 @@ By default, items added to a cache have a size of 1. This means that if you conf
However, if the values you set into the cache have a method `Size() int64`, this size will be used. Note that ccache has an overhead of ~350 bytes per entry, which isn't taken into account. In other words, given a filled up cache, with `MaxSize(4096000)` and items that return a `Size() int64` of 2048, we can expect to find 2000 items (4096000/2048) taking a total space of 4796000 bytes.
## Want Something Simpler?
For a simpler cache, checkout out [rcache](https://github.com/karlseguin/rcache)
For a simpler cache, check out [rcache](https://github.com/karlseguin/rcache).

secondarycache.go

@@ -41,7 +41,7 @@ func (s *SecondaryCache[T]) Fetch(secondary string, duration time.Duration, fetc
// Delete a secondary key.
// The semantics are the same as for LayeredCache.Delete
func (s *SecondaryCache[T]) Delete(secondary string) bool {
item := s.bucket.delete(secondary)
item := s.bucket.remove(secondary)
if item != nil {
s.pCache.deletables <- item
return true