19 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Karl Seguin | 0901f94888 | Merge pull request #93 from chenyijun266846/perf_setnx (perf: add setnx2) | 2024-10-01 10:38:43 +08:00 |
| chenyijun.266846 | a47156de7d | feat: add existing for setnx2 | 2024-09-29 15:07:02 +08:00 |
| chenyijun.266846 | c0806d27fe | fix(setnx2): promotables | 2024-09-29 14:46:58 +08:00 |
| chenyijun.266846 | d59160ba1c | perf: add setnx2 | 2024-09-29 11:18:20 +08:00 |
| Karl Seguin | e9a80ae138 | Merge pull request #91 from miparnisari/bench-cpus (ensure benchmarks are compared if the CPUs are the same) | 2023-11-27 07:58:24 +08:00 |
| Maria Ines Parnisari | 964f899bf4 | ensure benchmarks are compared if the CPUs are the same | 2023-11-23 11:29:35 -08:00 |
| Karl Seguin | 3aa6a053b7 | Make test more robust (attempt to fix: https://github.com/karlseguin/ccache/issues/90) | 2023-11-23 09:48:42 +08:00 |
| Maria Ines Parnisari | 6d135b03a9 | feat: add benchmarks | 2023-11-23 09:12:34 +08:00 |
| Karl Seguin | bf904fff3c | fix make c on macos | 2023-11-21 11:42:34 +08:00 |
| Maria Ines Parnisari | de3e573a65 | run tests and linter on every PR | 2023-11-20 13:44:20 -08:00 |
| Karl Seguin | 7b5dfadcde | Merge pull request #87 from craigpastro/patch-1 (Update readme.md) | 2023-10-24 08:34:04 +08:00 |
| Craig Pastro | 2ad5f8fe86 | Fix typo | 2023-10-23 10:23:25 -07:00 |
| Craig Pastro | c56472f9b5 | Update readme.md (generic version has landed and that branch no longer exists) | 2023-10-23 10:20:28 -07:00 |
| Karl Seguin | 378b8b039e | Merge pull request #86 from rfyiamcool/feat/public_extend (feat: public extend) | 2023-10-23 10:51:45 +08:00 |
| rfyiamcool | 1594fc55bc | feat: public extend (Signed-off-by: rfyiamcool <rfyiamcool@163.com>) | 2023-10-22 22:19:27 +08:00 |
| rfyiamcool | 2977b36b74 | feat: public extend (Signed-off-by: rfyiamcool <rfyiamcool@163.com>) | 2023-10-22 22:14:28 +08:00 |
| Karl Seguin | 62cd8cc8c3 | Merge pull request #85 from rfyiamcool/feat/add_setnx (feat: add setnx (if not exists, set kv)) | 2023-10-22 20:19:26 +08:00 |
| rfyiamcool | b26c342793 | feat: add setnx (if not exists, set kv) (Signed-off-by: rfyiamcool <rfyiamcool@163.com>) | 2023-10-22 19:23:23 +08:00 |
| rfyiamcool | dd0671989b | feat: add setnx (if not exists, set kv) (Signed-off-by: rfyiamcool <rfyiamcool@163.com>) | 2023-10-20 10:36:04 +08:00 |
16 changed files with 455 additions and 81 deletions

.github/workflows/master.yaml (new file, 50 lines)

@@ -0,0 +1,50 @@
name: Master

on:
  push:
    branches:
      - master

permissions:
  contents: read

jobs:
  bench:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
      - name: Run benchmark and store the output to a file
        run: |
          set -o pipefail
          make bench | tee bench_output.txt
      - name: Get benchmark as JSON
        uses: benchmark-action/github-action-benchmark@v1
        with:
          # What benchmark tool the output.txt came from
          tool: 'go'
          # Where the output from the benchmark tool is stored
          output-file-path: bench_output.txt
          # Write benchmarks to this file
          external-data-json-path: ./cache/benchmark-data.json
          # Workflow will fail when an alert happens
          fail-on-alert: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
          comment-on-alert: true
      - name: Get CPU information
        uses: kenchan0130/actions-system-info@master
        id: system-info
      - name: Save benchmark JSON to cache
        uses: actions/cache/save@v3
        with:
          path: ./cache/benchmark-data.json
          # Save with commit hash to avoid "cache already exists"
          # Save with OS & CPU info to prevent comparing against results from different CPUs
          key: ${{ github.sha }}-${{ runner.os }}-${{ steps.system-info.outputs.cpu-model }}-go-benchmark

.github/workflows/pull_request.yaml (new file, 111 lines)

@@ -0,0 +1,111 @@
name: Pull Request

on:
  merge_group:
  pull_request:
    branches:
      - master

permissions:
  contents: read

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          version: latest

  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
      - name: Unit Tests
        run: make t

  bench:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # to be able to retrieve the last commit in master branch
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version-file: './go.mod'
          cache-dependency-path: './go.sum'
          check-latest: true
      - name: Run benchmark and store the output to a file
        run: |
          set -o pipefail
          make bench | tee ${{ github.sha }}_bench_output.txt
      - name: Get CPU information
        uses: kenchan0130/actions-system-info@master
        id: system-info
      - name: Get Master branch SHA
        id: get-master-branch-sha
        run: |
          SHA=$(git rev-parse origin/master)
          echo "sha=$SHA" >> $GITHUB_OUTPUT
      - name: Try to get benchmark JSON from master branch
        uses: actions/cache/restore@v3
        id: cache
        with:
          path: ./cache/benchmark-data.json
          key: ${{ steps.get-master-branch-sha.outputs.sha }}-${{ runner.os }}-${{ steps.system-info.outputs.cpu-model }}-go-benchmark
      - name: Compare benchmarks with master
        uses: benchmark-action/github-action-benchmark@v1
        if: steps.cache.outputs.cache-hit == 'true'
        with:
          # What benchmark tool the output.txt came from
          tool: 'go'
          # Where the output from the benchmark tool is stored
          output-file-path: ${{ github.sha }}_bench_output.txt
          # Where the benchmarks in master are (to compare)
          external-data-json-path: ./cache/benchmark-data.json
          # Do not save the data
          save-data-file: false
          # Workflow will fail when an alert happens
          fail-on-alert: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Enable Job Summary for PRs
          summary-always: true
      - name: Run benchmarks
        uses: benchmark-action/github-action-benchmark@v1
        if: steps.cache.outputs.cache-hit != 'true'
        with:
          # What benchmark tool the output.txt came from
          tool: 'go'
          # Where the output from the benchmark tool is stored
          output-file-path: ${{ github.sha }}_bench_output.txt
          # Write benchmarks to this file, do not publish to Github Pages
          save-data-file: false
          external-data-json-path: ./cache/benchmark-data.json
          # Workflow will fail when an alert happens
          fail-on-alert: true
          # Enable alert commit comment
          github-token: ${{ secrets.GITHUB_TOKEN }}
          comment-on-alert: true
          # Enable Job Summary for PRs
          summary-always: true

.gitignore (2 lines added)

@@ -1 +1,3 @@
 vendor/
+.idea/
+*.out

.golangci.yaml (new file, 35 lines)

@@ -0,0 +1,35 @@
run:
  timeout: 3m
  modules-download-mode: readonly

linters:
  enable:
    - errname
    - gofmt
    - goimports
    - stylecheck
    - importas
    - errcheck
    - gosimple
    - govet
    - ineffassign
    - mirror
    - staticcheck
    - tagalign
    - testifylint
    - typecheck
    - unused
    - unconvert
    - unparam
    - wastedassign
    - whitespace
    - exhaustive
    - noctx
    - promlinter

linters-settings:
  govet:
    enable-all: true
    disable:
      - shadow
      - fieldalignment

Makefile

@@ -1,18 +1,25 @@
+.DEFAULT_GOAL := help
+
+.PHONY: help
+help:
+	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
+
+.PHONY: bench
+bench: ## Run benchmarks
+	go test ./... -bench . -benchtime 5s -timeout 0 -run=XXX -benchmem
+
+.PHONY: l
+l: ## Lint Go source files
+	go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest && golangci-lint run
+
 .PHONY: t
-t:
+t: ## Run unit tests
 	go test -race -count=1 ./...

 .PHONY: f
-f:
+f: ## Format code
 	go fmt ./...

 .PHONY: c
-c:
-	go test -race -covermode=atomic ./... -coverprofile=cover.out && \
-	# go tool cover -html=cover.out && \
-	go tool cover -func cover.out \
-	| grep -vP '[89]\d\.\d%' | grep -v '100.0%' \
-	|| true
-	rm cover.out
+c: ## Measure code coverage
+	go test -race -covermode=atomic ./... -coverprofile=cover.out

bucket.go

@@ -35,6 +35,54 @@ func (b *bucket[T]) get(key string) *Item[T] {
 	return b.lookup[key]
 }

+func (b *bucket[T]) setnx(key string, value T, duration time.Duration, track bool) *Item[T] {
+	b.RLock()
+	item := b.lookup[key]
+	b.RUnlock()
+	if item != nil {
+		return item
+	}
+
+	expires := time.Now().Add(duration).UnixNano()
+	newItem := newItem(key, value, expires, track)
+
+	b.Lock()
+	defer b.Unlock()
+	// check again under write lock
+	item = b.lookup[key]
+	if item != nil {
+		return item
+	}
+	b.lookup[key] = newItem
+	return newItem
+}
+
+func (b *bucket[T]) setnx2(key string, f func() T, duration time.Duration, track bool) (*Item[T], bool) {
+	b.RLock()
+	item := b.lookup[key]
+	b.RUnlock()
+	if item != nil {
+		return item, true
+	}
+
+	b.Lock()
+	defer b.Unlock()
+	// check again under write lock
+	item = b.lookup[key]
+	if item != nil {
+		return item, true
+	}
+	expires := time.Now().Add(duration).UnixNano()
+	newItem := newItem(key, f(), expires, track)
+	b.lookup[key] = newItem
+	return newItem, false
+}
+
 func (b *bucket[T]) set(key string, value T, duration time.Duration, track bool) (*Item[T], *Item[T]) {
 	expires := time.Now().Add(duration).UnixNano()
 	item := newItem(key, value, expires, track)
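The `setnx`/`setnx2` additions above use classic double-checked locking: an optimistic lookup under the read lock, then a re-check under the write lock before inserting, so concurrent callers can race to the write lock but only the first insert wins. A minimal standalone sketch of the pattern (a hypothetical `store` type for illustration, not ccache's actual `bucket`):

```go
package main

import (
	"fmt"
	"sync"
)

// store mirrors the bucket's locking scheme: a map guarded by an RWMutex.
type store struct {
	sync.RWMutex
	lookup map[string]int
}

// setnx inserts value only if key is absent and returns whatever value
// ends up in the map (the existing one, or the new one).
func (s *store) setnx(key string, value int) int {
	// optimistic read: cheap, allows concurrent readers
	s.RLock()
	existing, ok := s.lookup[key]
	s.RUnlock()
	if ok {
		return existing
	}

	s.Lock()
	defer s.Unlock()
	// re-check: another goroutine may have inserted between RUnlock and Lock
	if existing, ok := s.lookup[key]; ok {
		return existing
	}
	s.lookup[key] = value
	return value
}

func main() {
	s := &store{lookup: make(map[string]int)}
	fmt.Println(s.setnx("spice", 1)) // prints 1: first write wins
	fmt.Println(s.setnx("spice", 2)) // prints 1: existing value returned
}
```

The re-check under the write lock is what makes the operation atomic; without it, two goroutines could both miss under the read lock and both insert.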

cache.go

@@ -7,33 +7,6 @@ import (
 	"time"
 )

-// The cache has a generic 'control' channel that is used to send
-// messages to the worker. These are the messages that can be sent to it
-type getDropped struct {
-	res chan int
-}
-
-type getSize struct {
-	res chan int64
-}
-
-type setMaxSize struct {
-	size int64
-	done chan struct{}
-}
-
-type clear struct {
-	done chan struct{}
-}
-
-type syncWorker struct {
-	done chan struct{}
-}
-
-type gc struct {
-	done chan struct{}
-}
-
 type Cache[T any] struct {
 	*Configuration[T]
 	control
@@ -146,6 +119,27 @@ func (c *Cache[T]) Set(key string, value T, duration time.Duration) {
 	c.set(key, value, duration, false)
 }

+// Setnx set the value in the cache for the specified duration if not exists
+func (c *Cache[T]) Setnx(key string, value T, duration time.Duration) {
+	c.bucket(key).setnx(key, value, duration, false)
+}
+
+// Setnx2 set the value in the cache for the specified duration if not exists
+func (c *Cache[T]) Setnx2(key string, f func() T, duration time.Duration) *Item[T] {
+	item, existing := c.bucket(key).setnx2(key, f, duration, false)
+	// consistent with Get
+	if existing && !item.Expired() {
+		select {
+		case c.promotables <- item:
+		default:
+		}
+		// consistent with set
+	} else if !existing {
+		c.promotables <- item
+	}
+	return item
+}
+
 // Replace the value if it exists, does not set if it doesn't.
 // Returns true if the item existed an was replaced, false otherwise.
 // Replace does not reset item's TTL
@@ -158,6 +152,18 @@ func (c *Cache[T]) Replace(key string, value T) bool {
 	return true
 }

+// Extend the value if it exists, does not set if it doesn't exists.
+// Returns true if the expire time of the item an was extended, false otherwise.
+func (c *Cache[T]) Extend(key string, duration time.Duration) bool {
+	item := c.bucket(key).get(key)
+	if item == nil {
+		return false
+	}
+
+	item.Extend(duration)
+	return true
+}
+
 // Attempts to get the value from the cache and calles fetch on a miss (missing
 // or stale item). If fetch returns an error, no value is cached and the error
 // is returned back to the caller.
@@ -186,11 +192,6 @@ func (c *Cache[T]) Delete(key string) bool {
 	return false
 }

-func (c *Cache[T]) deleteItem(bucket *bucket[T], item *Item[T]) {
-	bucket.delete(item.key) //stop other GETs from getting it
-	c.deletables <- item
-}
-
 func (c *Cache[T]) set(key string, value T, duration time.Duration, track bool) *Item[T] {
 	item, existing := c.bucket(key).set(key, value, duration, track)
 	if existing != nil {
@@ -372,7 +373,7 @@ func (c *Cache[T]) gc() int {
 	}
 	prev := node.Prev
 	item := node.Value
-	if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
+	if !c.tracking || atomic.LoadInt32(&item.refCount) == 0 {
 		c.bucket(item.key).delete(item.key)
 		c.size -= item.size
 		c.list.Remove(node)
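When `Setnx2` finds an existing, unexpired item, it promotes it with a non-blocking send on the `promotables` channel, matching `Get`'s behavior: if the buffer is full, the promotion is simply dropped rather than stalling the caller. A self-contained sketch of that `select`/`default` idiom (hypothetical `promote` helper; ccache's channel carries `*Item[T]`, not strings):

```go
package main

import "fmt"

// promote attempts a best-effort send: it returns true if the buffered
// channel accepted the key, and false if the buffer was full and the
// promotion was dropped.
func promote(promotables chan string, key string) bool {
	select {
	case promotables <- key:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan string, 1)    // tiny buffer for demonstration
	fmt.Println(promote(ch, "a")) // true: buffer has room
	fmt.Println(promote(ch, "b")) // false: buffer full, promotion dropped
}
```

Dropping promotions is safe here because promotion only affects recency ordering, not correctness; the unconditional send for newly created items mirrors `set`, which must enqueue the item for the worker.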

cache_test.go

@@ -12,6 +12,52 @@ import (
 	"github.com/karlseguin/ccache/v3/assert"
 )

+func Test_Setnx(t *testing.T) {
+	cache := New(Configure[string]())
+	defer cache.Stop()
+	assert.Equal(t, cache.ItemCount(), 0)
+
+	cache.Set("spice", "flow", time.Minute)
+	assert.Equal(t, cache.ItemCount(), 1)
+
+	// set if exists
+	cache.Setnx("spice", "worm", time.Minute)
+	assert.Equal(t, cache.ItemCount(), 1)
+	assert.Equal(t, cache.Get("spice").Value(), "flow")
+
+	// set if not exists
+	cache.Delete("spice")
+	cache.Setnx("spice", "worm", time.Minute)
+	assert.Equal(t, cache.Get("spice").Value(), "worm")
+	assert.Equal(t, cache.ItemCount(), 1)
+}
+
+func Test_Extend(t *testing.T) {
+	cache := New(Configure[string]())
+	defer cache.Stop()
+	assert.Equal(t, cache.ItemCount(), 0)
+
+	// non exist
+	ok := cache.Extend("spice", time.Minute*10)
+	assert.Equal(t, false, ok)
+
+	// exist
+	cache.Set("spice", "flow", time.Minute)
+	assert.Equal(t, cache.ItemCount(), 1)
+	ok = cache.Extend("spice", time.Minute*10) // 10 + 10
+	assert.Equal(t, true, ok)
+
+	item := cache.Get("spice")
+	less := time.Minute*22 < time.Duration(item.expires)
+	assert.Equal(t, true, less)
+	more := time.Minute*18 < time.Duration(item.expires)
+	assert.Equal(t, true, more)
+	assert.Equal(t, cache.ItemCount(), 1)
+}
+
 func Test_CacheDeletesAValue(t *testing.T) {
 	cache := New(Configure[string]())
 	defer cache.Stop()
@@ -78,7 +124,6 @@ func Test_CacheDeletesAFunc(t *testing.T) {
 		return key == "d"
 	}), 1)
 	assert.Equal(t, cache.ItemCount(), 2)
-
 }

 func Test_CacheOnDeleteCallbackCalled(t *testing.T) {
@@ -363,7 +408,7 @@ func Test_ConcurrentStop(t *testing.T) {
 }

 func Test_ConcurrentClearAndSet(t *testing.T) {
-	for i := 0; i < 100; i++ {
+	for i := 0; i < 1000000; i++ {
 		var stop atomic.Bool
 		var wg sync.WaitGroup
@@ -379,19 +424,83 @@
 		cache.Clear()
 		stop.Store(true)
 		wg.Wait()
-		time.Sleep(time.Millisecond)
 		cache.SyncUpdates()

-		known := make(map[string]struct{})
-		for node := cache.list.Head; node != nil; node = node.Next {
-			known[node.Value.key] = struct{}{}
-		}
-		for _, bucket := range cache.buckets {
-			for key := range bucket.lookup {
-				_, exists := known[key]
-				assert.True(t, exists)
-			}
-		}
+		// The point of this test is to make sure that the cache's lookup and its
+		// recency list are in sync. But the two aren't written to atomically:
+		// the lookup is written to directly from the call to Set, whereas the
+		// list is maintained by the background worker. This can create a period
+		// where the two are out of sync. Even SyncUpdate is helpless here, since
+		// it can only sync what's been written to the buffers.
+		for i := 0; i < 10; i++ {
+			expectedCount := 0
+			if cache.list.Head != nil {
+				expectedCount = 1
+			}
+			actualCount := cache.ItemCount()
+			if expectedCount == actualCount {
+				return
+			}
+			time.Sleep(time.Millisecond)
+		}
+		t.Errorf("cache list and lookup are not consistent")
+		t.FailNow()
 	}
 }
+
+func BenchmarkFrequentSets(b *testing.B) {
+	cache := New(Configure[int]())
+	defer cache.Stop()
+
+	b.ResetTimer()
+	for n := 0; n < b.N; n++ {
+		key := strconv.Itoa(n)
+		cache.Set(key, n, time.Minute)
+	}
+}
+
+func BenchmarkFrequentGets(b *testing.B) {
+	cache := New(Configure[int]())
+	defer cache.Stop()
+	numKeys := 500
+	for i := 0; i < numKeys; i++ {
+		key := strconv.Itoa(i)
+		cache.Set(key, i, time.Minute)
+	}
+
+	b.ResetTimer()
+	for n := 0; n < b.N; n++ {
+		key := strconv.FormatInt(rand.Int63n(int64(numKeys)), 10)
+		cache.Get(key)
+	}
+}
+
+func BenchmarkGetWithPromoteSmall(b *testing.B) {
+	getsPerPromotes := 5
+	cache := New(Configure[int]().GetsPerPromote(int32(getsPerPromotes)))
+	defer cache.Stop()
+
+	b.ResetTimer()
+	for n := 0; n < b.N; n++ {
+		key := strconv.Itoa(n)
+		cache.Set(key, n, time.Minute)
+		for i := 0; i < getsPerPromotes; i++ {
+			cache.Get(key)
+		}
+	}
+}
+
+func BenchmarkGetWithPromoteLarge(b *testing.B) {
+	getsPerPromotes := 100
+	cache := New(Configure[int]().GetsPerPromote(int32(getsPerPromotes)))
+	defer cache.Stop()
+
+	b.ResetTimer()
+	for n := 0; n < b.N; n++ {
+		key := strconv.Itoa(n)
+		cache.Set(key, n, time.Minute)
+		for i := 0; i < getsPerPromotes; i++ {
+			cache.Get(key)
+		}
+	}
+}

configuration.go

@@ -37,7 +37,7 @@ func (c *Configuration[T]) MaxSize(max int64) *Configuration[T] {
 // requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...)
 // [16]
 func (c *Configuration[T]) Buckets(count uint32) *Configuration[T] {
-	if count == 0 || ((count&(^count+1)) == count) == false {
+	if count == 0 || !((count & (^count + 1)) == count) {
 		count = 16
 	}
 	c.buckets = int(count)
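The rewritten condition relies on a two's-complement bit trick: `^count + 1` is `-count`, and `count & -count` isolates the lowest set bit, which equals `count` only when a single bit is set, i.e. when `count` is a power of two. A small sketch of the check (hypothetical `isPowerOfTwo` helper, not part of ccache's API):

```go
package main

import "fmt"

// isPowerOfTwo reproduces the Buckets check: ^n + 1 is -n in two's
// complement, so n & (^n + 1) isolates n's lowest set bit. That equals
// n exactly when n has a single bit set, i.e. n is a power of two.
func isPowerOfTwo(n uint32) bool {
	return n != 0 && (n&(^n+1)) == n
}

func main() {
	for _, n := range []uint32{0, 1, 2, 3, 16, 24, 1024} {
		fmt.Println(n, isPowerOfTwo(n))
	}
	// 0, 3 and 24 report false; 1, 2, 16 and 1024 report true
}
```

Power-of-two bucket counts let the cache pick a bucket with a cheap bitmask (`hash & (buckets-1)`) instead of a modulo.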

configuration_test.go

@@ -16,3 +16,8 @@ func Test_Configuration_BucketsPowerOf2(t *testing.T) {
 		}
 	}
 }
+
+func Test_Configuration_Buffers(t *testing.T) {
+	assert.Equal(t, Configure[int]().DeleteBuffer(24).deleteBuffer, 24)
+	assert.Equal(t, Configure[int]().PromoteBuffer(95).promoteBuffer, 95)
+}

go.mod (2 lines changed)

@@ -1,3 +1,3 @@
 module github.com/karlseguin/ccache/v3

-go 1.18
+go 1.19

layeredbucket.go

@@ -32,7 +32,7 @@ func (b *layeredBucket[T]) getSecondaryBucket(primary string) *bucket[T] {
 	b.RLock()
 	bucket, exists := b.buckets[primary]
 	b.RUnlock()
-	if exists == false {
+	if !exists {
 		return nil
 	}
 	return bucket
@@ -41,7 +41,7 @@ func (b *layeredBucket[T]) getSecondaryBucket(primary string) *bucket[T] {
 func (b *layeredBucket[T]) set(primary, secondary string, value T, duration time.Duration, track bool) (*Item[T], *Item[T]) {
 	b.Lock()
 	bkt, exists := b.buckets[primary]
-	if exists == false {
+	if !exists {
 		bkt = &bucket[T]{lookup: make(map[string]*Item[T])}
 		b.buckets[primary] = bkt
 	}
@@ -55,7 +55,7 @@ func (b *layeredBucket[T]) delete(primary, secondary string) *Item[T] {
 	b.RLock()
 	bucket, exists := b.buckets[primary]
 	b.RUnlock()
-	if exists == false {
+	if !exists {
 		return nil
 	}
 	return bucket.delete(secondary)
@@ -65,7 +65,7 @@ func (b *layeredBucket[T]) deletePrefix(primary, prefix string, deletables chan
 	b.RLock()
 	bucket, exists := b.buckets[primary]
 	b.RUnlock()
-	if exists == false {
+	if !exists {
 		return 0
 	}
 	return bucket.deletePrefix(prefix, deletables)
@@ -75,7 +75,7 @@ func (b *layeredBucket[T]) deleteFunc(primary string, matches func(key string, i
 	b.RLock()
 	bucket, exists := b.buckets[primary]
 	b.RUnlock()
-	if exists == false {
+	if !exists {
 		return 0
 	}
 	return bucket.deleteFunc(matches, deletables)
@@ -85,7 +85,7 @@ func (b *layeredBucket[T]) deleteAll(primary string, deletables chan *Item[T]) b
 	b.RLock()
 	bucket, exists := b.buckets[primary]
 	b.RUnlock()
-	if exists == false {
+	if !exists {
 		return false
 	}

layeredcache.go

@@ -41,7 +41,7 @@ func Layered[T any](config *Configuration[T]) *LayeredCache[T] {
 		deletables:  make(chan *Item[T], config.deleteBuffer),
 		promotables: make(chan *Item[T], config.promoteBuffer),
 	}
-	for i := 0; i < int(config.buckets); i++ {
+	for i := 0; i < config.buckets; i++ {
 		c.buckets[i] = &layeredBucket[T]{
 			buckets: make(map[string]*bucket[T]),
 		}
@@ -334,7 +334,7 @@ func (c *LayeredCache[T]) gc() int {
 	}
 	prev := node.Prev
 	item := node.Value
-	if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
+	if !c.tracking || atomic.LoadInt32(&item.refCount) == 0 {
 		c.bucket(item.group).delete(item.group, item.key)
 		c.size -= item.size
 		c.list.Remove(node)

layeredcache_test.go

@@ -118,7 +118,6 @@ func Test_LayedCache_DeletesAFunc(t *testing.T) {
 		return key == "d"
 	}), 1)
 	assert.Equal(t, cache.ItemCount(), 3)
-
 }

 func Test_LayedCache_OnDeleteCallbackCalled(t *testing.T) {

list_test.go

@@ -85,11 +85,3 @@ func assertList(t *testing.T, list *List[int], expected ...int) {
 		node = node.Prev
 	}
 }
-
-func listFromInts(ints ...int) *List[int] {
-	l := NewList[int]()
-	for i := len(ints) - 1; i >= 0; i-- {
-		l.Insert(ints[i])
-	}
-	return l
-}

readme.md

@@ -1,9 +1,5 @@
 # CCache

-Generic version is on the way:
-https://github.com/karlseguin/ccache/tree/generic
-
 CCache is an LRU Cache, written in Go, focused on supporting high concurrency.

 Lock contention on the list is reduced by:
@@ -21,7 +17,7 @@ Import and create a `Cache` instance:

 ```go
 import (
-  github.com/karlseguin/ccache/v3
+  "github.com/karlseguin/ccache/v3"
 )

 // create a cache with string values
@@ -111,8 +107,19 @@ cache.Delete("user:4")
 `Clear` clears the cache. If the cache's gc is running, `Clear` waits for it to finish.

 ### Extend
 The life of an item can be changed via the `Extend` method. This will change the expiry of the item by the specified duration relative to the current time.
+
+```go
+cache.Extend("user:4", time.Minute * 10)
+
+// or
+
+item := cache.Get("user:4")
+if item != nil {
+	item.Extend(time.Minute * 10)
+}
+```

 ### Replace
 The value of an item can be updated to a new value without renewing the item's TTL or it's position in the LRU:
@@ -122,6 +129,14 @@ cache.Replace("user:4", user)
 `Replace` returns true if the item existed (and thus was replaced). In the case where the key was not in the cache, the value *is not* inserted and false is returned.

+### Setnx
+Set the value only if the key does not already exist. `Setnx` first checks whether the key exists; if it does not, the key/value pair is set in the cache. The operation is atomic.
+
+```go
+cache.Setnx("user:4", user, time.Minute * 10)
+```
+
 ### GetDropped
 You can get the number of keys evicted due to memory pressure by calling `GetDropped`:

@@ -198,4 +213,4 @@ By default, items added to a cache have a size of 1. This means that if you conf
 However, if the values you set into the cache have a method `Size() int64`, this size will be used. Note that ccache has an overhead of ~350 bytes per entry, which isn't taken into account. In other words, given a filled up cache, with `MaxSize(4096000)` and items that return a `Size() int64` of 2048, we can expect to find 2000 items (4096000/2048) taking a total space of 4796000 bytes.

 ## Want Something Simpler?
-For a simpler cache, checkout out [rcache](https://github.com/karlseguin/rcache)
+For a simpler cache, checkout out [rcache](https://github.com/karlseguin/rcache).
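The sizing arithmetic in the README's last section can be checked directly: a value that implements `Size() int64` counts its reported size against `MaxSize`, and the ~350-byte per-entry overhead is added on top of that. A sketch of the calculation (hypothetical `User` type for illustration):

```go
package main

import "fmt"

// User reports a fixed cost to the cache by implementing Size() int64;
// ccache would then count 2048 against MaxSize instead of the default 1.
// (hypothetical type, not part of ccache)
type User struct {
	Name string
}

func (u *User) Size() int64 {
	return 2048
}

func main() {
	// With MaxSize(4096000) and 2048-byte items, the cache holds
	// 4096000/2048 = 2000 items; each also carries ~350 bytes of
	// bookkeeping overhead that MaxSize does not account for.
	items := int64(4096000) / 2048
	total := items*2048 + items*350
	fmt.Println(items, total) // 2000 4796000
}
```

This matches the README's figures: 2000 items occupying roughly 4796000 bytes in total.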