Commit Graph

64 Commits

Karl Seguin
a25552af28 Attempt to make Clear concurrency-safe
This is an attempt at fixing #81 without imposing a performance hit on the
cache's "normal" (get/set/fetch) activity. Calling "Clear" is now considerably
more expensive.
2023-04-14 15:27:39 +08:00
Karl Seguin
22776be1ee Refactor control messages + Stop handling
Move the control API shared between Cache and LayeredCache into its own struct.
But keep the control logic handling separate - it requires access to the local
values, like dropped and deleteItem.

Stop is now a control message. Channels are no longer closed as part of the stop
process.
2023-01-04 10:40:19 +08:00
Karl Seguin
ece93bf87d On delete, always set promotions == -2 and node == nil
Also, item.promotions doesn't need to be loaded/stored using atomic. Once upon a
time it did. Cache was updated long ago to not use atomic operations on it, but
LayeredCache wasn't. They are both consistent now (they don't use atomic
operations).

Fixes: https://github.com/karlseguin/ccache/issues/76
2022-12-13 20:33:58 +08:00
Karl Seguin
3452e4e261 Fix memory leak
As documented in https://github.com/karlseguin/ccache/issues/76, an entry which
is both GC'd and deleted (either via a delete or an update) will result in the
internal linked list having a nil tail (because removing the same node from the
linked list multiple times does that).

doDelete was already aware of "invalid" nodes (where item.node == nil), so the
solution seems to be as simple as setting item.node = nil during GC.
2022-11-19 08:15:48 +08:00
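A minimal sketch of the shape of that fix, using the standard library's container/list as a stand-in and illustrative field names rather than ccache's actual internals:

```go
import "container/list"

type Item struct {
	promotions int32
	node       *list.Element // nil once the item has been unlinked from the list
}

type Cache struct {
	list *list.List
}

// The GC unlinks the node AND clears item.node, so a later delete or update of
// the same item sees an already-invalid node and doesn't remove it a second time.
func (c *Cache) gcItem(item *Item) {
	c.list.Remove(item.node)
	item.node = nil
}

func (c *Cache) doDelete(item *Item) {
	if item.node == nil {
		item.promotions = -2 // nothing left to unlink; just ensure it's never promoted
		return
	}
	c.list.Remove(item.node)
	item.node = nil
}
```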
Karl Seguin
b1107e7097 Merge branch 'master' into generic 2022-10-23 13:20:22 +08:00
Karl Seguin
516d62ed5f add generic list 2022-03-03 19:04:16 +08:00
Karl Seguin
e838337a8b Initial pass at leveraging generics
Still need to replace the linked list with a generic linked list and
want to remove the dependency on the expect package.
2022-03-02 21:26:07 +08:00
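For readers new to Go generics, a minimal sketch of the kind of generic doubly linked list this commit points toward (illustrative names, not ccache's final types):

```go
// A doubly linked list that can hold cache items of any value type.
type Node[T any] struct {
	Value      T
	Prev, Next *Node[T]
}

type List[T any] struct {
	Head, Tail *Node[T]
}

// Insert pushes a new node onto the front (most recently used end) of the list.
func (l *List[T]) Insert(value T) *Node[T] {
	node := &Node[T]{Value: value, Next: l.Head}
	if l.Head != nil {
		l.Head.Prev = node
	} else {
		l.Tail = node
	}
	l.Head = node
	return node
}
```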
Karl Seguin
451f5a6e42 Add GetWithoutPromote
This gets the value without promoting it (sort of like a Peek).
2022-01-24 12:51:35 +08:00
Karl Seguin
ef4bd54683 Always promote on set
It's fine to conditionally promote on Get, to avoid blocking the get
(see: https://github.com/karlseguin/ccache/pull/52), but a set _must_
promote, else we can end up with an entry in our buckets that isn't in our
list.

issue: https://github.com/karlseguin/ccache/issues/64
2021-06-13 10:15:28 +08:00
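A hedged fragment showing the invariant (names like `bucket` and `promotables` are assumptions drawn from the surrounding commits, not the exact code):

```go
// Set makes the item reachable through the buckets, so the worker MUST also be
// told about it via a blocking send; otherwise the buckets and the LRU list can
// drift apart. Only the Get path is allowed to drop a promotion when the channel
// is full.
func (c *Cache) set(key string, item *Item) {
	c.bucket(key).set(key, item)
	c.promotables <- item
}
```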
Karl Seguin
325d078286 move GC and GetSize to control commands 2021-03-20 18:57:11 +08:00
Eli Bishop
6453d332ba Merge branch 'upstream-mirror' into sync-updates
# Conflicts:
#	layeredcache_test.go
2021-03-18 20:27:43 -07:00
Eli Bishop
c1fb5be323 add SyncUpdates method to synchronize with worker thread, and use it in tests 2021-03-18 16:09:30 -07:00
Karl Seguin
df2d98315c Conditionally prune more than itemsToPrune items
It's possible, though unlikely, that c.size will be larger than
c.maxSize by more than c.itemsToPrune. The most likely case where this
can happen is when SetMaxSize is used to dynamically shrink the cache.
The gc will now keep pruning until the size is back down to c.maxSize.
2021-03-18 19:29:04 +08:00
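A hedged sketch of the adjusted gc loop; the field names (`list.Tail`, `node.Prev`, `item.size`) are illustrative assumptions:

```go
// Prune at least itemsToPrune items, and keep going while the cache is still
// over its (possibly recently shrunk) max size.
func (c *Cache) gc() {
	dropped := 0
	node := c.list.Tail
	for node != nil {
		if dropped >= c.itemsToPrune && c.size <= c.maxSize {
			return
		}
		prev := node.Prev
		item := node.Value
		c.bucket(item.key).delete(item.key)
		c.size -= item.size
		c.list.Remove(node)
		dropped++
		node = prev
	}
}
```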
Karl Seguin
f28a7755a1 document the simplicity of fetch 2021-03-18 18:45:44 +08:00
Karl Seguin
ae1872d700 add ForEachFunc 2021-02-05 19:24:54 +08:00
gopalmor
36d03ce88e Avoid blocking if promotables channel is full.
In rare situations the `promotables` channel can be full. When that happens, the `Get` function blocks because it calls the `promote` function. A blocked `Get` defeats the purpose of a fast cache response and may impact the application code in unexpected ways.
In this commit, the `promote` function is modified to use a non-blocking channel send (sketched below).
2020-12-09 23:05:20 -08:00
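A sketch of the non-blocking send this describes, assuming a `promotables` channel consumed by the worker goroutine:

```go
// promote hands the item to the worker only if it can do so without waiting; when
// the channel is full the promotion is simply skipped, so Get never blocks here.
func (c *Cache) promote(item *Item) {
	select {
	case c.promotables <- item:
	default:
		// promotables is full: drop this promotion rather than stall the caller
	}
}
```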
Karl Seguin
b779edb2ca Merge pull request #50 from imxyb/remove-int
remove useless int func
2020-11-18 18:01:52 +08:00
Steven Santos Erenst
97e7acb2af Fix race condition in Cache.Get()
Cache.Get() accesses the expires field in the Item without any sort of locking
or atomicity. This is an issue because Item.Extend() can be called from a
separate thread and cause a race condition.
2020-11-18 01:56:24 -08:00
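A minimal sketch of how such a race is usually closed with sync/atomic; the field and method names mirror the commit message but may not match the final code exactly:

```go
import (
	"sync/atomic"
	"time"
)

type Item struct {
	expires int64 // unix nanoseconds, only ever read and written atomically
}

func (i *Item) Expired() bool {
	return atomic.LoadInt64(&i.expires) < time.Now().UnixNano()
}

func (i *Item) Extend(duration time.Duration) {
	atomic.StoreInt64(&i.expires, time.Now().Add(duration).UnixNano())
}
```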
imxyb
5fe99ab07a remove useless int func 2020-11-02 22:23:14 +08:00
Karl Seguin
1189f7f993 make Clear thread-safe 2020-08-16 21:12:47 +08:00
Karl Seguin
f3b2b9fd88 Merge pull request #48 from sargun/master
Add TrackingSet method
2020-08-14 11:03:30 +08:00
Sargun Dhillon
df91803297 Add TrackingSet method
This method reduces the likelihood of a race condition where
you can add a (tracked) item to the cache, and the item isn't
the item you thought it was.
2020-08-13 10:43:38 -07:00
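A hedged usage sketch (the exact signature may differ): TrackingSet stores the value and returns it already tracked, so there is no window between a plain Set and a later TrackingGet in which another goroutine could swap the entry:

```go
item := cache.TrackingSet("user:4", user, time.Minute)
defer item.Release() // the entry won't be reaped while it is still tracked
```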
Bjørn Erik Pedersen
a42bd4a9c8 Use typed *Item in DeleteFunc 2020-08-13 16:10:22 +02:00
Bjørn Erik Pedersen
d56665a86e Add DeleteFunc
This shares DeletePrefix's implementation.
2020-08-12 17:47:11 +02:00
Karl Seguin
40275a30c8 Ability to dynamically SetMaxSize
To support this, rather than adding another field/channel like
`getDroppedReq`, I added a `control` channel that can be used for these
miscellaneous interactions with the worker. The control channel can also be
used to take over for the `donec` channel.
2020-06-26 20:22:30 +08:00
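A hedged sketch of the control-channel idea; the message and field names are illustrative, not the library's exact types:

```go
type controlSetMaxSize struct{ size int64 }
type controlGetDropped struct{ res chan int }

// The single worker goroutine owns all internal state, so miscellaneous requests
// are funneled through one control channel instead of a dedicated channel per feature.
func (c *Cache) worker() {
	for {
		select {
		case item := <-c.promotables:
			c.doPromote(item)
		case msg := <-c.control:
			switch m := msg.(type) {
			case controlSetMaxSize:
				c.maxSize = m.size
				if c.size > c.maxSize {
					c.gc()
				}
			case controlGetDropped:
				m.res <- c.dropped
				c.dropped = 0
			}
		}
	}
}
```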
Karl Seguin
1a257a89d6 add GetDropped function 2020-02-05 22:05:05 +08:00
Karl Seguin
356e164dd5 preliminary work on DeletePrefix 2020-01-23 10:27:12 +08:00
Karl Seguin
3385784411 Add cache.ItemCount() int64 API 2019-01-26 12:33:50 +07:00
Alexej Kubarev
243f5c8219 Fixes #21. Calling OnDelete during gc() 2018-11-25 15:31:09 -08:00
Alexej Kubarev
7421e2d7b4 Adding support for OnDelete callback function
OnDelete will receive the item that is being processed for deletion, to support calling a cleanup function specific to the stored item
2018-07-16 18:20:17 +02:00
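A hedged usage sketch of the callback (the configuration method name is taken from the commit message; the exact signature is an assumption):

```go
cache := ccache.New(ccache.Configure().OnDelete(func(item *ccache.Item) {
	// release whatever resource this particular item owns (file handle, connection, ...)
}))
```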
Anthony Romano
b3c864ded7 cache: make Stop() synchronous and fix races in tests
The worker goroutine running concurrently with tests would cause data race errors
when running tests with -race enabled.
2017-02-13 15:39:24 -08:00
Matthew Dale
162d4e27ca Use nanosecond-resolution TTL instead of second-resolution. 2016-07-07 15:32:49 -07:00
David Palm
d5307b40af Fetch does not return stale items 2016-02-03 16:07:59 +01:00
Karl Seguin
74754c77cc Partially fixing #3.
On close, drain the deletables channel (unblocking any waiting goroutines) and
close deletables. As with Gets and Sets against the now-closed promotables,
any subsequent Deletes against deletables will panic.

I'm still not sure that this is ccache's responsibility. If a client closes a DB
connection, we'd expect subsequent operations against the now-closed connection
to fail. My main problems with defer'ing a recover are:

1 - the performance overhead on every single get / set / delete
2 - not communicating to the caller that the requested operation is no longer
    valid.
2015-07-26 11:05:48 +08:00
Karl Seguin
bfa769c6b6 add Stop method to stop the background worker and make it possible for the GC to reap the object 2015-07-23 22:24:50 +08:00
Karl Seguin
41f1a3cfcb gonna be one of those days... 2015-01-07 08:12:17 +07:00
Karl Seguin
f9c7f14b7b Fetch's API wasn't usable. It returned different value types based on whether
the fetch was needed or not. It now behaves consistently (with itself and with
Get), returning an *Item.
2015-01-07 08:09:39 +07:00
Karl Seguin
6df1e24ae3 2 changes:
1 -
Previously, we determined if an item should be promoted in the main getter
thread. This required that we protect the item.promotions variable, as both
the getter and the worker were concurrently accessing it. This change pushes
the conditional promotion to the worker (from the getter's point of view, items
are always promoted). Since only the worker ever accesses .promotions, we no
longer need to protect access to it.

2 -
The total size of the cache was being maintained by both the worker thread
and the calling code. This required that we protect access to cache.size. Now,
only the worker ever changes the size. While this simplifies much of the code,
it means that we can't easily replace an item in place (whether via Set or
Replace). A replacement now involves creating a new object and deleting the old
one (using the existing deletables and promotables infrastructure). The only
noticeable impact from this change is that, despite previous documentation,
Replace WILL cause the item to be promoted (but it still only does so if the
item exists, and it still doesn't extend the original TTL).
2014-12-28 11:11:32 +07:00
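A compact sketch of change 1's effect on the worker; names like `getsPerPromote` are illustrative assumptions:

```go
// Only the worker ever touches item.promotions, so it needs no synchronization.
func (c *Cache) doPromote(item *Item) bool {
	item.promotions++
	if item.promotions < c.getsPerPromote {
		return false // not accessed often enough yet; leave it where it is
	}
	item.promotions = 0
	c.list.MoveToFront(item.node)
	return true
}
```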
Karl Seguin
557d56ec6f guard all access to item.promotions 2014-12-28 10:35:20 +07:00
Karl Seguin
78e597cdae replace is size-aware 2014-11-21 15:45:11 +07:00
Karl Seguin
41ccfbb39a renamed MaxItems to MaxSize, updated readme 2014-11-21 15:06:27 +07:00
Karl Seguin
c810d4feb3 test + fix for actual size function 2014-11-21 14:59:04 +07:00
Karl Seguin
ff8727e847 initial work on tracking cache by item size 2014-11-21 14:39:25 +07:00
Karl Seguin
44cdb043d1 Move size tracking to a variable, away from simply using the length of the list.
This paves the way for more complex size tracking.
2014-11-20 07:03:59 +07:00
Karl Seguin
df2f8eb082 Added documentation.
Bucket and LayeredBucket are no longer exported.
2014-11-14 07:56:24 +07:00
Karl Seguin
cc0395a391 added replace method 2014-11-13 22:20:12 +07:00
Karl Seguin
5e131cc17c Buckets must be a power of 2. Move from % to & for determining the bucket. 2014-11-02 18:09:49 +07:00
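A small sketch of the trick (the hash function and bucket count are assumptions): with a power-of-2 bucket count, `hash % count` can be replaced by `hash & (count - 1)`, which selects the same bucket more cheaply:

```go
import "hash/fnv"

const bucketCount = 16 // must be a power of 2 for the mask below to be valid

func (c *Cache) bucket(key string) *bucket {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.buckets[h.Sum32()&(bucketCount-1)]
}
```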
Karl Seguin
624c03cd3e delete and deleteall return boolean to indicate if delete found the item 2014-10-27 08:30:48 +07:00
Karl Seguin
77765a3f11 Get now returns the *Item rather than the item's value. Get no longer actively
purges stale items.

Combining these two changes, CCache can now be used to implement both Varnish's
grace and saint modes.
2014-10-25 17:15:47 +07:00
Karl Seguin
3a00ce8f0a fixed possible nil panic when item is deleted immediately after being added 2014-10-25 12:24:52 +07:00