Commit Graph

42 Commits

Karl Seguin
55899506d5 Add setable buffer to handle sets in order to allow a hard max size limit
Previously, items were pushed onto the frequency linked list via the promotable
buffer. As a general rule, you want your promotable buffer to be quite large,
since you don't want to block Gets. But because Set uses the same buffer, the
cache could grow to MaxSize + cap(promotables).

Sets are now "promoted" via a new "setables" buffer. These are handled exactly
the same way as before, but a separate buffer means the two can have different
capacities. The new `SetableBuffer(int)` configuration method can thus be used
to enforce a hard limit on the maximum size.
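
A hedged configuration sketch (the v3 generic API with MaxSize and PromoteBuffer is assumed; SetableBuffer is the option this commit adds; the sizes are illustrative):

	package main

	import (
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		// A large promote buffer keeps Gets from blocking, while a small
		// setable buffer bounds overshoot: the cache can now grow to roughly
		// MaxSize + cap(setables) instead of MaxSize + cap(promotables).
		cache := ccache.New(ccache.Configure[string]().
			MaxSize(5000).
			PromoteBuffer(2048).
			SetableBuffer(16))

		cache.Set("user:1", "goku", time.Minute)
	}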
2023-03-08 12:44:20 +08:00
Karl Seguin
22776be1ee Refactor control messages + Stop handling
Move the control API shared between Cache and LayeredCache into its own struct.
But keep the control logic handling separate - it requires access to the local
values, like dropped and deleteItem.

Stop is now a control message. Channels are no longer closed as part of the stop
process.
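
A minimal standalone sketch of the pattern (the names are illustrative, not ccache's actual internals): every control request, including stop, flows through one channel to the worker, and the channel itself is never closed:

	package main

	import "fmt"

	// Illustrative control messages; the real types live inside ccache.
	type controlMessage interface{ isControl() }

	type controlGC struct{ done chan struct{} }
	type controlStop struct{ done chan struct{} }

	func (controlGC) isControl()   {}
	func (controlStop) isControl() {}

	// worker owns the cache's mutable state; control requests are handled
	// inline so they can safely touch worker-local values.
	func worker(control chan controlMessage) {
		for msg := range control {
			switch m := msg.(type) {
			case controlGC:
				// run gc against worker-local state, then ack
				close(m.done)
			case controlStop:
				// stop is just another message; the control channel
				// itself is never closed
				close(m.done)
				return
			}
		}
	}

	func main() {
		control := make(chan controlMessage)
		go worker(control)

		done := make(chan struct{})
		control <- controlStop{done: done}
		<-done
		fmt.Println("worker stopped")
	}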
2023-01-04 10:40:19 +08:00
Karl Seguin
ece93bf87d On delete, always set promotions == -2 and node == nil
Also, item.promotions no longer needs to be loaded/stored using atomic
operations. Once upon a time it did; Cache was updated long ago to drop the
atomics, but LayeredCache wasn't. They are both consistent now (neither uses
atomic operations).

Fixes: https://github.com/karlseguin/ccache/issues/76
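
An illustrative sketch of the invariant (simplified types; the real Item carries more state):

	package main

	import "fmt"

	type node struct{ prev, next *node }

	// Simplified item; only the worker goroutine touches these fields,
	// which is why plain (non-atomic) assignments are enough.
	type item struct {
		promotions int32
		node       *node
	}

	// On every delete path, leave the item in the same recognizable
	// state: promotions == -2 and node == nil.
	func markDeleted(it *item) {
		it.promotions = -2
		it.node = nil
	}

	func main() {
		it := &item{node: &node{}}
		markDeleted(it)
		fmt.Println(it.promotions, it.node) // -2 <nil>
	}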
2022-12-13 20:33:58 +08:00
Karl Seguin
3452e4e261 Fix memory leak
As documented in https://github.com/karlseguin/ccache/issues/76, an entry which
is both GC'd and deleted (either via a delete or an update) will leave the
internal linked list with a nil tail (because removing the same node from the
linked list multiple times does that).

doDelete was already aware of "invalid" nodes (where item.node == nil), so the
solution seems to be as simple as setting item.node = nil during GC.
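
A standalone sketch of the failure mode and the fix (a minimal hand-rolled list stands in for ccache's internal one):

	package main

	import "fmt"

	type node struct {
		prev, next *node
		value      string
	}

	type list struct{ head, tail *node }

	func (l *list) pushFront(n *node) {
		n.next = l.head
		if l.head != nil {
			l.head.prev = n
		} else {
			l.tail = n
		}
		l.head = n
	}

	func (l *list) remove(n *node) {
		if n.prev != nil {
			n.prev.next = n.next
		} else {
			l.head = n.next
		}
		if n.next != nil {
			n.next.prev = n.prev
		} else {
			l.tail = n.prev
		}
		n.prev, n.next = nil, nil
	}

	type item struct{ node *node }

	// doDelete already skips "invalid" items; the commit's fix is to make
	// gc mark items invalid (item.node = nil) right after unlinking them.
	func doDelete(l *list, it *item) {
		if it.node == nil {
			return // gc already unlinked this node
		}
		l.remove(it.node)
		it.node = nil
	}

	func main() {
		l := &list{}
		a := &item{node: &node{value: "a"}}
		b := &item{node: &node{value: "b"}}
		l.pushFront(a.node)
		l.pushFront(b.node)

		l.remove(a.node) // gc unlinks a...
		a.node = nil     // ...and marks it invalid (the fix)

		// Without the fix, this second removal would nil l.head and l.tail.
		doDelete(l, a)
		fmt.Println(l.tail.value) // "b": the tail survives
	}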
2022-11-19 08:15:48 +08:00
Karl Seguin
b1107e7097 Merge branch 'master' into generic 2022-10-23 13:20:22 +08:00
Karl Seguin
c4d364ba51 rely more on generic inference 2022-03-03 20:47:12 +08:00
Karl Seguin
95f74b4e85 remove dependency on expect library 2022-03-03 09:01:45 +08:00
Karl Seguin
e838337a8b Initial pass at leveraging generics
Still need to replace the linked list with a generic linked list and
want to remove the dependency on the expect package.
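
A hedged sketch of what the generic API buys (assuming the shape that later shipped as v3):

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		// The type parameter replaces interface{}: Value() returns a
		// concrete int, so no type assertion is needed at call sites.
		cache := ccache.New(ccache.Configure[int]())
		cache.Set("power", 9000, time.Minute)
		if item := cache.Get("power"); item != nil {
			fmt.Println(item.Value() + 1) // typed arithmetic: 9001
		}
	}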
2022-03-02 21:26:07 +08:00
Karl Seguin
451f5a6e42 Add GetWithoutPromote
This gets the value without promoting it (sort of like a Peek)
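
A brief hedged usage sketch (v3-style API assumed):

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		cache := ccache.New(ccache.Configure[string]())
		cache.Set("user:1", "goku", time.Minute)

		// Read without promoting: the item's position in the LRU list is
		// untouched, so inspection doesn't affect eviction order.
		if item := cache.GetWithoutPromote("user:1"); item != nil && !item.Expired() {
			fmt.Println(item.Value())
		}
	}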
2022-01-24 12:51:35 +08:00
Karl Seguin
325d078286 move GC and GetSize to control commands 2021-03-20 18:57:11 +08:00
Eli Bishop
6453d332ba Merge branch 'upstream-mirror' into sync-updates
# Conflicts:
#	layeredcache_test.go
2021-03-18 20:27:43 -07:00
Eli Bishop
b2a868314a revert changes not relevant to the SyncUpdates branch 2021-03-18 19:53:21 -07:00
Eli Bishop
c1fb5be323 add SyncUpdates method to synchronize with worker thread, and use it in tests 2021-03-18 16:09:30 -07:00
Karl Seguin
df2d98315c Conditionally prune more than itemsToPrune items
It's possible, though unlikely, that c.size will exceed c.maxSize by more than
c.itemsToPrune. The most likely way for this to happen is using SetMaxSize to
dynamically shrink the cache. The gc will now always prune down to at most
c.maxSize.
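
An illustrative standalone sketch of the revised loop condition (a slice stands in for the LRU list):

	package main

	import "fmt"

	type cache struct {
		size, maxSize int64
		itemsToPrune  int
		entries       []int64 // stand-in for the LRU list; values are item sizes
	}

	// gc prunes at least itemsToPrune entries, but keeps going until
	// size <= maxSize: the case that matters after SetMaxSize shrinks
	// the limit by more than itemsToPrune.
	func (c *cache) gc() {
		pruned := 0
		for len(c.entries) > 0 && (pruned < c.itemsToPrune || c.size > c.maxSize) {
			c.size -= c.entries[0]
			c.entries = c.entries[1:]
			pruned++
		}
	}

	func main() {
		c := &cache{size: 10, maxSize: 4, itemsToPrune: 1,
			entries: []int64{1, 1, 1, 1, 1, 1, 1, 1, 1, 1}}
		c.gc()
		fmt.Println(c.size) // 4: pruned past itemsToPrune down to maxSize
	}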
2021-03-18 19:29:04 +08:00
Karl Seguin
ae1872d700 add ForEachFunc 2021-02-05 19:24:54 +08:00
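
A hedged usage sketch of ForEachFunc (v3-style signature assumed; returning false from the callback stops the iteration):

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		cache := ccache.New(ccache.Configure[int]())
		cache.Set("a", 1, time.Minute)
		cache.Set("b", 2, time.Minute)

		// Visit every item; return false to stop early.
		cache.ForEachFunc(func(key string, item *ccache.Item[int]) bool {
			fmt.Println(key, item.Value())
			return true
		})
	}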
Karl Seguin
839a17bedb Remove impossible race conditions from test
This makes the output of go test --race less noisy.
2020-08-16 19:07:52 +08:00
Karl Seguin
f3b2b9fd88 Merge pull request #48 from sargun/master
Add TrackingSet method
2020-08-14 11:03:30 +08:00
Sargun Dhillon
df91803297 Add TrackingSet method
This method reduces the likelihood of a race condition in which you add a
(tracked) item to the cache and the item you get back isn't the item you
thought it was.
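
A hedged sketch of the difference (v3-style API assumed; Track() enables reference counting):

	package main

	import (
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		cache := ccache.New(ccache.Configure[string]().Track())

		// Racy: another goroutine can Set("user:1", ...) between these two
		// calls, so the tracked item may not be the value written above.
		cache.Set("user:1", "goku", time.Minute)
		item := cache.TrackingGet("user:1")
		item.Release()

		// TrackingSet returns a tracked handle to the exact item it stored.
		item = cache.TrackingSet("user:1", "vegeta", time.Minute)
		item.Release()
	}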
2020-08-13 10:43:38 -07:00
Bjørn Erik Pedersen
a42bd4a9c8 Use typed *Item in DeleteFunc 2020-08-13 16:10:22 +02:00
Bjørn Erik Pedersen
d56665a86e Add DeleteFunc
This shares DeletePrefix's implementation.
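
A hedged usage sketch (v3-style signature assumed; like DeletePrefix, it returns the number of items deleted):

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		cache := ccache.New(ccache.Configure[int]())
		cache.Set("user:1", 1, time.Minute)
		cache.Set("user:2", 2, time.Minute)
		cache.Set("order:9", 3, time.Minute)

		// The typed *Item lets the predicate inspect values, not just keys.
		deleted := cache.DeleteFunc(func(key string, item *ccache.Item[int]) bool {
			return item.Value() > 1
		})
		fmt.Println(deleted) // 2
	}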
2020-08-12 17:47:11 +02:00
Karl Seguin
40275a30c8 Ability to dynamically SetMaxSize
To support this, rather than adding another field/channel like
`getDroppedReq`, I added a `control` channel that can be used for these
miscellaneous interactions with the worker. The control channel can also take
over for the `donec` channel.
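
A hedged usage sketch (v3-style API assumed):

	package main

	import (
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		cache := ccache.New(ccache.Configure[string]().MaxSize(10000))
		cache.Set("user:1", "goku", time.Minute)

		// Handled as a control message by the worker; the next gc prunes
		// down to the new limit.
		cache.SetMaxSize(500)
	}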
2020-06-26 20:22:30 +08:00
Karl Seguin
1a257a89d6 add GetDropped function 2020-02-05 22:05:05 +08:00
Karl Seguin
356e164dd5 preliminary work on DeletePrefix 2020-01-23 10:27:12 +08:00
Karl Seguin
3385784411 Add cache.ItemCount() int64 API 2019-01-26 12:33:50 +07:00
Alexej Kubarev
243f5c8219 Fixes #21. Calling OnDelete during gc() 2018-11-25 15:31:09 -08:00
Alexej Kubarev
7421e2d7b4 Adding support for OnDelete callback function
OnDelete will receive the item that is being deleted, to support calling a cleanup function specific to the stored item
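
A hedged usage sketch (v3-style API assumed; the callback's exact type is an assumption):

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	type conn struct{ addr string }

	func (c *conn) close() { fmt.Println("closing", c.addr) }

	func main() {
		// OnDelete fires for every removal, including gc, so per-item
		// resources can be released.
		cache := ccache.New(ccache.Configure[*conn]().
			OnDelete(func(item *ccache.Item[*conn]) {
				item.Value().close()
			}))

		cache.Set("db", &conn{addr: "10.0.0.1:5432"}, time.Minute)
		cache.Delete("db")
	}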
2018-07-16 18:20:17 +02:00
Anthony Romano
b3c864ded7 cache: make Stop() synchronous and fix races in tests
The worker goroutine running concurrently with tests would cause data race
errors when running tests with -race enabled.
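
A hedged test-style sketch of what synchronous Stop enables (v3-style API assumed):

	package main

	import (
		"testing"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func TestNoTeardownRace(t *testing.T) {
		cache := ccache.New(ccache.Configure[string]())
		cache.Set("user:1", "goku", time.Minute)

		// Stop blocks until the worker goroutine has exited, so nothing
		// runs concurrently with teardown when the test is run with -race.
		cache.Stop()
	}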
2017-02-13 15:39:24 -08:00
David Palm
3665b16e83 Better test 2016-02-05 14:34:58 +01:00
David Palm
d5307b40af Fetch does not return stale items 2016-02-03 16:07:59 +01:00
Karl Seguin
6df1e24ae3 2 changes:
1 -
Previously, we determined if an item should be promoted in the main getter
thread. This required that we protect the item.promotions variable, as both
the getter and the worker were concurrently accessing it. This change pushes
the conditional promotion to the worker (from the getter's point of view, items
are always promoted). Since only the worker ever accesses .promotions, we no
longer need to protect access to it.

2 -
The total size of the cache was being maintained by both the worker thread
and the calling code. This required that we protect access to cache.size. Now,
only the worker ever changes the size. While this simplifies much of the code,
it means that we can't easily replace an item (replacement either via Set or
Replace). A replacement now involves creating a new object and deleting the old
one (using the existing deletables and promotables infrastructure). The only
noticeable impact from this change is that, despite previous documentation,
Replace WILL cause the item to be promoted (but it still only does so if it
exists and it still doesn't extend the original TTL).
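
An illustrative standalone sketch of change 1 (simplified names, not ccache's actual code): the getter always enqueues, and only the worker reads or writes .promotions, so the field needs no synchronization:

	package main

	import "fmt"

	type item struct {
		key        string
		promotions int32 // touched only by the worker goroutine
	}

	func main() {
		promotables := make(chan *item, 1024)
		done := make(chan struct{})

		// The worker is the sole reader/writer of item.promotions.
		go func() {
			for it := range promotables {
				it.promotions++
				if it.promotions >= 3 { // promote every few accesses
					fmt.Println("promoting", it.key)
					it.promotions = 0
				}
			}
			close(done)
		}()

		it := &item{key: "user:1"}
		for i := 0; i < 4; i++ {
			promotables <- it // getter: always enqueue, never inspect
		}
		close(promotables)
		<-done
	}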
2014-12-28 11:11:32 +07:00
Karl Seguin
78e597cdae replace is size-aware 2014-11-21 15:45:11 +07:00
Karl Seguin
41ccfbb39a renamed MaxItems to MaxSize, updated readme 2014-11-21 15:06:27 +07:00
Karl Seguin
c810d4feb3 test + fix for actual size function 2014-11-21 14:59:04 +07:00
Karl Seguin
44cdb043d1 Move size tracking to a variable, away from simply using the length of the list.
This paves the way for more complex size tracking.
2014-11-20 07:03:59 +07:00
Karl Seguin
3e4d668990 blank identifier for tests 2014-11-14 07:41:22 +07:00
Karl Seguin
77765a3f11 Get now returns the *Item rather than the item's value. Get no longer actively
purges stale items.

Combining these two changes, CCache can now be used to implement both
Varnish's grace and saint modes.
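
A hedged sketch of grace mode built on these changes (v3-style API assumed): serve the stale value immediately and refresh in the background.

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	var cache = ccache.New(ccache.Configure[string]())

	func fetchUser() string { return "fresh user" } // stand-in for a slow fetch

	// Because Get returns the *Item even when it has expired, the caller
	// can serve stale data (grace) instead of blocking on a refetch.
	func getUser() string {
		item := cache.Get("user:1")
		if item == nil {
			value := fetchUser()
			cache.Set("user:1", value, time.Minute)
			return value
		}
		if item.Expired() {
			go func() { cache.Set("user:1", fetchUser(), time.Minute) }()
		}
		return item.Value() // possibly stale, but immediately available
	}

	func main() {
		fmt.Println(getUser())
	}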
2014-10-25 17:15:47 +07:00
Karl Seguin
3a00ce8f0a fixed possible nil panic when item is deleted immediately after being added 2014-10-25 12:24:52 +07:00
Karl Seguin
967200d7bc switched from gspec -> expect 2014-10-25 07:46:18 +07:00
Karl Seguin
d9d6e2b00e This is a sad commit.
How do you decide you need to purge your cache? Relying on runtime.ReadMemStats
sucks for two reasons. First, it's a stop-the-world call, which is pretty bad
in general and downright stupid for a supposedly concurrency-focused package.
Second, it only tells you the total memory usage, but most of the time you
really want to limit the amount of memory the cache itself uses.

Since there's no great way to determine the size of an object, users need to
supply the size. One way is to have any cached item satisfy a simple interface
which exposes a Size() method. With this, we can track how much memory a set
puts in and a delete releases. But it's hard for consumers to know how much
memory they're taking when storing complex objects (the entire point of an
in-process cache is to avoid having to serialize the data). Since any Size()
is bound to be a rough guess, we can simplify the entire thing by evicting
based on # of items.

This works really badly when items vary greatly in size (an HTTP cache), but
in a lot of other cases it works great. Furthermore, even for an HTTP cache,
given enough values, it should average out in most cases.

Whatever. This improves performance and should improve the usability of the
cache. It is a pretty big breaking change though.
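
For reference, a hedged sketch of the Size() interface idea the commit weighs (ccache later grew size-aware tracking; the exact shape here is an assumption):

	package main

	import (
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	// If a cached value implements Size() int64, the cache can count
	// cost in caller-defined units instead of item counts; values
	// without it would count as 1.
	type document struct{ body []byte }

	func (d *document) Size() int64 { return int64(len(d.body)) }

	func main() {
		cache := ccache.New(ccache.Configure[*document]().MaxSize(1 << 20))
		cache.Set("page:/", &document{body: make([]byte, 4096)}, time.Minute)
	}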
2014-04-08 23:36:28 +08:00
Karl Seguin
7e109b11cc removed print line to fix #1 2014-03-23 07:53:29 +08:00
Karl Seguin
c1e1fb5933 fixed tests 2014-02-28 23:50:42 +08:00
Karl Seguin
890bb18dbf The cache can now do reference counting so that the LRU algorithm is aware of
long-lived objects and won't clean them up. Oftentimes, the value returned
from a cache hit is short-lived. As a silly example:

	func GetUser(response http.ResponseWriter) {
		user := cache.Get("user:1")
		response.Write(serialize(user))
	}

It's fine if the cache's GC cleans up "user:1" while the user variable still
holds a reference to the object: the cache's reference is removed, and the real
GC will clean it up at some point after the user variable falls out of scope.

However, what if user is long-lived? Possibly stored as a reference in another
cached object? Normally (without this commit), the next time you call
cache.Get("user:1") you'll get a miss and will need to refetch the object, even
though the original user object is still somewhere in memory; you just lost
your reference to it from the cache.

By enabling the Track() configuration flag and calling TrackingGet() (instead
of Get), the cache will track that the object is in use and won't GC it, even
under great memory pressure (what's the point? something else is holding on to
it anyway). Calling item.Release() decrements the reference count. When the
count reaches 0, the item can be pruned from the cache.

The returned value is a TrackedItem which exposes:

- Value() interface{} (to get the actual cached value)
- Release() to release the item back to the cache
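
A hedged usage sketch of the flow (v3-style API assumed):

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v3"
	)

	func main() {
		// Track() enables reference counting.
		cache := ccache.New(ccache.Configure[string]().Track())
		cache.Set("user:1", "goku", time.Minute)

		// The tracked item is pinned: gc won't evict it while its
		// reference count is above zero.
		item := cache.TrackingGet("user:1")
		fmt.Println(item.Value())

		// Drop the pin; at a count of 0 the item becomes prunable.
		item.Release()
	}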
2014-02-28 20:10:42 +08:00