This is an attempt at fixing #81 without imposing a performance hit on the
cache's "normal" (get/set/fetch) activity. Calling "Clear" is now considerably
more expensive.
Move the control API shared between Cache and LayeredCache into its own struct. But keep the control-handling logic separate - it requires access to local values, like dropped and deleteItem.
Stop is now a control message. Channels are no longer closed as part of the stop
process.
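A minimal sketch of the shape this takes, with illustrative names (`controlStop`, `worker`, the field layout) rather than ccache's actual definitions: the worker exits when it receives a stop message, and `promotables`/`deletables` are left open so concurrent senders can never hit a closed channel.

```go
package sketch

type controlStop struct{}

type cache struct {
	control     chan interface{}
	promotables chan string
	deletables  chan string
}

// Stop sends a control message rather than closing promotables/deletables,
// so concurrent senders never write to a closed channel.
func (c *cache) Stop() {
	c.control <- controlStop{}
}

func (c *cache) worker() {
	for {
		select {
		case key := <-c.promotables:
			_ = key // promote ...
		case key := <-c.deletables:
			_ = key // delete ...
		case msg := <-c.control:
			if _, ok := msg.(controlStop); ok {
				return // exit without closing any channel
			}
		}
	}
}
```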
Also, item.promotions doesn't need to be loaded/stored using atomic. Once upon a
time it did. Cache was updated long ago to not use atomic operations on it, but
LayeredCache wasn't. They are both consistent now (they don't use atomic
operations).
Fixes: https://github.com/karlseguin/ccache/issues/76
As documented in https://github.com/karlseguin/ccache/issues/76, an entry which
is both GC'd and deleted (either via a delete or an update) will result in the
internal linked list having a nil tail (because removing the same node from the
linked list multiple times does that).
doDelete was already aware of "invalid" nodes (where item.node == nil), so the
solution seems to be as simple as setting item.node = nil during GC.
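A hedged sketch of that fix, using `container/list` as a stand-in for ccache's own linked list (the names here are illustrative, not the actual code):

```go
package sketch

import "container/list"

type Item struct {
	node *list.Element
}

type Cache struct {
	list *list.List
}

// doDelete already treats a nil node as "not in the list", so it becomes a
// no-op for items the GC has already evicted.
func (c *Cache) doDelete(item *Item) {
	if item.node == nil {
		return
	}
	c.list.Remove(item.node)
	item.node = nil
}

// gc clears item.node after removal so a later delete or update can't remove
// the same node twice and corrupt the list's tail.
func (c *Cache) gc(item *Item) {
	c.list.Remove(item.node)
	item.node = nil
}
```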
It's possible, though unlikely, for c.size to exceed c.maxSize by more than
c.itemsToPrune. This is most likely to happen when SetMaxSize is used to
dynamically shrink the cache. The gc will now always prune down to at least
c.maxSize.
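Roughly, the new behaviour looks like the sketch below: prune at least itemsToPrune entries, but keep going until size is back at or below maxSize. The field names and the `sizeOf` helper are illustrative, not the actual implementation.

```go
package sketch

import "container/list"

type cache struct {
	size         int64
	maxSize      int64
	itemsToPrune int
	list         *list.List
	sizeOf       func(el *list.Element) int64
}

// gc prunes at least itemsToPrune entries, but keeps going until size is back
// at or below maxSize (which matters after SetMaxSize shrinks the cache).
func (c *cache) gc() {
	pruned := 0
	for el := c.list.Back(); el != nil; el = c.list.Back() {
		if pruned >= c.itemsToPrune && c.size <= c.maxSize {
			return
		}
		c.size -= c.sizeOf(el)
		c.list.Remove(el)
		pruned++
	}
}
```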
In rare situations the `promotables` channel can be full. When that happens, `Get` blocks because it calls `promote`. A blocked `Get` defeats the purpose of a fast cache and may impact the application code in unexpected ways.
In this commit, the `promote` function is modified to use a non-blocking channel send.
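The non-blocking send is just a `select` with a `default` branch; the types in this sketch are illustrative:

```go
package sketch

type Item struct{}

type Cache struct {
	promotables chan *Item
}

// promote never blocks: if the buffer is full, this promotion is skipped.
func (c *Cache) promote(item *Item) {
	select {
	case c.promotables <- item:
		// handed off to the worker
	default:
		// channel full; drop the promotion so Get returns immediately
	}
}
```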
Cache.Get() accesses the expires field in the Item without any sort of locking
or atomicity. This is an issue because Item.Extend() can be called from a
separate thread and cause a race condition.
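One way to make that access safe is to keep `expires` as a Unix-nanosecond int64 and go through `sync/atomic` on every read and write. The sketch below follows that approach; the method and field names mirror the description above but are illustrative, not necessarily the real code.

```go
package sketch

import (
	"sync/atomic"
	"time"
)

type Item struct {
	expires int64 // unix nanoseconds
}

// Expired can safely be called from Get while another goroutine calls Extend.
func (i *Item) Expired() bool {
	return atomic.LoadInt64(&i.expires) < time.Now().UnixNano()
}

// Extend replaces the plain assignment with an atomic store.
func (i *Item) Extend(duration time.Duration) {
	atomic.StoreInt64(&i.expires, time.Now().Add(duration).UnixNano())
}
```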
This method reduces the likelihood of a race condition where
you can add a (tracked) item to the cache, and the item isn't
the item you thought it was.
To support this, rather than adding another field/channel like
`getDroppedReq`, I added a `control` channel that can be used for these
miscellaneous interactions with the worker. The control channel can also be
used to take over for the `donec` channel.
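A sketch of how such a control channel can work, with an illustrative `getDropped` request type standing in for the miscellaneous interactions described above:

```go
package sketch

type getDropped struct {
	res chan int
}

type controlStop struct{}

type Cache struct {
	control chan interface{}
}

// GetDropped sends a request on the control channel and waits for the reply.
func (c *Cache) GetDropped() int {
	res := make(chan int)
	c.control <- getDropped{res: res}
	return <-res
}

func (c *Cache) worker() {
	dropped := 0 // incremented by the eviction path in the real worker
	for msg := range c.control {
		switch m := msg.(type) {
		case getDropped:
			m.res <- dropped
			dropped = 0
		case controlStop:
			return // also covers what the donec channel used to signal
		}
	}
}
```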
On close, drain the deletables channel (unblocking any waiting goroutines) and
then close deletables. Like Gets and Sets against the now-closed promotables,
this means any subsequent Deletes (sends to deletables) will panic.
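A sketch of that shutdown path (types are illustrative):

```go
package sketch

type Item struct{}

type Cache struct {
	deletables chan *Item
}

// On close: drain deletables so any goroutine blocked sending to it is
// released, then close the channel. A Delete issued after this will panic.
func (c *Cache) drainAndCloseDeletables() {
	for {
		select {
		case <-c.deletables:
			// discard pending deletes, unblocking waiting senders
		default:
			close(c.deletables)
			return
		}
	}
}
```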
I'm still not sure that this is ccache's responsibility. If a client closes a DB
connection, we'd expect subsequent operations against the now-closed connection
to fail. My main problems with defer'ing a recover are:
1 - the performance overhead on every single get / set / delete
2 - not communicating with the caller that the requested operation is no longer
valid.
1 -
Previously, we determined if an item should be promoted in the main getter
thread. This required that we protect the item.promotions variable, as both
the getter and the worker were concurrently accessing it. This change pushes
the conditional promotion to the worker (from the getter's point of view, items
are always promoted). Since only the worker ever accesses .promotions, we no
longer need to protect access to it.
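A sketch of the split, with illustrative names; the point is that only the worker goroutine ever reads or writes `promotions`:

```go
package sketch

type Item struct {
	promotions int32
}

type Cache struct {
	promotables    chan *Item
	getsPerPromote int32
}

// Getter side: always hand the item to the worker; no decision is made here,
// so item.promotions is never touched outside the worker goroutine.
func (c *Cache) promote(item *Item) {
	c.promotables <- item
}

// Worker side: the only place promotions is read or written.
func (c *Cache) doPromote(item *Item) bool {
	item.promotions++
	if item.promotions == c.getsPerPromote {
		item.promotions = 0
		return true // move the item to the front of the LRU list
	}
	return false
}
```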
2 -
The total size of the cache was being maintained by both the worker thread
and the calling code. This required that we protect access to cache.size. Now,
only the worker ever changes the size. While this simplifies much of the code,
it means that we can't easily replace an item (replacement either via Set or
Replace). A replacement now involves creating a new object and deleting the old
one (using the existing deletables and promotable infrastructure). The only
noticeable impact from this change is that, despite previous documentation,
Replace WILL cause the item to be promoted (but it still only does so if it
exists and it still doesn't extend the original TTL).
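A sketch of what replacement looks like when only the worker may change `size` (the names and the exact bookkeeping are illustrative):

```go
package sketch

type Item struct {
	key  string
	size int64
}

type Cache struct {
	size        int64 // only ever read/written by the worker goroutine
	promotables chan *Item
	deletables  chan *Item
}

// Replacing an entry: create a new item, retire the old one through
// deletables, and send the new one through promotables. The caller never
// touches c.size, and the new item ends up promoted as a side effect.
func (c *Cache) replace(old *Item, key string, size int64) *Item {
	newItem := &Item{key: key, size: size}
	if old != nil {
		c.deletables <- old
	}
	c.promotables <- newItem
	return newItem
}

// Worker: the single owner of size.
func (c *Cache) worker() {
	for {
		select {
		case item := <-c.promotables:
			c.size += item.size // real code distinguishes new items from re-promotions
		case item := <-c.deletables:
			c.size -= item.size
		}
	}
}
```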