Move the control API shared between Cache and LayeredCache into its own struct,
but keep the control message handling separate - it needs access to worker-local
values, such as dropped and deleteItem.
Stop is now a control message. Channels are no longer closed as part of the stop
process.
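A rough sketch of the shape this takes (illustrative names, not the exact ccache
code): the shared methods hang off a small control type that both caches embed,
and Stop is just another message sent on the channel.

    package sketch

    // controlStop asks the worker to exit.
    type controlStop struct{}

    // control carries miscellaneous requests to the worker goroutine.
    type control chan interface{}

    // Stop sends a stop message; the channel itself is never closed.
    func (c control) Stop() {
        c <- controlStop{}
    }

    // Cache and LayeredCache share the control API by embedding it; the
    // message handling stays in each worker, which owns the local state.
    type Cache struct {
        control
    }

    type LayeredCache struct {
        control
    }
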
Also, item.promotions no longer needs to be loaded/stored using atomic operations.
It once did; Cache was updated long ago to drop them, but LayeredCache wasn't.
Both are now consistent (neither uses atomic operations on it).
Fixes: https://github.com/karlseguin/ccache/issues/76
As documented in https://github.com/karlseguin/ccache/issues/76, an entry that is
both GC'd and deleted (either via an explicit delete or an update) leaves the
internal linked list with a nil tail (because removing the same node from the
list multiple times does that).
doDelete was already aware of "invalid" nodes (where item.node == nil), so the
solution seems to be as simple as setting item.node = nil during GC.
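A minimal illustration of the fix, using simplified stand-ins for the real types
(the actual ccache list is its own implementation, not container/list):

    package sketch

    import "container/list"

    type Item struct {
        node *list.Element // nil once the item has been unlinked
    }

    type cache struct {
        list *list.List
    }

    // doDelete already tolerated items whose node is nil.
    func (c *cache) doDelete(item *Item) {
        if item.node == nil {
            return // already unlinked (e.g. by gc); removing again would corrupt the list
        }
        c.list.Remove(item.node)
        item.node = nil
    }

    // The gc now clears item.node after unlinking, so a later delete or
    // update of the same entry can't remove the node a second time.
    func (c *cache) gcRemove(e *list.Element) {
        e.Value.(*Item).node = nil
        c.list.Remove(e)
    }
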
It's possible, though unlikely, that c.size exceeds c.maxSize by more than a
single gc pass of c.itemsToPrune items can reclaim. The most likely case is
using SetMaxSize to shrink the cache at runtime. The gc will now always prune
down to at least c.maxSize.
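Roughly, the gc loop now looks like this (simplified types and fields, not the
exact implementation - bucket cleanup is omitted):

    package sketch

    import "container/list"

    type Item struct {
        size int64
        node *list.Element
    }

    type cache struct {
        list         *list.List
        size         int64
        maxSize      int64
        itemsToPrune int
    }

    // gc prunes at least itemsToPrune entries from the cold end of the list
    // and keeps pruning while the cache is still over maxSize, so a maxSize
    // lowered at runtime via SetMaxSize is always honored.
    func (c *cache) gc() {
        node := c.list.Back()
        for pruned := 0; node != nil; pruned++ {
            if pruned >= c.itemsToPrune && c.size <= c.maxSize {
                return
            }
            prev := node.Prev()
            item := node.Value.(*Item)
            c.size -= item.size
            c.list.Remove(node)
            item.node = nil
            node = prev
        }
    }
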
To support this, rather than adding another field/channel like `getDroppedReq`,
I added a `control` channel that can be used for these miscellaneous interactions
with the worker. The control channel can also take over the role of the `donec`
channel.
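On the worker side, the idea looks roughly like this (names are illustrative):
one channel carries both the dropped-count request and the stop signal, so
neither a dedicated `getDroppedReq` channel nor a `donec` channel is needed.

    package sketch

    type Item struct{}

    type getDropped struct {
        res chan int
    }

    type stop struct{}

    type cache struct {
        promotables chan *Item
        control     chan interface{}
    }

    func (c *cache) worker() {
        dropped := 0 // worker-local; only this goroutine touches it
        for {
            select {
            case item := <-c.promotables:
                _ = item // promote / gc, possibly incrementing dropped
            case msg := <-c.control:
                switch m := msg.(type) {
                case getDropped:
                    m.res <- dropped
                    dropped = 0
                case stop:
                    return // replaces closing a separate donec channel
                }
            }
        }
    }
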
1 -
Previously, we determined whether an item should be promoted in the main getter
thread. This required that we protect the item.promotions variable, since both
the getter and the worker accessed it concurrently. This change pushes the
conditional promotion to the worker (from the getter's point of view, items are
always promoted). Since only the worker ever accesses .promotions, we no longer
need to protect access to it.
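In sketch form (simplified names, not the exact ccache code), the split looks
like this:

    package sketch

    type Item struct {
        promotions int32
    }

    type cache struct {
        promotables    chan *Item
        getsPerPromote int32
    }

    // Getter side: always hand the item to the worker.
    func (c *cache) promote(item *Item) {
        c.promotables <- item
    }

    // Worker side: the only code that reads or writes item.promotions, so a
    // plain increment is enough.
    func (c *cache) doPromote(item *Item) bool {
        item.promotions++
        if item.promotions == c.getsPerPromote {
            item.promotions = 0
            return true // caller moves the node to the front of the list
        }
        return false
    }
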
2 -
The total size of the cache was being maintained by both the worker thread
and the calling code. This required that we protect access to cache.size. Now,
only the worker ever changes the size. While this simplifies much of the code,
it means that we can't easily replace an item in place (whether via Set or
Replace). A replacement now involves creating a new object and deleting the old
one (using the existing deletables and promotables infrastructure). The only
noticeable impact from this change is that, despite previous documentation,
Replace WILL cause the item to be promoted (but it still only does so if the
item exists and it still doesn't extend the original TTL).
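A rough sketch of the resulting flow (simplified names, no bucket locking, and
Get-driven promotions omitted): Set builds a brand-new item and queues the old
one for deletion, and only the worker ever touches cache.size.

    package sketch

    type Item struct {
        key  string
        size int64
    }

    type cache struct {
        size        int64 // mutated only by the worker goroutine
        items       map[string]*Item
        deletables  chan *Item
        promotables chan *Item
    }

    func (c *cache) Set(key string, size int64) {
        item := &Item{key: key, size: size}
        if existing, ok := c.items[key]; ok {
            // Replacing: delete the old item rather than mutating it in place.
            c.deletables <- existing
        }
        c.items[key] = item
        c.promotables <- item // side effect: the new item gets promoted
    }

    // The worker is the only code that changes the size, so no lock is needed.
    func (c *cache) worker() {
        for {
            select {
            case item := <-c.promotables:
                c.size += item.size
            case item := <-c.deletables:
                c.size -= item.size
            }
        }
    }
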