DigitalRuby.SimpleCache
1.0.14
See the version list below for details.
.NET CLI: dotnet add package DigitalRuby.SimpleCache --version 1.0.14
Package Manager: NuGet\Install-Package DigitalRuby.SimpleCache -Version 1.0.14
PackageReference: <PackageReference Include="DigitalRuby.SimpleCache" Version="1.0.14" />
Paket CLI: paket add DigitalRuby.SimpleCache --version 1.0.14
Script & Interactive: #r "nuget: DigitalRuby.SimpleCache, 1.0.14"
Cake Addin: #addin nuget:?package=DigitalRuby.SimpleCache&version=1.0.14
Cake Tool: #tool nuget:?package=DigitalRuby.SimpleCache&version=1.0.14
<h1 align="center">SimpleCache</h1>
SimpleCache removes the headache and pain of getting caching right in .NET.
Features:
- Simple and intuitive API using generics and tasks.
- Cache storm prevention using GetOrCreateAsync. Your factory is guaranteed to execute only once per key, regardless of how many callers stack on it.
- Exceptions are not cached.
- Thread safe.
- Three layers: RAM, disk and redis. Disk and redis can be disabled if desired.
- Null and memory versions of both file and redis caches available for mocking.
- Excellent test coverage.
- Optimized usage of all your resources. Simple cache has three layers to give you maximum performance: RAM, disk and redis.
- Built-in json-lz4 serializer for file and redis caching, giving smaller values with minimal implementation pain.
- You can create your own serializer if you want to use protobuf or other compression options.
Setup and Configuration
using DigitalRuby.SimpleCache;
// create your builder, add simple cache
var builder = WebApplication.CreateBuilder(args);
// bind to IConfiguration, see the DigitalRuby.SimpleCache.Sandbox project appsettings.json for an example
builder.Services.AddSimpleCache(builder.Configuration);
// you can also pass a strongly typed configuration
builder.Services.AddSimpleCache(new SimpleCacheConfiguration
{
// fill in values here
});
The configuration options are:
{
"DigitalRuby.SimpleCache":
{
/*
optional, cache key prefix, by default the entry assembly name is used
you can set this to an empty string to share keys between services that are using the same redis cluster
*/
"KeyPrefix": "sandbox",
/* optional, override max memory size (in megabytes). Default is 1024. */
"MaxMemorySize": 2048,
/* optional redis connection string */
"RedisConnectionString": "localhost:6379",
/*
optional, override file cache directory, set to empty to not use file cache (recommended if not on SSD)
the default is %temp% which means to use the temp directory
this example assumes running on Windows, for production, use an environment variable or just leave off for default of %temp%.
*/
"FileCacheDirectory": "c:/temp",
/* optional, override the file cache cleanup threshold (0-100 percent). default is 15 */
"FileCacheFreeSpaceThreshold": 10,
/*
optional, override the default json-lz4 serializer with your own class that implements DigitalRuby.SimpleCache.ISerializer
the serializer is used to convert objects to bytes for the file and redis caches
this should be an assembly qualified type name
*/
"SerializerType": "DigitalRuby.SimpleCache.JsonSerializer, DigitalRuby.SimpleCache"
}
}
If the RedisConnectionString is empty, no redis cache is used and no key change notifications are sent, which prevents automatic purging of cache values when they are modified.
For production usage, you should load this value from an environment variable.
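For example, here is a minimal sketch of pulling the connection string from an environment variable and passing it through the strongly typed configuration. The variable name REDIS_CONNECTION_STRING is hypothetical, and it assumes SimpleCacheConfiguration exposes the same RedisConnectionString property that the json binding uses:

// minimal sketch: read the redis connection string from an environment variable
// (REDIS_CONNECTION_STRING is a hypothetical name) instead of committing it to appsettings.json
builder.Services.AddSimpleCache(new SimpleCacheConfiguration
{
    RedisConnectionString = Environment.GetEnvironmentVariable("REDIS_CONNECTION_STRING") ?? string.Empty
    // other options can still be set in code or left at their defaults
});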
Usage
You can inject the following interface into your constructors to use the layered cache:
/// <summary>
/// Layered cache interface. A layered cache aggregates multiple caches, such as memory, file and distributed cache (redis, etc.).<br/>
/// Internally, keys are prefixed with the entry assembly name and the type full name. You can change this prefix by specifying a KeyPrefix in the configuration.<br/>
/// </summary>
public interface ILayeredCache : IDisposable
{
/// <summary>
/// Get or create an item from the cache.
/// </summary>
/// <typeparam name="T">Type of item</typeparam>
/// <param name="key">Cache key</param>
/// <param name="factory">Factory method to create the item if no item is in the cache for the key. This factory is guaranteed to execute only one per key.<br/>
/// Inside your factory, you should set the CacheParameters on the GetOrCreateAsyncContext to a duration and size tuple: (TimeSpan duration, int size)</param>
/// <param name="cancelToken">Cancel token</param>
/// <returns>Task of return of type T</returns>
Task<T> GetOrCreateAsync<T>(string key, Func<GetOrCreateAsyncContext, Task<T>> factory, CancellationToken cancelToken = default);
/// <summary>
/// Attempts to retrieve value of T by key.
/// </summary>
/// <typeparam name="T">Type of object to get</typeparam>
/// <param name="key">Cache key</param>
/// <param name="cancelToken">Cancel token</param>
/// <returns>Result of type T or null if nothing found for the key</returns>
Task<T?> GetAsync<T>(string key, CancellationToken cancelToken = default);
/// <summary>
/// Sets value T by key.
/// </summary>
/// <typeparam name="T">Type of object</typeparam>
/// <param name="key">Cache key to set</param>
/// <param name="value">Value to set</param>
/// <param name="cacheParam">Cache parameters</param>
/// <param name="cancelToken">Cancel token</param>
/// <returns>Task</returns>
Task SetAsync<T>(string key, T value, CacheParameters cacheParam, CancellationToken cancelToken = default);
/// <summary>
/// Attempts to delete an entry of T type by key. If there is no key found, nothing happens.
/// </summary>
/// <typeparam name="T">The type of object to delete</typeparam>
/// <param name="key">The key to delete</param>
/// <param name="cancelToken">Cancel token</param>
/// <returns>Task</returns>
Task DeleteAsync<T>(string key, CancellationToken cancelToken = default);
}
Your cache key will be modified by the type parameter, <T>. This means you can use duplicate keys for different types without collision.
Cache keys are also prefixed by the entry assembly name by default; this can be changed in the configuration.
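As a sketch of typical usage, the cache can be injected into one of your services. The WeatherService class and FetchForecastAsync method below are hypothetical; only ILayeredCache and the context members documented above are from the library:

// minimal sketch of constructor injection; WeatherService and FetchForecastAsync are hypothetical
public sealed class WeatherService
{
    private readonly ILayeredCache cache;

    public WeatherService(ILayeredCache cache) => this.cache = cache;

    public Task<string> GetForecastAsync(string city, CancellationToken cancelToken) =>
        cache.GetOrCreateAsync<string>($"forecast-{city}", async context =>
        {
            var forecast = await FetchForecastAsync(city, context.CancelToken);

            // duration and approximate size for the cached value
            context.CacheParameters = (TimeSpan.FromMinutes(5), forecast.Length * 2);
            return forecast;
        }, cancelToken);

    private static Task<string> FetchForecastAsync(string city, CancellationToken cancelToken) =>
        Task.FromResult($"Sunny in {city}"); // stand-in for a real remote call
}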
The CacheParameters struct can be simplified by passing just a TimeSpan if you don't know the size, or a tuple of (TimeSpan, int) for a duration, size pair.
If you do know the approximate size of your object, specify the size to help the memory compaction background task be more accurate.
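For example, a sketch of SetAsync calls relying on those conversions, assuming an injected ILayeredCache named cache, a cancellation token named stoppingToken, and a hypothetical key and value:

// duration only: the size is unknown
await cache.SetAsync("greeting", "hello world", TimeSpan.FromMinutes(5), stoppingToken);

// duration plus approximate size in bytes, which helps memory compaction
await cache.SetAsync("greeting", "hello world", (TimeSpan.FromMinutes(5), 24), stoppingToken);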
GetOrCreateAsync example:
var result = await cache.GetOrCreateAsync<string>(key, async context =>
{
    // if you need the key, use context.Key to avoid capturing the key parameter, which saves a closure allocation
    var value = await MyExpensiveFunctionThatReturnsAStringAsync();

    // set the cache duration and size, this is an important step to not miss
    // the tuple is (duration, size)
    context.CacheParameters = (TimeSpan.FromMinutes(0.5), value.Length * 2);

    // you can also set them individually
    context.Duration = TimeSpan.FromMinutes(0.5);
    context.Size = value.Length * 2;

    // the context also has a CancelToken property if you need it
    return value;
}, stoppingToken);
Serialization
The configuration options mention a serializer. The default serializer is a json-lz4 serializer that gives a balance of ease of use, performance and smaller cache value sizes.
You can create your own serializer if desired, or use the json serializer that does not compress, as is shown in the configuration example.
To supply your own serializer, implement the following interface:
/// <summary>
/// Interface for serializing cache objects to/from bytes
/// </summary>
public interface ISerializer
{
/// <summary>
/// Deserialize
/// </summary>
/// <param name="bytes">Bytes to deserialize</param>
/// <param name="type">Type of object to deserialize to</param>
/// <returns>Deserialized object or null if bytes is null or empty</returns>
object? Deserialize(byte[]? bytes, Type type);
/// <summary>
/// Deserialize using generic type parameter
/// </summary>
/// <typeparam name="T">Type of object to deserialize</typeparam>
/// <param name="bytes">Bytes</param>
/// <returns>Deserialized object or null if bytes is null or empty</returns>
T? Deserialize<T>(byte[]? bytes) => (T?)Deserialize(bytes, typeof(T));
/// <summary>
/// Serialize an object
/// </summary>
/// <param name="obj">Object to serialize</param>
/// <returns>Serialized bytes or null if obj is null</returns>
byte[]? Serialize(object? obj);
/// <summary>
/// Serialize using generic type parameter
/// </summary>
/// <typeparam name="T">Type of object</typeparam>
/// <param name="obj">Object to serialize</param>
/// <returns>Serialized bytes or null if obj is null</returns>
byte[]? Serialize<T>(T? obj) => Serialize(obj);
/// <summary>
/// Get a short description for the serializer, i.e. json or json-lz4.
/// </summary>
string Description { get; }
}
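As an illustration, here is a minimal sketch of a custom serializer that uses System.Text.Json with no compression. The class name is hypothetical, the generic members fall back to the interface's default implementations, and it would be registered via the SerializerType configuration value as an assembly qualified type name:

using System;
using System.Text.Json;
using DigitalRuby.SimpleCache;

// minimal sketch: plain json serializer (no compression); only the non-default interface
// members need to be implemented, the generic ones come from the interface defaults
public sealed class MyJsonSerializer : ISerializer
{
    public string Description => "json";

    public object? Deserialize(byte[]? bytes, Type type) =>
        bytes is null || bytes.Length == 0 ? null : JsonSerializer.Deserialize(bytes, type);

    public byte[]? Serialize(object? obj) =>
        obj is null ? null : JsonSerializer.SerializeToUtf8Bytes(obj, obj.GetType());
}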
Layers
Simple cache uses layers, just like a modern CPU, which has multiple levels of cache.
Using multiple layers allows ever increasing amounts of data to be stored at slightly slower retrieval times.
Memory cache
The first layer (L1), the memory cache portion of simple cache uses IMemoryCache. This will be registered for you automatically in the services collection.
.NET will compact the memory cache based on your settings from the configuration.
File cache
The second layer (L2), the file cache portion of simple cache uses the temp directory by default. You can override this.
Keys are hashed using Blake2B and converted to base64.
A background file cleanup task runs to ensure you do not overrun disk space.
If you are not running on an SSD, it is recommended to disable the file cache by specifying an empty string for the file cache directory.
Redis cache
The third and final layer, the redis cache uses StackExchange.Redis nuget package.
The redis layer detects when there is a failover and failback in a cluster and handles this gracefully.
Keyspace notifications are sent to keep the cache in sync between machines. Run CONFIG SET notify-keyspace-events KEA on your redis servers for this to take effect; simple cache will attempt to do this as well.
Sometimes you need to purge your entire cache; do this with caution. To make simple cache clear its memory and file caches, set a redis key named __flushall__ with any value, wait a second, then execute a FLUSHALL or FLUSHDB command.
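As an illustration only, here is a sketch of that purge sequence using the StackExchange.Redis client directly. The connection string and one second delay are assumptions, and flushing requires admin access on the connection:

using StackExchange.Redis;

// allowAdmin is required for FlushDatabaseAsync
var connection = await ConnectionMultiplexer.ConnectAsync("localhost:6379,allowAdmin=true");
var db = connection.GetDatabase();

// signal simple cache instances to clear their memory and file caches
await db.StringSetAsync("__flushall__", "1");

// give subscribers a moment to process the keyspace notification
await Task.Delay(TimeSpan.FromSeconds(1));

// then flush redis itself
var server = connection.GetServer(connection.GetEndPoints()[0]);
await server.FlushDatabaseAsync();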
As a bonus, a distributed lock factory is provided to acquire locks that need to be synchronized across machines.
You can inject this interface into your constructors for distributed locking:
/// <summary>
/// Interface for distributed locks
/// </summary>
public interface IDistributedLockFactory
{
/// <summary>
/// Attempt to acquire a distributed lock
/// </summary>
/// <param name="key">Lock key</param>
/// <param name="lockTime">Duration to hold the lock before it auto-expires. Set this to the maximum possible duration you think your code might hold the lock.</param>
/// <param name="timeout">Time out to acquire the lock or default to only make one attempt to acquire the lock</param>
/// <returns>The lock or null if the lock could not be acquired</returns>
Task<IAsyncDisposable?> TryAcquireLockAsync(string key, TimeSpan lockTime, TimeSpan timeout = default);
}
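For example, a minimal sketch of acquiring and releasing a lock, assuming an injected IDistributedLockFactory named lockFactory; the lock key and RefreshOrdersAsync method are hypothetical:

// minimal sketch: acquire a distributed lock, do the work, release on dispose
await using var distributedLock = await lockFactory.TryAcquireLockAsync(
    "orders:refresh",                    // lock key (hypothetical)
    lockTime: TimeSpan.FromSeconds(30),  // maximum time the work might hold the lock
    timeout: TimeSpan.FromSeconds(5));   // how long to wait to acquire it

if (distributedLock is not null)
{
    // only one machine at a time runs this block
    await RefreshOrdersAsync();
}
// disposing the lock (via await using) releases it early if the work finishes sooner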
ISystemClock
Simple cache uses a ClockHandler class that implements the ISystemClock and IClockHandler interfaces.
You can inject your own implementation of these interfaces if you have different needs, for example in tests.
Exceptions and null
Simple cache does not cache exceptions and does not cache null. If you must cache these types of objects, please wrap them in an object that can go in the cache.
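For instance, here is a minimal sketch of such a wrapper; the type name is hypothetical:

// minimal sketch: a cacheable wrapper so "no result" can be stored without caching null itself
public sealed class CachedResult<T>
{
    public T? Value { get; init; }
    public bool HasValue { get; init; }

    public static CachedResult<T> Of(T? value) => new() { Value = value, HasValue = value is not null };
}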
Thanks for reading!
-- Jeff
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net6.0 is compatible. net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos, net6.0-windows, net7.0, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos and net8.0-windows were computed. |
Dependencies (net6.0):
- K4os.Compression.LZ4.Streams (>= 1.2.16)
- Microsoft.Extensions.Caching.Memory (>= 6.0.1)
- Microsoft.Extensions.Caching.StackExchangeRedis (>= 6.0.6)
- Microsoft.Extensions.Configuration.Binder (>= 6.0.0)
- Microsoft.Extensions.Hosting.Abstractions (>= 6.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 6.0.1)
- Polly.Contrib.DuplicateRequestCollapser (>= 0.2.1)
- SauceControl.Blake2Fast (>= 2.0.0)
Version | Downloads | Last updated |
---|---|---|
2.0.1 | 219 | 12/29/2023 |
2.0.0 | 133 | 12/29/2023 |
1.0.17 | 136 | 12/21/2023 |
1.0.16 | 469 | 10/15/2022 |
1.0.15 | 442 | 8/3/2022 |
1.0.14 | 428 | 7/19/2022 |
1.0.13 | 445 | 7/5/2022 |
1.0.12 | 425 | 7/5/2022 |
1.0.11 | 432 | 7/4/2022 |
1.0.10 | 434 | 7/3/2022 |
1.0.9 | 461 | 7/3/2022 |
1.0.8 | 431 | 6/12/2022 |
1.0.7 | 440 | 6/1/2022 |
1.0.6 | 437 | 5/29/2022 |
1.0.5 | 441 | 5/29/2022 |
1.0.4 | 455 | 5/29/2022 |
1.0.3 | 432 | 5/29/2022 |
1.0.2 | 437 | 5/29/2022 |
1.0.1 | 443 | 5/28/2022 |
1.0.0 | 448 | 5/28/2022 |
Release notes: Add cancel token to distributed lock call.