Microsoft.KernelMemory.Abstractions
0.94.241201.1
Prefix Reserved
See the version list below for details.
dotnet add package Microsoft.KernelMemory.Abstractions --version 0.94.241201.1
NuGet\Install-Package Microsoft.KernelMemory.Abstractions -Version 0.94.241201.1
<PackageReference Include="Microsoft.KernelMemory.Abstractions" Version="0.94.241201.1" />
paket add Microsoft.KernelMemory.Abstractions --version 0.94.241201.1
#r "nuget: Microsoft.KernelMemory.Abstractions, 0.94.241201.1"
// Install Microsoft.KernelMemory.Abstractions as a Cake Addin
#addin nuget:?package=Microsoft.KernelMemory.Abstractions&version=0.94.241201.1

// Install Microsoft.KernelMemory.Abstractions as a Cake Tool
#tool nuget:?package=Microsoft.KernelMemory.Abstractions&version=0.94.241201.1
Kernel Memory
This repository presents best practices and a reference implementation for memory in specific AI and LLM application scenarios. Please note that the code provided serves as a demonstration and is not an officially supported Microsoft offering.
Kernel Memory (KM) is a multi-modal AI Service specialized in the efficient indexing of datasets through custom continuous data hybrid pipelines, with support for Retrieval Augmented Generation (RAG), synthetic memory, prompt engineering, and custom semantic memory processing.
KM is available as a Web Service, as a Docker container, a Plugin for ChatGPT/Copilot/Semantic Kernel, and as a .NET library for embedded applications.
Utilizing advanced embeddings and LLMs, the system enables Natural Language querying for obtaining answers from the indexed data, complete with citations and links to the original sources.
Kernel Memory is designed for seamless integration as a Plugin with Semantic Kernel, Microsoft Copilot and ChatGPT.
Kernel Memory Service on Azure
Kernel Memory can be deployed in various configurations, including as a Service in Azure. To learn more, refer to the Azure deployment guide and the infrastructure documentation.
If you are already familiar with these resources, you can quickly deploy by clicking the following button.
👉 See also: Kernel Memory via Docker and Serverless Kernel Memory with Azure services example.
Running Kernel Memory with Aspire
Kernel Memory can also be easily run and imported into other projects via .NET Aspire. For example:
var builder = DistributedApplication.CreateBuilder();
builder.AddContainer("kernel-memory", "kernelmemory/service")
.WithEnvironment("KernelMemory__TextGeneratorType", "OpenAI")
.WithEnvironment("KernelMemory__DataIngestion__EmbeddingGeneratorTypes__0", "OpenAI")
.WithEnvironment("KernelMemory__Retrieval__EmbeddingGeneratorType", "OpenAI")
.WithEnvironment("KernelMemory__Services__OpenAI__APIKey", "...your OpenAI key...");
builder.Build().Run();
Data Ingestion using Kernel Memory OpenAPI Web Service
The example shows the default document ingestion pipeline:
- Extract text: automatically recognize the file format and extract the information
- Partition the text in small chunks, ready for search and RAG prompts
- Extract embeddings using any LLM embedding generator
- Save embeddings into a vector index such as Azure AI Search, Qdrant or other DBs.
The example shows how to safeguard private information by specifying who owns each document, and how to organize data for search and faceted navigation, using Tags.
C#
#r "nuget: Microsoft.KernelMemory.WebClient"

var memory = new MemoryWebClient("http://127.0.0.1:9001"); // <== URL of KM web service

// Import a file
await memory.ImportDocumentAsync("meeting-transcript.docx");

// Import a file specifying Document ID and Tags
await memory.ImportDocumentAsync("business-plan.docx",
    new Document("doc01")
        .AddTag("user", "devis@contoso.com")
        .AddTag("collection", "business")
        .AddTag("collection", "plans")
        .AddTag("fiscalYear", "2025"));
Python
import requests

# Files to import
files = {
    "file1": ("business-plan.docx", open("business-plan.docx", "rb")),
}

# Tags to apply, used by queries to filter memory
data = {
    "documentId": "doc01",
    "tags": [
        "user:devis@contoso.com",
        "collection:business",
        "collection:plans",
        "fiscalYear:2025"
    ]
}

response = requests.post("http://127.0.0.1:9001/upload", files=files, data=data)
Direct Data Ingestion using embedded Serverless .NET component
var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    .Build<MemoryServerless>();

// Import a file
await memory.ImportDocumentAsync("meeting-transcript.docx");

// Import a file specifying Document ID and Tags
await memory.ImportDocumentAsync("business-plan.docx",
    new Document("doc01")
        .AddTag("collection", "business")
        .AddTag("collection", "plans")
        .AddTag("fiscalYear", "2025"));
Memory retrieval and RAG
Asking questions, running RAG prompts, and filtering by user and other criteria is simple, with answers including citations and all the information needed to verify their accuracy, pointing to which documents ground the response.
C#
Asking questions:
var answer1 = await memory.AskAsync("How many people attended the meeting?");

var answer2 = await memory.AskAsync("what's the project timeline?",
    filter: MemoryFilters.ByTag("user", "devis@contoso.com"));
Data lineage, citations, referencing sources:
await memory.ImportFileAsync("NASA-news.pdf");

var answer = await memory.AskAsync("Any news from NASA about Orion?");

Console.WriteLine(answer.Result + "\n");

foreach (var x in answer.RelevantSources)
{
    Console.WriteLine($"  * {x.SourceName} -- {x.Partitions.First().LastUpdate:D}");
}
Yes, there is news from NASA about the Orion spacecraft. NASA has invited the media to see a new test version [......] For more information about the Artemis program, you can visit the NASA website.
- NASA-news.pdf -- Tuesday, August 1, 2023
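Memory can also be searched directly for relevant text partitions, without generating an answer. A minimal sketch, assuming the standard `SearchAsync` method and the tag names used above; result property names may vary slightly across KM versions:

```csharp
// Search for relevant partitions without running a RAG prompt
var results = await memory.SearchAsync(
    "project timeline",
    filter: MemoryFilters.ByTag("user", "devis@contoso.com"));

foreach (var citation in results.Results)
{
    foreach (var partition in citation.Partitions)
    {
        // Each partition is a chunk of a source document, with its relevance score
        Console.WriteLine($"{citation.SourceName} ({partition.Relevance}): {partition.Text}");
    }
}
```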
Python
Asking questions:
import requests
import json

data = {
    "question": "what's the project timeline?",
    "filters": [
        {"user": ["devis@contoso.com"]}
    ]
}

response = requests.post(
    "http://127.0.0.1:9001/ask",
    headers={"Content-Type": "application/json"},
    data=json.dumps(data),
).json()

print(response["text"])
OpenAPI
curl http://127.0.0.1:9001/ask -d'{"query":"Any news from NASA about Orion?"}' -H 'Content-Type: application/json'
{
  "Query": "Any news from NASA about Orion?",
  "Text": "Yes, there is news from NASA about the Orion spacecraft. NASA has invited the media to see a new test version [......] For more information about the Artemis program, you can visit the NASA website.",
  "RelevantSources": [
    {
      "Link": "...",
      "SourceContentType": "application/pdf",
      "SourceName": "file5-NASA-news.pdf",
      "Partitions": [
        {
          "Text": "Skip to main content\nJul 28, 2023\nMEDIA ADVISORY M23-095\nNASA Invites Media to See Recovery Craft for\nArtemis Moon Mission\n(/sites/default/files/thumbnails/image/ksc-20230725-ph-fmx01_0003orig.jpg)\nAboard the [......] to Mars (/topics/moon-to-\nmars/),Orion Spacecraft (/exploration/systems/orion/index.html)\nNASA Invites Media to See Recovery Craft for Artemis Moon Miss... https://www.nasa.gov/press-release/nasa-invites-media-to-see-recov...\n2 of 3 7/28/23, 4:51 PM",
          "Relevance": 0.8430657,
          "SizeInTokens": 863,
          "LastUpdate": "2023-08-01T08:15:02-07:00"
        }
      ]
    }
  ]
}
The OpenAPI schema ("swagger") is available at http://127.0.0.1:9001/swagger/index.html when running the service locally with OpenAPI enabled. Here's a copy.
👉 See also:
Kernel Memory Docker image
If you want to give the service a quick test, use the following command to start the Kernel Memory Service using OpenAI:
docker run -e OPENAI_API_KEY="..." -it --rm -p 9001:9001 kernelmemory/service
on Linux ARM64 / macOS:
docker run -e OPENAI_API_KEY="..." -it --rm -p 9001:9001 kernelmemory/service:latest-arm64
If you prefer using custom settings and services such as Azure OpenAI, Azure Document Intelligence, etc., create an appsettings.Development.json file overriding the default values set in appsettings.json, or use the configuration wizard included:
cd service/Service
dotnet run setup
Then run this command to start the Docker image with the configuration just created:
on Windows:
docker run --volume .\appsettings.Development.json:/app/appsettings.Production.json -it --rm -p 9001:9001 kernelmemory/service
on Linux (AMD64):
docker run --volume ./appsettings.Development.json:/app/appsettings.Production.json -it --rm -p 9001:9001 kernelmemory/service
on ARM64 / macOS:
docker run --volume ./appsettings.Development.json:/app/appsettings.Production.json -it --rm -p 9001:9001 kernelmemory/service:latest-arm64
👉 See also:
Memory as a Service: Data Ingestion Pipelines + RAG Web Service
Depending on your scenarios, you might want to run all the code remotely through an asynchronous and scalable service, or locally inside your process.
If you're importing small files, only need .NET, and can block the application process while importing documents, then local in-process execution can be fine, using the MemoryServerless described below.
However, if you are in one of these scenarios:
- My app is written in TypeScript, Java, Rust, or some other language
- I'd just like a web service to import data and send questions to answer
- I'm importing big documents that can require minutes to process, and I don't want to block the user interface
- I need memory import to run independently, supporting failures and retry logic
- I want to define custom pipelines mixing multiple languages like Python, TypeScript, etc.
then you're likely looking for a Memory Service. You can deploy Kernel Memory as a backend service, using the default ingestion logic or a custom workflow including steps coded in Python/TypeScript/Java/etc., leveraging the asynchronous non-blocking memory encoding process, and uploading documents and asking questions using the MemoryWebClient.
Here you can find a complete set of instructions on how to run the Kernel Memory service.
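With the service model, ingestion runs asynchronously in the backend: the client returns quickly and you can poll for completion before querying. A sketch using `MemoryWebClient`, assuming the `IsDocumentReadyAsync` helper available in recent KM versions:

```csharp
var memory = new MemoryWebClient("http://127.0.0.1:9001");

// Ingestion runs asynchronously in the service; the call returns a document ID
var docId = await memory.ImportDocumentAsync("large-report.pdf");

// Poll until the ingestion pipeline has completed
while (!await memory.IsDocumentReadyAsync(docId))
{
    await Task.Delay(TimeSpan.FromSeconds(2));
}

var answer = await memory.AskAsync("What are the key findings in the report?");
Console.WriteLine(answer.Result);
```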
Embedded Memory Component (aka "serverless")
Kernel Memory works and scales best when running as an asynchronous Web Service, allowing you to ingest thousands of documents and pieces of information without blocking your app.

However, Kernel Memory can also run in serverless mode, embedding a MemoryServerless class instance in .NET backend/console/desktop apps in synchronous mode. Each request is processed immediately, although calling clients are responsible for handling transient errors.
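Since serverless clients must handle transient failures themselves, a simple retry wrapper can be used around imports. This is an illustrative sketch, not KM functionality; the policy (3 attempts, linear backoff) is arbitrary:

```csharp
var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    .Build<MemoryServerless>();

// Illustrative retry loop: up to 3 attempts with linear backoff
for (var attempt = 1; ; attempt++)
{
    try
    {
        await memory.ImportDocumentAsync("meeting-transcript.docx");
        break;
    }
    catch (Exception e) when (attempt < 3)
    {
        Console.WriteLine($"Transient error, retrying: {e.Message}");
        await Task.Delay(TimeSpan.FromSeconds(attempt));
    }
}
```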
Extensions
Kernel Memory relies on external services to run stateful pipelines, store data, handle embeddings, and generate text responses. The project includes extensions that allow customization of file storage, queues, vector stores, and LLMs to fit specific requirements.
- AI: Azure OpenAI, OpenAI, ONNX, Ollama, Anthropic, Azure AI Document Intelligence, Azure AI Content Safety
- Vector Store: Azure AI Search, Postgres, SQL Server, Elasticsearch, Qdrant, Redis, MongoDB Atlas, In memory store
- File Storage: Azure Blob storage, AWS S3, MongoDB Atlas, Local disk, In memory storage
- Ingestion pipelines: Azure Queues, RabbitMQ, In memory queues
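Extensions are wired in through the builder. A sketch combining a few of the options above, assuming the relevant extension packages are referenced; the `With...` builder method names and signatures may differ slightly per package version:

```csharp
var memory = new KernelMemoryBuilder()
    // LLMs for embeddings and text generation
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    // Vector store: a Qdrant instance running locally
    .WithQdrantMemoryDb("http://127.0.0.1:6333")
    // File storage: local disk
    .WithSimpleFileStorage("./km-files")
    .Build<MemoryServerless>();
```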
Custom memory ingestion pipelines
Document ingestion operates as a stateful pipeline, executing steps in a defined sequence. By default, Kernel Memory employs a pipeline to extract text, chunk content, vectorize, and store data.
If you need a custom data pipeline, you can modify the sequence, add new steps, or replace existing ones by providing custom "handlers" for each desired stage. This allows complete flexibility in defining how data is processed. For example:
// Memory setup, e.g. how to calculate and where to store embeddings
var memoryBuilder = new KernelMemoryBuilder()
.WithoutDefaultHandlers()
.WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
var memory = memoryBuilder.Build();
// Plug in custom .NET handlers
memory.Orchestrator.AddHandler<MyHandler1>("step1");
memory.Orchestrator.AddHandler<MyHandler2>("step2");
memory.Orchestrator.AddHandler<MyHandler3>("step3");
// Use the custom handlers with the memory object
await memory.ImportDocumentAsync(
new Document("mytest001")
.AddFile("file1.docx")
.AddFile("file2.pdf"),
steps: new[] { "step1", "step2", "step3" });
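Each named step maps to a handler class registered with the orchestrator. A rough sketch of what such a handler can look like; the exact `IPipelineStepHandler` signature has changed across KM versions, so treat this as illustrative:

```csharp
public class MyHandler1 : IPipelineStepHandler
{
    public string StepName => "step1";

    public Task<(ReturnType returnType, DataPipeline updatedPipeline)> InvokeAsync(
        DataPipeline pipeline, CancellationToken cancellationToken = default)
    {
        // Inspect or transform the files attached to this pipeline execution
        foreach (var file in pipeline.Files)
        {
            Console.WriteLine($"step1 processing: {file.Name}");
        }

        // Returning Success lets the orchestrator move on to the next step
        return Task.FromResult((ReturnType.Success, pipeline));
    }
}
```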
Kernel Memory (KM) and Semantic Kernel (SK)
Semantic Kernel is an SDK for C#, Python, and Java used to develop solutions with AI. SK includes libraries that wrap direct calls to databases, supporting vector search.
Semantic Kernel is maintained in three languages, while the list of supported storage engines (known as "connectors") varies across languages.
Kernel Memory (KM) is a SERVICE built on Semantic Kernel, with additional features developed for RAG, Security, and Cloud deployment. As a service, KM can be used from any language, tool, or platform, e.g. browser extensions and ChatGPT assistants.
Kernel Memory provides several features outside the scope of Semantic Kernel that would usually be developed manually, such as storing files, extracting text from documents, providing a framework to secure users' data, content moderation, etc.
Kernel Memory is also leveraged to explore new AI patterns, which are sometimes backported to Semantic Kernel and Microsoft libraries, for instance flexible vector store schemas, advanced filtering, and authentication.
Here's a comparison table:
Feature | Kernel Memory | Semantic Kernel |
---|---|---|
Runtime | Memory as a Service, Web service | SDK packages |
Data formats | Web pages, PDF, Images, Word, PowerPoint, Excel, Markdown, Text, JSON | Text only |
Language support | Any language | .NET, Python, Java |
RAG | Yes | - |
Cloud deployment | Yes | - |
Examples and Tools
Examples
- Collection of Jupyter notebooks with various scenarios
- Using Kernel Memory web service to upload documents and answer questions
- Importing files and asking questions without running the service (serverless mode)
- Kernel Memory RAG with Azure services
- Kernel Memory with .NET Aspire
- Using KM Plugin for Semantic Kernel
- Customizations
- Processing files with custom logic (custom handlers) in serverless mode
- Processing files with custom logic (custom handlers) in asynchronous mode
- Customizing RAG and summarization prompts
- Custom partitioning/text chunking options
- Using a custom embedding/vector generator
- Using custom content decoders
- Using a custom web scraper to fetch web pages
- Writing and using a custom ingestion handler
- Using Context Parameters to customize RAG prompt during a request
- Local models and external connectors
- Upload files and ask questions from command line using curl
- Summarizing documents, using synthetic memories
- Hybrid Search with Azure AI Search
- Running a single asynchronous pipeline handler as a standalone service
- Integrating Memory with ASP.NET applications and controllers
- Sample code showing how to extract text from files
- .NET configuration and logging
- Expanding chunks retrieving adjacent partitions
- Creating a Memory instance without KernelMemoryBuilder
- Intent Detection
- Fetching data from Discord
- Test project using KM package from nuget.org
Tools
- .NET appsettings.json generator
- Curl script to upload files
- Curl script to ask questions
- Curl script to search documents
- Script to start Qdrant for development tasks
- Script to start Elasticsearch for development tasks
- Script to start MS SQL Server for development tasks
- Script to start Redis for development tasks
- Script to start RabbitMQ for development tasks
- Script to start MongoDB Atlas for development tasks
.NET packages
Microsoft.KernelMemory.WebClient: .NET web client to call a running instance of Kernel Memory web service.
Microsoft.KernelMemory: Kernel Memory library including all extensions and clients; it can be used to build custom pipelines and handlers. It also contains the serverless client to use memory synchronously, without the web service.
Microsoft.KernelMemory.Service.AspNetCore: an extension to load Kernel Memory into your ASP.NET apps.
Microsoft.KernelMemory.SemanticKernelPlugin: a Memory plugin for Semantic Kernel, replacing the original Semantic Memory available in SK.
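Loading the plugin into a Semantic Kernel instance typically looks along these lines (a sketch only: the `MemoryPlugin` constructor and the import call depend on the SK and KM versions in use, and `kernel` is assumed to be an already-built SK `Kernel`):

```csharp
var memoryConnector = new MemoryWebClient("http://127.0.0.1:9001");

// Expose Kernel Memory functions (save, search, ask, ...) to Semantic Kernel
var plugin = kernel.ImportPluginFromObject(
    new MemoryPlugin(memoryConnector, waitForIngestionToComplete: true),
    "memory");
```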
Microsoft.KernelMemory.* packages: Kernel Memory Core and all KM extensions split into distinct packages.
Packages for Python, Java and other languages
Kernel Memory service offers a Web API out of the box, including the OpenAPI swagger documentation that you can leverage to test the API and create custom web clients. For instance, after starting the service locally, see http://127.0.0.1:9001/swagger/index.html.
A .NET Web Client and a Semantic Kernel plugin are available; see the NuGet packages above.
For Python, TypeScript, Java and other languages we recommend leveraging the Web Service. We also welcome PR contributions to support more languages.
Contributors
Product | Versions Compatible and additional computed target framework versions. |
---|---|
.NET | net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
Dependencies (net8.0):
- Microsoft.Extensions.Configuration (>= 8.0.0)
- Microsoft.Extensions.Configuration.Json (>= 8.0.1)
- Microsoft.Extensions.Hosting (>= 8.0.1)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.2)
- Microsoft.SemanticKernel.Abstractions (>= 1.26.0)
- System.Linq.Async (>= 6.0.1)
- System.Memory.Data (>= 8.0.1)
- System.Numerics.Tensors (>= 8.0.0)
NuGet packages (40)
Showing the top 5 NuGet packages that depend on Microsoft.KernelMemory.Abstractions:
- Microsoft.KernelMemory.Core: contains the core logic and abstractions of Kernel Memory, not including extensions.
- Microsoft.KernelMemory.AI.OpenAI: provides access to OpenAI LLM models in Kernel Memory to generate embeddings and text.
- Microsoft.KernelMemory.AI.AzureOpenAI: provides access to Azure OpenAI LLM models in Kernel Memory to generate embeddings and text.
- Microsoft.KernelMemory.MemoryDb.AzureAISearch: Azure AI Search connector for Microsoft Kernel Memory, to store and search memory using Azure AI Search vector indexing and semantic features.
- Microsoft.KernelMemory.MemoryDb.Qdrant: Qdrant connector for Microsoft Kernel Memory, to store and search memory using Qdrant vector indexing and Qdrant features.
GitHub repositories (1)
Showing the most popular GitHub repository that depends on Microsoft.KernelMemory.Abstractions:

- SciSharp/LLamaSharp: A C#/.NET library to run LLMs (🦙LLaMA/LLaVA) on your local device efficiently.
Version | Downloads | Last updated | |
---|---|---|---|
0.95.241216.2 | 270 | 12/17/2024 | |
0.95.241216.1 | 96 | 12/16/2024 | |
0.94.241201.1 | 4,271 | 12/1/2024 | |
0.93.241118.1 | 9,415 | 11/19/2024 | |
0.92.241112.1 | 11,543 | 11/12/2024 | |
0.91.241101.1 | 11,544 | 11/1/2024 | |
0.91.241031.1 | 6,040 | 10/31/2024 | |
0.90.241021.1 | 16,039 | 10/22/2024 | |
0.90.241020.3 | 1,483 | 10/20/2024 | |
0.80.241017.2 | 2,652 | 10/17/2024 | |
0.79.241014.2 | 3,054 | 10/14/2024 | |
0.79.241014.1 | 209 | 10/14/2024 | |
0.78.241007.1 | 4,418 | 10/8/2024 | |
0.78.241005.1 | 922 | 10/6/2024 | |
0.77.241004.1 | 459 | 10/5/2024 | |
0.76.240930.3 | 6,420 | 9/30/2024 | |
0.75.240924.1 | 9,392 | 9/24/2024 | |
0.74.240919.1 | 4,975 | 9/19/2024 | |
0.73.240906.1 | 24,537 | 9/7/2024 | |
0.72.240904.1 | 3,873 | 9/5/2024 | |
0.71.240820.1 | 29,843 | 8/21/2024 | |
0.70.240803.1 | 24,886 | 8/3/2024 | |
0.69.240727.1 | 12,444 | 7/27/2024 | |
0.68.240722.1 | 3,422 | 7/22/2024 | |
0.68.240716.1 | 3,000 | 7/16/2024 | |
0.67.240712.1 | 2,107 | 7/12/2024 | |
0.66.240709.1 | 7,007 | 7/9/2024 | |
0.65.240620.1 | 34,510 | 6/21/2024 | |
0.64.240619.1 | 1,191 | 6/20/2024 | |
0.63.240618.1 | 3,294 | 6/18/2024 | |
0.62.240605.1 | 24,615 | 6/5/2024 | |
0.62.240604.1 | 624 | 6/4/2024 | |
0.61.240524.1 | 14,251 | 5/24/2024 | |
0.61.240519.2 | 10,595 | 5/19/2024 | |
0.60.240517.1 | 268 | 5/18/2024 | |
0.51.240513.2 | 8,629 | 5/13/2024 | |
0.50.240504.7 | 6,960 | 5/4/2024 | |
0.40.240430.1 | 1,010 | 5/1/2024 | |
0.39.240426.1 | 5,827 | 4/27/2024 | |
0.38.240424.2 | 1,956 | 4/24/2024 | |
0.38.240424.1 | 72 | 4/24/2024 | |
0.37.240423.2 | 1,628 | 4/24/2024 | |
0.37.240423.1 | 92 | 4/23/2024 | |
0.36.240416.1 | 16,050 | 4/16/2024 | |
0.36.240415.1 | 2,487 | 4/15/2024 | |
0.36.240412.1 | 174 | 4/12/2024 | |
0.35.240318.2 | 54,381 | 3/18/2024 | |
0.34.240313.1 | 28,066 | 3/13/2024 | |
0.33.240312.2 | 621 | 3/12/2024 | |
0.33.240312.1 | 110 | 3/12/2024 | |
0.32.240307.1 | 5,659 | 3/7/2024 | |
0.31.240305.4 | 137 | 3/6/2024 | |
0.30.240227.1 | 21,341 | 2/28/2024 | |
0.29.240219.3 | 8,721 | 2/20/2024 | |
0.29.240219.2 | 98 | 2/20/2024 | |
0.28.240212.1 | 6,732 | 2/13/2024 | |
0.27.240205.2 | 5,425 | 2/5/2024 | |
0.26.240104.1 | 30,429 | 1/5/2024 | |
0.25.240103.1 | 365 | 1/4/2024 | |
0.24.231228.5 | 2,660 | 12/29/2023 | |
0.24.231228.4 | 908 | 12/29/2023 | |
0.23.231224.1 | 10,792 | 12/24/2023 | |
0.23.231221.1 | 1,222 | 12/22/2023 | |
0.23.231219.1 | 2,800 | 12/20/2023 | |
0.22.231217.1 | 194 | 12/18/2023 | |
0.21.231214.1 | 295 | 12/15/2023 | |
0.20.231212.1 | 1,056 | 12/13/2023 | |
0.19.231211.1 | 1,358 | 12/11/2023 |