FsTsetlin 0.4.1
dotnet add package FsTsetlin --version 0.4.1
NuGet\Install-Package FsTsetlin -Version 0.4.1
<PackageReference Include="FsTsetlin" Version="0.4.1" />
paket add FsTsetlin --version 0.4.1
#r "nuget: FsTsetlin, 0.4.1"
// Install FsTsetlin as a Cake Addin
#addin nuget:?package=FsTsetlin&version=0.4.1

// Install FsTsetlin as a Cake Tool
#tool nuget:?package=FsTsetlin&version=0.4.1
FsTsetlin
Implements a Tsetlin machine learning system in F#. The key difference between this and other Tsetlin machine implementations is that this library uses tensor operations to parallelize learning and prediction. FsTsetlin builds on the tensor library underpinning TorchSharp/PyTorch. The library has been tested on both CPU and GPU, although extensive performance testing has not yet been performed.
Note: This is still a work-in-progress. Please treat it as beta-level code.
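To give a flavour of the tensor-based style of computation described above, here is a minimal, hypothetical sketch of evaluating Tsetlin clauses with TorchSharp tensor operations. It is not the FsTsetlin API; the clause encoding, shapes, and names are illustrative assumptions only.

```fsharp
// Hypothetical sketch (not the FsTsetlin API): evaluating Tsetlin clauses as
// TorchSharp tensor operations. Encoding and shapes are assumptions.
open TorchSharp

// 2 clauses over 3 literals: 1.0 = literal included in the clause, 0.0 = excluded.
let includeMask = torch.tensor([| 1.0f; 0.0f; 1.0f; 0.0f; 1.0f; 0.0f |]).reshape(2L, 3L)

// One binarised input sample: literal values as 0/1.
let literals = torch.tensor([| 1.0f; 0.0f; 1.0f |])

// A clause fails if any included literal is 0. Count the failing literals per
// clause with a matrix-vector product; a clause fires when that count is zero.
let failing = includeMask.matmul(torch.ones_like(literals) - literals)
let clauseOutput = failing.eq(torch.zeros_like(failing))

clauseOutput.data<bool>() |> Seq.toArray |> printfn "%A"   // [|true; false|]
```

Because every clause is evaluated in one batched tensor expression, the same code runs unchanged on CPU or GPU tensors.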
Tsetlin Machine
The Tsetlin machine (TM) is a recently developed machine learning system based on learning automata (finite state machines and propositional logic). Please see this paper for details.
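As a concrete illustration of the automaton idea, the sketch below follows the standard description in the paper (it is not code from this library): each literal in a clause is governed by a two-action Tsetlin automaton whose state decides whether the literal is included, and feedback nudges the state towards or away from the decision boundary.

```fsharp
// Sketch of a single two-action Tsetlin automaton, per the TM paper.
// Illustrative only; not the FsTsetlin API.
let n = 100                                  // states per action

// States 1..n choose "exclude"; states n+1..2n choose "include".
let includeLiteral state = state > n

// Reward reinforces the current action by moving away from the decision
// boundary; penalty weakens it by moving towards (and eventually across) it.
let reward state = if state > n then min (state + 1) (2 * n) else max (state - 1) 1
let penalty state = if state > n then state - 1 else state + 1
```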
TM is said to be competitive with other ML methods (both deep learning and classical) on many tasks. However, the main draw is that TM-based models are faster to train and more energy-efficient at inference than models built with other ML methods, while providing similar accuracy. As ML becomes pervasive, the compute and power costs of deployed models become non-trivial; TM may help to rein in the runtime and training costs associated with ML at scale.
Although this implementation is in F#, the goal is to define a largely language-agnostic computation approach that can be easily ported to other languages (e.g. Python, Java, etc.), as long as the language has a binding to libtorch - the C++ tensor library underlying PyTorch.
There are other GPU implementations available (see the GitHub repo). None of these use tensor operations from a standard tensor library. By building on PyTorch as an established standard, the aim is to gain wider deployment portability. For example, a TM may be trained on a GPU but later deployed to a CPU-based environment for inferencing, to save costs.
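A hypothetical sketch of that workflow, using plain TorchSharp device calls rather than FsTsetlin's own device handling (the tensor here is only a stand-in for the real learned state):

```fsharp
open TorchSharp

// Stand-in tensor for learned TM state; the real state layout is up to the library.
let state = torch.zeros(10L, 20L)

// Train on the GPU when one is available...
let trainingState = if torch.cuda.is_available() then state.cuda() else state

// ...then bring the learned state back to the CPU for inference-only deployment.
let deployableState = trainingState.cpu()
```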
Examples
- NoisyXor.fsx (binary)
- BinaryIris.fsx (multiclass)
Datasets courtesy of https://github.com/cair/TsetlinMachine
Product | Versions (compatible and additional computed target frameworks)
---|---
.NET | net6.0 is compatible. net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos, net6.0-windows, net7.0, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos and net8.0-windows were computed.
Dependencies (net6.0)
- FSharp.Core (>= 6.0.4)
- FsPickler (>= 5.3.2)
- TorchSharp (>= 0.96.6)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.