Xam.Plugins.OnDeviceCustomVision
2.0.0
See the version list below for details.
dotnet add package Xam.Plugins.OnDeviceCustomVision --version 2.0.0
NuGet\Install-Package Xam.Plugins.OnDeviceCustomVision -Version 2.0.0
<PackageReference Include="Xam.Plugins.OnDeviceCustomVision" Version="2.0.0" />
paket add Xam.Plugins.OnDeviceCustomVision --version 2.0.0
#r "nuget: Xam.Plugins.OnDeviceCustomVision, 2.0.0"
// Install Xam.Plugins.OnDeviceCustomVision as a Cake Addin
#addin nuget:?package=Xam.Plugins.OnDeviceCustomVision&version=2.0.0
// Install Xam.Plugins.OnDeviceCustomVision as a Cake Tool
#tool nuget:?package=Xam.Plugins.OnDeviceCustomVision&version=2.0.0
Xam.Plugins.OnDeviceCustomVision
The Azure Custom Vision service is able to create models that can be exported as CoreML, TensorFlow or ONNX models to do image classification on device.
This plugin makes it easy to download and use these models offline from inside your mobile app, using CoreML on iOS, TensorFlow on Android and Windows ML on Windows. These models can then be called from a .NET Standard library, using something like Xam.Plugins.Media to take photos for classification.
Setup
- Available on NuGet: https://www.nuget.org/packages/Xam.Plugins.OnDeviceCustomVision/
- Install into your .NET Standard project and iOS, Android and UWP client projects.
Platform Support
Platform | Version |
---|---|
Xamarin.iOS | iOS 11+ |
Xamarin.Android | API 21+ |
UWP | Windows SDK 17110+ |
Usage
Before you can use this API, you need to initialize it with the model file downloaded from Custom Vision. Trying to classify an image without calling Init will result in an ImageClassifierException being thrown.
Each platform has its own platform-specific Init method on a platform-specific static class. This is due to differences in the way each platform handles the models.
iOS
Export, then download the Core ML model from the Custom Vision service.
Pre-compiled models
Models can be compiled before being used, or compiled on the device. To use a pre-compiled model, compile the downloaded model using:
xcrun coremlcompiler compile <model_file_name>.mlmodel <model_name>.mlmodelc
This will spit out a folder called <model_name>.mlmodelc containing a number of files. Add this entire folder to the Resources folder in your iOS app. Once this has been added, add a call to Init to your app delegate, passing in the name of your compiled model without the extension (i.e. the name of the model folder without .mlmodelc):
using Xam.Plugins.OnDeviceCustomVision;
...
public override bool FinishedLaunching(UIApplication uiApplication, NSDictionary launchOptions)
{
...
iOSImageClassifier.Init("<model_name>");
return base.FinishedLaunching(uiApplication, launchOptions);
}
Uncompiled models
Add the downloaded model, called <model_name>.mlmodel, to the Resources folder in your iOS app. Once this has been added, add a call to Init to your app delegate, passing in the name of your model without the extension (i.e. the name of the model file without .mlmodel):
using Xam.Plugins.OnDeviceCustomVision;
...
public override bool FinishedLaunching(UIApplication uiApplication, NSDictionary launchOptions)
{
...
iOSImageClassifier.Init("<model_name>");
return base.FinishedLaunching(uiApplication, launchOptions);
}
The call to Init will attempt to compile the model, throwing an ImageClassifierException if the compile fails.
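If you want to guard against a missing or corrupt model file, here is a minimal sketch of handling that failure, assuming (as described above) that the plugin surfaces compile failures as ImageClassifierException:
using System;
using Xam.Plugins.OnDeviceCustomVision;
...
try
{
    // Compiles the .mlmodel on device; throws if the model cannot be compiled
    iOSImageClassifier.Init("<model_name>");
}
catch (ImageClassifierException ex)
{
    // Log and fall back gracefully, e.g. disable the classification feature
    Console.WriteLine($"Failed to load model: {ex.Message}");
}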
Android
Export, then download the TensorFlow model from the Custom Vision service. This will be a zip file which, when unzipped, contains two files:
- labels.txt
- model.pb
Add both these files to the Assets folder in your Android app. Once these are added, add a call to Init to your main activity, passing in the name of the model file, the name of the labels file and the type of model downloaded from the Custom Vision service. The model and labels file names have default arguments of "model.pb" and "labels.txt", so you only need to set these if you have changed the names of these files.
using Xam.Plugins.OnDeviceCustomVision;
...
protected override void OnCreate(Bundle savedInstanceState)
{
...
AndroidImageClassifier.Current.Init("model.pb", "labels.txt", ModelType.General);
}
The model type is used to provide adjustments to the image color scheme, and this is only necessary for models trained before 7th May 2018; models trained after this date do not need any adjustments. The library will work out for you if adjustments need to be made, but if the model was generated after 7th May 2018 you can leave the model type as the default value.
Windows
Export, then download the ONNX model from the Custom Vision service. Add the downloaded model to the Assets folder in your UWP app and ensure the Build Action is set to Content. A <model_name>.cs file will be created in the root folder of your app to provide a wrapper around the model, but this wrapper is not needed as this plugin provides its own wrapper.
Don't delete the wrapper class yet, as you will need it to get the labels for the model.
You need to pass the name of the model, along with a list of the model's labels in the order that the tags have been defined in the model. You can get this order by opening the auto-generated <model_name>.cs file and copying the order from there. The wrapper will have a class called <model_name>ModelOutput, and in the constructor for this class will be some code to create a dictionary called loss:
this.loss = new Dictionary<string, float>()
{
{ "<label 1>", float.NaN },
{ "<label 2>", float.NaN },
...
};
This defines the labels in the correct order. Pass them to the Init method along with the model name inside the MainPage class in your UWP app, or a similar page class. The Init method is async, so it will need to be awaited in an appropriate place, such as by overriding OnNavigatedTo.
using Xam.Plugins.OnDeviceCustomVision;
...
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
await WindowsImageClassifier.Init("Currency", new[] { "<label 1>", "<label 2>", ... });
base.OnNavigatedTo(e);
}
Calling this from your .NET Standard library
To classify an image, call:
var tags = await CrossImageClassifier.Current.ClassifyImage(stream);
Pass in an image as a stream. You can use a library like Xam.Plugins.Media to get an image as a stream from the camera or image library.
This will return a list of ImageClassification instances, one per tag in the model, with the probability that the image matches that tag. Probabilities are doubles in the range 0 to 1, with 1 being 100% probability that the image matches the tag. To find the most likely classification use:
tags.OrderByDescending(t => t.Probability)
.First().Tag;
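Putting this together, here is a sketch of taking a photo and classifying it on device. It assumes the Media Plugin (Xam.Plugins.Media) is installed, so the CrossMedia, StoreCameraMediaOptions and MediaFile.GetStream calls come from that plugin, and it assumes the platform-specific Init has already been called:
using System;
using System.Linq;
using Plugin.Media;
using Plugin.Media.Abstractions;
using Xam.Plugins.OnDeviceCustomVision;
...
// Take a photo using the Media Plugin
var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions());
if (photo != null)
{
    using (var stream = photo.GetStream())
    {
        // Run the on-device classifier against the photo stream
        var tags = await CrossImageClassifier.Current.ClassifyImage(stream);

        // Pick the most likely tag
        var best = tags.OrderByDescending(t => t.Probability).First();
        Console.WriteLine($"{best.Tag} ({best.Probability:P0})");
    }
}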
Using with an IoC container
CrossImageClassifier.Current returns an instance of the IImageClassifier interface, and this can be stored inside your IoC container and injected where required.
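For example, here is a sketch of registering the classifier with a container such as Autofac (any container that supports instance registration works the same way) and injecting it into a view model; the ClassifierViewModel class is purely illustrative:
using Autofac;
using Xam.Plugins.OnDeviceCustomVision;
...
var builder = new ContainerBuilder();

// Register the platform-initialized classifier behind the interface
builder.RegisterInstance(CrossImageClassifier.Current).As<IImageClassifier>();

var container = builder.Build();
...
public class ClassifierViewModel
{
    readonly IImageClassifier classifier;

    // The container injects the registered IImageClassifier instance
    public ClassifierViewModel(IImageClassifier classifier)
    {
        this.classifier = classifier;
    }
}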
Product | Compatible and computed target framework versions |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
.NET Core | netcoreapp1.0 was computed. netcoreapp1.1 was computed. netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard1.0 is compatible. netstandard1.1 was computed. netstandard1.2 was computed. netstandard1.3 was computed. netstandard1.4 was computed. netstandard1.5 was computed. netstandard1.6 was computed. netstandard2.0 is compatible. netstandard2.1 was computed. |
.NET Framework | net45 was computed. net451 was computed. net452 was computed. net46 was computed. net461 was computed. net462 was computed. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
MonoAndroid | monoandroid was computed. monoandroid81 is compatible. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen30 was computed. tizen40 was computed. tizen60 was computed. |
Universal Windows Platform | uap was computed. uap10.0 was computed. uap10.0.17134 is compatible. |
Windows Phone | wp8 was computed. wp81 was computed. wpa81 was computed. |
Windows Store | netcore was computed. netcore45 was computed. netcore451 was computed. |
Xamarin.iOS | xamarinios was computed. xamarinios10 is compatible. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies
- .NETStandard 1.0
  - NETStandard.Library (>= 1.6.1)
- .NETStandard 2.0
  - No dependencies.
- MonoAndroid 8.1
  - Xam.Android.Tensorflow (>= 1.0.0)
- UAP 10.0.17134
  - Microsoft.NETCore.UniversalWindowsPlatform (>= 6.1.5)
- Xamarin.iOS 1.0
  - No dependencies.
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated |
---|---|---|
2.2.2 | 3,967 | 10/28/2019 |
2.2.1 | 1,286 | 10/16/2019 |
2.1.1 | 1,709 | 6/10/2019 |
2.1.0-alpha | 1,136 | 6/6/2019 |
2.0.0 | 2,117 | 7/18/2018 |
1.0.0 | 9,352 | 2/26/2018 |
0.1.5-alpha | 2,348 | 1/24/2018 |
0.1.1-alpha | 2,300 | 1/9/2018 |
0.1.0-alpha | 2,551 | 1/9/2018 |
Release Notes
- Added support for ONNX models in UWP apps.
- Handles the updates to the image color adjustments in models post 7th May 2018.