Citms.OnnxRuntime.Gpu-arm64
1.16.3
There is a newer version of this package available.
See the version list below for details.
dotnet add package Citms.OnnxRuntime.Gpu-arm64 --version 1.16.3
NuGet\Install-Package Citms.OnnxRuntime.Gpu-arm64 -Version 1.16.3
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="Citms.OnnxRuntime.Gpu-arm64" Version="1.16.3" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add Citms.OnnxRuntime.Gpu-arm64 --version 1.16.3
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
#r "nuget: Citms.OnnxRuntime.Gpu-arm64, 1.16.3"
#r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or source code of the script to reference the package.
// Install Citms.OnnxRuntime.Gpu-arm64 as a Cake Addin
#addin nuget:?package=Citms.OnnxRuntime.Gpu-arm64&version=1.16.3
// Install Citms.OnnxRuntime.Gpu-arm64 as a Cake Tool
#tool nuget:?package=Citms.OnnxRuntime.Gpu-arm64&version=1.16.3
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
Microsoft.ML.OnnxRuntime.Gpu ARM64
ONNX Runtime v1.16.3 GPU dependency libraries built on an ARM server. The build steps are as follows.
1. Download the v1.16.3 source code
git clone -b v1.16.3 --depth 1 https://github.com/microsoft/onnxruntime.git
git submodule update --init --recursive --progress
2. Modify the ARM64 CUDA Dockerfile
# Enter the downloaded source directory
cd onnxruntime
cd dockerfiles
# Edit the Dockerfile.cuda file
- Original Dockerfile content
# --------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------
# Dockerfile to run ONNXRuntime with CUDA, CUDNN integration
# nVidia cuda 11.4 Base Image
FROM nvcr.io/nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
MAINTAINER Changming Sun "chasun@microsoft.com"
ADD . /code
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
RUN apt-get update && apt-get install -y --no-install-recommends python3-dev ca-certificates g++ python3-numpy gcc make git python3-setuptools python3-wheel python3-packaging python3-pip aria2 && aria2c -q -d /tmp -o cmake-3.26.3-linux-x86_64.tar.gz https://github.com/Kitware/CMake/releases/download/v3.26.3/cmake-3.26.3-linux-x86_64.tar.gz && tar -zxf /tmp/cmake-3.26.3-linux-x86_64.tar.gz --strip=1 -C /usr
RUN cd /code && python3 -m pip install -r tools/ci_build/github/linux/docker/inference/x64/python/cpu/scripts/requirements.txt && /bin/bash ./build.sh --allow_running_as_root --skip_submodule_sync --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ --use_cuda --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) 'CMAKE_CUDA_ARCHITECTURES=52;60;61;70;75;86'
FROM nvcr.io/nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
COPY --from=0 /code/build/Linux/Release/dist /root
COPY --from=0 /code/dockerfiles/LICENSE-IMAGE.txt /code/LICENSE-IMAGE.txt
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends libstdc++6 ca-certificates python3-setuptools python3-wheel python3-pip unattended-upgrades && unattended-upgrade && python3 -m pip install /root/*.whl && rm -rf /root/*.whl
- Modified Dockerfile content
# --------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------
# Dockerfile to run ONNXRuntime with CUDA, CUDNN integration
# nVidia cuda 11.4 Base Image
FROM nvcr.io/nvidia/cuda:11.4.3-cudnn8-devel-ubuntu20.04
ENV DEBIAN_FRONTEND=noninteractive
MAINTAINER Changming Sun "chasun@microsoft.com"
ADD . /code
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
RUN apt-get update && apt-get install -y --no-install-recommends python3-dev ca-certificates g++ python3-numpy gcc make git python3-setuptools python3-wheel python3-packaging python3-pip aria2 && aria2c -q -d /tmp -o cmake-3.26.3-linux-aarch64.tar.gz https://github.com/Kitware/CMake/releases/download/v3.26.3/cmake-3.26.3-linux-aarch64.tar.gz && tar -zxf /tmp/cmake-3.26.3-linux-aarch64.tar.gz --strip=1 -C /usr
RUN /bin/bash ./build.sh --allow_running_as_root --skip_submodule_sync --tensorrt_home /usr/lib/aarch64-linux-gnu --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu/ --use_cuda --config Release --build_shared_lib --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) 'CMAKE_CUDA_ARCHITECTURES=52;60;61;70;75;86'
Change 🐳 | Before 📌 | After |
---|---|---|
Base image | nvcr.io/nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 | nvcr.io/nvidia/cuda:11.4.3-cudnn8-devel-ubuntu20.04 |
CMake download URL | cmake-3.26.3-linux-x86_64.tar.gz | cmake-3.26.3-linux-aarch64.tar.gz |
build.sh CUDA/cuDNN paths | /usr/lib/x86_64-linux-gnu | /usr/lib/aarch64-linux-gnu |
3. Build the image
docker build -t onnxruntime-cuda-build -f Dockerfile.cuda ..
The whole build takes roughly 2 hours. Once the image has been built, start a container and copy the files out.
docker run --rm -it onnxruntime-cuda-build /bin/bash
# In a new shell window; f96cb33b6482 is the ID of the container just started, which you can look up with docker ps
docker cp f96cb33b6482:/code/build/Linux/Release /root/linux-onnx-cpu
- All of the build output is now copied to /root/linux-onnx-cpu on the host. The three files we ultimately need are libonnxruntime.so, libonnxruntime_providers_cuda.so and libonnxruntime_providers_shared.so.
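To sanity-check the compiled libraries from a .NET application, here is a minimal sketch (illustration only; it assumes the managed Microsoft.ML.OnnxRuntime package is referenced, "model.onnx" is a placeholder model path, and the libonnxruntime*.so files built above sit next to the application or on LD_LIBRARY_PATH):

using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class Program
{
    static void Main()
    {
        // Enable the CUDA execution provider on GPU 0; at run time this loads
        // libonnxruntime_providers_shared.so and libonnxruntime_providers_cuda.so.
        using var options = SessionOptions.MakeSessionOptionWithCudaProvider(deviceId: 0);

        // "model.onnx" is a placeholder path for whatever ONNX model you want to test.
        using var session = new InferenceSession("model.onnx", options);

        // Build a dummy input for the model's first input (example shape only).
        var inputName = session.InputMetadata.Keys.First();
        var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });

        // Run inference and print the first output name to confirm the GPU build loads.
        using var results = session.Run(new[] { NamedOnnxValue.CreateFromTensor(inputName, input) });
        Console.WriteLine(results.First().Name);
    }
}

If the example shape does not match your model, adjust the DenseTensor dimensions to the model's actual input shape.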
Product | Versions (compatible and additional computed target frameworks) |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
.NET Core | netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.1 is compatible. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies
.NETStandard 2.1
- No dependencies.
NuGet packages (2)
Showing the top 2 NuGet packages that depend on Citms.OnnxRuntime.Gpu-arm64:
Package | Description |
---|---|
Citms.PaddleInference.Gpu | This project is based on Citms.Paddle and uses ONNX. It includes character recognition and text detection, can be used without a network connection, and has high recognition accuracy. |
Citms.YoloV8.Gpu | Use YOLOv8 in real time for object detection, instance segmentation, pose estimation and image classification, via ONNX Runtime. |
GitHub repositories
This package is not used by any popular GitHub repositories.