BrabeNetz 1.4.0

There is a newer version of this package available.
See the version list below for details.
dotnet add package BrabeNetz --version 1.4.0                
NuGet\Install-Package BrabeNetz -Version 1.4.0                
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="BrabeNetz" Version="1.4.0" />                
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add BrabeNetz --version 1.4.0                
#r "nuget: BrabeNetz, 1.4.0"                
The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or the source code of the script to reference the package.
// Install BrabeNetz as a Cake Addin
#addin nuget:?package=BrabeNetz&version=1.4.0

// Install BrabeNetz as a Cake Tool
#tool nuget:?package=BrabeNetz&version=1.4.0                

BrabeNetz

<img align="right" src="Images/brain.png" alt="Brain - Image by medium.com" width=200>

BrabeNetz is a supervised neural network written in C++, aiming to be as fast as possible. It multithreads effectively on the CPU where needed, allocates and frees memory quickly (via malloc/free), accesses values fast (pointer arrays instead of std::vector), and is well documented.

View latest release

I've written two examples of using BrabeNetz in the Trainer class: training XOR ({0,0}=0, {0,1}=1, ..) and recognizing handwritten digits.

In my XOR example I'm using a {2,3,1} topology (2 input, 3 hidden and 1 output neurons), but BrabeNetz scales until the hardware reaches its limits. The digit recognizer uses a {784,500,100,10} network to train on handwritten digits from the MNIST database.
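
For orientation, here is a minimal sketch of how those two topologies map onto the constructor documented in the Usage section below; the header name is an assumption, not taken from the repository:

```cpp
#include "network.h" // assumed header name; use whichever BrabeNetz headers you copied into your project

int main()
{
    network xor_net({ 2, 3, 1 });               // 2 input, 3 hidden, 1 output neurons
    network digits_net({ 784, 500, 100, 10 });  // MNIST digit recognizer topology
    return 0;
}
```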

Be sure to read the network description, and check out my digit recognizer written in Qt (using a BrabeNetz network trained on the MNIST dataset).

Benchmarks

Build: Release x64 | Windows 10 64bit

CPU: Intel i7-6700K @ 4.0GHz (8 logical cores)

RAM: HyperX Fury DDR4 32GB CL14 2400MHz

SSD: Samsung 850 EVO 540MB/s

Commit: 53328c3

<p align="center">
  <img width="700" align="center" src="Images/cout.png" alt="Console output with elapsed time (2ms)">
  <p align="center">Training a <b>XOR</b> 1000 times takes just <b>0.49ms</b></p>
</p>

<p align="center">
  <img width="700" align="center" src="Images/results_digits.png" alt="Actual trained network prediction output for digit recognition">
  <p align="center"><b>Actual prediction</b> of the digit recognizer network</p>
</p>

<p align="center">
  <img width="750" align="center" src="Images/cpuload.png" alt="Using 24/24 cores in Taskmanager">
  <p align="center">Effectively using <b>all available cores</b> (24/24, 100% workload)</p>
</p>

<p align="center">
  <img width="700" align="center" src="Images/linux_cout.png" alt="Running on Linux (Output)">
  <p align="center">BrabeNetz running on <a href="https://github.com/mrousavy/BrabeNetz/tree/master/Linux">Linux</a> (Debian 9, Linux 4.9.62, KDE Plasma)</p>
</p>

<p align="center">
  <img width="700" align="center" src="Images/linux_htop.png" alt="Running on Linux (Task View - htop)">
  <p align="center">Task Resource viewer (htop) on <a href="https://github.com/mrousavy/BrabeNetz/tree/master/Linux">Linux</a> (Debian 9, Linux 4.9.62, KDE Plasma)</p>
</p>

Specs

  • Fast memory management via malloc/free instead of new/delete, and raw pointer arrays instead of std::vector
  • Smart multithreading with OpenMP where the thread-spawn overhead pays off
  • Scalability (neuron count, layer count) - only limited by hardware
  • Easy to use (inputs, outputs)
  • Randomly generated values to begin with
  • Easy binary save/load with network::save(string)/network::load(string) (state.nn file; see the sketch after this list)
  • Sigmoid squashing function
  • Biases for each neuron
  • network_topology helper objects for loading/saving state and inspecting the network
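
As a rough illustration of the save/load round trip (the header name and file path are assumptions; the network(string) constructor that reloads the state is documented under Usage below):

```cpp
#include "network.h" // assumed header name

void persist_and_restore()
{
    network net({ 2, 3, 1 });
    // ... feed/train the network here ...

    net.save("state.nn");          // write topology, weights and biases to disk

    network restored("state.nn");  // rebuild a network from the saved state file
}
```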

Usage

  1. Build library

    1. Download/Clone from GitHub and change custom definitions (see this for more info)
    2. Open the Developer Command Prompt for Visual Studio and navigate to the BrabeNetz\BrabeNetz folder
    3. Run msbuild BrabeNetz.vcxproj /p:Configuration=Release /p:Platform=x64 (Use the configuration and platform you need)
    4. Link the library (in BrabeNetz\BrabeNetz\x64\Release) to your project
    5. Add headers to your project (every file ending with .h in BrabeNetz\BrabeNetz)
  2. Constructors

    • network(initializer_list<int>): Create a new neural network with the given topology vector and fill it with random values ({2, 3, 4, 1} = 2 input, 3 hidden, 4 hidden and 1 output neurons - 4 layers in total)
    • network(network_topology&): Create a new neural network with the given network topology and load its values
    • network(string): Create a new neural network from the given path to a state.nn file and load it
  3. Functions

    • double* feed(double* input_values): Feed the network input_values and return an array of output values (the array's length is the size of the output layer in the topology)
    • double* train(double* input_values, double* expected_output, double& out_total_error): Feed the network input_values and backpropagate to adjust the weights/biases and reduce the error. Returns the output layer's values; out_total_error is set to the total error of the output layer (this can be used to check whether more training is needed; see the sketch after this list)
    • void save(string path): Save the current network state (topology, weights, biases) to disk (at the given path, default: state.nn)
    • void set_learnrate(double value): Set the learn rate of the network (used by the train(..) function). Should be either a constant (e.g. 0.5) or 1 / (total training iterations + 1)
    • network_topology& build_topology(): Build and set the network topology object of the current network's state (can be used for network visualization or similar)
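
Putting the constructors and functions together, a minimal XOR training loop could look roughly like the sketch below; the header name, the epoch count and the (lack of) cleanup of the returned output arrays are assumptions rather than documented behavior:

```cpp
#include "network.h" // assumed header name
#include <cstdio>

int main()
{
    network net({ 2, 3, 1 });  // randomly initialized {2,3,1} network
    net.set_learnrate(0.5);    // constant learn rate; alternatively decay with 1 / (training iterations + 1)

    double patterns[4][2] = { { 0, 0 }, { 0, 1 }, { 1, 0 }, { 1, 1 } };
    double expected[4][1] = { { 0 }, { 1 }, { 1 }, { 0 } };

    double total_error = 1.0;
    for (int epoch = 0; epoch < 1000 && total_error > 0.01; epoch++)
    {
        total_error = 0.0;
        for (int p = 0; p < 4; p++)
        {
            double error;
            net.train(patterns[p], expected[p], error); // returned output-layer array ignored here
            total_error += error;
        }
    }

    double* out = net.feed(patterns[1]); // expect a value close to 1.0 for XOR(0,1)
    std::printf("XOR(0,1) ~ %f\n", out[0]);

    net.save("state.nn");                // persist the trained state for later reuse
    return 0;
}
```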

Usage examples can be found here and here.

There are no supported framework assets in this package.

This package has no dependencies.

NuGet packages

This package is not used by any NuGet packages.

GitHub repositories

This package is not used by any popular GitHub repositories.

| Version | Downloads | Last updated |
| ------- | --------- | ------------ |
| 1.5.2   | 1,474     | 2/1/2018     |
| 1.5.0   | 1,119     | 1/25/2018    |
| 1.4.5   | 1,073     | 1/18/2018    |
| 1.4.3   | 1,237     | 1/18/2018    |
| 1.4.2   | 1,158     | 1/18/2018    |
| 1.4.1   | 1,166     | 1/18/2018    |
| 1.4.0   | 1,039     | 1/18/2018    |

Release Notes

Performance improvements, code cleanup, properties parameters.