commit 6574bddf6b
Author: Andrew Jeffery <andrew@aj.id.au>
Date:   2017-12-18 16:59:40 +10:30

genann: Use reciprocal interval value to strength reduce divide to multiply
This gives a reduction of roughly 2.5 million instructions in the execution
trace of example4.

genann_act_sigmoid_cached() previously divided by interval to calculate the
lookup index. Division is an expensive operation, so instead use the
reciprocal of the existing interval calculation, reducing the divide to a
multiply.
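
As an illustrative sketch of the transformation (the table size, bounds, and
names here are assumptions, not the exact genann internals):

```
#include <math.h>

#define LOOKUP_SIZE 4096

/* Sketch of a cached sigmoid lookup table. */
static double lookup[LOOKUP_SIZE];
static double interval;
static double interval_recip;
static const double sig_min = -15.0, sig_max = 15.0;

static void build_table(void) {
    int i;
    interval = (sig_max - sig_min) / LOOKUP_SIZE;
    interval_recip = 1.0 / interval; /* pay for the divide once, up front */
    for (i = 0; i < LOOKUP_SIZE; ++i)
        lookup[i] = 1.0 / (1.0 + exp(-(sig_min + interval * i)));
}

static double sigmoid_cached(double a) {
    if (a < sig_min) return 0.0;
    if (a >= sig_max) return 1.0;
    /* Was: (a - sig_min) / interval -- a divide in the hot path.
     * Now: a multiply by the precomputed reciprocal. */
    return lookup[(int)((a - sig_min) * interval_recip)];
}
```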

Building with the following configuration:

```
$ head /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 61
model name      : Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
stepping        : 4
microcode       : 0x25
cpu MHz         : 2593.871
cache size      : 4096 KB
physical id     : 0
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="17.10 (Artful Aardvark)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 17.10"
VERSION_ID="17.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=artful
UBUNTU_CODENAME=artful
$ cc --version
gcc (Ubuntu 7.2.0-8ubuntu3) 7.2.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```

on my Lenovo X1 Carbon Gen 3 machine yields the following:

```
$ make CFLAGS="-g -O3 -march=native -DNDEBUG"
cc -g -O3 -march=native -DNDEBUG   -c -o test.o test.c
cc -g -O3 -march=native -DNDEBUG   -c -o genann.o genann.c
cc -g -O3 -march=native -DNDEBUG   -c -o example1.o example1.c
cc -g -O3 -march=native -DNDEBUG   -c -o example2.o example2.c
cc -g -O3 -march=native -DNDEBUG   -c -o example3.o example3.c
cc -g -O3 -march=native -DNDEBUG   -c -o example4.o example4.c
cc -g -O3 -march=native -DNDEBUG   -c -o strings.o strings.c
cc   test.o genann.o  -lm -o test
cc   example1.o genann.o  -lm -o example1
cc   example4.o genann.o  -lm -o example4
cc   example3.o genann.o  -lm -o example3
cc   example2.o genann.o  -lm -o example2
cc   strings.o genann.o  -lm -o strings
$ for i in `seq 0 10`; do ./example4 > /dev/null; done; sudo perf stat record ./example4
GENANN example 4.
Train an ANN on the IRIS dataset using backpropagation.
Loading 150 data points from example/iris.data
Training for 5000 loops over data.
147/150 correct (98.0%).

 Performance counter stats for './example4':

        101.369081      task-clock (msec)         #    0.998 CPUs utilized
                 1      context-switches          #    0.010 K/sec
                 0      cpu-migrations            #    0.000 K/sec
                79      page-faults               #    0.779 K/sec
       320,197,883      cycles                    #    3.159 GHz
     1,121,174,423      instructions              #    3.50  insn per cycle
       223,257,752      branches                  # 2202.425 M/sec
            62,680      branch-misses             #    0.03% of all branches

       0.101595114 seconds time elapsed
```

Prior to the change, we see something like:

```
$ make CFLAGS="-g -O3 -march=native"
cc -g -O3 -march=native   -c -o test.o test.c
cc -g -O3 -march=native   -c -o genann.o genann.c
cc -g -O3 -march=native   -c -o example1.o example1.c
cc -g -O3 -march=native   -c -o example2.o example2.c
cc -g -O3 -march=native   -c -o example3.o example3.c
cc -g -O3 -march=native   -c -o example4.o example4.c
cc -g -O3 -march=native   -c -o strings.o strings.c
cc   test.o genann.o  -lm -o test
cc   example1.o genann.o  -lm -o example1
cc   example3.o genann.o  -lm -o example3
cc   example4.o genann.o  -lm -o example4
cc   strings.o genann.o  -lm -o strings
cc   example2.o genann.o  -lm -o example2
$ for i in `seq 0 10`; do ./example4 > /dev/null; done; sudo perf stat record ./example4
GENANN example 4.
Train an ANN on the IRIS dataset using backpropagation.
Loading 150 data points from example/iris.data
Training for 5000 loops over data.
147/150 correct (98.0%).

 Performance counter stats for './example4':

        104.644198      task-clock (msec)         #    0.998 CPUs utilized
                 0      context-switches          #    0.000 K/sec
                 0      cpu-migrations            #    0.000 K/sec
                79      page-faults               #    0.755 K/sec
       330,340,554      cycles                    #    3.157 GHz
     1,123,669,767      instructions              #    3.40  insn per cycle
       215,441,809      branches                  # 2058.803 M/sec
            62,406      branch-misses             #    0.03% of all branches

       0.104891323 seconds time elapsed
```

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>


Genann

Genann is a minimal, well-tested library for training and using feedforward artificial neural networks (ANN) in C. Its primary focus is on being simple, fast, reliable, and hackable. It achieves this by providing only the necessary functions and little extra.

Features

  • ANSI C with no dependencies.
  • Contained in a single source code and header file.
  • Simple.
  • Fast and thread-safe.
  • Easily extendible.
  • Implements backpropagation training.
  • Compatible with alternative training methods (classic optimization, genetic algorithms, etc.).
  • Includes examples and test suite.
  • Released under the zlib license - free for nearly any use.

Building

Genann is self-contained in two files: genann.c and genann.h. To use Genann, simply add those two files to your project.
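
For example, a minimal build might look like this (a sketch assuming a
Unix-like cc; note the -lm for the math library, as seen in the Makefile
output above; myprogram.c stands in for your own code):

```
cc -O2 -c genann.c
cc -O2 myprogram.c genann.o -lm -o myprogram
```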

Example Code

Four example programs are included with the source code.

Quick Example

We create an ANN taking 2 inputs, having 1 layer of 3 hidden neurons, and providing 2 outputs. It has the following structure:

[Figure: NN example structure]

We then train it on a set of labeled data using backpropagation and ask it to predict on a test data point:

```
#include "genann.h"
#include <stdio.h>

/* Not shown: loading your training and test data. */
double **training_data_input, **training_data_output, **test_data_input;

/* New network with 2 inputs,
 * 1 hidden layer of 3 neurons,
 * and 2 outputs. */
genann *ann = genann_init(2, 1, 3, 2);

/* Learn on the training set. */
int i, j;
for (i = 0; i < 300; ++i) {
    for (j = 0; j < 100; ++j)
        genann_train(ann, training_data_input[j], training_data_output[j], 0.1);
}

/* Run the network and see what it predicts. */
double const *prediction = genann_run(ann, test_data_input[0]);
printf("Output for the first test data point is: %f, %f\n", prediction[0], prediction[1]);

genann_free(ann);
```

This example is meant to show API usage; it does not demonstrate good machine learning technique. In a real application you would likely want to train on the training data in a random order. You would also want to monitor the learning to prevent over-fitting.

Usage

Creating and Freeing ANNs

```
genann *genann_init(int inputs, int hidden_layers, int hidden, int outputs);
genann *genann_copy(genann const *ann);
void genann_free(genann *ann);
```

Creating a new ANN is done with the genann_init() function. Its arguments are the number of inputs, the number of hidden layers, the number of neurons in each hidden layer, and the number of outputs. It returns a genann struct pointer.

Calling genann_copy() will create a deep copy of an existing genann struct.

Call genann_free() when you're finished with an ANN returned by genann_init().
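
For instance, a minimal sketch (the layer sizes here are arbitrary):

```
/* 4 inputs, 2 hidden layers of 8 neurons each, 1 output. */
genann *ann = genann_init(4, 2, 8, 1);

/* genann_copy() returns an independent deep copy; free it separately. */
genann *snapshot = genann_copy(ann);

genann_free(snapshot);
genann_free(ann);
```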

Training ANNs

```
void genann_train(genann const *ann, double const *inputs,
        double const *desired_outputs, double learning_rate);
```

genann_train() will perform one update using standard backpropagation. It should be called by passing in an array of inputs, an array of expected outputs, and a learning rate. See example1.c for an example of learning with backpropagation.

A primary design goal of Genann was to store all the network weights in one contiguous block of memory. This makes it easy and efficient to train the network weights using direct-search numeric optimization algorithms, such as Hill Climbing, the Genetic Algorithm, Simulated Annealing, etc. These methods can be used by searching on the ANN's weights directly. Every genann struct contains the members int total_weights; and double *weight;. *weight points to an array of total_weights size which contains all weights used by the ANN. See example2.c for an example of training using random hill climbing search, and the sketch below.
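
As a sketch of the idea (the fitness function here is hypothetical; see
example2.c for a real implementation):

```
#include <stdlib.h>
#include "genann.h"

/* Hypothetical: score a network on your task, higher is better. */
extern double fitness(genann const *ann);

/* Random hill climbing directly on the flat weight array. */
void hill_climb(genann *ann, int iterations) {
    double best = fitness(ann);
    int it;
    for (it = 0; it < iterations; ++it) {
        int i = rand() % ann->total_weights;
        double saved = ann->weight[i];
        /* Perturb one weight by a small random amount. */
        ann->weight[i] += ((double)rand() / RAND_MAX - 0.5) * 0.1;
        double score = fitness(ann);
        if (score > best) {
            best = score;           /* keep the improvement */
        } else {
            ann->weight[i] = saved; /* revert */
        }
    }
}
```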

Saving and Loading ANNs

```
genann *genann_read(FILE *in);
void genann_write(genann const *ann, FILE *out);
```

Genann provides the genann_read() and genann_write() functions for loading or saving an ANN in a text-based format.
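
For example, a round-trip sketch (the path argument and error checks are
illustrative, not part of the library):

```
#include <stdio.h>
#include "genann.h"

int save_ann(genann const *ann, const char *path) {
    FILE *out = fopen(path, "w");
    if (!out) return -1;
    genann_write(ann, out);  /* text-based format */
    fclose(out);
    return 0;
}

genann *load_ann(const char *path) {
    FILE *in = fopen(path, "r");
    if (!in) return NULL;
    genann *ann = genann_read(in);
    fclose(in);
    return ann;
}
```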

Evaluating

```
double const *genann_run(genann const *ann, double const *inputs);
```

Call genann_run() on a trained ANN to run a feed-forward pass on a given set of inputs. genann_run() will provide a pointer to the array of predicted outputs (of ann->outputs length).
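
For example (a sketch assuming ann and inputs are already set up):

```
double const *out = genann_run(ann, inputs);
int k;
for (k = 0; k < ann->outputs; ++k)
    printf("output %d: %f\n", k, out[k]);
```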

Hints

  • All functions start with genann_.
  • The code is simple. Dig in and change things.

Extra Resources

The comp.ai.neural-nets FAQ is an excellent resource for an introduction to artificial neural networks.

If you're looking for a heavier, more opinionated neural network library in C, I recommend the FANN library. Another good library is Peter van Rossum's Lightweight Neural Network, which, despite its name, is heavier and has more features than Genann.