Cells

Introduction

Cell objects are the atomic elements that compose a deep neural network.

They are the nodes of the computation graph. n2d2.cells.NeuralNetworkCell objects do not depend on a DeepNet, which allows dynamic management of the computation.

Cells are organized with the following logic:
  • n2d2.cells.NeuralNetworkCell : Atomic cell of a neural network;

  • n2d2.cells.Block : Stores a collection of n2d2.cells.NeuralNetworkCell; the storage order does not determine the computation graph;

  • n2d2.cells.DeepNetCell : This cell allows you to use an N2D2.DeepNet; it can be used for ONNX and INI import or to run optimized learning;

  • n2d2.cells.Iterable : Similar to n2d2.cells.Block, but the storage order determines the computation graph;

  • n2d2.cells.Sequence : A vertical structure to create a neural network;

  • n2d2.cells.Layer : A horizontal structure to create a neural network.
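As a small illustration of this organization, here is a minimal sketch (the layer sizes are arbitrary) composing two atomic cells into a n2d2.cells.Sequence:

import n2d2

# Two atomic NeuralNetworkCell objects
fc1 = n2d2.cells.Fc(28 * 28, 128)
fc2 = n2d2.cells.Fc(128, 10)

# In a Sequence, the storage order defines the computation graph
model = n2d2.cells.Sequence([fc1, fc2])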

Cell class diagram

Block

Sequence

Layer

DeepNetCell

The n2d2.cells.DeepNetCell constructor requires an N2D2.DeepNet. In practice, you will not use the constructor directly.

There are three methods to generate a n2d2.cells.DeepNetCell : n2d2.cells.DeepNetCell.load_from_ONNX(), n2d2.cells.DeepNetCell.load_from_INI() and n2d2.cells.Sequence.to_deepnet_cell().

The DeepNetCell can be used to train the neural network in an efficient way thanks to n2d2.cells.DeepNetCell.fit().

Example

You can create a DeepNet cell with n2d2.cells.DeepNetCell.load_from_ONNX() :

# Load the MNIST database and keep 10% of the data for validation
database = n2d2.database.MNIST(data_path=DATA_PATH, validation=0.1)
# Provide 28x28 grayscale stimuli in batches
provider = n2d2.provider.DataProvider(database, [28, 28, 1], batch_size=BATCH_SIZE)
# Import the ONNX model as a DeepNetCell
model = n2d2.cells.DeepNetCell.load_from_ONNX(provider, ONNX_PATH)
# Train, then test the network
model.fit(nb_epochs)
model.run_test()

Using the n2d2.cells.DeepNetCell.fit() method reduces training time, as it parallelizes the loading of the data batches and the propagation.

If you want to use the dynamic computation graph provided by the API, you can use the n2d2.cells.DeepNetCell as a simple cell.

database = n2d2.database.MNIST(data_path=DATA_PATH, validation=0.1)
provider = n2d2.provider.DataProvider(database, [28, 28, 1], batch_size=BATCH_SIZE)
model = n2d2.cells.DeepNetCell.load_from_ONNX(provider, ONNX_PATH)
# The DeepNetCell behaves like any other cell inside a Sequence
sequence = n2d2.cells.Sequence([model, n2d2.cells.Softmax(with_loss=True)])
input_tensor = n2d2.Tensor(DIMS)
output_tensor = sequence(input_tensor)
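The third generation path, n2d2.cells.Sequence.to_deepnet_cell(), goes the other way: it converts a dynamically built Sequence into a DeepNetCell so that you can benefit from fit(). A hypothetical sketch, assuming the method only needs the data provider (check the API reference for the exact signature):

sequence = n2d2.cells.Sequence([n2d2.cells.Fc(28 * 28, 128), n2d2.cells.Fc(128, 10)])
deepnet_cell = sequence.to_deepnet_cell(provider)
deepnet_cell.fit(nb_epochs)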

Cells

NeuralNetworkCell

Conv

Deconv

Fc

Dropout

ElemWise

Padding

Softmax

BatchNorm2d

Pool

Activation

Reshape

Resize

Scaling

Transformation

Transpose

Saving parameters

You can save the parameters (weights, biases, …) of your network with the method export_free_parameters. To load those parameters, you can use the method import_free_parameters.

With n2d2, you can choose whether you want to save the parameters of a part of your network or of the whole graph.

Object                         Save parameters             Load parameters
n2d2.cells.NeuralNetworkCell   export_free_parameters()    import_free_parameters()
n2d2.cells.Block               export_free_parameters()    import_free_parameters()
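A minimal sketch, assuming both methods accept a destination directory path (the directory name here is arbitrary):

# Save the weights and biases of a single cell to a directory
fc = n2d2.cells.Fc(2, 2)
fc.export_free_parameters("./parameters")
# Load them back later
fc.import_free_parameters("./parameters")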

Configuration section

If you want to add the same parameters to multiple cells, you can use a n2d2.ConfigSection.

n2d2.ConfigSection objects are used like dictionaries and are passed to class constructors as kwargs.

Usage example

conv_config = n2d2.ConfigSection(no_bias=True)
n2d2.cells.Conv(3, 32, [4, 4], **conv_config)

This creates a n2d2.cells.Conv with the parameter no_bias=True. This functionality allows you to write more concise code when multiple cells share the same parameters.
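For instance, the same n2d2.ConfigSection can be reused for several cells (the channel counts here are arbitrary); this is safe because no_bias is a plain value, see the warning below for object parameters:

conv_config = n2d2.ConfigSection(no_bias=True)
conv1 = n2d2.cells.Conv(3, 32, [4, 4], **conv_config)
conv2 = n2d2.cells.Conv(32, 64, [4, 4], **conv_config)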

Warning

If you want to pass an object as a parameter to multiple n2d2 objects, you need to create a wrapping function that builds the object, so that each cell receives its own instance instead of sharing one. Example:

def conv_def():
    # Each call returns a fresh ConfigSection, so every cell gets its own solver instance
    return n2d2.ConfigSection(weights_solver=n2d2.solver.SGD())

n2d2.cells.Conv(3, 32, [4, 4], **conv_def())

Mapping

You can change the mapping of the input channels for some cells (check whether they have a mapping parameter available).

You can create a mapping manually with a n2d2.Tensor object :

mapping = n2d2.Tensor([15, 24], datatype="bool")
mapping.set_values([
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1],
[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]])

Or use the Mapping object :

mapping = n2d2.mapping.Mapping(nb_channels_per_group=2).create_mapping(15, 24)

Which creates the following mapping:

1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
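A hypothetical sketch of passing such a mapping to a cell that exposes the mapping parameter, for example a convolution with 15 input channels and 24 outputs (the kernel size is arbitrary):

conv = n2d2.cells.Conv(15, 24, [5, 5], mapping=mapping)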

Solver

You can associate a n2d2.solver.Solver object with a cell at construction time and at runtime. This solver object optimizes the parameters of your cell using a specific algorithm.

Usage example

In this short example, we will see how to associate a solver with a model and with a cell object, at construction time and at runtime.

Set solver at construction time

Let’s create a couple of n2d2.cells.Fc cells and add them to a n2d2.cells.Sequence. At construction time, we will set the solver of one of them to n2d2.solver.SGD with learning_rate=0.1.

import n2d2

cell1 = n2d2.cells.Fc(2, 2, solver=n2d2.solver.SGD(learning_rate=0.1))
cell2 = n2d2.cells.Fc(2, 2)

model = n2d2.cells.Sequence([cell1, cell2])

print(model)

Output :

'Sequence_0' Sequence(
        (0): 'Fc_0' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
        (1): 'Fc_1' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.01, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.01, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
)

Set a solver for a specific parameter

We can set a new solver for the bias of the second fully connected cell. This solver will be different from the one used for the weights.

Note

Here we access the cell via its instantiated object, but we could have used its name: model["Fc_1"].bias_solver = n2d2.solver.Adam().

cell2.bias_solver = n2d2.solver.Adam()

print(model)

Output :

'Sequence_0' Sequence(
        (0): 'Fc_0' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
        (1): 'Fc_1' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.01, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.001), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
)

Set a solver for a model

We can set a solver for the whole n2d2.cells.Sequence with the method n2d2.cells.Sequence.set_solver().

model.set_solver(n2d2.solver.Adam(learning_rate=0.1))

print(model)

Output :

'Sequence_0' Sequence(
        (0): 'Fc_0' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), bias_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
        (1): 'Fc_1' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), bias_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
)

SGD

Adam

Filler

You can associate a n2d2.filler.Filler object with a cell at construction time. This object fills the weights and biases using a specific method.

Usage example

In this short example, we will see how to associate a filler with a cell object, how to get the weights and biases, and how to set a new filler and refill the weights.

Setting a filler at construction time

We begin by importing n2d2 and creating a n2d2.cells.Fc object, with which we associate a n2d2.filler.Constant filler.

Note

If you want to set a filler only for the weights (or only for the biases), you can use the parameter weights_filler (or bias_filler).

import n2d2
cell = n2d2.cells.Fc(2, 2, filler=n2d2.filler.Constant(value=1.0))

If you print the weights, you will see that they are all set to one.

print("--- Weights ---")
for channel in cell.get_weights():
for value in channel:
        print(value)

Output :

--- Weights ---
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)

The same goes for the biases:

print("--- Biases ---")
for channel in cell.get_biases():
print(channel)

Output :

--- Biases ---
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)

Changing the filler of an instantiated object

You can set a new filler for the biases by changing the bias_filler attribute (or weights_filler for the weights only, or filler for both).

However, changing the filler does not change the current parameter values; you need to use the method n2d2.cells.Fc.refill_bias() (see also n2d2.cells.Fc.refill_weights()).

Note

You can also use the methods n2d2.cells.Fc.set_filler(), n2d2.cells.Fc.set_weights_filler() and n2d2.cells.Fc.set_biases_filler(), which have a refill option.

cell.bias_filler = n2d2.filler.Normal()
cell.refill_bias()

You can then observe the new biases :

print("--- New Biases ---")
for channel in cell.get_biases():
print(channel)

Output :

--- New Biases ---
n2d2.Tensor([
1.32238
], device=cpu, datatype=f)
n2d2.Tensor([
-0.0233932
], device=cpu, datatype=f)

He

Normal

Constant

Xavier

Activations

You can associate an activation function with some cells.
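For example, a minimal sketch associating a rectifier activation with a fully connected cell at construction time (the activation parameter is the one visible in the printed cell signatures above):

fc = n2d2.cells.Fc(2, 2, activation=n2d2.activation.Rectifier())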

Linear

Rectifier

Tanh

Target

Placed after the last cell of the network, this object computes the loss.

To understand what the Target does, please refer to this part of the documentation : Target INI.

Usage example

How to use a Target to train your model:
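First, instantiate the target; a minimal sketch, assuming the n2d2.target.Score class bound to the data provider (check the target documentation for the class matching your task):

target = n2d2.target.Score(provider)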

# Propagation & BackPropagation example
output = model(stimuli)
loss = target(output)
loss.back_propagate()
loss.update()

Log performance analysis of your training :

### After validation ###
# save computational stats of the network
target.log_stats("name")
# save a confusion matrix
target.log_confusion_matrix("name")
# save a graph of the loss and the validation score as a function of the number of steps
target.log_success("name")