dualing.models.base

Pre-defined base architectures.

A package of ready-to-use machine learning architectures that serve as bases for Siamese Networks.

class dualing.models.base.CNN(n_blocks: Optional[int] = 3, init_kernel: Optional[int] = 5, n_output: Optional[int] = 128, activation: Optional[str] = 'sigmoid')

Bases: dualing.core.Base

The CNN class implements a standard Convolutional Neural Network.

__init__(self, n_blocks: Optional[int] = 3, init_kernel: Optional[int] = 5, n_output: Optional[int] = 128, activation: Optional[str] = 'sigmoid')

Initialization method.

Parameters
  • n_blocks – Number of convolutional/pooling blocks.

  • init_kernel – Size of initial kernel.

  • n_output – Number of output units.

  • activation – Output activation function.

call(self, x: tensorflow.Tensor)

Method that performs a forward pass whenever this class is called.

Parameters

x – Tensor containing the input sample.

Returns

The layer’s outputs.

Return type

(tf.Tensor)

class dualing.models.base.GRU(vocab_size: Optional[int] = 1, embedding_size: Optional[int] = 32, hidden_size: Optional[int] = 64)

Bases: dualing.core.Base

The GRU class implements a standard Gated Recurrent Unit network.

__init__(self, vocab_size: Optional[int] = 1, embedding_size: Optional[int] = 32, hidden_size: Optional[int] = 64)

Initialization method.

Parameters
  • vocab_size – Vocabulary size.

  • embedding_size – Embedding layer units.

  • hidden_size – Hidden layer units.

call(self, x: tensorflow.Tensor)

Method that performs a forward pass whenever this class is called.

Parameters

x – Tensor containing the input sample.

Returns

The layer’s outputs.

Return type

(tf.Tensor)

class dualing.models.base.LSTM(vocab_size: Optional[int] = 1, embedding_size: Optional[int] = 32, hidden_size: Optional[int] = 64)

Bases: dualing.core.Base

The LSTM class implements a standard Long Short-Term Memory network.

__init__(self, vocab_size: Optional[int] = 1, embedding_size: Optional[int] = 32, hidden_size: Optional[int] = 64)

Initialization method.

Parameters
  • vocab_size – Vocabulary size.

  • embedding_size – Embedding layer units.

  • hidden_size – Hidden layer units.

call(self, x: tensorflow.Tensor)

Method that performs a forward pass whenever this class is called.

Parameters

x – Tensor containing the input sample.

Returns

The layer’s outputs.

Return type

(tf.Tensor)

class dualing.models.base.MLP(n_hidden: Optional[Tuple[int, ...]] = (128,))

Bases: dualing.core.Base

The MLP class implements a Multi-Layer Perceptron.

__init__(self, n_hidden: Optional[Tuple[int, ...]] = (128,))

Initialization method.

Parameters

n_hidden – Tuple containing the number of hidden units per layer.

call(self, x: tensorflow.Tensor)

Method that performs a forward pass whenever this class is called.

Parameters

x – Tensor containing the input sample.

Returns

The layer’s outputs.

Return type

(tf.Tensor)

class dualing.models.base.RNN(vocab_size: Optional[int] = 1, embedding_size: Optional[int] = 32, hidden_size: Optional[int] = 64)

Bases: dualing.core.Base

The RNN class implements a standard Recurrent Neural Network.

__init__(self, vocab_size: Optional[int] = 1, embedding_size: Optional[int] = 32, hidden_size: Optional[int] = 64)

Initialization method.

Parameters
  • vocab_size – Vocabulary size.

  • embedding_size – Embedding layer units.

  • hidden_size – Hidden layer units.

call(self, x: tensorflow.Tensor)

Method that performs a forward pass whenever this class is called.

Parameters

x – Tensor containing the input sample.

Returns

The layer’s outputs.

Return type

(tf.Tensor)