dynoNet

Linear dynamical blocks

class torchid.dynonet.module.lti.MimoLinearDynamicalOperator(in_channels, out_channels, n_b, n_a, n_k=0)[source]

Applies a multi-input-multi-output linear dynamical filtering operation.

Parameters:
  • in_channels (int) – Number of input channels

  • out_channels (int) – Number of output channels

  • n_b (int) – Number of learnable coefficients of the transfer function numerator

  • n_a (int) – Number of learnable coefficients of the transfer function denominator

  • n_k (int, optional) – Number of input delays in the numerator. Default: 0

Shape:
  • Input: (batch_size, seq_len, in_channels)

  • Output: (batch_size, seq_len, out_channels)

b_coeff

The learnable coefficients of the transfer function numerator

Type:

Tensor

a_coeff

The learnable coefficients of the transfer function denominator

Type:

Tensor

Examples:

>>> in_channels, out_channels = 2, 4
>>> n_b, n_a, n_k = 2, 2, 1
>>> G = MimoLinearDynamicalOperator(in_channels, out_channels, n_b, n_a, n_k)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, in_channels))
>>> y_out = G(u_in) # shape: (batch_size, seq_len, out_channels)
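The filtering performed by the operator can be pinned down with a plain-NumPy reference sketch (not the library's implementation, which evaluates the recursion batched with custom backward passes; the sketch assumes the monic convention, i.e. the denominator's leading 1 is not stored among the n_a learnable coefficients):

```python
import numpy as np

def mimo_iir_filter(u, b, a, n_k=0):
    """Reference sketch of the MIMO dynamical filtering operation.

    u: (seq_len, in_channels) input sequence
    b: (out_channels, in_channels, n_b) numerator coefficients
    a: (out_channels, in_channels, n_a) denominator coefficients
       (monic form assumed: the leading 1 is implicit)
    Each pair (i, o) realizes
      y[t] = sum_k b[k] u[t-k-n_k] - sum_j a[j] y[t-1-j],
    and output channel o sums the contributions of all input channels.
    """
    seq_len, in_ch = u.shape
    out_ch, _, n_b = b.shape
    n_a = a.shape[2]
    # keep per-pair outputs so each SISO filter feeds back its own history
    y_pair = np.zeros((seq_len, out_ch, in_ch))
    for t in range(seq_len):
        for o in range(out_ch):
            for i in range(in_ch):
                acc = 0.0
                for k in range(n_b):
                    if t - k - n_k >= 0:
                        acc += b[o, i, k] * u[t - k - n_k, i]
                for j in range(n_a):
                    if t - 1 - j >= 0:
                        acc -= a[o, i, j] * y_pair[t - 1 - j, o, i]
                y_pair[t, o, i] = acc
    return y_pair.sum(axis=2)  # (seq_len, out_channels)
```

This loop is only meant to clarify the filtering semantics; the module performs the same computation over whole batches.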
get_filtdata()[source]

Returns the numerator and denominator coefficients of the transfer function \(q^{-1}\)-polynomials.

The polynomials are functions of the variable \(q^{-1}\). The polynomial coefficients b and a have length m and n, respectively, and are sorted in descending power order.

For a certain input channel \(i\) and output channel \(o\), the corresponding transfer function \(G_{i\rightarrow o}(z)\) is:

\[G_{i\rightarrow o}(z) = q^{-n_k}\frac{b[o, i, 0] + b[o, i, 1]q^{-1} + \dots + b[o, i, m-1]q^{-m+1}} {a[o, i, 0] + a[o, i, 1]q^{-1} + \dots + a[o, i, n-1]q^{-n+1}}\]
Returns:

numerator \(\beta\) and denominator \(\alpha\) polynomial coefficients of the transfer function.

Return type:

np.array(out_channels, in_channels, m), np.array(out_channels, in_channels, n)

Examples:

>>> num, den = G.get_filtdata()
>>> G_tf = control.TransferFunction(num, den, ts=1.0)
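Since the \(q^{-1}\)-polynomials may have different lengths (and, under the monic convention assumed in this sketch, the denominator's leading 1 is implicit), converting them to the equal-length, descending-power \(z\)-polynomials expected by tools such as control.TransferFunction amounts to prepending the input delay and zero-padding:

```python
import numpy as np

def filt_to_tf(b, a, n_k=0):
    """Sketch: turn q^{-1}-polynomial coefficients (denominator assumed monic,
    leading 1 not stored) into equal-length, descending-power z-polynomials."""
    a_full = np.concatenate(([1.0], np.asarray(a, dtype=float)))
    b_full = np.concatenate((np.zeros(n_k), np.asarray(b, dtype=float)))
    p = max(len(a_full), len(b_full))
    num = np.pad(b_full, (0, p - len(b_full)))  # trailing zeros: low powers of z
    den = np.pad(a_full, (0, p - len(a_full)))
    return num, den

# e.g.  q^{-1} * 0.5 / (1 + 0.3 q^{-1})  ->  0.5 / (z + 0.3)
num, den = filt_to_tf([0.5], [0.3], n_k=1)
```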
get_tfdata()[source]

Returns the numerator and denominator coefficients of the transfer function \(z\)-polynomials.

The polynomials are functions of the Z-transform variable \(z\). The polynomial coefficients \(\beta\) and \(\alpha\) have equal length p and are sorted in descending power order.

For a certain input channel \(i\) and output channel \(o\), the corresponding transfer function \(G_{i\rightarrow o}(z)\) is:

\[G_{i\rightarrow o}(z) = \frac{\beta[o, i, 0]z^{p-1} + \beta[o, i, 1]z^{p-2} + \dots + \beta[o, i, p-1]}{\alpha[o, i, 0]z^{p-1} + \alpha[o, i, 1]z^{p-2} + \dots + \alpha[o, i, p-1]}\]
Returns:

numerator \(\beta\) and denominator \(\alpha\) polynomial coefficients of the transfer function.

Return type:

np.array(out_channels, in_channels, p), np.array(out_channels, in_channels, p)

Examples:

>>> num, den = G.get_tfdata()
>>> G_tf = control.TransferFunction(num, den, ts=1.0)
class torchid.dynonet.module.lti.SisoLinearDynamicalOperator(n_b, n_a, n_k=0)[source]

Applies a single-input-single-output linear dynamical filtering operation.

Parameters:
  • n_b (int) – Number of learnable coefficients of the transfer function numerator

  • n_a (int) – Number of learnable coefficients of the transfer function denominator

  • n_k (int, optional) – Number of input delays in the numerator. Default: 0

Shape:
  • Input: (batch_size, seq_len, 1)

  • Output: (batch_size, seq_len, 1)

b_coeff

the learnable coefficients of the transfer function numerator

Type:

Tensor

a_coeff

the learnable coefficients of the transfer function denominator

Type:

Tensor

Examples:

>>> n_b, n_a = 2, 2
>>> G = SisoLinearDynamicalOperator(n_b, n_a)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, 1))
>>> y_out = G(u_in) # shape: (batch_size, seq_len, 1)
get_filtdata()[source]

Returns the numerator and denominator coefficients of the transfer function \(q^{-1}\)-polynomials.

The polynomials are functions of the variable \(q^{-1}\). The polynomial coefficients b and a have length m and n, respectively, and are sorted in descending power order.

For a certain input channel \(i\) and output channel \(o\), the corresponding transfer function \(G_{i\rightarrow o}(z)\) is:

\[G_{i\rightarrow o}(z) = q^{-n_k}\frac{b[o, i, 0] + b[o, i, 1]q^{-1} + \dots + b[o, i, m-1]q^{-m+1}} {a[o, i, 0] + a[o, i, 1]q^{-1} + \dots + a[o, i, n-1]q^{-n+1}}\]
Returns:

numerator \(\beta\) and denominator \(\alpha\) polynomial coefficients of the transfer function.

Return type:

np.array(1, 1, m), np.array(1, 1, n)

Examples:

>>> num, den = G.get_filtdata()
>>> G_tf = control.TransferFunction(num, den, ts=1.0)
get_tfdata()[source]

Returns the numerator and denominator coefficients of the transfer function \(z\)-polynomials.

The polynomials are functions of the Z-transform variable \(z\). The polynomial coefficients \(\beta\) and \(\alpha\) have equal length p and are sorted in descending power order.

For a certain input channel \(i\) and output channel \(o\), the corresponding transfer function \(G_{i\rightarrow o}(z)\) is:

\[G_{i\rightarrow o}(z) = \frac{\beta[o, i, 0]z^{p-1} + \beta[o, i, 1]z^{p-2} + \dots + \beta[o, i, p-1]}{\alpha[o, i, 0]z^{p-1} + \alpha[o, i, 1]z^{p-2} + \dots + \alpha[o, i, p-1]}\]
Returns:

numerator \(\beta\) and denominator \(\alpha\) polynomial coefficients of the transfer function.

Return type:

np.array(1, 1, p), np.array(1, 1, p)

Examples:

>>> num, den = G.get_tfdata()
>>> G_tf = control.TransferFunction(num, den, ts=1.0)
class torchid.dynonet.module.lti.MimoFirLinearDynamicalOperator(in_channels, out_channels, n_b, channels_last=True)[source]

Applies a FIR linear multi-input-multi-output filtering operation.

Parameters:
  • in_channels (int) – Number of input channels

  • out_channels (int) – Number of output channels

  • n_b (int) – Number of learnable FIR coefficients

  • channels_last (bool, optional) – If True, the channel dimension of the input and output tensors is the last one. Default: True

Shape:
  • Input: (batch_size, seq_len, in_channels)

  • Output: (batch_size, seq_len, out_channels)

G

The underlying Conv1D object used to implement the convolution

Type:

torch.nn.Conv1d

Examples:

>>> in_channels, out_channels = 2, 4
>>> n_b = 128
>>> G = MimoFirLinearDynamicalOperator(in_channels, out_channels, n_b)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, in_channels))
>>> y_out = G(u_in) # shape: (batch_size, seq_len, out_channels)
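Because an FIR operator has no denominator, its output is simply a causal convolution of each input channel with the learned impulse response, summed per output channel. A NumPy sketch of that computation (a reference only, not the library's Conv1d-based implementation):

```python
import numpy as np

def fir_mimo(u, b):
    """Sketch of the FIR MIMO filtering operation.
    u: (seq_len, in_channels); b: (out_channels, in_channels, n_b)."""
    seq_len, in_ch = u.shape
    out_ch = b.shape[0]
    y = np.zeros((seq_len, out_ch))
    for o in range(out_ch):
        for i in range(in_ch):
            # full convolution, truncated to the first seq_len (causal) samples
            y[:, o] += np.convolve(u[:, i], b[o, i])[:seq_len]
    return y
```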
class torchid.dynonet.module.lti.SisoFirLinearDynamicalOperator(n_b, channels_last=True)[source]

Applies a FIR linear single-input-single-output filtering operation.

Parameters:

n_b (int) – Number of learnable FIR coefficients

channels_last (bool, optional) – If True, the channel dimension of the input and output tensors is the last one. Default: True

Shape:
  • Input: (batch_size, seq_len, 1)

  • Output: (batch_size, seq_len, 1)

G

The underlying Conv1D object used to implement the convolution

Type:

torch.nn.Conv1d

Examples:

>>> n_b = 128
>>> G = SisoFirLinearDynamicalOperator(n_b)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, 1))
>>> y_out = G(u_in) # shape: (batch_size, seq_len, 1)
class torchid.dynonet.module.lti.MimoSecondOrderDynamicOperator(in_channels, out_channels)[source]

Applies a stable second-order linear multi-input-multi-output filtering operation. The denominator of the transfer function is parametrized in terms of two complex-conjugate poles with magnitude \(r\), \(0 < r < 1\), and phase \(\beta\), \(0 < \beta < \pi\). In turn, \(r\) and \(\beta\) are parametrized in terms of the unconstrained variables \(\rho\) and \(\psi\).

Parameters:
  • in_channels (int) – Number of input channels

  • out_channels (int) – Number of output channels

Shape:
  • Input: (batch_size, seq_len, in_channels)

  • Output: (batch_size, seq_len, out_channels)

rho

the learnable \(\rho\) coefficients of the transfer function denominator

Type:

Tensor

psi

the learnable \(\psi\) coefficients of the transfer function denominator

Type:

Tensor

b_coeff

the learnable numerator coefficients

Type:

Tensor

Examples:

>>> in_channels, out_channels = 2, 4
>>> G = MimoSecondOrderDynamicOperator(in_channels, out_channels)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, in_channels))
>>> y_out = G(u_in) # shape: (batch_size, seq_len, out_channels)
class torchid.dynonet.module.lti.SisoSecondOrderDynamicOperator[source]

Applies a stable second-order linear single-input-single-output filtering operation. The denominator of the transfer function is parametrized in terms of two complex-conjugate poles with magnitude \(r\), \(0 < r < 1\), and phase \(\beta\), \(0 < \beta < \pi\). In turn, \(r\) and \(\beta\) are parametrized in terms of the unconstrained variables \(\rho\) and \(\psi\).
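One way to realize such a guaranteed-stable parametrization is to pass the unconstrained variables through sigmoids and build the monic second-order denominator from the resulting pole pair. This is a sketch under an assumed squashing; the library's exact mapping from \(\rho, \psi\) to \(r, \beta\) may differ:

```python
import numpy as np

def second_order_denominator(rho, psi):
    """Sketch of a stable second-order parametrization (assumed mapping):
    squash rho, psi into r in (0, 1) and beta in (0, pi), then form
    1 - 2 r cos(beta) q^{-1} + r^2 q^{-2}, whose roots are the
    complex-conjugate poles r*exp(+/- j*beta), always inside the unit circle."""
    r = 1.0 / (1.0 + np.exp(-rho))       # pole magnitude in (0, 1)
    beta = np.pi / (1.0 + np.exp(-psi))  # pole phase in (0, pi)
    return np.array([1.0, -2.0 * r * np.cos(beta), r ** 2])

a = second_order_denominator(rho=0.0, psi=0.0)
print(np.abs(np.roots(a)))  # both pole magnitudes equal 0.5
```

Whatever values the optimizer assigns to \(\rho\) and \(\psi\), the poles stay inside the unit circle, so the filter remains stable throughout training.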

Static non-linear blocks

class torchid.dynonet.module.static.MimoStaticNonLinearity(in_channels, out_channels, n_hidden=20, activation='tanh')[source]

Applies a static MIMO non-linearity. The non-linearity is implemented as a feed-forward neural network.

Parameters:
  • in_channels (int) – Number of input channels

  • out_channels (int) – Number of output channels

  • n_hidden (int, optional) – Number of nodes in the hidden layer. Default: 20

  • activation (str) – Activation function. Either ‘tanh’, ‘relu’, or ‘sigmoid’. Default: ‘tanh’

Shape:
  • Input: (…, in_channels)

  • Output: (…, out_channels)

Examples:

>>> in_channels, out_channels = 2, 4
>>> F = MimoStaticNonLinearity(in_channels, out_channels)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, in_channels))
>>> y_out = F(u_in) # shape: (batch_size, seq_len, out_channels)
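Since the non-linearity is a one-hidden-layer feed-forward network applied independently at every time step, its computation can be sketched as follows (random placeholder weights stand in for the module's learnable parameters, and the default 'tanh' activation is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
in_channels, out_channels, n_hidden = 2, 4, 20

# placeholder weights; in the module these are learnable parameters
W1 = rng.standard_normal((in_channels, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, out_channels))
b2 = np.zeros(out_channels)

u = np.ones((32, 100, in_channels))  # (batch_size, seq_len, in_channels)
y = np.tanh(u @ W1 + b1) @ W2 + b2   # applied pointwise over batch and time
print(y.shape)  # (32, 100, 4)
```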
forward(u_lin)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class torchid.dynonet.module.static.SisoStaticNonLinearity(n_hidden=20, activation='tanh')[source]

Applies a static SISO non-linearity. The non-linearity is implemented as a feed-forward neural network.

Parameters:
  • n_hidden (int, optional) – Number of nodes in the hidden layer. Default: 20

  • activation (str) – Activation function. Either ‘tanh’, ‘relu’, or ‘sigmoid’. Default: ‘tanh’

Shape:
  • Input: (…, 1)

  • Output: (…, 1)

Examples:

>>> F = SisoStaticNonLinearity(n_hidden=20)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, 1))
>>> y_out = F(u_in) # shape: (batch_size, seq_len, 1)
class torchid.dynonet.module.static.MimoChannelWiseNonLinearity(channels, n_hidden=10)[source]

Applies a Channel-wise non-linearity. The non-linearity is implemented as a set of feed-forward neural networks (each one operating on a different channel).

Parameters:
  • channels (int) – Number of both input and output channels

  • n_hidden (int, optional) – Number of nodes in the hidden layer of each network. Default: 10

Shape:
  • Input: (…, channels)

  • Output: (…, channels)

Examples:

>>> channels = 4
>>> F = MimoChannelWiseNonLinearity(channels)
>>> batch_size, seq_len = 32, 100
>>> u_in = torch.ones((batch_size, seq_len, channels))
>>> y_out = F(u_in) # shape: (batch_size, seq_len, channels)
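The channel-wise variant can be sketched the same way, except that each channel gets its own scalar-to-scalar network (again with random placeholder weights and an assumed 'tanh' activation):

```python
import numpy as np

rng = np.random.default_rng(0)
channels, n_hidden = 4, 10

# one small network per channel; placeholder weights stand in for parameters
W1 = rng.standard_normal((channels, 1, n_hidden))
W2 = rng.standard_normal((channels, n_hidden, 1))

u = np.ones((32, 100, channels))
y = np.empty_like(u)
for c in range(channels):
    h = np.tanh(u[..., c:c + 1] @ W1[c])  # (batch, seq_len, n_hidden)
    y[..., c] = (h @ W2[c])[..., 0]       # scalar output for channel c
print(y.shape)  # (32, 100, 4)
```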
forward(u_lin)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.