Internal Neural Network Interfaces
- class drdmannturb.TauNet(n_layers: int = 2, hidden_layer_size: int = 3, learn_nu: bool = True)
Classical implementation of a neural network that learns the eddy lifetime function \(\tau(\boldsymbol{k})\). The network is composed of a SimpleNN and a Rational module. The hidden-layer width is set by a single integer, and every hidden layer has that width.
The objective is to learn the function
\[\tau(\boldsymbol{k})=\frac{T|\boldsymbol{a}|^{\nu-\frac{2}{3}}}{\left(1+|\boldsymbol{a}|^2\right)^{\nu / 2}}, \quad \boldsymbol{a}=\boldsymbol{a}(\boldsymbol{k}),\]
where
\[\boldsymbol{a}(\boldsymbol{k}) = \operatorname{abs}(\boldsymbol{k}) + \mathrm{NN}(\operatorname{abs}(\boldsymbol{k})).\]
This class implements the simplest architectures which solve this problem.
- Parameters:
  - n_layers (int, optional) – Number of hidden layers, by default 2
  - hidden_layer_size (int, optional) – Size of the hidden layers, by default 3
  - learn_nu (bool, optional) – If True, the exponent \(\nu\) is also learned, by default True
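A minimal usage sketch (not part of the package documentation), assuming TauNet is a torch.nn.Module whose forward pass accepts a batch of wavevectors of shape (N, 3) and returns one eddy lifetime per wavevector; that calling convention is an assumption here, and only the constructor arguments above are documented.

    import torch
    from drdmannturb import TauNet

    # Documented defaults: two hidden layers of width 3, with the
    # exponent nu treated as a learnable parameter.
    tau_net = TauNet(n_layers=2, hidden_layer_size=3, learn_nu=True)

    # Hypothetical batch of 100 wavevectors k in R^3.
    k = torch.randn(100, 3)

    # Assumed forward convention: one eddy lifetime tau(k) per wavevector.
    tau = tau_net(k)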
- class drdmannturb.CustomNet(n_layers: int = 2, hidden_layer_sizes: int | list[int] = [10, 10], activations: List[Module] = [ReLU(), ReLU()], learn_nu: bool = True)
A more versatile version of TauNet, with the same objective: to learn the eddy lifetime function \(\tau(\boldsymbol{k})\). This class allows for neural networks of variable widths and different activation functions between layers.
- Parameters:
  - n_layers (int, optional) – Number of hidden layers, by default 2
  - hidden_layer_sizes (Union[int, list[int]]) – Size of each hidden layer, by default [10, 10]
  - activations (List[nn.Module], optional) – List of activation functions to use, by default [nn.ReLU(), nn.ReLU()]
  - learn_nu (bool, optional) – If True, the exponent \(\nu\) is also learned, by default True
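A hedged sketch of a CustomNet with non-default widths and activations; as with TauNet above, the forward-call shape is assumed rather than documented.

    import torch
    import torch.nn as nn
    from drdmannturb import CustomNet

    # Three hidden layers of different widths, each with its own activation.
    net = CustomNet(
        n_layers=3,
        hidden_layer_sizes=[16, 32, 16],
        activations=[nn.ReLU(), nn.GELU(), nn.ReLU()],
        learn_nu=True,
    )

    k = torch.randn(100, 3)  # hypothetical batch of wavevectors
    tau = net(k)             # assumed to return tau(k) per wavevector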
- class drdmannturb.SimpleNN(nlayers: int = 2, inlayer: int = 3, hlayer: int = 3, outlayer: int = 3)
A simple feed-forward neural network consisting of n layers with ReLU activations. The weights are initialized by default to random noise of magnitude 1e-9.
- Parameters:
  - nlayers (int, optional) – Number of layers to use, by default 2
  - inlayer (int, optional) – Number of input features, by default 3
  - hlayer (int, optional) – Width of the hidden layers, by default 3
  - outlayer (int, optional) – Number of output features, by default 3
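A sketch of the SimpleNN building block on its own, assuming it maps tensors with inlayer features to tensors with outlayer features; the call convention is an assumption.

    import torch
    from drdmannturb import SimpleNN

    # Documented defaults: 2 layers, 3 input features, hidden width 3,
    # 3 output features.
    block = SimpleNN(nlayers=2, inlayer=3, hlayer=3, outlayer=3)

    x = torch.randn(8, 3)  # hypothetical batch of 8 three-dimensional inputs
    y = block(x)           # assumed output shape: (8, 3)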
- class drdmannturb.Rational(learn_nu: bool = True)
Learnable rational kernel; a neural network that learns the rational function
\[\tau(\boldsymbol{k})=\frac{T|\boldsymbol{a}|^{\nu-\frac{2}{3}}}{\left(1+|\boldsymbol{a}|^2\right)^{\nu / 2}}, \quad \boldsymbol{a}=\boldsymbol{a}(\boldsymbol{k}),\]
specifically, the neural network part of the augmented wavevector
\[\mathrm{NN}(\operatorname{abs}(\boldsymbol{k})).\]
- Parameters:
  - learn_nu (bool, optional) – Indicates whether or not the exponent \(\nu\) should also be learned, by default True
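A sketch of the Rational module in isolation, assuming its forward pass evaluates the rational function above on a batch of augmented wavevectors and returns one value per sample; this calling convention is an assumption, not part of the documented signature.

    import torch
    from drdmannturb import Rational

    rational = Rational(learn_nu=True)

    # Hypothetical batch of 100 augmented wavevectors a(k).
    a = torch.randn(100, 3)
    tau = rational(a)  # assumed to return one lifetime value per sample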
- class drdmannturb.CustomMLP(hlayers: list[int], activations: list[Module], inlayer: int = 3, outlayer: int = 3)
Feed-forward neural network with variable layer widths and activation functions. Useful for DNN configurations and for experimenting with different activation functions.
- Parameters:
  - hlayers (list) – List specifying the widths of hidden layers in the NN
  - activations (list[nn.Module]) – List specifying the activation functions for each hidden layer
  - inlayer (int, optional) – Number of input features, by default 3
  - outlayer (int, optional) – Number of features to output, by default 3
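A hedged sketch of a CustomMLP with per-layer widths and activations; the lengths of hlayers and activations are assumed to match, and the forward-call shape is assumed.

    import torch
    import torch.nn as nn
    from drdmannturb import CustomMLP

    # Two hidden layers of widths 32 and 16 with different activations,
    # mapping 3 input features to 3 output features.
    mlp = CustomMLP(
        hlayers=[32, 16],
        activations=[nn.ReLU(), nn.Tanh()],
        inlayer=3,
        outlayer=3,
    )

    x = torch.randn(8, 3)
    y = mlp(x)  # assumed output shape: (8, 3)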