models package
Subpackages
Submodules
models.KANSINupm_v1 module
- class models.KANSINupm_v1.KANSIN(nneuron_sin, sigma=1.0, *args, **kwargs)[source]
Bases: KANupm
- Parameters:
nneuron_sin (int)
sigma (float)
- class models.KANSINupm_v1.SineLayer(input_size, output_size, sigma=1.0)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.KAN_v4_1 module
- class models.KAN_v4_1.ChebyshevLayer(input_dim, output_dim, degree, **kwargs)[source]
Bases: Module
Chebyshev layer for KAN model.
- Parameters:
input_dim (int) – The number of input features.
output_dim (int) – The number of output features.
degree (int) – The degree of the Chebyshev polynomial.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KAN_v4_1.JacobiLayer(input_dim, output_dim, degree, a=1.0, b=1.0)[source]
Bases: Module
Jacobi layer for KAN model.
- Parameters:
input_dim (int) – The number of input features.
output_dim (int) – The number of output features.
degree (int) – The degree of the Jacobi polynomial.
a (float, optional) – The first parameter of the Jacobi polynomial (default: 1.0).
b (float, optional) – The second parameter of the Jacobi polynomial (default: 1.0).
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KAN_v4_1.KAN(input_size, output_size, n_layers, hidden_size, layer_type, model_name='KAN', p_dropouts=0.0, device='cpu', **layer_kwargs)[source]
Bases: Module
KAN (Kolmogorov-Arnold Network) model for regression tasks. This model is based on https://arxiv.org/abs/2404.19756, inspired by the Kolmogorov-Arnold representation theorem.
- Parameters:
input_size (int) – The number of input features.
output_size (int) – The number of output features.
n_layers (int) – The number of hidden layers.
hidden_size (int) – The number of neurons in the hidden layers.
layer_type (nn.Module) – The type of layer to use in the model. It can be one of the following: JacobiLayer, ChebyshevLayer.
model_name (str) – The name of the model.
p_dropouts (float, optional) – The dropout probability (default: 0.0).
device (torch.device, optional) – The device where the model is loaded (default: gpu if available).
**layer_kwargs – Additional keyword arguments to pass to the layer type. For example, the order of the Taylor series or the degree of the Chebyshev polynomial.
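Example
A minimal construction sketch (illustrative, not from the source docstring); the dimensions are placeholders, and degree is forwarded to the Chebyshev layer through **layer_kwargs:
>>> from models.KAN_v4_1 import KAN, ChebyshevLayer
>>> model = KAN(
>>>     input_size=3,
>>>     output_size=1,
>>>     n_layers=2,
>>>     hidden_size=16,
>>>     layer_type=ChebyshevLayer,
>>>     model_name="kan_example",
>>>     p_dropouts=0.1,
>>>     device="cpu",
>>>     degree=5,  # forwarded to ChebyshevLayer via **layer_kwargs
>>> )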
- classmethod create_optimized_model(train_dataset, eval_dataset, optuna_optimizer, **kwargs)[source]
Create an optimized KAN model using Optuna. The model is trained on the training dataset and the metric to optimize is computed with the evaluation dataset. If the parameters from the optimizer are a tuple, the function will optimize the parameter. If the parameter is a single value, it will be fixed during optimization.
- Parameters:
train_dataset – The training dataset.
eval_dataset – The evaluation dataset.
optuna_optimizer (OptunaOptimizer) – The optimizer to use for optimization.
kwargs – Additional keyword arguments.
- Returns:
The optimized model and the optimization parameters.
- Return type:
Tuple [KAN, Dict]
Example
>>> from pyLOM.NN import KAN, OptunaOptimizer
>>> # Split the dataset
>>> train_dataset, eval_dataset = dataset.get_splits([0.8, 0.2])
>>>
>>> # Define the optimization parameters
>>> optimization_params = {
>>>     "lr": (0.00001, 0.1),
>>>     "batch_size": (10, 64),
>>>     "hidden_size": (10, 40),  # optimizable parameter
>>>     "n_layers": (1, 4),
>>>     "print_eval_rate": 2,
>>>     "epochs": 10,  # non-optimizable parameter
>>>     "lr_gamma": 0.98,
>>>     "lr_step_size": 3,
>>>     "model_name": "kan_test_optuna",
>>>     'device': device,
>>>     "layer_type": (pyLOM.NN.ChebyshevLayer, pyLOM.NN.JacobiLayer),
>>>     "layer_kwargs": {
>>>         "degree": (3, 10),
>>>     },
>>> }
>>>
>>> # Define the optimizer
>>> optimizer = OptunaOptimizer(
>>>     optimization_params=optimization_params,
>>>     n_trials=5,
>>>     direction="minimize",
>>>     pruner=optuna.pruners.MedianPruner(n_startup_trials=5, n_warmup_steps=5, interval_steps=1),
>>>     save_dir=None,
>>> )
>>>
>>> # Create the optimized model
>>> model, optimization_params = KAN.create_optimized_model(train_dataset, eval_dataset, optimizer)
>>>
>>> # Fit the model
>>> model.fit(train_dataset, eval_dataset, **optimization_params)
- fit(train_dataset, eval_dataset, batch_size=32, epochs=100, lr=0.001, lr_gamma=1, lr_scheduler_step=1, print_eval_rate=2, loss_fn=MSELoss(), save_logs_path=None)[source]
Train the model using the provided training dataset. The model is trained using the Adam optimizer with the provided learning rate and learning rate decay factor.
- Parameters:
train_dataset – The training dataset.
eval_dataset – The evaluation dataset.
epochs (int) – The number of epochs to train the model.
batch_size (int) – The batch size.
lr (float) – The learning rate for the Adam optimizer.
lr_gamma (float) – The learning rate decay factor.
lr_scheduler_step (int) – The number of epochs to reduce the learning rate.
print_eval_rate (int, optional) – The model will be evaluated every print_eval_rate epochs and the losses will be printed. If set to 0, nothing will be printed (default: 2).
loss_fn (torch.nn.Module, optional) – The loss function (default: nn.MSELoss()).
save_logs_path (str, optional) – Path to save the training and evaluation losses (default: None).
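Example
An illustrative training call under the defaults above; train_dataset and eval_dataset are assumed to be prepared elsewhere:
>>> model.fit(
>>>     train_dataset,
>>>     eval_dataset,
>>>     batch_size=32,
>>>     epochs=100,
>>>     lr=1e-3,
>>>     lr_gamma=0.98,        # multiply the learning rate by 0.98 ...
>>>     lr_scheduler_step=3,  # ... every 3 epochs
>>>     print_eval_rate=2,
>>> )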
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- classmethod load(path, device=device(type='cpu'))[source]
Loads a model from a checkpoint file.
- Parameters:
path (str) – Path to the checkpoint file.
device (torch.device) – Device where the model is loaded (default: cpu).
- Returns:
The loaded KAN model with the trained weights.
- Return type:
model (KAN)
- predict(X, return_targets=False, **kwargs)[source]
Predict the target values for the input data. The dataset is loaded to a DataLoader with the provided keyword arguments. The model is set to evaluation mode and the predictions are made using the input data. The output can be rescaled using the dataset scaler.
- Parameters:
X – The dataset whose target values are to be predicted using the input data.
rescale_output (bool) – Whether to rescale the output with the scaler of the dataset (default: True).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
return_targets (bool) – Whether to also return the true target values (default: False).
- Returns:
The predictions and the true target values.
- Return type:
Tuple [np.ndarray, np.ndarray]
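Example
A usage sketch; test_dataset is a placeholder dataset carrying the scaler used at training time:
>>> preds, targets = model.predict(test_dataset, return_targets=True, batch_size=64, shuffle=False)
>>> mse = ((preds - targets) ** 2).mean()  # both are np.ndarray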
- save(path, save_only_model=False)[source]
Save the model to a checkpoint file.
- Parameters:
path (str) – Path to save the model. It can be either a path to a directory or a file name. If it is a directory, the model will be saved with a filename that includes the number of epochs trained.
save_only_model (bool, optional) – Whether to only save the model, or also the optimizer and scheduler. Note that when this is true, you won’t be able to resume training from checkpoint (default: False).
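Example
A save/load round trip under the two signatures above (the path is illustrative):
>>> import torch
>>> model.save("kan_example.pt", save_only_model=True)  # weights only; training cannot resume
>>> restored = KAN.load("kan_example.pt", device=torch.device("cpu"))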
- class models.KAN_v4_1.TaylorLayer(input_dim, out_dim, order, addbias=True)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.KANupm_v4 module
- class models.KANupm_v4.ChebyLayer(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4.ChebyLayer_ant(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4.ChebyLayer_v2(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4.JacobiLayer(input_dim, output_dim, degree, a=1.0, b=1.0)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4.KANupm(ninput, noutput, nlayers, hidden_neur, layer_type, name_model, id='kan', dropout_p=0, device=device(type='cpu'), **layer_kwargs)[source]
Bases: Module
- Parameters:
ninput (int)
noutput (int)
nlayers (int)
hidden_neur (int)
name_model (str)
dropout_p (float)
device (device)
- exam(data, **kwargs)[source]
Receives a tensor with test data and results. Returns a NumPy array of predictions and another of true values for comparison.
- Parameters:
data (torch.Tensor) – Tensor with (ninput + noutput) columns and n rows, where n is the number of cases to evaluate.
- Returns:
Dataset with ninput columns as inputs and noutput columns as outputs.
- Return type:
Torch Dataset
- fit(train_dataset, eval_dataset, epochs, batch, lr, lr_gamma, lr_scheduler_step, print_eval_rate=2, criterion=MSELoss(), folder_save=None)[source]
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- classmethod load(path, device=device(type='cpu'))[source]
Load the model from a checkpoint file.
- Parameters:
path (str) – Path to the checkpoint file.
device (torch.device) – The device where the model is loaded.
- Returns:
The loaded model with the restored weights.
- Return type:
KANupm
- predict(X, rescale_output=True, return_targets=False, **kwargs)[source]
Predict the target values for the input data. The dataset is loaded to a DataLoader with the provided keyword arguments. The model is set to evaluation mode and the predictions are made using the input data. The output can be rescaled using the dataset scaler.
- Parameters:
X (BaseDataset) – The dataset whose target values are to be predicted using the input data.
rescale_output (bool) – Whether to rescale the output with the scaler of the dataset (default: True).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
return_targets (bool) – Whether to also return the true target values (default: False).
- Returns:
The predictions and the true target values.
- Return type:
Tuple [np.ndarray, np.ndarray]
- class models.KANupm_v4.MakeDataset_kan(inputs, outputs)[source]
Bases: Dataset
- Parameters:
inputs (Tensor)
outputs (Tensor)
- class models.KANupm_v4.TaylorLayer(input_dim, out_dim, order, addbias=True)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.KANupm_v4_1 module
- class models.KANupm_v4_1.ChebyLayer_ant(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_1.ChebyLayer_v2(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_1.ChebyLayer_v3(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_1.JacobiLayer(input_dim, output_dim, degree, a=1.0, b=1.0)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_1.KANupm(ninput, noutput, nlayers, hidden_neur, layer_type, name_model, id='kan', dropout_p=0, device=device(type='cpu'), **layer_kwargs)[source]
Bases: Module
KAN (Kolmogorov-Arnold Network) model for regression tasks. This model is based on https://arxiv.org/abs/2404.19756, inspired by the Kolmogorov-Arnold representation theorem.
- Parameters:
ninput (int) – The number of input features.
noutput (int) – The number of output features.
nlayers (int) – The number of hidden layers.
hidden_neur (int) – The number of neurons in the hidden layers.
layer_type (nn.Module) – The type of layer to use in the model. It can be one of the following: JacobiLayer, ChebyshevLayer.
name_model (str) – The name of the model.
dropout_p (float, optional) – The dropout probability (default: 0.0).
device (torch.device, optional) – The device where the model is loaded (default: gpu if available).
**layer_kwargs – Additional keyword arguments to pass to the layer type. For example, the order of the Taylor series or the degree of the Chebyshev polynomial.
- exam(data, **kwargs)[source]
Receives a tensor with test data and results. Returns a NumPy array of predictions and another of true values for comparison.
- Parameters:
data (torch.Tensor) – Tensor with (ninput + noutput) columns and n rows, where n is the number of cases to evaluate.
- Returns:
Dataset with ninput columns as inputs and noutput columns as outputs.
- Return type:
Torch Dataset
- fit(train_dataset, eval_dataset, epochs, batch_size, lr, lr_gamma, lr_scheduler_step, print_eval_rate=2, criterion=MSELoss(), save_logs_path=None)[source]
Train the model using the provided training dataset. The model is trained using the Adam optimizer with the provided learning rate and learning rate decay factor.
- Parameters:
train_dataset – The training dataset.
eval_dataset – The evaluation dataset.
epochs (int) – The number of epochs to train the model.
batch_size (int) – The batch size.
lr (float) – The learning rate for the Adam optimizer.
lr_gamma (float) – The learning rate decay factor.
lr_scheduler_step (int) – The number of epochs to reduce the learning rate.
print_eval_rate (int, optional) – The model will be evaluated every print_eval_rate epochs and the losses will be printed. If set to 0, nothing will be printed (default: 2).
criterion (torch.nn.Module, optional) – The loss function (default: nn.MSELoss()).
save_logs_path (str, optional) – Path to save the training and evaluation losses (default: None).
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- classmethod load(path, device=device(type='cpu'))[source]
Loads a model from a checkpoint file.
- Parameters:
path (str) – Path to the checkpoint file.
device (torch.device) – Device where the model is loaded (default: cpu).
- Returns:
The loaded KAN model with the trained weights.
- Return type:
model (KAN)
- predict(X, rescale_output=True, return_targets=False, **kwargs)[source]
Predict the target values for the input data. The dataset is loaded to a DataLoader with the provided keyword arguments. The model is set to evaluation mode and the predictions are made using the input data. The output can be rescaled using the dataset scaler.
- Parameters:
X (BaseDataset) – The dataset whose target values are to be predicted using the input data.
rescale_output (bool) – Whether to rescale the output with the scaler of the dataset (default: True).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
return_targets (bool) – Whether to also return the true target values (default: False).
- Returns:
The predictions and the true target values.
- Return type:
Tuple [np.ndarray, np.ndarray]
- print_hours(start, end)[source]
Receives two time stamps and prints the elapsed time interval in sexagesimal (hours:minutes:seconds) format.
- save(path)[source]
Save the model to a checkpoint file.
- Parameters:
path (str) – Path to save the model. It can be either a path to a directory or a file name. If it is a directory, the model will be saved with a filename that includes the number of epochs trained.
save_only_model (bool, optional) – Whether to only save the model, or also the optimizer and scheduler. Note that when this is true, you won’t be able to resume training from checkpoint (default: False).
- class models.KANupm_v4_1.MakeDataset_kan(inputs, outputs)[source]
Bases: Dataset
- Parameters:
inputs (Tensor)
outputs (Tensor)
- class models.KANupm_v4_1.TaylorLayer(input_dim, out_dim, order, addbias=True)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.KANupm_v4_2 module
- class models.KANupm_v4_2.ChebyLayer_ant(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_2.ChebyLayer_v2(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_2.ChebyLayer_v3(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_2.JacobiLayer(input_dim, output_dim, degree, a=1.0, b=1.0)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v4_2.KANupm(ninput, noutput, nlayers, hidden_neur, layer_type, name_model, id='kan', dropout_p=0, device=device(type='cpu'), **layer_kwargs)[source]
Bases: Module
KAN (Kolmogorov-Arnold Network) model for regression tasks. This model is based on https://arxiv.org/abs/2404.19756, inspired by the Kolmogorov-Arnold representation theorem.
- Parameters:
ninput (int) – The number of input features.
noutput (int) – The number of output features.
nlayers (int) – The number of hidden layers.
hidden_neur (int) – The number of neurons in the hidden layers.
layer_type (nn.Module) – The type of layer to use in the model. It can be one of the following: JacobiLayer, ChebyshevLayer.
name_model (str) – The name of the model.
dropout_p (float, optional) – The dropout probability (default: 0.0).
device (torch.device, optional) – The device where the model is loaded (default: gpu if available).
**layer_kwargs – Additional keyword arguments to pass to the layer type. For example, the order of the Taylor series or the degree of the Chebyshev polynomial.
- exam(data, **kwargs)[source]
Receives a tensor with test data and results. Returns a NumPy array of predictions and another of true values for comparison.
- Parameters:
data (torch.Tensor) – Tensor with (ninput + noutput) columns and n rows, where n is the number of cases to evaluate.
- Returns:
Dataset with ninput columns as inputs and noutput columns as outputs.
- Return type:
Torch Dataset
- fit(train_dataset, eval_dataset, epochs, batch_size, lr, scheduler_type='StepLR', lr_kwargs=None, print_eval_rate=2, criterion=MSELoss(), save_logs_path=None)[source]
Train the model using the provided dataset, with support for several strategies for dynamically adjusting the learning rate.
- Parameters:
train_dataset (Dataset) – The training dataset.
eval_dataset (Dataset) – The evaluation dataset.
epochs (int) – The total number of epochs to train the model.
batch_size (int) – The batch size for training and evaluation.
lr (float) – The initial learning rate for the optimizer.
scheduler_type (str, optional) – Type of scheduler used to dynamically adjust the learning rate. Available options: "StepLR" (reduces the learning rate every fixed number of epochs), "ReduceLROnPlateau" (reduces the learning rate when the loss stops improving), "OneCycleLR" (adjusts the learning rate following a single cycle over the whole training) (default: "StepLR").
lr_kwargs (dict, optional) – Dictionary with parameters specific to the selected scheduler. Examples: for StepLR, {"step_size": int, "gamma": float}; for ReduceLROnPlateau, {"mode": str, "factor": float, "patience": int}; for OneCycleLR, {"anneal_strategy": str, "div_factor": float} (default: {}).
print_eval_rate (int, optional) – How often the model is evaluated, in epochs. If set to 0, no periodic evaluation is performed (default: 2).
criterion (nn.Module, optional) – The loss function to optimize (default: nn.MSELoss()).
save_logs_path (str, optional) – Path where the training and evaluation losses are saved as .npy files. If None, nothing is saved (default: None).
- Returns:
Trains the model and, optionally, saves the losses.
- Return type:
None
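Example
An illustrative sketch of two of the scheduler configurations named above (all values are placeholders):
>>> # StepLR: halve the learning rate every 10 epochs
>>> model.fit(train_dataset, eval_dataset, epochs=100, batch_size=32, lr=1e-3,
>>>           scheduler_type="StepLR", lr_kwargs={"step_size": 10, "gamma": 0.5})
>>> # ReduceLROnPlateau: back off by half when the loss stalls for 5 epochs
>>> model.fit(train_dataset, eval_dataset, epochs=100, batch_size=32, lr=1e-3,
>>>           scheduler_type="ReduceLROnPlateau",
>>>           lr_kwargs={"mode": "min", "factor": 0.5, "patience": 5})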
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- classmethod load(path, device=device(type='cpu'))[source]
Loads a model from a checkpoint file.
- Parameters:
path (str) – Path to the checkpoint file.
device (torch.device) – Device where the model is loaded (default: cpu).
- Returns:
The loaded KAN model with the trained weights.
- Return type:
model (KAN)
- predict(X, rescale_output=True, return_targets=False, **kwargs)[source]
Predict the target values for the input data. The dataset is loaded to a DataLoader with the provided keyword arguments. The model is set to evaluation mode and the predictions are made using the input data. The output can be rescaled using the dataset scaler.
- Parameters:
X (BaseDataset) – The dataset whose target values are to be predicted using the input data.
rescale_output (bool) – Whether to rescale the output with the scaler of the dataset (default: True).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
return_targets (bool) – Whether to also return the true target values (default: False).
- Returns:
The predictions and the true target values.
- Return type:
Tuple [np.ndarray, np.ndarray]
- print_hours(start, end)[source]
Receives two time stamps and prints the elapsed time interval in sexagesimal (hours:minutes:seconds) format.
- save(path)[source]
Save the model to a checkpoint file.
- Parameters:
path (str) – Path to save the model. It can be either a path to a directory or a file name. If it is a directory, the model will be saved with a filename that includes the number of epochs trained.
save_only_model (bool, optional) – Whether to only save the model, or also the optimizer and scheduler. Note that when this is true, you won’t be able to resume training from checkpoint (default: False).
- class models.KANupm_v4_2.MakeDataset_kan(inputs, outputs)[source]
Bases: Dataset
- Parameters:
inputs (Tensor)
outputs (Tensor)
- class models.KANupm_v4_2.TaylorLayer(input_dim, out_dim, order, addbias=True)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.KANupm_v6_1 module
- class models.KANupm_v6_1.ChebyLayer_ant(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v6_1.ChebyLayer_v2(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v6_1.ChebyLayer_v3(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v6_1.JacobiLayer(input_dim, output_dim, degree, a=1.0, b=1.0)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v6_1.KANupm(ninput, noutput, nlayers, hidden_neur, layer_type, model_name, dropout_p=0, device=device(type='cpu'), intro=True, **layer_kwargs)[source]
Bases: Module
KAN (Kolmogorov-Arnold Network) model for regression tasks. This model is based on https://arxiv.org/abs/2404.19756, inspired by the Kolmogorov-Arnold representation theorem.
- Parameters:
ninput (int) – The number of input features.
noutput (int) – The number of output features.
nlayers (int) – The number of hidden layers.
hidden_neur (int) – The number of neurons in the hidden layers.
layer_type (nn.Module) – The type of layer to use in the model. It can be one of the following: JacobiLayer, ChebyshevLayer.
model_name (str) – The name of the model.
dropout_p (float, optional) – The dropout probability (default: 0.0).
device (torch.device, optional) – The device where the model is loaded (default: gpu if available).
**layer_kwargs – Additional keyword arguments to pass to the layer type. For example, the order of the Taylor series or the degree of the Chebyshev polynomial.
intro (bool)
- count_trainable_params()[source]
Returns the total number of trainable parameters of the model.
- Returns:
The number of trainable parameters.
- Return type:
int
- exam(data, rescale=True, **kwargs)[source]
Receives a set with test data and results, together with the scaling references. Returns a NumPy array of predictions and another of true values for comparison. ONLY WORKS WITH JARAIZ-VERSION DATA.
- Parameters:
data (dict) – Data dictionary with the keys: data (torch.Tensor): tensor with the raw data; scaled (torch.Tensor): tensor with the scaled data; mins (torch.Tensor): one-dimensional tensor with the minimum values of the data columns; maxs (torch.Tensor): one-dimensional tensor with the maximum values of the data columns.
rescale (bool) – Whether to rescale the output using the reference values in data.
dataloader_params – Configuration for loading the data into a DataLoader (to avoid memory issues with large files).
- Returns:
mse: mean squared error between the prediction and the true results. outs: the model prediction, rescaled or not depending on the corresponding input parameter. targ: the reference values, rescaled or not depending on the corresponding input parameter.
- Return type:
mse
- fit(train_dataset, eval_dataset, batch_size=32, epochs=100, lr=0.001, optimizer=<class 'torch.optim.adam.Adam'>, scheduler_type='StepLR', opti_kwargs={}, lr_kwargs={}, print_eval_rate=2, loss_fn=MSELoss(), save_logs_path=None, intro=True, max_norm_grad=inf, **kwargs)[source]
Train the model using the provided training dataset. The model is trained using the Adam optimizer with the provided learning rate and learning rate decay factor.
- Parameters:
train_dataset – The training dataset.
eval_dataset – The evaluation dataset.
epochs (int) – The number of epochs to train the model.
batch_size (int) – The batch size.
lr (float) – The learning rate for the Adam optimizer.
optimizer (torch.optim, optional) – The optimizer to use. All PyTorch optimizers except AdaDelta are available (default: optim.Adam).
scheduler_type (str, optional) – Type of scheduler used to dynamically adjust the learning rate. Available options: "StepLR" (decreases the learning rate after a specified number of epochs), "ReduceLROnPlateau" (reduces the learning rate when the loss stops improving), "OneCycleLR" (adjusts the learning rate following a single cycle throughout training) (default: "StepLR").
lr_kwargs (dict, optional) – Dictionary with specific parameters for the selected scheduler. Examples: for StepLR, {"step_size": int, "gamma": float}; for ReduceLROnPlateau, {"mode": str, "factor": float, "patience": int}; for OneCycleLR, {"anneal_strategy": str, "div_factor": float} (default: {}).
print_eval_rate (int, optional) – The model will be evaluated every print_eval_rate epochs, and the losses will be printed. If set to 0, no evaluations will be displayed (default: 2).
loss_fn (torch.nn.Module, optional) – The loss function to be optimized (default: nn.MSELoss()).
save_logs_path (str, optional) – Path to save the training and evaluation losses as .npy files. If set to None, no logs will be saved (default: None).
intro (bool, optional) – Whether to print model training information (default: True).
max_norm_grad (float, optional) – The maximum gradient norm allowed. If set to float('inf'), no restriction is applied (default: float('inf')).
kwargs (dict, optional) – Additional keyword arguments to be passed to the DataLoader. These can be used to configure DataLoader parameters (see the PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Whether to shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use for data loading (default: 0).
- pin_memory (bool, optional): Whether to use pinned memory (default: True).
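Example
An illustrative call combining a non-default optimizer, the OneCycleLR scheduler, and gradient clipping (all values are placeholders):
>>> import torch.optim as optim
>>> model.fit(
>>>     train_dataset,
>>>     eval_dataset,
>>>     batch_size=64,
>>>     epochs=50,
>>>     lr=1e-3,
>>>     optimizer=optim.AdamW,
>>>     scheduler_type="OneCycleLR",
>>>     lr_kwargs={"anneal_strategy": "cos", "div_factor": 25.0},
>>>     max_norm_grad=1.0,  # clip gradients to unit norm
>>> )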
- fit_scored(train_dataset, eval_dataset, loss_fn, batch_size=32, epochs=100, lr=0.001, optimizer=<class 'torch.optim.adam.Adam'>, scheduler_type='StepLR', opti_kwargs={}, lr_kwargs={}, print_eval_rate=2, save_logs_path=None, intro=True, max_norm_grad=inf, **kwargs)[source]
Train the model using the provided training dataset (input, score, target). The model is trained using the Adam optimizer with the provided learning rate and learning rate decay factor.
- Parameters:
train_dataset – The training dataset. Must be made by MakeDatasetScored_kan
eval_dataset – The evaluation dataset. Must be made by MakeDatasetScored_kan
epochs (int) – The number of epochs to train the model.
batch_size (int) – The batch size.
lr (float) – The learning rate for the Adam optimizer.
optimizer (torch.optim, optional) – The optimizer to use. All PyTorch optimizers except AdaDelta are available (default: optim.Adam).
scheduler_type (str, optional) – Type of scheduler used to dynamically adjust the learning rate. Available options: "StepLR" (decreases the learning rate after a specified number of epochs), "ReduceLROnPlateau" (reduces the learning rate when the loss stops improving), "OneCycleLR" (adjusts the learning rate following a single cycle throughout training) (default: "StepLR").
lr_kwargs (dict, optional) – Dictionary with specific parameters for the selected scheduler. Examples: for StepLR, {"step_size": int, "gamma": float}; for ReduceLROnPlateau, {"mode": str, "factor": float, "patience": int}; for OneCycleLR, {"anneal_strategy": str, "div_factor": float} (default: {}).
print_eval_rate (int, optional) – The model will be evaluated every print_eval_rate epochs, and the losses will be printed. If set to 0, no evaluations will be displayed (default: 2).
loss_fn (torch.nn.Module, optional) – The loss function to be optimized (default: nn.MSELoss()).
save_logs_path (str, optional) – Path to save the training and evaluation losses as .npy files. If set to None, no logs will be saved (default: None).
intro (bool, optional) – Whether to print model training information (default: True).
max_norm_grad (float, optional) – The maximum gradient norm allowed. If set to float('inf'), no restriction is applied (default: float('inf')).
kwargs (dict, optional) – Additional keyword arguments to be passed to the DataLoader. These can be used to configure DataLoader parameters (see the PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Whether to shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use for data loading (default: 0).
- pin_memory (bool, optional): Whether to use pinned memory (default: True).
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- get_weights(layer_idx, neuron_idx, print_info=False, ploted=False)[source]
Retrieve the weights of a specific neuron in a given layer.
- Parameters:
layer_idx (int) – The index of the layer to inspect.
neuron_idx (int) – The index of the neuron in that layer.
print_info (bool) – If True, prints weight information.
ploted (bool) – If True, plots the corresponding Chebyshev function.
- Returns:
Tensor containing the weights.
- Return type:
neuron_weights (torch.Tensor)
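Example
An illustrative inspection of the first neuron of the first layer:
>>> w = model.get_weights(layer_idx=0, neuron_idx=0, print_info=True, ploted=True)
>>> print(w.shape)  # coefficient tensor of that neuron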
- classmethod load(path, device=device(type='cpu'))[source]
Loads a model from a checkpoint file.
- Parameters:
path (str) – Path to the checkpoint file.
device (torch.device) – Device where the model is loaded (default: cpu).
- Returns:
The loaded KAN model with the trained weights.
- Return type:
model (KAN)
- plot_structure(save_path=None, labels=False, figsize=(12, 6))[source]
- Parameters:
save_path (str)
labels (bool)
figsize (tuple)
- predict(X, rescale_output=True, return_targets=False, **kwargs)[source]
Predict the target values for the input data. The dataset is loaded to a DataLoader with the provided keyword arguments. The model is set to evaluation mode and the predictions are made using the input data. The output can be rescaled using the dataset scaler.
- Parameters:
X (BaseDataset) – The dataset whose target values are to be predicted using the input data.
rescale_output (bool) – Whether to rescale the output with the scaler of the dataset (default: True).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
return_targets (bool) – Whether to also return the true target values (default: False).
- Returns:
The predictions and the true target values.
- Return type:
Tuple [np.ndarray, np.ndarray]
- print_hours(start, end, epochs=None)[source]
Receives two time stamps and prints the elapsed time interval in sexagesimal (hours:minutes:seconds) format.
- save(path, version=0)[source]
Save the model to a checkpoint file.
- Parameters:
path (str) – Path to save the model. It can be either a path to a directory or a file name. If it is a directory, the model will be saved with a filename that includes the number of epochs trained.
save_only_model (bool, optional) – Whether to only save the model, or also the optimizer and scheduler. Note that when this is true, you won’t be able to resume training from checkpoint (default: False).
version (int) – An integer which describes the model’s version. Zero means no version.
- class models.KANupm_v6_1.MakeDatasetScored_kan(inputs, score, outputs)[source]
Bases: Dataset
- Parameters:
inputs (Tensor)
score (Tensor)
outputs (Tensor)
- class models.KANupm_v6_1.MakeDataset_kan(inputs, outputs)[source]
Bases: Dataset
- Parameters:
inputs (Tensor)
outputs (Tensor)
- class models.KANupm_v6_1.TaylorLayer(input_dim, out_dim, order, addbias=True)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.KANupm_v8 module
- class models.KANupm_v8.ChebyLayer_ant(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v8.ChebyLayer_v2(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v8.ChebyLayer_v3(input_dim, output_dim, degree)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v8.JacobiLayer(input_dim, output_dim, degree, a=1.0, b=1.0)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v8.KANupm(ninput, noutput, nlayers, hidden_neur, layer_type, model_name, dropout_p=0, device=device(type='cpu'), intro=True, degree=5)[source]
Bases: Module
KAN (Kolmogorov-Arnold Network) model for regression tasks. This model is based on https://arxiv.org/abs/2404.19756, inspired by the Kolmogorov-Arnold representation theorem.
- Parameters:
ninput (int) – The number of input features.
noutput (int) – The number of output features.
nlayers (int) – The number of hidden layers.
hidden_neur (int) – The number of neurons in the hidden layers.
layer_type (nn.Module) – The type of layer to use in the model. It can be one of the following: JacobiLayer, ChebyshevLayer.
model_name (str) – The name of the model.
dropout_p (float, optional) – The dropout probability (default: 0.0).
device (torch.device, optional) – The device where the model is loaded (default: gpu if available).
**layer_kwargs – Additional keyword arguments to pass to the layer type. For example, the order of the Taylor series or the degree of the Chebyshev polynomial.
intro (bool)
- count_trainable_params()[source]
Returns the total number of trainable parameters of the model.
- Returns:
The number of trainable parameters.
- Return type:
int
- exam(data, **kwargs)[source]
Receives a set with test data and results, together with the scaling references. Returns a NumPy array of predictions and another of true values for comparison. ONLY WORKS WITH JARAIZ-VERSION DATA.
- Parameters:
data (dict) – Data dictionary with the keys: tensor (torch.Tensor): tensor with the raw data; scaled (torch.Tensor): tensor with the scaled data; mins (torch.Tensor): one-dimensional tensor with the minimum values of the data columns; maxs (torch.Tensor): one-dimensional tensor with the maximum values of the data columns; info (dict): dictionary with additional information about the data.
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
- Returns:
mse: mean squared error between the prediction and the true results. outs: the model prediction, rescaled or not depending on the corresponding input parameter. targ: the reference values, rescaled or not depending on the corresponding input parameter.
- Return type:
mse
- fit(train_dataset, eval_dataset, batch_size=32, epochs=100, lr=0.001, optimizer=<class 'torch.optim.adam.Adam'>, scheduler_type='StepLR', opti_kwargs={}, lr_kwargs={}, print_eval_rate=2, loss_fn=MSELoss(), save_logs_path=None, intro=True, max_norm_grad=inf, **kwargs)[source]
Train the model using the provided training dataset. The model is trained using the Adam optimizer with the provided learning rate and learning rate decay factor.
- Parameters:
train_dataset – The training dataset.
eval_dataset – The evaluation dataset.
epochs (int) – The number of epochs to train the model.
batch_size (int) – The batch size.
lr (float) – The learning rate for the Adam optimizer.
optimizer (torch.optim, optional) – The optimizer to use. All PyTorch optimizers except AdaDelta are available (default: optim.Adam).
scheduler_type (str, optional) – Type of scheduler used to dynamically adjust the learning rate. Available options: "StepLR" (decreases the learning rate after a specified number of epochs), "ReduceLROnPlateau" (reduces the learning rate when the loss stops improving), "OneCycleLR" (adjusts the learning rate following a single cycle throughout training) (default: "StepLR").
lr_kwargs (dict, optional) – Dictionary with specific parameters for the selected scheduler. Examples: for StepLR, {"step_size": int (example: (epochs_per_step * len(train_dataset)) // batch_size), "gamma": float}; for ReduceLROnPlateau, {"mode": str, "factor": float, "patience": int}; for OneCycleLR, {"anneal_strategy": str, "div_factor": float} (default: {}).
print_eval_rate (int, optional) – The model will be evaluated every print_eval_rate epochs, and the losses will be printed. If set to 0, no evaluations will be displayed (default: 2).
loss_fn (torch.nn.Module, optional) – The loss function to be optimized (default: nn.MSELoss()).
save_logs_path (str, optional) – Path to save the training and evaluation losses as .npy files. If set to None, no logs will be saved (default: None).
intro (bool, optional) – Whether to print model training information (default: True).
max_norm_grad (float, optional) – The maximum gradient norm allowed. If set to float('inf'), no restriction is applied (default: float('inf')).
kwargs (dict, optional) – Additional keyword arguments to be passed to the DataLoader. These can be used to configure DataLoader parameters (see the PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Whether to shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use for data loading (default: 0).
- pin_memory (bool, optional): Whether to use pinned memory (default: True).
- fit_scored(train_dataset, eval_dataset, batch_size=32, epochs=100, lr=0.001, optimizer=<class 'torch.optim.adam.Adam'>, scheduler_type='StepLR', opti_kwargs={}, lr_kwargs={}, print_eval_rate=2, save_logs_path=None, intro=True, max_norm_grad=inf, **kwargs)[source]
Train the model using the provided training dataset (input, score, target). The model is trained using the Adam optimizer with the provided learning rate and learning rate decay factor.
- Parameters:
train_dataset – The training dataset. Must be made by MakeDatasetScored_kan
eval_dataset – The evaluation dataset. Must be made by MakeDatasetScored_kan
epochs (int) – The number of epochs to train the model.
batch_size (int) – The batch size.
lr (float) – The learning rate for the Adam optimizer.
optimizer (torch.optim, optional) – The optimizer to use. All PyTorch optimizers except AdaDelta are available (default: optim.Adam).
scheduler_type (str, optional) – Type of scheduler used to dynamically adjust the learning rate. Available options: "StepLR" (decreases the learning rate after a specified number of epochs), "ReduceLROnPlateau" (reduces the learning rate when the loss stops improving), "OneCycleLR" (adjusts the learning rate following a single cycle throughout training) (default: "StepLR").
lr_kwargs (dict, optional) – Dictionary with specific parameters for the selected scheduler. Examples: for StepLR, {"step_size": int, "gamma": float}; for ReduceLROnPlateau, {"mode": str, "factor": float, "patience": int}; for OneCycleLR, {"anneal_strategy": str, "div_factor": float} (default: {}).
print_eval_rate (int, optional) – The model will be evaluated every print_eval_rate epochs, and the losses will be printed. If set to 0, no evaluations will be displayed (default: 2).
save_logs_path (str, optional) – Path to save the training and evaluation losses as .npy files. If set to None, no logs will be saved (default: None).
intro (bool, optional) – Whether to print model training information (default: True).
max_norm_grad (float, optional) – The maximum gradient norm allowed. If set to float('inf'), no restriction is applied (default: float('inf')).
kwargs (dict, optional) – Additional keyword arguments to be passed to the DataLoader. These can be used to configure DataLoader parameters (see the PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Whether to shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use for data loading (default: 0).
- pin_memory (bool, optional): Whether to use pinned memory (default: True).
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- get_weights(layer_idx, neuron_idx, print_info=False, ploted=False)[source]
Retrieve the weights of a specific neuron in a given layer.
- Parameters:
layer_idx (int) – The index of the layer to inspect.
neuron_idx (int) – The index of the neuron in that layer.
print_info (bool) – If True, prints weight information.
ploted (bool) – If True, plots the corresponding Chebyshev function.
- Returns:
Tensor containing the weights.
- Return type:
neuron_weights (torch.Tensor)
- classmethod load(path, device=device(type='cpu'))[source]
Loads a model from a checkpoint file.
- Parameters:
path (str) – Path to the checkpoint file.
device (torch.device) – Device where the model is loaded (default: cpu).
- Returns:
The loaded KAN model with the trained weights.
- Return type:
model (KAN)
- plot_structure(save_path=None, labels=False, figsize=(12, 6))[source]
- Parameters:
save_path (str)
labels (bool)
figsize (tuple)
- predict(X, rescale_output=True, return_targets=False, **kwargs)[source]
Predict the target values for the input data. The dataset is loaded to a DataLoader with the provided keyword arguments. The model is set to evaluation mode and the predictions are made using the input data. The output can be rescaled using the dataset scaler.
- Parameters:
X (BaseDataset) – The dataset whose target values are to be predicted using the input data.
rescale_output (bool) – Whether to rescale the output with the scaler of the dataset (default: True).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
return_targets (bool) – Whether to also return the true target values (default: False).
- Returns:
The predictions and the true target values.
- Return type:
Tuple [np.ndarray, np.ndarray]
- print_hours(start, end, epochs=None)[source]
Receives two time stamps and prints the elapsed time interval in sexagesimal (hours:minutes:seconds) format.
- save(path, version=0)[source]
Save the model to a checkpoint file.
- Parameters:
path (str) – Path to save the model. It can be either a path to a directory or a file name. If it is a directory, the model will be saved with a filename that includes the number of epochs trained.
save_only_model (bool, optional) – Whether to only save the model, or also the optimizer and scheduler. Note that when this is true, you won’t be able to resume training from checkpoint (default: False).
version (int) – An integer which describes the model’s version. Zero means no version.
- class models.KANupm_v8.MakeDatasetScored_kan_ant(inputs, score, outputs)[source]
Bases: Dataset
- Parameters:
inputs (Tensor)
score (Tensor)
outputs (Tensor)
- class models.KANupm_v8.NormPLoss[source]
Bases: Module
- forward(input, score, target, p=2)[source]
Computes the p-norm of the score-weighted loss vector.
- Parameters:
input (torch.Tensor) – Model predictions.
target (torch.Tensor) – True values.
score (torch.Tensor) – Importance weights.
p (float, optional) – Order of the p-norm (default: 2).
- Returns:
Loss based on the p-norm.
- Return type:
torch.Tensor
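One plausible reading of this docstring as code, i.e. the p-norm of the residual vector with each entry weighted by its importance score (an illustrative sketch inferred from the signature, not the module's exact implementation):
>>> import torch
>>> def norm_p_loss(input, score, target, p=2):
>>>     # ||score * (input - target)||_p
>>>     return torch.norm(score * (input - target), p=p)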
- class models.KANupm_v8.SineLayer(input_size, output_size, sigma=1.0)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class models.KANupm_v8.TaylorLayer(input_dim, out_dim, degree, addbias=True)[source]
Bases:
Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.VAE module
- class models.VAE.VAE(in_channels, latent_dim, hidden_dims=None, act_function=ELU(alpha=1.0), **kwargs)[source]
Bases:
Module
Variational autoencoder designed to reduce the dimensionality of mesh data by encoding its coordinates into a latent space, and able to reconstruct them from that latent space. Adapted from the base VAE in https://github.com/AntixK/PyTorch-VAE.
- Parameters:
in_channels (int) – Number of coordinate points in the airfoil data.
latent_dim (int) – Dimensionality of the latent space.
hidden_dims (List[int]) – List of hidden dimensions for the encoder and decoder. Assumed symmetrical.
act_function (Callable) – Activation function to use for the encoder and decoder.
- decode(z)[source]
Maps the given latent codes onto the coordinate space.
- Parameters:
z (Tensor) – Latent code [B x D_latent]
- Returns:
Reconstructed input [B x D_out]
- Return type:
Tensor
- encode(x)[source]
Encodes the input by passing through the encoder network and returns the latent codes.
- Parameters:
x (Tensor) – Input tensor to the encoder [N x D_in]
- Returns:
Tuple[Tensor, Tensor] – The latent codes: the mean and log variance of the latent distribution.
- forward(input, **kwargs)[source]
Forward pass through the network.
- Parameters:
input (Tensor) – Input tensor to the VAE [N x D_in]
- Returns:
List containing the reconstructed input, original input, mean, and log variance
- Return type:
List[Tensor]
- loss_function(pred, *args, **kwargs)[source]
Computes the VAE loss function using the Kullback-Leibler divergence.
- Parameters:
pred (List[Tensor]) – List containing the reconstructed input, original input, mean, and log variance
*args – Additional arguments
**kwargs – Additional keyword arguments, including ‘weight’ for Beta-VAE
- Returns:
Dictionary containing the total loss, reconstruction loss, and KL divergence loss
- Return type:
dict
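The objective described here is the usual Beta-VAE loss: a reconstruction term plus a weighted KL divergence to N(0, I). A generic sketch (the reduction, weighting, and dictionary keys are illustrative, not necessarily those of models.VAE):

    import torch
    import torch.nn.functional as F

    def vae_loss(recons, input, mu, logvar, weight=1.0):
        # Reconstruction error plus weighted KL divergence to N(0, I).
        recons_loss = F.mse_loss(recons, input)
        kld = torch.mean(
            -0.5 * torch.sum(1 + logvar - mu ** 2 - logvar.exp(), dim=1)
        )
        total = recons_loss + weight * kld
        return {"loss": total, "reconstruction_loss": recons_loss, "kld": kld}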
- reparameterize(mu, logvar)[source]
Reparameterization trick: sample from N(mu, var) using samples from N(0, 1).
- Parameters:
mu (Tensor) – Mean of the latent Gaussian [B x D_latent]
logvar (Tensor) – Log variance of the latent Gaussian [B x D_latent]
- Returns:
Sampled latent code [B x D_latent]
- Return type:
Tensor
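The trick itself is standard: draw eps ~ N(0, I), then shift and scale it so the sampling step stays differentiable with respect to mu and logvar. A minimal sketch:

    import torch

    def reparameterize(mu, logvar):
        # std = exp(0.5 * logvar); z = mu + std * eps with eps ~ N(0, I).
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std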
- sample(num_samples, current_device, std_coef=1.0, **kwargs)[source]
Samples from the latent space and returns the corresponding reconstructed input.
- Parameters:
num_samples (int) – Number of samples
current_device (int) – Device to run the model
std_coef (float, optional) – Standard deviation coefficient for sampling. Default is 1.0.
- Returns:
Sampled and decoded tensor
- Return type:
Tensor
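Putting the pieces together, a hedged end-to-end sketch (the constructor arguments, input shape, and device index are assumptions):

    import torch
    from models.VAE import VAE

    vae = VAE(in_channels=199, latent_dim=8)   # illustrative sizes
    x = torch.randn(16, 199)                   # assumed shape [N, in_channels]
    recons, original, mu, logvar = vae(x)      # forward returns all four tensors
    samples = vae.sample(4, current_device=0)  # decode 4 random codes (CUDA assumed)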
models.elm module
models.mlp module
- class models.mlp.MLP(input_size, output_size, n_layers, hidden_size, p_dropouts=0.0, activation=<function relu>, device=device(type='cpu'), seed=None, **kwargs)[source]
Bases:
Module
,Model
Multi-layer perceptron model for regression tasks. The model is based on the PyTorch library torch.nn (detailed documentation can be found at https://pytorch.org/docs/stable/nn.html).
- Parameters:
input_size (int) – Number of input features.
output_size (int) – Number of output features.
n_layers (int) – Number of hidden layers.
hidden_size (int) – Number of neurons in each hidden layer.
p_dropouts (float, optional) – Dropout probability for the hidden layers (default: 0.0).
checkpoint_file (str, optional) – Path to a checkpoint file to load the model from (default: None).
activation (torch.nn.Module, optional) – Activation function to use (default: torch.nn.functional.relu).
device (torch.device, optional) – Device to use (default: torch.device("cpu")).
seed (int, optional) – Seed to use for reproducibility (default: None).
kwargs (Dict) – Additional keyword arguments.
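A construction sketch using only the documented arguments (the sizes are illustrative):

    import torch
    from models.mlp import MLP

    mlp = MLP(
        input_size=10,
        output_size=2,
        n_layers=3,
        hidden_size=64,
        p_dropouts=0.1,
        activation=torch.nn.functional.relu,
        device=torch.device("cpu"),
        seed=42,
    )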
- classmethod create_optimized_model(train_dataset, eval_dataset, optuna_optimizer, **kwargs)[source]
Create an optimized model using Optuna. The model is trained on the training dataset and evaluated on the validation dataset.
- Parameters:
train_dataset (BaseDataset) – The training dataset.
eval_dataset (BaseDataset) – The evaluation dataset.
optuna_optimizer (OptunaOptimizer) – The optimizer to use for optimization.
kwargs – Additional keyword arguments.
- Returns:
The optimized model and the optimization parameters.
- Return type:
Tuple[Model, Dict]
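A hedged call sketch; building the OptunaOptimizer and the datasets is package-specific and assumed to have happened elsewhere:

    # Hyperparameter search driven by an OptunaOptimizer instance.
    model, best_params = MLP.create_optimized_model(
        train_dataset, eval_dataset, optuna_optimizer
    )
    print(best_params)  # the parameters selected by the optimizer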
- fit(train_dataset, eval_dataset=None, epochs=100, lr=0.001, lr_gamma=1, lr_scheduler_step=1, loss_fn=MSELoss(), optimizer_class=<class 'torch.optim.adam.Adam'>, scheduler_class=<class 'torch.optim.lr_scheduler.StepLR'>, print_rate_batch=0, print_rate_epoch=1, **kwargs)[source]
Fit the model to the training data. If eval_dataset is provided, the model will be evaluated on this set after each epoch.
- Parameters:
train_dataset (BaseDataset) – Training dataset to fit the model.
eval_dataset (BaseDataset, optional) – Evaluation dataset to evaluate the model after each epoch (default: None).
epochs (int, optional) – Number of epochs to train the model (default: 100).
lr (float, optional) – Learning rate for the optimizer (default: 0.001).
lr_gamma (float, optional) – Multiplicative factor of learning rate decay (default: 1).
lr_scheduler_step (int, optional) – Number of epochs between learning rate decays (default: 1).
loss_fn (torch.nn.Module, optional) – Loss function to optimize (default: torch.nn.MSELoss()).
optimizer_class (torch.optim.Optimizer, optional) – Optimizer class to use (default: torch.optim.Adam).
scheduler_class (torch.optim.lr_scheduler._LRScheduler, optional) – Learning rate scheduler class to use. If None, no scheduler will be used (default: torch.optim.lr_scheduler.StepLR).
print_rate_batch (int, optional) – Print the loss every print_rate_batch batches; if set to 0, nothing is printed (default: 0).
print_rate_epoch (int, optional) – Print the loss every print_rate_epoch epochs; if set to 0, nothing is printed (default: 1).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
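An end-to-end training sketch under the defaults above (train_dataset and eval_dataset are assumed BaseDataset instances; the values are illustrative):

    import torch

    mlp.fit(
        train_dataset,
        eval_dataset=eval_dataset,
        epochs=200,
        lr=1e-3,
        lr_gamma=0.9,          # decay factor applied by StepLR
        lr_scheduler_step=10,  # decay the learning rate every 10 epochs
        print_rate_epoch=10,   # report the loss every 10 epochs
        batch_size=64,         # forwarded to the DataLoader
    )
    preds, targets = mlp.predict(eval_dataset, return_targets=True)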
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- classmethod load(path, device=device(type='cpu'))[source]
Load the model from a checkpoint file. Does not require the model to be instantiated.
- predict(X, rescale_output=True, return_targets=False, **kwargs)[source]
Predict the target values for the input data. The dataset is loaded to a DataLoader with the provided keyword arguments. The model is set to evaluation mode and the predictions are made using the input data. The output can be rescaled using the dataset scaler.
- Parameters:
X (BaseDataset) – The dataset whose target values are to be predicted using the input data.
rescale_output (bool) – Whether to rescale the output with the scaler of the dataset (default: True).
return_targets (bool) – Whether to also return the true target values (default: False).
kwargs (dict, optional) – Additional keyword arguments to pass to the DataLoader. Can be used to set the parameters of the DataLoader (see PyTorch documentation at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
- batch_size (int, optional): Batch size (default: 32).
- shuffle (bool, optional): Shuffle the data (default: True).
- num_workers (int, optional): Number of workers to use (default: 0).
- pin_memory (bool, optional): Pin memory (default: True).
- Returns:
The predictions and the true target values.
- Return type:
Tuple[np.ndarray, np.ndarray]
models.model_interface module
- class models.model_interface.Model[source]
Bases:
ABC
- classmethod create_optimized_model(train_dataset, eval_dataset, optuna_optimizer)[source]
Create an optimized model using Optuna.
- Parameters:
train_dataset (BaseDataset) – The training dataset.
eval_dataset (Optional[BaseDataset]) – The evaluation dataset.
optuna_optimizer (OptunaOptimizer) – The optimizer to use for optimization.
- Returns:
The optimized model and the best parameters found by the optimizer.
- Return type:
Tuple[Model, Dict]
- abstract fit(train_dataset, eval_set=None, **kwargs)[source]
Fit the model to the training data.
- Parameters:
train_dataset (BaseDataset) – The training dataset.
eval_set (Optional[BaseDataset]) – The evaluation dataset.
**kwargs – Additional parameters for the fit method.
- abstract classmethod load(path)[source]
Load a model from a file.
- Parameters:
path (str) – The path to load the model from.
- Returns:
The loaded model.
- Return type:
Model
- abstract predict(X, rescale_output=True, **kwargs)[source]
Predict the target values for the input data.
- Parameters:
X (BaseDataset) – The input data.
rescale_output (bool, optional) – Whether to rescale the output data. Default is True.
**kwargs – Additional parameters for the predict method.
- Returns:
The predicted target values.
- Return type:
np.ndarray
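Concrete models are expected to implement this interface. A skeletal subclass sketch (the toy logic, dataset iteration, and persistence format are all illustrative):

    import numpy as np
    from models.model_interface import Model

    class ConstantModel(Model):
        """Toy subclass: predicts the mean of the training targets."""

        def __init__(self):
            self.mean_ = None

        def fit(self, train_dataset, eval_set=None, **kwargs):
            # Assumes the dataset yields (input, target) pairs.
            targets = np.stack([y for _, y in train_dataset])
            self.mean_ = targets.mean(axis=0)

        def predict(self, X, rescale_output=True, **kwargs):
            return np.tile(self.mean_, (len(X), 1))

        @classmethod
        def load(cls, path):
            # Illustrative persistence: a single numpy array on disk.
            model = cls()
            model.mean_ = np.load(path)
            return model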