magnet.aggmodels.rlpartitioner.WeakContigDRLCP#

class magnet.aggmodels.rlpartitioner.WeakContigDRLCP(hidden_units: int, lin_hidden_units: int, num_features: int)#

Bases: DRLCoarsePartioner

Constructor

__init__(hidden_units: int, lin_hidden_units: int, num_features: int)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

Methods

__init__(hidden_units, lin_hidden_units, ...)

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(graph)

Define the computation performed at every call.

get_sample(mesh[, randomRotate])

Create a graph data structure sample from a mesh.

normalize(x)

Normalize the data before feeding it to the GNN.

reward_function(new_state, old_state, action)

__init__(hidden_units: int, lin_hidden_units: int, num_features: int)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(graph)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
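The reason for preferring the instance call can be illustrated with a minimal, framework-free sketch (the `TinyModule` class below is purely illustrative and not part of magnet or PyTorch; it only mimics the call protocol of `nn.Module`):

```python
class TinyModule:
    """Mimics nn.Module's call protocol: __call__ runs registered hooks,
    while calling forward() directly silently skips them."""

    def __init__(self):
        self._forward_hooks = []

    def register_forward_hook(self, hook):
        self._forward_hooks.append(hook)

    def forward(self, x):
        # The computation itself lives here, as in the method above.
        return x * 2

    def __call__(self, x):
        out = self.forward(x)
        for hook in self._forward_hooks:
            hook(self, x, out)  # hooks fire only on the instance call
        return out

m = TinyModule()
seen = []
m.register_forward_hook(lambda mod, inp, out: seen.append(out))
m(3)          # runs forward *and* the hook
m.forward(3)  # runs forward only; the hook never fires
```

This is why user code should write `model(graph)` rather than `model.forward(graph)`.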

get_sample(mesh: Mesh, randomRotate: bool = True)#

Create a graph data structure sample from a mesh.

This is used for both training and running the GNN.

Parameters:
  • mesh (Mesh) – Mesh to be sampled.

  • randomRotate (bool, optional) – If True, randomly rotate the mesh (default is True).

  • selfloop (bool, optional) – If True, add ones on the diagonal of the adjacency matrix, i.e. self-loops on the graph (default is False).

Returns:

Graph data representing the mesh.

Return type:

Data

Notes

The two tensors x and edge_index are both on DEVICE (cuda, if available).
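The general shape of such a sample can be sketched with NumPy (the names `x` and `edge_index` follow the PyG `Data` convention suggested by the return type; the two-element mesh and the `add_self_loops` helper are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical 2-element mesh: per-element features as graph node features,
# face adjacency between elements as graph edges.
x = np.array([[0.25, 0.25],   # features of element 0
              [0.75, 0.75]])  # features of element 1

# Undirected adjacency stored as directed edge pairs in COO format,
# i.e. an edge_index of shape (2, num_edges).
edge_index = np.array([[0, 1],
                       [1, 0]])

def add_self_loops(edge_index, num_nodes):
    """Append (i, i) edges, i.e. ones on the adjacency diagonal."""
    loops = np.stack([np.arange(num_nodes), np.arange(num_nodes)])
    return np.concatenate([edge_index, loops], axis=1)

ei = add_self_loops(edge_index, num_nodes=2)  # shape (2, 4)
```

In the actual method the analogous tensors are `torch.Tensor`s placed on DEVICE, as noted above.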

normalize(x)#

Normalize the data before feeding it to the GNN.

Parameters:

x (torch.Tensor) – The data to be normalized.

Returns:

The normalized data.

Return type:

torch.Tensor

Notes

Normalization consists of rotating the mesh so that its widest direction aligns with the x axis, then rescaling the coordinates to have zero mean and unit variance.

The output is returned on the same torch device as the output of get_sample, i.e. DEVICE.
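A minimal NumPy sketch of the normalization described in the note (assuming the principal axis of the coordinate cloud is what "widest direction" refers to; `normalize_coords` is an illustrative stand-in, not the library method):

```python
import numpy as np

def normalize_coords(x):
    """Rotate so the widest direction (principal axis) aligns with the
    x axis, then rescale each coordinate to zero mean and unit variance."""
    centered = x - x.mean(axis=0)
    # Principal directions of the point cloud from its covariance matrix;
    # eigh returns eigenvalues in ascending order, so reverse the columns
    # to put the widest direction first.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    rotated = centered @ vecs[:, ::-1]
    return (rotated - rotated.mean(axis=0)) / rotated.std(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2)) * np.array([5.0, 1.0])  # elongated cloud
out = normalize_coords(pts)  # zero mean, unit variance per coordinate
```

The real method operates on `torch.Tensor`s instead of NumPy arrays, but the geometric idea is the same.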

reward_function(new_state: Data, old_state: Data, action: int, disc_coeff: float = 0) float#

Inherited Methods

A2C_train(training_dataset[, batch_size, ...])

ac_eval(graph[, perc])

agglomerate(mesh[, mode, nref, mult_factor])

Agglomerate a mesh.

agglomerate_dataset(dataset, **kwargs)

Agglomerate all meshes in a dataset.

bisect(mesh)

Bisect the mesh once.

bisection_Nref(mesh, Nref[, warm_start])

Bisect the mesh recursively a set number of times.

bisection_mult_factor(mesh, mult_factor[, ...])

Bisect a mesh until the agglomerated elements are small enough.

bisection_segregated(mesh, mult_factor[, subset])

Bisect heterogeneous mesh until elements are small enough.

change_vert(graph, action)

In place change of vertex to other subgraph.

coarsen(mesh, subset[, mode, nref, mult_factor])

Coarsen a subregion of the mesh.

compute_episode_length(graph)

cut(graph)

get_number_of_parameters()

Get total number of parameters of the GNN.

load_model(model_path)

Load model from state dictionary.

loss_function(output, graph)

Loss function used during training.

multi_eval(graph[, step, perc])

multilevel_bisection(mesh[, refiner, ...])

save_model(output_path)

Save current model to state dictionary.

train_GNN(training_dataset, ...[, ...])

Train the Graph Neural Network.

update_state(graph, action)

volumes(graph)