User guides#
Installation#
To use magnet, you will need the following Python packages:
torch
torch_geometric
meshio
vtk
gmsh
numpy, scipy
matplotlib
networkx
metis-python
scikit-learn
They are automatically installed when running
$ pip install .
Warning
If you intend to use the Lymph interface, to be able to call Python from MATLAB, you will need a compatible Python version. See the Lymph section for more details.
To avoid conflicts between different packages, it is suggested to create a new virtual environment:
$ python -m venv .myvenv
You will also need to install METIS locally. See metispy and METIS for more details.
Note
If you are not able to import metispy, try setting the environment variable ‘METIS_DLL’ to the exact path of the Python shared library file metis.dll in the __init__.py of the package.
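As a minimal sketch, the variable can also be set from Python before importing the package. The path below is hypothetical; replace it with the actual location of the METIS shared library on your system.

```python
import os

# Hypothetical path: replace with the actual location of the METIS
# shared library (e.g. metis.dll on Windows, libmetis.so on Linux).
os.environ['METIS_DLL'] = '/usr/local/lib/libmetis.so'
```

Note that the variable must be set before the first import of metispy, otherwise it has no effect.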
Test cases#
This package comes with a set of examples extracted from the test cases of the Magnet white paper.
They can be found in the folder examples. They can be easily run in Google Colab as shown in examples/python/examples.ipynb.
Dataset creation#
magnet provides 2 ways of creating datasets: using the built-in generate2D module, or using the create_dataset() function.
In the first case, simply call a generate function specifying the number of each type of mesh to be included in the dataset.
from magnet.generate2D import generate_2D_dataset
generate_2D_dataset(200, 200, 200, 200, 'datasets', 'training_dataset')
This will create a folder ‘datasets/training_dataset’ containing all the generated meshes, named progressively starting from ‘mesh0.vtk’, a summary of the dataset properties, and a .npz file containing the mesh graph data.
If instead you want to use other meshes, you first need to put them in a single folder, following the same naming scheme as before (progressively from ‘mesh0.vtk’), then call create_dataset(). This will create the .npz file and a summary file, as before.
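As an illustration of the naming scheme (this is just a sketch, not part of the magnet API), the expected file names can be generated as follows:

```python
# Meshes must be numbered progressively, starting from 'mesh0.vtk'
n_meshes = 3
filenames = [f'mesh{i}.vtk' for i in range(n_meshes)]
print(filenames)  # ['mesh0.vtk', 'mesh1.vtk', 'mesh2.vtk']
```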
from magnet.io import create_dataset
create_dataset('datasets/mydatasetfolder', n_meshes=100)
GNN Training#
To train a GNN, you need two datasets: a training dataset and a validation dataset.
Once they have been created, load them using load_dataset() or load_graph_dataset().
from magnet.io import load_graph_dataset
tr_set = load_graph_dataset('datasets/training_dataset')
val_set = load_graph_dataset('datasets/validation_dataset')
Then, initialize the GNN, e.g. using one of the predefined models.
from magnet import aggmodels
GNNtest = aggmodels.SageBase2D(64, 32, 3, 2).to(aggmodels.DEVICE)
Note
When initializing a GNN, always use to(DEVICE): all operations are carried out on the GPU (if CUDA is available), since this is faster.
To start the training, call the train_GNN() method, specifying the number of epochs, the batch size and the learning rate.
GNNtest.train_GNN(tr_set, val_set, epochs=300, batch_size=4, learning_rate=1e-5)
During training, log messages will describe the training progress.
When training is completed, by default a plot displaying the training and validation loss functions and a log file with a summary of the training are saved.
To save the trained model as a state dictionary, call save_model().
GNNtest.save_model('models/SageBase2D_training_test.pt')
Mesh agglomeration#
To agglomerate a single mesh, first load it using load_mesh():
from magnet.io import load_mesh
mesh = load_mesh('datasets/mesh.vtk')
Note
If you intend to use the agglomerated mesh for numerical solvers, it is important to correctly extract the boundary elements and tags of the original mesh. To see how to do it, read the detailed documentation of load_mesh().
Then, initialize the agglomeration model you intend to use:
from magnet import aggmodels
kmeans = aggmodels.KMEANS()
To agglomerate the mesh, you then have to call the agglomerate() method.
For example, to agglomerate the mesh by bisecting it recursively 7 times, for a total of 128 agglomerated elements, you would use:
agg_mesh = kmeans.agglomerate(mesh, mode='Nref', nref=7)
Since agglomerate() has a few different possible options, please check its full documentation for further details.
Finally, you can plot the agglomerated mesh using view() and save it in vtk format using save_mesh().
agg_mesh.view()
agg_mesh.save_mesh('outputs/aggmesh.vtk')
Quality metrics and model comparison#
The AgglomerableMesh class provides some built-in methods to compute quality metrics of an agglomerated mesh: this can be useful to evaluate the performance of a model.
To compute the quality metrics, you can call the respective methods (Circle_Ratio(), Uniformity_Factor(), Volumes_Difference()), or get_quality_metrics() to compute them all together.
You can also do this on an entire dataset at once by using agglomerate_dataset() and get_quality_metrics():
agg_dataset = mymodel.agglomerate_dataset(dataset)
QM = agg_dataset.get_quality_metrics()
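Assuming each metric is returned as a sequence of per-mesh values (an assumption; check the documentation of get_quality_metrics() for the actual return type), dataset-level statistics can then be computed with plain Python, e.g.:

```python
# Hypothetical per-mesh Circle Ratio values for a 3-mesh dataset
circle_ratios = [0.71, 0.68, 0.74]

# Average quality over the dataset
mean_cr = sum(circle_ratios) / len(circle_ratios)
print(f'Mean Circle Ratio: {mean_cr:.2f}')
```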
magnet also provides a compare_quality() method to automatically compare the performance of different models on the same dataset, by first agglomerating it and then computing the quality metrics.
from magnet.io import load_dataset
from magnet import aggmodels
km = aggmodels.KMEANS()
mt = aggmodels.METIS()
dataset = load_dataset('datasets/test_dataset')
dataset.compare_quality([km, mt], mode='Nref', nref=5)