Once you are on the cluster node and have sourced the python virtual environment, launch the canvas:
tomwer canvas
from IPython.display import Video
Video("video/start_tomwer_canvas.mp4", embed=True, height=500)
There are two use cases for now:
For this we need to use the bliss(HDF5) - nxtomomill (alias h52nx-nxtomomill) widget.

Then use a data list or a data selector widget to see the output (and make sure the conversion went well).

Note 1: keep in mind that this can also be done manually from the command line interface (CLI) of nxtomomill. For more details please see the h52nx nxtomomill tutorial.
Note 2: to convert from EDF to NXtomo you can do the same operation using the edf2nx-nxtomomill widget. Have a look at https://tomotools.gitlab-pages.esrf.fr/nxtomomill/tutorials/edf2nx.html for CLI or advanced usage.
Note 3: if no configuration file is provided then the default parameters will be used. Otherwise you can provide a configuration file. For details, see the edf2nx widget video tutorial, which explains how to provide such a file (the GUI mechanism is the same for EDF and HDF5).
from IPython.display import Video
Video("video/h52nx_example.mp4", embed=True, height=500)
copy data from `` to a local workspace (like /tmp_14_days/{your_name}) or reuse them if they already exist
launch tomwer canvas
convert the bliss `.h5` file to an NXtomo (`.nx`) file using the appropriate widget
From raw data we will need:
reduced dark / flat widget
to compute reduced darks and flats. The NXtomo is expected to contain dark and flat frames; in order to apply the flat-field correction we first need to reduce them.
center of rotation (see 'cor_search' notebook for more information).
For the training we will use the 'sino-coarse-to-fine' algorithm, which provides a good estimation of the COR. In the video we will do it manually, but we could lock the algorithm to avoid having to validate the value it finds.
nabu 'slice'
to reconstruct one slice. In the example we will also ask for Paganin phase retrieval to get a better reconstruction
data viewer
to display the reconstructed slice (and browse the dataset)
note: a widget can be created from the left panel (with a mouse left click on the widget) or by creating a link (left click on a node output, then release it downstream)
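To make the reduced dark / flat step above more concrete, here is a minimal numpy sketch (a toy illustration only, not tomwer's actual implementation): a "reduced" dark or flat summarises a stack of acquired frames into a single frame, which is then used in the flat-field correction of each projection.

```python
import numpy as np

# Toy illustration (not tomwer's implementation): reduce a stack of dark
# and flat frames to one frame each, then apply flat-field correction as
#     corrected = (projection - reduced_dark) / (reduced_flat - reduced_dark)

rng = np.random.default_rng(0)

darks = rng.normal(100.0, 2.0, size=(10, 4, 4))   # 10 dark frames, 4x4 detector
flats = rng.normal(1000.0, 5.0, size=(10, 4, 4))  # 10 flat frames

# reduce each stack to a single frame (median is robust to outliers)
reduced_dark = np.median(darks, axis=0)
reduced_flat = np.median(flats, axis=0)

projection = rng.normal(600.0, 5.0, size=(4, 4))  # one raw projection
corrected = (projection - reduced_dark) / (reduced_flat - reduced_dark)

print(corrected.shape)  # one flat-field-corrected 4x4 frame
```

The median used here is just one reasonable reduction; the actual method and options are configured in the widget.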
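The intuition behind sinogram-based COR search can also be sketched with a toy numpy example (this is not the actual 'sino-coarse-to-fine' implementation used by tomwer/nabu): for parallel-beam data, projections taken 180 degrees apart are mirror images of each other about the rotation axis, so cross-correlating one with the horizontal flip of the other reveals the axis position.

```python
import numpy as np

# Toy COR estimation (not the 'sino-coarse-to-fine' algorithm): a point at
# detector position x in proj(0 deg) appears at 2*cor - x in proj(180 deg).

N = 101            # detector width in pixels
true_cor = 56.0    # rotation axis position (pixel index), off-centre on purpose

x = np.arange(N)
# object profile at 0 deg: a Gaussian bump away from the axis
proj_0 = np.exp(-0.5 * ((x - 70.0) / 4.0) ** 2)
# at 180 deg the bump is mirrored about the axis: centre -> 2*cor - 70
proj_180 = np.exp(-0.5 * ((x - (2 * true_cor - 70.0)) / 4.0) ** 2)

# cross-correlate proj_0 with the horizontally flipped proj_180
corr = np.correlate(proj_0, proj_180[::-1], mode="full")
shift = np.argmax(corr) - (N - 1)        # lag of the correlation peak
estimated_cor = ((N - 1) + shift) / 2.0  # recover the axis position

print(estimated_cor)  # matches true_cor for this noise-free toy profile
```

Real algorithms refine this idea over full sinograms, with sub-pixel search and noise handling, which is why the widget exposes several methods and options.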
from IPython.display import Video
Video("video/create_dummy_workflow.mp4", embed=True, height=500)
Now that you have an input and a basic workflow, you can process it.
For this, simply select the NXtomo from the scan selector created during step 1. Processing should start; wait until all processes are finished.
from IPython.display import Video
Video("video/execute_dummy_workflow.mp4", embed=True, height=500)
Note: you can follow progress from the 'object supervisor' at the bottom of the window. If it is not visible you can display / hide it from the view / object supervisor option, as shown in the video.
from IPython.display import Video
Video("video/object_supervisor_display.mp4", embed=True, height=500)
Reconstruct a slice of the NXtomo created during exercise A
Once you are happy with your workflow you can save it (ctrl+s) and load it again next time.
Note: the dataset used will also be saved, so this can be a good way to share data processing with colleagues or to report a bug that is easy to reproduce.