DY06_FoV2.9 B2_recon EV rotate 6.5 degrees.txm

Author: Niels Jeppesen (niejep@dtu.dk)

This notebook reads the TXM data format. In our experience, NIfTI is much faster to read than TXM with the packages we use here.

We will be using the structure-tensor package, a 2D and 3D structure tensor package for Python implemented by Vedrana A. Dahl and Niels Jeppesen.

Installation

To run this notebook there are some prerequisites that must be installed in your Python environment. We generally recommend creating a new Python environment to run the notebooks in, for instance using conda create -n <env_name> python=3.7. This way you avoid conflicts with packages in your existing environment.

Install dependencies (for this notebook) using pip install numpy scipy scikit-image matplotlib nibabel tqdm structure-tensor. To start the notebook we will of course also need jupyter. Install the dxchange package from GitHub, see note below.

Note: The current release of the dxchange package has a bug in the TXM reader. This is fixed in the GitHub master branch, so for now the package must be installed directly from GitHub using pip install git+https://github.com/data-exchange/dxchange.git. The package's setup.py does not specify its requirements; they are listed in the dxchange requirements.txt instead.

Now, let's go ahead and import the required modules. The structure_tensor_workers.py file should be in the notebook folder.

Load and display data

First, we'll load a sample of the data and some metadata. We will be using the following folder structure:
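A minimal loading sketch is shown below. The file path is hypothetical (adjust it to your folder structure), and it assumes dxchange exposes a read_txm function returning the volume and a metadata dict, as in the GitHub version noted above.

```python
import numpy as np

# Hypothetical path; adjust to match your own folder structure.
TXM_PATH = "data/DY06_FoV2.9 B2_recon EV rotate 6.5 degrees.txm"

def load_txm(path):
    """Read a .txm volume and its metadata using dxchange (assumed API)."""
    import dxchange  # imported lazily; install from GitHub as noted above

    data, metadata = dxchange.read_txm(path)
    return np.asarray(data), metadata
```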

Let's have a quick look at the data to ensure it was loaded correctly.
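One way to sanity-check the load is to plot the central slice along each axis. The volume below is a synthetic stand-in; replace it with the loaded TXM data.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use; not needed in a notebook
import matplotlib.pyplot as plt

# Synthetic stand-in volume (z, y, x); replace with the loaded data.
data = np.random.default_rng(0).normal(size=(64, 64, 64)).astype(np.float32)

# Show the central slice along each axis.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, axis in zip(axes, range(3)):
    ax.imshow(np.take(data, data.shape[axis] // 2, axis=axis), cmap="gray")
    ax.set_title(f"central slice, axis {axis}")
fig.tight_layout()
```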

Crop data

Clearly a large part of the data is not of interest to us, so let's crop out the area of interest. You can cut out any part of the data you want to analyze.
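Cropping is plain NumPy slicing. The bounds below are illustrative; choose them from the slice plots above.

```python
import numpy as np

# Synthetic stand-in volume; replace with the loaded data.
data = np.zeros((100, 120, 140), dtype=np.float32)

# Hypothetical crop bounds (z, y, x); pick the region of interest.
z0, z1 = 10, 90
y0, y1 = 20, 100
x0, x1 = 30, 110

crop = data[z0:z1, y0:y1, x0:x1]
print(crop.shape)  # (80, 80, 80)
```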

Create new shape.

Save as RAW

To work with the data, we will save it as a raw data file.
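A sketch of saving and reloading RAW data follows. RAW files carry no header, so a common convention (assumed here, not prescribed by the source) is to encode dtype and shape in the file name; memory-mapping then lets us work with the file without loading it all at once.

```python
import os
import tempfile

import numpy as np

volume = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# RAW has no header, so record dtype and shape in the name (a convention,
# not a requirement); the reader must know both to reload the data.
path = os.path.join(tempfile.gettempdir(), "crop_2x3x4_float32.raw")
volume.tofile(path)

# Reload by memory-mapping, avoiding a full read into memory.
reloaded = np.memmap(path, dtype=np.float32, mode="r", shape=(2, 3, 4))
```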

Choosing fiber threshold

To choose a good intensity threshold for the fibers we can use a histogram and choose the value manually. Alternatively, we can use a method such as Otsu's. When choosing the threshold, we make sure to include only inside-sample data. This makes it easier to separate foreground (fibers) from background inside the sample.
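The sketch below applies Otsu's method to synthetic bimodal intensities standing in for the inside-sample voxels; with real data you would pass the masked voxel values instead.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)

# Synthetic bimodal intensities: dark background and brighter fibers.
background = rng.normal(50, 5, 5000)
fibers = rng.normal(150, 5, 5000)
inside_sample = np.concatenate([background, fibers])

# Otsu's method picks the threshold separating the two intensity modes.
fiber_threshold = threshold_otsu(inside_sample)
```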

Structure tensor

We set $\sigma$ and $\rho$ based on the size of the fibers that we want to analyze. For more details see the related paper and StructureTensorFiberAnalysisDemo notebook.

Then, we set the block size (crop_size), which will determine the maximum size of the blocks we will partition the data into for the ST calculation. The maximum block size will depend on crop_size, the truncate value used for Gaussian filters (default is 4), and $\rho$ or $\sigma$ (usually $\rho$, because it is largest).
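The arithmetic can be sketched as follows. The scale values are assumptions for illustration; scipy's Gaussian filters use a kernel radius of int(truncate * sigma + 0.5), and each block is padded by that radius on both sides.

```python
# Illustrative scale values; set them for your fiber size.
sigma = 2.0
rho = 5.0
crop_size = 256
truncate = 4  # default truncate value for scipy's Gaussian filters

# Kernel radius used by scipy.ndimage.gaussian_filter.
kernel_radius = int(truncate * max(sigma, rho) + 0.5)

# Each block is padded by the kernel radius on both sides of every axis.
max_block_size = crop_size + 2 * kernel_radius
print(max_block_size)  # 296
```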

We also set the fiber_threshold and specify a list of devices to use for the calculation. The list determines how the blocks will be distributed between devices for calculations. If we have a dual GPU system supporting CUDA, we can specify ['cuda:0', 'cuda:1'], which will distribute blocks evenly between the two GPUs. ['cuda:0', 'cuda:1', 'cuda:1'] would move two thirds to GPU 1, while ['cuda:0', 'cuda:1', 'cpu'] will move one third of the blocks to the CPU. Remember to update the device list to match your system resources. Specifying CUDA devices that are not available will result in exceptions and/or undefined behaviour. If you don't have CUDA GPUs available, just set device = ['cpu'] and specify the number of processes later.
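One way to picture the proportional split described above is a round-robin deal over the device list; this is only a sketch of the resulting shares, not the package's actual scheduler.

```python
from itertools import cycle

# Hypothetical block count; repeating a device in the list gives it
# a proportionally larger share of the blocks.
n_blocks = 12
devices = ['cuda:0', 'cuda:1', 'cuda:1']

assignment = [dev for dev, _ in zip(cycle(devices), range(n_blocks))]
share = {dev: assignment.count(dev) for dev in dict.fromkeys(devices)}
print(share)  # {'cuda:0': 4, 'cuda:1': 8}
```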

The class_vectors are used to segment the voxels based on the orientations of the ST eigenvectors. It is a list of unit vectors, which represent each of the fiber classes. The first vector is considered the primary orientation and will be used for calculating orientation metrics for each voxel. For more details see the related paper and StructureTensorFiberAnalysisDemo notebook.
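The assignment can be sketched as follows: each voxel goes to the class whose unit vector its dominant eigenvector aligns with most, using the absolute dot product since eigenvectors have no sign. The vectors below are illustrative, not values from the source.

```python
import numpy as np

# Hypothetical class unit vectors; the first is the primary orientation.
class_vectors = np.array([
    [1.0, 0.0, 0.0],  # primary fiber orientation
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

# Dominant ST eigenvectors for three example voxels.
eigvecs = np.array([
    [0.98, 0.17, 0.00],
    [0.10, 0.99, 0.05],
    [0.00, 0.00, 1.00],
])

# |dot| ignores eigenvector sign; argmax picks the best-aligned class.
labels = np.argmax(np.abs(eigvecs @ class_vectors.T), axis=1)
print(labels)  # [0 1 2]
```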

Lastly, bins_per_degree determines the number of bins used per degree when aggregating the orientation metrics.

Create valid data mask

To create the mask for valid data we create a mask as before, except that we erode the mask by an amount equal to the kernel radius. This way we "remove" voxels that are affected by voxels without values (black/zero intensity).
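A 2D sketch of the erosion step, using scipy.ndimage on a synthetic mask; the radius value is illustrative and should match the Gaussian kernel radius computed above.

```python
import numpy as np
from scipy import ndimage

# Synthetic valid-data mask: a filled square with a zero border.
mask = np.zeros((32, 32), dtype=bool)
mask[4:28, 4:28] = True

# Erode by the kernel radius so voxels influenced by zero-intensity
# (no-data) regions are excluded. Radius value is illustrative.
kernel_radius = 3
eroded = ndimage.binary_erosion(mask, iterations=kernel_radius)
```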

Creating fiber mask

Now that we have a threshold and data mask, we combine them into a fiber mask, which is True for voxels which contain fibers.
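The combination is a simple logical AND, sketched below on synthetic data; replace the arrays and threshold with the real volume, valid-data mask, and the threshold chosen earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data and valid-data mask; replace with the real volume and mask.
data = rng.uniform(0, 255, size=(16, 16, 16))
valid_mask = np.ones(data.shape, dtype=bool)
valid_mask[:2] = False  # pretend the first slices are invalid

fiber_threshold = 128  # illustrative value; use the threshold chosen above

# Fiber voxels: inside the valid mask AND above the intensity threshold.
fiber_mask = valid_mask & (data > fiber_threshold)
```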

We can plot the mask on top of the data. Here, red marks fiber voxels, while blue marks non-fiber voxels: background and voxels ignored for being too close to the edge of the data.
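One way to produce such an overlay is a semi-transparent RGBA image drawn on top of a grayscale slice, sketched here on synthetic data; replace the slice and mask with a slice of the real volume and fiber mask.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; not needed inside a notebook
import matplotlib.pyplot as plt

# Synthetic slice and fiber mask; replace with real data.
rng = np.random.default_rng(0)
slice_ = rng.uniform(0, 1, size=(64, 64))
fiber_slice = slice_ > 0.5

# RGBA overlay: semi-transparent red for fibers, blue for everything else.
overlay = np.zeros(slice_.shape + (4,))
overlay[fiber_slice] = (1.0, 0.0, 0.0, 0.4)   # red: fiber voxels
overlay[~fiber_slice] = (0.0, 0.0, 1.0, 0.4)  # blue: non-fiber voxels

fig, ax = plt.subplots()
ax.imshow(slice_, cmap="gray")
ax.imshow(overlay)
```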