Synapse Segmentation Source Code

Here you can find the source code for our Synapse Detection & Segmentation approach.

The latest version is v0.21, updated 13/02/2015.

v0.21 fixes a bug when compiling with ITK 4.6 and newer.

v0.2 adds anisotropic stack support.

Requirements

  • 64-bit Linux distribution.
  • Large amounts of RAM (the exact amount depends on the size of the stacks).

If you want to build the code yourself (not necessary if you use the pre-compiled binaries below), you will need:

  • CMake (the build steps below use its ccmake interface).
  • ITK (the Insight Toolkit); note that v0.21 fixes a compilation issue with ITK 4.6 and newer.
  • A C++ compiler and make.

Pre-compiled Binaries

You can get the pre-compiled binaries for 64-bit Linux HERE.

Download and Compile

(You don’t need to compile the code if you download the pre-compiled binaries above)

First, download the code HERE and uncompress it.

To generate the necessary binaries:

  1. Create a build folder and ‘cd’ into it.
  2. Run ccmake <path_where_the_code_was_uncompressed>.
  3. Set the build type to RELEASE to generate optimized code.
  4. If needed, specify the folder where ITK can be found.
  5. Run configure/generate, then exit ccmake.
  6. Run make -j and wait until it finishes.
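
For reference, on a typical setup the steps above amount to a command sequence along these lines (the source path is a placeholder):

mkdir build
cd build
ccmake <path_where_the_code_was_uncompressed>
# in ccmake: set the build type to RELEASE (and ITK's location if asked), then configure/generate and quit
make -j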

If compilation fails and the problem persists, please contact us.

Running

There are four main ingredients that the binary needs to train and predict synaptic voxels on stacks:

  • The EM volume/stack and training ground truth
  • Pre-computed channels (Gradient magnitude, Structure Tensor Eigenvalues)
  • Pre-computed orientation estimate (currently done with Hessian eigenvalues)
  • Configuration file, containing the parameters, the paths to the channels and orientation estimates, etc.
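
For orientation, a working directory could look roughly like the sketch below. Only volume.tif and the synapse_features folder are names actually expected by the helper script described later; the other names are purely illustrative.

  volume.tif          the EM stack (8-bit TIF)
  groundtruth.tif     training labels (0 = negative, 255 = positive); illustrative name
  synapse_features/   pre-computed channels and orientation estimate (NRRD files)
  config.cfg          the configuration file passed to ccboost; illustrative name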

We describe them in detail now.

EM Stack and Ground Truth

The EM stack should be in TIF format, and the type should be unsigned char (8-bit).

The ground truth (training) must be a TIF file of the same size as the EM stack. A value of 0 means that a voxel is a negative sample, while 255 means it is positive. Any other value is ignored and not used as a training sample.

You SHOULD be careful with the annotations and make sure that proper ground truth is given to the algorithm. We suggest eroding and dilating the ground truth volume with the binary called BinaryErodeDilateSynapseImageFilter, included in the source package. The syntax is

BinaryErodeDilateSynapseImageFilter <input_gt_stack> <erodeRadius> <dilateRadius> <outputStack>

This helps eliminate mislabeled voxels and favors generalization. A typical dilate radius is 10, and we suggest using an erode radius of at least 1.
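
For instance, with the suggested radii the call would be (the file names here are placeholders):

BinaryErodeDilateSynapseImageFilter groundtruth.tif 1 10 groundtruth_eroded_dilated.tif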

Pre-computed channels & orientation estimate

Our approach needs to load pre-computed channels, which should already be available before calling the binary. To generate a typical set of channels, look at the auxItk/computeSynapseFeatures.py Python script. It expects a stack named “volume.tif” and generates a set of channels in NRRD format in a folder called “synapse_features”. It makes use of binaries created during the compilation of the CMake project.
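
As an illustration (this invocation is an assumption, so check the script itself for the exact arguments and paths it expects), the script would be run from the folder containing volume.tif:

cd <folder_containing_volume.tif>
python <path_to_source>/auxItk/computeSynapseFeatures.py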

IMPORTANT: by default, orientation estimates are computed with the Hessian with sigma = 3.5 voxels. You SHOULD change this value according to the voxel size you are working with. This can be done in computeSynapseFeatures.py.
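
To illustrate the scaling: if the default of sigma = 3.5 voxels was tuned for voxels of roughly 6 nm in (x,y) (an assumption, based on the 6 nm figure in the configuration section below), then a stack with 12 nm voxels would keep the same physical scale with sigma ≈ 3.5 × 6/12 ≈ 1.75 voxels.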

Configuration file

The binary ccboost expects the path of the configuration file as its first argument. The configuration file contains all the parameters needed to train the classifier and to run the prediction on an unseen test volume. An example configuration file named example_config.cfg is provided in the source code, where each option is explained.

Among the parameters that must be specified in the configuration file, pay special attention to:

  • numStumps: the number of stumps for AdaBoost.
  • Supervoxel seed and cubeness: recommended values are seed = 2 and cubeness = 16 for EM volumes with a 6 nm (x,y) voxel size.
  • Polarity specification in the train structure: this is essential to get good results.
  • Anisotropy: zAnisotropyFactor (float) specifies the anisotropy factor (voxel size in z vs. voxel size in (x,y)); see the example below. Currently it must be the same in both the train and test stacks. NOTE that the voxel size information in the TIF or NRRD headers is ignored.
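
For example (illustrative numbers): a stack with voxels of 6 nm in (x,y) and slices spaced 30 nm apart in z would use zAnisotropyFactor = 30 / 6 = 5.0.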

Running ccboost

Once the features are pre-computed and the configuration file is ready, the binary can be run with

ccboost <path_to_config_file>.cfg

It will first load the ground truth and volumes, and then start training the classifier.

Once it is done with training, it will write the learned stumps in a file called <outFileName>-stumps.cfg, which can be re-used for testing later.

After training, it will go through each test volume and generate two output stacks for each, one ending in -min and one ending in -max. The -min one should be discarded, and is only provided for experimental purposes. The correct output to use is -max.

References

Learning Context Cues for Synapse Segmentation

C. J. Becker; K. Ali; G. Knott; P. Fua 

IEEE Transactions on Medical Imaging, 2013, vol. 32, no. 10, pp. 1864–1877. DOI: 10.1109/TMI.2013.2267747.

Learning Context Cues for Synapse Segmentation in EM Volumes

C. J. Becker; K. Ali; G. Knott; P. Fua 

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2012, pp. 585–592. DOI: 10.1007/978-3-642-33415-3_72.

Contact

If you have any questions or bug reports, please send an email to [email protected].