Using Google Colab:

Open the notebook in Google Colab: https://colab.research.google.com/github/matjesg/deepflash2/blob/master/nbs/deepflash2.ipynb

1. Video

The whole workflow is also explained in a video.

2. Setup Environment

First, you must allow Google Colab to run the notebook from GitHub by accepting the prompt.
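If you want to set up the environment on another machine instead, installing the package via pip is typically sufficient (a minimal sketch; the notebook’s “Set up environment” cell already handles this for you):

    # Install deepflash2 and its dependencies (run in a notebook cell)
    !pip install deepflash2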

3. Connect your Google Drive

It is recommended to connect your runtime to Google Drive, so that your data and results persist between sessions.

You can allow this with ‘y’ for yes or deny it with ‘n’ for no.

Go to the URL presented to you in the “Set up environment” cell. There you can choose the Google Drive account to connect to Google Colab.

After you have successfully connected the accounts, Google presents a one-time authorization code. Copy this code, enter it in the corresponding field in the “Set up environment” cell, and confirm by pressing the Enter key.
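The setup cell uses Colab’s standard Drive API for this step. For reference, mounting the drive manually looks like this:

    # Mount Google Drive into the Colab file system (standard Colab API)
    from google.colab import drive
    drive.mount('/content/drive')  # prompts for the authorization code
    # Your files then appear under /content/drive/MyDrive/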

If you need to restart, click on “Runtime” in the upper toolbar and select “Factory reset runtime”. Then reload the page and start again.

4. Start deepflash2 UI

When you run this cell, the UI opens. First, click “Select Project Folder”; this unfolds your Google Drive main directory.

There you can browse through the folders and select the correct one. After that, hit “Save”. The selected folder is now connected to deepflash2.

The “Select Project Folder” button will change to the name of the selected folder. If you want to change the folder, click this button again.

5. Ground Truth (GT) Estimation

5.1 Expert Annotations

You can estimate the ground truth from images that have been segmented beforehand by different experts. We recommend at least twelve different images annotated by three different experts. After you have selected the desired data, press “Load Data”.

5.2 Ground Truth Estimation

Once the images are loaded, you can start the ground truth estimation by selecting one of the available algorithms. At the time of writing, you can choose between STAPLE and Majority Voting (a code sketch of both follows the recommendations below).

We recommend the STAPLE algorithm when:

  • the experience of the experts who annotated the images varies or is unknown
  • you need more precise results than Majority Voting can provide

We recommend the Majority Voting algorithm when:

  • you can be sure that the expert annotations contain no repeated (systematic) errors
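For intuition, here is a minimal sketch of both algorithms using SimpleITK and NumPy. It is an illustration only, not deepflash2’s internal implementation, and the file paths are placeholders:

    import numpy as np
    import SimpleITK as sitk

    # Binary masks of the same image from three hypothetical experts
    # (placeholder paths, not a required folder layout).
    masks = [sitk.ReadImage(f"expert_{i}/image_01.png", sitk.sitkUInt8) > 0
             for i in (1, 2, 3)]

    # STAPLE estimates each expert's reliability and returns a per-pixel
    # foreground probability; threshold it to obtain the ground truth.
    staple_prob = sitk.GetArrayFromImage(sitk.STAPLE(masks))
    staple_gt = staple_prob > 0.5

    # Majority Voting: a pixel is foreground if most experts labeled it so.
    stack = np.stack([sitk.GetArrayFromImage(m) for m in masks])
    majority_gt = stack.sum(axis=0) > len(masks) / 2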

When the estimation is finished, you can download the ground truth images for further use in training. If you proceed with training, deepflash2 will automatically select them for you.

6. Training the model

In this step you will create a model that you can use to automatically annotate new images.

6.1 Data

To train a model, you have to provide the following data:

  • Training images: these should be unsegmented and contain the objects you want the neural network to find.
  • Segmentation masks: you have to create these beforehand; deepflash2 automatically recognizes the number of masks you provide.
  • Number of classes: this depends on the characteristics of the segmentation, e.g., two for binary segmentation (a foreground and a background class).
  • Instance labels: this step is optional.
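If you are unsure how many classes your masks contain, a quick check with NumPy is enough (the file path below is a placeholder):

    import numpy as np
    import imageio.v3 as iio

    # Inspect the distinct label values in one of your masks.
    mask = iio.imread("masks/image_01_mask.png")  # placeholder path
    print(np.unique(mask))  # e.g. [0 1] means two classes: background, foreground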

6.2 Ensemble Training

You can use ensemble training to optimize the results. First, choose the number of models within the ensemble. Depending on the data, an ensemble should contain at least three models. If you are experimenting with custom training settings, try a single model first.
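The members of an ensemble are combined at prediction time. A minimal sketch of the idea in generic NumPy, where `models` are stand-in callables rather than deepflash2’s API:

    import numpy as np

    def ensemble_predict(models, image):
        """Average per-pixel class probabilities over all ensemble members."""
        # Each model maps an image to an (H, W, num_classes) probability map.
        probs = np.mean([model(image) for model in models], axis=0)
        return probs.argmax(axis=-1)  # pick the most probable class per pixel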

6.3 Validation

Here you can validate the performance of the previously trained model on unsegmented images.

7. Prediction

In this section you can use a trained model to segment new images and evaluate the precision of the results.

7.1 Data and Ensemble

First, you have to upload the images you want the model to work on. Second, you have to select the model or model ensemble you want to apply to the images. If you have used deepflash2 to train the model(s), they will be selected automatically.

7.2 Prediction and Quality Control

Here you can run the prediction and download the results. You can enable test-time augmentation (TTA) for the prediction; it is more reliable and accurate, but slower.
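Test-time augmentation predicts on several transformed copies of each image and averages the results, which smooths out orientation-dependent errors at the cost of extra forward passes. A minimal sketch of the idea, where `model` is a stand-in callable rather than deepflash2’s API:

    import numpy as np

    def tta_predict(model, image):
        """Average predictions over the original image and its two flips."""
        preds = []
        for axis in (None, 0, 1):  # original, vertical flip, horizontal flip
            augmented = image if axis is None else np.flip(image, axis=axis)
            pred = model(augmented)  # (H, W, num_classes) probability map
            # Undo the flip so all predictions are aligned before averaging.
            preds.append(pred if axis is None else np.flip(pred, axis=axis))
        return np.mean(preds, axis=0)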