NIH | National Cancer Institute | NCI Wiki  



Publications mentioned during the meeting

  • Phillips PJ, Scruggs WT, O'Toole AJ, Flynn PJ, Bowyer KW, Schott CL, Sharpe M. FRVT 2006 and ICE 2006 large-scale results. Gaithersburg, MD: National Institute of Standards and Technology; 2007. Report No.: NIST IR 7408. doi:10.6028/NIST.IR.7408.

and 

  • Schwarz CG, Kremers WK, Lowe VJ, Savvides M, Gunter JL, Senjem ML, Vemuri P, Kantarci K, Knopman DS, Petersen RC, Jack CR Jr; Alzheimer’s Disease Neuroimaging Initiative. Face recognition from research brain PET: An unexpected PET problem. Neuroimage. 2022 Jun 3;258:119357. doi: 10.1016/j.neuroimage.2022.119357. Epub ahead of print. PMID: 35660089.

Best practices document draft will be ready in the next couple of weeks and will be shared for comment.


Demo of SynthStrip

  • Dr. Malte Hoffmann's slides (request an accessible version)
  • SynthStrip prevents facial reconstruction from medical images through pixel-level de-identification.
  • A combined effort led by Andrew at the Martinos Center.
  • He will present the tool and how it works, and then focus on its evaluation and robustness.
  • Skull stripping removes non-brain tissue from brain scans. Irrelevant structures can confound downstream analysis algorithms, and removing them also improves privacy. It is a holistic approach to de-identification and de-facing.
  • Other tools make strong assumptions about the input: they depend on the image type (for example, a particular MRI contrast) and have resolution expectations.
  • Deep-learning implementations of skull stripping have limitations too, because they are trained on specific data; they would not perform as well on new contrast types or new modalities.
  • The goal is to get these neural networks to generalize to data unseen during training. The approach is to synthesize training images of arbitrary characteristics, using brain label maps to drive the synthesis.
  • The SynthStrip team's general process starts from a set of brain label maps but no images. Given an input label map, they create a new label map by applying a random nonlinear deformation to increase spatial variability. They then sample a gray-scale image with a different intensity distribution for every structure in the image, applying random artifacts such as blurring.
  • Since this process is completely randomized, if they were to input the same underlying segmentation map again, they would get a completely different image. They want to encourage their networks to generalize beyond a specific data type.
  • For brain extraction, they sample and augment a label map and synthesize an arbitrary image from it. This image is fed into a neural network, whose prediction is then compared against the ground truth derived from the label map.
  • Label maps are only needed for training. If you want to extract the brain from the image, you don't need the label map at test time.
  • At evaluation time, SynthStrip performs robustly and with high accuracy on MRI scans, on isotropic scans as well as scans with thick slices, and on diffusion-weighted images that typically have lower resolution.
  • How does it perform in the presence of pathology? SynthStrip did well against data from TCIA.
  • To quantify this, they compiled a database of 600 images and used two metrics. On both metrics, SynthStrip outperforms all of the other baselines they tested and avoids gross mislabeling.
  • Limitations: 1. SynthStrip does not currently support in-utero brain extraction. 2. SynthStrip is inherently three-dimensional. 3. It is focused on pre-processing, so it does not include steps such as maximum-intensity projection and so forth.
  • Runtime performance is very fast.
  • It is a simple command-line utility, and a standalone version is available on Docker Hub.
  • They use Python and PyTorch under the hood.
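
The randomized synthesis step described above (per-structure random intensities plus random corruptions) can be sketched as follows. This is a hypothetical illustration in NumPy, not the SynthStrip implementation; all function names and parameter ranges are invented for the sketch.

```python
import numpy as np

def box_blur(image):
    """Simple 3-neighborhood mean filter along each axis (stand-in artifact)."""
    out = image.copy()
    for axis in range(image.ndim):
        out = (np.roll(out, 1, axis) + out + np.roll(out, -1, axis)) / 3.0
    return out

def synthesize_from_label_map(labels, rng=None):
    """Generate one random gray-scale image from an integer label map.

    Every structure (label) receives its own randomly drawn intensity
    distribution, and random blurring is applied as a corruption, so the
    same label map yields a different image on every call.
    """
    rng = np.random.default_rng(rng)
    image = np.zeros(labels.shape, dtype=float)
    for label in np.unique(labels):
        mask = labels == label
        mean = rng.uniform(0.0, 1.0)  # random mean intensity per structure
        std = rng.uniform(0.0, 0.1)   # random within-structure variability
        image[mask] = mean + std * rng.standard_normal(mask.sum())
    for _ in range(rng.integers(0, 3)):  # zero or more random blur passes
        image = box_blur(image)
    return np.clip(image, 0.0, 1.0)
```

Because the sampling is fully randomized, feeding the same segmentation map in twice produces two different training images, which is exactly what pushes the network to generalize beyond any one acquisition type.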
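
The notes do not say which two evaluation metrics the team used. For illustration only, the Dice overlap coefficient is a standard way to score agreement between a predicted brain mask and a ground-truth mask; the function below is a generic sketch, not the team's evaluation code.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two binary masks: 1.0 means identical,
    0.0 means no overlap. Two empty masks are treated as identical."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0
```
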

Discussion of Presentation

  • Is SynthStrip data useful in the real world?
  • Downstream algorithms need to be prepared to work on skull-stripped data.
  • Perspective from radiation oncology: they tested methods, and the Karina method was able to maintain what is needed for radiation therapy. Ying wants input from this group: if they test re-identification, what would be an acceptable performance metric?

Brian Bialecki on De-identification

  • His team at ACR is trying to find a way to release this data publicly.
  • He would like to get patient consent to share some identifying data.
  • They'd like to see the real data to assess both the real world risk of doing nothing and the real world risk of various mitigation approaches. 

Other Discussion

  • Skull stripping is not a replacement for de-facing.
  • The value of SynthStrip is not in how well the model performs, but rather how the model is created.
  • It's difficult to find things that work across different data sets and modalities. This is something we desperately need.
  • SynthStrip does not work on slices, it is fully 3D.
  • Everyone is converging on a registered or restricted access model for this data, governed by a data use agreement.
  • Record and track who received the data; however, some repositories have no such tracking.

Next Meeting

  • Skipping July
  • August 9, 2022, at 1 p.m. ET