NIH | National Cancer Institute | NCI Wiki  


...


The AIM Foundation and AIM 4.0 models express and capture annotation and markup information relevant to images; each is described by a Unified Modeling Language (UML) class diagram. An annotation contains explanatory or descriptive information, generated by humans or machines, that directly relates to the content of one or more referenced images; it describes the meaning of the pixel information in those images. Annotations also accumulate into a collection of image semantic content that can be used for data mining. An image markup is a graphical symbol or textual description associated with an image. Markups can present textual information and regions of interest visually, alongside or, more typically, overlaid upon an image.


Both models capture imaging physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator, equipment used to create AIM instances, subject demographics, and adjudication observations. The 4.0 model has nine additional classes for storing lesion observations in Radiology and Oncology.

...


In the AIM 3.0 model, an image annotation or annotation-of-annotation is stored as a single file, either an AIM DICOM SR or an AIM XML document. When AIM was used in pathology, hundreds of thousands of AIM files were created for a single study, and managing AIM files from different image studies became very complicated. The collection described in the next section was introduced to store AIM instances from the same imaging study.


The next section describes changes from AIM version 3.0 to the AIM Foundation model.

...

The AIM UML model, illustrated as a UML class diagram, captures information about how images are perceived by human or machine observers. Our design process started with understanding the initial requirements [2] and the information in "From AIM 3.0 to the AIM Foundation Model." We identified a set of information objects used to collect information about imaging annotations and markup. Classes are divided into image semantic content, calculation, markup, image reference, and AIM statement groups. Classes that do not pertain to a specific group are classified in the general information group; these contain information about the workstation used to create the AIM annotations, the user who creates them, patient identification, DICOM segmentation, annotation role, inference, workflow activity, adjudication observation, image annotation, and annotation-of-annotation.


The AIM Foundation Model, shown in Figure 4, has evolved through an iterative feedback process since the release of AIM version 3, revision 11, and has gone through many review and recommendation cycles. Enterprise Architect can be used to view the AIM UML class diagram file, AIM_Foundation_v4_rv47_load.eap; the diagram is also available in JPEG format. You can download the model from the NCI Wiki.


Based on the information in Figure 4, we can categorize the collection of classes into six groups: General Information, Calculation, Image Semantic Content (Finding), Markup, Image References, and AIM Statements. AIM statements are described in "From AIM 3.0 to the AIM Foundation Model." Descriptions of the other five groups follow.


Figure 4. AIM Foundation Model

...


The next class to examine is the AnnotationEntity class, an abstract base class for the ImageAnnotation and AnnotationOfAnnotation classes. The AnnotationEntity class captures the annotation's name, a general description, the type of annotation (via a controlled terminology), the creation date and time, a reference to the AIM template used to create the annotation, a reference to a previously related AIM annotation, and the annotation's UID.
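The AnnotationEntity hierarchy described above can be sketched as follows. This is a minimal, hypothetical Python rendering of the UML classes; the attribute names approximate the model and are not the normative AIM schema or a real AIM library API.

```python
# Illustrative sketch of the AnnotationEntity hierarchy; attribute names
# are assumptions modeled on the text, not the normative AIM schema.
from abc import ABC
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
import uuid

@dataclass
class AnnotationEntity(ABC):
    """Abstract base class for ImageAnnotation and AnnotationOfAnnotation."""
    name: str
    description: str = ""
    type_code: str = ""                  # annotation type, from a controlled terminology
    date_time: datetime = field(default_factory=datetime.now)
    template_uid: Optional[str] = None   # AIM template used to create the annotation
    precedent_uid: Optional[str] = None  # previously related AIM annotation
    unique_identifier: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class ImageAnnotation(AnnotationEntity):
    """Annotates one or more images."""

@dataclass
class AnnotationOfAnnotation(AnnotationEntity):
    """Annotates other AIM annotations for comparison and reference."""
```

Both concrete classes inherit every AnnotationEntity attribute, so creating an instance only requires a name; the UID and timestamp default in, mirroring how each AIM annotation carries its own unique identifier.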


The ImageAnnotation class annotates images; the AnnotationOfAnnotation class annotates other AIM annotations for comparison and reference purposes. Both classes have AnnotationRoleEntity, AuditTrail, InferenceEntity, and TaskContextEntity classes.

ImageAnnotation is associated with the abstract SegmentationEntity class. The model currently supports DICOM segmentation via DicomSegmentationEntity; DICOM segmentations are either binary or fractional. The DicomSegmentationEntity class represents a multi-frame image that classifies pixels in one or more referenced images. The class contains the DICOM SOP class UID that defines the type of segmentation, references its own instance UID and the instance UID of the image to which the segmentation is applied, and carries an identification number for the segment that shall be unique within the segmentation instance in which it is created. An ImageAnnotation may have zero or more segmentation objects.

The AnnotationRoleEntity class describes the role of an annotation; each instance can have a role associated with it, e.g. a baseline case. The InferenceEntity class provides a conclusion derived by interpreting an imaging study and/or medical history. The AuditTrail class captures the status of an annotation instance using a coded term. The TaskContextEntity class contains identifying and descriptive attributes of the reading session and the reading subtask that produces clinical or trial findings; it consists of the overall task and its specific subtasks. A task represents a unit of overall work and may have one or more subtasks.

...

  • A source class may have or contain 1-to-0..* (zero-to-many) or 1-to-1..* (one-to-many) associations to a target class. The target association name must append "Collection" to the end of the class name, as shown in Figure 13.

AIM Image Study is the source and AIM Referenced DICOM Object is the target
Figure 13. An Example of Collection

...

This class contains observations made about lesions in day-to-day clinical interpretations and clinical trial results at a specific timepoint. It also includes "lesions" that are created for the purpose of calibrating scanned film or other secondary capture images. For detailed information, see DICOM Clinical Trials Results Reporting Supplement (Working group 18).


Figure 14. AIM 4.0 Model


Six classes were created for AIM statements as follows.

...

An instance of annotation-of-annotation may have one or more general lesion observations associated with it. AnnotationOfAnnotationHasGeneralLesionObservationEntityStatement represents a direct relationship between an instance of annotation-of-annotation and general lesion observation. If you have two general lesion observations, you will need to create two statements.
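The one-statement-per-observation rule above can be sketched as follows. The statement class name comes from the model; the attribute names and helper function are illustrative assumptions, not a real AIM API.

```python
# Illustrative sketch: one AIM statement object per lesion observation.
# Attribute names and the helper are assumptions, not a normative AIM API.
from dataclasses import dataclass
from typing import List

@dataclass
class AnnotationOfAnnotationHasGeneralLesionObservationEntityStatement:
    subject_uid: str  # UID of the annotation-of-annotation (subject)
    object_uid: str   # UID of the general lesion observation (object)

def link_observations(annotation_uid: str,
                      observation_uids: List[str]):
    # Two observations -> two statements, as the text requires.
    return [
        AnnotationOfAnnotationHasGeneralLesionObservationEntityStatement(
            subject_uid=annotation_uid, object_uid=obs_uid)
        for obs_uid in observation_uids
    ]
```

Each statement records exactly one subject-object pair, so an annotation-of-annotation with two general lesion observations yields two statement instances.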


A use case: An adjudicator wants to create a general lesion observation statement from an annotation-of-annotation.


Assumption:

  • An annotation-of-annotation, with a general lesion observation, was created earlier by a reader.
  • There is a system capable of reading and extracting information from the annotation for further display, computation, and manipulation.

...

An instance of annotation-of-annotation may have one or more timepoint lesion observations associated with it. AnnotationOfAnnotationHasTimePointLesionObservationEntityStatement represents a direct relationship between an instance of annotation-of-annotation and timepoint lesion observation. If you have two timepoint lesion observations, you will need to create two statements.


A use case: An adjudicator wants to create a timepoint lesion observation statement from an annotation-of-annotation.


Assumption:

  1. An annotation-of-annotation, with a timepoint lesion observation, was created earlier by a reader.
  2. There is a system capable of reading and extracting information from the annotations for further display, computation, and manipulation.

...

An instance of image annotation may have one or more general lesion observations associated with it. ImageAnnotationHasGeneralLesionObservationEntityStatement represents a direct relationship between an instance of image annotation and general lesion observation. If you have two general lesion observations, you will need to create two statements.


A use case: An adjudicator wants to create a general lesion observation statement from an image annotation.

...

An instance of image annotation may have one or more timepoint lesion observations associated with it. ImageAnnotationHasTimePointLesionObservationEntityStatement represents a direct relationship between an instance of image annotation and timepoint lesion observation. If you have two timepoint lesion observations, you will need to create two statements.


A use case: An adjudicator wants to create a timepoint lesion observation statement from an image annotation.


Assumption:

  1. An image annotation, with a timepoint lesion observation, was created earlier by a reader.
  2. There is a system capable of reading and extracting information from the annotation for further display, computation, and manipulation.

...

This class records a relationship between a timepoint lesion observation entity and an imaging physical entity. Each lesion observation can be directly related to only one imaging physical entity.
A use case: An imaging interpreter wants to link a timepoint lesion observation to an imaging physical entity.


Working with AIM:

  1. Create an imaging physical entity instance.
  2. Create a timepoint lesion observation entity instance.
  3. Create a TimePointLesionObservationEntityHasImagingPhysicalEntityStatement statement linking the timepoint lesion observation entity (subject) to the imaging physical entity (object).
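The three steps above can be sketched as follows. The statement class name is from the model; the UID values and attribute names are illustrative assumptions.

```python
# Sketch of the three-step linking workflow; UIDs and attribute
# names are illustrative, not a normative AIM API.
from dataclasses import dataclass

@dataclass
class ImagingPhysicalEntity:
    uid: str

@dataclass
class TimePointLesionObservationEntity:
    uid: str

@dataclass
class TimePointLesionObservationEntityHasImagingPhysicalEntityStatement:
    subject_uid: str  # the timepoint lesion observation (subject)
    object_uid: str   # the imaging physical entity (object)

# 1. Create an imaging physical entity instance.
ipe = ImagingPhysicalEntity(uid="ipe-1")
# 2. Create a timepoint lesion observation entity instance.
tplo = TimePointLesionObservationEntity(uid="tplo-1")
# 3. Link them with one statement; each observation may relate
#    directly to only one imaging physical entity.
stmt = TimePointLesionObservationEntityHasImagingPhysicalEntityStatement(
    subject_uid=tplo.uid, object_uid=ipe.uid)
```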

...

The ImageAnnotationCollection object is required to have at least one ImageAnnotation object, which references images through a DICOMImageReferenceEntity or WebImageReferenceEntity. A DICOMImageReferenceEntity object must have one imaging study. The ImageStudy object may have one ImageSeries object. Each ImageSeries object may have one or more Image objects. The model implies that all images originate from the same study of the same patient.


An ImageAnnotationCollection object may have a Person object. An ImageAnnotation object may have DicomSegmentationEntity objects, which contain references to their own instance UIDs and the referenced instance UID of the image to which the segmentation is applied. A DicomSegmentationEntity also has the segmentation type (DICOM SOP class UID) and an identification number for the segment, which shall be unique within the segmentation instance in which it is created. An ImagingObservationEntity object is captured as a DICOM code sequence with an optional textual comment. An ImagingObservationEntity may have zero or more ImagingObservationEntityCharacteristic objects, which are captured as DICOM code sequences. An ImageAnnotationEntity may store conclusions derived by interpreting an imaging study and/or medical history in a collection of InferenceEntity objects, which store the information as code sequences based on a controlled terminology.


An ImageAnnotationEntity object may have zero or more TextAnnotationEntity objects. Each TextAnnotationEntity object may have a two- or three-dimensional Cartesian coordinate set defined as a MultiPoint object. TextAnnotationEntity is used as a text markup that can be shown on an image.

Graphic markups are stored as TwoDimensionGeometricShapeEntity and ThreeDimensionGeometricShapeEntity objects, which extend GeometricShapeEntity. Two-dimensional graphic types are MultiPoint, Point, Circle, Ellipse, and Polyline; these inherit the properties and methods of the TwoDimensionGeometricShapeEntity abstract class, and each contains one or more TwoDimensionSpatialCoordinate instances. Three-dimensional graphic types are Point, MultiPoint, Polyline, Polygon, Ellipse, and Ellipsoid; these inherit the properties and methods of the ThreeDimensionGeometricShapeEntity abstract class, and each contains one or more ThreeDimensionSpatialCoordinate instances. The coordinateIndex attribute in the TwoDimensionSpatialCoordinate or ThreeDimensionSpatialCoordinate class signifies the order in which a coordinate appears in the shape. The GeometricShape class closely follows DICOM 3.0 Part 3, C.18.6.1.2 and C.18.9.1.2 (Graphic Type). TwoDimensionSpatialCoordinate contains a SOP instance UID and a frame number (for multi-frame images) to identify which image a geometric shape object belongs to.
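A minimal sketch of a two-dimensional markup, showing how the coordinateIndex attribute orders a shape's vertices. Class and attribute names approximate the model; the Polyline subclass, UID value, and sorting helper are illustrative assumptions.

```python
# Sketch of 2-D markup coordinates; coordinateIndex gives drawing order.
# Names approximate the model and are not a normative AIM API.
from dataclasses import dataclass
from typing import List

@dataclass
class TwoDimensionSpatialCoordinate:
    coordinate_index: int  # order of this vertex within the shape
    x: float
    y: float

@dataclass
class TwoDimensionPolyline:
    image_sop_instance_uid: str   # image the shape belongs to
    referenced_frame_number: int  # frame, for multi-frame images
    coordinates: List[TwoDimensionSpatialCoordinate]

    def ordered_vertices(self):
        # Render vertices in coordinateIndex order regardless of
        # the order they were stored in.
        return sorted(self.coordinates, key=lambda c: c.coordinate_index)

roi = TwoDimensionPolyline(
    image_sop_instance_uid="1.2.3.4",  # hypothetical UID
    referenced_frame_number=1,
    coordinates=[
        TwoDimensionSpatialCoordinate(1, 12.0, 30.0),
        TwoDimensionSpatialCoordinate(0, 10.0, 20.0),
    ])
```

Sorting by coordinateIndex, rather than trusting storage order, reflects the model's intent that the index alone defines where each coordinate falls in the shape.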


ImageAnnotationCollection inherits from AnnotationCollection, which may have at most one Equipment object and one User object. ImageAnnotation inherits properties and methods from the AnnotationEntity class and may have zero or more ImagingPhysicalEntity, ImagingObservationEntity, and CalculationEntity objects.


A CalculationEntity object can be related to a single markup, or to a collection of markups and other calculations that are not related to a markup. A calculation may reference other calculations through CalculationEntityReferencesCalculationEntity and CalculationEntityUsesCalculationEntity objects, which contain the UID of the referenced CalculationEntity object. A CalculationEntity object may have zero or more CalculationResult objects; it is possible for a calculation to have no CalculationResult, which means the information provided in the CalculationEntity object is sufficient to describe the calculation.


A calculation result can be a scalar, vector, matrix, histogram, or array. The dimensionality of a calculation result is represented by Dimension objects, and a CalculationResult object must have at least one Dimension object. The Index attribute in the Dimension object is a zero-based unique index of the dimension, the Size attribute specifies how many members the dimension has, and the Label attribute gives the dimension a textual meaning.


A CalculationResult object may have zero or more Data objects; the absence of any Data object means that the result is an empty set. Each Coordinate object specifies a dimension index and a position within that dimension. The number of Coordinate objects for each Data object cannot exceed the total number of Dimension objects in a CalculationResult, and a Data object cannot have more than one Coordinate object with the same dimensionIndex.
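The constraints above can be sketched as a small validation routine. Class and attribute names approximate the model; the validate method and example values are illustrative assumptions.

```python
# Sketch of the CalculationResult constraints; names approximate
# the model and are not a normative AIM API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dimension:
    index: int  # zero-based, unique within the result
    size: int   # number of members in this dimension
    label: str  # textual meaning of the dimension

@dataclass
class Coordinate:
    dimension_index: int
    position: int

@dataclass
class Data:
    value: float
    coordinates: List[Coordinate]

@dataclass
class CalculationResult:
    dimensions: List[Dimension]               # at least one required
    data: List[Data] = field(default_factory=list)  # empty list = empty result set

    def validate(self) -> None:
        assert self.dimensions, "a CalculationResult needs at least one Dimension"
        for d in self.data:
            # No more Coordinates than there are Dimensions...
            assert len(d.coordinates) <= len(self.dimensions)
            # ...and no duplicate dimensionIndex within one Data object.
            idxs = [c.dimension_index for c in d.coordinates]
            assert len(idxs) == len(set(idxs))
```

For example, a scalar result has a single Dimension of size 1 and one Data object whose sole Coordinate addresses position 0 of dimension 0.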

...

The AnnotationOfAnnotationCollection object is required to have at least one AnnotationOfAnnotation object. AnnotationOfAnnotation works much the same way as ImageAnnotation for the calculation and image semantic content groups (see section 7.c). The AnnotationOfAnnotation object must have at least one AIM statement that contains the UID of an ImageAnnotation or AnnotationOfAnnotation object. AnnotationOfAnnotation may store conclusions derived by interpreting an imaging study and/or medical history in a collection of InferenceEntity objects, which store the information as code sequences based on a controlled terminology.


AnnotationOfAnnotation may refer to a collection of ImageAnnotation objects that can come from different studies.

...