NIH | National Cancer Institute | NCI Wiki  


...


The AIM Foundation and AIM 4.0 models are used to express and capture annotation and markup information relevant to images. They are described using a Unified Modeling Language (UML) class diagram. An annotation contains explanatory or descriptive information, generated by humans or machines, that directly relates to the content of one or more referenced images; it describes the meaning of the pixel information in those images. Annotations also form a collection of image semantic content that can be used for data mining. An image markup is a graphical symbol or textual description associated with an image. Markups can be used to depict textual information and regions of interest visually, either alongside an image or, more typically, overlaid upon it.


Both models capture imaging physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator, equipment used to create AIM instances, subject demographics, and adjudication observations. The 4.0 model has nine additional classes for storing lesion observations in Radiology and Oncology.

...


In the AIM 3.0 model, an image annotation or annotation of annotation is stored as a single file, either as an AIM DICOM SR or an AIM XML document. When AIM was used in pathology, hundreds of thousands of AIM files were created for a single study, and managing AIM files from different image studies became very complicated. A collection, described in the next section, is used to store AIM instances from the same imaging study.


The next section describes changes from AIM version 3.0 to the AIM Foundation model.

...

  • AnnotationEntity (abstract class)
  • AnnotationRoleEntity
  • CalculationEntity
  • ImagingObservationEntity
  • ImagingPhysicalEntity
  • TaskContextEntity


Figure 1. Annotation statements can be used for image annotations and annotation of annotations

...


Statements available from this group are shown in Figure 2.


Figure 2. Annotation of annotation statement can be used for AnnotationOfAnnotation

...


Statements available for the image annotation statement group are in Figure 3.


Figure 3. Image annotation statement can be used for ImageAnnotation

...

The AIM UML model, illustrated as a UML class diagram, is used to capture information about how images are perceived by human or machine observers. Our design process started with understanding the initial requirements [2] and the information in "From AIM 3.0 to the AIM Foundation Model." We identified a set of information objects that are used to collect information about imaging annotations and markup. Classes are divided into image semantic content, calculation, markup, image reference, and AIM statement groups. Classes that do not pertain to a specific group are classified in the general information group. These classes contain information about the workstation used to create the AIM annotations, the user who creates them, patient identification, DICOM segmentation, annotation role, inference, workflow activity, adjudication observation, image annotation, and annotation-of-annotation.


The AIM Foundation Model, shown in Figure 4, has evolved through an iterative feedback process since the release of AIM version 3, revision 11. The model has gone through many reviews and recommendation processes. Enterprise Architect can be used to view the AIM UML class diagram file, AIM_Foundation_v4_rv47_load.eap. You can also view the diagram in JPEG format or download the model from the NCI Wiki.


Based on the information in Figure 4, we can categorize the collection of classes into six groups: General Information, Calculation, Image Semantic Content (Finding), Markup, Image References, and AIM Statements. AIM statements are described in "From AIM 3.0 to the AIM Foundation Model." Descriptions of the other five groups follow.


Figure 4. AIM Foundation Model

...


The next class to examine is the AnnotationEntity class, an abstract base class for the ImageAnnotation and AnnotationOfAnnotation classes. The AnnotationEntity class captures the annotation's name, a general description, the type of annotation via controlled terminology, the creation date and time, a reference to the AIM template used to create the annotation, a reference to a previously related AIM annotation, and the AIM annotation UID.


The ImageAnnotation class annotates images. The AnnotationOfAnnotation class annotates other AIM annotations for comparison and reference purposes. Both ImageAnnotation and AnnotationOfAnnotation have AnnotationRoleEntity, AuditTrail, InferenceEntity, and TaskContextEntity classes. ImageAnnotation associates with the abstract SegmentationEntity class; the model currently supports DICOM segmentation via DicomSegmentationEntity. DICOM segmentations are either binary or fractional. The DicomSegmentationEntity class represents a multi-frame image that classifies pixels in one or more referenced images. The class contains the DICOM SOP class UID that defines the type of segmentation. It references its own instance UID and the instance UID of the image to which the segmentation is applied. It also has an identification number for the segment, which shall be unique within the segmentation instance in which it is created. An ImageAnnotation may have zero or more segmentation objects. The AnnotationRoleEntity class describes the role of an annotation; each instance can have a role associated with it, e.g. a baseline case. The InferenceEntity class provides a conclusion derived through interpreting an imaging study and/or medical history. The AuditTrail class captures the status of an annotation instance using a coded term. The TaskContextEntity class contains identifying and descriptive attributes of the reading session and the reading subtask that produces findings in a clinical or trial environment; it consists of the overall task and the specific subtask. A task represents a unit of overall work and may have one or more subtasks.
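As an illustrative sketch only (the AIM model is defined in UML and serialized as AIM XML or DICOM SR; these Python class names mirror the model but are not the AIM toolkit's actual API), the AnnotationEntity hierarchy and its segmentation association might look like:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional
import uuid

@dataclass
class AnnotationEntity:
    """Abstract base class for ImageAnnotation and AnnotationOfAnnotation."""
    name: str
    type_code: str                       # annotation type from a controlled terminology
    description: str = ""
    date_time: datetime = field(default_factory=datetime.now)
    template_uid: Optional[str] = None   # AIM template used to create the annotation
    previous_uid: Optional[str] = None   # previously related AIM annotation
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class DicomSegmentationEntity:
    sop_class_uid: str                   # defines the type of segmentation
    sop_instance_uid: str                # the segmentation's own instance UID
    referenced_sop_instance_uid: str     # image the segmentation is applied to
    segment_number: int                  # unique within this segmentation instance

@dataclass
class ImageAnnotation(AnnotationEntity):
    # an ImageAnnotation may have zero or more segmentation objects
    segmentations: List[DicomSegmentationEntity] = field(default_factory=list)
```

Creating `ImageAnnotation(name="lesion 1", type_code="...")` then yields an instance with its own UID and an initially empty segmentation list.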


Figure 5. General Information Group


The Calculation group, shown in Figure 6, represents the calculation results of an AIM annotation. Calculation results may or may not be directly associated with graphical symbols or markups. For example, given an image with a single ellipse markup, the calculation results can be an area in square millimeters and references to the maximum and minimum pixel values. As another example, an image has an arrow pointing to a specific location and two concentric circles, with an area measurement of the larger circle minus the smaller circle; the computed result is based on independent calculations made on each circle. The AIM schema also allows calculation results that are not directly related to markups. The CalculationEntity class has overall information about a calculation performed directly on an image or images. It defines the type of calculation, such as area, height, radius, or volume of an ellipsoid, in the form of a controlled terminology captured as code value, code meaning, and coding scheme designator in a single attribute, typeCode, with units drawn from UCUM. It also captures MathML as a string attribute within the class. A questionTypeCode attribute captures, as a code, the reason why a calculation is needed. A textual description can be stored in the description attribute. The CalculationResult abstract class contains the type of result (e.g. binary, scalar, vector, histogram, array, or matrix), the unit of measurement, and a coded data type (a primitive programming data type such as integer or double, or another data type such as URI). A CalculationResult can be stored as a compact or an extended result, CompactCalculationResult or ExtendedCalculationResult, respectively. CompactCalculationResult has three attributes: value, encoding, and compression. The result of a calculation is captured in string format in the value attribute, which can hold an array, binary, histogram, matrix, scalar, or vector value.
Encoding is the encoding method applied to the content of the value attribute. Compression is the method used to compress the content of the value attribute. CalculationResult has an association with the Dimension class that states how many dimensions a CalculationResult has. ExtendedCalculationResult stores each element of a calculation result individually, with the precise location of each element. The Data class stores the result value. The Coordinate class identifies the location within a dimension for the Data class. A CalculationEntity may have relationships with a markup or a collection of markups, other calculations, an imaging observation, and an imaging physical entity. These relationships are captured as AIM statements.
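A minimal sketch of the compact-result round trip follows. This is a hypothetical illustration: the specific encoding ("base64") and compression ("deflate") values, and the assumption that the payload is a flat vector of little-endian doubles, are choices made here for demonstration, not values mandated by the AIM schema.

```python
import base64
import struct
import zlib

def decode_compact_result(value: str, encoding: str = "base64",
                          compression: str = "deflate") -> list:
    """Recover numeric values from a CompactCalculationResult-style triple.

    The result is stored as a single string; `encoding` says how the bytes
    were text-encoded, and `compression` how they were compressed.
    Assumes the decoded bytes are a flat vector of little-endian doubles.
    """
    raw = base64.b64decode(value) if encoding == "base64" else value.encode()
    if compression == "deflate":
        raw = zlib.decompress(raw)
    n = len(raw) // 8  # 8 bytes per double
    return list(struct.unpack("<" + "d" * n, raw))

# round-trip example: pack three doubles, compress, base64-encode, decode back
data = struct.pack("<3d", 1.5, 2.0, 3.25)
packed = base64.b64encode(zlib.compress(data)).decode()
print(decode_compact_result(packed))  # → [1.5, 2.0, 3.25]
```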


Figure 6. Calculation Group


The classes in the Image Semantic Content group, shown in Figure 7, are used to gather image findings or interpretations of images. The ImagingPhysicalEntity class stores an anatomical location (e.g. femur) as a coded term from a recognized controlled vocabulary (RadLex, SNOMED CT, UMLS, etc.). The ImagingPhysicalEntityCharacteristic class further describes the ImagingPhysicalEntity class, e.g. "fracture". The ImagingObservationEntity class describes things that are seen in an image; "Mass," "Radiographic evidence of pleural effusion," "Foreign Body," and "Artifact" are all examples of ImagingObservationEntity. The ImagingObservationEntityCharacteristic class includes descriptors of the ImagingObservationEntity class such as "dense," "heterogeneous," "hypoechoic," and "spiculated". Both ImagingPhysicalEntityCharacteristic and ImagingObservationEntityCharacteristic may be associated with CharacteristicQuantification. A quantification can be a numerical value, an interval (e.g. 34-67%), a scale (e.g. 1: none, 2: mild), a quantile (e.g. 1 (1-50), 2 (51-100)), or a non-quantifiable term (e.g. none, mild, marked).
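The coded-term pattern used throughout this group can be sketched as follows. This is a hypothetical illustration; the code values below are placeholders, not real RadLex identifiers.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class CodedTerm:
    """Code value / code meaning / coding scheme designator triple, as used
    throughout AIM for controlled-terminology entries."""
    code_value: str
    code_meaning: str
    coding_scheme_designator: str

@dataclass
class ImagingPhysicalEntity:
    type_code: CodedTerm                            # anatomical location, e.g. femur
    characteristics: List[CodedTerm] = field(default_factory=list)

# placeholder codes for illustration only; not real RadLex identifiers
femur = ImagingPhysicalEntity(
    type_code=CodedTerm("RID_PLACEHOLDER_1", "femur", "RADLEX"),
    characteristics=[CodedTerm("RID_PLACEHOLDER_2", "fracture", "RADLEX")],
)
```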


Figure 7. Image Semantic Content Group (Finding)

...


The TextAnnotationEntity class has coordinates captured as SCOORD or SCOORD3D graphic type as TwoDimensionMultiPoint or ThreeDimensionMultiPoint, respectively. A TextAnnotationEntity's MultiPoint implementation is expected to have no more than two coordinates that can be represented as an arrow connecting TextAnnotationEntity to a point on an image. Only the ImageAnnotation class can have markups.


Figure 8. Markup Group


The ImageReference group, shown in Figure 9, represents an image or collection of images being annotated. The two possible types of references are DICOM image references and URI (web) image references. First, DICOMImageReferenceEntity associates with other classes that mimic the DICOM information model. It has one ImageStudy object, which has one ImageSeries object, which in turn has one or more Image objects. The ImageStudy class has the study instance UID, start date, start time, and procedure description. ImageStudy may have zero or more references to DICOM objects via the ReferencedDicomObject class. The ImageSeries class has the series instance UID. The Image class has the SOP class UID and SOP instance UID. The Image class has two associations, with GeneralImage and ImagePlane; these classes come from the DICOM General Image and Image Plane modules, respectively. They are used to store frequently used DICOM tags such as patient orientation, pixel spacing, and image position. The second image reference type is WebImageReference, which contains a URI to an image.
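The one-study, one-series, many-images hierarchy described above can be sketched as follows (a hypothetical Python mirroring of the UML, not the AIM toolkit API; the UIDs below are placeholders except for the standard CT Image Storage SOP class UID):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Image:
    sop_class_uid: str
    sop_instance_uid: str

@dataclass
class ImageSeries:
    instance_uid: str
    images: List[Image] = field(default_factory=list)   # one or more images

@dataclass
class ImageStudy:
    instance_uid: str
    start_date: str
    procedure_description: str
    series: Optional[ImageSeries] = None                # exactly one series

@dataclass
class DicomImageReferenceEntity:
    study: ImageStudy                                   # exactly one study

ref = DicomImageReferenceEntity(
    study=ImageStudy(
        instance_uid="1.2.3",                           # placeholder study UID
        start_date="2012-01-01",
        procedure_description="CT chest",
        series=ImageSeries(
            "1.2.3.4",                                  # placeholder series UID
            [Image("1.2.840.10008.5.1.4.1.1.2",         # CT Image Storage SOP class
                   "1.2.3.4.5")],                       # placeholder instance UID
        ),
    )
)
```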

Figure 9. ImageReference Group

...

  1. Naming Convention:
    1. A name of a class must explicitly describe what information the class will collect. It must start with a capital letter. If part of the class name is a capitalized abbreviation, only the first character of the abbreviation is capitalized; e.g. DICOM becomes Dicom (see Figure 10).

Figure 10. A Class Name

The association name of a source class (Figure 11) is the same as the source class name with the first character in lowercase.

Figure 11. An Association Name of a Source Class

The association name of a target class (Figure 12) is the same as the target class name with the first character in lowercase.

Figure 12. An Association Name of a Target Class

  • A source class may have or contain 1-to-0..* (zero-to-many) or 1-to-1..* (one-to-many) associations to a target class. In that case, the target association name must append "Collection" to the end of the class name (Figure 13).

Figure 13. An Example of Collection
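The naming rules above can be expressed mechanically; a small hypothetical helper under those conventions:

```python
def association_name(class_name: str, to_many: bool = False) -> str:
    """Derive an association name from a class name per the AIM naming rules:
    lowercase the first character, and append "Collection" when the
    association is 1-to-0..* or 1-to-1..*."""
    name = class_name[0].lower() + class_name[1:]
    return name + "Collection" if to_many else name

print(association_name("ImageSeries"))                  # imageSeries
print(association_name("ReferencedDicomObject", True))  # referencedDicomObjectCollection
```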

...

This class contains observations made about lesions in day-to-day clinical interpretations and clinical trial results at a specific timepoint. It also includes "lesions" that are created for the purpose of calibrating scanned film or other secondary capture images. For detailed information, see DICOM Clinical Trials Results Reporting Supplement (Working group 18).


Figure 14. AIM 4.0 Model


Six classes were created for AIM statements as follows.

...

An instance of annotation-of-annotation may have one or more general lesion observations associated with it. AnnotationOfAnnotationHasGeneralLesionObservationEntityStatement represents a direct relationship between an instance of annotation-of-annotation and general lesion observation. If you have two general lesion observations, you will need to create two statements.


A use case: An adjudicator wants to create a general lesion observation statement from an annotation-of-annotation.


Assumption:

  • Annotation-of-annotation instances, each with a general lesion observation, were created earlier by a reader.
  • There is a system capable of reading and extracting information from the annotation for further displaying, computing and manipulating purposes.

...

An instance of annotation-of-annotation may have one or more timepoint lesion observations associated with it. AnnotationOfAnnotationHasTimePointLesionObservationEntityStatement represents a direct relationship between an instance of annotation-of-annotation and timepoint lesion observation. If you have two timepoint lesion observations, you will need to create two statements.


A use case: An adjudicator wants to create a timepoint lesion observation statement from an annotation-of-annotation.


Assumption:

  1. Annotation-of-annotation instances, each with a timepoint lesion observation, were created earlier by a reader.
  2. There is a system capable of reading and extracting information from the annotations for further displaying, computing and manipulating purposes.

...

An instance of image annotation may have one or more general lesion observations associated with it. ImageAnnotationHasGeneralLesionObservationEntityStatement represents a direct relationship between an instance of image annotation and general lesion observation. If you have two general lesion observations, you will need to create two statements.


A use case: An adjudicator wants to create a general lesion observation statement from an image annotation.

...

An instance of image annotation may have one or more timepoint lesion observations associated with it. ImageAnnotationHasTimePointLesionObservationEntityStatement represents a direct relationship between an instance of image annotation and a timepoint lesion observation. If you have two timepoint lesion observations, you will need to create two statements.


A use case: An adjudicator wants to create a timepoint lesion observation statement from an image annotation.


Assumption:

  1. Image annotation instances, each with a timepoint lesion observation, were created earlier by a reader.
  2. There is a system capable of reading and extracting information from the annotation for further displaying, computing and manipulating purposes.

...

The class is used to record a relationship between a timepoint lesion observation entity and an imaging physical entity. Each lesion observation can only be directly related to one imaging physical entity.
A use case: An imaging interpreter wants to link a timepoint lesion observation and an imaging physical entity.


Working with AIM:

  1. Create an imaging physical entity instance.
  2. Create a timepoint lesion observation entity instance.
  3. Create a TimePointLesionObservationEntityHasImagingPhysicalEntityStatement statement linking the timepoint lesion observation entity (subjects) to the imaging physical entity (objects).
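The three steps above can be sketched as follows. This is a hypothetical illustration: in a real AIM document the statement is serialized into the annotation's XML, and the class here only models the subject/object UID pairing.

```python
from dataclasses import dataclass
import uuid

def new_uid() -> str:
    return str(uuid.uuid4())

@dataclass
class AimStatement:
    """Subject-predicate-object triple linking two entities by UID."""
    statement_name: str
    subject_uid: str
    object_uid: str

# Step 1: create the imaging physical entity instance.
imaging_physical_uid = new_uid()
# Step 2: create the timepoint lesion observation entity instance.
observation_uid = new_uid()
# Step 3: link the observation (subject) to the imaging physical entity (object).
statement = AimStatement(
    "TimePointLesionObservationEntityHasImagingPhysicalEntityStatement",
    subject_uid=observation_uid,
    object_uid=imaging_physical_uid,
)
```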

...

The ImageAnnotationCollection object is required to have at least one ImageAnnotation object, which references images via a DICOMImageReferenceEntity or WebImageReferenceEntity. The DICOMImageReferenceEntity object must have one ImageStudy object. The ImageStudy object has one ImageSeries object, and each ImageSeries object may have one or more Image objects. It is implied in the model that all images originate from the same study of the same patient.


An ImageAnnotationCollection object may have a Person object. An ImageAnnotation object may have DicomSegmentationEntity objects, which contain references to their own instance UIDs and the referenced instance UID of the image to which the segmentation is applied. A DicomSegmentationEntity also has the segmentation type (the DICOM SOP class UID) and an identification number of the segment; the identification of the segment shall be unique within the segmentation instance in which it is created. An ImagingObservationEntity object is captured as a DICOM code sequence with a possible textual comment. An ImagingObservationEntity may have zero or more ImagingObservationEntityCharacteristic objects, which are captured as DICOM code sequences. An ImageAnnotation may store conclusions derived by interpreting an imaging study and/or medical history in a collection of InferenceEntity objects, which store the information as a code sequence based on a controlled terminology.


An ImageAnnotation object may have zero or more TextAnnotationEntity objects. Each TextAnnotationEntity object may have a two- or three-dimensional Cartesian coordinate set defined as a MultiPoint type object. TextAnnotationEntity is used as a text markup that can be shown on an image. Graphic markups are stored as TwoDimensionGeometricShapeEntity and ThreeDimensionGeometricShapeEntity objects, which extend GeometricShapeEntity. Two-dimensional graphic types are MultiPoint, Point, Circle, Ellipse, and Polyline objects; these inherit the TwoDimensionGeometricShapeEntity abstract class properties and methods. Each two-dimensional graphic type contains one or more TwoDimensionSpatialCoordinate instances. Three-dimensional graphic types are Point, MultiPoint, Polyline, Polygon, Ellipse, and Ellipsoid; these inherit the ThreeDimensionGeometricShapeEntity abstract class properties and methods. Each three-dimensional graphic type contains one or more ThreeDimensionSpatialCoordinate instances. The coordinateIndex attribute in the TwoDimensionSpatialCoordinate or ThreeDimensionSpatialCoordinate class signifies the order in which a coordinate appears in the shape. The GeometricShapeEntity class closely follows DICOM 3.0 part 3, C.18.6.1.2 and C.18.9.1.2 Graphic Type. TwoDimensionSpatialCoordinate contains the SOP instance UID and frame number (for a multi-frame image) to identify which image a geometric shape object belongs to.
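The role of coordinateIndex can be sketched as follows (hypothetical Python; attribute names mirror the model's, and the sort shows why the index matters for drawing polylines):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TwoDimensionSpatialCoordinate:
    coordinate_index: int        # order of this point within the shape
    x: float
    y: float
    sop_instance_uid: str        # image this coordinate belongs to
    frame_number: int = 1        # frame, for multi-frame images

@dataclass
class TwoDimensionPolyline:
    coordinates: List[TwoDimensionSpatialCoordinate]

    def ordered_points(self) -> List[Tuple[float, float]]:
        """Return (x, y) pairs in drawing order via coordinate_index."""
        return [(c.x, c.y) for c in
                sorted(self.coordinates, key=lambda c: c.coordinate_index)]

# points supplied out of order still draw correctly ("1.2.3" is a placeholder UID)
line = TwoDimensionPolyline([
    TwoDimensionSpatialCoordinate(1, 10.0, 20.0, "1.2.3"),
    TwoDimensionSpatialCoordinate(0, 0.0, 0.0, "1.2.3"),
])
print(line.ordered_points())  # → [(0.0, 0.0), (10.0, 20.0)]
```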


ImageAnnotationCollection inherits from AnnotationCollection, which may have at most one Equipment object and one User object. ImageAnnotation inherits properties and methods from the AnnotationEntity class. It may have zero or more ImagingPhysicalEntity, ImagingObservationEntity, and CalculationEntity objects.


A CalculationEntity object can be related to a single markup, or to a collection of markups and other calculations that are not related to a markup. A calculation may reference other calculations by using CalculationEntityReferencesCalculationEntity and CalculationEntityUsesCalculationEntity objects, which contain the UID of the referenced CalculationEntity object. A CalculationEntity object may have zero or more CalculationResult objects. It is possible for a CalculationEntity to have no CalculationResult; this means that the information provided in the CalculationEntity object is sufficient to describe the calculation.


A calculation result can be a scalar, vector, matrix, histogram, or array. The dimensionality of a calculation result is represented by Dimension objects. A CalculationResult object must have at least one Dimension object. The index attribute in the Dimension object is a zero-based unique index of the dimension. The size attribute specifies how many members a dimension has. The label attribute provides a textual meaning for the dimension.


A CalculationResult object may have zero or more Data objects. The absence of any Data object means that the result is an empty set. Each Coordinate object specifies a dimension index and a position within that dimension. The number of Coordinate objects for each Data object cannot exceed the total number of Dimension objects in a CalculationResult. A Data object cannot have more than one Coordinate object with the same dimensionIndex.
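The two Coordinate constraints above can be checked mechanically; a hypothetical validator:

```python
from typing import List, Tuple

def validate_data_coordinates(num_dimensions: int,
                              data_points: List[List[Tuple[int, int]]]) -> None:
    """Check the Coordinate constraints of a CalculationResult.

    `data_points` holds one coordinate list per Data object; each
    coordinate is a (dimension_index, position) pair.
    """
    for coords in data_points:
        # no more Coordinate objects than there are Dimension objects
        if len(coords) > num_dimensions:
            raise ValueError("more Coordinate objects than Dimension objects")
        # no duplicate dimensionIndex within a single Data object
        indices = [dim for dim, _ in coords]
        if len(indices) != len(set(indices)):
            raise ValueError("duplicate dimensionIndex in one Data object")

# a 2-D result with one Data value at row 0, column 1: valid, passes silently
validate_data_coordinates(2, [[(0, 0), (1, 1)]])
```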

...

The AnnotationOfAnnotationCollection object is required to have at least one AnnotationOfAnnotation object. AnnotationOfAnnotation works very much the same way as ImageAnnotation for the calculation group and image semantic content group (see section 7.c). The AnnotationOfAnnotation object must have at least one AIM statement that contains the UID of an ImageAnnotation or AnnotationOfAnnotation object. AnnotationOfAnnotation may store conclusions derived by interpreting an imaging study and/or medical history in a collection of InferenceEntity objects, which store the information as a code sequence based on a controlled terminology.


AnnotationOfAnnotation may refer to a collection of ImageAnnotation objects that can come from different studies.

...