Imaging reports contain both graphical drawings and medical knowledge in the form of annotations. These annotations are stored as unstructured text, separate from the graphical drawings, which typically reside in a proprietary format on an imaging system. Extracting this valuable medical information and combining it with the drawings on another system is time-consuming, and the results are cumbersome to filter and search. In addition, the existing vocabularies used to describe medical images contain thousands of terms, making it difficult for users to find the appropriate terms and include them in their AIM annotations.

The AIM model addresses this problem by capturing the descriptive information of an image, together with the user-generated graphical symbols placed on it, in a single common information source. AIM records medical findings using standard vocabularies such as RadLex, SNOMED CT®, and DICOM, as well as user-defined terminology. Image information captured in the AIM model includes the anatomic entity and its characteristics, the imaging observation and its characteristics, and the inference.
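To make this structure concrete, the sketch below shows one way such an annotation could be represented in code: coded terms for the anatomic entity, imaging observation, and inference stored alongside the graphical drawing in a single record. This is a minimal illustration, not the AIM schema itself; all class names, field names, and code values are assumptions (the RadLex-style code values in particular are placeholders, not verified RadLex identifiers).

```python
# Illustrative sketch of an AIM-style annotation record. Names and codes
# are hypothetical; they do not reproduce the actual AIM XML schema.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass(frozen=True)
class CodedTerm:
    """A term drawn from a controlled vocabulary (e.g., RadLex, SNOMED CT, DICOM)."""
    code: str     # placeholder code value, e.g. "RID0000"
    meaning: str  # human-readable meaning, e.g. "mass"
    scheme: str   # coding scheme, e.g. "RadLex"


@dataclass
class GraphicalDrawing:
    """A user-drawn symbol on the image, stored as 2D pixel coordinates."""
    shape_type: str                                   # e.g. "polyline", "circle"
    points: List[Tuple[int, int]] = field(default_factory=list)


@dataclass
class ImageAnnotation:
    """Combines the drawing and its coded description in one common record."""
    anatomic_entity: CodedTerm
    anatomic_characteristics: List[CodedTerm]
    imaging_observation: CodedTerm
    observation_characteristics: List[CodedTerm]
    inference: CodedTerm
    drawing: GraphicalDrawing


# Example: a spiculated mass in the lung, outlined with a polyline.
# Code values below are placeholders for illustration only.
annotation = ImageAnnotation(
    anatomic_entity=CodedTerm("RID0001", "lung", "RadLex"),
    anatomic_characteristics=[],
    imaging_observation=CodedTerm("RID0002", "mass", "RadLex"),
    observation_characteristics=[CodedTerm("RID0003", "spiculated margin", "RadLex")],
    inference=CodedTerm("RID0004", "neoplasm", "RadLex"),
    drawing=GraphicalDrawing("polyline", [(120, 88), (134, 92), (128, 110)]),
)
print(annotation.imaging_observation.meaning)  # prints "mass"
```

Because the coded terms and the drawing live in one record, such annotations can be filtered and searched by vocabulary code rather than by free-text matching, which is the capability the unstructured-text approach lacks.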