Digital Imaging Terminology

Jargon: Digital Imaging Terminology Explained!

This is a range of terms you will come up against in the world of medical digital imaging. It is by no means complete, but it covers some of the most common terms to help you out.

Read as much or as little as you want but ask us if you are not sure of something you have read or been told. If you want something added and explained, just contact us.

AE Title

Application Entities (AEs) are the named nodes in a DICOM network; the name of each node is its AE Title.
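The naming rules are simple enough to check in code. Below is a minimal Python sketch of the standard's constraints on an AE Title (at most 16 characters, no backslash or control characters, not all spaces); the example titles are made up:

```python
def is_valid_ae_title(title: str) -> bool:
    """Check a candidate DICOM Application Entity (AE) title.

    Per the DICOM standard, an AE title is at most 16 characters,
    may not contain a backslash or control characters, and may not
    consist solely of spaces.
    """
    if not 0 < len(title) <= 16:
        return False
    if title.strip() == "":
        return False
    return all(ch.isprintable() and ch != "\\" for ch in title)

# Hypothetical example titles:
print(is_valid_ae_title("CR_READER_1"))   # True
print(is_valid_ae_title("A" * 17))        # False: too long
```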

Cassette

A light-proof housing for x-ray film, containing front and back intensifying screens, between which the film is placed and held during exposure. Although it is usual to have two screens, there may be only one where there is a special need for a high detail picture.

Cassettes are also used in Computed Radiography (CR) Systems but holding an Imaging Plate (IP) instead of film. In this case we will refer to this as the cassette or cassette shell.

See “Imaging Plate” and “Computed Radiography”.

Cassette Grid

A cassette grid is composed of alternating strips of lead and radio translucent material such as aluminium. Placed on top of the cassette it permits the passage only of the x-rays that are passing directly to the film. Scattered rays are absorbed by the lead and this reduces the effect of scatter on the film and provides a more clear-cut image.

X-ray grids improve the quality of a radiograph by trapping most of the scattered radiation, the biggest contributing factor to poor diagnostic quality. Introducing an x-ray grid between the x-ray beam and the film or plate will provide a clearer and more detailed image.

Let the experts at ARO help you choose the grid that fits your needs. We can help you with many sizes and configurations including grids for CR, DR, C-Arm, decubitus, mammography, and standard applications.

Different ratios, line spacings and focal distances are available to suit varying needs; talk to us about what is best for you.
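One of those numbers, the grid ratio, is simply the height of the lead strips divided by the distance between them; higher ratios absorb more scatter but demand more careful beam alignment and a higher dose. A quick illustration (the millimetre figures are hypothetical):

```python
def grid_ratio(strip_height_mm: float, interspace_mm: float) -> float:
    """Grid ratio = lead strip height / interspace distance (h / D)."""
    return strip_height_mm / interspace_mm

# A hypothetical 8:1 grid: 2.4 mm strips with 0.3 mm interspace.
print(grid_ratio(2.4, 0.3))  # 8.0
```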

Caesium Iodide

Caesium iodide (CsI) is an ionic compound often used as the input phosphor of an x-ray image intensifier tube found in fluoroscopy equipment. Caesium iodide photocathodes are highly efficient at extreme ultraviolet wavelengths.[1]

An important application of caesium iodide crystals, which are scintillators, is electromagnetic calorimetry in experimental particle physics. Pure CsI is a fast and dense scintillating material with relatively high light yield. It shows two main emission components: one in the near ultraviolet region at the wavelength of 310 nm and one at 460 nm. The drawbacks of CsI are a high temperature gradient and a slight hygroscopicity.

Caesium iodide can be used in Fourier Transform Infrared (FT-IR) spectrometers as a beamsplitter. CsI has a wider transmission range than the more common potassium bromide beamsplitters, extending its usefulness into the far infrared. A problem with optical-quality CsI crystals is that they are very soft with no cleavage, making it difficult to create a flat polished surface. Also, the CsI optical crystals must be stored in a desiccator to prevent water damage to the surfaces, and coated (typically with germanium) to minimise water damage from short term atmospheric exposure during beamsplitter swapouts.

Source and more information: http://en.wikipedia.org/wiki/Caesium_iodide

Computed Radiography

Computed Radiography (CR) uses very similar equipment to conventional radiography except that in place of a film to create the image, an imaging plate (IP) made of photostimulable phosphor is used. The imaging plate is housed in a special cassette and placed under the body part or object to be examined and the x-ray exposure is made. Hence, instead of taking an exposed film into a darkroom for developing in chemical tanks or an automatic film processor, the imaging plate is run through a special laser scanner, or CR reader, that reads and digitizes the image. The digital image can then be viewed and enhanced using software that has functions very similar to other conventional digital image-processing software, such as contrast, brightness, filtration and zoom.

Source and more information: http://en.wikipedia.org/wiki/Computed_radiography

DICOM

Digital Imaging and Communications in Medicine is a standard for handling, storing, printing, and transmitting information in medical imaging. It includes a file format definition and a network communications protocol. The communication protocol is an application protocol that uses TCP/IP to communicate between systems. DICOM files can be exchanged between two entities that are capable of receiving image and patient data in DICOM format. The National Electrical Manufacturers Association (NEMA) holds the copyright to this standard. It was developed by the DICOM Standards Committee, whose members[2] are also partly members of NEMA.

Source and more information: http://en.wikipedia.org/wiki/DICOM
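One concrete consequence of the file format definition: a DICOM Part 10 file begins with a 128-byte preamble followed by the magic bytes “DICM”. A small Python sketch that checks for this header, demonstrated on a synthetic file:

```python
import os
import tempfile

def looks_like_dicom(path: str) -> bool:
    """Return True if the file carries the DICOM Part 10 header:
    a 128-byte preamble followed by the magic bytes b'DICM'."""
    with open(path, "rb") as f:
        f.seek(128)
        return f.read(4) == b"DICM"

# Demo on a synthetic file: zero-filled preamble plus the magic.
with tempfile.NamedTemporaryFile(suffix=".dcm", delete=False) as f:
    f.write(b"\x00" * 128 + b"DICM")
    name = f.name
print(looks_like_dicom(name))  # True
os.remove(name)
```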

Direct Radiography

This is known as Direct Radiography (DR) or Direct Digital Radiography (DDR).

There are two major variants of digital image capture devices: flat panel detectors (FPDs) and CCD detectors.

Flat Panel Detectors (FPDs) are further classified in two main categories:

1. Indirect FPDs. Amorphous silicon (a-Si) is the most common material of commercial FPDs. Combining a-Si detectors with a scintillator in the detector’s outer layer, which is made from caesium iodide (CsI) or gadolinium oxysulfide (Gd2O2S), converts X-rays to light. Because of this conversion the a-Si detector is considered an indirect imaging device. The light is channeled through the a-Si photodiode layer where it is converted to a digital output signal. The digital signal is then read out by thin film transistors (TFTs) or fiber-coupled CCDs. The image data file is sent to a computer for display.

2. Direct FPDs. Amorphous selenium (a-Se) FPDs are known as “direct” detectors because X-ray photons are converted directly into charge. The outer layer of the flat panel in this design is typically a high-voltage bias electrode. X-ray photons create electron-hole pairs in a-Se, and the transit of these electrons and holes depends on the potential of the bias voltage charge. As the holes are replaced with electrons, the resultant charge pattern in the selenium layer is read out by a TFT array, active matrix array, electrometer probes or microplasma line addressing.

Charge Coupled Detectors (CCD)

The design of a charge-coupled device (CCD)-based DR system is straightforward. The detector comprises a large FOV (e.g., 43 cm by 43 cm) scintillator that converts absorbed X-ray energy into light. It also includes an optical lens assembly to focus the light onto the photosensitive CCD array, and a CCD camera to integrate, scan and output the corresponding light image. While there were initially several configurations in early systems, today’s CCD-based detector typically comprises a single-compound optical lens and a high-resolution CCD camera of 9 million pixels (3000 × 3000 pixels) to 16 million pixels (4000 × 4000 pixels) or greater. When referred back to the image plane, this results in image pixel sizes of ~0.10 to ~0.14 mm (Figure 2). The photosensitive area of the CCD chip is actually quite small, on the order of 2.5 cm × 2.5 cm to 4.0 cm × 4.0 cm, which is required to maintain extremely high charge-coupling efficiency and low-noise operation during the readout of the image. Thus, a large optical demagnification is necessary to focus the full FOV light image onto the CCD sensor. One physical difficulty is the inefficiency of light collection caused by the dispersed light emission from the phosphor: only a small fraction can be focused onto the CCD, potentially reducing the statistical integrity of information carried by the X-ray photons and increasing overall noise in the image. This is determined by the demagnification factor, conversion efficiency, luminance and directionality of the light emission. A non-structured phosphor such as gadolinium oxysulfide has high light dispersion and a correspondingly low fraction of light that can be focused on the CCD, while a structured phosphor such as cesium iodide (CsI) produces a more forward-directed light output, so that the lens light-collection efficiency, and thus the SNR in the output image, is better for a given incident X-ray exposure.
Newer, advanced CCD systems with a CsI phosphor have proven to be reasonably efficient, particularly when using higher kilovolt peak (kVp) techniques that produce more light photons per absorbed X-ray photon. One minor disadvantage in some positioning situations is the relatively large and bulky enclosure of a CCD-based DR system, necessitated by placing the CCD out of the direct X-ray beam and using mirror optics to reflect the light to the photosensor array.
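The pixel-size figures above follow directly from dividing the field of view by the pixel count, and the demagnification factor from the ratio of FOV to chip size. A quick check in Python (the 3 cm chip side is taken from the range quoted above):

```python
# Pixel size at the image plane and optical demagnification for a
# CCD-based DR detector, using the figures from the passage above.
fov_mm = 430.0  # 43 cm field of view
ccd_mm = 30.0   # ~3 cm photosensitive CCD side (from the quoted range)

for pixels in (3000, 4000):
    # Prints roughly 0.14 mm (3000 px) and 0.11 mm (4000 px) per pixel.
    print(f"{pixels} px -> {fov_mm / pixels:.4f} mm per pixel")

print(f"demagnification ~ {fov_mm / ccd_mm:.1f}:1")  # ~ 14.3:1
```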

Linear CCD arrays optically coupled to a scintillator by fiberoptic channel plates (often with a demagnification taper of 2:1 to 3:1) are used in slot-scan geometries (Figure 3). A significant advantage is pre- and postpatient collimation that limits X-ray scatter and allows grid-free operation with equivalent image quality (in terms of SNR) of a large area FOV at 2 to 4 times less patient dose. Disadvantages include the extended exposure time required for image acquisition with potential motion artifacts and reduced X-ray tube efficiency. Nevertheless, imaging systems based on slot-scan acquisition have provided excellent clinical results for dedicated chest and full-body trauma imaging.

Source and more information: http://www.appliedradiology.com/Article.aspx?id=11045

Gadolinium oxysulfide

Gadolinium oxysulfide (Gd2O2S), also called gadolinium sulfoxylate, GOS or Gadox, is an inorganic compound, a mixed oxide-sulfide of gadolinium. Its CAS number is [12339-07-0].

Uses

The main use of gadolinium oxysulfide is in ceramic scintillators. Scintillators are used in radiation detectors for medical diagnostics. The scintillator is the primary radiation sensor that emits light when struck by high-energy photons. Gd2O2S-based ceramics exhibit final densities of 99.7% to 99.99% of the theoretical density (7.32 g/cm3) and an average grain size ranging from 5 micrometers to 50 micrometers, depending on the fabrication procedure.[1] Two powder preparation routes have been successful for synthesizing Gd2O2S: Pr, Ce, F powder complexes for the ceramic scintillators: the halide flux method and the sulfite precipitation method. The scintillation properties of Gd2O2S: Pr, Ce, F complexes demonstrate that this scintillator is promising for imaging applications. There are two main disadvantages to this scintillator: one is the hexagonal crystal structure, which allows only optical translucency and low external light collection at the photodiode; the other is the high X-ray damage to the sample.[2]

Terbium-activated gadolinium oxysulfide is frequently used as a scintillator for x-ray imaging. It emits wavelengths between 382-622 nm, though the primary emission peak is at 545 nm. It is also used as a green phosphor in projection CRTs, though its drawback is marked lowering of efficiency at higher temperatures.[1] Variants include, for example, using praseodymium instead of terbium (CAS registry number [68609-42-7], EINECS number 271-826-9), or using a mixture of dysprosium and terbium for doping (CAS number [68609-40-5], EINECS number 271-824-8).

Gadolinium oxysulfide is a promising luminescent host material because of its high density (7.32 g/cm3) and the high effective atomic number of Gd. These characteristics lead to a high stopping power for X-ray radiation. Several synthesis routes have been developed for processing Gd2O2S phosphors, including: the solid state reaction method, reduction method, combustion synthesis method, emulsion liquid membrane method, and gas sulfuration method. The solid state reaction and reduction methods are most commonly used because of their high reliability, low cost, and high luminescent properties. (Gd0.99, Pr0.01)2O2S sub-microphosphors synthesized by the homogeneous precipitation method are very promising as a new green-emitting material for the high-resolution digital X-ray imaging field.[3] Gadolinium oxysulfide powder phosphors are intensively used for conversion of X-rays to visible light in medical X-ray imaging. Gd2O2S: Pr based solid state X-ray detectors have been successfully reintroduced to X-ray sampling in medical computed tomography (imaging by sections or sectioning, through the use of any kind of penetrating wave).

Crystal Structure

The crystal structure of gadolinium oxysulfide has trigonal symmetry, with one formula unit per unit cell. Each gadolinium ion is coordinated by four oxygen atoms and three sulfur atoms in a non-inversion-symmetric arrangement. The Gd2O2S structure is a sulfur layer with double layers of gadolinium and oxygen in between.[4]

Source and more information: http://en.wikipedia.org/wiki/Gadolinium_oxysulfide

Fundus Photography

Fundus Photography involves capturing a photograph of the back of the eye, i.e. the fundus. Specialized fundus cameras, consisting of an intricate microscope attached to a flash-enabled camera, are used in fundus photography. The main structures that can be visualized on a fundus photo are the central and peripheral retina, the optic disc and the macula. Fundus photography can be performed with coloured filters, or with specialized dyes including fluorescein and indocyanine green.

The models and technology of fundus photography have advanced and evolved rapidly over the last century. Since the equipment is sophisticated and challenging to manufacture to clinical standards, only a few manufacturers/brands are available in the market. For more information on our products, click here for Medical Fundus Camera, or here for Veterinary Fundus Camera.

Source and more information: https://en.wikipedia.org/wiki/Fundus_photography

Imaging Plate (IP)

The Computed Radiography (CR) imaging plate (IP) contains photostimulable storage phosphors, which store the radiation level received at each point in local electron energies. When the plate is put through the scanner, the scanning laser beam causes the electrons to relax to lower energy levels (photostimulated luminescence), emitting light that is detected by a photomultiplier tube and converted to an electronic signal. The electronic signal is then converted to discrete (digital) values and placed into the image processor pixel map.
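That final step, turning the continuous photomultiplier signal into discrete pixel values, can be sketched as a simple quantization. The 12-bit depth below is an illustrative assumption; real readers vary in bit depth and signal processing:

```python
def quantize(signal: float, max_signal: float, bits: int = 12) -> int:
    """Map a continuous photomultiplier reading onto a discrete pixel
    value, as the CR reader's digitizer does. The 12-bit default is
    an illustrative assumption, not a fixed property of CR readers."""
    levels = (1 << bits) - 1          # 4095 distinct values for 12 bits
    clipped = min(max(signal, 0.0), max_signal)
    return round(clipped / max_signal * levels)

print(quantize(0.0, 1.0))   # 0: no luminescence
print(quantize(1.0, 1.0))   # 4095: full scale
```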

Imaging plates can theoretically be re-used thousands of times if they are handled carefully. IP handling under industrial conditions, however, may result in damage after a few hundred uses. An image can be erased by simply exposing the plate to a room-level fluorescent light. Most laser scanners automatically erase the image plate after laser scanning is complete. The imaging plate can then be re-used. Reusable phosphor plates are environmentally safe but need to be disposed of according to local regulations.

They are generally stored inside a cassette or cassette shell for use and storage. See “Cassette” and “Computed Radiography”.

IP Address

An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication.[1] An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: “A name indicates what we seek. An address indicates where it is. A route indicates how to get there.”

Source and more information: http://en.wikipedia.org/wiki/IP_address
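Python’s standard ipaddress module makes the two functions, identification and location, easy to see; the address and network below are illustrative:

```python
import ipaddress

# Parse an illustrative IPv4 address together with its subnet prefix.
iface = ipaddress.ip_interface("192.168.1.50/24")

print(iface.ip)                    # 192.168.1.50 (identifies the host)
print(iface.network)               # 192.168.1.0/24 (locates its network)
print(iface.ip in iface.network)   # True
```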

JPEG

JPEG (JAY-peg) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.

JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished, and are simply called JPEG.

The term “JPEG” is an acronym for the Joint Photographic Experts Group, which created the standard. The MIME media type for JPEG is image/jpeg, except in older Internet Explorer versions, which provide a MIME type of image/pjpeg when uploading JPEG images.[2] JPEG files usually have a filename extension of .jpg or .jpeg.

JPEG/JFIF supports a maximum image size of 65535×65535 pixels,[3] hence up to 4 gigapixels (for an aspect ratio of 1:1).

Source and more information: https://en.wikipedia.org/wiki/JPEG
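JPEG files are also easy to recognise by their markers: every JPEG stream opens with the SOI marker (FF D8) and closes with the EOI marker (FF D9). A small Python check, demonstrated on synthetic byte strings:

```python
def looks_like_jpeg(data: bytes) -> bool:
    """JPEG streams open with the SOI marker (FF D8) and close with
    the EOI marker (FF D9)."""
    return data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

# Synthetic payloads for illustration:
print(looks_like_jpeg(b"\xff\xd8" + b"\x00" * 10 + b"\xff\xd9"))  # True
print(looks_like_jpeg(b"DICM"))                                   # False
```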

Modality

Imaging Modalities are the sources of acquisition of digital imaging data. They can be devices including ultrasound (US), magnetic resonance (MR), positron emission tomography (PET), computed tomography (CT), endoscopy (ES), mammography (MG), digital radiography (DR), computed radiography (CR), ophthalmology, etc.

MPEG

The Moving Picture Experts Group (MPEG) is a working group of authorities that was formed by ISO and IEC to set standards for audio and video compression and transmission.[1] It was established in 1988 by the initiative of Hiroshi Yasuda (Nippon Telegraph and Telephone) and Leonardo Chiariglione,[2] group Chair since its inception. The first MPEG meeting was in May 1988 in Ottawa, Canada.[3][4][5] As of late 2005, MPEG has grown to include approximately 350 members per meeting from various industries, universities, and research institutions. MPEG’s official designation is ISO/IEC JTC 1/SC 29/WG 11 – Coding of moving pictures and audio (ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 11).

Source and more information: https://en.wikipedia.org/wiki/Moving_Picture_Experts_Group

PACS

Picture Archiving and Communication System (PACS) is a medical imaging technology which provides economical storage of, and convenient access to, images from multiple modalities (source machine types).[1] Electronic images and reports are transmitted digitally via PACS; this eliminates the need to manually file, retrieve, or transport film jackets. The universal format for PACS image storage and transfer is DICOM (Digital Imaging and Communications in Medicine). Non-image data, such as scanned documents, may be incorporated using consumer industry standard formats like PDF (Portable Document Format), once encapsulated in DICOM. A PACS consists of four major components: The imaging modalities such as X-ray plain film (PF), computed tomography (CT) and magnetic resonance imaging (MRI), a secured network for the transmission of patient information, workstations for interpreting and reviewing images, and archives for the storage and retrieval of images and reports. Combined with available and emerging web technology, PACS has the ability to deliver timely and efficient access to images, interpretations, and related data. PACS breaks down the physical and time barriers associated with traditional film-based image retrieval, distribution, and display.

Source and more information: http://en.wikipedia.org/wiki/Picture_archiving_and_communication_system

Pixel

In digital imaging, a pixel, or pel, (picture element) is a physical point in a raster image, or the smallest addressable element in a display device; so it is the smallest controllable element of a picture represented on the screen. The address of a pixel corresponds to its physical coordinates. LCD pixels are manufactured in a two-dimensional grid, and are often represented using dots or squares, but CRT pixels correspond to their timing mechanisms and sweep rates.

Source and more information: http://en.wikipedia.org/wiki/Pixel
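For a row-major raster buffer, the mapping from 2-D pixel coordinates to a memory offset is just y × width + x. A one-line Python illustration (the image width is hypothetical):

```python
def pixel_index(x: int, y: int, width: int) -> int:
    """Flatten 2-D pixel coordinates into the offset used in a
    row-major raster buffer: offset = y * width + x."""
    return y * width + x

# In a hypothetical 640-pixel-wide image, the pixel at (x=10, y=2):
print(pixel_index(10, 2, 640))  # 1290
```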

Port Number

In computer networking, a port is an application-specific or process-specific software construct serving as a communications endpoint in a computer’s host operating system. A port is associated with an IP address of the host, as well as the type of protocol used for communication. The purpose of ports is to uniquely identify different applications or processes running on a single computer and thereby enable them to share a single physical connection to a packet-switched network like the Internet.

See Wikipedia for more information.
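A short Python sketch of the (IP address, port) pairing: binding to port 0 asks the operating system to pick any free port on the local machine, and the socket’s name is exactly that address-plus-port endpoint:

```python
import socket

# An endpoint is the pair (IP address, port). Bind a listening socket
# to port 0 and let the OS assign a free port number.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))
    host, port = s.getsockname()
    print(host)               # 127.0.0.1
    print(0 < port <= 65535)  # True: port numbers are 16-bit
```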

Veterinary Practice Management Software

Veterinary Practice Management Software (VPMS) is a management system specific to the veterinary market, taking into account the needs of vets, their clients and the animals themselves. In some ways it needs to be more sophisticated than its human equivalent, as it has to deal with multiple breeds and species and their nuances, and veterinarians have a broad range of skill sets covering the many facets of treating animals.

Universal Serial Bus (USB)

Universal Serial Bus is an industry standard developed in the mid-1990s that defines the cables, connectors and communications protocols used in a bus for connection, communication and power supply between computers and electronic devices.

USB was designed to standardize the connection of computer peripherals (including keyboards, pointing devices, digital cameras, printers, portable media players, disk drives and network adapters) to personal computers, both to communicate and to supply electric power. It has become commonplace on other devices, such as smartphones, PDAs and video game consoles. USB has effectively replaced a variety of earlier interfaces, such as serial and parallel ports, as well as separate power chargers for portable devices.

As of 2008, approximately 6 billion USB ports and interfaces were in the global marketplace, and about 2 billion were being sold each year.

Source and more information: http://en.wikipedia.org/wiki/USB

Uninterruptible Power Supply (UPS)

An Uninterruptible Power Supply, also uninterruptible power source, UPS or battery/flywheel backup, is an electrical apparatus that provides emergency power to a load when the input power source, typically mains power, fails. A UPS differs from an auxiliary or emergency power system or standby generator in that it will provide near-instantaneous protection from input power interruptions, by supplying energy stored in batteries or a flywheel. The on-battery runtime of most uninterruptible power sources is relatively short (only a few minutes) but sufficient to start a standby power source or properly shut down the protected equipment.

A UPS is typically used to protect computers, data centres, telecommunication equipment or other electrical equipment where an unexpected power disruption could cause injuries, fatalities, serious business disruption or data loss. UPS units range in size from units designed to protect a single computer without a video monitor (around 200 VA rating) to large units powering entire data centres or buildings.

Source and more information: http://en.wikipedia.org/wiki/Uninterruptible_power_supply