Input resolution
Resolution, in general, is the number of smallest image points, or pixels (a word derived from "picture" and "element"), that a reading unit (for example a scanner or a camera) can detect or differentiate. The units used are generally dpi (dots per inch) or dpcm (dots per cm). The higher the resolution, the more pixels can be read.
Optical versus interpolated resolution
Optical resolution, also called physical resolution, specifies the number of lines or dots per inch (or cm) that the CCD and the lens of the scanner can actually differentiate. In practice, this is tested by checking whether two adjacent lines can still be detected as separate, individual lines. Interpolated resolution, by contrast, is a resolution calculated mathematically by hardware or software; as we'll see later, it only has meaning when reading bar codes and has nothing to do with gray scale reproduction.
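To make the distinction concrete, here is a minimal sketch of what "calculated mathematically" means: inserting computed values between optically measured ones. Real scanner firmware may use more elaborate algorithms; simple linear interpolation is assumed here purely for illustration.

```python
def interpolate_2x(line):
    """Double the resolution of a 1-D scan line by averaging neighbors.

    The inserted values are computed, not measured -- no new optical
    detail is gained, which is why interpolated resolution says nothing
    about gray scale reproduction.
    """
    out = []
    for a, b in zip(line, line[1:]):
        out.append(a)
        out.append((a + b) // 2)  # computed in-between pixel
    out.append(line[-1])
    return out

optical = [10, 50, 90]          # gray values the CCD actually measured
print(interpolate_2x(optical))  # [10, 30, 50, 70, 90]
```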
Data depth
Gray scales are extremely important in image processing: to reproduce half-tone originals, the input device must detect each pixel at a certain data depth so that the different gray scales, or tonal values, of the original can be reproduced. A good input device should be able to capture 256 or more tonal values (8 to 16 bits per pixel). We'll see why on the next page.
Necessity for more than 8 bits (256 gray scales)
When a reduced tonal value range is expanded to 256 tonal values using only 8-bit arithmetic, gaps appear in the tonal value scale: some gray scales are missing, and the clarity and sharpness of the original are lost. The same can happen when the transformation from 10 bits down to 8 is not optimized. The gaps, or "spikes", are then clearly visible in the histogram.
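The effect described above can be demonstrated in a few lines. In this sketch an original that only uses half the tonal range (0 to 127) is stretched to the full 0 to 255 scale in integer arithmetic; the stretch cannot create values that were never captured, so half of the 256 gray levels end up missing from the histogram.

```python
from collections import Counter

# A flat original occupying only half the tonal range (0..127).
reduced = list(range(128))

# Stretch to the full 0..255 range using 8-bit integer arithmetic.
stretched = [v * 255 // 127 for v in reduced]

histogram = Counter(stretched)
missing = [g for g in range(256) if g not in histogram]

print(len(set(stretched)))  # 128 distinct gray values survive...
print(len(missing))         # ...so 128 of the 256 levels are gaps
```

Scanning at a higher data depth (10 bits or more) and reducing carefully to 8 bits avoids these gaps, which is why good input devices capture more than 256 tonal values internally.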
Gray scale simulation using raster
To print gray scales, the print system uses raster technology. Since it would not be economical to print many different gray scales using many different individual inks, raster cells are formed to simulate the gray scales: the raster matrix of an image point is built up from individual printer pixels.
Raster matrix
One image point from the scanner is converted using a raster matrix (generally a 16 x 16 matrix). If a raster point is completely black, all 256 printer pixels in the raster cell are set; lighter gray values set correspondingly fewer. At a raster of 152 lpi, there are 152 raster cells per inch next to one another. The unit lpi (lines per inch) is often confused with the print resolution, which is generally specified in dpi. Printing experts in Germany usually use lpcm both for the resolution of the printer and for the raster width.
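The mapping from an 8-bit gray value to a filled raster cell can be sketched as follows. The linear mapping and the function name are assumptions for illustration; actual raster image processors use tuned dot-shape functions.

```python
def black_pixels(gray, white=255):
    """Number of black printer pixels in a 16 x 16 raster cell.

    Darker gray values fill more of the cell; gray = 0 (black)
    fills all 256 positions, gray = 255 (white) fills none.
    """
    cell_positions = 16 * 16  # 256 printer pixels per raster cell
    return round((white - gray) / white * cell_positions)

print(black_pixels(255))  # 0: a white point sets no printer pixels
print(black_pixels(0))    # 256: a black point fills the whole cell
print(black_pixels(128))  # a mid gray fills roughly half the cell
```

Because the cell has exactly 256 positions, a 16 x 16 matrix can simulate exactly the 256 gray levels of an 8-bit input pixel.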
Calculating the resolution needed for input devices
When output to an exposure unit, gray scales are converted into a 16 x 16 matrix, so one raster point ideally contains 256 individual pixels. If a half-tone original is output with a 60 lpcm raster, each gray scale pixel is converted into such a 16 x 16 matrix; since 60 lpcm corresponds to about 152 lpi, and 152 x 16 is roughly 2438 dpi, a printer with a resolution of 2540 dpi can just barely reproduce such a raster point. A 60 lpcm raster corresponds to about 150 dpi, which would theoretically also be the required input resolution.
The formula
However, since losses occur during analog-to-digital conversion (see the sampling theorem), an additional quality factor Q is introduced. It is generally assumed to be 1.5, or up to 2 in extreme cases. This yields the following formula for the ideal scan resolution:
Scan resolution = raster width x 1.5 x scaling factor
Here's an example:
The scan resolution for a 60 lpcm raster is to be calculated at a 1:1 scaling factor. Since the value for the raster is given in cm, we also have to convert centimeters into inches:
Scan resolution = 60 lpcm x 2.54 cm/inch x 1.5 x 1 = 228.6 dpi
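The worked example above can be wrapped in a small helper. The function name and parameter defaults are chosen here for illustration; Q = 1.5 and a 1:1 scaling factor match the example in the text.

```python
def scan_resolution_dpi(raster_lpcm, q=1.5, scaling=1.0):
    """Ideal scan resolution in dpi for a raster width given in lines/cm."""
    raster_lpi = raster_lpcm * 2.54  # convert lines/cm to lines/inch
    return raster_lpi * q * scaling

# 60 lpcm raster, Q = 1.5, 1:1 scaling -- the example from the text.
print(round(scan_resolution_dpi(60), 1))  # 228.6

# Enlarging the original to 200% doubles the required scan resolution.
print(round(scan_resolution_dpi(60, scaling=2.0), 1))  # 457.2
```

Note how the scaling factor enters linearly: scanning for a 2x enlargement simply requires twice the resolution of a 1:1 reproduction.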