1. INTRODUCTION

Radiance is a familiar and important concept in optical engineering; it describes the radiant flux in an optical system as a function of 3D spatial position and ray direction, but only in the limit where diffraction can be ignored. If one knows the radiance throughout a system, other familiar radiometric quantities such as irradiance, radiant intensity and flux through some area can readily be computed (see Sec. 2). Because x rays and gamma rays have very small wavelengths compared to most structures we encounter in imaging, these same definitions and properties of radiance apply with minor modifications in radiological imaging. On the other hand, diffraction often cannot be ignored at longer wavelengths, and for those cases a physical-optics counterpart of radiance, with similar conservation and propagation formulas, is available. There is even a form of radiance for the emerging field of quantum imaging. Details are given in Sec. 3. Some basic propagation and conservation laws, discussed in Sec. 4, often allow the calculation of radiance at an arbitrary point in a system and for an arbitrary ray direction if the radiance function is known just on a single plane. In imaging, these relations lead to simple ways of calculating the radiance in the image when one knows the radiance of the object, even for very complex imaging systems. Nothing we have said so far about radiance has any obvious relation to the second term in the title, photon noise, but a common problem in optical engineering is the limited number of photons that can be collected and the resulting Poisson noise in any measurement of interest, including an image. As we shall see in detail in Secs. 5 and 6, any measurement we make in optics or radiology is ultimately described by a Poisson point process (PPP), and the complete statistical properties of this PPP are described by a radiance function.
Thus the conservation and propagation rules for radiance tell us all we will ever need to know about the image statistics, at least for a single object. Image statistics for a single object are important in assessment of image quality, but a full discussion of image quality must also account for the randomness of the objects being imaged and in some cases the randomness of the imaging system itself. This line of research is summarized in Sec. 7, where the concept of the information content of a photon is introduced. A useful general reference for this paper is Barrett and Myers, Foundations of Image Science,1 especially Chapter 10, which deals with radiance, and Chapter 11, which deals with Poisson point processes.

2. RADIANCE IN GEOMETRICAL OPTICS AND RADIOLOGY

2.1. Definitions and notation

As noted in the introduction, radiance is a function of spatial position and ray or flux direction. The position of a point in space is specified by a 3D vector R from some arbitrary origin, and the flux direction is specified by a unit vector ŝ. Thus radiance is denoted L(R, ŝ). Because ŝ is a unit vector, it has only two independent components, say sx and sy; if the z axis is chosen along the normal to the reference plane defined below, the third component is given by sz = (1 − sx² − sy²)^(1/2). To define the radiance for the given point and flux direction, we construct an arbitrary plane P passing through the point. We denote the normal to this plane by the unit vector n̂. With this geometry, the radiance is defined as

L(R, ŝ) = ∂²Φ / (cos θ ∂A ∂ω),   (2.1)

where Φ is radiant flux (W), dω is the differential solid angle of a cone of flux directions from a point on the plane, and dA is the differential area on the reference plane P (see Fig. 1). The cos θ in (2.1) is defined through the scalar product between the flux direction ŝ and the unit normal n̂: cos θ = ŝ · n̂.

Figure 1. Geometry for specifying radiance on a reference plane.
The 3D vector R specifies a point on the plane; the 3D unit vector ŝ specifies the flux direction.

The cosine can be associated with either of the partial derivatives in (2.1), leading to two alternative definitions of radiance:

L(R, ŝ) = ∂²Φ / (∂Aproj ∂ω) = ∂²Φ / (∂A ∂Ω),   (2.3)

where dAproj is the projected differential area and dΩ is the projected differential solid angle. If we again choose coordinates so that the reference plane is perpendicular to the z axis, these projections are given by dAproj = cos θ dA and dΩ = cos θ dω. The two alternative definitions in (2.3) show that radiance can be interpreted either as the flux per unit projected area per unit solid angle or as the flux per unit projected solid angle per unit actual area (on the reference plane). In both cases the word projected refers to projection onto a plane perpendicular to the flux direction through multiplication with cos θ. With either interpretation, radiance is invariant as the orientation of the reference plane is changed. Since radiant flux is measured in watts (W), the SI units of radiance are watts per square meter per steradian, which we indicate as [L] = W/(m² · sr).

2.2. Other radiometric quantities derived from radiance

Irradiance, radiant intensity and radiant flux can all be found by appropriate integration if the radiance is known on a reference plane. Irradiance (power per unit area) incident on the plane P is given by

EP(r) = ∫hemi dω cos θ L(r, ŝ) = ∫hemi dΩ L(r, ŝ).   (2.6)

The 3D observation point R is fully specified by the plane P indicated in the superscript and the 2D vector r in that plane. The two integrals in (2.6) run over hemispheres centered on the normal to the plane; note that ∫hemi dω = 2π while ∫hemi dΩ = π. Radiant exitance, denoted MP(r), has the same definition as irradiance, (2.6), but it refers to flux reflected or emitted from plane P rather than incident on it. For a planar Lambertian source or reflector, where the radiance at the plane is independent of direction ŝ, it follows from (2.6) that M = πL.
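The hemisphere integrals quoted above (∫hemi dω = 2π, ∫hemi dΩ = π) and the Lambertian relation M = πL are easy to verify numerically. A minimal sketch in Python; the radiance value L is an arbitrary illustrative number, not a quantity from the text:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

theta = np.linspace(0.0, np.pi / 2, 4001)   # polar angle from the normal

# Solid angle of a hemisphere: integral of sin(theta) dtheta dphi = 2*pi.
omega_hemi = 2 * np.pi * trapezoid(np.sin(theta), theta)

# Projected solid angle: integral of cos(theta) sin(theta) dtheta dphi = pi.
Omega_hemi = 2 * np.pi * trapezoid(np.cos(theta) * np.sin(theta), theta)

# For a Lambertian surface, L is independent of direction, so the exitance
# integral (2.6) reduces to M = L * Omega_hemi = pi * L.
L = 2.5                 # radiance, W/(m^2 sr); arbitrary example value
M = L * Omega_hemi      # radiant exitance, W/m^2
```

The same quadrature, with the cos θ factor included or omitted, reproduces both hemisphere integrals, which is the entire content of the Lambertian relation M = πL.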
Radiant intensity, used to express the directional dependence of the flux from some area A on plane P, is given by

I(ŝ) = ∫A dA cos θ L(r, ŝ).   (2.7)

The radiant flux (power) through some area A on plane P is given by

Φ = ∫A dA ∫hemi dΩ L(r, ŝ).   (2.8)

2.3. Spectral radiance

If we wish to specify the spectral dependence of the radiance (per unit wavelength), we define [cf. (2.3)]

Lλ(R, ŝ, λ) = ∂³Φ / (∂A ∂Ω ∂λ).   (2.9)

We can also define a spectral radiance per unit frequency of the radiation, ν = c/λ, denoted Lν(R, ŝ, ν), or per unit photon energy ε = hν, denoted Lε(R, ŝ, ε); each is defined by a derivative like (2.9) with ∂λ replaced by ∂ν or ∂ε.

2.4. Flux and radiance in photon units

So far radiant flux has been in SI units of watts or joules per second, but for monochromatic radiation or for the spectral radiances defined above, we can express the flux in photons per second just by dividing the flux in joules/s by the photon energy in joules, denoted εJ. Thus, for example, the spectral radiance per unit wavelength from (2.9) becomes a spectral photon radiance, given by

Lp,λ(R, ŝ, λ) = Lλ(R, ŝ, λ) / εJ.   (2.12)

The photon energy in joules is equal to the photon energy in electron volts multiplied by the charge of an electron in coulombs: εJ = (1.602 × 10⁻¹⁹ C) εeV. The word photon in this usage has no quantum-mechanical implications; it is just a way of changing the units of flux and spectral radiance.

3. RADIANCE IN PHYSICAL OPTICS AND QUANTUM OPTICS

3.1. Physical optics

Physical optics deals with coherence and diffraction. The basic difficulty in integrating radiometry with coherence theory and scalar diffraction theory is that scalar fields are complex, while radiometric quantities are real and nonnegative. For quasimonochromatic light of frequency ν (wavelength λ = c/ν), Adriaan Walther2–4 defined the generalized spectral radiance (actually a spectral radiance per unit frequency) at a point R on plane P by

Lν(R, ŝ) = (k/2π)² cos θ ∫P d²R′ ⟨e*(R − R′/2) e(R + R′/2)⟩ exp(−ik ŝ · R′),   (3.1)

where θ is the usual angle between ŝ and the normal to the plane, k = 2π/λ = 2πν/cm, e(R) is a scalar electric field (e.g., one component of the electric field vector), the angle brackets denote a statistical average over an ensemble of scalar fields, and R ± R′/2 are 3D position vectors specifying points in the plane.
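The Fourier-transform structure of Walther's definition is easy to explore numerically. The sketch below evaluates a 1D analog of the Wigner transform of a Gaussian Schell-model mutual intensity; the beam width, coherence length and evaluation point are illustrative assumptions, not values from the text:

```python
import numpy as np

# 1D analog of (3.1): at a point x on the plane, the generalized radiance is
# (within constants) the Fourier transform, over the separation d, of the
# mutual intensity J(x + d/2, x - d/2).  The Gaussian Schell-model parameters
# sigma (beam width) and ell (coherence length) and the evaluation point x
# are illustrative assumptions.
sigma, ell, x = 1.0, 0.5, 0.7
d = np.linspace(-8.0, 8.0, 1001)       # separation coordinate, symmetric grid
x1, x2 = x + d / 2, x - d / 2

# Gaussian Schell-model mutual intensity <e*(x2) e(x1)>.
J = np.exp(-(x1**2 + x2**2) / (4 * sigma**2)) * np.exp(-(x1 - x2)**2 / (2 * ell**2))

# Direct Fourier transform over d; u plays the role of the direction variable.
u = np.linspace(-2.0, 2.0, 201)
dd = d[1] - d[0]
W = (J[None, :] * np.exp(-2j * np.pi * np.outer(u, d))).sum(axis=1) * dd
```

Because J is real and even in the separation d, the transform W is real, as a Wigner function must be; for this particular Gaussian model it also happens to be nonnegative, although in general a Wigner function can go negative.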
The expectation in (3.1) is a familiar quantity from coherence theory, the mutual intensity, and (3.1) states that (within constants) the generalized spectral radiance is the 2D Fourier transform of the mutual intensity. Thus Walther's radiance definition makes an important connection between radiometry and coherence. Another connection, noted by Walther in his 1968 paper, is that the integral in (3.1) is a Wigner Distribution Function (WDF), which originated in quantum mechanics but is now an important tool in phase-space optics; see the book by Testorf et al.5 and the review article by Alonso.6 By construction, the WDF is real, but it can take on negative values, which of course the radiance defined by (2.1) cannot. Nevertheless, when one uses the generalized radiance to compute irradiance, radiant intensity or flux with the integral formulas from Sec. 2, the results are nonnegative.

3.2. Quantum optics

A quantum-mechanical definition of radiance can be given by a straightforward generalization of (3.1)1:

Lν(R, ŝ) = (k/2π)² cos θ ∫P d²R′ ⟨ê−(R − R′/2) ê+(R + R′/2)⟩ exp(−ik ŝ · R′),   (3.2)

where ê+ and ê− are electric field operators and the angle brackets now denote a quantum-mechanical expectation. The other radiometric quantities can then be computed just as in the classical case. An example of a quantum imaging system that can be described radiometrically by (3.2) is shown in Fig. 2. In the optical or infrared realm, the source of entangled photons in this figure can be a process called spontaneous parametric down-conversion (SPDC), but in medical imaging with positron-emission tomography (PET) it can be the two entangled gamma-ray photons produced by electron-positron annihilation.

4. TRANSPORT AND CONSERVATION OF RADIANCE

4.1. Boltzmann transport equation

The Boltzmann equation is an equation for the time derivative of the spectral photon radiance Lp,ε(R, ŝ, ε, t), or Lp,ε for short.
This derivative has contributions from the physical processes of absorption, emission, scattering and propagation of radiation, so we have

∂Lp,ε/∂t = (∂Lp,ε/∂t)abs + (∂Lp,ε/∂t)emiss + (∂Lp,ε/∂t)scat + (∂Lp,ε/∂t)prop,

where the subscripts have the obvious meanings. The four terms in this equation are derived in Sec. 10.3 of Ref. 1, and the resulting transport equation is

∂Lp,ε/∂t = −cm μtot Lp,ε − cm ŝ · ∇Lp,ε + Ξp,ε + cm ∫4π dω′ μsc(R, ŝ′ → ŝ, ε) Lp,ε(R, ŝ′, ε, t),

where cm is the speed of light in the medium, μtot is the total attenuation coefficient (with contributions from both absorption and scattering processes), Ξp,ε is the source term, and the last term accounts for radiation scattered from direction ŝ′ into direction ŝ. For a self-luminous volumetric source, as in fluorescence microscopy or nuclear medicine, the units of the source term are [Ξp,ε] = (photons/s)/(m³ · sr · eV). If the source emits isotropically, we can write

Ξp,ε(R, ŝ, ε, t) = f(R, ε, t)/4π,   (4.3)

where f(R, ε, t) is the volumetric source we wish to image. Note that the 4π in (4.3) carries the units of sr. For planar Lambertian sources (a common model in optics), the source is confined to a plane (say, z = 0), and the source term is written as

Ξp,ε(R, ŝ, ε, t) = (1/π) f(r, ε, t) δ(z).   (4.4)

Now f(r, ε, t) is the planar Lambertian source we wish to image. Note that [δ(z)] = 1/m. If the source is independent of time, or at least slowly varying, we can derive a steady-state form of the Boltzmann equation by setting ∂Lp,ε/∂t = 0; the result is

cm ŝ · ∇Lp,ε = −cm μtot Lp,ε + Ξp,ε + cm ∫4π dω′ μsc(R, ŝ′ → ŝ, ε) Lp,ε(R, ŝ′, ε).   (4.5)

If we use either of the two source models, (4.3) or (4.4), then (4.5) can be written abstractly as a linear mapping of the object f onto the radiance Lp,ε, where f is the object to be imaged; within constants, this mapping is determined by the propagation medium and/or the imaging system. If we assume there are no sources outside some volume V, then the radiance outside V is determined entirely by the sources within it: if we know the source distribution and the characteristics of the propagation medium and/or imaging system, we can calculate the spectral photon radiance at any point outside V.

4.2. Conservation of radiance along a ray in lossless media

A simple but important example of the Boltzmann formalism is to consider a volumetric, monochromatic, time-independent source f(R) inside a medium with constant speed of light cm and no absorption, scattering or other radiation sources.
With no absorption or scattering, (4.5) reduces to

ŝ · ∇Lp,ε(R, ŝ, ε) = (1/cm) Ξp,ε(R, ŝ, ε),

and we want its solution at points outside the source region. To solve this equation,1 we choose a Cartesian coordinate system such that the z axis is parallel to ŝ. Then ŝ · ∇Lp,ε = ∂Lp,ε/∂z, and the equation can be integrated along z:

Lp,ε(x, y, z, ŝ, ε) = (1/cm) ∫−∞^z dz′ Ξp,ε(x, y, z′, ŝ, ε).

If we define ℓ = z − z′ and go back to a general vector notation, we can also write this result, for the isotropic source model (4.3), as

Lp,ε(R, ŝ, ε) = [1/(4π cm)] ∫0^∞ dℓ f(R − ℓŝ).   (4.10)

As illustrated in Fig. 3, the interpretation of this equation is that Lp,ε(R, ŝ, ε) is obtained by integrating the source distribution along a line parallel to ŝ and passing through the point R. The operator that performs this integration is a special case of the x-ray transform, which maps a function to its integrals along lines.

Figure 3. Diagram showing that radiance in a homogeneous medium is a line integral of a volumetric source that does not absorb or scatter its own radiation. The cross indicates the origin of coordinates for the three 3D position vectors.

It follows immediately from (4.10) that

Lp,ε(R1, ŝ, ε) = Lp,ε(R2, ŝ, ε) = Lp,ε(R3, ŝ, ε),

where R1, R2 and R3 are three different points along the same line through the source, but all three lying outside the source region (see Fig. 3). Since this expression holds for all photon energies and all units for the flux, it is valid for the total radiance as well as the spectral photon radiance:

L(R1, ŝ) = L(R2, ŝ) = L(R3, ŝ).   (4.12)

Noting that the three radiances in (4.12) have the same angular argument ŝ, we can call this line a ray, and (4.12) shows that radiance is constant along a ray in a homogeneous, lossless medium. Alternatively, we can define a ray as the path along which the radiance is constant, a path found by solving the Boltzmann transport equation. Moreover, this same conclusion holds if we consider a lossless optical system, i.e., one with no absorption, scattering or reflection losses, and trace any ray through it as in Fig. 4. Again, radiance is constant along the ray, but now the ray direction depends on which portion of the ray we examine, so we have to write the radiance with a direction argument that changes along the ray, as in (4.13).

Figure 4. Diagram showing the invariance of radiance along a ray in a lossless optical system, as expressed by (4.13). The three points are vectors from a common origin of coordinates, as in Fig.
3, but the vectors themselves are omitted for simplicity in this figure. Note that all three points are in media of the same refractive index; otherwise L/n² would be the conserved quantity.

L(R1, ŝ1) = L(R2, ŝ2) = L(R3, ŝ3),   (4.13)

so long as all three points are in media of the same refractive index (e.g., air). If the points are in homogeneous media of different refractive indices, but again there are no losses, it can be shown that L/n² is the conserved quantity:

L(R1, ŝ1)/n1² = L(R2, ŝ2)/n2² = L(R3, ŝ3)/n3².

4.3. Conservation of radiance in a lossless paraxial imaging system

ABCD matrices are an elementary tool for analyzing paraxial optical systems. If we consider two arbitrary planes perpendicular to the optical axis in a system with rotational symmetry, the ray height x and the angle of the ray with respect to the optical axis, α, in the two planes are well known to be related by

x2 = A x1 + B α1,  α2 = C x1 + D α1.

If the system is paraxial but not rotationally symmetric, then we write the analogous relation with 2 × 2 blocks A, B, C, D acting on the 2D vectors r and s⊥:

r2 = A r1 + B s⊥1,  s⊥2 = C r1 + D s⊥1,

where s⊥ is the projection of the flux direction ŝ onto the plane (i.e., perpendicular to the optical axis). The principle of conservation of radiance in a lossless paraxial system is most easily stated if we write the radiance as L(r, s⊥). With this notation, the radiance on an output plane is related to the radiance on a parallel input plane by

Lout(r2, s⊥2) = Lin(r1, s⊥1),   (4.18)

with the input and output arguments related by the block-matrix equations above.

4.4. Transport equations for physical optics

A lesser-known application of ABCD matrices is to paraxial diffraction problems. The field propagation from input to output plane is described in the Fresnel approximation by a diffraction integral whose quadratic-phase kernel is determined by the elements of the ABCD matrix.7 Friberg8 studied the propagation of Walther's generalized radiance through systems described by ABCD matrices. With notation similar to (4.18), he showed that the generalized radiance obeys the same input-output relation: it is unchanged when its arguments are transformed by the ABCD matrix. Thus the physical-optics radiance is constant along the ray defined by geometrical optics. The same conclusion was reached by Walther (1978) by a stationary-phase approximation, valid asymptotically in the limit of zero wavelength.
Friberg's derivation does not require this limit (though it does use the Fresnel approximation), and it is valid for any state of coherence of the fields.1,6 A fully quantum-mechanical version of Fresnel diffraction theory has been given by Shapiro.9 Combining this theory with the quantum-mechanical form of the Walther generalized radiance in (3.2) leads to a transport equation for quantum radiance, at least for paraxial propagation.

5. RADIANCE DETECTORS FOR IMAGING SYSTEMS

All radiation detectors are sensitive to the radiance incident on them. In particular, if we exclude so-called thermal detectors, they all respond in some way to the spectral photon radiance defined in Sec. 2, but in the subsections below we distinguish four different classes of detectors, depending on what information they yield about the spectral photon radiance function.

5.1. Integrating detectors

A CCD or CMOS camera produces a pixellated image related to the optically induced electrical charge accumulated at each pixel. A first approximation is that the mean charge (averaged over many images of the same object) is proportional to the mean number of incident photons with energy above the bandgap energy during the exposure time. In fact, however, the mean charge produced is influenced also by the spectral, spatial, angular and temporal distribution of the flux. If the spectral photon radiance is known in the detector plane, and if the sensor array responds linearly to the radiation incident on it, the most general expression for the mean output at pixel m has the form1

ḡm = τ ∫P d²R ∫hemi dΩ ∫0^∞ dε ηm(R, ŝ, ε) Lp,ε(R, ŝ, ε),   (5.1)

where ηm(R, ŝ, ε) specifies the quantum efficiency of the detector, τ is the exposure time and R is a general 3D vector, but the spatial integral is over the two variables needed to specify points in the detector plane. Other locations for P may also be chosen. Various optical elements such as lenslets, pinholes, spectral filters, choppers or other optical modulators may be placed in the optical path between P and the detector array.
The general form of the mean CCD/CMOS output signal in this case is

ḡm = ∫P d²R ∫hemi dΩ ∫0^∞ dε ∫0^τ dt dm(R, ŝ, ε, t) Lp,ε(R, ŝ, ε, t).   (5.2)

The function dm(R, ŝ, ε, t) describes the propagation of radiance through the optical elements between plane P and the detector as well as the response of the detector to the radiance incident on it. One way to summarize (5.2) is to say that each mean pixel output is a linear functional of the radiance in plane P, where the optical elements between the reference plane and the detector give us great freedom in choosing the weighting functions dm(R, ŝ, ε, t). Cell-phone cameras and DSLRs are now available with 50 MP sensors, which means that we can measure 50 million independent linear functionals of the radiance function in an arbitrary plane. One application of (5.2), of considerable current interest, is the light-field camera, pioneered by Ren Ng10 and illustrated in Fig. 5. The microlens array makes the photosensor array sensitive to the angular dependence of the radiance and not just the irradiance, and that allows the image to be refocused on different planes after it is recorded. Plane P in (5.2) is tangent to the left face of the microlens array in Fig. 5, so the light-field camera essentially captures samples of the radiance function in that plane.

Figure 5. Schematic of a light-field camera (adapted from Ng10). The object is imaged onto the microlens array, and each microlens images the pupil of the main imaging system onto the photosensor array.

5.2. Pixellated photon-counting detectors

So-called photon-counting detectors have circuitry at each pixel to detect the presence of a photoelectric interaction in the pixel and record the number of interactions ("counts") during some exposure time for later readout. What is counted, therefore, is photoelectric interactions rather than photons; even a classical electromagnetic field induces photoelectric interactions. Nevertheless, we will continue to use common parlance and refer to a photoelectric interaction as absorption of a photon.
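A discretized sketch of the detector model just described: each mean pixel value is a linear functional of sampled radiance values, as in (5.2), and a photon-counting array returns Poisson counts with those means. All arrays here (the radiance samples and the detector weighting functions) are arbitrary illustrative quantities, not data from the text:

```python
import numpy as np

# Discretized linear-functional model of the mean detector output: the
# continuous integrals over position, direction and energy are replaced by a
# sum over K sample bins.  L and d below are arbitrary illustrative arrays.
rng = np.random.default_rng(0)
K = 500       # number of radiance samples (position/angle/energy bins)
M = 16        # number of detector pixels
tau = 0.1     # exposure time, s

L = rng.uniform(0.0, 5.0, K)         # sampled spectral photon radiance (photons/s per bin)
d = rng.uniform(0.0, 1.0, (M, K))    # detector weighting functions d_m

gbar = tau * d @ L                   # mean counts: a linear functional of L
g = rng.poisson(gbar)                # one noisy realization (integer counts)
```

Averaging many realizations of g recovers gbar, which is the sense in which the mean output, but not any single exposure, is a linear functional of the radiance.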
As a practical matter, it is easier to distinguish radiation-induced photoelectric interactions from thermally induced electrons (dark current) if some form of optical gain is used. Intensified CCDs, for example, use an optical image intensifier ahead of the CCD sensor so that a single optical photon absorbed in the photocathode of the intensifier produces a burst of light and hence multiple photons absorbed in the CCD element. Another way of incorporating gain is to consider an array of avalanche photodiodes (APDs) as the image detector. In this case an absorbed photon produces an electron-hole pair, and each charge carrier is accelerated by an applied electric field to energies greater than the bandgap energy, so it can produce additional electron-hole pairs. For x rays or gamma rays, the gain can be provided by scintillation materials for which each absorbed high-energy photon produces a burst of low-energy optical photons that can be detected by optical sensors. Perhaps surprisingly, (5.1) and (5.2) are still valid descriptions of the mean output at pixel m for arrays of photon-counting detectors. The actual outputs, gm without the bar, are integers for photon-counting arrays, but the means are still linear functionals of the radiance so long as the event rate is low enough that there is no loss of counts from temporally overlapping events (pile-up) and no overflow of the registers that store the number of counts at each pixel.

5.3. Position-sensitive photon-counting detectors

It is also possible to build photon-counting detectors without pixels. A common configuration is shown in Fig. 6, which shows a large-area silicon photodiode with strip electrodes on the four edges. Light incident in a spot, as shown, creates signals in all four electrodes, and a simple centroid calculation provides a measure of the x-y position of the spot.
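The centroid calculation for a four-electrode detector like that of Fig. 6 can be sketched in a few lines. The difference-over-sum form and the scale factor h are standard for tetra-lateral position-sensitive detectors and are assumptions here, not details from the text; the signal values are illustrative:

```python
def centroid(S_left, S_right, S_bottom, S_top, h=1.0):
    """Estimate the (x, y) spot position from the four edge-electrode signals.

    Each coordinate is the normalized difference of the opposing electrode
    signals; h is the half-width that scales the result to detector units.
    """
    x = h * (S_right - S_left) / (S_right + S_left)
    y = h * (S_top - S_bottom) / (S_top + S_bottom)
    return x, y

# Illustrative signals: more current on the right electrode, so the spot is
# displaced toward +x; the top/bottom signals balance, so y = 0.
x, y = centroid(S_left=0.3, S_right=0.7, S_bottom=0.5, S_top=0.5)
```

For the values above, x ≈ 0.4 and y = 0, i.e., the estimate moves toward whichever electrode collects the larger signal.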
The primary application of position-sensitive detectors is to track a beam or localized spot of light, but the addition of an image intensifier makes it possible for them to respond to individual photoelectric interactions. As in Sec. 5.2, a single interaction in the photocathode of the intensifier produces a burst of light and hence measurable signals at each of the four output electrodes. Thus the x-y coordinates of the individual interactions can be computed and stored in real time, as the events occur; no integration time is needed, and there are no detector pixels. Instead, the data consist of a list of the coordinates, possibly tagged with the time at which the event occurred. This form of data storage is referred to as list mode, in contrast to the pixellated or binned mode implicit in (5.1) and (5.2). List-mode data storage has received considerable attention in the radiology literature,11–13 and it is used routinely in PET.

5.4. Photon-processing detectors

A photon-absorption event in a detector can be characterized by several attributes, including the position R of the interaction, the energy ε deposited in the detector by the event, the direction ŝ of the radiant flux that produced the event and possibly the time t at which the event occurs. For the jth event (j = 1, …, J), we can organize these attributes into an attribute vector with 5–7 components (2 or 3 components for Rj, depending on the thickness of the detector; one component for the energy εj; two components for the unit vector ŝj; and possibly the time tj). We denote this attribute vector for the jth event by Aj. Of course we cannot directly measure the components of an attribute vector, but we can estimate them from suitable data sets. For example, if the position-sensitive detector of Fig. 6 is operated with an image intensifier, the pulse signals from the four electrodes can be used to estimate xj and yj, which are two of the 5–7 components of Aj.
To get sufficient information to estimate additional components, we need to add optical elements such as lenslets or color filters in front of the detector to alter its sensitivity to the spectral photon radiance [see (5.2) and Fig. 5]. The goal of a photon-processing detector is to estimate the components of each attribute vector as accurately as possible. We denote the estimated attributes with tildes; thus, if Aj = (Rj, ŝj, εj), then Ãj = (R̃j, s̃j, ε̃j). Of course these estimates will not constitute perfect measurements of the true photon attributes, but statistical estimation theory1,14,15 provides us with the tools to analyze and minimize the bias and variance of the estimates with respect to the true attributes. An efficient estimator is one that is unbiased and has the smallest possible variance, as specified by the Cramér-Rao lower bound. It is well known from statistical estimation theory that the maximum-likelihood (ML) estimator is efficient in any problem where an efficient estimator exists, and in any case the ML estimator is asymptotically efficient as more or better data are acquired about each photon-absorption event. For a review of ML estimation as applied to photon-processing detectors, see Barrett et al.16 In summary, an ideal photon-processing detector must detect each photon-absorption event individually, estimate the full attribute vector for each event by maximum likelihood, and store the resulting estimates in list mode.
Figure 7 shows a conceptual diagram of a photon-processing detector capable of operating in the visible or NIR portions of the spectrum. A practical realization of a photon-processing detector for x rays, gamma rays and charged particles is the intensified Quantum Imaging Detector (iQID) developed by Brian Miller.17,18 Computer systems for performing the ML-estimation step in real time are discussed by Hesterman et al.19

6. PROPERTIES OF IMAGE DATA FROM PHOTON-PROCESSING DETECTORS

So far we have discussed the mean values of data obtained by integrating detectors and photon-counting arrays (see Secs. 5.1 and 5.2), but we have not done the same for photon-processing detectors, and we have not discussed the statistical properties of the data in any case.

6.1. List-mode data and point processes

In practice, the estimated photon attributes will be stored in list mode, where each list entry is a K-dimensional vector, with K being the number of estimated attributes per event; K = 6 in our example where 3D position, two components of a unit vector and an energy are estimated. A mathematically equivalent representation of the same information is a random point process, a sum of delta functions defined by

u(R, ŝ, ε) = Σj δ(R − R̃j) δ(ŝ − s̃j) δ(ε − ε̃j),  j = 1, …, J.

For a discussion of the angular delta function δ(ŝ − s̃j), see Ref. 1. Because of the independence of the photon-absorption events, the number of events J is a Poisson random variable, and u(R, ŝ, ε) is called a Poisson random process. The mean of this point process is obtained by averaging over the statistics of each of the J vectors Aj and over J itself for a given object f that produced the data. The result is a function of A which depends on f, so we can denote it as ū(A|f), or more explicitly as ū(R, ŝ, ε|f). Technically, this function can be regarded as a vector ū(f) in the Hilbert space 𝕃2(ℝK), which is the space of square-integrable functions of K continuous variables.
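The list-mode point process of Sec. 6.1 is straightforward to simulate: draw a Poisson number of events, then draw each attribute vector independently from the normalized mean of the process. The attribute distributions below are stand-ins chosen only for illustration, not models from the text:

```python
import numpy as np

# Simulate a list-mode data set from a Poisson point process: the number of
# events J is Poisson, and each event's attribute vector is an independent
# draw from one fixed density.  The Gaussian/uniform attribute densities and
# all numerical values are illustrative assumptions.
rng = np.random.default_rng(1)

Jbar = 200.0                        # mean number of events in the exposure
J = int(rng.poisson(Jbar))          # actual (random) number of events

# One attribute vector per event: 2D position, 2 direction components, energy.
pos = rng.normal(0.0, 1.0, (J, 2))         # x, y on the detector plane (mm)
s_perp = rng.uniform(-0.1, 0.1, (J, 2))    # sx, sy (near-normal incidence)
energy = rng.normal(140.0, 10.0, J)        # estimated energy (keV)

# The list itself: J rows, one K=5-component attribute vector per row.
events = np.column_stack([pos, s_perp, energy])
```

Rerunning with a new seed changes both J and every row of the list, which is exactly the doubly random structure (random count, random attributes) of the point process.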
6.2. Linear imaging of emissive objects

For self-luminous objects as in astronomy, nuclear medicine or fluorescence microscopy, Caucci and Barrett13 have shown that the conditional mean of the point process is linear in the object,

ū(R, ŝ, ε | f) = [ℒf](R, ŝ, ε),   (6.2)

where ℒ is a linear operator that accounts for propagation of the radiance from the object to the detector and for the blurring introduced by the attribute-estimation errors. For comparison, all linear photon-counting imaging systems that record only the number of counts in each of M bins are described by

ḡm(f) = [ℋf]m,  m = 1, …, M,   (6.3)

where now ℋ is a linear operator mapping the object to an M-dimensional vector of mean counts.

6.3. Nonlinear imaging of transmissive objects

In many situations in optics and radiology, an image is formed by illuminating a transmissive object with a known radiation source. In these cases, the object is described by the spatial and spectral distributions of its attenuation and scattering coefficients. In terms of the Boltzmann equation (4.5), the object properties are embedded in the attenuation term cmμtotLp,ε and in the scattering term. We can still use (6.3) to describe the mean image data with a photon-counting or integrating detector, and (6.2) for a photon-processing detector, but now the operators depend on the object itself, so the mapping from object to mean data is nonlinear.

6.4. Probability density functions for photon-processing data

The full probability law for a univariate Poisson random variable is determined by its mean. The same thing is true for a Poisson random process, but now the mean is a function, ū(R, ŝ, ε), and the probability law is a probability density function or PDF. The 6D PDF for the attribute estimates is obtained by normalizing the mean:

pr(R, ŝ, ε | f) = ū(R, ŝ, ε | f) / J̄(f),   (6.4)

where J̄(f) is the mean number of events. Because the events are statistically independent (conditional on f), the overall PDF for the entire data set can be expressed as a product of 6D PDFs like those in (6.4).11,13

6.5. Computation of the conditional mean of the Poisson point process

According to (6.4), the mean of the Poisson point process tells us everything about the statistics of the process, conditional on a specific object f, but how do we compute that mean? The first approach is to recognize that, within constants, the mean ū(R, ŝ, ε|f) is just a blurred version of the spectral photon radiance in plane P. As we saw in Sec.
4.1, the spectral photon radiance is the solution of the Boltzmann transport equation, and often analytical solutions can be found; a simple example is (4.10). In many other situations no closed-form solution is available. For emissive objects, it may be possible to approximate the relevant linear operator; when analytical solutions are not readily available, we can use Monte Carlo simulation to estimate the spectral photon radiance. There is even a version of Monte Carlo simulation, called adjoint Monte Carlo,1 that allows direct estimation of the spectral photon radiance blurred by the estimation errors.

7. RELATION TO IMAGE QUALITY

7.1. Task-based assessment

In much of the medical-imaging community, and increasingly in other areas of imaging, image quality is defined in terms of the performance of some observer on some clinically or scientifically relevant task. Generically, the task can be either classification or estimation. An estimation (or quantitation) task is to determine a numerical value for one or more parameters associated with the object being imaged. Examples include cardiac ejection fraction or the concentration of a dye or contrast agent in some region of interest. In a classification task, the goal is to assign the object to one of two or more classes. A simple example is tumor detection, where the classes are tumor-present and tumor-absent. The observer for a classification task can be either a human or a mathematical algorithm. The performance of a human observer can be measured by well-established psychophysical methods and analysis of the resulting ROC (receiver operating characteristic) curves. Psychophysical studies are complicated and time-consuming, however, so there is considerable interest in developing mathematical model observers that will predict human performance. Another form of mathematical observer is the ideal observer, which makes optimal use of the available data in such a way as to maximize an ROC-based figure of merit.
If we know the performance of the ideal observer, we can compare it to the performance of a human and determine how well the human can extract the information contained in the data. Often the performance of the true ideal observer is difficult to compute, so we use the optimal linear observer, which we call the Hotelling observer. It is optimal among all observers constrained to perform only linear operations on the data. An important variant on the Hotelling observer is the channelized Hotelling observer (CHO), which is the Hotelling observer applied not to an image but to a reduced data set consisting of the sampled outputs of a small number of linear filters, called channels. An important example is spatial-frequency-selective channels such as those known to be operative in the human visual system.

7.2. Object variability

For all of these tasks and observers, task performance is limited by the randomness in the data. So far we have discussed randomness arising from photon noise in different kinds of detectors, but there is also an inevitable statistical variability in the objects being imaged,23 and in some cases the imaging system itself introduces a form of noise.24 To account for the object variability in a classification task, we have to understand the statistics of the data conditional on a class C of objects. For any data set g, including the list-mode data acquired by photon-processing detectors, it follows from elementary laws of probability that

pr(g|C) = ⟨pr(g|f)⟩f∈C,   (7.1)

where pr(g|f) is the PDF of the data given a particular object f, and the expectation is over all objects drawn from the class. To visualize this expectation and its impact on classification performance, we can imagine using an accurate simulation program to generate N sample objects from class C.
Denoting the nth sample object as fn (n = 1, …, N), we can approximate pr(g|C) by a sample average:

pr(g|C) ≈ (1/N) Σn pr(g|fn).   (7.2)

For any form of photon noise, which is the subject of this paper, the PDF pr(g|fn) is fully determined by its conditional mean ḡ(fn). For a photon-counting detector with M pixels, therefore, (6.3) and (7.2) show that

pr(g|C) ≈ (1/N) Σn Πm exp[−ḡm(fn)] [ḡm(fn)]^gm / gm!,   (7.3)

where g is a vector in an M-D space with components (g1, g2, g3, …, gM). These components are statistically independent conditional on an object f, so pr(g|fn) is the product of univariate Poisson probabilities appearing in (7.3). To display this M-D PDF, we consider two components at a time. For example, we can plot the bivariate PDF pr(gm, gm′|C) as in Fig. 8 for different total exposure times τ. The axes on these plots represent gm and gm′ (not m and m′), and the gray level is the PDF. The leftmost panel is for the largest τ, and the rightmost panel is for the smallest τ.

Figure 8. Illustration of the bivariate class PDF pr(gm, gm′|C) for a class of objects C and a photon-counting pixel array. Distributions are shown for three different values of the exposure time τ (increasing right to left). See text for explanation.

The points shown by x on all three panels correspond to the values of (ḡm, ḡm′) for individual sample objects from the class. A similar illustration holds for photon-processing detectors, but there the discrete index m is replaced by continuous indices like (R, ŝ, ε). The exposure time τ controls the total number of recorded events J in the data list for a particular object, and again the overall variance and covariance for the class PDF depend on both the exposure time and the randomness of the objects. For mathematical details, see Caucci.13

7.3. Information content of a photon

Consider an imaging problem where the task is to decide if the object that produced a particular image g was from class C1 or class C2. To perform this task, we need some knowledge of the two class PDFs, pr(g|C1) and pr(g|C2). These densities can again be approximated by expressions like (7.3) if we can accurately simulate objects from the two classes.
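The sample-average approximation of (7.2)-(7.3) can be sketched numerically: the class PDF is estimated by averaging products of univariate Poisson probabilities over simulated objects. The mean images gbar below are arbitrary illustrative arrays standing in for simulated objects, not members of any real class:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

def poisson_prob(g, mean):
    """Product of independent univariate Poisson probabilities, one per pixel."""
    g = np.asarray(g, dtype=float)
    mean = np.asarray(mean, dtype=float)
    log_p = g * np.log(mean) - mean - np.array([lgamma(gi + 1.0) for gi in g])
    return float(np.exp(log_p.sum()))

N, M = 100, 8                             # simulated objects, detector pixels
gbar = rng.uniform(5.0, 15.0, (N, M))     # mean data gbar_m(f_n) for each object

g = rng.poisson(gbar[0])                  # one observed data vector

# Sample-average approximation to the class PDF, as in (7.2)-(7.3).
pr_g_C = np.mean([poisson_prob(g, gbar[n]) for n in range(N)])
```

Working with the log of the Poisson probability before exponentiating, as above, avoids overflow of the factorials when the counts are large.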
A depiction of the data statistics like that in Fig. 8, but for two classes, is given in Fig. 9. It is visually evident that the classification task can be performed with high accuracy at large exposure time (leftmost image in the figure) but not for shorter exposures, where the photon noise dominates.

Figure 9. Illustration of the bivariate PDFs pr(gm, gm′|C1) and pr(gm, gm′|C2) for two classes and a photon-counting pixel array. Distributions are shown for three different values of the exposure time τ (increasing right to left). See text for explanation.

It is beyond the scope of this paper, but it is possible to make these statements much more precise by specifying the observer that will perform the task (e.g., human or Hotelling observer) and by adopting some measure of task performance, such as area under an ROC curve. If one then plots this performance metric as a function of the mean number of absorbed photons, one can define the information content per photon for this particular imaging system, task and observer. For a full mathematical exposition of this approach, with application to imaging with ionizing radiation, the reader can consult a forthcoming tutorial review article in Physics in Medicine and Biology.25

8. SUMMARY AND CONCLUSIONS

This paper has considered two topics that at first blush might appear to be unrelated: radiance and photon noise. In one sense the connection is elementary: all radiation detectors respond to electromagnetic radiation through photoelectric interactions, and the statistical properties of the detector output are related to the inherent Poisson randomness of these interactions. Poisson probability laws are determined by their means, and the mean output from a detector is some functional of the radiance incident on the detector.
In practical engineering problems we strive to develop highly linear detectors, so the relation between the radiance incident on the detector and the mean of the resulting Poisson process is usually also linear. In the context of imaging, the radiance incident on the detector is some other functional of the radiance incident on or emitted by the object being imaged; this functional is linear for self-luminous or reflecting objects but nonlinear for transmissive objects. In either case, the basic problem in image analysis or task-based assessment of image quality is to understand the full relationship between object properties and the Poisson statistics of the detector output. This program requires clear definitions of radiance in various circumstances; methods for transporting the radiance from object to imaging detector; mathematical expressions for the mean response of the detector to a given incident radiance; a clear statement of just what constitutes a detector output; an understanding of how the statistics of this output relate to the randomness not only of the photoelectric interactions but also of the objects being imaged; and, finally, a rigorous way of relating these statistics to image quality. This paper has attempted to survey the current state of knowledge for all steps in this chain. We started with the usual elementary definition of radiance in terms of radiant flux, in the limit of geometrical optics, and we related this radiance to other radiometric quantities, such as irradiance and radiant intensity, in the usual way. A particularly important radiometric quantity for this discussion turned out to be the spectral photon radiance, defined in (2.12). Then we explored definitions of radiance valid for physical optics and quantum optics, and we studied the transport and conservation of radiance beginning with the Boltzmann transport equation.
In particular, the crucial principle that radiance is constant along a ray in lossless optical systems was developed for both geometrical and physical optics. We discussed four generic types of radiance detector for imaging systems. In addition to the familiar distinction between integrating and photon-counting detectors, we briefly discussed position-sensitive photon-counting detectors, which are usually used for beam-tracking applications but can be used for single-photon counting and position estimation with the addition of an image intensifier or other optical gain medium. An interesting facet of this class of detectors is that the output is a list of estimated interaction positions rather than an array of pixel values. Finally, we introduced a new class of devices, called photon-processing detectors, in which multiple attributes of each photon interaction event can be estimated and stored in an expanded list. The statistical properties of the listmode data from these last two classes of detectors were analyzed by introducing a Poisson point process, defined as a sum of delta functions in photon-attribute space; the mean of this point process is the spectral photon radiance. We concluded with a brief summary of task-based assessment of image quality, with an emphasis on photon-counting and photon-processing detectors. This analysis led to a definition of the task-based information content of a single photon.

ACKNOWLEDGMENTS

We have benefited significantly from discussions with Abhinav Jha and Eric Clarkson. This research was supported by the National Institutes of Health under grants R37 EB000803 and P41 EB002035.

REFERENCES

1. Barrett, H. H. and Myers, K. J., Foundations of Image Science, Wiley, New York
(2004).
2. Walther, A., "Radiometry and coherence," J. Opt. Soc. Am. 58, 1256–1259 (1968).
3. Walther, A., "Radiometry and coherence," J. Opt. Soc. Am. 63, 1622–1623 (1973).
4. Walther, A., "Propagation of the generalized radiance through lenses," J. Opt. Soc. Am. 68, 1606–1610 (1978).
5. Testorf, M., Hennelly, B., and Ojeda-Castaneda, J., Phase-Space Optics: Fundamentals and Applications, McGraw-Hill Professional (2009).
6. Alonso, M. A., "Wigner functions in optics: describing beams as ray bundles and pulses as particle ensembles," Advances in Optics and Photonics 3(4), 272–365 (2011).
7. Siegman, A. E., Lasers, University Science Books, Mill Valley, CA (1986).
8. Friberg, A. T., "Propagation of a generalized radiance in paraxial optical systems," Appl. Opt. 30(18), 2443–2446 (1991).
9. Shapiro, J. H., "The quantum theory of optical communications," IEEE J. Sel. Top. Quantum Electron. 15(6) (2009).
10. Ng, R., Levoy, M., Bredif, M., Duval, G., Horowitz, M., and Hanrahan, P., "Light field photography with a hand-held plenoptic camera," Stanford Tech Report CTSR 2005-02 (2005).
11. Barrett, H. H., Parra, L., and White, T. A., "List-mode likelihood," J. Opt. Soc. Am. A 14, 2914–2923 (1997).
12. Parra, L. and Barrett, H. H., "List-mode likelihood: EM algorithm and noise estimation demonstrated on 2D-PET," IEEE Trans. Med. Imag. MI-17, 228–235 (1998).
13. Caucci, L. and Barrett, H. H., "Objective assessment of image quality V: photon-counting detectors and list-mode data," J. Opt. Soc. Am. A 29(6), 1003–1016 (2012).
14. Melsa, J. L. and Cohn, D. L., Decision and Estimation Theory, McGraw-Hill, New York (1978).
15. Van Trees, H. L., Detection, Estimation, and Modulation Theory, Vol. 1, Wiley, New York (1968).
16. Barrett, H. H., Hunter, W. C. J., Miller, B. W., Moore, S. K., Chen, Y., and Furenlid, L. R., "Maximum-likelihood methods for processing signals from gamma-ray detectors," plenary paper, IEEE Trans. Nucl. Sci. 56, 725–735 (2009).
17. Miller, B. W., Barber, H. B., Barrett, H. H., Liu, Z., Nagarkar, V. V., and Furenlid, L. R., "Progress in BazookaSPECT: high-resolution dynamic scintigraphy with large-area imagers," Proc. SPIE, 85080F (2012).
18. Miller, B. W., Gregory, S. J., Fuller, E. S., Barrett, H. H., Barber, H. B., and Furenlid, L. R., "The iQID camera: an ionizing-radiation quantum imaging detector," Nucl. Instrum. Methods Phys. Res. A (2014).
19. Hesterman, J. Y., Caucci, L., Kupinski, M. A., Barrett, H. H., and Furenlid, L. R., "Maximum-likelihood estimation with a contracting-grid search algorithm," IEEE Trans. Nucl. Sci. 57(3), 1077–1084 (2010).
20. Jha, A. K., Kupinski, M. A., Masumura, T., Clarkson, E., Maslov, A. A., and Barrett, H. H., "A three-dimensional Neumann-series approach to model light transport in uniform media," J. Opt. Soc. Am. A 29(8), 1741–1757 (2012).
21. Jha, A. K., Kupinski, M. A., Barrett, H. H., Clarkson, E., and Hartman, J. H., "A three-dimensional Neumann-series approach to model light transport in non-uniform media," J. Opt. Soc. Am. A 29(8), 1885–1899 (2012).
22. Jha, A. K., Clarkson, E., and Kupinski, M. A., "An ideal-observer framework to investigate signal detectability in diffuse optical imaging," Biomed. Opt. Express 4(10), 2107–2123 (2013).
23. Barrett, H. H., "Objective assessment of image quality: effects of quantum noise and object variability," J. Opt. Soc. Am. A 7, 1266–1278 (1990).
24. Barrett, H. H., Myers, K. J., Devaney, N., and Dainty, J. C., "Objective assessment of image quality: IV. Application to adaptive optics," J. Opt. Soc. Am. A 23, 3080–3105 (2006).
25. Barrett, H. H., Myers, K. J., Hoeschen, C., Kupinski, M. A., and Little, M. P., "Task-based measures of image quality and their relation to radiation dose and patient risk," invited tutorial review, Phys. Med. Biol., pending (2014).