Recently, the JPEG2000 committee (ISO/IEC JTC1/SC29/WG1) decided to start a new standardization activity to support the encoding of volumetric and floating-point data sets: Part 10 - Coding Volumetric and Floating-point Data (JP3D). This future standard will support functionalities such as resolution and quality scalability and region-of-interest coding, while exploiting the redundancy in the additional third dimension to improve rate-distortion performance. In this paper, we give an overview of the markets and application areas targeted by JP3D, the imposed requirements, and the considered algorithms, with a specific focus on the realization of the region-of-interest functionality.
The recent JPEG2000 image coding standard includes a lossless coding mode based on reversible integer-to-integer filter banks, which are constructed by inserting rounding operations into the filter bank lifting factorisation. The baseline (Part 1) of the JPEG2000 standard supports a single reversible filter bank, whose finite-length input is symmetrically extended to avoid difficulties at the boundaries. While designing support for arbitrary filter banks for Part 2 of the standard, we discovered that reversibility is not always possible for even-length integer-to-integer filter banks combined with symmetric pre-extension.
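To illustrate the rounding-based lifting construction described above, the following sketch implements one level of the reversible integer-to-integer 5/3 lifting transform (the single reversible filter bank of Part 1) together with whole-sample symmetric extension. This is our own Python, not the normative JPEG2000 pseudocode; the helper names and the test signal are invented for the example.

    # Minimal sketch of a reversible integer-to-integer lifting step: the 5/3
    # filter bank, made lossless by rounding each lifting update with integer
    # floor division (assumed example, not the standard's normative code).

    def ext(x, i):
        # Whole-sample symmetric extension: reflect out-of-range indices.
        n = len(x)
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * (n - 1) - i
        return x[i]

    def forward_53(x):
        # One decomposition level on an even-length integer signal.
        half = len(x) // 2
        d = [x[2*k + 1] - (ext(x, 2*k) + ext(x, 2*k + 2)) // 2      # predict (rounded)
             for k in range(half)]
        s = [x[2*k] + ((d[k - 1] if k else d[0]) + d[k] + 2) // 4   # update (rounded)
             for k in range(half)]
        return s, d

    def inverse_53(s, d):
        # Undo the update step, then the predict step, in exact integer arithmetic.
        x = [0] * (2 * len(s))
        for k in range(len(s)):
            x[2*k] = s[k] - ((d[k - 1] if k else d[0]) + d[k] + 2) // 4
        for k in range(len(d)):
            right = x[2*k + 2] if 2*k + 2 < len(x) else x[2*k]      # boundary reflection
            x[2*k + 1] = d[k] + (x[2*k] + right) // 2
        return x

    signal = [10, 12, 14, 11, 9, 8, 8, 15]
    assert inverse_53(*forward_53(signal)) == signal  # exact (lossless) reconstruction

Because every rounding in the inverse repeats the corresponding rounding of the forward pass on the same integers, the round trip is exact, which is the sense in which the transform is reversible.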
A new set of boundary-handling algorithms has been developed for discrete wavelet transforms in the ISO/IEC JPEG-2000 Still Image Coding Standard. Two polyphase component extrapolation policies are specified: a constant extension policy and a symmetric extension policy. Neither policy requires any computations to generate the extrapolation. The constant extension policy is a low-complexity option that buffers just one sample from each end of the input being extrapolated. The symmetric extension policy has slightly higher memory and conditional-logic requirements but is mathematically equivalent to whole-sample symmetric pre-extension when used with whole-sample symmetric filter banks. Both policies can be employed with arbitrary lifted filter banks, and both policies preserve resolution scalability and reversibility. These extension policies will appear in Annex H, "Transformation of images using arbitrary wavelet transformations," in Part 2 ("Extensions") of the JPEG-2000 standard.
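The two extrapolation ideas can be pictured as index maps on a finite-length polyphase component. The Python sketch below is informal and illustrative only, not the normative Annex H procedure; the function names and the test component are assumptions made for the example.

    # Constant vs. whole-sample symmetric extrapolation of x[0..n-1] (illustrative sketch).

    def constant_ext(x, i):
        # Constant policy: replicate the end sample nearest to index i.
        return x[0] if i < 0 else (x[-1] if i >= len(x) else x[i])

    def symmetric_ext(x, i):
        # Whole-sample symmetric policy: reflect index i about the two ends.
        n = len(x)
        period = 2 * (n - 1) if n > 1 else 1
        i = abs(i) % period
        return x[i] if i < n else x[period - i]

    component = [3, 7, 2, 9]
    print([constant_ext(component, i) for i in range(-2, 6)])   # [3, 3, 3, 7, 2, 9, 9, 9]
    print([symmetric_ext(component, i) for i in range(-2, 6)])  # [2, 7, 3, 7, 2, 9, 2, 7]

The constant map only ever needs the two end samples, which is why that policy can buffer a single sample per end, while the symmetric map needs the reflection bookkeeping but reproduces whole-sample symmetric pre-extension.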
Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyperspectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
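The quantization step mentioned above can be sketched as a dead-zone uniform scalar quantizer applied per subband. The Python below is a minimal sketch of that generic technique; the bin width Q, dead-zone width Z, and reconstruction offset C are illustrative placeholders chosen for the example, not values taken from the FBI specification.

    import math

    # Dead-zone uniform scalar quantizer/dequantizer (illustrative sketch).

    def quantize(w, Q, Z):
        # Map a wavelet coefficient w to a signed integer bin index.
        if w > Z / 2:
            return int(math.floor((w - Z / 2) / Q)) + 1
        if w < -Z / 2:
            return int(math.ceil((w + Z / 2) / Q)) - 1
        return 0  # coefficients inside the dead zone quantize to zero

    def dequantize(p, Q, Z, C=0.44):
        # Reconstruct a coefficient; C places it fractionally within its bin.
        if p > 0:
            return (p - C) * Q + Z / 2
        if p < 0:
            return (p + C) * Q - Z / 2
        return 0.0

    coefficients = [12.7, -0.4, 3.1, -8.9]
    indices = [quantize(w, Q=2.0, Z=2.0) for w in coefficients]
    print(indices)                                               # [6, 0, 2, -4]
    print([round(dequantize(p, 2.0, 2.0), 2) for p in indices])  # [12.12, 0.0, 4.12, -8.12]

In an adaptive scheme of this kind, the bin and dead-zone widths are chosen per subband to meet the target bit rate, which is how compression ratios on the order of 15:1 are reached while preserving archival image quality.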
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NIST-CSL 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.