Computational lithography, as we know it today, is the result of an evolutionary process that began in the 1970s when IBM scientists and academics from the University of California, Berkeley, developed a photoresist and optical projection simulator that eventually became SAMPLE. This early work helped us explain what we were seeing and measuring in our fabs. By the 1980s, the simulation program PROLITH was written for the personal computer and distributed freely. This gave numerous lithography practitioners easier access to simulation software that enabled not only a fundamental understanding of the process but also early methods for root-cause problem solving.

By the 1990s, numerous other programs were being written by commercial and academic entities that further enhanced our problem-solving capabilities, and researchers even started to delve into the optimization of imaging systems. During this period, the industry broke the k1 = 0.5 barrier and had to start working with nonlinear imaging processes. This necessitated using simulation and modeling programs to optimize our optical tools. For example, numerical aperture (NA) and σ optimization could first be accomplished with computational methods before attempting the more costly experimental methods, and optical effects, such as aberrations, could be routinely explored and understood before problems occurred in manufacturing. This period also saw the advent of rule-based optical proximity correction (OPC), an early use of computation to optimize the lithographic imaging process.
© 2009 Society of Photo-Optical Instrumentation Engineers

Donis Flagello and Chris Mack, "Guest Editorial: Computational Lithography," J. Micro/Nanolith. MEMS MOEMS 8(3), 031401 (September 18, 2009); http://dx.doi.org/10.1117/1.3240492