CD requirements for advanced photomasks are becoming very demanding at the 100 nm node and below; the ITRS roadmap requires CD uniformities below 10 nm for the most critical layers. To reach this goal, statistical as well as systematic CD contributions must be minimized. Here, we focus on the reduction of systematic CD variations across the mask that may be caused by process effects, e.g. dry etch loading. We address this topic by compensating such effects via design data correction, analogous to proximity correction. Dry etch loading is modeled by Gaussian convolution of pattern densities, and the correction is applied geometrically by edge shifting. Since the effect amplitude is on the order of 10 nm, this is only feasible on e-beam writers with small address grids, which avoid large CD steps in the corrected design data. We present modeling and correction results for special mask patterns with very strong pattern density variations, showing that the compensation method reduces CD non-uniformity by 50-70%, depending on pattern details. The data correction itself is performed with a new module developed specifically to compensate long-range effects, which fits naturally into the common data flow environment.
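For illustration, here is a minimal Python sketch of the loading model and correction described above: the pattern density map is smoothed with a Gaussian kernel and each edge is shifted by half the predicted CD error. The function names and all parameter values (pixel size, effect range, amplitude) are hypothetical placeholders, not figures from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def etch_loading_cd_error(density_map, pixel_um, sigma_um, amplitude_nm):
    # density_map:  2D array of local pattern density in [0, 1]
    # sigma_um:     assumed range of the dry etch loading effect
    # amplitude_nm: assumed CD change per unit of smoothed density
    smoothed = gaussian_filter(density_map, sigma=sigma_um / pixel_um)
    return amplitude_nm * smoothed

def edge_shift_nm(cd_error_nm):
    # Geometric correction: a CD is bounded by two opposing edges,
    # so each edge moves by half the predicted CD error.
    return -cd_error_nm / 2.0

# Example: a dense block embedded in a sparse background.
density = np.zeros((512, 512))
density[200:300, 200:300] = 0.8
cd_err = etch_loading_cd_error(density, pixel_um=10.0, sigma_um=500.0, amplitude_nm=10.0)
print(f"max predicted CD error: {cd_err.max():.2f} nm")

In practice the resulting edge shifts must be snapped to the writer's address grid, which is why a small address grid is needed to keep quantization-induced CD steps small.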
Over the past decades, data file sizes and the computing power available for mask data preparation grew in step with each other, following Moore's law. Within the last two years, however, the balance between rising data complexity and computing equipment became unstable due to the massive introduction of OPC and the broad rollout of complex variable shaped beam (VSB) data formats. This imbalance led to exploding data conversion times (exceeding 100 hours for a single layer), accompanied by rapidly escalating data volumes. A very promising way out of this dilemma is the recently announced introduction of distributed job processing within the mask data processing flow, initially introduced to fracture flat jobs. Building on our first promising results last year, we have now implemented a fully automated design flow with an integrated Linux-based cluster for distributed processing. The cluster solution runs in an automated environment in coexistence with our conventional SUN servers. We implemented a highly reliable, large-scale data preparation flow that has become as stable as our former Solaris-based SUN system; in the meantime we have reached a job first-time success rate exceeding 99%. Having reached this very stable state, we recently started to extend our flat processing conversion steps by investigating hierarchical distributed processing in CATS version 23. We also report benchmark results comparing promising new hardware configurations to further improve cluster performance.
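As a simplified illustration of the distributed flow, the sketch below fans independent flat fracture jobs out to worker processes. The real environment dispatches CATS jobs to Linux cluster nodes through an automated queue; the tool invocation and file names shown here are hypothetical stand-ins.

import subprocess
from concurrent.futures import ProcessPoolExecutor

def fracture_tile(tile_file):
    # Placeholder for one flat fracture job; on the cluster each such
    # job runs on its own node under the job scheduler.
    subprocess.run(["fracture_tool", tile_file], check=True)
    return tile_file + ".fractured"

tile_files = [f"layer_tile_{i:03d}.gds" for i in range(64)]
with ProcessPoolExecutor(max_workers=16) as pool:
    fractured = list(pool.map(fracture_tile, tile_files))
# The fractured tiles are subsequently merged into the final writer file.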
Raster scan pattern generators have been used in the photomask industry for many years. Methods and software tools for data preparation for these pattern generators are well established and have been integrated into design flows with a high degree of automation. However, growing requirements for pattern fidelity have led to the introduction of 50 kV variable shaped beam pattern generators. Due to their different writing strategy, these tools use proprietary data formats and in turn require optimized data preparation, so the existing design flow has to be adapted to these requirements. Because cycle times have grown considerably over the last years, automating this adapted design flow not only enhances design flow quality by avoiding errors during manual operations but also helps to reduce turn-around times. We developed and implemented an automated design flow for a variable shaped beam pattern generator that had to fulfill two conflicting requirements: well-established automated tools originally developed for raster scan pattern generators had to be retained with only slight modifications, to avoid reimplementation and the concurrent use of two systems, while on the other hand data generation, especially during fracturing, had to be optimized for the variable shaped beam pattern generator.
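A schematic sketch of how such a dual-target flow can keep its interface stable while swapping only the fracture backend is shown below; the tool name, format identifiers, and options are hypothetical, not the actual commands of the flow.

def fracture_command(layout_file, writer):
    # The surrounding automated flow calls this step identically for
    # both writer types; only the backend invocation differs.
    if writer == "raster":
        return ["fracture_tool", "-format", "mebes", layout_file]
    if writer == "vsb":
        # VSB-specific optimization, e.g. shot-count-aware fracturing
        return ["fracture_tool", "-format", "vsb", "-opt-shots", layout_file]
    raise ValueError(f"unknown writer type: {writer}")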
Mask data preparation has become a major concern in the supply chain from design to the fab. The mask industry is facing a number of problems induced by the massive introduction of OPC. The exponentially escalating data volume has come into focus because the data-prep infrastructure fails to keep up with this development. As a consequence, the turn-around time from tape-out to 'ready to write' data has risen steadily, from a couple of hours two years ago to many days or weeks today. Computation times of many days on modern workstations are no longer an exception. Especially when mask data have to be converted to another writing tool's data format, or when a mask manufacturing process has to be adapted by modifying the data bias, we encountered computing times of more than one hundred hours for a single layer. We found a way to reduce these computation times by a factor of more than 100 by introducing distributed computing on a Linux-based cluster.
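The order of magnitude of such a speedup can be sanity-checked with Amdahl's law; the node counts and serial fractions below are illustrative assumptions, not figures from the paper.

def amdahl_speedup(n_nodes, serial_fraction):
    # Achievable speedup when a serial_fraction of the work (I/O,
    # job setup, final merge) cannot be parallelized.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_nodes)

print(amdahl_speedup(150, 0.005))  # ~86x
print(amdahl_speedup(200, 0.002))  # ~143x

A speedup beyond 100x therefore requires both a sufficiently large cluster and a very small serial remainder in the conversion flow.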
As optical lithography approaches its physical resolution limits for a given wavelength, data complexity on certain layers of chip layouts increases while feature sizes decrease. This becomes even more apparent when optical enhancement techniques are introduced. At the same time, the increasingly complex procedures needed to fracture mask data from a DRC-clean chip GDS2 file require checks on the mask data regarding integrity as well as mask manufacturability and inspectability. To avoid expensive redesigns and long mask house cycle times, it is important to find shortcomings before the data are submitted to the mask house. To address this situation, a (mask) Manufacturing Rule Check (MRC) can be introduced. Aggressive Optical Proximity Correction (OPC) is a special challenge for mask making. Recently, equipment vendors have implemented special algorithms for mask inspection of OPC assist features; structures smaller than two inspection pixels, such as assist structures, can be successfully inspected with certain of these algorithms. The impact of those algorithms on mask pattern requirements and suitable MRC adaptations is discussed in the present paper.
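One of the simplest MRC checks implied above can be sketched as follows: flag features whose smaller dimension falls below two inspection pixels, so that they can be routed to the special inspection algorithms. The rectangle-based geometry and the pixel size are simplifying assumptions for illustration.

def mrc_sub_two_pixel_features(rects, inspection_pixel_nm):
    # rects: list of (x0, y0, x1, y1) tuples in mask-scale nanometers
    limit = 2.0 * inspection_pixel_nm
    return [r for r in rects if min(r[2] - r[0], r[3] - r[1]) < limit]

# Example: a 90 nm-wide assist bar vs. an assumed 72 nm inspection pixel.
features = [(0, 0, 90, 2000), (0, 0, 400, 400)]
print(mrc_sub_two_pixel_features(features, inspection_pixel_nm=72))
# -> [(0, 0, 90, 2000)]  the assist bar needs the special algorithm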