Design Rule Checking
CONTENTS
20.1 Introduction
20.2 Concepts
20.2.1 DRC Operations
20.2.2 Language-Based DRCs
20.3 Design for Manufacturing
20.3.1 Operations and Property Assignment
20.4 Pattern Matching
20.4.1 Capturing a Pattern and Building a Pattern Library
20.4.2 Running the PM Tool
20.5 Multi-Patterning
20.5.1 Using Stitches to Resolve Conflicts
20.5.2 Density Balancing Between Split Mask Layer Sets
20.6 When to Perform DRC
20.7 Flat DRC
20.8 Hierarchical DRC
20.9 Geometric Algorithms for Physical Verification
20.9.1 Scan Line–Based Analysis
20.9.1.1 Time Complexity of the Scan Line Algorithm
20.1 INTRODUCTION
After the physical mask layout for a circuit is created using a specific design process, it is
evaluated against a set of geometric constraints, or design rules, for that process. The main objective
of design rule checking (DRC) is to achieve a high overall yield and reliability for the design. To meet
this goal of improving die yields, DRC has evolved from simple measurement and Boolean checks
to more involved rules that modify existing features, insert new features, and check the entire
design for process limitations such as layer density. While design rule checks do not validate the
design’s logical functionality, they verify that the structure meets the manufacturing constraints
for a given design type and process technology.
A completed layout consists not only of the geometric representation of the design but also
data that provide support for the manufacture of the design. With each new technology advance,
DRC includes additional manufacturing-related elements, and EDA vendors work with the
manufacturing companies to develop tools to help manage design verification for these elements.
Three such elements are design for manufacturing (DFM), pattern matching (PM), and multi-
patterning technology (MPT).
Before discussing how DRC verification works, it is useful to review basic DRC concepts.
20.2 CONCEPTS
The physical mask layout consists of shapes on drawn layers that are grouped into one or more
cells. A cell may contain a placement of another cell. If the entire design is represented in one
cell, it is a flat design; otherwise, it is a hierarchical design. Figure 20.1 shows an example of
a hierarchical design with a top cell that contains instances of other cells and primitive objects.
In the lower left, a flat view of one cell is magnified to show its content.
In early design verification stages, the layout database may contain text objects, properties, or
other attributes that further define the purpose of an object in the layout or provide information
that ties an object or set of objects to the logical representation of the design.
Geometries in a verification flow are grouped into layers. Most verification systems provide
unlimited numbers of layers of the following types:
◾◾ Drawn layers represent the original layout data. They are merged on input to the verifica-
tion system to remove any overlap or abutment of geometry on the same layer.
◾◾ Polygon layers are the output of a layer creation operation such as a Boolean operation,
a topological polygon operation, or a geometric measurement function.
◾◾ Edge layers represent the edges of merged polygons as categorized by length, angle, or
other attributes.
◾◾ Error layers contain clusters of one to four edges from a DRC spatial measurement for
use in graphical result representation.
20.2.1 DRC Operations
To perform DRC, one must be able to select particular data, perform a rich set of operations
on the selected data, and choose the output format. DRC operation output is either reported
as an error or provided as an input to another DRC operation. Some operations have more
information to return than a polygon or edge alone may convey, and so, they create a separate
report with additional details.
The following design rule pseudocode combines two DRC operations and returns edge
clusters representing the errors:
X = CP edges enclosed by ME between 2.24 and 2.26 units
Rule1 = output enclosure violations between X and POLY as edge clusters
Figure 20.3 shows the initial layers analyzed by Rule1 in a small region. The first DRC
operation creates layer X to represent all the CP edges that are enclosed by the ME layer between
2.24 and 2.26 units. The three violations of the second enclosure operation between X and POLY
are represented as edge clusters.
20.2.2 Language-Based DRCs
A complete DRC system provides a common language to describe all phases of layout verification.
The language allows the same rules to be applied to checks for flat, cell/block based, and
hierarchical full-chip modes. In addition to its primary role of specifying design rules, this DRC
language may also derive layers for a process technology, define devices, and specify parasitic
parameters.
The following pseudocode shows how DRC operations in a language-based system may be
combined to modify a layout by adding slots on wide metal lines and adding fill shapes to
low-density regions of the design.
MET1 is layer 16
// smooth any small metal jogs
resize_met1 = shrink MET1 by 3 units then expand by 3 units
// shrink the wide metal for proper slot enclosure
fat_met1 = shrink resize_met1 by 0.6 units
// metal1 square slots for Cu process
Rule met1_slots = output 0.5 x 0.5 unit squares spaced 0.6 units
inside of fat_met1
met1_density = area ratio of MET1 to 100 x 100 unit
grid box < 0.1
met1_fill = 0.3 x 0.3 unit squares spaced 0.25 units
inside of met1_density
met1_exp = expand MET1 by 0.25 units
Rule met1_mask = output met1_fill having no shared met1_exp area
First, the MET1 layer is resized in two steps to smooth the contour of the metal route and to
select only metal regions that may validly contain a slot. The met1_slots rule creates squares of
the specified dimensions inside the regions of fat_met1 derived previously. These squares are
output as met1_slots results. The second part of this example calculates the ratio of MET1 area
to a 10,000 square unit region of the layout, selecting regions containing an unacceptably low area
(density) of MET1. The met1_fill layer contains fill shapes in these low-density regions.
The generated squares are placed a minimum distance away from the original MET1 shapes.
The results of this pseudocode are shown in Figure 20.4. The dark squares are metal slots along
the bottom of the frame, and the metal fill is in the upper right.
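The density portion of this flow can be sketched in Python. The rectangle-based layout model, the window stepping, and the function name below are illustrative simplifications of the real area-ratio operation, not a vendor implementation:

```python
# Sketch of the met1_density step: compute the metal-area ratio in each
# 100 x 100 window of the layout and flag windows below the 0.1 threshold.
# Assumes non-overlapping (already merged) rectangles (x1, y1, x2, y2).

def low_density_windows(rects, extent, win=100, threshold=0.1):
    """Return lower-left corners of windows whose area ratio is < threshold."""
    flagged = []
    for wx in range(0, extent[0], win):
        for wy in range(0, extent[1], win):
            covered = 0
            for x1, y1, x2, y2 in rects:
                # overlap of the rectangle with this window, clamped to >= 0
                ox = max(0, min(x2, wx + win) - max(x1, wx))
                oy = max(0, min(y2, wy + win) - max(y1, wy))
                covered += ox * oy
            if covered / (win * win) < threshold:
                flagged.append((wx, wy))
    return flagged
```

Fill shapes would then be generated only inside the flagged windows, keeping the required clearance from the original MET1 shapes.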
20.3 DESIGN FOR MANUFACTURING
Traditional physical verification is dominated by simple design rule checks that identify
sensitive layout features known to fail during manufacturing. As the technology for creating
smaller device features advances, checking that these new devices are properly structured also
becomes more complex. At advanced nodes, the increasing sensitivity of manufacturing to a
combination of design features leads to the need for a rating system [1]. The rating system is
employed by rules intended to improve manufacturing yield and robustness of the fabricated
circuit. Different aspects of the design may be close to failure, and choices for improving the
layout can be made by a designer or by an automated tool based upon the rating system.
20.3.1 Operations and Property Assignment
One mechanism for recording abstract information about a single object or a collection of
objects on one or more layers is a property. A property is an attribute containing numeric or
string values (possibly both) that permit the classification of objects. Properties can be read from
the layout database or be generated and attached to geometric objects during a verification run.
Equation-based DRC checks allow a user to filter one or several layers with a combination of
relationship equations and to assign each object on the layer a property result. The property
assigned to a shape, a layer, or a cell definition is interpreted by a separate DRC operation and can
be stored as part of the results data. The rule writer defines the equation and provides meaning
to the property.
Figure 20.5 illustrates a LAYER_PROPERTY operation to count the contacts in a source-drain
(sd) layer. The resulting sd_prop layer has a property assigned to each sd object to indicate the
contact count and the ratio of sd area to contact area. The LAYER_PROPERTY equation accounts
for arithmetic errors that might occur, such as when no contacts are in the sd region. The sd_prop
layer can then be used later in the flow to classify sd polygons by the property values and report
them.
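A Python sketch of this kind of property assignment follows. The rectangle model, the containment test, and the names are illustrative stand-ins; real LAYER_PROPERTY equations operate on the layout database:

```python
# Assign each sd shape a contact count and an sd-to-contact area ratio,
# guarding the division for regions that contain no contacts.

def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def inside(inner, outer):
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            inner[2] <= outer[2] and inner[3] <= outer[3])

def sd_properties(sd_shapes, contacts):
    props = []
    for sd in sd_shapes:
        hits = [c for c in contacts if inside(c, sd)]
        n = len(hits)
        # guard against divide-by-zero when no contacts land in the region
        ratio = area(sd) / sum(area(c) for c in hits) if n else None
        props.append({"count": n, "ratio": ratio})
    return props
```

A later step could then classify sd shapes by these property values, mirroring how the sd_prop layer is used downstream.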
20.4 PATTERN MATCHING
Pattern matching (PM) is the process of locating specific areas in a design that are identical or
nearly identical to a defined set of layer geometries. A PM tool scans a design for specific
predefined patterns of shapes. The patterns may be specified as containing one or more layers
inside a rectilinear extent for the tool to match in the design layout. PM can find targeted patterns
for special DRC checks, identify known lithographic problem areas, and create layers for classify-
ing design features for other processing.
The major capabilities of a PM tool include the following:
◾◾ Enable the user to build a pattern, including elements that are spatially fixed or movable
relative to other elements in the pattern.
◾◾ Store a pattern library of the individual patterns.
◾◾ Select a subset of patterns from the pattern library.
◾◾ Search the design and choose regions that match patterns.
◾◾ Present results.
Figure 20.6 shows the results from running the PM tool with a library of two patterns. The layout
has a highlighted box in the center for an area matching Pattern_1. Pattern_2 has a match in the
upper right. Although an area in the lower right looks very similar to Pattern_2, it is not identical,
and so not highlighted. Once areas of the design that match a pattern are selected, they can be
used for further analysis.
The PM tool can find matches of exact or fuzzy patterns. As shown in Figure 20.6, exact
patterns permit no variability of edge placement relative to other edges in the pattern in
order to obtain a match, although rotation and reflection of the pattern are permitted. Fuzzy
patterns permit variability in edge placements within defined limits to obtain a match.
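Exact matching under rotation and reflection can be illustrated with a canonical-form trick on a simplified bitmap model of a pattern. Real PM tools match edges within a rectilinear extent, so everything below is an illustrative sketch:

```python
# A pattern is modeled as a set of occupied grid cells. Its canonical form is
# the lexicographically smallest of its eight symmetries (four rotations, each
# with a mirror) after translation to the origin; two patterns match exactly,
# up to rotation and reflection, when their canonical forms are equal.

def normalize(cells):
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def canonical(cells):
    forms = []
    for _ in range(4):
        cells = {(-y, x) for x, y in cells}                   # rotate 90 degrees
        forms.append(normalize(cells))
        forms.append(normalize({(-x, y) for x, y in cells}))  # mirror image
    return min(forms, key=sorted)

def same_pattern(a, b):
    return canonical(a) == canonical(b)
```

A fuzzy matcher would relax the equality test, allowing each edge to move within its defined displacement range.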
20.4.1 Capturing a Pattern and Building a Pattern Library
Individual patterns are created in an editing tool that marks specific areas of a design. For each
area the user marks in the design, a separate window displays a copy of visible layers, and the user
identifies the edges or shapes for creating the pattern template. This pattern-capture capability
allows the user to identify variable edges of the pattern. Each variable edge also has a range of
allowed displacement.
The patterns are aggregated into libraries for the PM tool to match against a design. There may
be over 100,000 patterns in a complete pattern library. Some foundry rule deck providers
may build sample pattern libraries that are available as part of the process technology.
20.4.2 Running the PM Tool
Once the pattern library is created, it is included as part of the rule deck for running against the
layout. The library is applied like any other design rule check, and designers might simply use an
existing library without having created it. Figure 20.7 shows the user’s flow for using a pattern
matching tool as part of a physical verification flow.
A match from the PM tool can be an error to report or an input to another processing step for
a design rule check.
20.5 MULTI-PATTERNING
Technology advances allow layout features to be placed so closely together that it becomes
difficult to produce them accurately on the die in a single manufacturing step. The drawn
layer in the design layout represents the desired physical output, but one mask may be
insufficient to produce that layer on the die.
When the 45 nm half-pitch processes were introduced, the lithographic solutions to create the
physical mask for small objects required double exposure or double patterning [2]. This meant
that a single drawn layer was split into two. Initially, the configuration of the layers targeted for
double patterning was primarily rectangles, so simple DRC spacing rules sufficed. As the require-
ment for double patterning began to include metal routing layers, other geometric configurations
were included.
The physical verification steps for DRC include the ability to check that a drawn layer
can be split into two (or more) output mask layers and meet a set of relationship constraints
for overlap of and spacing between objects. This capability is known as multi-patterning
technology (MPT).
An MPT function examines the design to verify that each mask layer can be divided into a
split layer set given the manufacturing constraints. Figure 20.8 shows an example of a mask
layer initially represented as a simple original layer. Objects on the mask layer that are too close
together for reliable manufacture must be split into separate layers. The composite of the split
layers represents the original mask layer.
Future advances will require splitting a mask layer into three or more layers.
Shapes on split layers may still be too close together for reliable manufacture. The DRC
tool reports this situation to the designer using conflict rings or conflict paths, as shown in
Figure 20.9.
When a conflict ring is reported, the designer needs to change the spacing for the
affected shapes. More than one split layer may need to be modified to maintain the intent
of the design. All the conflict rings must be resolved before the design can pass physical
verification.
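The mask-splitting problem can be modeled as 2-coloring a conflict graph whose edges join shapes that are too close for a single mask; an odd cycle in this graph is precisely a conflict ring. The sketch below is a minimal BFS 2-coloring, not a production MPT decomposition (which also handles stitches, anchoring, and density balancing):

```python
from collections import deque

def split_two_masks(n_shapes, conflicts):
    """conflicts: (i, j) pairs of shapes too close to share one mask.
    Returns (colors, ring): a 0/1 mask assignment with ring=None on success,
    or colors=None plus one odd-cycle edge when no legal split exists."""
    adj = [[] for _ in range(n_shapes)]
    for i, j in conflicts:
        adj[i].append(j)
        adj[j].append(i)

    colors = [None] * n_shapes
    for start in range(n_shapes):
        if colors[start] is not None:
            continue
        colors[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colors[v] is None:
                    colors[v] = 1 - colors[u]   # alternate masks along edges
                    queue.append(v)
                elif colors[v] == colors[u]:
                    return None, (u, v)         # odd cycle: a conflict ring
    return colors, None
```

Reported conflict edges correspond to the spacings the designer must fix (or stitch) before the split can succeed.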
20.5.1 Using Stitches to Resolve Conflicts
Some flows allow for using a stitch to resolve a conflict ring or conflict path. A stitch is a region
of the mask layer that is output on both of the split layers into which the original mask layer is
divided.
A stitch may be an original layer, an overlap of the split layers, or an object generated during the
physical verification run. Stitches must also pass specific design rule checks.
Some systems have a stitch generation function. This function has a complete set of require-
ments for a valid stitch for the mask layer. The function scans the mask layer and creates a
collection of valid stitch candidates. These stitch candidates are passed into the verification tool
and are used to help automatically resolve conflicts in the layout.
20.5.2 Density Balancing Between Split Mask Layer Sets
Ideally, the manufacturing process for each of the split masks should be the same, implying that
the split layers together should have roughly the same density across the extent of the original
mask layer. Density measurement functions analyze the split layers, and the designer resolves
any discrepancies. The design flow may allow stitches to improve the relative density of the split
layers.
20.6 WHEN TO PERFORM DRC
Initially, DRC was considered primarily at the cell creation and block assembly levels of design,
and physical layout was done by hand [3]. As the complexity in the layout increased, DRC became
a requirement at more stages in the manufacturing process.
Interactive checks verify small cells or cell libraries, often from within the layout editor.
As blocks are assembled, the integrated blocks also need to be verified. Areas between blocks
and transitions from the block to the routing areas are verified for layout accuracy. After the full
chip is assembled, DRCs may also create fill objects, insert slots in wide metal routes, create and
analyze features for Optical Proximity Correction, or insert Sub-Resolution Assist Features
(SRAFs, or scatter bars). System-on-chip (SoC) designs require checks at both the interactive
and batch phases of the design. At the full-chip phase, DRC is used as a ready-to-manufacture
design certification tool.
20.7 FLAT DRC
In older verification systems, the cell-based hierarchical input for a design was flattened, and the
entire design was internally represented as one large cell. Overlaps between shapes on a layer
were removed, or merged, and design rule checks were performed on the merged data.
Errors in the design were reported from the top-level view of the design. Since the input
hierarchy no longer existed, any output from the system was represented as flat data.
20.8 HIERARCHICAL DRC
As designs become more complex, their verification in flat mode rapidly becomes impractical,
consuming too many compute and storage resources and taking too long to complete. Modern
verification systems take advantage of the input design hierarchy and other repetitions found in a
physical layout design to identify blocks of the design that are analyzed once and then reused to
significantly reduce verification time. The DRC results are reported at the lowest practical level
of the design hierarchy.
Graphical output from a hierarchical verification tool retains or enhances the original design
hierarchy. The tool can optimize the hierarchy using various processes: interconnect-layer cell
recognition, automatic via recognition, selective flattening of billions of simple cell placements
(such as vias) or of dense overlaps of large placements, expansion of certain types of array
placements, and duplicate placement removal.
20.9 GEOMETRIC ALGORITHMS FOR PHYSICAL VERIFICATION
All design rule checking programs, whether hierarchical or flat, require low-level algorithms that
analyze geometric relationships between primitive data objects such as polygons and edges [4].
Many computational geometry algorithms, which seek to minimize the time and space resources
required when the number of objects is large, are applied to perform this analysis.
20.9.1 Scan Line–Based Analysis
Scan line–based sweep algorithms [5,6] have become the predominant form of low-level geometric
analysis. A scan line sweep analyzes relationships between objects that intersect a virtual line,
either vertical or horizontal, as that line is swept across the layout extent. Figure 20.10 shows a
scan line moving across the extent of the layout, analyzing geometric data represented as edges.
The edges provided as input to the scan line are ordered in increasing X and increasing Y.
20.9.1.1 Time Complexity of the Scan Line Algorithm
In practice, the number of objects intersecting the scan line is O(√n). This has advantages in both
space and time. For space, only O(√n) objects need to be in active memory for analysis. For time,
there are √n scan line sweep points with √n objects at each sweep point, so the time complexity
is approximately
O(√n × √n) = O(n).
In fact, the O(n) assumption is slightly optimistic, and most implementations are between
O(n log n) and O((√n)^3). Any type of object can be placed in a scan line sweep and, as a result,
this approach is very flexible.
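A minimal Python sketch of a scan line sweep over axis-aligned rectangles illustrates the idea; the active set corresponds to the objects currently intersecting the scan line:

```python
# Sweep a vertical line left to right over rectangles (x1, y1, x2, y2),
# keep an "active" set of rectangles currently cut by the line, and report
# every pair whose interiors overlap. Illustrative only; production engines
# sweep edges or trapezoids with far richer event handling.

def sweep_intersections(rects):
    events = []                      # (x, kind, index); kind 0 = leave, 1 = enter
    for i, (x1, y1, x2, y2) in enumerate(rects):
        events.append((x1, 1, i))    # rectangle enters the scan line
        events.append((x2, 0, i))    # rectangle leaves the scan line
    events.sort()                    # at equal x, leaves precede enters

    active, hits = set(), set()
    for x, kind, i in events:
        if kind == 0:
            active.discard(i)
            continue
        _, y1, _, y2 = rects[i]
        for j in active:             # compare only against rectangles on the line
            _, b1, _, b2 = rects[j]
            if y1 < b2 and b1 < y2:  # y-spans overlap -> interiors intersect
                hits.add((min(i, j), max(i, j)))
        active.add(i)
    return sorted(hits)
```

Because comparisons are confined to the active set, each event touches only the objects on the scan line rather than the whole layout.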
Typically, either edges or trapezoids are used for most implementations [7]. Edge representations
have an advantage of directly representing the geometries being analyzed. Trapezoids, which are
formed by fracturing input polygons along either axis, provide several performance optimizations,
but require additional bookkeeping complexity due to the false boundaries created from fracturing.
By expanding the scan line to have some width using a history band, separation or distance
relationships can be readily handled.
Another issue that must be addressed by the low-level algorithms that support all-angle
geometries arises from the fact that all layout objects have vertices that lie on a finite x–y grid.
Orthogonal geometries intersect each other only at grid points. Non-orthogonal geometries can
intersect off-grid, and the resulting perturbations must be addressed to provide robust
implementations. One such method enhances the scan line to include advanced geometric
snap-rounding algorithms [8].
Introducing design hierarchy to the DRC process requires additional structures for processing
data. Hierarchical DRC operations determine the subset of data that can be acted upon on a per-
cell basis. Each cell is processed once, and the results are applied to multiple areas of the original
layout. Data outside of the subset are promoted up the hierarchy to the lowest point at which they
can be accurately acted upon by the DRC operation on a cell-specific basis. Hierarchical layout
verification incorporates the concepts of intrinsic geometries, which are shapes defined within a
cell itself, and interaction geometries, which represent overlaps with shapes from elsewhere in
the hierarchy.
Intrinsic geometries are promoted based on their proximity to the interaction geometries in the
cell on related layers. Promotion is also dependent on the algorithmic intricacies of the type of
DRC operation being executed. When promotion ceases, the intrinsic geometry may normally be
analyzed or manipulated on a per-cell basis.
To show how the concepts of intrinsic geometries, interaction geometries, and object promotion
work in a hierarchical DRC operation, consider the following operation.
Z = X Boolean AND Y
In Figure 20.11, cell B is placed in cell Top. Object 1 is an intrinsic geometry in cell Top, and
objects 2, 3, and 4 are intrinsic geometries in cell B. Object 5 is an interaction geometry showing
the overlap of object 1 in Top with cell B.
The AND operation is first performed in cell B. Objects 3 and 4 are sufficiently remote from
5 and are processed by the AND operation in B. The intersection of objects 2 and 5 is promoted
into cell Top. The AND operation is then completed in cell Top because no further promotion is
required. Figure 20.12 shows the result.
Layer Z contains two intrinsic geometries, one in cell Top and one in cell B. A layer Z inter-
action geometry is also created in cell B to show the overlap of the intrinsic Z shape in Top.
In Figure 20.12, this interaction geometry would be coincident with the intersection of objects
2 and 5.
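In a flat context, the Boolean AND of two rectangle layers reduces to pairwise intersection; hierarchical promotion adds bookkeeping on top of a kernel like the following sketch (rectangles only, for illustration):

```python
def rect_and(layer_x, layer_y):
    """Flat Boolean AND of two rectangle layers: (x1, y1, x2, y2) tuples."""
    out = []
    for ax1, ay1, ax2, ay2 in layer_x:
        for bx1, by1, bx2, by2 in layer_y:
            # intersection of two axis-aligned rectangles
            x1, y1 = max(ax1, bx1), max(ay1, by1)
            x2, y2 = min(ax2, bx2), min(ay2, by2)
            if x1 < x2 and y1 < y2:          # keep only true overlaps
                out.append((x1, y1, x2, y2))
    return out
```

A hierarchical engine runs this kernel per cell on the data that stays local and promotes only the interacting portions, as in the Figure 20.11 example.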
Another useful data structure for verification is a connectivity model, which encapsulates
geometric interactions within a layer or between several layers into a logical structure associated
with the geometries.
Connectivity models in flat verification can be easily implemented by encapsulating the
interaction sets as a single unique number. Hierarchical connectivity models require a more
complex encapsulation, using the concepts of pins and nets.
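Such a flat model can be sketched with union-find: shapes that interact are merged into one set, and every shape in a set reports the same unique number. The rectangle-touch test below is an illustrative stand-in for the real geometric interaction test:

```python
def connect_labels(shapes, touches):
    """Label each shape with the number of its interaction set (flat model)."""
    parent = list(range(len(shapes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving keeps trees shallow
            i = parent[i]
        return i

    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if touches(shapes[i], shapes[j]):
                parent[find(i)] = find(j)   # union the two interaction sets

    # every shape in one interaction set shares one unique net number
    return [find(i) for i in range(len(shapes))]

def rects_touch(a, b):
    """Rectangles (x1, y1, x2, y2) overlap or abut."""
    return (a[0] <= b[2] and b[0] <= a[2] and
            a[1] <= b[3] and b[1] <= a[3])
```

The hierarchical model replaces the single number with nets and pins, as described next.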
Within any given cell, a net organizes the geometric interactions with regard to connectivity.
A net may also be an external pin, an internal pin, or both. An internal pin forms a connection
to a net within the hierarchical subtree of the cell, while an external pin forms a connection to a
net outside the hierarchical subtree of the cell. Pins allow hierarchical algorithms to traverse a net
in the hierarchy both up, via an external pin, and down, via an internal pin. A net is considered
complete at the top-most cell in which that net is not an external pin.
Examples of internal and external pins are shown in Figure 20.13. In the design, net B1 is an
external pin of cell B. When cell B is placed in cell A, B1 connects to A1. Net A1 is an external pin
of cell A. Net A1 is also an internal pin to cell A when cell B is placed in A. When cell A is placed
in cell Top, net A1 connects to net T1—an internal pin to Top. The net is then complete in cell
Top. Nets B2, A2, and T2 make no connections outside their parent cells, and so, they are not pins.
Hierarchical algorithms using connectivity models must work hand in hand with the
geometrical promotion techniques described earlier. Logical promotion, via dynamic creation of
new logical pins, must accompany geometric promotion.
The polygon connectivity model determines which geometries interact with each other on a
particular layer. The polygon connectivity model is useful for those topological operations that
require information about polygonal relationships.
Consider the operation
Z = X overlaps Y
The operation selects all layer X polygons that overlap a Y polygon or have a coincident edge
with a layer Y polygon. For flat verification, this operation is comparatively simple since full
polygons in both X and Y are present at the same (single) hierarchical level. In hierarchical veri-
fication, a single polygon may be broken across hierarchical boundaries and exist at multiple
locations in the hierarchy. Flattening data in order to get full polygons at a single hierarchical
level is undesirable because flattening causes explosive growth of the data set and degrades the
hierarchy for future operations involving Z. Fortunately, selective geometric and logical
promotion, along with careful traversal of the nets and pins of the polygon connectivity models
in X and Y, can produce hierarchical output in Z without excessive flattening.
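For the flat case, the selection can be sketched directly; rectangles stand in for polygons, and the touch test below also accepts corner contact, which a real implementation would distinguish from overlap and edge coincidence:

```python
def overlaps_select(layer_x, layer_y):
    """Flat 'Z = X overlaps Y': keep X rectangles that overlap, abut, or touch
    some Y rectangle. Rectangles are (x1, y1, x2, y2); illustrative only."""
    def touches(a, b):
        return (a[0] <= b[2] and b[0] <= a[2] and
                a[1] <= b[3] and b[1] <= a[3])
    return [x for x in layer_x if any(touches(x, y) for y in layer_y)]
```

The hierarchical version must first reassemble each logical polygon from its pieces, via the polygon connectivity model, before applying this test.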
The nodal connectivity model, if specified, exists for the entire design and determines the
g eometries that interact with each other on layers predefined as having electrical connectivity.
This electrical connectivity is defined by the user with an operation between two or more layers.
The complete connectivity sequence includes all the interconnect layers, as suggested by this
pseudocode:
connect POLY to MET1 by CONTACT
connect MET1 to MET2 by VIA1
connect MET2 to MET3 by VIA2
This connectivity is essential for connectivity-based DRC checks, device recognition, and circuit
topology extraction.
The complete electrical network can be determined in a single pass, or it can be built up
incrementally layer by layer. Incremental connectivity allows specialized DRC checks to be
performed after a layer has its connectivity determined and before a subsequent layer is added
to the network. Incremental connectivity is particularly needed for operations that compute
electrostatic charge accumulation by calculating area ratios between layers on the same electrical
node. If incremental connectivity were not used, many copies of the interconnect layers would
be necessary in order to partition the connectivity for all the appropriate area ratio calculations
to occur. The memory required to do this is impractical, and so, incremental connectivity is
employed.
Due to the enormous data and rule volume at current nodes, it is essential for a physical verifica-
tion tool to support parallel processing [9]. The number of separate operations in a DRC flow
at 20 nm, for example, is approaching 50,000. The number of geometries is typically 250–500
billion just on drawn layers and many orders of magnitude greater on derived layers through the
entire flow.
The DRC tool must support both SMP and distributed architectures, as well as a combination
of the two, and should scale well to 200 or more processors. Most users of any DRC application
expect 24-hour, or even overnight, turnaround times and are willing to utilize whatever hardware
resources are required to achieve this goal. If the DRC tool cannot take full advantage of
the customer's hardware resources, then it is not a viable product.
How is parallelism achieved? There are two obvious avenues: cell-based parallelism and
operation-based parallelism.
Cell-based parallelism takes advantage of the inherent parallelism of the hierarchy itself [10].
Let C = {C1,…,Cn} be the set of all cells in the design. Given any operation, such as a Boolean or a
DRC spacing measurement, the idea is that there are subsets C′ of C, for which the work required
to generate the operation can be done in parallel over all cells in C′. However, there are two
limiting factors in this approach.
The first limiting factor is that a promotion-based hierarchical algorithm requires the work
for all cells placed in any given cell A throughout its sub-hierarchy to be complete or partially
complete before work can begin on A itself. This impedes scalability near the top of the graph
defining the hierarchy.
The second limiting factor in cell-based parallelism is that the design may not be very
hierarchical to begin with. For example, a modern SoC design may have macros with
hundreds of thousands of standard cell placements, each of which executes practically
instantaneously. However, promoted data at the macro level can be computationally intensive,
again limiting scalability. This, however, is not as much of a problem as it first appears and can
be mitigated with clever injection strategies that introduce extra levels of hierarchy into the
macro via artificially created cells called bins [11].
Operation-based parallelism takes advantage of the parallelism built into the rule deck itself.
A simple example is that a spacing check on METAL1 has no dependencies on a VIA3/METAL4
enclosure check, and they can obviously be executed in parallel. The analysis of a typical sub-
20 nm flow shows that out of 50,000 or more separate operations, there are often hundreds at any
given point in the flow that await immediate execution (the input layers have been generated) and
large numbers of these may be executed independently. This approach, by itself, also has certain
limitations.
First, the operation graph is isomorphic to the directed acyclic graph underlying the hierar-
chical design and has the same dependency restrictions—in this case, an operation may not be
executed before its input data set (products of other operations) has been generated.
Next, there are often a few long-duration operations, such as in multi-patterning, which are
extremely computationally intensive and may cause what is referred to as a tail if they end up
executing by themselves near the end of the flow. Another issue is the difficulty in managing
connectivity models, and the fact that the nodal model is shared between all affected operations
in the rule deck. This means that the execution of connectivity-based operations effectively needs
to be serialized. For example, a flow that only checks voltage-based rules may have minimal
opportunities for operation parallelism. Current research avenues include injection strategies for
operation parallelism similar to injection for cell-based parallelism.
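The ready-operation picture described above can be sketched with Kahn's algorithm, grouping operations into waves that could execute concurrently; the tiny rule-deck dependency graph here is hypothetical, not from any real foundry deck:

```python
from collections import defaultdict

def parallel_waves(deps):
    """deps maps operation -> set of operations whose outputs it consumes.
    Returns successive waves of operations that are simultaneously ready."""
    indegree = {op: len(d) for op, d in deps.items()}
    dependents = defaultdict(list)
    for op, d in deps.items():
        for pre in d:
            dependents[pre].append(op)

    waves, ready = [], sorted(op for op, n in indegree.items() if n == 0)
    while ready:
        waves.append(ready)
        nxt = []
        for op in ready:
            for child in dependents[op]:
                indegree[child] -= 1
                if indegree[child] == 0:   # all input layers now generated
                    nxt.append(child)
        ready = sorted(nxt)
    return waves

deck = {
    "met1_space": set(),            # independent checks...
    "via3_enc":   set(),            # ...can run in the same wave
    "fat_met1":   {"met1_space"},
    "met1_slots": {"fat_met1"},
}
```

A long chain of dependent operations produces many small waves, which is exactly the tail behavior described above.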
Optimum scalability can be achieved by combining both cell-based and operation-based
parallelism [12]. Operations are executed in parallel to the greatest extent possible, and each
separate operation is generated using cell-based parallelism. The complexity in this approach
lies in managing the multiple levels of mutex requirements needed to protect the hierarchical
database model that is being simultaneously manipulated, read, and added to by N operations.
This complexity may be mitigated somewhat by strategies such as duplicating the portion of the
database model required only by the operation and mapping that model (and virtually the
operation itself) to a separate process.
20.12 PROSPECTS
Design size and complexity in sub-28 nm processes have resulted in substantial increases in chip
production costs as well as a lag in technology node advances versus Moore’s law projections [13].
Successful DRC applications will continue to adapt to the challenges posed by increasingly sophis-
ticated design and manufacturing requirements.
REFERENCES
1. C. Gérald, M. Gary, F. Gay et al. A high-level design rule library addressing CMOS and heterogeneous
technologies. IEEE International Conference on IC Design & Technology, Austin, TX, 2014.
2. Y. Wei and R. L. Brainard. Advanced Processes for 193-nm Immersion Lithography (SPIE Press,
Bellingham, WA, 2009), pp. 215–218.
3. C. Mead and L. Conway. Introduction to VLSI Systems (Addison-Wesley, Reading, MA, 1980),
pp. 91–111.
4. M. de Berg, O. Cheong, M. van Kreveld et al. Computational Geometry: Algorithms and Applications,
3rd edn. (Springer-Verlag, Berlin, Germany, 2008).
5. M. I. Shamos and D. J. Hoey. Geometric intersection problems. Proceedings of the 17th Annual
Symposium on Foundations of Computer Science, Houston, TX, 1976, pp. 208–215.
6. J. L. Bentley and T. A. Ottmann. Algorithms for reporting and counting geometric intersections.
IEEE Transactions on Computers C-28(9) (1979): 643–647.
7. K.-W. Chiang, S. Nahar, and C.-Y. Lo. Time-efficient VLSI artwork analysis algorithms in GOALIE2.
IEEE Transactions on Computer-Aided Design 8(6) (1989): 640–647.
8. J. Hershberger. Stable snap rounding. Computational Geometry: Theory and Applications 46(4)
(2013): 403–416.
9. B. Wilkinson and M. Allen. Parallel Programming: Techniques and Applications Using Networked
Workstations and Parallel Computers (Prentice Hall, Upper Saddle River, NJ, 1998).
10. Z. Bozkus and L. Grodd. Cell based parallel verification on an integrated circuit design. US Patent
6,397,372 (1999).
11. L. Grodd. Placement based design cells injection into an integrated circuit. US Patent 6,381,731
(2002).
12. L. Grodd, R. Todd, and J. Tomblin. Distribution of parallel operations. Japan Patent 5,496,986
(2014).
13. Z. Or-Bach. Moore's lag shifts paradigm of semi industry. EE Times (September 3, 2014).
[Link]/[Link]?section_id=36&doc_id=1323755 (accessed October 10, 2014).