BRepFormer: Transformer-Based B-rep Geometric Feature Recognition

arXiv:2504.07378v2 [cs.CV] 11 Apr 2025

Yongkang Dai, School of Software, Northwestern Polytechnical University, Xi'an, China ([email protected])
Xiaoshui Huang, School of Public Health, Shanghai Jiao Tong University School of Medicine, Shanghai, China ([email protected])
Yunpeng Bai, National University of Singapore, Singapore ([email protected])
Hao Guo, School of Software, Northwestern Polytechnical University, Xi'an, China ([email protected])
Hongping Gan, School of Software, Northwestern Polytechnical University, Xi'an, China ([email protected])
Ling Yang, ZJU Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, China ([email protected])
Yilei Shi, School of Software, Northwestern Polytechnical University, Xi'an, China ([email protected])
Abstract

Recognizing geometric features on B-rep models is a cornerstone technique for multimedia content-based retrieval and has been widely applied in intelligent manufacturing. However, previous research has often focused solely on Machining Feature Recognition (MFR), falling short in effectively capturing the intricate topological and geometric characteristics of complex geometric features. In this paper, we propose BRepFormer, a novel transformer-based model that recognizes both machining features and the features of complex CAD models. BRepFormer encodes and fuses the geometric and topological features of the models. Afterwards, BRepFormer utilizes a transformer architecture for feature propagation and a recognition head to identify geometric features. During each iteration of the transformer, we incorporate a bias that combines edge features and topology features to reinforce geometric constraints on each face. In addition, we also propose a dataset named the Complex B-rep Feature Dataset (CBF), comprising 20,000 B-rep models. By covering more complex B-rep models, it is better aligned with industrial applications. The experimental results demonstrate that BRepFormer achieves state-of-the-art accuracy on the MFInstSeg, MFTRCAD, and our CBF datasets.

CCS Concepts

• Computing methodologies → Computer vision problems; Shape modeling; • Applied computing → Computer-aided manufacturing.

Keywords

CAD, Geometric Feature Recognition, Boundary Representation (B-rep), Transformer

ACM Reference Format:
Yongkang Dai, Xiaoshui Huang, Yunpeng Bai, Hao Guo, Hongping Gan, Ling Yang, and Yilei Shi. 2025. BRepFormer: Transformer-Based B-rep Geometric Feature Recognition. In Proceedings of . ACM, New York, NY, USA, 10 pages. https://doi.org/XXXXXXX.XXXXXXX

1 Introduction

Geometric feature recognition serves as a critical link between Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM). It is a cornerstone technique for multimedia content-based retrieval and plays a key role in automating manufacturing processes, improving efficiency, and reducing human errors. While traditional rule-based geometric feature recognition methods are widely used in industry, they are labor-intensive and struggle to adapt to complex geometric variations and topological changes [13, 18, 32]. To address these limitations, learning-based approaches have been introduced, often converting CAD models into intermediate representations such as point clouds [22], voxels [26], or images [31]. However, these transformations lead to the loss of key topological and geometric information, increased computational costs, and reduced recognition accuracy.
Although recent deep learning models that directly process boundary representation (B-rep) data have shown promise in preserving geometric and topological details, as well as improving recognition accuracy [4], challenges remain in handling highly complex topologies and diverse manufacturing processes, limiting their practical effectiveness.

We propose BRepFormer, a transformer-based geometric feature recognition network that leverages a transformer architecture to effectively capture and process B-rep features. Unlike previous approaches that suffer from information loss and limited topological awareness [25], BRepFormer directly operates on the boundary representation (B-rep), ensuring high-fidelity feature extraction. Specifically, our model extracts both the geometric and topological features of the CAD model from multiple perspectives. During the feature encoding stage, geometric edge features and topological features are processed by separate encoders and then fused to form an attention bias, which serves as a constraint input into the transformer module. Meanwhile, the extracted geometric face features are encoded into tokens together with a virtual face. These tokens serve as carriers of information that are fed into the transformer module, facilitating deep interaction and information fusion among the features. Finally, the recognition head accepts the output from the transformer module and fuses the global and local features within it, achieving high-precision recognition of geometric features. By integrating these components, BRepFormer effectively propagates information across the CAD model structure, improving geometric feature recognition accuracy while preserving critical geometric and topological relationships.

We conducted experiments on the public MFInstSeg [35] and MFTRCAD [36] datasets, as well as our CBF dataset. The results demonstrate that BRepFormer achieves state-of-the-art recognition accuracy on these three datasets. Notably, BRepFormer excels in recognizing complex geometric structures and diverse machining features. On the MFTRCAD [36] dataset, it achieved a 93.16% recognition accuracy, surpassing the previous best method by 3.28 percentage points. To further validate the effectiveness of our approach, we performed detailed ablation studies, analyzing the impact of each model component on overall performance. The ablation study results indicate that each feature component contributes to an improvement in the model's accuracy.

In summary, our proposed network, BRepFormer, effectively leverages the inherent information within the B-rep structure and achieves high-precision geometric feature recognition performance. The key contributions of this work are as follows:

• We develop a method (Sec. 3.2 and 3.3) that effectively extracts and encodes both topological and geometric features from CAD models, enabling a more informative and structured representation for machine learning models.
• We introduce a novel transformer-based architecture (Sec. 3.4) that enforces edge and face constraints during information propagation, significantly improving feature recognition accuracy.
• We introduce a CAD dataset (Sec. 4) that is more aligned with industrial applications, offering a more complex collection of CAD models to contribute to geometric feature recognition.
• Our BRepFormer model achieves state-of-the-art accuracy on the experimental datasets, demonstrating the superiority of our approach.

2 Related Work

2.1 Rule-Based Geometric Feature Recognition

Rule-based geometric feature recognition methods identify geometric features in CAD models by applying predefined rules based on topological and geometric information. Early studies by Henderson [12], Donaldson [7], and Chan [5] introduced rule-based approaches that define boundary patterns and use expert systems for feature recognition. While intuitive and easy to implement, these methods struggle with flexibility due to the inherent limitations of predefined rules. Encoding all machining knowledge into a rule-based system is challenging, making maintenance cumbersome and leading to poor adaptability, particularly in handling complex intersecting features.

Volumetric decomposition is another well-established rule-based approach for geometric feature recognition. This method decomposes the material to be removed from a CAD model into intermediate volumes and reconstructs geometric features based on predefined rules. Woo et al. [34] introduced the Alternating Sum of Volumes (ASV) decomposition method, representing solids as a hierarchical structure of convex bodies through union and difference operations. Requicha et al. [32] further developed an automatic feature recognition approach that decomposes machinable volumes into manufacturable features, addressing feature interactions using generation-testing strategies and computational geometry techniques. Wilde et al. [19] refined the ASV method by introducing partitioning techniques, resolving convergence issues, and enhancing its applicability in feature recognition.

Graph-based methods have gained attention recently due to their strong alignment with the boundary representation (B-rep) structure of CAD models. These methods construct an Attribute Adjacency Graph (AAG) to capture the topological relationships and geometric attributes of faces and edges. Geometric features can then be identified by applying graph-matching techniques to detect subgraph patterns within the AAG. Joshi et al. [18] first applied the AAG for topological and geometric information matching. Shah et al. [9] enhanced this approach by integrating hint-based feature recognition, introducing minimal condition subgraphs to improve the handling of feature interactions.

Hint-based methods [11, 24] offer a rule-driven approach for identifying complex intersecting features. They extract topological and geometric patterns, along with heuristic "hints" derived from residuals left by feature intersections. These hints guide the reconstruction of incomplete feature information through reasoning. While this approach improves the recognition of intersecting features and aids subsequent process planning, defining comprehensive and precise hints, along with robust reasoning rules, remains challenging. This complexity makes it difficult for hint-based methods to achieve fully automated, high-precision feature recognition in practical applications.
Figure 1: Our model consists of four main components: (1) Feature Extractor, which extracts topological and geometric features from the B-rep model; (2) Feature Encoder, where edge and topological features are combined to create an attention bias and face features are augmented with a virtual face before being input into the transformer block; (3) Transformer Block, which processes the encoded features using grouped query attention and the attention bias; and (4) Recognition Head, where the output features are fused and passed through a classification head to obtain the recognition results.

2.2 Deep Learning-Based Geometric Feature Recognition

Various deep learning-based methods [14, 15, 22, 38, 40] have been developed for 3D structural representation, each addressing different challenges. Point-cloud-based methods leverage neural networks to extract features but often suffer from information loss. MFPointNet [22] employs selective downsampling layers for feature recognition, while Yao et al. [40] proposed a hierarchical neural network to improve the recognition of complex overlapping features. Shi et al. [30] introduced a multi-sectional view (MSV) representation and MsvNet, enriching 3D model representation by incorporating multi-view features.

Voxel-based approaches use 3D convolutional neural networks (CNNs) to process CAD models but face resolution-related information loss. FeatureNet [42] applies 3D CNNs for feature recognition, while Peddireddy et al. [27, 28] refined voxelization techniques to predict machining processes like milling and turning. Despite these improvements, voxelization inherently reduces geometric fidelity, particularly at low resolutions. Mesh-based approaches seek to retain geometric details for improved recognition. Jia et al. [17] proposed an innovative method that combines the original MeshCNN with Faster RCNN, forming a geometric feature recognition scheme based on Mesh Faster RCNN. This approach enhances the accuracy of geometric feature detection while preserving the mesh geometry. However, the high memory demand when processing high-resolution data limits its application in large-scale scenarios.

In the CAD industry, B-rep (boundary representation) is the dominant format for 3D models, making graph-based representations particularly effective for geometric feature recognition. Graph neural networks (GNNs) have been widely applied due to their structural similarity with B-rep. Cao et al. [4] pioneered the transformation of B-rep models into graph-structured representations for learning. Colligan et al. [6] introduced Hierarchical CADNet, a novel approach that leverages a two-level graph representation to improve recognition accuracy. Specifically, it utilizes the Face Adjacency Graph (FAG) to capture topological information and mesh patches to represent geometric details. Jayaraman et al. [16] proposed UV-Net, which encodes surfaces and curves with CNNs and utilizes graph convolutional networks (GCNs) for feature learning.

Recent advancements further refine B-rep-based learning methods. Lambourne et al. [20] developed BRepNet, which defines convolution kernels for directed coedges, improving pattern detection. Lee et al. [21] introduced BRepGAT, incorporating graph attention networks (GATs) for precise feature segmentation. Wu et al. [35] proposed AAGNet, a multi-task GNN that simultaneously performs semantic, instance, and base segmentation using the geometric attribute adjacency graph (gAAG). Xia et al. [36] developed MFTReNet, which learns semantic segmentation, instance grouping, and topological relationship prediction directly from B-rep data. Despite these advancements, the generation of large-scale datasets with detailed topological labels requires substantial annotation effort, thereby increasing the cost of data preparation.

3 Method

3.1 Overview

We propose a novel approach for CAD geometric feature recognition based on a transformer architecture, as shown in Figure 1. It consists of four parts: 1) the Feature Extractor Module considers the topological and geometric features of the model from multiple perspectives; 2) the Feature Encoder Module encodes the initial input features into a format that is friendly for the network, thereby generating the main integrated features;
3) the Transformer Block Module further extracts features using a transformer structure with a designed attention bias constraint; and 4) the Recognition Head Module fuses the features and classifies the faces of the B-rep data.

3.2 Feature Extraction Module

The Feature Extraction Module extracts features for both the geometry and topology of the B-rep model, focusing on faces and edges.

3.2.1 Topological Feature Extraction. For topology, we extract four different features, including three face features (Face Shortest Distance, Face Angular Distance, and Face Centroid Distance) and one feature for edges (Shortest Edge Path).

Face Shortest Distance. To fully capture the direct and indirect spatial relationships between any two faces in the B-rep model, we employed the Dijkstra algorithm to compute the shortest path length between all pairs of faces. This method quantifies the topological distance between faces as the number of intervening faces along the connecting path. Based on these results, we constructed an extended adjacency matrix M_d ∈ R^{N×N}, where each element m_d(f_i, f_j) represents the shortest distance from any face f_i to another face f_j. This matrix captures not only direct connections between faces but also indirect connections via other faces, thereby reflecting complex connectivity patterns. (A minimal code sketch of this computation appears at the end of Sec. 3.2.)

Face Angular Distance. To more accurately describe the relative position between two faces, our approach also extracts the dihedral angle between any two faces. When two faces share a common edge, the dihedral angle can explicitly indicate the geometric relationship formed by the two faces, whether it is concave, convex, or a smooth transition. For two non-adjacent faces, we use their normal vectors to calculate the angle between them. This method reflects the relative direction of the two faces in three-dimensional space, regardless of the series of intermediate faces through which they are connected. At the same time, our approach also constructs a matrix M_a ∈ R^{N×N} to represent the angular information between faces, where each element m_a(f_i, f_j) represents the angle between a face f_i and another face f_j.

Face Centroid Distance. The centroid distance between faces, as another key topological feature, quantifies their spatial separation by measuring the Euclidean distance between the centroids of two faces. To eliminate the influence of the model's scale, we normalize this distance by the diagonal length of the bounding box of the entire solid model, obtaining a relative distance indicator. This normalized centroid distance can reflect the similarity between CAD models of different sizes and proportions. Similarly, we construct a matrix M_c ∈ R^{N×N} to represent the centroid distance information between faces, where each element m_c(f_i, f_j) represents the Euclidean distance between the centroids of face f_i and face f_j.

Shortest Edge Path. Considering the key role of edges in defining global topological features, our approach particularly focuses on the shortest edge path between any two faces. Edges are not only the basic elements connecting different faces but also carry rich information about the model's internal connectivity and surface continuity. Therefore, for any two faces f_i and f_j in the B-rep model, we not only calculate the shortest distance between them but also record all the edge chains e_ik, e_kl, ..., e_mj that make up this shortest path. To effectively represent this edge path information, we introduce a three-dimensional matrix M_e ∈ R^{N×N×MaxDistance} to store the edge path data. The parameter MaxDistance specifies the upper limit of the distance between any two surfaces within a single model. For any pair of surfaces (f_i, f_j) in a given model, when the shortest path distance between them is less than MaxDistance, the elements in the edge path matrix that exceed the actual shortest path range are initialized to -1 as an indicator of invalid values.

3.2.2 Geometric Feature Extraction. Our approach further conducts geometric feature extraction from two aspects: the UV domain and geometric attributes, as detailed below.

For the extraction of UV domain features from CAD models, our approach is inspired by UV-Net [16]. We sample and discretize the parametric surfaces and parametric curves in the B-rep into regular 2D and 1D point grids with a uniform step size, as shown in Figure 2. The grid feature representation of each face includes the 3D coordinates and 3D normal vectors of the points, plus an additional dimension indicating visibility. The grid feature representation of each edge includes the 3D coordinates and 3D tangent vectors of each point. Moreover, to enhance the information representation, we add the normal vectors of the two faces adjacent to the edge to the edge grid points. Compared with discrete representation methods that may be insufficient for accurately describing complex surfaces and curves (such as voxel grids [39] and traditional meshes [8]), our use of UV grids can capture precise geometric information and achieve a friendly input representation for neural networks.

Furthermore, our method also extracts the geometric attributes inherent in the solid entities of CAD models, as shown in Figure 2. To characterize the geometric attribute features of surfaces, we extract the following information: surface type (e.g., plane, conical surface, cylindrical surface, etc.), area, centroid coordinates, and whether it is a rational B-spline surface. For the geometric attribute features of curved edges, we extract their type (e.g., circular, closed curve, elliptical, straight line, etc.), length, and convexity (i.e., concave, convex, or smooth transition). These geometric attributes of faces and edges can be directly obtained from the original B-rep structure and are encoded separately. By integrating the aforementioned geometric attributes with the features extracted from the UV domain, we obtain a geometric input representation of the entire CAD model, which provides strong support for subsequent downstream tasks.
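As a concrete illustration of the Face Shortest Distance computation above, the sketch below builds M_d from a face adjacency list using a breadth-first search, which is equivalent to Dijkstra's algorithm when every hop has unit weight. The `adjacency` input is a hypothetical pre-extracted structure, not the authors' released code.

```python
from collections import deque

def face_shortest_distances(adjacency):
    """adjacency[i] lists the indices of faces sharing an edge with face i."""
    n = len(adjacency)
    dist = [[-1] * n for _ in range(n)]  # -1 marks pairs with no connecting path
    for src in range(n):
        dist[src][src] = 0
        queue = deque([src])
        while queue:
            f = queue.popleft()
            for g in adjacency[f]:
                if dist[src][g] == -1:  # first visit yields the shortest hop count
                    dist[src][g] = dist[src][f] + 1
                    queue.append(g)
    return dist

# Example: four faces connected in a ring; face 0 reaches face 2 in two hops.
print(face_shortest_distances([[1, 3], [0, 2], [1, 3], [0, 2]])[0])  # [0, 1, 2, 1]
```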
3.3 Feature Encoder Module

The Feature Encoder Module further encodes all of the features extracted above and outputs an attention bias and face features as the input for the following module.

3.3.1 Topological Feature Encoder. For the three topological relation matrices M_d, M_a, M_c ∈ R^{N×N} between faces described above, we apply a unified processing pipeline. Specifically, each matrix is first transformed through a linear layer of the same dimension, followed by sequential processing through a normalization layer and a ReLU activation function for feature encoding, which is formulated as:

M_i' = ReLU(Norm(Linear(M_i)))   (1)

here, M_i denotes each of the three topological matrices M_d, M_a, and M_c. Based on these three parts, we obtain the face bias input into the transformer module:

B_Face = Add(M_d', M_a', M_c')   (2)

Within the framework of the edge path matrix, we have considered the impact of edge weights on faces. For the shortest path between any two faces in a B-rep model, the influence of distant edges on the initial face diminishes progressively as the position along the path advances from the starting face. Based on these considerations, our method integrates the edge path matrix with edge features to model the weight influence of edges on faces along the path. The formula is expressed as follows:

B_Edge = M_e ⊗ H_edge   (3)

here, M_e stands for the edge path matrix, and H_edge indicates the extracted and encoded edge features. Finally, the sum of the obtained B_Edge and B_Face yields the attention bias that we input into the transformer module, as shown in Figure 1.

Figure 2: The upper part of the figure shows the details of geometric UV domain sampling, while the lower part shows the details of geometric attribute sampling.

3.3.2 Geometric Feature Encoder. For the encoding of UV domain sampling features among the CAD geometric features, BRepFormer encodes the face features as f_geo ∈ R^{N_f×N_u×N_v×7} and the edge features as e_geo ∈ R^{N_e×N_u×12}. Here, N_f and N_e indicate the number of faces and edges, while N_u and N_v represent the number of sampling points along the u-axis and v-axis of the UV grid. These geometric features are fed into their respective encoders. The face encoder consists of three 2D CNN layers, an adaptive average pooling layer, and a fully connected layer, which encode the face features into a feature dimension of 128. Similarly, the edge encoder has a structure akin to the face encoder but uses 1D CNN layers for preliminary encoding. The encoded information of the geometric features is shown in Table 1.

Table 1: Geometric Feature Dimensions in UV Domain

Element  Feature  Dimension
Face  Coordinates  3D
Face  Normal vectors  3D
Face  Visibility  1D
Edge  Coordinates  3D
Edge  Tangent  3D
Edge  Normal vectors of neighboring surfaces  6D

We employ Multilayer Perceptrons (MLPs) to encode the geometric attribute features of faces and edges. The encoded face attributes include a 9-dimensional one-hot vector f_type ∈ R^9 to represent the surface type (e.g., plane, conical surface, cylindrical surface, etc.), a vector f_area ∈ R^1 to identify the surface area, a vector f_cen ∈ R^3 to identify the centroid coordinates of the face, and a vector f_rat ∈ R^1 to identify whether it is a rational B-spline surface. The edge attributes include an 11-dimensional one-hot vector e_type ∈ R^11 identifying the edge type (e.g., circular, closed curve, elliptical, straight line, etc.), a vector e_len ∈ R^1 identifying the length of the edge, and a 3-dimensional one-hot vector e_conv ∈ R^3 characterizing the convexity of the edge (concave, convex, or smooth). The encoding of the attribute features is shown in Table 2.

Table 2: Attribute Feature Encoding for Geometric Elements

Element  Input Feature  Encoder Layer  Output
Face  f_type ∈ R^9  Linear(9, 32)  h_f,type ∈ R^32
Face  f_area ∈ R^1  Linear(1, 32)  h_f,area ∈ R^32
Face  f_cen ∈ R^3  Linear(3, 32)  h_f,cen ∈ R^32
Face  f_rat ∈ R^1  Linear(1, 32)  h_f,rat ∈ R^32
Edge  e_type ∈ R^11  Linear(11, 64)  h_e,type ∈ R^64
Edge  e_len ∈ R^1  Linear(1, 32)  h_e,len ∈ R^32
Edge  e_conv ∈ R^3  Linear(3, 32)  h_e,conv ∈ R^32

Finally, by concatenating the encoded features of the two types of geometric attributes with the geometric UV domain features, we obtain the complete geometric face and edge features, H_face and H_edge respectively:

H_face = Concat(h_f,geo, h_f,type, h_f,area, h_f,cen, h_f,rat) ∈ R^256   (4)

H_edge = Concat(h_e,geo, h_e,type, h_e,len, h_e,conv) ∈ R^256   (5)
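To make the bias construction of Eqs. (1)-(3) concrete, the sketch below shows one possible PyTorch reading of it: each topological matrix passes through its own linear-norm-ReLU branch, the three results are summed into B_Face, edge features are gathered along the stored shortest paths to form B_Edge, and the sum is projected to one scalar bias per attention head. The gathering and averaging details, the per-head projection, and all widths are our assumptions; the paper does not spell them out.

```python
import torch
import torch.nn as nn

class AttentionBiasEncoder(nn.Module):
    """Illustrative implementation of Eqs. (1)-(3); N faces, hidden width d."""
    def __init__(self, d=128, num_heads=8):
        super().__init__()
        # Eq. (1): one Linear -> Norm -> ReLU branch per topological matrix.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(1, d), nn.LayerNorm(d), nn.ReLU())
            for _ in range(3)
        )
        self.to_heads = nn.Linear(d, num_heads)  # assumed per-head projection

    def forward(self, M_d, M_a, M_c, M_e, H_edge):
        # Eq. (1) applied entry-wise to the (N, N) matrices -> (N, N, d).
        encoded = [f(M.unsqueeze(-1)) for f, M in zip(self.branches, (M_d, M_a, M_c))]
        B_face = encoded[0] + encoded[1] + encoded[2]           # Eq. (2)
        # M_e: (N, N, MaxDistance) edge indices along shortest paths, -1 = invalid.
        # H_edge: (E, d) encoded edge features. Average the valid edges per face pair.
        valid = (M_e >= 0).unsqueeze(-1).float()
        gathered = H_edge[M_e.clamp(min=0)] * valid             # (N, N, MaxDistance, d)
        B_edge = gathered.sum(2) / valid.sum(2).clamp(min=1.0)  # one reading of Eq. (3)
        return self.to_heads(B_face + B_edge)                   # (N, N, num_heads)

# Toy usage: 4 faces, 5 edges, paths of length <= 3.
enc = AttentionBiasEncoder()
bias = enc(torch.rand(4, 4), torch.rand(4, 4), torch.rand(4, 4),
           torch.randint(-1, 5, (4, 4, 3)), torch.randn(5, 128))
print(bias.shape)  # torch.Size([4, 4, 8])
```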
Figure 3: The entire process of constructing our dataset: (A) illustrates the selection of four components from the database; (B) shows the application of random rotation, translation, and duplication under geometric constraints to generate a new B-rep model; and (C) demonstrates the process of traversing and labeling all faces (L1, L2, L3, L4).
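The labeling step in panel (C) amounts to mapping every face of the composite model back to its source component and serializing the result, as described in Sec. 4. A minimal sketch under that reading follows; the component names and file layout here are illustrative assumptions, not the authors' released format.

```python
import json

def label_faces(face_owner):
    """face_owner[i] names the component that face i originated from."""
    return {idx: owner for idx, owner in enumerate(face_owner)}

# Toy composite model: two base-plate faces plus faces from features A, B, C.
labels = label_faces(["base", "base", "A", "A", "B", "C"])
with open("model_labels.json", "w") as fp:
    json.dump(labels, fp)  # face index -> component label, one JSON file per model
```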

3.4 Transformer Block Module

Before feeding the encoded face features into our transformer architecture, we introduce a virtual face feature that is connected to all B-rep elements. Within the transformer structure, this virtual face feature can engage in deep interactions with the actual face features, thereby obtaining a global feature that represents the entire B-rep model.

This part consists of 8 layers of designed transformer modules, with each layer primarily incorporating Grouped Query Attention (GQA) [1], Root Mean Square Normalization (RMS Norm), and the SwiGLU activation function. First, all of the face features are represented as H_face^0 = [h_0^face, h_1^face, ..., h_N^face] ∈ R^{(N+1)×256}. Here, each h_i^face represents the feature of a given face, N+1 indicates that the features of the virtual face are included, and the superscript 0 in H_face^0 denotes the initial iteration of the overall feature representation.

First, all the input face features are fed into RMS Norm for normalization. Next, the normalized features are fed into the GQA for feature propagation. Following this, the features processed by the attention mechanism are fed into the Feed-Forward Network (FFN) and combined with residual connections for further processing. The overall formulas are expressed as follows:

H_face^{t'} = Attention(RMSNorm(H_face^{t-1})) + H_face^{t-1}   (6)

H_face^{t} = FFN(RMSNorm(H_face^{t'})) + H_face^{t'}   (7)

Our attention uses GQA, which divides the queries into multiple groups and independently computes attention within each group. It then concatenates the outputs of all groups and passes them through a linear transformation to obtain the output, effectively reducing computational load and memory usage; t indicates the iteration index. The specific expression for GQA is as follows:

O = Concat([softmax(Q_g K^T / √d_k + B_Att) V]_{g=1}^{G}) W_o   (8)

where O refers to the final output and G indicates the number of groups. For each group g, the dot product of Q_g and K is first scaled by the square root of the key vector dimension d_k. This scaled product is then adjusted by the attention bias B_Att, and the softmax function is applied to obtain the attention weights. These weights are used to compute the weighted sum of the values V. The outputs of all groups are concatenated and passed through a linear transformation W_o to produce the final output O.

In the FFN layer, we primarily use the SwiGLU activation function, which is mathematically expressed as follows:

SwiGLU(x) = Swish(Wx + b) ⊗ (Vx + c)   (9)

here, W and V denote the learnable weight matrices applied to the input x, and b and c are bias terms. The Swish activation function is defined as Swish(z) = z · sigmoid(z).

Finally, based on the introduced global virtual face feature and the input encoded face features, the module outputs f_global, representing the global feature of the B-rep model, and f_local, representing the local features.
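As a concrete reference for Eqs. (6)-(9), the following self-contained PyTorch sketch implements one such layer: pre-RMS-normalization, grouped-query attention with an additive bias, residual connections, and a SwiGLU feed-forward block. The head counts and widths are illustrative assumptions; the paper does not report them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasedGQALayer(nn.Module):
    """One pre-norm transformer layer with grouped-query attention (Eqs. 6-9)."""
    def __init__(self, d=256, n_heads=8, n_kv_heads=2):
        super().__init__()
        self.h, self.kv, self.hd = n_heads, n_kv_heads, d // n_heads
        self.wq = nn.Linear(d, d, bias=False)
        self.wk = nn.Linear(d, n_kv_heads * self.hd, bias=False)
        self.wv = nn.Linear(d, n_kv_heads * self.hd, bias=False)
        self.wo = nn.Linear(d, d, bias=False)
        self.norm1, self.norm2 = nn.RMSNorm(d), nn.RMSNorm(d)  # PyTorch >= 2.4
        self.w, self.v_proj, self.w2 = nn.Linear(d, 4 * d), nn.Linear(d, 4 * d), nn.Linear(4 * d, d)

    def forward(self, x, bias):            # x: (N, d); bias: (n_heads, N, N)
        n = x.shape[0]
        h = self.norm1(x)                  # Eq. (6): RMS Norm before attention
        q = self.wq(h).view(n, self.h, self.hd).transpose(0, 1)   # (h, N, hd)
        k = self.wk(h).view(n, self.kv, self.hd).transpose(0, 1)  # (kv, N, hd)
        v = self.wv(h).view(n, self.kv, self.hd).transpose(0, 1)
        # GQA: each K/V head is shared by a group of h // kv query heads.
        k = k.repeat_interleave(self.h // self.kv, dim=0)
        v = v.repeat_interleave(self.h // self.kv, dim=0)
        att = torch.softmax(q @ k.transpose(-2, -1) / self.hd ** 0.5 + bias, dim=-1)  # Eq. (8)
        x = x + self.wo((att @ v).transpose(0, 1).reshape(n, -1))  # residual of Eq. (6)
        h = self.norm2(x)                  # Eq. (7): RMS Norm before the FFN
        x = x + self.w2(F.silu(self.w(h)) * self.v_proj(h))        # SwiGLU, Eq. (9)
        return x

layer = BiasedGQALayer()
out = layer(torch.randn(10, 256), torch.zeros(8, 10, 10))  # 9 faces + 1 virtual face
print(out.shape)  # torch.Size([10, 256])
```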
3.5 Recognition Head

Based on the local feature representation f_local and the global feature f_global output by the transformer module, we design a structure to fuse these two parts of the features. First, we broadcast the global feature to the same dimension as the local features and stack the two parts along a new dimension to obtain the integrated feature representation F_all. Then, we multiply the features F_all by a learnable weight matrix W_w and apply a softmax activation function to obtain the weight representation of these features. Finally, we perform element-wise multiplication between this weight representation and F_all to generate the final output features. The specific formulas are as follows:

F_all = stack[f_local, f_global]   (10)

F_out = F_all ⊗ softmax(F_all · W_w + b_w)   (11)

where stack denotes the operation of concatenation along a new dimension, F_all represents the newly obtained features, W_w represents the learnable weight matrix of the linear layer, and b_w denotes its associated bias term.

For the final feature recognition task in CAD models, we employ an MLP coupled with an argmax function as the classifier to predict the geometric feature category Ĉ of each face. The mathematical expression is as follows:

Ĉ = argmax(MLP(F_out)), where MLP(F_out) ∈ R^{N×K}   (12)

We choose cross-entropy as our final loss function, which is as follows:

L = -(1/N) Σ_{i=1}^{N} Σ_{j=1}^{C} y_{i,j} log(Ĉ_{i,j} + ε)   (13)

here, ε is a small positive constant added to the predicted probabilities to ensure numerical stability.

4 Complex B-rep Feature (CBF) Dataset

We introduce a dataset named CBF to support research on complex geometric feature recognition, and describe the method used to create it. The dataset comprises 20,000 CAD models in B-rep format. Each model is formed by boolean combinations of a base plate and three different geometric features placed on it. The faces of each geometric feature are labeled accordingly, with the label information stored in a separate JSON file to support research on model construction and training for these complex features.

Figure 3 demonstrates the entire workflow of our data creation process. We first selected B-rep models from public sources and separated the base plates and geometric features using relevant 3D modeling software. Subsequently, we applied random rotations, translations, and duplications to the three extracted geometric features on the base plate. During these operations, we ensured the rationality of the generated models by manipulating their geometric constraints. Specifically, for translation, the script moved the model along the normal and tangent vectors of the contact surface until it intersected with the base plate, ensuring proper attachment. Rotations were performed based on the geometric attributes of the contact surface (such as normal vectors and centroids) to avoid model distortion or detachment. Duplication was carried out via boolean operations to ensure that the newly generated components did not conflict with the base plate or other components. Finally, in the labeling stage, we traversed all faces of the composite model, determined which original component each face belonged to (base plate, feature A, feature B, or feature C), and assigned a label to each face. This information was stored in a dictionary, with face indices as keys and the corresponding component labels as values.

5 Experiments

5.1 Experimental Environment

We employed a single NVIDIA 4090 GPU and PyTorch-Lightning V1.9.0 for network training, thereby emphasizing the lightweight characteristic of our network. In the training phase, AdamW was chosen as the optimizer, with an initial learning rate of 0.001. The parameters were set to β1 = 0.9, β2 = 0.999, and ε = 1 × 10^-8 to guarantee training stability. Additionally, we employed the ReduceLROnPlateau [2] learning rate scheduling strategy, which dynamically adjusts the learning rate based on changes in the validation loss. It is important to note that during the initial 5000 steps of training, we implemented a linear learning rate warm-up phase to help the model converge more effectively. The batch size was set to 64, and the entire training process lasted for a maximum of 200 epochs.

To evaluate the performance of our network, we used overall accuracy A, class accuracy A_c, and mean Intersection over Union (mIoU) to verify our model's geometric feature recognition capabilities, mathematically expressed as follows:

A = F_c / F_t   (14)

here, F_c denotes the number of correctly classified B-rep faces, and F_t is the total number of B-rep faces in the CAD model. The overall accuracy A calculates the proportion of correctly classified B-rep faces among the total number of B-rep faces in the CAD model.

A_c = (1/C) Σ_{c=1}^{C} |ŷ_c = l_c| / |y_c = l_c|   (15)

here, A_c denotes the average accuracy across all classes. |y_c = l_c| represents the total number of B-rep faces with label l_c, while |ŷ_c = l_c| indicates the number of correctly predicted B-rep faces with label l_c. The class accuracy A_c represents the average accuracy across all geometric feature classes.

mIoU = (1/C) Σ_{c=1}^{C} |(ŷ_c = l_c) ∩ (y_c = l_c)| / |(ŷ_c = l_c) ∪ (y_c = l_c)|   (16)

here, the parameter definitions are the same as those described above for A_c. The mIoU is commonly used to evaluate the overlap between predicted results and true labels. In the context of our geometric feature recognition task, mIoU is calculated as the average ratio of correctly classified B-rep faces that share the same class labels in both the actual and predicted outputs.

5.2 Experimental Datasets

We conducted experiments on the MFInstSeg and MFTRCAD public datasets to evaluate the machining feature recognition ability of our model and compared it with mainstream deep learning methods. The results indicate that our model achieved state-of-the-art accuracy on these datasets. Additionally, we verified the complex feature recognition capability of our model on the proposed CBF dataset, where our network also achieves state-of-the-art accuracy. For all datasets, the data were split into 70% for training, 15% for validation, and 15% for testing.

5.2.1 MFInstSeg Dataset. The MFInstSeg dataset comprises 62,495 CAD model files stored in B-rep format. It includes 24 different types of machining features, with each model containing 3 to 10 unique machining features. Table 3 shows the performance of our model and other mainstream models on this dataset.

5.2.2 MFTRCAD Dataset. The MFTRCAD dataset comprises 28,661 CAD models stored in B-rep format. The authors further divided one of the traditional 24 machining feature categories into three subcategories, leading to a total of 26 distinct machining features in the dataset. Table 4 presents the performance of our model and other mainstream models on this dataset.

5.2.3 Complex Feature Dataset. Unlike previous datasets, our CBF dataset requires the model to identify three distinct geometric features along with the base plate. The experimental results of our model and other mainstream models on this dataset are presented in Table 5.
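For reference, the sketch below re-implements the three metrics of Eqs. (14)-(16) over per-face predictions with NumPy. It follows the textual definitions above and is an independent illustration, not the authors' evaluation code.

```python
import numpy as np

def face_metrics(pred, label, num_classes):
    overall = float((pred == label).mean())          # Eq. (14): A = F_c / F_t
    accs, ious = [], []
    for c in range(num_classes):
        gt = label == c
        if not gt.any():
            continue                                  # skip classes absent from the labels
        hit = (pred == c) & gt
        accs.append(hit.sum() / gt.sum())             # per-class term of Eq. (15)
        ious.append(hit.sum() / ((pred == c) | gt).sum())  # per-class term of Eq. (16)
    return overall, float(np.mean(accs)), float(np.mean(ious))

pred = np.array([0, 1, 1, 2, 2, 2])
label = np.array([0, 1, 2, 2, 2, 2])
print(face_metrics(pred, label, num_classes=3))  # (0.833..., 0.916..., 0.75)
```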
Table 3: Recognition Performance on MFInstSeg Dataset

Network  Accuracy(%)  mIoU(%)
ASIN [41]  86.46 ± 0.45  79.15 ± 0.82
GATv2 [3]  95.90 ± 0.20  93.03 ± 0.36
GraphSAGE [10]  97.69 ± 0.06  95.70 ± 0.14
GIN [37]  98.14 ± 0.03  96.52 ± 0.06
DeeperGCN [23]  99.03 ± 0.02  98.31 ± 0.01
AAGNet [35]  99.15 ± 0.03  98.45 ± 0.04
MFTRNet [36]  99.56 ± 0.02  98.43 ± 0.03
BRepFormer (Ours)  99.62 ± 0.03  98.74 ± 0.09

Table 4: Recognition Performance on MFTRCAD Dataset

Network  Accuracy(%)
PointNet++ [29]  67.89 ± 0.08
DGCNN [33]  67.97 ± 0.07
ASIN [41]  68.57 ± 0.41
Hierarchical CADNet [6]  78.39 ± 0.03
AAGNet [35]  79.45 ± 0.02
MFTRNet [36]  89.88 ± 0.02
BRepFormer (Ours)  93.16 ± 0.11

Table 5: Recognition Performance on CBF Dataset

Network  Accuracy(%)  Class Acc(%)  mIoU(%)
AAGNet [35]  93.41  -  93.76
MFTRNet [36]  92.12  -  91.21
BRepFormer (Ours)  94.66  94.97  87.48

Although our network outperforms the other comparative networks in terms of overall accuracy, it performs poorly on the mIoU metric. Analysis reveals that when the network identifies simple geometric features, the limited number of faces involved means that any misidentification significantly impacts the per-class score. In contrast, when dealing with complex geometric features, the higher number of faces means that misidentification of some faces has a relatively limited impact. Based on this analysis, despite certain shortcomings on the mIoU metric, our network still maintains a leading position in overall recognition accuracy. This indicates that our network is more adept at recognizing complex, multi-faceted geometric features.

5.3 Ablation Study

In the ablation study, we focused on the impact of the model's input features on its performance. We systematically removed these key features and tested the changes in model performance. All ablation experiments were conducted on our proposed CBF dataset.

5.3.1 Ablation Analysis of Geometric Features. The geometric features directly input into our BRepFormer network include the UV domain geometric features and the attribute features of the B-rep. In this ablation study, a model with complete input features was used as the baseline, and then each of the three input features was removed one by one to generate the ablation models. The results shown in Table 6 indicate that the removal of any input feature leads to a decrease in feature recognition accuracy. Among them, the removal of attribute features causes the most significant performance drop, while the removal of UV domain geometric features has a less noticeable impact. This suggests that attribute features are more important than the input geometric features within our network architecture.

Table 6: Impact of Removing Different Geometric Features on BRepFormer Performance

Input  Accuracy(%)  Class Acc(%)  mIoU(%)
Full (baseline)  94.66  94.97  87.48
w/o Face Attr  92.34 (-2.32)  91.23 (-3.74)  84.10 (-3.38)
w/o Edge Attr  91.01 (-3.65)  90.01 (-4.96)  81.69 (-5.79)
w/o UV-grid  92.95 (-1.71)  91.85 (-3.12)  85.05 (-2.43)

5.3.2 Ablation Analysis of Topological Features. In our BRepFormer network, we focused on extracting four key topological features from CAD models to delineate the comprehensive relational structure of B-rep models. To demonstrate the effectiveness of the extracted features, we established a baseline model using the complete topological features and then conducted ablation experiments by progressively removing topological features on our CBF dataset. As shown in Table 7, the network's accuracy decreased successively with the removal of each topological feature. This indicates that the global topological matrices help the neural network accurately understand complex 3D solid models.

Table 7: Impact of Removing Different Topological Features on BRepFormer Performance

Input  Accuracy(%)  Class Acc(%)  mIoU(%)
Full (baseline)  94.66  94.97  87.48
w/o M_d  93.51 (-1.15)  92.73 (-2.24)  86.28 (-1.20)
w/o M_d, M_a and M_c  93.33 (-1.33)  92.29 (-2.68)  85.95 (-1.53)
w/o M_d, M_a, M_c and M_e  93.22 (-1.44)  92.17 (-2.80)  85.60 (-1.88)

5.4 Geometric Recognition Presentation

This section presents a visual demonstration of our network's performance in geometric feature recognition. Figure 4 illustrates the network's capability in identifying machining features, with the green highlights indicating the features that have been successfully recognized by the network. Figure 5, meanwhile, displays the outcomes of our network's feature recognition on more complex geometric shapes.

Figure 4: Examples of recognized machining features (highlighted in green) in B-rep models

Figure 5: Examples of recognized complex features (highlighted in green) in B-rep models

6 Conclusion

In this paper, we introduce BRepFormer, a novel geometric feature recognition network based on the transformer architecture. Our network effectively extracts both geometric and topological information from CAD models, and incorporates an attention bias that integrates geometric and topological features to regulate information propagation within the transformer module. Furthermore, we propose the CBF dataset, which features more complex geometric and topological representations and is specifically designed for complex feature recognition tasks. Finally, BRepFormer achieves state-of-the-art accuracy on the public MFInstSeg and MFTRCAD datasets, as well as our CBF dataset, thereby demonstrating its superiority in both machining feature recognition and complex geometric feature recognition tasks.
References

[1] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. 2023. GQA: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245 (2023).
[2] Ayman Al-Kababji, Faycal Bensaali, and Sarada Prasad Dakua. 2022. Scheduling techniques for liver segmentation: ReduceLROnPlateau vs OneCycleLR. In International conference on intelligent systems and pattern recognition. Springer, 204–212.
[3] Shaked Brody, Uri Alon, and Eran Yahav. 2021. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491 (2021).
[4] Weijuan Cao, Trevor Robinson, Yang Hua, Flavien Boussuge, Andrew R Colligan, and Wanbin Pan. 2020. Graph representation of 3D CAD models for machining feature recognition with deep learning. In International design engineering technical conferences and computers and information in engineering conference, Vol. 84003. American Society of Mechanical Engineers, V11AT11A003.
[5] AKW Chan and Keith Case. 1994. Process planning by recognizing and learning machining features. International Journal of Computer Integrated Manufacturing 7, 2 (1994), 77–99.
[6] Andrew R Colligan, Trevor T Robinson, Declan C Nolan, Yang Hua, and Weijuan Cao. 2022. Hierarchical CADNet: Learning from B-reps for machining feature recognition. Computer-Aided Design 147 (2022), 103226.
[7] Iain A Donaldson and Jonathan R Corney. 1993. Rule-based feature recognition for 2.5D machined components. International Journal of Computer Integrated Manufacturing 6, 1-2 (1993), 51–64.
[8] Yutong Feng, Yifan Feng, Haoxuan You, Xibin Zhao, and Yue Gao. 2019. MeshNet: Mesh neural network for 3D shape representation. In Proceedings of the AAAI conference on artificial intelligence, Vol. 33. 8279–8286.
[9] Shuming Gao and Jami J Shah. 1998. Automatic recognition of interacting machining features based on minimal condition subgraph. Computer-Aided Design 30, 9 (1998), 727–739.
[10] Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. Advances in neural information processing systems 30 (2017).
[11] JungHyun Han and Aristides AG Requicha. 1997. Integration of feature based design and feature recognition. Computer-Aided Design 29, 5 (1997), 393–403.
[12] Mark Richard Henderson. 1984. Extraction of feature information from three-dimensional CAD data. Purdue University.
[13] Mark R Henderson and David C Anderson. 1984. Computer recognition and extraction of form features: a CAD/CAM link. Computers in industry 5, 4 (1984), 329–339.
[14] Xiaoshui Huang, Zhou Huang, Sheng Li, Wentao Qu, Tong He, Yuenan Hou, Yifan Zuo, and Wanli Ouyang. 2024. Frozen CLIP Transformer Is an Efficient Point Cloud Encoder. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38. 2382–2390.
[15] Xiaoshui Huang, Zhou Huang, Yifan Zuo, Yongshun Gong, Chengdong Zhang, Deyang Liu, and Yuming Fang. 2025. PSReg: Prior-guided Sparse Mixture of Experts for Point Cloud Registration. arXiv preprint arXiv:2501.07762 (2025).
[16] Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph G Lambourne, Karl DD Willis, Thomas Davies, Hooman Shayani, and Nigel Morris. 2021. UV-Net: Learning from boundary representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 11703–11712.
[17] Jia-Le Jia, Sheng-Wen Zhang, You-Ren Cao, Xiao-Long Qi, and Wei Zhu. 2023. Machining feature recognition method based on improved mesh neural network. Iranian Journal of Science and Technology, Transactions of Mechanical Engineering 47, 4 (2023), 2045–2058.
[18] Sanjay Joshi and Tien-Chien Chang. 1988. Graph-based heuristics for recognition of machined features from a 3D solid model. Computer-aided design 20, 2 (1988), 58–66.
[19] Yong Se Kim and DJ Wilde. 1992. A convergent convex decomposition of polyhedral objects. (1992).
[20] Joseph G Lambourne, Karl DD Willis, Pradeep Kumar Jayaraman, Aditya Sanghi, Peter Meltzer, and Hooman Shayani. 2021. BRepNet: A topological message passing system for solid models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 12773–12782.
[21] Jinwon Lee, Changmo Yeo, Sang-Uk Cheon, Jun Hwan Park, and Duhwan Mun. 2023. BRepGAT: Graph neural network to segment machining feature faces in a B-rep model. Journal of Computational Design and Engineering 10, 6 (2023), 2384–2400.
[22] Ruoshan Lei, Hongjin Wu, and Yibing Peng. 2022. MFPointNet: A point cloud-based neural network using selective downsampling layer for machining feature recognition. Machines 10, 12 (2022), 1165.
[23] Guohao Li, Chenxin Xiong, Ali Thabet, and Bernard Ghanem. 2020. DeeperGCN: All you need to train deeper GCNs. arXiv preprint arXiv:2006.07739 (2020).
[24] Haiyan Li, Yunbao Huang, Yuhang Sun, and Liping Chen. 2015. Hint-based generic shape feature recognition from three-dimensional B-rep models. Advances in Mechanical Engineering 7, 4 (2015), 1687814015582082.
[25] Yaolong Ma, Yingzhong Zhang, and Xiaofang Luo. 2019. Automatic recognition of machining features based on point cloud data using convolution neural networks. In Proceedings of the 2019 international conference on artificial intelligence and computer science. 229–235.
[26] Fangwei Ning, Yan Shi, Maolin Cai, and Weiqing Xu. 2023. Part machining feature recognition based on a deep learning method. Journal of Intelligent Manufacturing 34, 2 (2023), 809–821.
[27] Dheeraj Peddireddy, Xingyu Fu, Anirudh Shankar, Haobo Wang, Byung Gun Joung, Vaneet Aggarwal, John W Sutherland, and Martin Byung-Guk Jun. 2021. Identifying manufacturability and machining processes using deep 3D convolutional networks. Journal of Manufacturing Processes 64 (2021), 1336–1348.
[28] Dheeraj Peddireddy, Xingyu Fu, Haobo Wang, Byung Gun Joung, Vaneet Aggarwal, John W Sutherland, and Martin Byung-Guk Jun. 2020. Deep learning based approach for identifying conventional machining processes from CAD data. Procedia Manufacturing 48 (2020), 915–925.
[29] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. 2017. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems 30 (2017).
[30] Peizhi Shi, Qunfen Qi, Yuchu Qin, Paul J Scott, and Xiangqian Jiang. 2020. A novel learning-based feature recognition method using multiple sectional view representation. Journal of Intelligent Manufacturing 31 (2020), 1291–1309.
[31] Yang Shi, Yicha Zhang, and Ramy Harik. 2020. Manufacturing feature recognition with a 2D convolutional neural network. CIRP Journal of Manufacturing Science and Technology 30 (2020), 36–57.
[32] Jan H Vandenbrande and Aristides AG Requicha. 1993. Spatial reasoning for the automatic recognition of machinable features in solid models. IEEE Transactions on Pattern Analysis and Machine Intelligence 15, 12 (1993), 1269–1285.
[33] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. 2019. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (TOG) 38, 5 (2019), 1–12.
[34] Tony C Woo. 1982. Feature extraction by volume decomposition. In Proc. Conf. CAD/CAM Tech. Mech. Eng., Vol. 76.
[35] Hongjin Wu, Ruoshan Lei, Yibing Peng, and Liang Gao. 2024. AAGNet: A graph neural network towards multi-task machining feature recognition. Robotics and Computer-Integrated Manufacturing 86 (2024), 102661.
[36] Mingyuan Xia, Xianwen Zhao, and Xiaofeng Hu. 2024. Machining feature and topological relationship recognition based on a multi-task graph neural network. Advanced Engineering Informatics 62 (2024), 102721.
[37] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 (2018).
[38] Zongyi Xu, Xiaoshui Huang, Bo Yuan, Yangfu Wang, Qianni Zhang, Weisheng Li, and Xinbo Gao. 2024. Retrieval-and-alignment based large-scale indoor point cloud semantic segmentation. Science China Information Sciences 67, 4 (2024), 142104.
[39] Hongxiang Yan, Chunping Yan, Ping Yan, Yuping Hu, and Shibin Liu. 2023. Manufacturing feature recognition method based on graph and minimum non-intersection feature volume suppression. The International Journal of Advanced Manufacturing Technology 125, 11 (2023), 5713–5732.
[40] Xinhua Yao, Di Wang, Tao Yu, Congcong Luan, and Jianzhong Fu. 2023. A machining feature recognition approach based on hierarchical neural network for multi-feature point cloud models. Journal of Intelligent Manufacturing 34, 6 (2023), 2599–2610.
[41] Yu Zhang, Yongsheng Zhang, Kaiwen He, Dongsheng Li, Xun Xu, and Yadong Gong. 2022. Intelligent feature recognition for STEP-NC-compliant manufacturing based on artificial bee colony algorithm and back propagation neural network. Journal of Manufacturing Systems 62 (2022), 792–799.
[42] Zhibo Zhang, Prakhar Jaiswal, and Rahul Rai. 2018. FeatureNet: Machining feature recognition based on 3D convolution neural network. Computer-Aided Design 101 (2018), 12–22.
