3714 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 69, 2021
Deep Networks for Direction-of-Arrival
Estimation in Low SNR
Georgios K. Papageorgiou, Member, IEEE, Mathini Sellathurai, Senior Member, IEEE, and Yonina C. Eldar, Fellow, IEEE
Abstract—In this work, we consider direction-of-arrival (DoA) estimation in the presence of extreme noise using Deep Learning (DL). In particular, we introduce a Convolutional Neural Network (CNN) that predicts angular directions using the sample covariance matrix estimate. The network is trained from multi-channel data of the true array manifold matrix in the low signal-to-noise-ratio (SNR) regime. By adopting an on-grid approach, we model the problem as a multi-label classification task and train the CNN to predict DoAs across all SNRs. The proposed architecture demonstrates enhanced robustness in the presence of noise, and resilience to a relatively small number of snapshots. Moreover, it is able to resolve angles within the grid resolution. Experimental results demonstrate significant performance gains in the low-SNR regime compared to state-of-the-art methods and without the requirement of any parameter tuning in both cases of correlated and uncorrelated sources. Finally, we relax the assumption that the number of sources is known a priori and present a training method, where the CNN learns to infer their number and predict the DoAs with high confidence. The increased robustness of the proposed solution is highly desirable in challenging scenarios that arise in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
Index Terms—Direction-of-arrival (DoA) estimation, convolutional neural network (CNN), deep learning (DL), multilabel classification, array signal processing.
I. INTRODUCTION

DIRECTION-OF-ARRIVAL (DoA) estimation has been at the forefront of research activity for many decades, due to the plethora of applications ranging from radar and wireless communications to sonar and acoustics [1], with localization being one of the most significant ones. Estimation of the angular directions is possible with the use of multiple sensors in a specified geometric configuration, e.g., linear, rectangular and circular. Efficient use of the observations from multiple sensors enables the DoA estimation of several sources, depending on the number of array sensors. There are two major categories in DoA estimation: the overdetermined case, where the number of sources is less than the number of array sensors, and the underdetermined case, where the number of sources is equal to or greater than the number of sensors [2], [3]. In this work, we investigate the first category. Moreover, we focus on DoA estimation in low SNR, which poses several challenges and is critically important in many real-world situations.

Manuscript received November 16, 2020; revised April 26, 2021; accepted June 1, 2021. Date of publication June 16, 2021; date of current version July 13, 2021. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Florian Roemer. This work was supported in part by UK's EPSRC Grant EP/P009670/1. (Corresponding author: Georgios Papageorgiou.)
Georgios K. Papageorgiou and Mathini Sellathurai are with the School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, U.K. (e-mail: g.papageorgiou@hw.ac.uk; m.sellathurai@hw.ac.uk).
Yonina C. Eldar is with the Electrical Engineering, Technion-Israel Institute
This article has supplementary downloadable material available at https://doi.org/10.1109/TSP.2021.3089927, provided by the authors.
Digital Object Identifier 10.1109/TSP.2021.3089927
One of the first methods introduced for DoA estimation is MUltiple SIgnal Classification (MUSIC) [4], with several other variants following soon after. The MUSIC estimator belongs to the class of subspace-based techniques, which attempt to separate the signal and noise subspaces; angle estimation follows from the so-called MUSIC pseudo-spectra over a specified grid, where the corresponding peaks of the pseudo-spectra are selected. Estimation of signal parameters via rotational invariance techniques (ESPRIT) [5] and its variants is another successful example [6]. Unitary ESPRIT [6], [7], which incorporates forward-backward averaging, is a notable variant that leads to improved performance compared to ESPRIT, especially in cases of correlated source signals. A significant step towards improvement in DoA estimation was the development of Root-MUltiple SIgnal Classification (R-MUSIC), which estimates the angular directions from the solutions of higher-order polynomials [8]. The aforementioned methods are covariance-based techniques that require a sufficient number of data snapshots to accurately estimate the DoAs, particularly in low SNRs. Furthermore, they often assume that the number of sources is known, which is not the case in many practical applications.
During the past decade, Compressed Sensing (CS) methodologies have also been used to address DoA estimation [9], [10]. These methods exploit the sparse characteristic of the signal sources in the spatial domain (angles). CS techniques are generally separated into three main categories: a) on-grid, b) off-grid and c) grid-less methods [11], [12]. Grid-less approaches achieve better performance at the expense of very high computational complexity [13]. Off-grid and on-grid methods offer a more balanced solution with less computational demands at the expense of negligible loss in performance due to the grid mismatch problem [14]. DoA estimates are obtained after the solution of sparse minimization tasks, for which two major approaches are identified: i) greedy methods based on the ℓ0 pseudo-norm and ii) convex relaxations based on the ℓ1-norm. Notable is the method known as ℓ2,1-SVD, which first performs a dimensionality reduction technique to the received signal data and then solves the ℓ2,1-norm minimization task in the reduced dimension with significantly less computational burden. It was introduced in [15] and was also employed later in [16]. One of the major disadvantages that all these methods have in common is that tuning of one or more parameters (which depends on the number of snapshots, the SNR or both) is required to guarantee good performance; moreover, the DoA estimates are often extremely sensitive to the tuning of these parameters.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
A very recent approach to DoA estimation is via the use of Deep Learning (DL) [17], [18]. DL-based methods enjoy several advantages over optimization-based ones: a) after training the network no optimization is required and the solution is the result of simple operations (multiplications and additions); b) they do not require any specific tuning of parameters, in contrast to optimization-based techniques, whose solution strongly depends on the tuning of those parameters; and c) they demonstrate resilience to data imperfections, e.g., using fewer snapshots, performing well in low SNR. A deep neural network (DNN) with fully connected (FC) layers was employed in [19] for DoA classification of two targets using the signal covariance matrix. However, the reported results indicate poor DoA estimation results in high SNR. The authors in [20] proposed a DNN for channel estimation in massive MIMO systems; however, they focus on the high SNR regime and on channel estimation performance. A multilayer autoencoder with a series of parallel multilayer classifiers, i.e., a multi-layer perceptron (MLP), was employed in [21] with focus on the robustness to array imperfections. The MLP architecture addresses DoA estimation of only two sources via the use of a multitask autoencoder that acts as a group of spatial filters followed by a series of parallel multi-layer DNNs for spatial spectrum estimation. The network is trained at each individual SNR.¹ A deep Convolutional Neural Network (CNN) that was also trained in low SNRs was proposed in [22]. However, the method did not demonstrate significant performance gains in terms of DoA estimation, due to the adoption of 1-dimensional (1D) filters (convolutions). A CNN for broadband DoA estimation in the context of speech processing was proposed in [23], [24]. In contrast to these works, we study the case of narrow-band DoA estimation. The authors in [25], [26] employed DNNs for the range-based localization of ships. Their approach is based on distance estimation, whereas we focus on the estimation of the signal's directions. A DL-based method for pseudo-spectra estimation was published in [27], where the authors also proposed an extension to estimate the angles. However, the demonstrated results were in high SNR and the number of sources was assumed to be known. In the context of acoustics, the authors in [28] proposed a DNN for beamforming with a single-snapshot sample covariance matrix (SCM), which was later extended in [29], [30] to include slightly more snapshots and sources. Such an approach cannot be adopted in low SNR scenarios, where the number of snapshots and sensors needs to be considerably higher.

To the best of our knowledge, no specific DL-based technique was developed for robust DoA estimation in the low SNR regime. Another disadvantage of methods such as [21], [22] is that they are trained for a specific number of snapshots, which leads to significant deviations for different amounts of snapshots and varying SNR.

¹ https://github.com/LiuzmNUDT/DNN-DOA
The scope of this work is to fill the gap in the literature of DoA estimation in the low SNR with the use of DL. Due to large deviations of the SCM estimates from the true manifold matrix in low SNR, DoA estimation becomes really challenging and the majority of the methods fail to demonstrate the desired robustness. In this work, we contribute towards this direction by: a) exploiting a deep network with 2D convolutional layers, which are well known for their excellent feature extraction properties; b) using multi-channel data, i.e., the real part, imaginary part, and phase of the complex-valued covariance matrix entries (similar to how they are used in image processing); and c) employing dropout layers as a means of regularization, thus improving the generalization of the estimator and avoiding over-fitting. Our contributions are summarized as follows:
• We introduce a deep CNN trained on multi-channel data, which are explicitly formed from the complex-valued data of the true covariance matrix. The first and second channels are formed by taking the real and imaginary parts, respectively, from the complex-valued entries of the covariance matrix. The third channel includes the phase from the complex-valued entries of the covariance matrix (in [−π, π]). The proposed CNN employs 2D convolutional layers and is trained to directly predict the angular directions of multiple sources. Testing is performed using the SCM estimate for any amount of snapshots. The use of multi-channel data along with the adoption of 2D convolutional layers enables the extraction of features from the input data, leading to more robust DoA estimation in low SNR. A discretization approach (on-grid) is adopted for the desirable angular (spatial) region and the direction estimation task is modeled as a multi-label classification one.
• Efficient methods for training the proposed network are presented. In particular, we train the CNN across a range of low SNRs and demonstrate that it can successfully predict DoAs in high SNR as well.
• We introduce a training method for a varying number of sources. Subsequently, the proposed CNN infers the number of sources from the received data, while predicting the DoAs.
• The performance of the proposed solution is evaluated over an extensive set of simulated experiments, where it is compared against state-of-the-art methods in various experimental set-ups with off-grid angles. Additionally, comparison to the Cramér-Rao lower bound (CRLB) is provided as a benchmark.
The results indicate that the proposed CNN: a) outperforms its competitors in DoA estimation in the low SNR regime; b) is resilient in estimation even for a small collection of snapshots regardless of the angular separation of the sources; c) demonstrates enhanced robustness in case of SNR mismatches; and d) is able to infer both the number of sources and the DoAs with very small errors and a high confidence level.

The rest of the paper is organized as follows: in Section II, we present the signal model. In Section III, we introduce the proposed CNN for DoA prediction and in Section IV we discuss the adopted training approach. Section V presents simulation results and in Section VI, we summarize and highlight our conclusions.
Notation: Throughout the paper the following notation is adopted: X denotes a set and |X| its cardinality; X is a matrix, x is a vector and x is a scalar. The (i, j)-th element of a matrix X is denoted (X)_{i,j} and the i-th entry of a vector x is x(i). The i-th example of the vector x is denoted as x^(i). The imaginary unit is j (so that j² = −1). The conjugate transpose of a matrix is (·)^H; its conjugate is (·)^* and its transpose is (·)^T. The N × N identity matrix is I_N. The white circularly-symmetric Gaussian distribution with mean m and covariance C is denoted by CN(m, C). The convolution operator is denoted as ∗. The floor of a number α is written as ⌊α⌋. Functions are denoted by lower case italics, e.g., f(·). The symbol E[·] is the expectation operator; Re{·}, Im{·} denote the real and the imaginary parts of a complex scalar/vector/matrix, respectively. Finally, the phase of the complex-valued variable α is denoted by ∠{α}.
II. SIGNAL MODEL

The standard model for an N-element sensor array in the narrow-band mode, with K far-field sources present, is:

    y(t) = Σ_{k=1}^{K} a(θ_k) s_k(t) + e(t) = A(θ) s(t) + e(t),   t = 1, ..., T.   (1)

Here A(θ) = [a(θ_1), a(θ_2), ..., a(θ_K)] is the N × K array manifold matrix, θ = [θ_1, ..., θ_K]^T is the vector of (unknown) source directions and T is the total number of collected snapshots; s(t) = [s_1(t), ..., s_K(t)]^T and e(t) denote the transmitted signal and additive noise vectors at sample index t, respectively. The model (1) is generic in the sense that it does not depend on the array geometry; however, in this work we will consider a uniform linear array (ULA) configuration for simplicity.²
Thus, the columns of the array manifold matrix are expressed as

    a(θ_k) = [1, e^{j(2πd/λ) sin(θ_k)}, ..., e^{j(2πd/λ) sin(θ_k)(N−1)}]^T,   (2)

where d is the array interelement distance and λ = c/f is the wavelength at carrier frequency f, with c the speed of light/sound. In this case, A(θ) becomes a Vandermonde matrix. The following assumptions are typical in the literature of narrow-band DoA estimation:
A1) The source DoAs are distinct.
A2) Each source signal follows the unconditional-model assumption (UMA) in [31], which assumes that the transmitted signal is randomly generated (Gaussian signaling). Moreover, the sources are uncorrelated, leading to a diagonal source covariance matrix: R_s = E[s(t)s^H(t)] = diag(σ²_1, ..., σ²_K).
A3) The additive noise values are independent and identically distributed (i.i.d.) zero-mean white circularly-symmetric Gaussian, i.e., e(t) ∼ CN(0, σ²_e I_N), and uncorrelated from the sources.
A4) There is no temporal correlation between each snapshot.

² The analysis and methodology also hold for any other array configuration, e.g., non-uniform linear or rectangular.
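The steering vector of (2) and the resulting manifold matrix A(θ) can be sketched in a few lines of NumPy. This is a minimal illustration, assuming half-wavelength spacing d = λ/2; the function names are ours, not the paper's.

```python
import numpy as np

def ula_steering(theta_deg, N, d_over_lambda=0.5):
    """Steering vector of eq. (2): a(θ) = [1, e^{j2π(d/λ)sinθ}, ..., e^{j2π(d/λ)sinθ(N−1)}]^T."""
    theta = np.deg2rad(theta_deg)
    n = np.arange(N)  # sensor indices 0, ..., N-1
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.sin(theta) * n)

def manifold(thetas_deg, N, d_over_lambda=0.5):
    """A(θ) = [a(θ_1), ..., a(θ_K)], an N x K Vandermonde matrix."""
    return np.column_stack([ula_steering(t, N, d_over_lambda) for t in thetas_deg])

A = manifold([-10.0, 20.0], N=16)  # two illustrative sources, K = 2
print(A.shape)  # (16, 2)
```

Note that every entry has unit modulus and the first row is all ones, as the Vandermonde structure of (2) dictates.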
We are interested in the estimation of the unknown DoAs θ from the measurements y(1), ..., y(T). Considering A1–A4, the received signal's covariance matrix is given by:

    R_y = E[y(t) y^H(t)] = A(θ) R_s A^H(θ) + σ²_e I_N.   (3)

The statistical richness of R_y in (3) allows for the estimation of up to K ≤ N − 1 distinct DoAs. However, in practice, the matrix in (3) is unknown and is replaced by its sample estimate

    R̂_y = (1/T) Σ_{t=1}^{T} y(t) y^H(t),   (4)

which is an unbiased estimator of R_y.

We note that assumptions A2–A4 are only used for generating the data in the training procedure of the proposed CNN. The network can make predictions even if these assumptions are violated. However, as the deviation of the test data from the training data becomes larger, estimation becomes less accurate. Violation of these assumptions has a different impact on classical estimators in the field of DoA estimation as well.
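The generative model (1)–(4) under A2–A3 can be simulated directly. The sketch below draws T snapshots with uncorrelated circularly-symmetric Gaussian sources and noise, then forms the SCM of (4); the angles, equal source powers, SNR and snapshot count are illustrative choices of ours, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 16, 2, 200                          # sensors, sources, snapshots (illustrative)
thetas = np.deg2rad([-10.0, 20.0])            # assumed source directions
n = np.arange(N)[:, None]
A = np.exp(1j * np.pi * n * np.sin(thetas))   # ULA manifold of (2), d = λ/2

snr_db, noise_var = -10.0, 1.0
sigma_s2 = noise_var * 10 ** (snr_db / 10)    # per-source power (equal powers assumed)
S = np.sqrt(sigma_s2 / 2) * (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))
E = np.sqrt(noise_var / 2) * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
Y = A @ S + E                                 # snapshots y(1), ..., y(T) per model (1)

R_hat = (Y @ Y.conj().T) / T                  # sample covariance, eq. (4)
R_true = sigma_s2 * (A @ A.conj().T) + noise_var * np.eye(N)  # true covariance, eq. (3)
print(np.linalg.norm(R_hat - R_true) / np.linalg.norm(R_true))
```

At −10 dB SNR the relative deviation of R̂_y from R_y is substantial even with T = 200 snapshots, which is exactly the regime the paper targets.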
III. A DEEP CONVOLUTIONAL NEURAL NETWORK FOR DOA ESTIMATION

In this section, we formulate DoA estimation as a multilabel classification task. In Section III-A, we present the data management and labeling approach, whereas Section III-B is devoted to the description of the CNN's architecture that learns to predict the DoAs. The convolution layers perform the feature extraction from the multi-channel input data, and, subsequently, the FC layers use the output of the convolution layers to infer the DoA estimates using a pre-selected grid.
A. Data Management and Labeling

DoA prediction is modeled as a multilabel classification task. For φ_max ∈ {1°, ..., 90°} we consider 2G + 1 discrete points of resolution ρ (in degrees), which define a grid G = {−Gρ, ..., −ρ, 0°, ρ, ..., Gρ} ⊂ [−90°, 90°], such that φ_max = Gρ. At each SNR level, K angles are selected from the set G and the respective covariance matrix is calculated according to (3). The input data X of the proposed CNN is a real-valued N × N × 3 matrix, whose third dimension represents different "channels." In particular, the first and second channels are the real and imaginary parts of R_y, i.e., X_{:,:,1} = Re{R_y} and X_{:,:,2} = Im{R_y}, whereas the third channel corresponds to the phase entries, i.e., X_{:,:,3} = ∠{R_y}. Thus, the input to the CNN is a collection of D data points defined as X = {X^(1), ..., X^(D)}.
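The three-channel construction above amounts to stacking the real part, imaginary part, and phase of the covariance matrix along a third axis. A minimal sketch (with an arbitrary Hermitian stand-in for R_y; the helper name is ours):

```python
import numpy as np

def to_channels(R):
    """Stack Re{R}, Im{R} and ∠{R} into the real-valued N x N x 3 CNN input."""
    return np.stack([R.real, R.imag, np.angle(R)], axis=-1)

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
R = M @ M.conj().T / 8          # Hermitian PSD stand-in for R_y
X = to_channels(R)
print(X.shape)  # (8, 8, 3)
```

The phase channel lies in [−π, π] by construction (NumPy's `angle` returns the principal value), matching the range stated in the contributions.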
Next, for each example X^(i), the K training angles in G are transformed into a binary vector with K ones (the rest are zeros). For example, if φ_max = 60° and the desired resolution is ρ = 1°, the grid becomes G = {−60°, ..., −1°, 0°, 1°, ..., 60°} with |G| = 121 grid points; moreover, the angle pair {−60°, −59°} corresponds to the 121 × 1 binary vector z = [1, 1, 0, ..., 0]^T, which serves as the corresponding label/output of the proposed CNN. Thus, the i-th label z^(i) belongs to the set Z = {0, 1}^{2G+1} according to the described process. Hence, the i-th training example consists of pairs in the form (X^(i), z^(i)), leading to the training data set D = {(X^(1), z^(1)), (X^(2), z^(2)), ..., (X^(D), z^(D))} of size D.
According to the well-known universal approximation theorem [32], a feed-forward network with a single hidden layer processed by a multilayer perceptron can approximate continuous functions on a compact subset of R^n. The goal of this multilabel classification task is to induce an ML hypothesis defined as a function f from the input space to the output space, i.e., f : R^{N×N×3} → Z. Although the true covariance matrix is used for training the network, for its testing and evaluation the sample covariance in (4) is used, since the former is unknown. To this end, during the testing phase of the CNN all input examples can be considered as "unseen data" to the training.
B. The Proposed CNN's Architecture

The nonlinear function f is parametrized by a CNN of 8 layers, i.e.,

    f(X) = f_8(f_7(... f_1(X))) = z.   (5)

The architecture of the proposed CNN is based on the standard convolutional structures used in the literature of image processing [33], [34], with some modifications that are required due to the nature of our problem. Each function {f_i(·)}_{i=1,...,4} represents a series of convolutional layers: a 2D convolutional layer of n_C = 256 filters, followed by a batch normalization layer [35] and a rectified linear unit (ReLU) layer, i.e., the nonlinear activation function ReLU(x) = max(0, x) applied element-wise to the variables of the previous layer. Additionally, after the ReLU layer of f_4(·) a flatten layer is used, which shapes the tensor-valued output of the final convolutional layer into a vector. For the kernel of size κ × κ, we used κ = 3 for f_1(·) and κ = 2 for the rest of the convolutional layers. The stride δ that we used is δ = 2 for f_1(·) and δ = 1 for the rest of the convolutional layers (no padding). Hence, for each one of the n_C filters the mathematical expression of the convolution operation at the first layer, with input data X ∈ R^{N×N×3} and kernel K ∈ R^{κ×κ×3}, is a 2D matrix of dimension M × M (output of the layer per filter) given by:

    (X ∗ K)_{m,n} = Σ_{i=1}^{κ} Σ_{j=1}^{κ} Σ_{k=1}^{3} K_{i,j,k} X_{δ(m−1)+i, δ(n−1)+j, k},   (6)

for m, n = 1, ..., M, where M = ⌊(N − κ)/δ⌋ + 1. Thus, the convolution operation of the q-th filter at the ℓ-th convolutional layer has the following parameters:
• Input: X^[ℓ−1] of size M^[ℓ−1] × M^[ℓ−1] × n_C^[ℓ−1], with X^[0] = X, M^[0] = N and n_C^[0] = 3;
• Filter: kernel K_q^[ℓ] of dimension κ^[ℓ] × κ^[ℓ] × n_C^[ℓ−1];
• Stride: δ^[ℓ];
• Bias: b_q^[ℓ];
• Output: X_q^[ℓ] = X^[ℓ−1] ∗ K_q^[ℓ] of size M^[ℓ] × M^[ℓ],

and is the M^[ℓ] × M^[ℓ] matrix given by:

    (X^[ℓ−1] ∗ K_q^[ℓ])_{m,n} = Σ_{i=1}^{κ^[ℓ]} Σ_{j=1}^{κ^[ℓ]} Σ_{k=1}^{n_C^[ℓ−1]} K_{i,j,k}^{q,[ℓ]} X^[ℓ−1]_{δ(m−1)+i, δ(n−1)+j, k} + b_q^[ℓ],   (7)

for m, n = 1, ..., M and q = 1, 2, ..., n_C^[ℓ]. The collection of the outputs in (7) for all filters q = 1, 2, ..., n_C^[ℓ] leads to the tensor X^[ℓ] of dimension M^[ℓ] × M^[ℓ] × n_C^[ℓ]. Thus, the total number of learned parameters at the ℓ-th layer is (κ^[ℓ] × κ^[ℓ] × n_C^[ℓ−1]) × n_C^[ℓ] for the filters, plus n_C^[ℓ] for the biases. Pooling layers were not used (although tested), since the loss of information resulted in poor performance.
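The per-filter operation in (6)–(7) and the output-size formula M = ⌊(N − κ)/δ⌋ + 1 can be checked with a direct loop-based NumPy sketch (a single filter, valid padding; this is an illustration of the formulas, not the paper's implementation, which uses n_C = 256 such filters per layer).

```python
import numpy as np

def conv2d(X, K, b=0.0, delta=1):
    """Single-filter 2D convolution of eq. (7): valid padding, stride δ,
    summed over the input channels. Output is M x M, M = ⌊(N-κ)/δ⌋ + 1."""
    N = X.shape[0]
    kappa = K.shape[0]
    M = (N - kappa) // delta + 1
    out = np.zeros((M, M))
    for m in range(M):
        for n in range(M):
            patch = X[delta * m : delta * m + kappa, delta * n : delta * n + kappa, :]
            out[m, n] = np.sum(K * patch) + b   # triple sum over i, j, k plus bias
    return out

rng = np.random.default_rng(2)
X = rng.standard_normal((16, 16, 3))   # first-layer input with N = 16
K1 = rng.standard_normal((3, 3, 3))    # κ = 3, as used for f_1(·)
print(conv2d(X, K1, delta=2).shape)    # (7, 7): ⌊(16-3)/2⌋ + 1 = 7
```

With the paper's first-layer settings (κ = 3, δ = 2) an N = 16 input yields a 7 × 7 map per filter; the subsequent layers (κ = 2, δ = 1) shrink each side by one.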
Thereafter, the FC layers follow. Each function {f_i(·)}_{i=5,6,7} is a dense layer with 4096, 2048 and 1024 neurons, respectively, followed by a ReLU layer and a Dropout layer. The latter randomly sets weights to zero with probability 30% (non-trainable parameters), so the network is forced to learn instead of memorizing the data. The Dropout layers also act as regularization to the learning process. The ℓ-th FC layer maps its input c^[ℓ−1] ∈ R^{V^[ℓ−1]} to the output c^[ℓ] ∈ R^{V^[ℓ]} via a set of weights W^[ℓ] ∈ R^{V^[ℓ] × V^[ℓ−1]} and biases b_FC^[ℓ] ∈ R^{V^[ℓ]}. Thus, the output of the ℓ-th FC layer (before nonlinear activation and dropout) is given by

    c^[ℓ] = W^[ℓ] c^[ℓ−1] + b_FC^[ℓ],   (8)

where the set of parameters ϑ^[ℓ] = {W^[ℓ], b_FC^[ℓ]} with V^[ℓ−1] × V^[ℓ] + V^[ℓ] entries (in total) is optimized during the training of the neural network. The final (output) layer, f_8(·), consists of a dense layer with 2G + 1 neurons followed by a Sigmoid layer, which applies the function s(x) = e^x/(e^x + 1) element-wise to the values of the previous layer and returns values in [0, 1]. The selection of the sigmoid function over the softmax is due to the presence of K labels, which could independently receive a value equal (during the training) or close (during the inference) to 1.
Thus, the output of the CNN is a probability at each entry of the predicted label, which for the input data X^(i) is expressed as

    p̂^(i) = f(X^(i)) = [p̂_1, ..., p̂_{2G+1}]^T.   (9)

The layout of the proposed CNN is depicted in Fig. 1.

The training of the CNN is performed offline in a supervised manner over the training data set D. In particular, since the adopted approach is a multilabel classification task, we attempt to optimize the set of all trainable parameters ϑ, whose updates are carried out via back-propagation, by minimizing the reconstruction error, i.e.:

    ϑ* = arg min_ϑ (1/D) Σ_{i=1}^{D} L(p̂^(i); z^(i)),   (10)
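The chunk cuts off before the loss L in (10) is specified; for a sigmoid-output multilabel classifier the standard choice is the element-wise binary cross-entropy, which we assume here for illustration. A sketch of the sigmoid output and this loss:

```python
import numpy as np

def sigmoid(x):
    """s(x) = e^x / (e^x + 1), the element-wise output activation of f_8."""
    return 1.0 / (1.0 + np.exp(-x))

def bce(p_hat, z, eps=1e-12):
    """Binary cross-entropy between predicted probabilities p̂ and label z
    (assumed form of L in (10); clipping avoids log(0))."""
    p = np.clip(p_hat, eps, 1 - eps)
    return -np.mean(z * np.log(p) + (1 - z) * np.log(1 - p))

# Illustrative pre-activations favouring the first two of 2G + 1 = 121 grid points
logits = np.array([4.0, 4.0] + [-4.0] * 119)
p_hat = sigmoid(logits)
z = np.zeros(121)
z[:2] = 1                      # label for the pair {-60°, -59°} at ρ = 1°
print(round(bce(p_hat, z), 4))  # small loss: prediction matches the label
```

Unlike softmax, each sigmoid output can independently approach 1, which is why K entries of p̂ can be simultaneously confident, consistent with the discussion above (9).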