fully connected layer as 1x1 convolution

This post looks at the idea of 1x1 convolutions in convolutional neural networks and points out that what is conventionally called a fully connected layer can be viewed as a special case of 1x1 convolution. It also notes that convolutional networks do not require a fixed input size, and explains how the 1x1-convolution view lets a network handle inputs of different sizes.


Quoting Yann LeCun’s post:

In Convolutional Nets, there is no such thing as “fully-connected layers”. There are only convolution layers with 1x1 convolution kernels and a full connection table.

It’s a too-rarely-understood fact that ConvNets don’t need to have a fixed-size input. You can train them on inputs that happen to produce a single output vector (with no spatial extent), and then apply them to larger images. Instead of a single output vector, you then get a spatial map of output vectors. Each vector sees input windows at different locations on the input.

In that scenario, the “fully connected layers” really act as 1x1 convolutions.
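
To make the equivalence concrete, here is a minimal sketch (PyTorch is assumed; the 256-channel input and 10-class output are arbitrary illustrative sizes, not from the original post). It copies the weights of a fully connected head into a 1x1 convolution, checks that the two agree on a 1x1 feature map, and then applies the convolutional form to a larger feature map to get a spatial map of output vectors.

import torch
import torch.nn as nn

torch.manual_seed(0)

# A "fully connected" classifier head: 256-dim feature vector -> 10 classes.
fc = nn.Linear(256, 10)

# The same weights expressed as a 1x1 convolution over a 256-channel map.
conv1x1 = nn.Conv2d(256, 10, kernel_size=1)
conv1x1.weight.data.copy_(fc.weight.data.view(10, 256, 1, 1))
conv1x1.bias.data.copy_(fc.bias.data)

# On a 1x1 spatial input the two forms are numerically identical.
x = torch.randn(1, 256, 1, 1)
out_fc = fc(x.flatten(1))           # shape (1, 10)
out_conv = conv1x1(x).flatten(1)    # shape (1, 10)
print(torch.allclose(out_fc, out_conv, atol=1e-6))  # True

# On a larger feature map the FC layer no longer fits, but the 1x1
# convolution produces one 10-dim output per spatial location.
x_large = torch.randn(1, 256, 7, 7)
out_map = conv1x1(x_large)
print(out_map.shape)  # torch.Size([1, 10, 7, 7])

Each of the 7x7 output positions corresponds to the classifier applied to a different input window, which is exactly the "spatial map of output vectors" described in the quote.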

