Forums - MC Traffic Linux Connection Tracking - StackExchange
multicast traffic is able to poke a hole into the netfilter connection tracking system
[+2] [1] Martin
[2018-08-24 22:31:26]
[ iptables netfilter multicast ]
[ https://unix.stackexchange.com/questions/464731/multicast-traffic-is-able-to-poke-a-hole-into-the-netfilter-connection-tracking ]
I have an IPTV solution at home where the ISP sends me hundreds of large UDP datagrams per second from 10.4.4.5 port 10 to
239.3.5.3 port 10, i.e. it is using multicast. My current iptables configuration for ingress traffic is very simple:
# iptables-save -c
# Generated by iptables-save v1.6.0 on Sun Aug 26 12:51:11 2018
*nat
:PREROUTING ACCEPT [44137:4586148]
:INPUT ACCEPT [6290:1120016]
:OUTPUT ACCEPT [419:75595]
:POSTROUTING ACCEPT [98:8415]
[26464:2006874] -A POSTROUTING -o eth0 -m comment --comment SNAT -j MASQUERADE
COMMIT
# Completed on Sun Aug 26 12:51:11 2018
# Generated by iptables-save v1.6.0 on Sun Aug 26 12:51:11 2018
*filter
:INPUT DROP [72447:97366152]
:FORWARD ACCEPT [77426:101131642]
:OUTPUT ACCEPT [148:17652]
[17:787] -A INPUT -i lo -j ACCEPT
[333:78556] -A INPUT -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -m comment --comment "established/related connections" -j ACCEPT
COMMIT
# Completed on Sun Aug 26 12:51:11 2018
#
eth0 seen above is the ISP-facing NIC. Now the weird part: while this multicast traffic gets dropped according to the counters (the INPUT chain's default-policy counter increases by several MB/s), I do actually receive it in mplayer. The reason is that the multicast traffic seems to create a hole in the netfilter connection tracking system, which I can verify with conntrack -L. Example:
udp      17 29 src=10.4.4.5 dst=239.3.5.3 sport=10 dport=10 [UNREPLIED] src=239.3.5.3 dst=10.4.4.5 sport=10 dport=10 mark=0 use=1
Even if I execute conntrack -F, this entry reappears and I can see the video stream in mplayer. However, eventually (after ~5 minutes) the entry disappears and the stream immediately stops.
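One way to watch this entry appear and expire in real time is conntrack's event mode, filtered to the stream's destination address:
# conntrack -E -p udp --orig-dst 239.3.5.3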
# ip -br link
lo UNKNOWN 00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eth2 DOWN 00:a0:c9:77:96:bd <NO-CARRIER,BROADCAST,MULTICAST,UP>
eth1 UP 00:14:bf:5f:de:71 <BROADCAST,MULTICAST,UP,LOWER_UP>
eth0 UNKNOWN 00:50:8d:d1:4f:ee <BROADCAST,MULTICAST,UP,LOWER_UP>
eth3 DOWN 00:a0:c9:4b:21:a0 <NO-CARRIER,BROADCAST,MULTICAST,UP>
eth4 UP 00:20:e2:1e:2e:64 <BROADCAST,MULTICAST,UP,LOWER_UP>
eth5 DOWN 00:20:fc:1e:2e:65 <NO-CARRIER,BROADCAST,MULTICAST,UP>
eth6 DOWN 00:20:fc:1e:2e:8e <NO-CARRIER,BROADCAST,MULTICAST,UP>
eth7 UP 00:20:fc:1e:2f:67 <BROADCAST,MULTICAST,UP,LOWER_UP>
wlan0 UP 00:21:91:e3:20:20 <BROADCAST,MULTICAST,UP,LOWER_UP>
br0 UP 00:14:bf:5e:da:71 <BROADCAST,MULTICAST,UP,LOWER_UP>
# ip -br address
lo UNKNOWN 127.0.0.1/8
eth2 DOWN
eth1 UP
eth0 UNKNOWN 192.0.2.79/24
eth3 DOWN
eth4 UP
eth5 DOWN
eth6 DOWN
eth7 UP
wlan0 UP
br0 UP 192.168.0.1/24
#
As mentioned, eth0 is connected to the ISP. eth1 to eth7 plus wlan0 are part of a bridge named br0. The routing table looks like this:
# ip -4 r
default via 192.0.2.1 dev eth0
192.0.2.0/24 dev eth0 proto kernel scope link src 192.0.2.79
192.168.0.0/24 dev br0 proto kernel scope link src 192.168.0.1
#
Various network parameters for all the interfaces can be seen here:
# ip -4 netconf
ipv4 dev lo forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth2 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth1 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth0 forwarding on rp_filter off mc_forwarding on proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth3 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth4 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth5 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth6 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev eth7 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev wlan0 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
ipv4 dev br0 forwarding on rp_filter off mc_forwarding on proxy_neigh off ignore_routes_with_linkdown off
ipv4 all forwarding on rp_filter off mc_forwarding on proxy_neigh off ignore_routes_with_linkdown off
ipv4 default forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
#
Is this expected behavior? My first thought was that the conntrack module is able to inspect the IGMP "membership report" messages
and thus allows traffic to 239.3.5.3, but that doesn't explain how the traffic is still allowed even after conntrack -F.
(1) Could you provide additional information, just in case: the full rule set (iptables-save -c) plus ip -br link; ip -br address; ip -4 route. -
A.B
@A.B I added the requested additional information, in case it helps to understand the bigger picture. However, I think the answer lies in the inner workings
of netfilter connection tracking for multicast traffic, which I don't understand well enough. - Martin
(1) Can you also add this information to the question: ip -4 netconf show. Also, where is mplayer running: "behind" the Linux router, from a
192.168.0.0/24 IP, or on the Linux router itself? - A.B
@A.B Yes, mplayer is running on a laptop "behind" the Linux router in the 192.168.0.0/24 network. It sends the IGMP "membership report" message,
which causes the ISP router to start sending multicast traffic to my Linux router. - Martin
After trying a similar setup using pimd [1], I can only conclude that:
- normal (data) multicast packets are forwarded, and are therefore subject to filter/FORWARD, as long as multicast routing is enabled for this flow. The
conntrack entry udp 17 29 src=10.4.4.5 dst=239.3.5.3 sport=10 dport=10 [UNREPLIED] src=239.3.5.3 dst=10.4.4.5
sport=10 dport=10 mark=0 use=1 is such a forwarded flow and will also increment the nat/PREROUTING and nat/POSTROUTING
counters by (only) one: the NEW packet that triggered this conntrack entry.
- link-local multicast packets (IGMP packets to 224.0.0.{1,22} and PIMv2 to 224.0.0.13) are stopped by filter/INPUT.
- if the flow was enabled before, the multicast router keeps forwarding this specific multicast destination for a while. Once a
configured timeout elapses during which it received no IGMP report from the LAN and no PIMv2 from the WAN (because of the firewall), it
considers that no client is listening anymore, or that the flow is no longer valid, and stops forwarding the corresponding multicast flow
(whether a flow is currently being forwarded can be checked as shown right after this list).
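The check mentioned above: the kernel's multicast forwarding cache can be inspected while the stream is active; with the flow from the question being forwarded, an entry should show up roughly like this (the exact output depends on the pimd setup):
# ip mroute
(10.4.4.5, 239.3.5.3)            Iif: eth0       Oifs: br0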
With the DROP policy kept in place, the following packets therefore have to be explicitly accepted in filter/INPUT.
IGMP packets coming from the LAN, to allow the router to keep knowing which multicast clients are listening:
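A minimal sketch of such a rule, assuming br0 is the LAN-facing bridge from the question (the real rule could be narrowed further, for example to specific IGMP types):
# accept IGMP membership reports coming from LAN clients on the bridge
iptables -A INPUT -i br0 -p igmp -j ACCEPT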
My specific setup uses pimd and PIMv2; I don't know whether this protocol is always used, but I had to allow the PIM protocol for it to
work while keeping the DROP policy, when the source IP wasn't just 192.0.2.1 (but 10.4.4.5):
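For example (a sketch only: pim resolves to IP protocol 103 via /etc/protocols, eth0 is the ISP-facing interface, and the source could be matched more tightly if the PIM neighbour is known):
# accept PIMv2 signalling arriving on the ISP-facing interface
iptables -A INPUT -i eth0 -p pim -j ACCEPT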
It might be necessary to allow IGMP packets from the ISP router as well, but my specific setup didn't require them:
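If it does turn out to be needed, a similarly shaped rule would do (again an assumption, not taken from my actual configuration):
# accept IGMP from the ISP side
iptables -A INPUT -i eth0 -p igmp -j ACCEPT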
UPDATE:
Note that the filter/INPUT chain's DROP policy will still show hits: the Linux router's own IGMP and PIMv2 packets, being multicast, are looped
back to the local system when sent out and are thus (harmlessly) dropped, since they are not enabled by the rules above. After adding the
corresponding rules, I hit a strange behaviour for PIMv2, and in the end I had to mark packets in filter/OUTPUT to allow their looped-back copy
in filter/INPUT. While at it, I also restricted the nat rule. In the end, with the following rules, the filter/INPUT DROP policy counter always
stayed at [0:0] while forwarding multicast traffic:
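The rules below are a reconstruction in that spirit rather than a verbatim copy: the mark value 0x1, the comments and the exact matches are illustrative, and the MARK target is used directly in the filter table (allowed outside the mangle table on current kernels):
# nat: restrict masquerading to traffic actually originating from the LAN
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -m comment --comment SNAT -j MASQUERADE
# filter/OUTPUT: mark our own IGMP and PIMv2 packets so that their
# looped-back multicast copies can be recognised again in filter/INPUT
iptables -A OUTPUT -p igmp -j MARK --set-mark 0x1
iptables -A OUTPUT -p pim -j MARK --set-mark 0x1
# filter/INPUT (policy DROP): the original rules plus the multicast-related ones
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i br0 -p igmp -j ACCEPT
iptables -A INPUT -i eth0 -p pim -j ACCEPT
iptables -A INPUT -m mark --mark 0x1 -j ACCEPT
With something along these lines, the only packets left to hit the DROP policy are genuinely unsolicited ones, which matches the [0:0] counter described above.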
You can simulate a multicast client and dump the stream to stdout with socat [2] (specify the local IP instead of 0.0.0.0 if there is more than one interface):
socat -u UDP4-RECV:10,ip-add-membership=239.3.5.3:0.0.0.0 -
[1] https://manpages.debian.org/stretch/pimd/pimd.8.en.html
[2] https://manpages.debian.org/stretch/socat/socat.1.en.html