Hello all, I’m trying to approach automated testing of a network application I’m making.
The idea is that I want to imitate it running on dozens of hosts in the same local network.
Ideally I’d also be able to measure network load during the test.
What would be the appropriate approach to look into for this?
I saw that there are NixOS tests, which seem to present a very nice framework for testing things across multiple machines, but I’m not sure whether it’s possible to set up an isolated virtual network for the test machines, or whether they share the network with the host. Another question is what the memory overhead of running many test VMs would be.
Another way would be with containers/Docker, but I’m unclear on how to run that setup here. As a VM with Docker inside?
Does it have to be full VMs, or are containers appropriate? The VM-in-container approach is possible and might be preferred for ease of deployability, provided you can’t just use a container outright.
Spreading it across hosts is complicated and becomes a whole process to build out, most likely with Kubernetes and Kubernetes tooling.
It’s absolutely possible to test complex networking setups using the NixOS test framework. There are many tests in Nixpkgs you could look to for inspiration.
A dozen VMs running concurrently should be completely feasible, unless your application is very resource-heavy. If you want to optimize resource usage, you could run multiple instances on the same machine with network namespaces, but it would be more difficult to set up.
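To illustrate, a minimal two-node sketch of such a test (node names, the test name, and the ping check are illustrative, not taken from your application); by default all nodes share vlan 1 on an isolated virtual network that cannot reach the host’s network:

```nix
# Hypothetical sketch of a multi-node NixOS test, assuming the
# testers.runNixOSTest interface from recent Nixpkgs.
{ pkgs, ... }:
pkgs.testers.runNixOSTest {
  name = "udp-broadcast";

  nodes = {
    sender = { ... }: {
      networking.firewall.enable = false;
    };
    receiver = { ... }: {
      networking.firewall.enable = false;
    };
  };

  # Python test script, run on the host; node hostnames resolve
  # between the VMs on the shared virtual vlan.
  testScript = ''
    start_all()
    sender.wait_for_unit("multi-user.target")
    receiver.wait_for_unit("multi-user.target")
    sender.succeed("ping -c1 receiver")
  '';
}
```

More nodes are just more attributes under `nodes`, so scaling the sketch to a dozen hosts is mostly a matter of generating that attribute set.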
It looks like NixOS tests are a good fit for me.
However, I’m struggling to adjust network settings for the VMs: I need to send UDP broadcasts, and it seems the interface is configured in such a way that its broadcast address is not set correctly. Can I adjust that somehow?
I checked, and broadcast does indeed work if I just use 192.168.1.255,
but to determine the correct broadcast address for the interface I rely on Python’s psutil.net_if_addrs(), and in the test network it returns the IP itself for eth1, not the usual inverted-mask address.
Running ip addr show on the test VMs, I see they do not have an explicit brd displayed for eth1. I’m not sure why psutil returns the IP itself as the broadcast address instead of falling back to the mask; I have not encountered that before in real-life scenarios.
Do you by any chance know how to run tests interactively with flakes and nix build?
It seems that driverInteractive does not get a console when run with nix build instead of nix-build.
AFAIK .driverInteractive just builds a script that, when run, launches a sort of REPL to control the VMs; it shouldn’t make any difference whether you build it with nix build or nix-build.
You need to run it as result/bin/nixos-test-driver --interactive.
As for the question about why the network configuration differs from what I usually see in the real world: I’ve abandoned that idea and added a special case to the program logic: if the broadcast address returned by psutil.net_if_addrs() equals the interface address, don’t use it, and instead derive the broadcast address from the netmask.
Not sure how correct this logic is, but it’s a workaround.