gvisor-tap-vsock
A replacement for libslirp and VPNKit, written in pure Go. It is based on the network stack of gVisor.
Compared to libslirp, gvisor-tap-vsock brings a configurable DNS server and dynamic port forwarding.
It can be used with Qemu, HyperKit, Hyper-V and User Mode Linux.
Build
make
Run with Qemu (Linux or macOS)
With Qemu, the usual way to avoid running as root is -netdev user,id=n0.
With this project the idea is the same, except that you run a daemon on the host.
There are two ways for the VM to communicate with the daemon: over a TCP port or over a unix socket.
With gvproxy and the VM communicating over a TCP port:
(terminal 1) $ bin/gvproxy -debug -listen unix:///tmp/network.sock -listen-qemu tcp://0.0.0.0:1234
(terminal 2) $ qemu-system-x86_64 (all your qemu options) -netdev socket,id=vlan,connect=127.0.0.1:1234 -device virtio-net-pci,netdev=vlan,mac=5a:94:ef:e4:0c:ee
With gvproxy and the VM communicating over a unix socket:
(terminal 1) $ bin/gvproxy -debug -listen unix:///tmp/network.sock -listen-qemu unix:///tmp/qemu.sock
(terminal 2) $ bin/qemu-wrapper /tmp/qemu.sock qemu-system-x86_64 (all your qemu options) -netdev socket,id=vlan,fd=3 -device virtio-net-pci,netdev=vlan,mac=5a:94:ef:e4:0c:ee
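For reference, the essential trick in a wrapper like bin/qemu-wrapper is handing the connected unix socket to qemu as fd 3. A minimal sketch of that idea in Go (an illustration, not the project's actual wrapper code):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 3 {
		log.Fatal("usage: wrapper <socket> <command> [args...]")
	}
	socketPath, args := os.Args[1], os.Args[2:]

	conn, err := net.Dial("unix", socketPath)
	if err != nil {
		log.Fatal(err)
	}
	// File() duplicates the descriptor so the child can inherit it.
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}

	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// ExtraFiles[0] becomes fd 3 in the child, which is why the qemu
	// command line above says -netdev socket,id=vlan,fd=3.
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}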
Run with User Mode Linux
(terminal 1) $ bin/gvproxy -debug -listen unix:///tmp/network.sock -listen-bess unixpacket:///tmp/bess.sock
(terminal 2) $ linux.uml vec0:transport=bess,dst=/tmp/bess.sock,depth=128,gro=1,mac=5a:94:ef:e4:0c:ee root=/dev/root rootfstype=hostfs init=/bin/bash mem=2G
(terminal 2: UML)$ ip addr add 192.168.127.2/24 dev vec0
(terminal 2: UML)$ ip link set vec0 up
(terminal 2: UML)$ ip route add default via 192.168.127.254
More docs about User Mode Linux with the BESS socket transport: https://www.kernel.org/doc/html/latest/virt/uml/user_mode_linux_howto_v2.html#bess-socket-transport
Run with vsock
Made for Windows, but it also works on Linux, and on macOS with HyperKit.
Host
Windows prerequisites
$service = New-Item -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestCommunicationServices" -Name "00000400-FACB-11E6-BD58-64006A7986D3"
$service.SetValue("ElementName", "gvisor-tap-vsock")
In the VM, make sure the hv_sock kernel module is loaded.
Linux prerequisites
On Fedora 32, it worked out of the box. On other distros, you might have to look at https://github.com/mdlayher/vsock#requirements.
macOS prerequisites
Locate the HyperKit state folder (it contains a file called connect) and launch gvproxy with the following listen argument:
--listen vsock://null:1024/path_to_connect_directory
Run
(host) $ sudo bin/gvproxy -debug -listen vsock://:1024 -listen unix:///tmp/network.sock
VM
With a container:
(vm) # docker run -d --name=gvisor-tap-vsock --privileged --net=host -it quay.io/crcont/gvisor-tap-vsock:latest
(vm) $ ping -c1 192.168.127.1
(vm) $ curl http://redhat.com
With the executable:
(vm) # ./vm -debug
Services
API
The executable running on the host, gvproxy, exposes an HTTP API. It can be used with curl.
$ curl --unix-socket /tmp/network.sock http:/unix/stats
{
"BytesSent": 0,
"BytesReceived": 0,
"UnknownProtocolRcvdPackets": 0,
"MalformedRcvdPackets": 0,
...
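The same API can be reached from Go by pointing an http.Client at the unix socket. A minimal sketch, assuming the /tmp/network.sock path used throughout this README:

package main

import (
	"context"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host part of the URL and always dial the unix socket.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/tmp/network.sock")
			},
		},
	}
	resp, err := client.Get("http://unix/stats")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // prints the JSON stats document
}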
Gateway
The executable running on the host provides a virtual gateway that the VM can use. Its DHCP server lets VMs configure the network automatically (IP, MTU, DNS, search domain, etc.).
DNS
The gateway also runs a DNS server. It can be configured to serve static zones.
Activate it by pointing the VM's /etc/resolv.conf at the gateway:
nameserver 192.168.127.1
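To try it out without touching resolv.conf, a Go program running inside the VM can query the gateway's DNS server directly. A minimal sketch, using the 192.168.127.1 gateway address shown above:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Send all queries to the virtual gateway's DNS server.
			return d.DialContext(ctx, network, "192.168.127.1:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "redhat.com")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs)
}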
Port forwarding
Dynamic port forwarding is supported.
Expose a port:
$ curl --unix-socket /tmp/network.sock http:/unix/services/forwarder/expose -X POST -d '{"local":":6443","remote":"192.168.127.2:6443"}'
Unexpose a port:
$ curl --unix-socket /tmp/network.sock http:/unix/services/forwarder/unexpose -X POST -d '{"local":":6443"}'
List exposed ports:
$ curl --unix-socket /tmp/network.sock http:/unix/services/forwarder/all | jq .
[
{
"local": ":2222",
"remote": "192.168.127.2:22"
},
{
"local": ":6443",
"remote": "192.168.127.2:6443"
}
]
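The same expose call can be made from Go, reusing the unix-socket HTTP client pattern from the API section. A minimal sketch:

package main

import (
	"bytes"
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/tmp/network.sock")
			},
		},
	}
	// Same payload as the curl example above.
	body := bytes.NewBufferString(`{"local":":6443","remote":"192.168.127.2:6443"}`)
	resp, err := client.Post("http://unix/services/forwarder/expose", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}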
Tunneling
The HTTP API exposed on the host can be used to connect to a specific IP and port inside the virtual network. A working example for SSH can be found here.
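The general shape of such a tunnel: send an HTTP request over the API socket asking the host process to splice the connection to a guest IP and port, then keep using the same connection as a raw byte stream. The sketch below only illustrates that pattern; the /tunnel endpoint and its parameters are invented for illustration, so refer to the SSH example for the real API:

package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
	"os"
)

func main() {
	conn, err := net.Dial("unix", "/tmp/network.sock")
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical endpoint: ask the host process to splice this
	// connection to 192.168.127.2:22 inside the virtual network.
	fmt.Fprintf(conn, "POST /tunnel?ip=192.168.127.2&port=22 HTTP/1.1\r\nHost: unix\r\n\r\n")
	br := bufio.NewReader(conn)
	resp, err := http.ReadResponse(br, nil)
	if err != nil || resp.StatusCode != http.StatusOK {
		log.Fatalf("tunnel setup failed: %v", err)
	}
	// From here on the socket is a raw pipe to the guest's port 22; wire
	// it to stdin/stdout so it can serve as e.g. an ssh ProxyCommand.
	go io.Copy(conn, os.Stdin)
	io.Copy(os.Stdout, br) // read via br so buffered bytes are not lost
}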
Limitations
- ICMP is not forwarded outside the network.
Performance
Using iperf3, it can achieve between 1.6 and 2.3 Gbits/s depending on which side the test is performed from (tested with an MTU of 4000, with Qemu on macOS).
How it works with vsock
Internet access
- A tap network interface is running in the VM. It's the default gateway.
- The user types curl redhat.com.
- The Linux kernel sends raw Ethernet packets to the tap device.
- The tap device sends these packets over vsock to a process on the host.
- The process on the host maintains both internal (host to VM) and external (host to Internet endpoint) connections. It uses regular syscalls to connect to external endpoints.
This is the same behaviour as slirp. A minimal sketch of the guest side of this loop follows.
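This toy version assumes a preexisting, already-configured tap0 and a host listening on vsock port 1024 as above; it glosses over the per-frame size headers, reconnection, and interface setup that the real vm binary handles:

package main

import (
	"io"
	"log"
	"os"

	"github.com/mdlayher/vsock"
	"golang.org/x/sys/unix"
)

// openTap attaches to an existing tap device; with IFF_NO_PI each
// Read/Write on the file is exactly one raw Ethernet frame.
func openTap(name string) (*os.File, error) {
	f, err := os.OpenFile("/dev/net/tun", os.O_RDWR, 0)
	if err != nil {
		return nil, err
	}
	ifr, err := unix.NewIfreq(name)
	if err != nil {
		f.Close()
		return nil, err
	}
	ifr.SetUint16(unix.IFF_TAP | unix.IFF_NO_PI)
	if err := unix.IoctlIfreq(int(f.Fd()), unix.TUNSETIFF, ifr); err != nil {
		f.Close()
		return nil, err
	}
	return f, nil
}

func main() {
	tap, err := openTap("tap0")
	if err != nil {
		log.Fatal(err)
	}
	// vsock.Host (CID 2) is the well-known host address; port 1024
	// matches the -listen vsock://:1024 example above.
	conn, err := vsock.Dial(vsock.Host, 1024, nil)
	if err != nil {
		log.Fatal(err)
	}
	go io.Copy(conn, tap) // guest -> host: frames leaving the VM
	io.Copy(tap, conn)    // host -> guest: frames injected into the kernel
}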
Expose a port
- The process on the host binds port 80.
- Each time a client sends an HTTP request, the process creates and sends the appropriate Ethernet packets to the VM.
- The tap device receives the packets and injects them into the kernel.
- The HTTP server in the VM receives the request and sends back the response.
A hedged sketch of this host-side loop is shown below.
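In essence this is a plain TCP proxy whose upstream dial goes through the userspace network stack instead of the kernel. In the sketch below, dialVirtualNetwork is a hypothetical stand-in for that netstack dial; everything else is standard library:

package forwarder

import (
	"io"
	"net"
)

// forward accepts host connections on local (e.g. ":80") and splices each
// one to remote (e.g. "192.168.127.2:80") inside the virtual network.
func forward(local, remote string, dialVirtualNetwork func(addr string) (net.Conn, error)) error {
	ln, err := net.Listen("tcp", local)
	if err != nil {
		return err
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			return err
		}
		go func() {
			defer client.Close()
			vm, err := dialVirtualNetwork(remote)
			if err != nil {
				return
			}
			defer vm.Close()
			go io.Copy(vm, client) // client -> VM
			io.Copy(client, vm)    // VM -> client
		}()
	}
}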

