I have a test server on my home network with a static IP address, running VMs virtualized with KVM/libvirt. To test some services inside my network (e.g. from cell phones), I'd like to assign those VMs IPs from my SOHO router's network - either statically or via the router's DHCP.
So my goals would be:
- Assign a static IP outside of the router's DHCP scope (the DHCP range begins at 192.168.0.20; I use 192.168.0.10 in this example)
- Get a dynamic IP from the router's DHCP (the VM would be reachable via DNS, so that would be no problem)
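For the static case (the first goal), the guest-side cloud-init network config (v2 format) would look roughly like this - a sketch, with 192.168.0.1 being my router:

```yaml
version: 2
ethernets:
  eth0:
    dhcp4: false
    dhcp6: false
    addresses:
      - 192.168.0.10/24   # outside the router's DHCP scope (starts at .20)
    gateway4: 192.168.0.1
    nameservers:
      addresses: [192.168.0.1]
```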
In both cases, the VM doesn't get an IP address.
Since those VMs are provisioned automatically by Terraform, I think SO is a good place for this problem.
My Terraform POC file:
terraform {
  required_version = ">= 0.13"

  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}

resource "libvirt_volume" "centos7-img" {
  name   = "cnx_centos7.qcow2"
  pool   = libvirt_pool.default.name
  source = "/var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2"
  format = "qcow2"
}

provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_pool" "default" {
  name = "default"
  type = "dir"
  path = "/tmp/kvm"
}

data "template_file" "cloudinit_network" {
  template = file("network.cfg")
}

data "template_file" "cloudinit_data" {
  template = file("cloudinit.cfg")
  vars     = {}
}

resource "libvirt_cloudinit_disk" "cloudinit" {
  name           = "cloudinit.iso"
  user_data      = data.template_file.cloudinit_data.rendered
  network_config = data.template_file.cloudinit_network.rendered
  pool           = libvirt_pool.default.name
}

resource "libvirt_network" "cnx_network" {
  name   = "cnx_network"
  #addresses = ["192.168.0.17/24"]
  mode   = "bridge"
  bridge = "br0"

  dhcp {
    enabled = true
  }

  # Enables usage of the host dns if no local records match
  dns {
    enabled    = true
    local_only = false
  }
}

resource "libvirt_domain" "cnx" {
  name      = "cnx-poc"
  memory    = 2048
  vcpu      = 4
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
    network_id = libvirt_network.cnx_network.id
    hostname   = "cnx.fritz.box"
    #addresses = ["192.168.0.10"]
    # Required to get the ip address in the output when using dhcp
    wait_for_lease = true
  }

  disk {
    volume_id = libvirt_volume.centos7-img.id
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

output "ips" {
  value = libvirt_domain.cnx.*.network_interface.0.addresses
}
Cloud-init network.cfg:
version: 2
ethernets:
  eth0:
    dhcp4: true
    dhcp6: false
    # addresses:
    #   - 192.168.0.10/24
    gateway4: 192.168.0.1
Cloud-init cloudinit.cfg:
This is not strictly required here. I just set a password so that I can access the VM via the libvirt console and inspect the IP configuration, even when networking doesn't work.
#cloud-config
password: password
chpasswd:
  list: |
    root:password
    centos:password
  expire: false
Network bridge /etc/netplan/50-cloud-init.yaml (host):
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  ethernets:
    enp6s0:
      #addresses: []
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enp6s0]
      addresses: [192.168.0.17/24]
      gateway4: 192.168.0.1
      #mtu: 1500
      nameservers:
        addresses: [192.168.0.1]
        search: ["fritz.box"]
      parameters:
        stp: true
        #forward-delay: 4
      dhcp4: no
      dhcp6: no
  version: 2
Tested and applied with:
$ sudo netplan generate
$ sudo netplan --debug apply
Other things I tried
In addition to the commented-out lines in the config files above, I tried the following:
Referencing the bridge directly
I tried referencing the bridge directly in the VM's network_interface, without defining a libvirt network:
resource "libvirt_domain" "cnx" {
  name      = "cnx-poc"
  memory    = 2048
  vcpu      = 4
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
    bridge    = "br0"
    addresses = ["192.168.0.10"]
  }
  # ...
This doesn't work, and I think it's related to this problem with the missing qemu-guest-agent package. I can't simply fix that, because I'd need network access in the VM to install the package - which is exactly what doesn't work. I'll investigate whether I can add two NICs (1x NAT, 1x bridge) to get an internet connection.
However, this doesn't seem like a good workaround. The ticket also suggests creating a separate network; I wouldn't mind that workaround if it worked, but I've had no luck with it so far.
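For reference, the two-NIC idea would look roughly like this (an untested sketch; "default" is the NAT network that libvirt ships out of the box):

```hcl
resource "libvirt_domain" "cnx" {
  name      = "cnx-poc"
  memory    = 2048
  vcpu      = 4
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  # NIC 1: libvirt's NATed default network, to give the VM internet
  # access so qemu-guest-agent could be installed
  network_interface {
    network_name   = "default"
    wait_for_lease = true
  }

  # NIC 2: the host bridge into the router's network
  network_interface {
    bridge = "br0"
  }

  # ...
}
```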
Using no cloud-init network config
Because I ran into this problem some time ago, I tried NOT specifying the network config, i.e. removing this line:
network_config = data.template_file.cloudinit_network.rendered
I assumed this might let the VM fall back to DHCP, or at least pick up the static IP assigned via Terraform, but that doesn't seem to work either.
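In other words, the cloud-init disk reduced to this, so that the image's own default network behavior (DHCP on the first NIC) should apply:

```hcl
resource "libvirt_cloudinit_disk" "cloudinit" {
  name      = "cloudinit.iso"
  user_data = data.template_file.cloudinit_data.rendered
  # network_config intentionally omitted
  pool      = libvirt_pool.default.name
}
```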
Investigating the generated KVM objects
The generated network looks like this:
$ virsh net-dumpxml cnx_network
<network connections='1'>
  <name>cnx_network</name>
  <uuid>${removed}</uuid>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
This seems perfectly valid compared to articles like this one, which explain how to set this up manually with KVM and netplan on Ubuntu.
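For comparison, the manual setup from those articles boils down to defining essentially the same XML by hand (a sketch):

```xml
<!-- bridge-net.xml: host-bridge network, defined manually -->
<network>
  <name>cnx_network</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
```

loaded with virsh net-define bridge-net.xml and virsh net-start cnx_network - which matches what Terraform generated.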
The interface type='network' element in the VM's XML (inspected with virsh dumpxml cnx-poc) also looks good:
<interface type='network'>
  <mac address='${mac}'/>
  <source network='cnx_network'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Some information about the environment
Host
- Ubuntu 18.04.5 LTS
- Static IP 192.168.0.17/24
- Terraform v0.13.6
- provider registry.terraform.io/dmacvicar/libvirt v0.6.2
- virsh 4.0.0
VM
- CentOS 7 Cloudimage
- 192.168.0.10 tested as static IP (outside the router's DHCP range)
question from:
https://stackoverflow.com/questions/65945531/assign-static-dhcp-ip-from-the-hosts-network-to-kvm-vm-provosioned-by-terraform