Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


ubuntu - Assign static/DHCP IP from the host's network to a KVM VM provisioned by Terraform

I have a test server on my home network with a static IP address, virtualized with KVM/libvirt. To test some services inside my network (e.g. with cell phones), I'd like to assign those VMs IPs from my SOHO router's network, either statically or using the router's DHCP.

So my goals would be:

  1. Assign a static IP outside of the router's DHCP scope (DHCP begins at 192.168.0.20; I use 192.168.0.10 in this example)
  2. Get a dynamic IP from the router's DHCP (accessible via DNS, so that would be no problem)

In both cases, the VM doesn't get an IP address.

Since those VMs are automatically provisioned by Terraform, I think SO is a good place for this problem.

My Terraform POC file:

terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}

resource "libvirt_volume" "centos7-img" {
  name    = "cnx_centos7.qcow2"
  pool    = libvirt_pool.default.name
  source = "/var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2"
  format = "qcow2"
}
provider "libvirt" {
  uri = "qemu:///system"
}
resource "libvirt_pool" "default" {
  name = "default"
  type = "dir"
  path = "/tmp/kvm"
}

data "template_file" "cloudinit_network" {
  template = file("network.cfg")
}
data "template_file" "cloudinit_data" {
  template = file("cloudinit.cfg")
  vars = {}
}

resource "libvirt_cloudinit_disk" "cloudinit" {
  name           = "cloudinit.iso"
  user_data      = data.template_file.cloudinit_data.rendered
  network_config = data.template_file.cloudinit_network.rendered
  pool           = libvirt_pool.default.name
}
resource "libvirt_network" "cnx_network" {
  name   = "cnx_network"
  #addresses = ["192.168.0.17/24"]
  mode   = "bridge"
  bridge = "br0"
  dhcp {
    enabled = true
  }
  # Enables usage of the host DNS if no local records match
  dns {
    enabled    = true
    local_only = false
  }
}

resource "libvirt_domain" "cnx" {
  name   = "cnx-poc"
  memory = 2048
  vcpu   = 4
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
    network_id = libvirt_network.cnx_network.id
    hostname  = "cnx.fritz.box"
    #addresses = ["192.168.0.10"]
    # Required to get ip address in the output when using dhcp
    wait_for_lease = true
  }

  disk {
    volume_id = libvirt_volume.centos7-img.id
  }

  console {
    type = "pty"
    target_type = "serial"
    target_port = "0"
  }
  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  graphics {
    type = "spice"
    listen_type = "address"
    autoport = true
  }
}

output "ips" {
  value = libvirt_domain.cnx.*.network_interface.0.addresses
}

Cloudinit network.cfg

version: 2
ethernets:
  eth0:
    dhcp4: true
    dhcp6: false
    # addresses: 
    #   - 192.168.0.10
    gateway4: 192.168.0.1
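
For goal 1 (static address), the same file could be adjusted roughly like this. Note that cloud-init's version-2 network config expects addresses in CIDR notation; the /24 prefix and the nameserver entry are assumptions based on the host's netplan config below, not something I have verified:

```yaml
version: 2
ethernets:
  eth0:
    dhcp4: false
    dhcp6: false
    addresses:
      # CIDR prefix is required in version-2 configs
      - 192.168.0.10/24
    gateway4: 192.168.0.1
    nameservers:
      addresses: [192.168.0.1]
```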

Cloudinit cloudinit.cfg

This is not really required here. I just set a password so that I can access the VM using the libvirt console and inspect the IP config, even when networking doesn't work.

#cloud-config
password: password
chpasswd:
  list: |
    root:password
    centos:password
  expire: false

Network bridge /etc/netplan/50-cloud-init.yaml (Host)

# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        enp6s0:
          #addresses: []
          dhcp4: no
          dhcp6: no
    bridges:
      br0:
        interfaces: [enp6s0]
        addresses: [192.168.0.17/24]
        gateway4: 192.168.0.1
        #mtu: 1500
        nameservers:
          addresses: [192.168.0.1]
          search: ["fritz.box"]
        parameters:
          stp: true
          #forward-delay: 4
        dhcp4: no
        dhcp6: no
    version: 2

Tested and applied with:

$ sudo netplan generate
$ sudo netplan --debug apply
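
As a quick sanity check (not from the original post), the bridge can be inspected on the host with iproute2:

```
$ ip -br addr show br0        # br0 should hold 192.168.0.17/24
$ ip link show master br0     # enp6s0 should be listed as enslaved to br0
```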

Other things I tried

In addition to the commented-out lines in the config files, I tried the following things:

Directly referencing the bridge

I tried to reference the bridge directly in the VM, without defining a libvirt network, like this:

resource "libvirt_domain" "cnx" {
  name   = "cnx-poc"
  memory = 2048
  vcpu   = 4
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
    bridge = "br0"
    addresses = ["192.168.0.10"]
  }
  # ...

This doesn't work, and I think it's related to this problem with the missing qemu-guest-agent package. I can't simply fix that, because I'd need network access to install the package, which is exactly what doesn't work. I'll investigate whether I could add two NICs (1x NAT, 1x bridge) to get an internet connection.
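
A sketch of that two-NIC idea, assuming the provider's default NAT network exists under the name default (the whole second interface here is an assumption, not something from a working config):

```hcl
resource "libvirt_domain" "cnx" {
  name      = "cnx-poc"
  memory    = 2048
  vcpu      = 4
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  # NIC 1: NAT network, only for internet access to install qemu-guest-agent
  network_interface {
    network_name = "default"
  }

  # NIC 2: bridged into the home network
  network_interface {
    bridge = "br0"
  }
  # ...
}
```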

However, this doesn't seem like a good workaround. The ticket also suggests creating a separate network. I wouldn't have a problem with that workaround if it worked, but I've had no luck with it so far.

Using no cloud-init network config

Because I ran into this problem some time ago, I tried NOT to specify the network config:

network_config = data.template_file.cloudinit_network.rendered

I assumed this might let the VM fall back to DHCP, or at least use the static IP assigned via Terraform, but that doesn't seem to work in this case either.

Investigating the generated KVM objects

The generated network looks like this:

$ virsh net-dumpxml cnx_network
<network connections='1'>
  <name>cnx_network</name>
  <uuid>${removed}</uuid>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>

This seems perfectly valid when compared to articles like this one, which explain how to set this up manually with KVM and netplan on Ubuntu.

The interface type='network' element in the VM's XML, inspected with virsh dumpxml cnx-poc, also looks good:

<interface type='network'>
  <mac address='${mac}'/>
  <source network='cnx_network'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
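
Whether the agent inside the guest is actually reachable can also be checked from the host. guest-ping is a standard QEMU guest agent command; it fails if the agent isn't running in the guest or no agent channel is defined in the domain XML:

```
$ virsh dumpxml cnx-poc | grep -A2 guest_agent
$ virsh qemu-agent-command cnx-poc '{"execute":"guest-ping"}'
```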

Some information about the environment

Host

  • Ubuntu 18.04.5 LTS
  • Static IP 192.168.0.17/24
  • Terraform v0.13.6
  • provider registry.terraform.io/dmacvicar/libvirt v0.6.2
  • virsh 4.0.0

VM

  • CentOS 7 Cloudimage
  • 192.168.0.10 tested as static IP (outside the router's DHCP range)
Question from: https://stackoverflow.com/questions/65945531/assign-static-dhcp-ip-from-the-hosts-network-to-kvm-vm-provosioned-by-terraform


1 Answer


It turned out that this problem was caused by a change in the libvirt provider plugin. I had recently updated from 0.5.2 to 0.6.2, and in all releases newer than 0.4.2 the default behavior changed:

Until terraform-provider-libvirt 0.4.2, qemu-agent was used by default to get network configuration. However, if qemu-agent is not running, this creates a delay until connecting to it times out.

In current versions, the default is not to attempt connecting to it; retrieving network interface information from the agent needs to be enabled explicitly with qemu_agent = true, further details here. Note that you still need to make sure the agent is running in the OS, which is unrelated to this option.

Note: when using bridge network configurations, you need to set qemu_agent = true; otherwise you will not retrieve the IP addresses of domains.

Be aware that this variable may be subject to change again in future versions.

The qemu-guest-agent was already installed in the CentOS cloud image, so there was no need to download it. But because of the changed libvirt provider behavior, it wasn't used. I hadn't noticed that before, because my VMs were built with NAT; it only became relevant with my switch to the bridged network.

In fact, this means I just have to add the property to my VM domain like this:

resource "libvirt_domain" "cnx" {
  name   = "cnx-poc"
  memory = 2048
  vcpu   = 4
  cloudinit = libvirt_cloudinit_disk.cloudinit.id
  # Required for bridged networks for libvirt provider plugin > 0.4.2
  qemu_agent = true
  # ...
}

Now my VMs are built and get IP addresses from my DHCP server:

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

Outputs:

ips = [
  [
    "192.168.0.163",
    "fe80::5054:ff:fe62:2adf",
  ],
]
ips_db2 = [
  [
    "192.168.0.162",
    "fe80::5054:ff:fe8a:ac6a",
  ],
]
