Introduction to the concept

Qemu

QEMU is an emulator that presents a simulated CPU and other hardware to the guest OS. The guest OS believes it is talking directly to hardware, but it is actually talking to hardware simulated by QEMU, which translates the guest's instructions for the real hardware.

Since every instruction has to be translated in software by QEMU, performance is poor.
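As an illustration, the same guest can be started with pure software emulation (TCG) or with hardware acceleration via KVM; the `-accel` flag is standard QEMU CLI, while the disk image path here is hypothetical:

```shell
# Pure software emulation: every guest instruction is translated by QEMU (TCG).
qemu-system-x86_64 -accel tcg -m 1024 -drive file=guest.img,format=raw

# Hardware-assisted: guest instructions run directly on the CPU via /dev/kvm.
qemu-system-x86_64 -accel kvm -m 1024 -drive file=guest.img,format=raw
```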

The relationship between KVM, QEMU and Libvirt

KVM

KVM is a Linux kernel module that requires CPU support: it relies on the hardware-assisted virtualization extensions Intel VT-x and AMD-V, plus memory virtualization extensions such as Intel's EPT and AMD's RVI. Guest CPU instructions run directly on the hardware without being translated by QEMU, greatly increasing speed. KVM exposes its interface via /dev/kvm, and user-space programs access it through the ioctl function. See the following pseudo-code.

open("/dev/kvm")
ioctl(KVM_CREATE_VM)
ioctl(KVM_CREATE_VCPU)
for (;;) {
    ioctl(KVM_RUN)            /* run the guest until it traps back */
    switch (exit_reason) {
    case KVM_EXIT_IO:         /* emulate the I/O access in user space */
    case KVM_EXIT_HLT:        /* guest executed HLT */
    }
}

The KVM kernel module itself provides only CPU and memory virtualization, so it must be combined with QEMU to form a complete virtualization stack, known as qemu-kvm.

qemu-kvm

QEMU integrates KVM by calling the /dev/kvm interface via ioctl and handing CPU execution over to the kernel module. KVM is responsible for CPU and memory virtualization, but it cannot emulate other devices; QEMU emulates the I/O devices (network cards, disks, etc.). Together they provide true server virtualization, and because both pieces are used, the combination is called qemu-kvm.

QEMU also emulates other hardware such as network cards and disks, which hurts the performance of those devices, so paravirtualized devices such as virtio_blk and virtio_net were created to improve device performance.
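As a sketch, attaching a disk and NIC as virtio devices looks like the following; the `if=virtio` drive option and `virtio-net-pci` device are standard QEMU flags, while the image path and netdev id are hypothetical:

```shell
# Disk attached via the paravirtualized virtio-blk driver instead of emulated IDE,
# and a NIC exposed to the guest as a virtio-net device.
qemu-system-x86_64 -accel kvm -m 1024 \
    -drive file=guest.img,format=qcow2,if=virtio \
    -netdev user,id=n0 \
    -device virtio-net-pci,netdev=n0
```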

Qemu-KVM architecture

From UCSB CS290B

Libvirt

Why do you need Libvirt?

  • Each hypervisor, such as qemu-kvm, comes with its own command-line management tools, whose many parameters are difficult to use.
  • Hypervisors are numerous and there is no unified programming interface to manage them, which is important for cloud environments.
  • There is no unified way to easily define the various manageable objects associated with a VM.

What does Libvirt offer?

  • It provides a unified, stable, open source application programming interface (API), a daemon (libvirtd) and a default command line management tool (virsh).
  • It provides management of the virtualised client and its virtualised devices, network and storage.
  • It provides a stable set of application programming interfaces in C, and bindings to libvirt are available in a number of other popular languages, including Python, Perl, Java, Ruby, PHP and OCaml.
  • Its support for many different Hypervisors is implemented through a driver-based architecture. libvirt provides different drivers for different Hypervisors, including a driver for Xen, a QEMU driver for QEMU/KVM, a VMware driver, etc. Driver source files like qemu_driver.c, xen_driver.c, xenapi_driver.c, vmware_driver.c, vbox_driver.c can be easily found in the libvirt source code.
  • It acts as an intermediate adaptation layer, allowing the underlying Hypervisor to be completely transparent to upper-level user-space management tools, as libvirt shields the details of the various underlying Hypervisors and provides a unified, more stable interface (API) for upper-level management tools.
  • It uses XML to define the various VM-related managed objects.
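For example, a minimal domain definition in libvirt's XML format might look like the following sketch (the VM name and disk path are hypothetical):

```xml
<domain type='kvm'>
  <name>testvm-00</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/testvm-00.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
```

Such a file can be registered with `virsh define` and the VM then started with `virsh start testvm-00`.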

Currently, libvirt is the most widely used tool and API for managing virtual machines. Common VM management tools (e.g. virsh, virt-install, virt-manager) and cloud computing frameworks (e.g. OpenStack, OpenNebula, Eucalyptus) are all built on libvirt's APIs under the hood.

The relationship between libvirt and KVM

From: Libvirt Wiki

Hands-on

Arch Linux installation and configuration

[root@liqiang.io]# yay -Sy archlinux-keyring
[root@liqiang.io]# yay -Sy qemu virt-manager virt-viewer dnsmasq vde2 bridge-utils openbsd-netcat
[root@liqiang.io]# yay -Sy ebtables iptables
[root@liqiang.io]# yay -Sy libguestfs
[root@liqiang.io]# sudo systemctl enable libvirtd.service
[root@liqiang.io]# sudo systemctl start libvirtd.service

This has installed all the required software and the next step is to configure.

[root@liqiang.io]# cat /etc/libvirt/libvirtd.conf
... ...
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
[root@liqiang.io]# sudo usermod -a -G libvirt $(whoami)
[root@liqiang.io]# sudo systemctl restart libvirtd.service

virsh operations

Configuring the network

[root@liqiang.io]# sudo virsh net-define /etc/libvirt/qemu/networks/default.xml
[root@liqiang.io]# sudo virsh net-start default
[root@liqiang.io]# sudo virsh net-autostart default # start the network automatically on boot
[root@liqiang.io]#
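The default.xml shipped with libvirt defines a NAT network on the virbr0 bridge; its contents are roughly as follows (the generated UUID and MAC address lines are omitted here):

```xml
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```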

Configuring console connections (these commands are run inside the guest, so that `virsh console` gets a login prompt on the serial port):

[root@liqiang.io]# sudo systemctl enable serial-getty@ttyS0.service
[root@liqiang.io]# sudo systemctl start serial-getty@ttyS0.service
[root@liqiang.io]#

Create VM

[root@liqiang.io]# sudo virt-install --name=testvm-00 \
--os-type=linux \
--os-variant=centos7.0 \
--vcpus=4 \
--ram=4096 \
--disk path=/home/liuliqiang/data/kvm/images/testvm00.img,size=30 \
--graphics spice \
--location=/home/liuliqiang/data/kvm/isos/CentOS-7-x86_64-DVD-2009.iso \
--network bridge=virbr0

Enter the VM

[root@liqiang.io]# virsh console testvm-00

Shutting down the VM

[root@liqiang.io]# virsh shutdown VM_NAME
[root@liqiang.io]# virsh shutdown --domain VM_NAME
[root@liqiang.io]# virsh destroy VM_NAME  # force stop
[root@liqiang.io]# virsh destroy --domain VM_NAME # force stop
[root@liqiang.io]# virsh undefine --domain VM_NAME # remove vm

View VM information

[root@liqiang.io]# virsh list --all
 Id   Name        State
----------------------------
 1    200         running
 2    envoy180    running
... ...
 -    base-f-vm   shut off
[root@liqiang.io]#

Ref