Libvirt is a set of open source virtualization management tools originally developed by Red Hat. Its goal is to provide a common, stable software library for managing virtual machines on a node efficiently and securely, including remote operation. Libvirt hides the differences between virtualization implementations behind a unified management interface: users only deal with high-level functions, while the implementation details of each VMM remain transparent to them. Libvirt acts as a bridge between the VMM and those high-level functions, receiving user requests and calling the interfaces provided by the VMM to do the actual work.

What is Libvirt

Why do I need Libvirt?

  1. Hypervisors such as qemu-kvm expose command-line management tools with many parameters and are difficult to use directly.
  2. There are many different hypervisors and no unified programming interface to manage them, which matters greatly in cloud environments.
  3. There is no uniform way to define the various manageable objects associated with VMs.

What does Libvirt provide?

  1. It provides a unified, stable, open-source application programming interface (API), a daemon (libvirtd), and a default command-line management tool (virsh).
  2. It provides management of virtualized guests and their virtualized devices, networks, and storage.
  3. It provides a stable set of application programming interfaces in C. Bindings are available for many popular languages, so libvirt can be used directly from Python, Perl, Java, Ruby, PHP, OCaml, and other high-level languages.
  4. Its support for many different Hypervisors is implemented through a driver-based architecture. libvirt provides different drivers for different Hypervisors, including a driver for Xen, a QEMU driver for QEMU/KVM, a VMware driver, and so on. Driver source code files like qemu_driver.c, xen_driver.c, xenapi_driver.c, vmware_driver.c, vbox_driver.c can be easily found in the libvirt source code.
  5. It acts as an intermediate adaptation layer, allowing the underlying Hypervisor to be completely transparent to the management tools in the upper user space, because libvirt shields the details of the underlying Hypervisor and provides a unified, more stable interface (API) for the upper management tools.
  6. It uses XML to define various virtual machine-related managed objects.

Today libvirt is the most widely used tool and API for managing virtual machines: common management tools (e.g. virsh, virt-install, virt-manager) and cloud computing platforms (e.g. OpenStack, OpenNebula, Eucalyptus) are all built on the libvirt API underneath.


Libvirt C API

Main objects managed by the Libvirt API

| Object | Explanation |
| --- | --- |
| Domain | An instance of an operating system (usually a virtual machine) running on a virtualized host provided by the hypervisor, or the configuration used to start such a virtual machine. |
| Hypervisor | The software layer that virtualizes a host. |
| Node (host) | A physical server. |
| Storage pool | A collection of storage media, such as physical hard drives. A storage pool is divided into small containers called volumes, which are allocated to one or more virtual machines. |
| Volume | A storage space allocated from a storage pool. A volume is assigned to one or more domains, usually appearing as a virtual hard drive inside the domain. |

Management model of objects

| Object | Name | Class | Description |
| --- | --- | --- | --- |
| Connect | Connection to a hypervisor | virConnectPtr | Before any API can be called to manage a local or remote hypervisor, a connection to that hypervisor must be established. |
| Domain | Guest domain | virDomainPtr | Used to enumerate and manage existing virtual machines or create new ones. Unique identifiers: ID, Name, UUID. A domain may be transient or persistent: a transient domain can only be managed while it is running, while a persistent domain has its configuration stored on the host. |
| Virtual Network | Virtual network | virNetworkPtr | Used to manage the virtual machines' network devices. A virtual network may be transient or persistent. After installation, every host has a default network device named "default", which provides DHCP to the virtual machines running on that host and connects them to the host via NAT. |
| Storage Pool | Storage pool | virStoragePoolPtr | Used to manage all storage on the host, including local disks, logical volume groups, iSCSI targets, Fibre Channel HBAs, and local/network file systems. Unique identifiers: Name, UUID. A storage pool may be transient or persistent. Pool types: dir, fs, netfs, disk, iscsi, logical, scsi, mpath, rbd, sheepdog, gluster, zfs. |
| Storage Volume | Storage volume | virStorageVolPtr | Used to manage the storage blocks within a storage pool: blocks allocated from a pool, disk partitions, logical volumes, SCSI/iSCSI LUNs, or files on a local or network file system. Unique identifiers: Name, Key, Path. |
| Host device | Host device | virNodeDevPtr | Used to manage the physical hardware devices on the host: NICs, disks, disk controllers, sound cards, etc. Unique identifier: Name. |

Libvirt XML Definitions

Libvirt uses XML to define its various objects; the detailed XML formats are documented at https://libvirt.org/format.html. Several major objects are covered here.

disk

Any disk device, whether a floppy, hard disk, cdrom, or paravirtualized drive, is defined with the <disk> element.

<disk type='**' device='**'>

Where:

  • type specifies the type of the device source: file, block, dir, network, or volume. The specific source is defined by the <source> element.
  • device specifies the type of the device target: floppy, disk, cdrom, or lun (default: disk). The specific target is defined by the <target> element.
volume type disk
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='blk-pool0' volume='blk-pool0-vol0'/>
  <target dev='hdk' bus='ide'/>
</disk>
file type disk
<disk type='file' snapshot='external'>
  <driver name="tap" type="aio" cache="default"/>
  <source file='/var/lib/xen/images/fv0' startupPolicy='optional' />
  <target dev='hda' bus='ide'/>
</disk>
block type disk
<disk type='block' device='cdrom'>
   <driver name='qemu' type='raw'/>
   <target dev='hdd' bus='ide' tray='open'/>
   <readonly/>
 </disk>
network type disk
<disk type='network' device='cdrom'>
     <driver name='qemu' type='raw'/>
     <source protocol="http" name="url_path">
       <host name="hostname" port="80"/>
     </source>
     <target dev='hde' bus='ide' tray='open'/>
     <readonly/>
   </disk>
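
The disk examples above all share one shape: a <disk> element whose type/device attributes select the semantics of the nested source and target elements. As a sketch (Python standard-library ElementTree; the helper name is mine, not a libvirt API), the volume-type example can be generated like this:

```python
import xml.etree.ElementTree as ET

def disk_xml(disk_type, device, source_attrs, target_dev, bus="ide"):
    """Build a libvirt <disk> element from the pieces shown above."""
    disk = ET.Element("disk", type=disk_type, device=device)
    ET.SubElement(disk, "driver", name="qemu", type="raw")
    ET.SubElement(disk, "source", **source_attrs)
    ET.SubElement(disk, "target", dev=target_dev, bus=bus)
    return disk

# Reproduce the volume-type example: pool + volume name select the source
volume_disk = disk_xml("volume", "disk",
                       {"pool": "blk-pool0", "volume": "blk-pool0-vol0"},
                       "hdk")
print(ET.tostring(volume_disk, encoding="unicode"))
```

Swapping `source_attrs` for `{"file": "/var/lib/xen/images/fv0"}` with `disk_type="file"` yields the file-type variant in the same way.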

host device assignment

<hostdev mode='subsystem' type='usb'> # direct USB device assignment
   <source startupPolicy='optional'>
     <vendor id='0x1234'/>
     <product id='0xbeef'/>
   </source>
   <boot order='2'/>
 </hostdev>
 <hostdev mode='subsystem' type='pci' managed='yes'> # direct PCI device assignment
   <source>
     <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
   </source>
   <boot order='1'/>
   <rom bar='on' file='/etc/fake/boot.bin'/>
 </hostdev>

network interface

There are several interface types.

  1. type='network' defines an interface that connects to a virtual network.

    <devices>
        <interface type='network'>
        <source network='default'/> # the virtual network is named 'default'
        </interface>
        ...
        <interface type='network'>
        <source network='default' portgroup='engineering'/>
        <target dev='vnet7'/>
        <mac address="00:11:22:33:44:55"/>
        <virtualport>
            <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
        </virtualport>
    
        </interface>
    </devices>
    
    
    # virsh equivalent: virsh attach-interface --domain d-2 --type network --source isolatednet1 --mac 52:53:00:4b:75:6f --config
    
  2. type='bridge' defines a bridge-to-LAN interface (bridged to the physical network): this requires that a bridge exists on the host and is already connected to the physical LAN.

    <interface type='bridge'> # connect to br0
    <source bridge='br0'/>
    </interface>
    <interface type='bridge'> # connect to br1
    <source bridge='br1'/>
    <target dev='vnet7'/>
    <mac address="00:11:22:33:44:55"/>
    </interface>
    <interface type='bridge'> # connect to the Open vSwitch bridge ovsbr
    <source bridge='ovsbr'/>
    <virtualport type='openvswitch'>
        <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
    </virtualport>
    </interface>
    
    
    # virsh equivalent: virsh attach-interface --domain d-2 --type bridge --source virbr0 --mac 52:22:33:44:55:66 --config
    
  3. type='ethernet' defines an interface that connects to the LAN using a specified script.

    <devices>
        <interface type='ethernet'>
        <target dev='vnet7'/>
        <script path='/etc/qemu-ifup-mynet'/>
        </interface>
    </devices>
    
  4. type='direct' defines a direct attachment to a physical interface; this requires the Linux macvtap driver.

    <interface type='direct' trustGuestRxFilters='no'>
    <source dev='eth0' mode='vepa'/>
    </interface>
    
  5. type='hostdev' defines an interface backed by a host PCI NIC (PCI passthrough): the host NIC is assigned directly to the virtual machine.

    <devices>
        <interface type='hostdev' managed='yes'>
        <driver name='vfio'/>
        <source>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </source>
        <mac address='52:54:00:6d:90:02'/>
        <virtualport type='802.1Qbh'>
            <parameters profileid='finance'/>
        </virtualport>
        </interface>
    </devices>
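
Applications often need to read these definitions back. A small sketch with Python's standard-library ElementTree that extracts the PCI source address from the hostdev interface above and rebuilds the conventional address string:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the hostdev interface example above
HOSTDEV_IFACE = """
<interface type='hostdev' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  </source>
  <mac address='52:54:00:6d:90:02'/>
</interface>
"""

iface = ET.fromstring(HOSTDEV_IFACE)
addr = iface.find("source/address")
# Rebuild the conventional DDDD:BB:SS.F PCI address string
pci = "%04x:%02x:%02x.%x" % tuple(
    int(addr.get(k), 16) for k in ("domain", "bus", "slot", "function"))
print(pci)  # 0000:00:07.0
```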
    

network

<bridge name="virbr0" stp="on" delay="5" macTableManager="libvirt"/>
<domain name="example.com" localOnly="no"/>
<forward mode="nat" dev="eth0"/>
  • bridge: defines the bridge used to construct this virtual network.
  • domain: defines the DNS domain of the DHCP server.
  • forward: defines how the virtual network connects to the physical LAN; mode is the forwarding mode.
  1. mode='nat': traffic from all virtual machines connected to this virtual network passes through the physical machine's NIC and is translated to the physical NIC's address.

    <network>
        <name>default</name>
        <bridge name="virbr0" />
        <forward mode="nat"/>
        <ip address="192.168.122.1" netmask="255.255.255.0">
            <dhcp>
                <range start="192.168.122.2" end="192.168.122.254" />
            </dhcp>
        </ip>
        <ip family="ipv6" address="2001:db8:ca2:2::1" prefix="64" />
    </network>
    

    You can also specify a public IP address and port number.

    <forward mode='nat'><nat><address start='1.2.3.4' end='1.2.3.10'/> </nat> </forward>
    <forward mode='nat'><nat><port start='500' end='1000'/></nat></forward>
    
  2. mode='route': similar to NAT, but traffic is routed using the routing table instead of being NATed.

    <network>
        <name>local</name>
        <bridge name="virbr1" />
        <forward mode="route" dev="eth1"/>
        <ip address="192.168.122.1" netmask="255.255.255.0">
            <dhcp>
                <range start="192.168.122.2" end="192.168.122.254" />
            </dhcp>
        </ip>
        <ip family="ipv6" address="2001:db8:ca2:2::1" prefix="64" />
    </network>
    
  3. mode='bridge': use a bridge not managed by libvirt, such as an existing bridge on the host, an Open vSwitch bridge, or macvtap's "bridge" mode.

    <network>
        <name>host-bridge</name>
        <forward mode="bridge"/>
        <bridge name="br0"/>
    </network>
    
  4. mode='passthrough': use a macvtap "direct" connection in passthrough mode, dedicating specific host NICs to the virtual network.

    <forward mode='passthrough'>
        <interface dev='eth10'/>
        <interface dev='eth11'/>
        <interface dev='eth12'/>
        <interface dev='eth13'/>
        <interface dev='eth14'/>
    </forward>
    
  5. mode=‘hostdev’: Directly assign the network device on the host.

    <forward mode='hostdev' managed='yes'>
        <driver name='vfio'/>
        <address type='pci' domain='0' bus='4' slot='0' function='1'/>
        <address type='pci' domain='0' bus='4' slot='0' function='2'/>
        <address type='pci' domain='0' bus='4' slot='0' function='3'/>
    </forward>
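
Because these network definitions are plain XML, they are easy to inspect programmatically. A sketch using Python's standard-library ElementTree against the NAT example above (the ipv6 line is omitted for brevity):

```python
import xml.etree.ElementTree as ET

DEFAULT_NET = """
<network>
    <name>default</name>
    <bridge name="virbr0"/>
    <forward mode="nat"/>
    <ip address="192.168.122.1" netmask="255.255.255.0">
        <dhcp>
            <range start="192.168.122.2" end="192.168.122.254"/>
        </dhcp>
    </ip>
</network>
"""

net = ET.fromstring(DEFAULT_NET)
net_name = net.findtext("name")
mode = net.find("forward").get("mode")
dhcp_range = net.find("ip/dhcp/range")
print(net_name, mode, dhcp_range.get("start"), dhcp_range.get("end"))
```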
    

Libvirt API implementation

The libvirt API is implemented in the individual hypervisor drivers and storage drivers.

The hypervisor drivers include the QEMU/KVM, Xen, VMware, VirtualBox, and other drivers mentioned above.

Principle implementation

Overall architecture


kvm

KVM is a Linux kernel module that requires hardware-assisted virtualization: Intel VT-x or AMD-V for the CPU, and memory technologies such as Intel's EPT and AMD's RVI. Guest CPU instructions run directly on the hardware without being translated by QEMU, which greatly increases speed. KVM exposes its interface through /dev/kvm; user programs access it via ioctl calls, as in the following pseudo-code.

open("/dev/kvm")
ioctl(KVM_CREATE_VM)
ioctl(KVM_CREATE_VCPU)
for (;;) {
    ioctl(KVM_RUN)
    switch (exit_reason) {
    case KVM_EXIT_IO:   /* emulate the guest's I/O access */
    case KVM_EXIT_HLT:  /* guest executed hlt */
    }
}

The kvm kernel module by itself only virtualizes the CPU and memory, so it must be paired with QEMU to form a complete virtualization solution: the qemu-kvm described below.

qemu

QEMU is an emulator that presents emulated hardware to the guest OS: the guest believes it is working directly with hardware, while QEMU translates those operations onto the real hardware. Because every device operation passes through QEMU, performance is poor.


qemu-kvm

QEMU integrates KVM: it calls the /dev/kvm interface via ioctl and hands CPU instructions over to the kernel module. KVM handles CPU and memory virtualization but cannot emulate other devices, so QEMU emulates the I/O devices (NICs, disks, etc.). Together, KVM and QEMU make server virtualization possible in the true sense; the combination is called qemu-kvm because it uses both.

virtio

QEMU-emulated I/O devices perform poorly, so the paravirtualized devices virtio_blk and virtio_net were created to improve device performance.

Libvirt

libvirt is the most widely used tool and API for managing KVM virtual machines.

  • libvirtd is a daemon that can be called by virsh locally or remotely.
  • libvirtd invokes qemu-kvm to operate KVM virtual machines.


Code Implementation

The main objects defined in the Libvirt code are shown below.


  1. virConnectPtr: represents a connection to a specific VMM. Every libvirt-based application first supplies a URI to specify a particular local or remote VMM and obtains a virConnectPtr connection. For example, xen+ssh://host-virt/ denotes a Xen VMM on the machine host-virt, reached via ssh. With a virConnectPtr connection, the application can manage that VMM's virtual machines and the corresponding virtualization resources, such as storage and networks.
  2. virDomainPtr: represents a virtual machine, which may be running or merely defined.
  3. virNetworkPtr: represents a network.
  4. virStorageVolPtr: represents a storage volume, usually used by a virtual machine as a block device.
  5. virStoragePoolPtr: represents a storage pool, used to allocate and manage logical areas for storage volumes.

Communication between local machines

During initialization, all drivers are enumerated and registered; each driver loads the specific functions to be called through the libvirt API. As shown in the figure below, the application calls the public API with a URI, and the public API dispatches the call to the real driver through the driver interface.

Communication between remote hosts

Libvirt aims to support remote management, so access to libvirt's drivers is mediated by the libvirt daemon, libvirtd, which runs on the node hosting the virtual machines; the client side talks to it through the remote driver over RPC, as shown in the following diagram.


In remote management mode, virConnectPtr actually connects the local remote driver to the concrete driver on the remote host: all calls first reach libvirtd on the remote machine via the remote driver, and libvirtd then invokes the corresponding driver.

Cross-reference

The table below cross-references virsh commands, the libvirt C API, QEMU driver functions, and (in part) QEMU monitor commands.

| virsh command | Public API | QEMU driver function | Monitor command |
| --- | --- | --- | --- |
| virsh create XMLFILE | virDomainCreateXML() | qemudDomainCreate() | info cpus, cont, change vnc password, balloon (all indirectly) |
| virsh suspend GUEST | virDomainSuspend() | qemudDomainSuspend() | stop |
| virsh resume GUEST | virDomainResume() | qemudDomainResume() | cont |
| virsh shutdown GUEST | virDomainShutdown() | qemudDomainShutdown() | system_powerdown |
| virsh setmem GUEST MEM-KB | virDomainSetMemory() | qemudDomainSetMemory() | balloon (indirectly) |
| virsh dominfo GUEST | virDomainGetInfo() | qemudDomainGetInfo() | info balloon (indirectly) |
| virsh save GUEST FILENAME | virDomainSave() | qemudDomainSave() | stop, migrate exec |
| virsh restore FILENAME | virDomainRestore() | qemudDomainRestore() | cont |
| virsh dumpxml GUEST | virDomainDumpXML() | qemudDomainDumpXML() | info balloon (indirectly) |
| virsh attach-device GUEST XMLFILE | virDomainAttachDevice() | qemudDomainAttachDevice() | change, eject, usb_add, pci_add (all indirectly) |
| virsh detach-device GUEST XMLFILE | virDomainDetachDevice() | qemudDomainDetachDevice() | pci_del (indirectly) |
| virsh migrate GUEST DEST-URI | virDomainMigrate() | qemudDomainMigratePerform() | stop, migrate_set_speed, migrate, cont |
| virsh domblkstat GUEST | virDomainBlockStats() | qemudDomainBlockStats() | info blockstats |
| - | virDomainBlockPeek() | qemudDomainMemoryPeek() | memsave |

Practice

Installing a kvm environment

KVM is an open source, Linux-native full-virtualization solution for x86 hardware with virtualization extensions (Intel VT or AMD-V). In KVM, virtual machines are implemented as regular Linux processes scheduled by the standard Linux scheduler, with each virtual CPU running as a thread of that process. This lets KVM reuse the existing functionality of the Linux kernel. KVM itself performs no hardware emulation: it requires a user-space application to set up the guest's address space through the /dev/kvm interface, supply emulated I/O, and map the guest's video display back to the host's display. That application is QEMU.

Kvm related installation packages and their roles:

  • qemu-kvm: the main KVM package
  • python-virtinst: command-line tools and libraries for creating virtual machines
  • virt-manager: GUI for managing virtual machines (can manage remote KVM hosts)
  • virt-viewer: GUI for interacting directly with a virtual machine (can connect to remote KVM hosts)
  • virsh: libvirt-based command-line tool (CLI)
  • virt-top: virtual machine statistics command
  • libvirt: simple, unified tools and APIs for managing virtual machines, hiding the complex structure underneath (supports qemu-kvm/VirtualBox/VMware)
  • libvirt-client: C client library for talking to libvirt
  • virt-install: libvirt-based virtual machine creation command
  • bridge-utils: tools for creating and managing bridge devices
# install qemu and libvirt
yum -y install qemu-kvm qemu-img libvirt virt-install bridge-utils virt-manager
# yum -y install qemu-kvm-tools  # not found in the repositories

# check that the kvm kernel modules are loaded
[root@VM-64-223-centos ~]# lsmod | grep kvm
kvm_intel             315392  0
kvm                   847872  1 kvm_intel
irqbypass              16384  1 kvm

# start the libvirtd service and enable it at boot
[root@VM-64-223-centos ~]# systemctl enable libvirtd
[root@VM-64-223-centos ~]# systemctl start libvirtd
[root@VM-64-223-centos ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2022-02-24 16:58:36 CST; 6s ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 16881 (libvirtd)
    Tasks: 19 (limit: 32768)
   Memory: 14.8M
   CGroup: /system.slice/libvirtd.service
           ├─16881 /usr/sbin/libvirtd --timeout 120
           ├─17009 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/>
           └─17011 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/>


# set the locale
[root@VM-64-223-centos ~]# LANG="en_US.UTF-8"

Create a directory /data under the root file system. (Do not keep virtual machine files under /root: after the virtual machine starts, the qemu user has no permission to read configuration files there.)

# mkdir /data
# cd /data
# wget http://mirrors.ustc.edu.cn/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-2009.iso

Install the virtual machine

Before installation, set the environment language to English with LANG="en_US.UTF-8"; with a Chinese locale, some versions may report an error.

When creating a virtual machine with KVM, note that the .iso image must be placed in /data (or another freshly created directory); otherwise creation fails with permission errors.

# create a virtual disk: -f sets the format, the path is /data/kvm.qcow2, the size is 30G
[root@VM-64-223-centos data]# qemu-img create -f qcow2 -o preallocation=metadata /data/kvm.qcow2 30G
Formatting '/data/kvm.qcow2', fmt=qcow2 size=32212254720 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
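
The `size=32212254720` that qemu-img prints is simply 30 GiB expressed in bytes:

```python
# qemu-img's reported size=32212254720 is 30 GiB in bytes
size_bytes = 30 * 1024 ** 3
print(size_bytes)  # 32212254720
```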

First, get to know the virt-install command. Use --help to view its options; only the important ones are covered here, the rest can be picked up later.

# virt-install --help
-n NAME      : name of the virtual machine
--memory     : memory size in MiB
--vcpus      : number of virtual CPUs (default 1)
--cdrom      : path to the installation image
--disk       : disk path (the virtual disk created beforehand)
--virt-type  : virtualization type (kvm, qemu, xen)
--network    : network type

Execute the create virtual machine command.

# virt-install --virt-type=kvm --name=centos --vcpus=2 --memory=4096 --location=/data/CentOS-7-x86_64-Minimal-2009.iso --disk path=/data/kvm.qcow2,size=20,format=qcow2 --network network=default  --graphics none --extra-args='console=ttyS0' --force

Starting install...
Retrieving file vmlinuz...                                                                     | 6.5 MB  00:00:00
Retrieving file initrd.img...                                                                  |  53 MB  00:00:00
Connected to domain centos
Escape character is ^]
[    4.020055] Freeing initrd memory: 53840k freed
[    4.032322] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    4.033010] software IO TLB [mem 0x78b3c000-0x7cb3c000] (64MB) mapped at [ffff9a54f8b3c000-ffff9a54fcb3bfff]
[    4.034101] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[    4.034969] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[    4.035550] RAPL PMU: hw unit of domain package 2^-0 Joules
[    4.036131] RAPL PMU: hw unit of domain dram 2^-16 Joules
[    4.037025] sha1_ssse3: Using AVX2 optimized SHA-1 implementation
...

The creation command above drops into the installer's text-mode configuration; entries marked with [!] must be configured. After completing them, the virtual machine installation proceeds.

View the virtual machine

# list running virtual machines
[root@VM-64-223-centos ~]# virsh list
 Id   Name     State
------------------------
 2    centos   running
 
# command to connect to the virtual machine
# if this errors, a console session from the installation may still be attached; exit it and log in again
# virsh console centos
Connected to domain centos
Escape character is ^]

CentOS Linux 7 (Core)
Kernel 3.10.0-1160.el7.x86_64 on an x86_64

localhost login: root
Password:
Last login: Fri Feb 25 15:39:48 on ttyS0
[root@localhost ~]#

Basic virtual machine operations

virt-install          # create a KVM virtual machine
virsh list            # list running virtual machines
virsh list --all      # list all virtual machines
virsh dumpxml name    # show a VM's XML configuration
virsh start name      # start a VM
virsh shutdown name   # graceful shutdown
virsh destroy name    # forced power-off (like pulling the power cord on a physical machine)
virsh undefine name   # delete a VM permanently; to recover it later, back up its XML under /etc/libvirt/qemu first
virsh define file.xml # define a VM from an XML configuration file
virsh suspend name    # suspend a VM
virsh resume name     # resume a suspended VM
[root@VM-64-223-centos ~]# virsh dumpxml centos
<domain type='kvm' id='2'>
  <name>centos</name>
  <uuid>556c2f19-2c66-42c3-bedb-5e1cbc469bee</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://centos.org/centos/7.0"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Cascadelake-Server</model>
    <vendor>Intel</vendor>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='pdcm'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='umip'/>
    <feature policy='require' name='pku'/>
    <feature policy='require' name='md-clear'/>
    <feature policy='require' name='stibp'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='xsaves'/>
    <feature policy='require' name='ibpb'/>
    <feature policy='require' name='ibrs'/>
    <feature policy='require' name='amd-stibp'/>
    <feature policy='require' name='amd-ssbd'/>
    <feature policy='require' name='rdctl-no'/>
    <feature policy='require' name='ibrs-all'/>
    <feature policy='require' name='skip-l1dfl-vmentry'/>
    <feature policy='require' name='mds-no'/>
    <feature policy='require' name='pschange-mc-no'/>
    <feature policy='require' name='tsx-ctrl'/>
    <feature policy='disable' name='hle'/>
    <feature policy='disable' name='rtm'/>
    <feature policy='disable' name='mpx'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/data/kvm.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu'/>
      <target dev='sda' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <alias name='pci.7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:22:2c:d0'/>
      <source network='default' portid='e3de6751-8de8-473c-b2a3-8c06679faea8' bridge='virbr0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-centos/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>
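Individual fields in a dump like the one above can be pulled out with standard text tools. A minimal sketch, extracting the NIC MAC — a trimmed sample file stands in for a live `virsh dumpxml centos`, and the path `/tmp/dom-sample.xml` is just an example:

```shell
# Trimmed stand-in for `virsh dumpxml centos`; on a real host you would
# redirect the dumpxml output into the file instead.
cat > /tmp/dom-sample.xml <<'EOF'
<domain type='kvm'>
  <devices>
    <interface type='network'>
      <mac address='52:54:00:22:2c:d0'/>
    </interface>
  </devices>
</domain>
EOF

# Pull the NIC MAC out of the dumped XML
sed -n "s/.*mac address='\([^']*\)'.*/\1/p" /tmp/dom-sample.xml
# prints 52:54:00:22:2c:d0
```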

Edit KVM’s XML file to change the VM CPU configuration

There are two ways to configure the CPUs of a virtual machine:

  • Specify the number of cores at boot time
  • Edit the XML file

To hot-add CPUs, you must first raise the maximum vCPU count in the XML; the number of hot-added CPUs cannot exceed this maximum.

# virsh list --all  # list the virtual machines
# virsh edit centos # open the VM's XML file and find the vcpu element below

The current vCPU count is 2; change the placement to 'auto' and set the maximum to 4:

<vcpu placement='auto' current="2">4</vcpu>

Restart the virtual machine and check the CPU information to confirm the count before hot-adding CPUs.

# virsh shutdown centos
# virsh start centos

At this point, check the virtual machine CPU.

[root@localhost ~]# cat /proc/cpuinfo | grep processor
processor   : 0
processor   : 1

Dynamically modify the virtual machine's CPU count:

# virsh setvcpus centos 4 --live

Go back to the virtual machine and check the CPU information.

[root@localhost ~]# cat /proc/cpuinfo | grep processor
processor   : 0
processor   : 1
processor   : 2
processor   : 3
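The hot-add flow above can be sketched end to end; the domain name centos follows the example, the virsh line is commented out because it needs a live libvirt host, and the guest-side loop assumes a Linux guest where hot-added vCPUs may come up offline:

```shell
# Apply and persist the new count in one step; 4 must not exceed the
# <vcpu> maximum set earlier.
# virsh setvcpus centos 4 --live --config

# Inside the guest, bring any offline vCPUs online (needs root in the guest)
for f in /sys/devices/system/cpu/cpu[0-9]*/online; do
    echo 1 2>/dev/null > "$f" || true
done

# Count what the kernel now sees
grep -c '^processor' /proc/cpuinfo
```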

Edit KVM's XML file to change the VM memory configuration

Memory is managed through a balloon mechanism, so it can be increased or decreased at runtime, but only up to a maximum value. The maximum is not raised by default and can be specified at installation time.

# virsh dominfo centos # show virtual machine information
Id:             3
Name:           centos
UUID:           556c2f19-2c66-42c3-bedb-5e1cbc469bee
OS Type:        hvm
State:          running
CPU(s):         4
CPU time:       14.8s
Max memory:     4194304 KiB
Used memory:    4194304 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: none
Security DOI:   0

You can modify XML to change the virtual machine memory configuration.

# virsh edit centos
  <memory unit='KiB'>6000640</memory> (maximum memory: 6 GB)
  <currentMemory unit='KiB'>4194304</currentMemory>  (current memory: 4 GB)

# virsh setmem centos 5G   (increase or decrease)
# The size that can be adjusted online cannot exceed the maximum memory allocated to the VM; otherwise, shut the VM down and raise the maximum first
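A sketch of the full resize flow: `virsh setmem` takes KiB by default, so larger sizes are easiest to compute explicitly. The domain name centos and the 6 GiB ceiling follow the example; the virsh lines are commented out because they need a live libvirt host:

```shell
# Compute 5 GiB in KiB for virsh setmem
GIB=5
KIB=$((GIB * 1024 * 1024))
echo "$KIB"    # 5242880

# Raise the ceiling first (takes effect on the next boot), then grow the
# running guest up to it.
# virsh setmaxmem centos 6291456 --config
# virsh setmem centos "$KIB" --live
```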

Create virtual machines from an image

Image creation principles

  • When partitioning, create only a single / (root) partition and no swap partition: the virtual machine's disk performance is poor, and once swap is actually used it gets even worse. Alibaba Cloud hosts, for example, have no swap partition.
  • Remove the UUID from the NIC configuration (eth0) in the image
  • Disable SELinux and iptables
  • Install a base set of packages: net-tools lrzsz screen tree vim wget
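The NIC-cleanup principle can be sketched with sed; a sample file (hypothetical path `/tmp/ifcfg-eth0.sample`) stands in for `/etc/sysconfig/network-scripts/ifcfg-eth0` inside the image, and the remaining steps are shown as commented commands to run in the image as root:

```shell
# Sample NIC config standing in for ifcfg-eth0 inside the image
CFG=/tmp/ifcfg-eth0.sample
cat > "$CFG" <<'EOF'
DEVICE=eth0
UUID=9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04
HWADDR=52:54:00:11:22:33
BOOTPROTO=dhcp
ONBOOT=yes
EOF

# Strip per-machine identifiers so clones don't collide
sed -i '/^UUID=/d;/^HWADDR=/d' "$CFG"
cat "$CFG"

# The other principles, run inside the image as root:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# systemctl disable iptables
# yum install -y net-tools lrzsz screen tree vim wget
```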

Create a virtual machine image file

# Copy the clean system image from the first installation as the base image;
# later virtual machines are created from this base image.
# cp /data/centos.qcow2  /data/centos7.base.qcow2

# Create a new virtual machine image from the base image
# cp /data/centos7.base.qcow2   /data/centos7.113.qcow2

# Create the virtual machine configuration file:
# dump the clean first-install domain definition as the base configuration file.
# virsh dumpxml centos > /data/centos7.base.xml

# Create a new virtual machine configuration file from the base one
# cp /data/centos7.base.xml  /data/centos.new.xml

Edit the new virtual machine configuration file

The main changes are the virtual machine name, UUID, image path, and NIC MAC address; the UUID can be generated with the Linux uuidgen command.
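Both per-machine identifiers can be generated on the shell; the 52:54:00 prefix is the locally administered range conventionally used for QEMU/KVM guests, and `$RANDOM` is bash-specific:

```shell
# Fresh UUID for the <uuid> element
NEW_UUID=$(uuidgen)
echo "$NEW_UUID"

# Random MAC in the 52:54:00 locally administered prefix,
# for the <mac address=''/> element (bash-specific $RANDOM)
NEW_MAC=$(printf '52:54:00:%02x:%02x:%02x' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "$NEW_MAC"
```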

<domain type='kvm'>
  <name>centos-new</name>
  <uuid>1e86167a-33a9-4ce8-929e-58013fbf9122</uuid>
  <devices>
    <disk type='file' device='disk'>
      <source file='/data/centos7.113.qcow2'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:00:00:00:00:04'/>
    </interface>
  </devices>
</domain>

# virsh define /data/centos.new.xml

How to define a virtual machine based on XML

<domain type='kvm'>  //for Xen, type='xen'
  <name>vm0</name> //VM name, unique on the physical host
  <uuid>fd3535db-2558-43e9-b067-314f48211343</uuid>  //unique on the physical host; generate with uuidgen
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>  //these two memory values are best set the same
  <vcpu>2</vcpu>            //number of CPUs the VM may use; count the host's CPUs with: cat /proc/cpuinfo | grep processor | wc -l
  <os>
   <type arch='x86_64' machine='pc-i440fx-vivid'>hvm</type> //arch is the architecture, machine the machine type; list machine types with: qemu-system-x86_64 -M ?
   <boot dev='hd'/>  //boot device; for a first-time install, boot from the cdrom
   <bootmenu enable='yes'/>  //press F12 at boot to enter the boot menu
  </os>
  <features>
   <acpi/>  //Advanced Configuration and Power Interface
   <apic/>  //Advanced Programmable Interrupt Controller
   <pae/>   //Physical Address Extension
  </features>
  <clock offset='localtime'/>  //VM clock; here, the host's local time
  <on_poweroff>destroy</on_poweroff>  //actions on lifecycle events
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>   //device configuration
   <emulator>/usr/bin/kvm</emulator> //for Xen: /usr/lib/xen/bin/qemu-dm
   <disk type='file' device='disk'> //hard disk
      <driver name='qemu' type='raw'/>
      <source file='/opt/vm/vmdev/fdisk.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> //domain, bus, slot, function; the slot must be unique within a VM
   </disk>
   <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/opt/vm/vmdev/fdisk2.img'/>
      <target dev='vdb' bus='virtio'/>
   </disk>
   <disk type='file' device='cdrom'> //CD-ROM
      <driver name='qemu' type='raw'/>
      <source file='/opt/vm/vmiso/ubuntu-15.10-server-amd64.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
   </disk>

   /* attach to the network via a Linux bridge */
   <interface type='bridge'>
      <mac address='fa:92:01:33:d4:fa'/>
      <source bridge='br100'/>  //name of the configured bridge interface
      <target dev='vnet0'/>     //the same under the same bridge
      <alias name='net0'/>      //alias; the same under the same bridge
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>  //note: the slot must be unique
   </interface>

   /* attach to the network via an OVS bridge */
   <interface type='bridge'>
      <source bridge='br-ovs0'/>
      <virtualport type='openvswitch'/>
      <target dev='tap0'/>
      <model type='virtio'/>
   </interface>

    /* PCI passthrough into the VM; the VF of an SR-IOV NIC */
   <hostdev mode='subsystem' type='pci' managed='yes'>
     <source>
       <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </source>
   </hostdev>

   /* connect to an OVS port via vhost-user */
   <interface type='vhostuser'>
      <mac address='fa:92:01:33:d4:fa'/>
      <source type='unix' path='/var/run/vhost-user/tap0' mode='client'/>
      <model type='virtio'/>
      <driver vringbuf='2048'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>

   <interface type='network'>   //network based on a virtual LAN
     <mac address='52:54:4a:e1:1c:84'/>  //can be generated with a command
     <source network='default'/> //the default network
     <target dev='vnet1'/>  //the same within the same virtual network
     <alias name='net1'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>  //note the slot value
   </interface>
   <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0' keymap='en-us'>  //VNC config; connect with vncviewer (e.g. from Windows); get the VNC port with: virsh vncdisplay vm0
    <listen type='address' address='0.0.0.0'/>
   </graphics>
  </devices>
</domain>
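Before defining a hand-written XML like the one above, it is worth checking it parses; a minimal sketch with a stripped-down (hypothetical) skeleton at `/tmp/vm0-sketch.xml`, where xmllint checks well-formedness and virt-xml-validate (shipped with libvirt) additionally validates against the domain schema:

```shell
# Write a minimal domain skeleton (hypothetical example)
cat > /tmp/vm0-sketch.xml <<'EOF'
<domain type='kvm'>
  <name>vm0</name>
  <memory>524288</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
EOF

# Well-formedness check, if xmllint is installed
if command -v xmllint >/dev/null; then xmllint --noout /tmp/vm0-sketch.xml; fi

# On a real libvirt host (commented out here):
# virt-xml-validate /tmp/vm0-sketch.xml
# virsh define /tmp/vm0-sketch.xml && virsh start vm0
echo "sketch written"
```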

Reference https://houmin.cc/posts/efda97c6/