QEMU-KVM Virtualization
What is virtualization?
Why do enterprises use virtualization?
1. It saves cost.
2. It improves efficiency.
The physical machine is usually called the host, and the virtual machines running on it are called guests.
What software does virtualization use to allocate resources?
Virtualization implemented through a hypervisor falls into two categories:
Bare-metal (Type 1) hypervisor:
The hypervisor is installed directly on the physical machine, and multiple virtual machines run on top of it. The hypervisor is usually implemented as a specially customized Linux system. Xen and VMware ESXi belong to this type.
Because it runs directly on the hardware and is optimized for hardware-assisted virtualization, this model generally performs better than the hosted model.
Hosted (Type 2) hypervisor:
A regular operating system such as Red Hat, Ubuntu, or Windows is installed on the physical machine first.
The hypervisor then runs as a program module on top of that OS and manages the virtual machines. KVM, VirtualBox, and VMware Workstation belong to this type.
Because it is based on an ordinary operating system, the hosted model is more flexible; for example, it supports nesting, which means you can run KVM inside a KVM virtual machine.
Summary: the bare-metal model is optimized for hardware-assisted virtualization and generally performs better; the hosted model is more flexible because it sits on a normal OS, for example it supports nested virtualization (running KVM inside a KVM guest). Strictly speaking, "full virtualization" versus "paravirtualization" describes whether the guest OS is modified to cooperate with the hypervisor, which is a separate axis from the Type 1/Type 2 distinction drawn here.
What is KVM?
KVM stands for Kernel-based Virtual Machine.
KVM is open-source, kernel-based virtualization: it is a kernel module that turns Linux itself into a hypervisor, and the virtual machines are scheduled by Linux's own scheduler.
KVM was merged into the mainline kernel in Linux 2.6.20 and has since gradually replaced Xen as the hypervisor shipped with the major Linux distributions.
KVM's kernel module is kvm.ko. KVM itself only takes care of vCPU scheduling and memory management; I/O and device handling are left to the Linux kernel and QEMU.
How is KVM managed?
Libvirt is the management layer for KVM.
Besides KVM, libvirt can also manage other hypervisors such as Xen and VirtualBox.
Libvirt consists of three parts: the background daemon libvirtd, an API library, and the command-line tool virsh.
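As a quick illustration of how virsh is used once libvirtd is running (the guest name myvm below is just a placeholder):
# list all defined guests, both running and shut off
virsh list --all
# start / gracefully shut down the guest named myvm
virsh start myvm
virsh shutdown myvm
# dump a guest's XML definition
virsh dumpxml myvm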
KVM virtualization has two core components:
1) The KVM kernel modules: the core module kvm.ko plus the hardware-specific kvm_intel or kvm_amd module. They are responsible for CPU and memory virtualization, including VM creation, memory allocation and management, and vCPU execution-mode switching.
2) QEMU device emulation: QEMU implements I/O virtualization and emulates the various devices (disk, NIC, graphics, sound card, and so on), interacting with the KVM kernel module through ioctl() system calls. KVM only supports hardware-assisted virtualization (Intel VT-x and AMD-V). When the module is loaded, KVM initializes its internal data structures, turns on the virtualization bit in the CR4 control register, executes the VMXON instruction to put the host OS into root mode, and creates the special device file /dev/kvm to wait for commands from user space; the KVM kernel module and QEMU then cooperate to manage the VMs. KVM reuses parts of the Linux kernel such as process scheduling, device drivers, and memory management.
KVM itself performs no device emulation; a user-space program (QEMU) uses the /dev/kvm interface to set up the address space of a guest.
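To make this division of labor concrete, here is a minimal, hypothetical qemu-kvm invocation (the qcow2 path is a placeholder): the -enable-kvm flag hands CPU and memory virtualization to kvm.ko via /dev/kvm, while QEMU itself emulates the disk, CD-ROM and display.
# RAM and vCPUs are virtualized by kvm.ko in the kernel;
# the IDE disk, CD-ROM and VNC display are emulated by QEMU in user space
qemu-kvm -enable-kvm -m 2048 -smp 2 \
    -hda /kvmdata/test.qcow2 \
    -cdrom /kvmdata/CentOS-8.5.2111-x86_64-dvd1.iso \
    -boot d -vnc :1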
The relationship between KVM and QEMU
QEMU's three execution modes
QEMU's two characteristics
QEMU's two operation modes: full-system emulation and user-mode emulation.
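A rough illustration of the two operation modes, assuming the qemu-system-x86_64 and qemu-aarch64 binaries are available (they come from additional QEMU packages, not the minimal qemu-kvm install used later):
# full-system emulation: boots an entire virtual machine (firmware, devices, guest kernel)
qemu-system-x86_64 -m 1024 -cdrom /kvmdata/CentOS-8.5.2111-x86_64-dvd1.iso
# user-mode emulation: runs a single binary built for another architecture
# directly on the host kernel (hello_arm64 is a placeholder program)
qemu-aarch64 ./hello_arm64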
Lab environment:
egrep -o 'vmx|svm' /proc/cpuinfo
Deployment
# Verify that the CPU supports KVM virtualization: Intel CPUs show vmx, AMD CPUs show svm
[root@Qume-KVM ~]# egrep -o 'vmx|svm' /proc/cpuinfo
svm
svm
svm
svm
# Disable the firewall and SELinux
[root@Qume-KVM ~]# setenforce 0
[root@Qume-KVM ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@Qume-KVM ~]# systemctl disable --now firewalld.service
# Check the name of the new disk
[root@Qume-KVM ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 199G 0 part
├─cl-root 253:0 0 70G 0 lvm /
├─cl-swap 253:1 0 2G 0 lvm [SWAP]
└─cl-home 253:2 0 127G 0 lvm /home
sdb 8:16 0 200G 0 disk
sr0 11:0 1 10.1G 0 rom /mnt/cdrom
# Partition the new disk. Inside parted you can press Tab to see which subcommands and parameters are available
[root@Qume-KVM ~]# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel # create a new disk label (partition table)
New disk label type? msdos # msdos is the MBR-style partition table
(parted) unit # set the unit used to display and enter sizes
Unit? [compact]? MiB # show disk capacity in MiB
(parted) p # p is short for print; show disk information
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 204800MiB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
(parted) mkpart # create a partition
Partition type? primary/extended? primary # partition type
File system type? [ext2]? xfs # file system type
Start? 10 # start at 10 (MiB, the unit set above)
End? 204790 # end at 204790 MiB
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 204800MiB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 10.0MiB 204790MiB 204780MiB primary xfs lba
(parted) q # q is short for quit
Information: You may need to update /etc/fstab.
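For reference, the same partitioning can also be done non-interactively; a sketch that should be equivalent to the interactive session above:
# script mode (-s): create the MBR label and the primary xfs partition in one go
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary xfs 10MiB 204790MiB
parted -s /dev/sdb unit MiB print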
[root@Qume-KVM ~]# udevadm settle # wait for udev to finish processing the new partition events
[root@Qume-KVM ~]# lsblk # check the disk and partition layout
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 199G 0 part
├─cl-root 253:0 0 70G 0 lvm /
├─cl-swap 253:1 0 2G 0 lvm [SWAP]
└─cl-home 253:2 0 127G 0 lvm /home
sdb 8:16 0 200G 0 disk
└─sdb1 8:17 0 200G 0 part
sr0 11:0 1 10.1G 0 rom /mnt/cdrom
[root@Qume-KVM ~]# mkfs -t xfs /dev/sdb1 # format the partition with an xfs file system
meta-data=/dev/sdb1 isize=512 agcount=4, agsize=13105920 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=52423680, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=25597, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@Qume-KVM ~]# blkid /dev/sdb1 # check the partition's UUID and file system type
/dev/sdb1: UUID="d9f5e2c1-ffc3-489b-9cf0-a58287454c6c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="d7078c1b-01"
[root@Qume-KVM ~]# mkdir /kvmdata # create the mount point directory
# Add an /etc/fstab entry so the mount persists across reboots
[root@Qume-KVM ~]# echo 'UUID=d9f5e2c1-ffc3-489b-9cf0-a58287454c6c /kvmdata xfs defaults 0 0' >> /etc/fstab
[root@Qume-KVM ~]# mount -a # mount everything listed in /etc/fstab
[root@Qume-KVM ~]# df -Th # verify the mount succeeded
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs tmpfs 3.8G 8.9M 3.8G 1% /run
tmpfs tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/mapper/cl-root xfs 70G 2.1G 68G 3% /
/dev/sr0 iso9660 11G 11G 0 100% /mnt/cdrom
/dev/mapper/cl-home xfs 127G 939M 126G 1% /home
/dev/sda1 xfs 1014M 214M 801M 22% /boot
tmpfs tmpfs 775M 0 775M 0% /run/user/0
/dev/sdb1 xfs 200G 1.5G 199G 1% /kvmdata
# Configure the yum repository
[root@Qume-KVM ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2495 100 2495 0 0 4863 0 --:--:-- --:--:-- --:--:-- 4863
[root@Qume-KVM ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
# Install the required dependency packages
[root@Qume-KVM ~]# dnf -y install epel-release
[root@Qume-KVM ~]# dnf -y install vim wget net-tools unzip zip gcc gcc-c++
# Install KVM
[root@Qume-KVM ~]# dnf -y install qemu-kvm qemu-img virt-manager libvirt libvirt-client virt-install virt-viewer libguestfs-tools
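Optionally, you can sanity-check the host at this point; virt-host-validate normally ships with the libvirt client tools (skip this step if your build does not include it):
# check hardware virtualization support, /dev/kvm, cgroups, etc.
virt-host-validate qemu
# /dev/kvm should exist once the kvm modules are loaded
ls -l /dev/kvm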
# Configure networking. The guests usually need to sit on the same network segment as the other company servers, so we put the KVM host's NIC into a bridge
[root@Qume-KVM ~]# cd /etc/sysconfig/network-scripts/
[root@Qume-KVM network-scripts]# cp ifcfg-ens32 ifcfg-br0
[root@Qume-KVM network-scripts]# vim ifcfg-br0
TYPE=Bridge
BOOTPROTO=none
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.92.130
PREFIX=24
GATEWAY=192.168.92.2
DNS1=8.8.8.8
[root@Qume-KVM network-scripts]# vim ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=none
NAME=ens32
DEVICE=ens32
ONBOOT=yes
BRIDGE=br0
# Reload the connection profiles and bring the interfaces up
[root@Qume-KVM ~]# nmcli connection reload
[root@Qume-KVM ~]# nmcli connection up ens32
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@Qume-KVM ~]# nmcli connection up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
[root@Qume-KVM ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.92.130/24 brd 192.168.92.255 scope global noprefixroute br0
valid_lft forever preferred_lft forever
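For reference, the same bridge could also have been created with nmcli instead of editing the ifcfg files; a rough sketch assuming NetworkManager manages ens32 (do not run this on top of the configuration above):
nmcli connection add type bridge con-name br0 ifname br0 \
    ipv4.method manual ipv4.addresses 192.168.92.130/24 \
    ipv4.gateway 192.168.92.2 ipv4.dns 8.8.8.8
nmcli connection add type bridge-slave con-name ens32-slave ifname ens32 master br0
nmcli connection up br0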
# Enable and start the libvirtd service
[root@Qume-KVM ~]# systemctl enable --now libvirtd
[root@Qume-KVM ~]# lsmod | grep kvm
kvm_amd 135168 0
ccp 98304 1 kvm_amd
kvm 880640 1 kvm_amd
irqbypass 16384 1 kvm
# Symlink the qemu-kvm binary to /usr/bin/qemu-kvm so it is on the PATH
[root@Qume-KVM ~]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
[root@Qume-KVM ~]# ll /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 21 Oct 7 17:27 /usr/bin/qemu-kvm -> /usr/libexec/qemu-kvm
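A quick way to confirm the symlink resolves (version output omitted):
qemu-kvm --version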
# Install the brctl command (brctl itself is provided by the bridge-utils package; CentOS 8 no longer ships it, so the el7 RPM is used below)
[root@Qume-KVM ~]# yum -y install console-bridge console-bridge-devel
[root@Qume-KVM ~]# rpm -ivh http://mirror.centos.org/centos/7/os/x86_64/Packages/bridge-utils-1.5-9.el7.x86_64.rpm
# Show bridge information
[root@Qume-KVM ~]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000c299ee3c1 no ens32
virbr0 8000.5254004abd29 yes virbr0-nic
Preface: the web UI for KVM is provided by the webvirtmgr program.
# Install the dependency packages
[root@Qume-KVM ~]# yum -y install git python2-pip supervisor nginx python2-devel
[root@Qume-KVM ~]# rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/libxml2-python-2.9.1-6.el7.5.x86_64.rpm
[root@Qume-KVM ~]# rpm -ivh --nodeps https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/p/python-websockify-0.6.0-2.el7.noarch.rpm
# Upgrade pip
[root@Qume-KVM ~]# pip2 install --upgrade pip
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip2 install --user` instead.
Collecting pip
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)",)': /packages/27/79/8a850fe3496446ff0d584327ae44e7500daf6764ca1a382d2d02789accf7/pip-20.3.4-py2.py3-none-any.whl
Downloading https://files.pythonhosted.org/packages/27/79/8a850fe3496446ff0d584327ae44e7500daf6764ca1a382d2d02789accf7/pip-20.3.4-py2.py3-none-any.whl (1.5MB)
100% |████████████████████████████████| 1.5MB 24kB/s
Installing collected packages: pip
Found existing installation: pip 9.0.3
Uninstalling pip-9.0.3:
Successfully uninstalled pip-9.0.3
Successfully installed pip-20.3.4
You are using pip version 20.3.4, however version 22.2.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@Qume-KVM ~]# pip -V
pip 20.3.4 from /usr/lib/python2.7/site-packages/pip (python 2.7)
# Clone webvirtmgr from GitHub
[root@Qume-KVM ~]# cd /usr/local/src
[root@Qume-KVM src]# git clone http://github.com/retspen/webvirtmgr.git
Cloning into 'webvirtmgr'...
warning: redirecting to https://github.com/retspen/webvirtmgr.git/
remote: Enumerating objects: 5614, done.
remote: Total 5614 (delta 0), reused 0 (delta 0), pack-reused 5614
Receiving objects: 100% (5614/5614), 2.97 MiB | 1.76 MiB/s, done.
Resolving deltas: 100% (3606/3606), done.
[root@Qume-KVM src]# ls
webvirtmgr
# Install webvirtmgr's Python dependencies
[root@Qume-KVM src]# cd webvirtmgr/
[root@Qume-KVM webvirtmgr]# ls
conf deploy images locale networks secrets setup.py Vagrantfile
console dev-requirements.txt instance manage.py README.rst serverlog storages vrtManager
create hostdetail interfaces MANIFEST.in requirements.txt servers templates webvirtmgr
[root@Qume-KVM webvirtmgr]# pip install -r requirements.txt
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)",)': /simple/django/
Collecting django==1.5.5
Downloading Django-1.5.5.tar.gz (8.1 MB)
|████████████████████████████████| 8.1 MB 17 kB/s
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)",)': /simple/gunicorn/
Collecting gunicorn==19.5.0
Downloading gunicorn-19.5.0-py2.py3-none-any.whl (113 kB)
|████████████████████████████████| 113 kB 29 kB/s
Collecting lockfile>=0.9
Downloading lockfile-0.12.2-py2.py3-none-any.whl (13 kB)
Using legacy 'setup.py install' for django, since package 'wheel' is not installed.
Installing collected packages: django, gunicorn, lockfile
Running setup.py install for django ... done
Successfully installed django-1.5.5 gunicorn-19.5.0 lockfile-0.12.2
# Check that the sqlite3 module is available
[root@Qume-KVM webvirtmgr]# python3
Python 3.6.8 (default, Sep 10 2021, 09:13:53)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> exit()
# Initialize the database and the admin account
[root@Qume-KVM webvirtmgr]# python2 manage.py syncdb
WARNING:root:No local_settings file found.
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table servers_compute
Creating table instance_instance
Creating table create_flavor
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes # whether to create a superuser account
Username (leave blank to use 'root'): root # superuser name; leave blank to default to root
Email address: [email protected] # superuser email address
Password: # set the superuser password
Password (again): # re-enter the password to confirm
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 6 object(s) from 1 fixture(s)
# Copy the web application into the directory nginx will serve
[root@Qume-KVM ~]# mkdir /var/www
[root@Qume-KVM ~]# cp -r /usr/local/src/webvirtmgr/ /var/www/
[root@Qume-KVM ~]# chown -R nginx.nginx /var/www/webvirtmgr/
# Configure SSH key-based authentication
# webvirtmgr and the KVM host run on the same machine here, so we simply trust the local host. If KVM is deployed on another machine, copy the public key to that KVM host instead
[root@Qume-KVM ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.92.130 (192.168.92.130)' can't be established.
ECDSA key fingerprint is SHA256:41MUAgoOJ7cipkGboXt2n0BlrxuPxp2IVlgXn0ahNgg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.
# Set up SSH port forwarding
[root@Qume-KVM ~]# ssh 192.168.92.130 -L localhost:8000:localhost:8000 -L localhost:6080:localhost:6080
Last login: Fri Oct 7 17:19:01 2022 from 192.168.92.1
# Check the listening ports
[root@Qume-KVM ~]# ss -anlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 127.0.0.1:6080 0.0.0.0:*
LISTEN 0 128 127.0.0.1:8000 0.0.0.0:*
LISTEN 0 128 0.0.0.0:111 0.0.0.0:*
LISTEN 0 32 192.168.122.1:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::1]:6080 [::]:*
LISTEN 0 128 [::1]:8000 [::]:*
LISTEN 0 128 [::]:111 [::]:*
LISTEN 0 128 [::]:22 [::]:*
# Configure nginx
# Back up the original nginx configuration first so it can be restored if something goes wrong
[root@Qume-KVM ~]# cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
[root@Qume-KVM ~]# vim /etc/nginx/nginx.conf
# Make the following changes in the default server block:
# delete the line: listen [::]:80;
# change the server_name line to: server_name localhost;
# delete the line: root /usr/share/nginx/html;
server {
listen 80 ;
server_name localhost;
# below the line include /etc/nginx/default.d/*.conf; change the location block to the following
location / {
root html;
index index.html index.htm;
}
# Configure the nginx virtual host for webvirtmgr
[root@Qume-KVM ~]# vim /etc/nginx/conf.d/webvirtmgr.conf
server {
listen 80 default_server;
server_name $hostname;
#access_log /var/log/nginx/webvirtmgr_access_log;
location /static/ {
root /var/www/webvirtmgr/webvirtmgr;
expires max;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Forwarded-Proto $remote_addr;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;
client_max_body_size 1024M;
}
}
# Make sure gunicorn binds to port 8000 on the local host
[root@Qume-KVM ~]# grep "bind" /var/www/webvirtmgr/conf/gunicorn.conf.py
# bind - The socket to bind.
bind = '127.0.0.1:8000'
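Before restarting nginx it is worth validating the syntax of the edited files:
# expect "syntax is ok" and "test is successful"
nginx -t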
# Restart nginx and check that the default port 80 is now listening
[root@Qume-KVM ~]# systemctl restart nginx.service
[root@Qume-KVM ~]# ss -anlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 127.0.0.1:6080 0.0.0.0:*
LISTEN 0 128 127.0.0.1:8000 0.0.0.0:*
LISTEN 0 128 0.0.0.0:111 0.0.0.0:*
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 32 192.168.122.1:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::1]:6080 [::]:*
LISTEN 0 128 [::1]:8000 [::]:*
LISTEN 0 128 [::]:111 [::]:*
LISTEN 0 128 [::]:22 [::]:*
# Configure supervisor: append the following to the end of the file
[root@Qume-KVM ~]# vim /etc/supervisord.conf
[program:webvirtmgr]
command=/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr.log
redirect_stderr=true
user=nginx
[program:webvirtmgr-console]
command=/usr/bin/python2 /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=nginx
# Enable supervisord and start it now
[root@Qume-KVM ~]# systemctl enable --now supervisord.service
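You can then check that supervisor picked up the two programs; both should eventually show RUNNING:
supervisorctl status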
# Set up SSH keys for the nginx user
[root@Qume-KVM ~]# su - nginx -s /bin/bash
[nginx@Qume-KVM ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/nginx/.ssh/id_rsa):
Created directory '/var/lib/nginx/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/nginx/.ssh/id_rsa.
Your public key has been saved in /var/lib/nginx/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:QGXNmuONujvXUn8UcXyVkXvpuZuPpvNl5Ndf8/RRxW8 nginx@Qume-KVM
The key's randomart image is:
+---[RSA 3072]----+
| ..oo o*|
| . . o .++|
| . o o*|
| .+ .o+|
| .S+ ..E|
| o o .*o|
| . o . . oX|
| o o . o o*X|
| o= . .*ooB|
+----[SHA256]-----+
[nginx@Qume-KVM ~]$ echo -e "StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null" > ~/.ssh/config
[nginx@Qume-KVM ~]$ cat ~/.ssh/config
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
[nginx@Qume-KVM ~]$ chmod 600 .ssh/config
[nginx@Qume-KVM ~]$ ssh-copy-id [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nginx/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.92.130' (ECDSA) to the list of known hosts.
[email protected]'s password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.
# Verify that key-based authentication works
[nginx@Qume-KVM ~]$ ssh [email protected]
Warning: Permanently added '192.168.92.130' (ECDSA) to the list of known hosts.
Last login: Fri Oct 7 18:01:24 2022 from 192.168.92.130
[root@Qume-KVM ~]# exit
logout
Connection to 192.168.92.130 closed.
[nginx@Qume-KVM ~]$ exit
logout
[root@Qume-KVM ~]#
# Allow the libvirt connection over SSH through polkit
[root@Qume-KVM ~]# vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:root
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
# Restart the services so the configuration takes effect
[root@Qume-KVM ~]# systemctl restart nginx.service
[root@Qume-KVM ~]# systemctl enable nginx.service
[root@Qume-KVM ~]# systemctl restart libvirtd
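At this point you can optionally verify the exact connection webvirtmgr will use (libvirt over SSH as root, from the nginx account); a sketch:
# should list the defined guests without prompting for a password
su -s /bin/bash -c "virsh -c qemu+ssh://[email protected]/system list --all" nginx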
After completing the steps above, open the web UI in a browser.
If the browser cannot connect and the terminal prints the following errors, simply reboot the host:
channel 1012: open failed: connect failed: Device or resource busy
channel 1014: open failed: connect failed: Device or resource busy
The login account and password are the superuser credentials you created earlier.
Upload the installation ISO to the /kvmdata directory
[root@Qume-KVM ~]# ls /kvmdata/
CentOS-8.5.2111-x86_64-dvd1.iso
After the image has been uploaded, refresh the web UI and check that it shows up.
Click [New Network] and choose the network type first, because the configuration form differs depending on the type.
Create an instance with custom settings.
Set the console password.
Attach the installation image.
Start the instance.
If it reports an error, don't panic; the fix is as follows:
[root@Qume-KVM ~]# dnf -y install novnc
[root@Qume-KVM ~]# chmod +x /etc/rc.d/rc.local
[root@Qume-KVM ~]# echo "nohup novnc_server 192.168.100.100:5920 &" >> /etc/rc.d/rc.local
[root@Qume-KVM ~]# . /etc/rc.d/rc.local
The remaining steps are the same as installing CentOS in VMware.