Intel VT is short for Intel Virtualization Technology.
To address the shortcomings of pure software virtualization in reliability, security, and performance, Intel added Intel VT (Virtualization Technology) to its hardware. In August 2005 Intel first published the technical details of Vanderpool, the predecessor of Intel VT, its hardware-assisted virtualization technology. Vanderpool adds new instructions that let Intel processors support virtualization in hardware. In November 2005 Intel announced that Vanderpool had been renamed VT, and the technology was adopted by Acer and Lenovo in PCs based on the Intel Pentium 4.
Intel VT lets a single CPU behave as if several CPUs were running in parallel, which makes it possible to run multiple operating systems on one computer at the same time. Virtualization itself is nothing new: software such as VMware Workstation and Virtual PC already lets a single CPU emulate several CPUs, so that one machine can run several operating systems side by side.
Put simply, virtualization lets one physical server run several virtual machines. The virtual machines share the physical machine's CPU, memory, and I/O hardware, but are logically isolated from one another.
The physical machine is usually called the host, and the virtual machines running on it are called guests.
So how does the host virtualize its hardware resources and hand them over to the guests?
This is done mainly by a program called the hypervisor.
Depending on how the hypervisor is implemented and where it sits, virtualization is usually divided into two types:
full virtualization
para-virtualization
Full virtualization:
The hypervisor is installed directly on the physical machine and the virtual machines run on top of it. The hypervisor is usually a specially tailored Linux system. Xen and VMware ESXi are of this type (this setup is also commonly called a bare-metal, or Type 1, hypervisor).
Para-virtualization:
A regular operating system such as Red Hat, Ubuntu, or Windows is installed on the physical machine first, and the hypervisor runs as a program module on top of that OS to manage the virtual machines. KVM, VirtualBox, and VMware Workstation are of this type (commonly called a hosted, or Type 2, hypervisor).
In theory:
full virtualization is specially optimized for hardware virtualization and performs better than para-virtualization;
para-virtualization, because it runs on top of an ordinary operating system, is more flexible; for example it supports nesting, which means you can run KVM inside a KVM virtual machine (see the sketch below for how to check and enable this).
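As a rough illustration, here is a minimal sketch of how nested virtualization can be checked and enabled on an Intel host. It assumes the kvm_intel module is already loaded and that your kernel exposes the nested parameter; details may differ between distributions.
//check whether nesting is currently enabled (prints Y/1 or N/0)
[root@rs2 ~]# cat /sys/module/kvm_intel/parameters/nested
//enable it persistently, then reload the module (only possible while no VM is running)
[root@rs2 ~]# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
[root@rs2 ~]# modprobe -r kvm_intel && modprobe kvm_intel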
KVM stands for Kernel-based Virtual Machine; in other words, KVM is implemented inside the Linux kernel.
KVM consists of a kernel module, kvm.ko, which only handles virtual CPUs and memory.
I/O virtualization, for example storage and network devices, is handled by the Linux kernel together with QEMU.
As a hypervisor, KVM itself only takes care of virtual machine scheduling and memory management; I/O devices are left to the Linux kernel and QEMU.
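You can see this split on a running host once the packages below are installed: the kvm/kvm_intel modules expose the /dev/kvm device, and the QEMU binary opens it for CPU and memory virtualization while emulating disks and NICs itself. A minimal sketch, assuming the standard CentOS 7 paths:
//the character device created by the kvm kernel module
[root@rs2 ~]# ls -l /dev/kvm
//the QEMU binary that does the I/O emulation and talks to /dev/kvm
[root@rs2 ~]# ls -l /usr/libexec/qemu-kvm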
When you read KVM-related articles online you will constantly run into something called Libvirt.
Libvirt is the management tool for KVM.
In fact, besides KVM, Libvirt can also manage other hypervisors such as Xen and VirtualBox.
Libvirt consists of three parts: the background daemon libvirtd, an API library, and the command-line tool virsh.
libvirtd is the service process that receives and handles API requests;
the API library lets others build higher-level tools on top of Libvirt, for example virt-manager, a graphical KVM management tool;
virsh is the KVM command-line tool we will use most often (a few common examples follow below).
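For reference, a hedged sketch of the virsh subcommands you will reach for most often; the domain name vm1 is just a placeholder:
[root@rs2 ~]# virsh list --all       # list all defined domains, running or shut off
[root@rs2 ~]# virsh start vm1        # start a domain
[root@rs2 ~]# virsh shutdown vm1     # graceful shutdown via ACPI
[root@rs2 ~]# virsh destroy vm1      # force power off
[root@rs2 ~]# virsh dominfo vm1      # show basic information about a domain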
Environment:

OS type | IP |
---|---|
CentOS7 | 192.168.116.182 |

At least 4 CPU cores and 8 GB of memory.
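Before installing, a quick sanity check against these requirements (a minimal sketch; the outputs depend on your machine):
[root@rs2 ~]# nproc                                # number of CPU cores, should be at least 4
[root@rs2 ~]# free -h                              # total memory, should be at least 8G
[root@rs2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means VT-x/AMD-V is visible to this system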
//Installation
[root@rs2 ~]# systemctl stop firewalld
[root@rs2 ~]# systemctl disable firewalld
[root@rs2 ~]# getenforce
Enforcing
[root@rs2 ~]# yum -y install vim wget
[root@rs2 ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@rs2 ~]# reboot
[root@rs2 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@rs2 yum.repos.d]# vim CentOS-Base.repo
:%s/$releasever/7/g
[root@rs2 yum.repos.d]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
[root@rs2 yum.repos.d]# yum clean all
[root@rs2 yum.repos.d]# yum makecache fast
[root@rs2 yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@rs2 yum.repos.d]# yum -y install net-tools unzip zip gcc gcc-c++
[root@rs2 yum.repos.d]# egrep -o 'vmx|svm' /proc/cpuinfo
vmx
[root@rs2 yum.repos.d]# yum -y install qemu-kvm qemu-kvm-tools qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils libguestfs-tools
//The virtual machines usually need to sit on the same network segment as the company's other servers,
//so we configure the KVM server's NIC in bridged mode; the guests can then reach the rest of the
//network through that bridge and share its segment
//Here the physical NIC is ens33, so we create the bridge br0 on top of ens33
[root@rs2 ~]# cd /etc/sysconfig/network-scripts/
[root@rs2 network-scripts]# vim ifcfg-br0
TYPE="Bridge"
NM_CONTROLLED="no"
BOOTPROTO="static"
DEFROUTE="yes"
NAME="br0"
DEVICE="br0"
ONBOOT="yes"
IPADDR="192.168.116.182"
NETMASK="255.255.255.0"
GATEWAY="192.168.116.2"
DNS1="192.168.116.2"
[root@rs2 network-scripts]# vim ifcfg-ens33
TYPE="Ethernet"
BOOTPROTO="static"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
BRIDGE="br0"
NM_CONTROLLED="no"
[root@rs2 network-scripts]# systemctl restart network
[root@rs2 network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 00:0c:29:eb:85:ed brd ff:ff:ff:ff:ff:ff
inet6 fe80::20c:29ff:feeb:85ed/64 scope link
valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0c:29:eb:85:ed brd ff:ff:ff:ff:ff:ff
inet 192.168.116.182/24 brd 192.168.116.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fd15:4ba5:5a2b:1008:20c:29ff:feeb:85ed/64 scope global mngtmpaddr dynamic
valid_lft 86398sec preferred_lft 14398sec
inet6 fe80::20c:29ff:feeb:85ed/64 scope link
valid_lft forever preferred_lft forever
[root@rs2 network-scripts]# systemctl enable --now libvirtd
//Verify the installation (kernel modules)
[root@rs2 network-scripts]# lsmod |grep kvm
kvm_intel 174841 0
kvm 578518 1 kvm_intel
irqbypass 13503 1 kvm
//Test and verify the installation
[root@rs2 network-scripts]# virsh -c qemu:///system list
Id Name State
----------------------------------------------------
[root@rs2 network-scripts]# virsh --version
4.5.0
[root@rs2 network-scripts]# virt-install --version
1.5.0
//Create a symlink for qemu-kvm
[root@rs2 network-scripts]# cd
[root@rs2 ~]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
[root@rs2 ~]# ll /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 21 Aug 4 18:21 /usr/bin/qemu-kvm -> /usr/libexec/qemu-kvm
//Check the bridge information
[root@rs2 ~]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000c29eb85ed no ens33
virbr0 8000.525400db18eb yes virbr0-nic
//Bring up the graphical manager (over SSH X11 forwarding)
[root@rs2 ~]# vim /etc/ssh/sshd_config
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
[root@rs2 ~]# systemctl restart sshd
[root@rs2 ~]# virt-manager
[root@rs2 ~]#
(virt-manager:1823): Gtk-WARNING **: 18:40:25.397: cannot open display: localhost:10.0
(you need an X server such as Xmanager on the client side)
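In other words, virt-manager needs an X server on the machine you are sitting at. A hedged sketch of how this looks from a Linux client (on Windows, Xmanager plays the X-server role together with an SSH client):
//connect with X11 forwarding enabled, then start virt-manager inside that session
$ ssh -X [email protected]
[root@rs2 ~]# virt-manager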
The web management interface for KVM is provided by the webvirtmgr program.
//Add a disk for VM storage
[root@rs2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 2.2G 15G 13% /
devtmpfs 3.1G 0 3.1G 0% /dev
tmpfs 3.1G 0 3.1G 0% /dev/shm
tmpfs 3.1G 12M 3.1G 1% /run
tmpfs 3.1G 0 3.1G 0% /sys/fs/cgroup
/dev/sda1 1014M 143M 872M 15% /boot
tmpfs 630M 0 630M 0% /run/user/0
[root@rs2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 200G 0 disk
sr0 11:0 1 4.2G 0 rom
[root@rs2 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xa126a8c3.
Command (m for help):
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-419430399, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-419430399, default 419430399):
Using default value 419430399
Partition 1 of type Linux and of size 200 GiB is set
Command (m for help):
Command (m for help):
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rs2 ~]# partprobe
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
[root@rs2 ~]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1 isize=512 agcount=4, agsize=13107136 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=52428544, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=25599, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@rs2 ~]# blkid
/dev/sda1: UUID="9d02d86e-5842-4dcb-9050-6a35d692edb7" TYPE="xfs"
/dev/sda2: UUID="VmaOSX-w7NB-xFYe-RYMm-6P1W-Fjfr-fdDmXE" TYPE="LVM2_member"
/dev/sdb1: UUID="27d46a54-86ae-40b6-b9e0-35de8eb33227" TYPE="xfs"
/dev/sr0: UUID="2018-05-03-20-55-23-00" LABEL="CentOS 7 x86_64" TYPE="iso9660" PTTYPE="dos"
/dev/mapper/centos-root: UUID="d6529b7e-84ea-446f-ba9c-5c929309452e" TYPE="xfs"
/dev/mapper/centos-swap: UUID="ab4ef440-6b4a-4a5c-bed3-915c49ff90f3" TYPE="swap"
[root@rs2 ~]# vim /etc/fstab
UUID="27d46a54-86ae-40b6-b9e0-35de8eb33227" /storage xfs defaults 0 0
[root@rs2 ~]# mkdir /storage
[root@rs2 ~]# mount -a
[root@rs2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 2.2G 15G 13% /
devtmpfs 3.1G 0 3.1G 0% /dev
tmpfs 3.1G 0 3.1G 0% /dev/shm
tmpfs 3.1G 12M 3.1G 1% /run
tmpfs 3.1G 0 3.1G 0% /sys/fs/cgroup
/dev/sda1 1014M 143M 872M 15% /boot
tmpfs 630M 0 630M 0% /run/user/0
/dev/sdb1 200G 33M 200G 1% /storage
//Install the dependencies
[root@rs2 ~]# yum -y install git python-pip libvirt-python libxml2-python python-websockify supervisor nginx python-devel
//Upgrade pip
[root@rs2 ~]# pip install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple/    # domestic mirror; without -i the default (overseas) PyPI index is used
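If you would rather not pass -i on every pip command, the mirror can be made the default in pip's configuration file (a minimal sketch using the same Tsinghua mirror):
[root@rs2 ~]# mkdir -p ~/.pip
[root@rs2 ~]# cat > ~/.pip/pip.conf << 'EOF'
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple/
EOF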
//Download the webvirtmgr code from GitHub (GitHub no longer serves the unauthenticated git:// protocol, so clone over https)
[root@rs2 ~]# git clone https://github.com/retspen/webvirtmgr.git
//Install webvirtmgr
[root@rs2 ~]# ls
anaconda-ks.cfg zabbix-5.0.2
webvirtmgr zabbix-5.0.2.tar.gz
[root@rs2 ~]# cd webvirtmgr/
[root@rs2 webvirtmgr]# ls
conf interfaces serverlog
console locale servers
create manage.py setup.py
deploy MANIFEST.in storages
dev-requirements.txt networks templates
hostdetail README.rst Vagrantfile
images requirements.txt vrtManager
instance secrets webvirtmgr
[root@rs2 webvirtmgr]# cat requirements.txt
django==1.5.5
gunicorn==19.5.0
# Utility Requirements
# for SECURE_KEY generation
lockfile>=0.9
# Uncoment for support ldap
#django-auth-ldap==1.2.0
[root@rs2 webvirtmgr]# pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/    # domestic mirror; without -i the default (overseas) index is used
//Check whether the sqlite3 module is available
[root@rs2 webvirtmgr]# python
Python 2.7.5 (default, Apr 2 2020, 13:16:51)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3        # no output means the module is available
>>> exit()
[root@rs2 webvirtmgr]# pip install sqlite3        # only needed if the import above failed
//Initialize (sync) the database
[root@rs2 webvirtmgr]# python manage.py syncdb
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): admin        (this is the web UI username)
Email address: [email protected]
Password:
Password (again):
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 6 object(s) from 1 fixture(s)
//Copy the web application to its target directory
[root@rs2 ~]# mkdir /var/www
[root@rs2 ~]# cp -r webvirtmgr /var/www/
[root@rs2 ~]# ls /var/www/
cgi-bin html webvirtmgr
[root@rs2 ~]# chown -R nginx.nginx /var/www/webvirtmgr/
//Generate an SSH key pair
[root@rs2 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:VgwfY1Enm9XUo7HX6rV479xlcIKxeR1M8Eh2QhWJbVc root@rs2
The key's randomart image is:
+---[RSA 2048]----+
| . =o+*BBE|
| = ooOB=+|
| + +.=++|
| . B oo|
| S + =.o|
| . ..+.|
| ...+|
| ..=o|
| ..*|
+----[SHA256]-----+
//Because webvirtmgr and the KVM service are deployed on the same machine here, we trust the local host. If KVM were deployed on another machine, this would be that machine's IP
[root@rs2 ~]# ssh-copy-id 192.168.116.182
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.116.182 (192.168.116.182)' can't be established.
ECDSA key fingerprint is SHA256:8hxwf+b3CWm1IjU60WaXVQYyp4OJKyCSNGSJX2FOA3o.
ECDSA key fingerprint is MD5:fb:a5:fd:76:cb:9e:b7:30:62:1f:15:c5:be:98:f9:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.116.182'"
and check to make sure that only the key(s) you wanted were added.
//Set up port forwarding
[root@rs2 ~]# ssh 192.168.116.182 -L localhost:8000:localhost:8000 -L localhost:6080:localhost:6080
[root@rs2 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:6080 *:*
LISTEN 0 128 127.0.0.1:8000 *:*
LISTEN 0 50 *:3306 *:*
LISTEN 0 128 *:111 *:*
LISTEN 0 5 192.168.122.1:53 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 127.0.0.1:6010 *:*
LISTEN 0 128 ::1:6080 :::*
LISTEN 0 128 ::1:8000 :::*
LISTEN 0 128 :::111 :::*
LISTEN 0 128 :::80 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
LISTEN 0 128 ::1:6010 :::*
//Configure nginx
[root@rs2 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name localhost;
include /etc/nginx/default.d/*.conf;
location / {
root html;
index index.html index.htm;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
[root@rs2 ~]# vim /etc/nginx/conf.d/webvirtmgr.conf
server {
listen 80 default_server;
server_name $hostname;
#access_log /var/log/nginx/webvirtmgr_access_log;
location /static/ {
root /var/www/webvirtmgr/webvirtmgr;
expires max;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;
client_max_body_size 1024M;
}
}
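Before starting nginx, it does no harm to validate the configuration files:
[root@rs2 ~]# nginx -t        # should report that the configuration syntax is ok and the test is successful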
//Make sure gunicorn binds to port 8000 on this machine
[root@rs2 ~]# cd /var/www/webvirtmgr/conf/
[root@rs2 conf]# vim gunicorn.conf.py
bind = '0.0.0.0:8000'
backlog = 2048
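Optionally, before putting this under supervisor, you can run the same command that the supervisor configuration below will use as a foreground smoke test (stop it with Ctrl-C when done); this is only a sketch and relies on the file ownership set earlier:
[root@rs2 conf]# su - nginx -s /bin/bash -c "/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py"
//in a second terminal, check that something answers on port 8000
[root@rs2 ~]# curl -I http://127.0.0.1:8000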
//Start nginx
[root@rs2 conf]# systemctl enable --now nginx
[root@rs2 conf]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:6080 *:*
LISTEN 0 128 127.0.0.1:8000 *:*
LISTEN 0 50 *:3306 *:*
LISTEN 0 128 *:111 *:*
LISTEN 0 128 *:80 *:*
LISTEN 0 5 192.168.122.1:53 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 127.0.0.1:6010 *:*
LISTEN 0 128 ::1:6080 :::*
LISTEN 0 128 ::1:8000 :::*
LISTEN 0 128 :::111 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
LISTEN 0 128 ::1:6010 :::*
//Set up supervisor
[root@rs2 ~]# vim /etc/supervisord.conf
#append the following at the end of the file
[program:webvirtmgr]
command=/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
logfile=/var/log/supervisor/webvirtmgr.log
log_stderr=true
user=nginx
[program:webvirtmgr-console]
command=/usr/bin/python2 /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=nginx
//Start supervisor and enable it at boot
[root@rs2 ~]# systemctl enable --now supervisord
Created symlink from /etc/systemd/system/multi-user.target.wants/supervisord.service to /usr/lib/systemd/system/supervisord.service.
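A quick check that supervisor actually started both programs and that gunicorn is listening:
[root@rs2 ~]# supervisorctl status        # webvirtmgr and webvirtmgr-console should both be RUNNING
[root@rs2 ~]# ss -antl | grep 8000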
//Set up the nginx user
[root@rs2 ~]# su - nginx -s /bin/bash
-bash-4.2$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/nginx/.ssh/id_rsa):
Created directory '/var/lib/nginx/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/nginx/.ssh/id_rsa.
Your public key has been saved in /var/lib/nginx/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:4R6xJJBLZJEXTVHaPGIro+mJQloYFoK0opf6c18AOhg nginx@rs2
The key's randomart image is:
+---[RSA 2048]----+
|o. .=+.+oo. |
|o...+.. .+ |
|E....o. B + |
|++ o.. = * . |
|+o= + S |
|.+.. o = . |
|+. o o |
|o..o.. . |
| .ooo.. |
+----[SHA256]-----+
-bash-4.2$
-bash-4.2$ touch ~/.ssh/config
-bash-4.2$ echo -e "StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null" >> ~/.ssh/config
-bash-4.2$ chmod 0600 ~/.ssh/config
-bash-4.2$ ssh-copy-id [email protected]
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nginx/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.116.182' (ECDSA) to the list of known hosts.
[email protected]'s password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.
-bash-4.2$ exit
[root@rs2 ~]# vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:root
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
[root@rs2 ~]# systemctl restart nginx libvirtd
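At this point the chain that webvirtmgr relies on (the nginx user connecting over SSH to [email protected] and from there to libvirtd) can be tested by hand; a minimal sketch:
[root@rs2 ~]# su - nginx -s /bin/bash
-bash-4.2$ virsh -c qemu+ssh://[email protected]/system list --all
-bash-4.2$ exit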
The first time you access KVM through the web interface, the page may never load and just keep spinning, while the command line keeps reporting the error "too many open files".
In that case nginx needs the following additional configuration.
[root@rs2 ~]# vim /etc/nginx/nginx.conf
pid /run/nginx.pid;
worker_rlimit_nofile 655350;        # add this line
[root@rs2 ~]# vim /etc/security/limits.conf
# End of file
* soft nofile 655350
* hard nofile 655350
[root@rs2 ~]# systemctl restart nginx
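To confirm that the new file-descriptor limit really applies to the nginx processes, you can inspect their limits (a quick sketch):
[root@rs2 ~]# for pid in $(pgrep nginx); do grep 'Max open files' /proc/$pid/limits; done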
The fix for the console is to install noVNC and start a VNC proxy via novnc_server.
[root@rs2 ~]# ll /etc/rc.local
lrwxrwxrwx. 1 root root 13 Aug 6 2018 /etc/rc.local -> rc.d/rc.local
[root@rs2 ~]# ll /etc/rc.d/rc.local
-rw-r--r-- 1 root root 513 Mar 11 22:35 /etc/rc.d/rc.local
[root@rs2 ~]# chmod +x /etc/rc.d/rc.local
[root@rs2 ~]# ll /etc/rc.d/rc.local
-rwxr-xr-x 1 root root 513 Mar 11 22:35 /etc/rc.d/rc.local
[root@rs2 ~]# vim /etc/rc.d/rc.local
......(several lines omitted here)
# that this script will be executed during boot.
touch /var/lock/subsys/local
nohup novnc_server 172.16.12.128:5920 &
[root@rs2 ~]# . /etc/rc.d/rc.local
//Insert an installation disc (ISO) into the virtual machine