oVirt Translation

http://old.ovirt.org/OVirt_Administration_Guide

Contents

· 1 Introduction 

1.1 oVirt Architecture

1.2 oVirt System Components

1.3 oVirt Resources

1.4 oVirt API Support Statement

1.5 Administering and Maintaining the oVirt Environment

· 2 Using the Administration Portal 

2.1 Graphical User Interface Elements

2.2 Tree Mode and Flat Mode

2.3 Using the Guide Me Facility

2.4 Performing Searches in oVirt

2.5 Saving a Query String as a Bookmark

· 3 Data Centers 

3.1 Introduction to Data Centers

3.2 The Storage Pool Manager

3.3 SPM Priority

3.4 Using the Events Tab to Identify Problem Objects in Data Centers

3.5 Data Center Tasks 

§ 3.5.1 Creating a New Data Center

§ 3.5.2 Explanation of Settings in the New Data Center and Edit Data Center Windows

§ 3.5.3 Editing a Resource

§ 3.5.4 Creating a New Logical Network in a Data Center or Cluster

§ 3.5.5 Removing a Logical Network

§ 3.5.6 Re-Initializing a Data Center: Recovery Procedure

§ 3.5.7 Removing a Data Center

§ 3.5.8 Force Removing a Data Center

§ 3.5.9 Changing the Data Center Compatibility Version

3.6 Data Centers and Storage Domains 

§ 3.6.1 Attaching an Existing Data Domain to a Data Center

§ 3.6.2 Attaching an Existing ISO domain to a Data Center

§ 3.6.3 Attaching an Existing Export Domain to a Data Center

§ 3.6.4 Detaching a Storage Domain from a Data Center

§ 3.6.5 Activating a Storage Domain from Maintenance Mode

· 4 Clusters 

4.1 Introduction to Clusters

4.2 Cluster Tasks 

§ 4.2.1 Creating a New Cluster

§ 4.2.2 Explanation of Settings and Controls in the New Cluster and Edit Cluster Windows 

§ 4.2.2.1 General Cluster Settings Explained

§ 4.2.2.2 Optimization Settings Explained

§ 4.2.2.3 Resilience Policy Settings Explained

§ 4.2.2.4 Cluster Policy Settings Explained

§ 4.2.2.5 Cluster Console Settings Explained

§ 4.2.3 Editing a Resource

§ 4.2.4 Setting Load and Power Management Policies for Hosts in a Cluster

§ 4.2.5 Creating a New Logical Network in a Data Center or Cluster

§ 4.2.6 Removing a Cluster

§ 4.2.7 Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

§ 4.2.8 Explanation of Settings in the Manage Networks Window

§ 4.2.9 Changing the Cluster Compatibility Version

· 5 Logical Networks 

5.1 Introduction to Logical Networks

5.2 Port Mirroring

5.3 Required Networks, Optional Networks, and Virtual Machine Networks

5.4 vNIC Profiles and QoS 

§ 5.4.1 vNIC Profile Overview

§ 5.4.2 Creating a vNIC Profile

§ 5.4.3 Assigning Security Groups to vNIC Profiles

§ 5.4.4 Explanation of Settings in the VM Interface Profile Window

§ 5.4.5 Removing a vNIC Profile

§ 5.4.6 User Permissions for vNIC Profiles

§ 5.4.7 QoS Overview

§ 5.4.8 Adding QoS

§ 5.4.9 Settings in the New Network QoS and Edit Network QoS Windows Explained

§ 5.4.10 Removing QoS

5.5 Logical Network Tasks 

§ 5.5.1 Creating a New Logical Network in a Data Center or Cluster

§ 5.5.2 Explanation of Settings and Controls in the New Cluster and Edit Cluster Windows 

§ 5.5.2.1 Logical Network General Settings Explained

§ 5.5.2.2 Logical Network Cluster Settings Explained

§ 5.5.2.3 Logical Network vNIC Profiles Settings Explained

§ 5.5.3 Editing a Logical Network

§ 5.5.4 Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

§ 5.5.5 Explanation of Settings in the Manage Networks Window

§ 5.5.6 Adding Multiple VLANs to a Single Network Interface Using Logical Networks

§ 5.5.7 Network Labels 

§ 5.5.7.1 Adding Network Labels to Host Network Interfaces

§ 5.5.8 Using the Networks Tab

5.6 External Provider Networks 

§ 5.6.1 Importing Networks From External Providers

§ 5.6.2 Limitations to Using External Provider Networks

§ 5.6.3 Subnets on External Provider Logical Networks 

§ 5.6.3.1 Configuring Subnets on External Provider Logical Networks

§ 5.6.3.2 Adding Subnets to External Provider Logical Networks

§ 5.6.3.3 Removing Subnets from External Provider Logical Networks

· 6 Hosts 

6.1 Introduction to oVirt Hosts

6.2 oVirt Node Hosts

6.3 Foreman Host Provider Hosts

6.4 Enterprise Linux Hosts

6.5 Host Tasks 

§ 6.5.1 Adding an Enterprise Linux Host

§ 6.5.2 Approving a Hypervisor

§ 6.5.3 Explanation of Settings and Controls in the New Host and Edit Host Windows 

§ 6.5.3.1 Host General Settings Explained

§ 6.5.3.2 Host Power Management Settings Explained

§ 6.5.3.3 SPM Priority Settings Explained

§ 6.5.3.4 Host Console Settings Explained

§ 6.5.4 Configuring Host Power Management Settings

§ 6.5.5 Configuring Host Storage Pool Manager Settings

§ 6.5.6 Editing a Resource

§ 6.5.7 Approving Newly Added oVirt Node Hosts

§ 6.5.8 Moving a Host to Maintenance Mode

§ 6.5.9 Activating a Host from Maintenance Mode

§ 6.5.10 Removing a Host

§ 6.5.11 Customizing Hosts with Tags

6.6 Hosts and Networking 

§ 6.6.1 Refreshing Host Capabilities

§ 6.6.2 Editing Host Network Interfaces and Assigning Logical Networks to Hosts

§ 6.6.3 Bonds 

§ 6.6.3.1 Bonding Logic in oVirt

§ 6.6.3.2 Bonding Modes

§ 6.6.3.3 Creating a Bond Device Using the Administration Portal

§ 6.6.3.4 Example Uses of Custom Bonding Options with Host Interfaces

§ 6.6.4 Saving a Host Network Configuration

6.7 Host Resilience 

§ 6.7.1 Host High Availability

§ 6.7.2 Power Management by Proxy in oVirt

§ 6.7.3 Setting Fencing Parameters on a Host

§ 6.7.4 Soft-Fencing Hosts

§ 6.7.5 Using Host Power Management Functions

§ 6.7.6 Manually Fencing or Isolating a Non Responsive Host

· 7 Storage 

7.1 Understanding Storage Domains

7.2 Storage Metadata Versions in oVirt

7.3 Preparing and Adding File-Based Storage 

§ 7.3.1 Preparing NFS Storage

§ 7.3.2 Attaching NFS Storage

§ 7.3.3 Preparing Local Storage

§ 7.3.4 Adding Local Storage

7.4 Adding POSIX Compliant File System Storage 

§ 7.4.1 Attaching POSIX Compliant File System Storage

7.5 Preparing and Adding Block Storage 

§ 7.5.1 Preparing iSCSI Storage

§ 7.5.2 Adding iSCSI Storage

§ 7.5.3 Adding FCP Storage

§ 7.5.4 Unusable LUNs in oVirt

7.6 Storage Tasks 

§ 7.6.1 Importing Existing ISO or Export Storage Domains

§ 7.6.2 Populating the ISO Storage Domain

§ 7.6.3 Moving Storage Domains to Maintenance Mode

§ 7.6.4 Editing a Resource

§ 7.6.5 Activating Storage Domains

§ 7.6.6 Removing a Storage Domain

§ 7.6.7 Destroying a Storage Domain

§ 7.6.8 Detaching the Export Domain

§ 7.6.9 Attaching an Export Domain to a Data Center

· 8 Virtual Machines 

8.1 Introduction to Virtual Machines

8.2 Supported Virtual Machine Operating Systems

8.3 Virtual Machine Performance Parameters

8.4 Creating Virtual Machines 

§ 8.4.1 Creating a Virtual Machine

§ 8.4.2 Creating a Virtual Machine Based on a Template

§ 8.4.3 Creating a Cloned Virtual Machine Based on a Template

8.5 Explanation of Settings and Controls in the New Virtual Machine and Edit Virtual Machine Windows 

§ 8.5.1 Virtual Machine General Settings Explained

§ 8.5.2 Virtual Machine System Settings Explained

§ 8.5.3 Virtual Machine Initial Run Settings Explained

§ 8.5.4 Virtual Machine Console Settings Explained

§ 8.5.5 Virtual Machine Host Settings Explained

§ 8.5.6 Virtual Machine High Availability Settings Explained

§ 8.5.7 Virtual Machine Resource Allocation Settings Explained

§ 8.5.8 Virtual Machine Boot Options Settings Explained

§ 8.5.9 Virtual Machine Custom Properties Settings Explained

8.6 Configuring Virtual Machines 

§ 8.6.1 Completing the Configuration of a Virtual Machine by Defining Network Interfaces and Hard Disks

§ 8.6.2 Installing Windows on VirtIO-Optimized Hardware

§ 8.6.3 Virtual Machine Run Once Settings Explained

§ 8.6.4 Configuring a Watchdog 

§ 8.6.4.1 Adding a Watchdog Card to a Virtual Machine

§ 8.6.4.2 Installing a Watchdog

§ 8.6.4.3 Confirming Watchdog Functionality

§ 8.6.4.4 Parameters for Watchdogs in watchdog.conf

8.7 Editing Virtual Machines 

§ 8.7.1 Editing Virtual Machine Properties

§ 8.7.2 Network Interfaces 

§ 8.7.2.1 Adding and Editing Virtual Machine Network Interfaces

§ 8.7.2.2 Editing a Network Interface

§ 8.7.2.3 Removing a Network Interface

§ 8.7.2.4 Explanation of Settings in the Virtual Machine Network Interface Window

§ 8.7.2.5 Hot Plugging Network Interfaces

§ 8.7.2.6 Removing Network Interfaces From Virtual Machines

§ 8.7.3 Virtual Disks 

§ 8.7.3.1 Adding and Editing Virtual Machine Disks

§ 8.7.3.2 Hot Plugging Virtual Machine Disks

§ 8.7.3.3 Removing Virtual Disks From Virtual Machines

§ 8.7.4 Extending the Size of an Online Virtual Disk

§ 8.7.5 Floating Disks

§ 8.7.6 Associating a Virtual Disk with a Virtual Machine

§ 8.7.7 Changing the CD for a Virtual Machine

§ 8.7.8 Smart Card Authentication

§ 8.7.9 Enabling and Disabling Smart cards

8.8 Running Virtual Machines 

§ 8.8.1 Installing Console Components 

§ 8.8.1.1 Console Components

§ 8.8.1.2 Installing Remote Viewer on Linux

§ 8.8.1.3 Installing Remote Viewer for Internet Explorer on Windows

§ 8.8.1.4 Installing Remote Viewer on Windows

§ 8.8.2 Guest Drivers and Agents 

§ 8.8.2.1 Installing Guest Agents and Drivers

§ 8.8.2.2 Automating Guest Additions on Windows Guests with oVirt Application Provisioning Tool (APT)

§ 8.8.2.3 oVirt Guest Drivers and Guest Agents

§ 8.8.3 Accessing Virtual machines 

§ 8.8.3.1 Starting a Virtual Machine

§ 8.8.3.2 Opening a Console to a Virtual Machine

§ 8.8.3.3 Shutting Down a Virtual Machine

§ 8.8.3.4 Pausing a Virtual Machine

§ 8.8.3.5 Rebooting a Virtual Machine

§ 8.8.4 Console Options 

§ 8.8.4.1 Introduction to Connection Protocols

§ 8.8.4.2 Accessing Console Options

§ 8.8.4.3 SPICE Console Options

§ 8.8.4.4 VNC Console Options

§ 8.8.4.5 RDP Console Options

§ 8.8.5 Remote Viewer Options 

§ 8.8.5.1 Remote Viewer Options

§ 8.8.5.2 Remote Viewer Hotkeys

8.9 Removing Virtual Machines 

§ 8.9.1 Removing a Virtual Machine

8.10 Snapshots 

§ 8.10.1 Creating a Snapshot of a Virtual Machine

§ 8.10.2 Using a Snapshot to Restore a Virtual Machine

§ 8.10.3 Creating a Virtual Machine from a Snapshot

§ 8.10.4 Deleting a Snapshot

8.11 Affinity Groups 

§ 8.11.1 Introduction to Virtual Machine Affinity

§ 8.11.2 Creating an Affinity Group

§ 8.11.3 Editing an Affinity Group

§ 8.11.4 Removing an Affinity Group

8.12 Importing and Exporting Virtual Machines 

§ 8.12.1 Exporting and Importing Virtual Machines and Templates

§ 8.12.2 Overview of the Export and Import Process

§ 8.12.3 Graphical Overview for Exporting and Importing Virtual Machines and Templates

§ 8.12.4 Exporting a Virtual Machine to the Export Domain

§ 8.12.5 Importing a Virtual Machine into the Destination Data Center

8.13 Migrating Virtual Machines Between Hosts 

§ 8.13.1 What is Live Migration?

§ 8.13.2 Live Migration Prerequisites

§ 8.13.3 Automatic Virtual Machine Migration

§ 8.13.4 Preventing Automatic Migration of a Virtual Machine

§ 8.13.5 Manually Migrating Virtual Machines

§ 8.13.6 Setting Migration Priority

§ 8.13.7 Canceling Ongoing Virtual Machine Migrations

§ 8.13.8 Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers

8.14 Improving Uptime with Virtual Machine High Availability 

§ 8.14.1 Why Use High Availability?

§ 8.14.2 What is High Availability?

§ 8.14.3 High Availability Considerations

§ 8.14.4 Configuring a Highly Available Virtual Machine

8.15 Other Virtual Machine Tasks 

§ 8.15.1 Enabling SAP monitoring for a virtual machine from the Administration Portal

§ 8.15.2 Configuring Red Hat Enterprise Linux 5.4 or Higher Virtual Machines to use SPICE 

§ 8.15.2.1 Using SPICE on virtual machines running versions of Red Hat Enterprise Linux released prior to 5.4

§ 8.15.2.2 Installing qxl drivers on virtual machines

§ 8.15.2.3 Configuring qxl drivers on virtual machines

§ 8.15.2.4 Configuring a Virtual Machine's Tablet and Mouse to use SPICE

§ 8.15.3 KVM Virtual Machine Timing Management

§ 8.15.4 Monitoring Virtual Machine Login Activity Using the Sessions Tab

· 9 Templates 

9.1 Introduction to Templates

9.2 Template Tasks 

§ 9.2.1 Creating a Template

§ 9.2.2 Explanation of Settings and Controls in the New Template Window

§ 9.2.3 Editing a Template

§ 9.2.4 Deleting a Template

§ 9.2.5 Exporting Templates 

§ 9.2.5.1 Migrating Templates to the Export Domain

§ 9.2.5.2 Copying a Template's Virtual Hard Disk

§ 9.2.6 Importing Templates 

§ 9.2.6.1 Importing a Template into a Data Center

§ 9.2.6.2 Importing a Virtual Disk Image from an OpenStack Image Service as a Template

9.3 Sealing Virtual Machines in Preparation for Deployment as Templates 

§ 9.3.1 Sealing a Linux Virtual Machine for Deployment as a Template 

§ 9.3.1.1 Sealing a Linux Virtual Machine for Deployment as a Template

§ 9.3.1.2 Sealing a Linux Virtual Machine Manually for Deployment as a Template

§ 9.3.1.3 Sealing a Linux Virtual Machine for Deployment as a Template using sys-unconfig

§ 9.3.2 Sealing a Windows Virtual Machine for Deployment as a Template 

§ 9.3.2.1 Considerations when Sealing a Windows Template with Sysprep

§ 9.3.2.2 Sealing a Windows XP Template

§ 9.3.2.3 Sealing a Windows 7 or Windows 2008 Template

§ 9.3.3 Using Cloud-Init to Automate the Configuration of Virtual Machines 

§ 9.3.3.1 Cloud-Init Overview

§ 9.3.3.2 Cloud-Init Use Case Scenarios

§ 9.3.3.3 Installing Cloud-Init

§ 9.3.3.4 Using Cloud-Init to Initialize a Virtual Machine

§ 9.3.3.5 Using Cloud-Init to Prepare a Template

· 10 Pools 

10.1 Introduction to Virtual Machine Pools

10.2 Virtual Machine Pool Tasks 

§ 10.2.1 Creating a Virtual Machine Pool

§ 10.2.2 Explanation of Settings and Controls in the New Pool Window 

§ 10.2.2.1 New Pool General Settings Explained

§ 10.2.2.2 New Pool Pool Settings Explained

§ 10.2.2.3 New Pool and Edit Pool Console Settings Explained

§ 10.2.3 Editing a Virtual Machine Pool

§ 10.2.4 Explanation of Settings and Controls in the Edit Pool Window 

§ 10.2.4.1 Edit Pool General Settings Explained

§ 10.2.5 Prestarting Virtual Machines in a Pool

§ 10.2.6 Adding Virtual Machines to a Virtual Machine Pool

§ 10.2.7 Detaching Virtual Machines from a Virtual Machine Pool

§ 10.2.8 Removing a Virtual Machine Pool

10.3 Trusted Compute Pools 

§ 10.3.1 Creating a Trusted Cluster

§ 10.3.2 Adding a Trusted Host

· 11 Virtual Machine Disks 

11.1 Understanding Virtual Machine Storage

11.2 Understanding Virtual Disks

11.3 Shareable Disks in oVirt

11.4 Read Only Disks in oVirt

11.5 Virtual Disk Tasks 

§ 11.5.1 Creating Floating Virtual Disks

§ 11.5.2 Explanation of Settings in the New Virtual Disk Window

§ 11.5.3 Moving a Virtual Disk

§ 11.5.4 Copying a Virtual Disk

· 12 Backups 

12.1 Backing Up and Restoring oVirt 

§ 12.1.1 Backing up oVirt - Overview

§ 12.1.2 Syntax for the engine-backup Command

§ 12.1.3 Creating a Backup with the engine-backup Command

§ 12.1.4 Restoring a Backup with the engine-backup Command

§ 12.1.5 Restoring a Backup to a Fresh Installation

§ 12.1.6 Restoring a Backup to Overwrite an Existing Installation

§ 12.1.7 Restoring a Backup with Different Credentials

12.2 Manually Backing Up and Restoring oVirt 

§ 12.2.1 Backing Up the Engine Database Using the backup.sh Script

§ 12.2.2 Backing Up Manager Configuration Files

§ 12.2.3 Restoring the Engine Database Using the restore.sh Script

§ 12.2.4 Restoring oVirt Configuration Files

· 13 Users and Roles 

13.1 Introduction to Users

13.2 Directory Users 

§ 13.2.1 Directory Services Support in oVirt

13.3 User Authorization 

§ 13.3.1 User Authorization Model

§ 13.3.2 User Actions

§ 13.3.3 User Permissions

13.4 oVirt User Properties and Roles 

§ 13.4.1 User Properties

§ 13.4.2 User and Administrator Roles

§ 13.4.3 User Roles Explained

§ 13.4.4 Administrator Roles Explained

13.5 oVirt User Tasks 

§ 13.5.1 Adding Users

§ 13.5.2 Viewing User Information

§ 13.5.3 Viewing User Permissions on Resources

§ 13.5.4 Removing Users

§ 13.5.5 Configuring Roles

§ 13.5.6 Creating a New Role

§ 13.5.7 Editing or Copying a Role

13.6 Assigning an Administrator or User Role to a Resource

13.7 Removing an Administrator or User Role from a Resource

13.8 User Role and Authorization Examples

· 14 Quotas and Service Level Agreement Policy 

14.1 Introduction to Quota

14.2 Shared Quota and Individually Defined Quota

14.3 Quota Accounting

14.4 Enabling and Changing a Quota Mode in a Data Center

14.5 Creating a New Quota Policy

14.6 Explanation of Quota Threshold Settings

14.7 Assigning a Quota to an Object

14.8 Using Quota to Limit Resources by User

14.9 Editing Quotas

14.10 Removing Quotas

14.11 Service-level Agreement Policy Enforcement

· 15 Event Notifications 

15.1 Configuring Event Notifications

15.2 Parameters for Event Notifications in ovirt-engine-notifier.conf

15.3 Canceling Event Notifications

· 16 Utilities 

16.1 The Ovirt Engine Rename Tool 

§ 16.1.1 The Ovirt Engine Rename Tool

§ 16.1.2 Syntax for the Ovirt Engine Rename Command

§ 16.1.3 Using the Ovirt Engine Rename Tool

16.2 The Domain Management Tool 

§ 16.2.1 The Domain Management Tool

§ 16.2.2 Syntax for the Domain Management Tool

16.3 The Configuration Tool 

§ 16.3.1 The Configuration Tool

§ 16.3.2 Syntax for engine-config Command

§ 16.3.3 The admin@internal User

§ 16.3.4 Changing the Password for admin@internal

§ 16.3.5 oVirt Configuration Options

16.4 The Image Uploader Tool 

§ 16.4.1 The Image Uploader Tool

§ 16.4.2 Syntax for the engine-image-uploader Command

§ 16.4.3 Creating an OVF Archive That is Compatible With the Image Uploader

§ 16.4.4 Basic engine-image-uploader Usage Examples

16.5 The Log Collector Tool 

§ 16.5.1 Log Collector

§ 16.5.2 Syntax for engine-log-collector Command

§ 16.5.3 Basic Log Collector Usage

16.6 The ISO Uploader Tool 

§ 16.6.1 The ISO Uploader Tool

§ 16.6.2 Syntax for the engine-iso-uploader Command

· 17 Log Files 

17.1 oVirt Installation Log Files

17.2 oVirt Log Files

17.3 oVirt Host Log Files

17.4 Remotely Logging Host Activities 

§ 17.4.1 Setting Up a Virtualization Host Logging Server

§ 17.4.2 Configuring oVirt Node Hosts to Use a Logging Server

· 18 Proxies 

18.1 SPICE Proxy 

§ 18.1.1 SPICE Proxy Overview

§ 18.1.2 SPICE Proxy Machine Setup

§ 18.1.3 Turning on SPICE Proxy

§ 18.1.4 Turning Off a SPICE Proxy

18.2 Squid Proxy 

§ 18.2.1 Installing and Configuring a Squid Proxy

· 19 Firewalls 

19.1 oVirt Firewall Requirements

19.2 Virtualization Host Firewall Requirements

19.3 Directory Server Firewall Requirements

19.4 Database Server Firewall Requirements

· 20 oVirt and SSL 

20.1 Replacing oVirt SSL Certificate

· 21 Authors and Revision History

Introduction

oVirt Architecture

An oVirt environment consists of:

· Virtual machine hosts using the Kernel-based Virtual Machine (KVM).

· Agents and tools running on hosts including VDSM, QEMU, and libvirt. These tools provide local management for virtual machines, networks and storage.

· oVirt: a centralized management platform for the oVirt environment. It provides a graphical interface where you can view, provision, and manage resources.

· Storage domains to hold virtual resources such as virtual machines, templates, and ISO images.

· A database to track the state of and changes to the environment.

· Access to an external Directory Server to provide users and authentication.

· Networking to link the environment together. This includes physical network links, and logical networks.

Figure 1.1. oVirt Platform Overview 

oVirt System Components

The oVirt version 3.4 environment consists of one or more hosts (Red Hat Enterprise Linux 6.5 or later or similar, Fedora 19, or oVirt Node 6.5 or later) and at least one instance of oVirt.

Hosts run virtual machines using KVM (Kernel-based Virtual Machine) virtualization technology.

oVirt runs on a Red Hat Enterprise Linux (or similar) server, as well as Fedora 19, and provides interfaces for controlling the oVirt environment. It manages virtual machine and storage provisioning, connection protocols, user sessions, virtual machine images, and high-availability virtual machines.

oVirt is accessed through the Administration Portal using a web browser.

oVirt Resources

The components of the oVirt environment fall into two categories: physical resources, and logical resources. Physical resources are physical objects, such as host and storage servers. Logical resources are nonphysical groupings and processes, such as logical networks and virtual machine templates.

· Data Center - A data center is the highest-level container for all physical and logical resources within a managed virtual environment. It is a collection of clusters, virtual machines, storage, and networks.

· Clusters - A cluster is a set of physical hosts that are treated as a resource pool for virtual machines. Hosts in a cluster share the same network infrastructure and storage. They form a migration domain within which virtual machines can be moved from host to host.

· Logical Networks - A logical network is a logical representation of a physical network. Logical networks group network traffic and communication between oVirt, hosts, storage, and virtual machines.

· Hosts - A host is a physical server that runs one or more virtual machines. Hosts are grouped into clusters. Virtual machines can be migrated from one host to another within a cluster.

· Storage Pool - The storage pool is a logical entity that contains a standalone image repository of a certain type, either iSCSI, Fibre Channel, NFS, or POSIX. Each storage pool can contain several domains, for storing virtual machine disk images, ISO images, and for the import and export of virtual machine images.

· Virtual Machines - A virtual machine is a virtual desktop or virtual server containing an operating system and a set of applications. Multiple identical virtual machines can be created in a Pool. Virtual machines are created, managed, or deleted by power users and accessed by users.

· Template - A template is a model virtual machine with predefined settings. A virtual machine that is based on a particular template acquires the settings of the template. Using templates is the quickest way of creating a large number of virtual machines in a single step.

· Virtual Machine Pool - A virtual machine pool is a group of identical virtual machines that are available on demand by each group member. Virtual machine pools can be set up for different purposes. For example, one pool can be for the Marketing department, another for Research and Development, and so on.

· Snapshot - A snapshot is a view of a virtual machine's operating system and all its applications at a point in time. It can be used to save the settings of a virtual machine before an upgrade or installing new applications. In case of problems, a snapshot can be used to restore the virtual machine to its original state.

· User Types - oVirt supports multiple levels of administrators and users with distinct levels of permissions. System administrators can manage objects of the physical infrastructure, such as data centers, hosts, and storage. Users access virtual machines available from a virtual machine pool or standalone virtual machines made accessible by an administrator.

· Events and Monitors - Alerts, warnings, and other notices about activities help the administrator to monitor the performance and status of resources.

· Reports - A range of reports either from the reports module based on JasperReports, or from the data warehouse. Preconfigured or ad hoc reports can be generated from the reports module. Users can also generate reports using any query tool that supports SQL from a data warehouse that collects monitoring data for hosts, virtual machines, and storage.

oVirt API Support Statement

oVirt exposes a number of interfaces for interacting with the components of the virtualization environment. These interfaces are in addition to the user interfaces provided by oVirt Administration, User, and Reports Portals. Many of these interfaces are fully supported. Some, however, are supported only for read access.

Supported Interfaces for Read and Write Access 

Direct interaction with these interfaces is supported and encouraged for both read and write access:

Representational State Transfer (REST) API

The REST API exposed by oVirt is a fully supported interface for interacting with oVirt.
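As an illustration, a read-only request to the REST API can be authenticated with HTTP Basic authentication. The sketch below only builds the request with the Python standard library and does not send it; the engine address, user, and password are placeholders, not values from this guide:

```python
# Sketch: build an authenticated GET request for an oVirt REST API
# collection. The engine URL and password below are placeholders.
import base64
import urllib.request

ENGINE = "https://engine.example.com/api"  # placeholder engine address
USER = "admin@internal"
PASSWORD = "password"                      # placeholder credential

def build_request(path):
    """Return a GET request for the given API path with Basic auth headers."""
    req = urllib.request.Request(ENGINE + path)
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/xml")
    return req

req = build_request("/hosts")
print(req.full_url)  # https://engine.example.com/api/hosts
```

Sending the request (for example with `urllib.request.urlopen`) would return an XML representation of the hosts collection; consult the REST API documentation for the resource paths available in your version.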

Software Development Kit (SDK)

The SDK provided by the python-sdk and java-sdk packages is a fully supported interface for interacting with oVirt.

Command Line Shell

The command line shell provided by the ovirt-shell package is a fully supported interface for interacting with oVirt.

VDSM Hooks

The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on oVirt hosts. The use of VDSM Hooks on virtualization hosts running oVirt Node is not currently supported.

Supported Interfaces for Read Access 

Direct interaction with these interfaces is supported and encouraged only for read access. Use of these interfaces for write access is not supported:

oVirt History Database

Read access to oVirt history database using the database views specified in the Administration Guide is supported. Write access is not supported.

Libvirt on Virtualization Hosts

Read access to libvirt using the virsh -r command is a supported method of interacting with virtualization hosts. Write access is not supported.

Unsupported Interfaces 

Direct interaction with these interfaces is not supported:

The vdsClient Command

Use of the vdsClient command to interact with virtualization hosts is not supported.

oVirt Node Console

Console access to oVirt Node outside of the provided text user interface for configuration is not supported.

oVirt Database

Direct access to and manipulation of oVirt database is not supported.

Administering and Maintaining the oVirt Environment

The oVirt environment requires an administrator to keep it running. As an administrator, your tasks include:

· Managing physical and virtual resources such as hosts and virtual machines. This includes upgrading and adding hosts, importing domains, converting virtual machines created on foreign hypervisors, and managing virtual machine pools.

· Monitoring the overall system resources for potential problems such as extreme load on one of the hosts, insufficient memory or disk space, and taking any necessary actions (such as migrating virtual machines to other hosts to lessen the load or freeing resources by shutting down machines).

· Responding to the new requirements of virtual machines (for example, upgrading the operating system or allocating more memory).

· Managing customized object properties using tags.

· Managing searches saved as public bookmarks.

· Managing user setup and setting permission levels.

· Troubleshooting issues for specific users or virtual machines, and for overall system functionality.

· Generating general and specific reports.

Using the Administration Portal

Graphical User Interface Elements

The oVirt Administration Portal consists of contextual panes and menus and can be used in two modes - tree mode, and flat mode. Tree mode allows you to browse the object hierarchy of a data center while flat mode allows you to view all resources across data centers in a single list. The elements of the graphical user interface are shown in the diagram below.

Figure 2.1. Key Graphical User Interface Elements 

Key Graphical User Interface Elements 

· 
Header
The header bar contains the name of the currently logged in user, the Sign Out button, the About button, the Configure button, and the Guide button. The About button shows information on the version of oVirt, the Configure button allows you to configure user roles, and the Guide button provides a shortcut to the book you are reading now.
Search Bar
The search bar allows you to build queries for finding resources such as hosts and clusters in the oVirt environment. Queries can be as simple as a list of all the hosts in the system, or more complex, such as a list of resources that match certain conditions. As you type each part of the search query, you are offered choices to assist you in building the search. The star icon can be used to save the search as a bookmark.

· 
System/Bookmarks/Tags Pane
The system pane displays a navigable hierarchy of the resources in the virtualized environment. Bookmarks are used to save frequently used or complicated searches for repeated use. Bookmarks can be added, edited, or removed. Tags are applied to groups of resources and are used to search for all resources associated with that tag. The System/Bookmarks/Tags Pane can be minimized using the arrow in the upper right corner of the panel.

· 
Resource Tabs
All resources can be managed using their associated tab. Moreover, the Events tab allows you to view events for each resource. The Administration Portal provides the following tabs: Data Centers, Clusters, Hosts, Networks, Storage, Disks, Virtual Machines, Pools, Templates, Volumes, Users, and Events, and a Dashboard tab if you have installed the data warehouse and reports.

· 
Results List
You can perform a task on an individual item, multiple items, or all the items in the results list by selecting the items and clicking the relevant action button. Information on a selected item is displayed in the details pane.
Refresh Rate
The refresh rate drop-down menu at the top of the Results List allows you to set the time, in seconds, between Administration Portal refreshes. To avoid the delay between a user performing an action and the result appearing in the portal, the portal automatically refreshes upon an action or event regardless of the chosen refresh interval. You can set this interval by clicking the refresh symbol in the top right of the portal.

· 
Details Pane
The details pane shows detailed information about a selected item in the results list. If no items are selected, this pane is hidden. If multiple items are selected, the details pane displays information on the first selected item only.
Alerts/Events Pane
Below the Details pane, the Alerts tab lists all high severity events such as errors or warnings. The Events tab shows a list of events for all resources. The Tasks tab lists the currently running tasks. You can view this panel by clicking the maximize/minimize button.

Important: The minimum supported resolution for viewing the Administration Portal in a web browser is 1024x768. The Administration Portal will not render correctly when viewed at a lower resolution.

Note: In oVirt 3.4, the web user interface has been improved to allow the Administration Portal to render correctly at low resolutions or in non-maximized windows. When the resolution is too low or the window too small to hold all menu tabs, you can scroll the tabs left and right, or access a drop-down menu that lists all tabs. The System/Bookmarks/Tags Pane can also be minimized to provide additional space.

Tree Mode and Flat Mode

The Administration Portal provides two different modes for managing your resources: tree mode and flat mode. Tree mode displays resources in a hierarchical view per data center, from the highest level of the data center down to the individual virtual machine. Working in tree mode is highly recommended for most operations.

Figure 2.2. Tree Mode 

Flat mode allows you to search across data centers or storage domains. It does not limit you to viewing the resources of a single hierarchy. For example, you may need to find all virtual machines that are using more than 80% CPU across clusters and data centers, or locate all hosts that have the highest utilization. Flat mode makes this possible. In addition, certain objects, such as Pools and Users, are not in the data center hierarchy and can be accessed only in flat mode.

To access flat mode, click on the System item in the Tree pane on the left side of the screen. You are in flat mode if the Pools and Users resource tabs appear.

Figure 2.3. Flat Mode 

Using the Guide Me Facility

When setting up resources such as data centers and clusters, a number of tasks must be completed in sequence. The context-sensitive Guide Me window prompts for actions that are appropriate to the resource being configured. The Guide Me window can be accessed at any time by clicking the Guide Me button on the resource toolbar.

Figure 2.4. New Data Center Guide Me Window 

Performing Searches in oVirt

The Administration Portal enables the management of thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) in the search bar. Search queries can be saved as bookmarks for future reuse, so you do not have to reenter a search query each time the specific search results are needed.

Note: In versions prior to oVirt 3.4, searches in the Administration Portal were case sensitive. Now, the search bar supports case insensitive searches.
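Syntax-based queries take the form `resource-type: property = value` (for example, `Vms: status = up`). The case-insensitive matching can be modeled with a small filter; this is an illustrative sketch over plain dictionaries, not oVirt's actual query engine:

```python
# Illustrative model of an Administration Portal search such as
# "Vms: status = up" -- a sketch, not oVirt's real query parser.

def search(resources, query):
    """Filter a list of dicts with a 'Type: prop = value' query,
    matching case-insensitively (as in oVirt 3.4 and later)."""
    rtype, _, condition = query.partition(":")
    prop, _, value = condition.partition("=")
    rtype = rtype.strip().lower()
    prop = prop.strip().lower()
    value = value.strip().lower()
    return [r for r in resources
            if r["type"].lower() == rtype
            and str(r.get(prop, "")).lower() == value]

vms = [
    {"type": "vms", "name": "web01", "status": "Up"},
    {"type": "vms", "name": "db01", "status": "Down"},
    {"type": "hosts", "name": "host01", "status": "Up"},
]

print(search(vms, "Vms: status = up"))  # matches web01 only
```

Because both sides of the comparison are lowercased, `Vms: STATUS = UP` and `vms: status = up` return the same results.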

Saving a Query String as a Bookmark

Summary 

A bookmark can be used to remember a search query, and can be shared with other users.

Procedure 2.1. Saving a Query String as a Bookmark 

1. Enter the desired search query in the search bar and perform the search.

2. Click the star-shaped Bookmark button to the right of the search bar to open the New Bookmark window.

Figure 2.5. Bookmark Icon

3. Enter the Name of the bookmark.

4. Edit the Search string field (if applicable).

5. Click OK to save the query as a bookmark and close the window.

6. The search query is saved and displays in the Bookmarks pane.

Result 

You have saved a search query as a bookmark for future reuse. Use the Bookmarks pane to find and select the bookmark.

Data Centers

Introduction to Data Centers

A data center is a logical entity that defines the set of resources used in a specific environment. A data center is considered a container resource, in that it is comprised of logical resources, in the form of clusters and hosts; network resources, in the form of logical networks and physical NICs; and storage resources, in the form of storage domains.

A data center can contain multiple clusters, which can contain multiple hosts; it can have multiple storage domains associated to it; and it can support multiple virtual machines on each of its hosts. An oVirt environment can contain multiple data centers; the data center infrastructure allows you to keep these centers separate.

All data centers are managed from the single Administration Portal.

oVirt creates a default data center during installation. It is recommended that you do not remove the default data center; instead, set up new appropriately named data centers.

The Storage Pool Manager

The Storage Pool Manager (SPM) is a role given to one of the hosts in the data center enabling it to manage the storage domains of the data center. The SPM entity can be run on any host in the data center; oVirt grants the role to one of the hosts. The SPM does not preclude the host from its standard operation; a host running as SPM can still host virtual resources.

The SPM entity controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN). This is an exclusive responsibility: only one host can be the SPM in the data center at one time to ensure metadata integrity.

oVirt ensures that the SPM is always available. oVirt moves the SPM role to a different host if the SPM host encounters problems accessing the storage. When the SPM starts, it ensures that it is the only host granted the role; therefore it will acquire a storage-centric lease. This process can take some time.

SPM Priority

The SPM role uses some of a host's available resources. The SPM priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Critical virtual machines on hosts with low SPM priority will not have to contend with SPM operations for host resources.

You can change a host's SPM priority by editing the host.
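As a rough model of the priority behavior (the actual election and storage-centric leasing are internal to the engine; via the REST API the priority is an integer, where -1 conventionally marks a host that should never become SPM), candidate selection might look like:

```python
# Sketch of SPM candidate selection by priority -- illustrative only;
# the real election is handled internally by oVirt.

def pick_spm_candidate(hosts):
    """Return the eligible host with the highest SPM priority.
    A priority of -1 marks a host that must never hold the role."""
    eligible = [h for h in hosts if h["spm_priority"] >= 0]
    if not eligible:
        return None
    return max(eligible, key=lambda h: h["spm_priority"])

hosts = [
    {"name": "hosta", "spm_priority": 5},
    {"name": "hostb", "spm_priority": 10},
    {"name": "hostc", "spm_priority": -1},  # never SPM
]
print(pick_spm_candidate(hosts)["name"])  # hostb
```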

Using the Events Tab to Identify Problem Objects in Data Centers

The Events tab for a data center displays all events associated with that data center; events include audits, warnings, and errors. The information displayed in the results list will enable you to identify problem objects in your oVirt environment.

The Events results list has two views: Basic and Advanced. Basic view displays the event icon, the time of the event, and the description of the events. Advanced view displays these also and includes, where applicable, the event ID; the associated user, host, virtual machine, template, data center, storage, and cluster; the Gluster volume, and the correlation ID.

Data Center Tasks

Creating a New Data Center

Summary 

This procedure creates a data center in your virtualization environment. The data center requires a functioning cluster, host, and storage domain to operate.

Note: The storage Type can be edited until the first storage domain is added to the data center. Once a storage domain has been added, the storage Type cannot be changed. If you set the Compatibility Version as 3.1, it cannot be changed to 3.0 at a later time; version regression is not allowed.

Procedure 3.1. Creating a New Data Center 

1. Select the Data Centers resource tab to list all data centers in the results list.

2. Click New to open the New Data Center window.

3. Enter the Name and Description of the data center.

4. Select the storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.

5. Click OK to create the data center and open the New Data Center - Guide Me window.

6. The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the data center and clicking the Guide Me button.

Result 

The new data center is added to the virtualization environment. It will remain Uninitialized until a cluster, host, and storage domain are configured for it; use Guide Me to configure these entities.

Explanation of Settings in the New Data Center and Edit Data Center Windows

The table below describes the settings of a data center as displayed in the New Data Center and Edit Data Center windows. Invalid entries are outlined in orange when you click OK, prohibiting the changes being accepted. In addition, field prompts indicate the expected values or range of values.

Table 3.1. Data Center Properties 

Field

Description/Action

Name 

The name of the data center. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Description 

The description of the data center. This field is recommended but not mandatory.

Type 

The storage type. Choose one of the following:

· Shared 

· Local 

The type of data domain dictates the type of the data center and cannot be changed after creation without significant disruption. Multiple types of storage domains (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, though local and shared domains cannot be mixed.

Compatibility Version 

The version of oVirt. Choose one of the following:

· 3.0 

· 3.1 

· 3.2 

· 3.3 

· 3.4 

After upgrading oVirt, the hosts, clusters and data centers may still be in the earlier version. Ensure that you have upgraded all the hosts, then the clusters, before you upgrade the Compatibility Level of the data center.

Quota Mode 

Quota is a resource limitation tool provided with oVirt. Choose one of:

· Disabled: Select if you do not want to implement Quota

· Audit: Select if you want to edit the Quota settings

· Enforced: Select to implement Quota
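The Name constraint in the table above (up to 40 characters; letters, numbers, hyphens, and underscores) can be checked with a short regular expression. A sketch (uniqueness still has to be checked against existing data centers):

```python
import re

# Validates a data center name per the constraints in Table 3.1:
# 1-40 characters drawn from ASCII letters, digits, hyphens,
# and underscores.
NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,40}$")

def is_valid_name(name):
    return bool(NAME_RE.match(name))

print(is_valid_name("Production_DC-01"))  # True
print(is_valid_name("bad name!"))         # False (space and '!')
```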

Editing a Resource

Summary 

Edit the properties of a resource.

Procedure 3.2. Editing a Resource 

1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.

2. Click Edit to open the Edit window.

3. Change the necessary properties and click OK.

Result 

The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

Creating a New Logical Network in a Data Center or Cluster

Summary 

Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 3.3. Creating a New Logical Network in a Data Center or Cluster 

1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.

2. Click the Logical Networks tab of the details pane to list the existing logical networks.

3. From the Data Centers details pane, click New to open the New Logical Network window. From the Clusters details pane, click Add Network to open the New Logical Network window.

4. Enter a Name, Description and Comment for the logical network.

5. In the Export section, select the Create on external provider check box to create the logical network on an external provider. Select the external provider from the External Provider drop-down menu.

6. In the Network Parameters section, select the Enable VLAN tagging, VM network, and Override MTU check boxes to enable these options.

7. Enter a new label or select an existing label for the logical network in the Network Label text field.

8. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.

9. If the Create on external provider check box is selected, the Subnet tab will be visible. From the Subnet tab enter a Name, CIDR and select an IP Version for the subnet that the logical network will provide.

10. From the Profiles tab, add vNIC profiles to the logical network as required.

11. Click OK.
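The CIDR entered on the Subnet tab (step 9) must be a valid network address. As a sketch, Python's standard ipaddress module can sanity-check a value before it is submitted:

```python
import ipaddress

def check_cidr(cidr):
    """Return the normalized network for a valid CIDR such as
    '192.168.10.0/24', or None if the value is not a usable subnet."""
    try:
        return str(ipaddress.ip_network(cidr))
    except ValueError:  # host bits set, bad prefix, malformed address
        return None

print(check_cidr("192.168.10.0/24"))  # 192.168.10.0/24
print(check_cidr("192.168.10.5/24"))  # None -- host bits set
```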

Result 

You have defined a logical network as a resource required by a cluster or clusters in the data center. If you entered a label for the logical network, it will be automatically added to all host network interfaces with that label.

Note: When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.

Removing a Logical Network

Summary 

Remove a logical network from oVirt.

Procedure 3.4. Removing Logical Networks 

1. Use the Data Centers resource tab, tree mode, or the search function to find and select the data center of the logical network in the results list.

2. Click the Logical Networks tab in the details pane to list the logical networks in the data center.

3. Select a logical network and click Remove to open the Remove Logical Network(s) window.

4. Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from oVirt and from the external provider if the network is provided by an external provider.

5. Click OK.

Result 

The logical network is removed from oVirt and is no longer available. If the logical network was provided by an external provider and you elected to remove the logical network from that external provider, it is removed from the external provider and is no longer available on that external provider as well.

Re-Initializing a Data Center: Recovery Procedure

Summary 

This recovery procedure replaces the master data domain of your data center with a new master data domain; necessary in the event of data corruption of your master data domain. Re-initializing a data center allows you to restore all other resources associated with the data center, including clusters, hosts, and non-problematic storage domains.

You can import any backup or exported virtual machines or templates into your new master data domain.

Procedure 3.5. Re-Initializing a Data Center 

1. Click the Data Centers resource tab and select the data center to re-initialize.

2. Ensure that any storage domains attached to the data center are in maintenance mode.

3. Right-click the data center and select Re-Initialize Data Center from the drop-down menu to open the Data Center Re-Initialize window.

4. The Data Center Re-Initialize window lists all available (detached; in maintenance mode) storage domains. Click the radio button for the storage domain you are adding to the data center.

5. Select the Approve operation check box.

6. Click OK to close the window and re-initialize the data center.

Result 

The storage domain is attached to the data center as the master data domain and activated. You can now import any backup or exported virtual machines or templates into your new master data domain.

Removing a Data Center

Summary 

An active host is required to remove a data center. Removing a data center will not remove the associated resources.

Procedure 3.6. Removing a Data Center 

1. Ensure the storage domains attached to the data center are in maintenance mode.

2. Click the Data Centers resource tab and select the data center to remove.

3. Click Remove to open the Remove Data Center(s) confirmation window.

4. Click OK.

Result 

The data center has been removed.

Force Removing a Data Center

Summary 

A data center becomes Non Responsive if the attached storage domain is corrupt or if the host becomes Non Responsive. You cannot Remove the data center under either circumstance.

Force Remove does not require an active host. It also permanently removes the attached storage domain.

It may be necessary to Destroy a corrupted storage domain before you can Force Remove the data center.

Procedure 3.7. Force Removing a Data Center 

1. Click the Data Centers resource tab and select the data center to remove.

2. Click Force Remove to open the Force Remove Data Center confirmation window.

3. Select the Approve operation check box.

4. Click OK.

Result 

The data center and attached storage domain are permanently removed from the oVirt environment.

Changing the Data Center Compatibility Version

Summary 

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt that the data center is intended to be compatible with. All clusters in the data center must support the desired compatibility level.

Note: To change the data center compatibility version, you must have first updated all the clusters in your data center to a level that supports your desired compatibility level.
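The precondition in the note above amounts to a simple comparison: every cluster's compatibility version must be at or above the desired data center version. A sketch, comparing versions as numeric tuples so that, for example, a hypothetical "3.10" would sort after "3.4":

```python
# Sketch: verify all clusters support the desired data center
# compatibility level before changing it. Illustrative only.

def as_tuple(version):
    return tuple(int(part) for part in version.split("."))

def can_change_dc_version(cluster_versions, desired):
    """True if every cluster is at or above the desired level."""
    return all(as_tuple(v) >= as_tuple(desired) for v in cluster_versions)

print(can_change_dc_version(["3.4", "3.4"], "3.4"))  # True
print(can_change_dc_version(["3.3", "3.4"], "3.4"))  # False
```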

Procedure 3.8. Changing the Data Center Compatibility Version 

1. Click the Data Centers resource tab.

2. Select the data center to change from the list displayed. If the list of data centers is too long to filter visually, then perform a search to locate the desired data center.

3. Click the Edit button.

4. Change the Compatibility Version to the desired value.

5. Click OK.

Result 

You have updated the compatibility version of the data center.

Warning: Upgrading the compatibility will also upgrade all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.

Data Centers and Storage Domains

Attaching an Existing Data Domain to a Data Center

Summary 

Data domains that are Unattached can be attached to a data center. The data domain must be of the same Storage Type as the data center.

Procedure 3.9. Attaching an Existing Data Domain to a Data Center 

1. Click the Data Centers resource tab and select the appropriate data center.

2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.

3. Click Attach Data to open the Attach Storage window.

4. Select the check box for the data domain to attach to the data center. You can select multiple check boxes to attach multiple data domains.

5. Click OK.

Result 

The data domain is attached to the data center and is automatically activated.

Note: In oVirt 3.4, shared storage domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center.

Attaching an Existing ISO domain to a Data Center

Summary 

An ISO domain that is Unattached can be attached to a data center. The ISO domain must be of the same Storage Type as the data center. Only one ISO domain can be attached to a data center.


Procedure 3.10. Attaching an Existing ISO Domain to a Data Center 

1. Click the Data Centers resource tab and select the appropriate data center.

2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.

3. Click Attach ISO to open the Attach ISO Library window.

4. Click the radio button for the appropriate ISO domain.

5. Click OK.

Result 

The ISO domain is attached to the data center and is automatically activated.

Attaching an Existing Export Domain to a Data Center

Summary 

An export domain that is Unattached can be attached to a data center.

Only one export domain can be attached to a data center.

Procedure 3.11. Attaching an Existing Export Domain to a Data Center 

1. Click the Data Centers resource tab and select the appropriate data center.

2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.

3. Click Attach Export to open the Attach Export Domain window.

4. Click the radio button for the appropriate Export domain.

5. Click OK.

Result 

The Export domain is attached to the data center and is automatically activated.

Detaching a Storage Domain from a Data Center

Summary 

Detaching a storage domain from a data center will stop the data center from associating with that storage domain. The storage domain is not removed from the oVirt environment; it can be attached to another data center.

Data, such as virtual machines and templates, remains attached to the storage domain.

Note: The master storage, if it is the last available storage domain, cannot be removed.

Procedure 3.12. Detaching a Storage Domain from a Data Center 

1. Click the Data Centers resource tab and select the appropriate data center.

2. Select the Storage tab in the details pane to list the storage domains attached to the data center.

3. Select the storage domain to detach. If the storage domain is Active, click Maintenance to open the Maintenance Storage Domain(s) confirmation window.

4. Click OK to initiate maintenance mode.

5. Click Detach to open the Detach Storage confirmation window.

6. Click OK.

Result 

You have detached the storage domain from the data center. It can take up to several minutes for the storage domain to disappear from the details pane.

Activating a Storage Domain from Maintenance Mode

Summary 

Storage domains in maintenance mode must be activated to be used.

Procedure 3.13. Activating a Data Domain from Maintenance Mode 

1. Click the Data Centers resource tab and select the appropriate data center.

2. Select the Storage tab in the details pane to list the storage domains attached to the data center.

3. Select the appropriate storage domain and click Activate.

Result 

The storage domain is activated and can be used in the data center.

Clusters

Introduction to Clusters

A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models.

Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the Clusters tab and in the Configuration tool during runtime. The cluster is the highest level at which power and load-sharing policies can be defined.

Clusters run virtual machines or Red Hat Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together.

The oVirt platform installs a default cluster in the default data center during the installation process.

Cluster Tasks

Creating a New Cluster

Summary 

A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must be of the same CPU type (Intel or AMD). It is recommended that you create your hosts before you create your cluster to ensure CPU type optimization. However, you can configure the hosts at a later time using the Guide Me button.

Procedure 4.1. Creating a New Cluster 

1. Select the Clusters resource tab.

2. Click New to open the New Cluster window.

3. Select the Data Center the cluster will belong to from the drop-down list.

4. Enter the Name and Description of the cluster.

5. Select the CPU Name and Compatibility Version from the drop-down lists. It is important to match the CPU processor family with the minimum CPU processor type of the hosts you intend to attach to the cluster, otherwise the host will be non-operational.

6. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling, memory ballooning, and KSM control on the hosts in the cluster.

7. Click the Cluster Policy tab to optionally configure a cluster policy, scheduler optimization settings, enable trusted service for hosts in the cluster, and enable HA Reservation.

8. Click the Resilience Policy tab to select the virtual machine migration policy.

9. Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster.

10. Click OK to create the cluster and open the New Cluster - Guide Me window.

11. The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the cluster and clicking the Guide Me button.

Result 

The new cluster is added to the virtualization environment.

Explanation of Settings and Controls in the New Cluster and Edit Cluster Windows

General Cluster Settings Explained

Figure 4.1. New Cluster window 

The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK, prohibiting the changes being accepted. In addition, field prompts indicate the expected values or range of values.

Table 4.1. General Cluster Settings 

Field

Description/Action

Data Center 

The data center that will contain the cluster.

Name 

The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Description 

The description of the cluster. This field is recommended but not mandatory.

CPU Name 

The CPU type of the cluster. Choose one of:

· Intel Conroe Family

· Intel Penryn Family

· Intel Nehalem Family

· Intel Westmere Family

· Intel SandyBridge Family

· Intel Haswell Family

· AMD Opteron G1

· AMD Opteron G2

· AMD Opteron G3

· AMD Opteron G4

· AMD Opteron G5

· IBM POWER 7 v2.0

· IBM POWER 7 v2.1

· IBM POWER 7 v2.3

· IBM POWER 7+ v2.1

· IBM POWER 8 v1.0

All hosts in a cluster must run the same CPU type (Intel or AMD); this cannot be changed after creation without significant disruption. The CPU type should be set for the least powerful host. For example: an Intel SandyBridge host can attach to an Intel Penryn cluster; an Intel Conroe host cannot. Hosts with different CPU models will only use features present in all models.

Compatibility Version 

The version of oVirt. Choose one of:

· 3.0

· 3.1

· 3.2

· 3.3

· 3.4

You will not be able to select a version older than the version specified for the data center.

CPU Architecture 

The architecture of the cluster. Choose either:

· x86_64

· ppc64

Optimization Settings Explained

Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your oVirt environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine.

CPU Thread Handling allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host. This is useful for non-CPU-intensive workloads, where allowing a greater number of virtual machines to run can reduce hardware requirements. It also allows virtual machines to run with CPU topologies that would otherwise not be possible, specifically when the number of guest cores is between the number of host cores and the number of host threads.
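The topology condition described above can be sketched as a simple capacity check; `fits` is a hypothetical helper for illustration, not an oVirt API:

```python
# Sketch: whether a guest with `guest_cores` virtual cores fits on a
# host, with and without "Count Threads As Cores". Illustrative only.

def fits(guest_cores, host_cores, host_threads, count_threads_as_cores):
    capacity = host_threads if count_threads_as_cores else host_cores
    return guest_cores <= capacity

# A 24-core host with 2 threads per core exposes 48 threads:
print(fits(36, 24, 48, False))  # False -- more guest cores than host cores
print(fits(36, 24, 48, True))   # True  -- threads counted as cores
```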

The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows.

Table 4.2. Optimization Settings 

Field

Description/Action

Memory Optimization 

* None - Disable memory page sharing: Disables memory page sharing.

· For Server Load - Enable memory page sharing to 150%: Sets the memory page sharing threshold to 150% of the system memory on each host.

· For Desktop Load - Enable memory page sharing to 200%: Sets the memory page sharing threshold to 200% of the system memory on each host.

CPU Threads 

Selecting the Count Threads As Cores check box allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host.

The exposed host threads would be treated as cores which can be utilized by virtual machines. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores.

Memory Balloon 

Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this option is set, the Memory Overcommit Manager (MoM) will start ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine.

To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine in cluster level 3.2 and higher includes a balloon device, unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up. It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution.

KSM control 

Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost.
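The Memory Optimization levels in the table translate to a per-host threshold. A sketch of the arithmetic, assuming the percentage applies to the total physical memory of each host:

```python
# Sketch: memory page sharing threshold per optimization level.
# Percentages follow Table 4.2; treating "None" as 100% (no
# overcommitment) is an assumption made for illustration.

LEVELS = {"none": 100, "server": 150, "desktop": 200}

def sharing_threshold_mb(host_memory_mb, level):
    return host_memory_mb * LEVELS[level] // 100

print(sharing_threshold_mb(65536, "server"))   # 98304
print(sharing_threshold_mb(65536, "desktop"))  # 131072
```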

Resilience Policy Settings Explained

The resilience policy sets the virtual machine migration policy in the event of host failure. Virtual machines running on a host that unexpectedly shuts down or is put into maintenance mode are migrated to other hosts in the cluster; this migration is dependent upon your cluster policy.

Note: Virtual machine migration is a network-intensive operation. For instance, on a setup where a host is running ten or more virtual machines, migrating all of them can be a long and resource-consuming process. Therefore, select the policy action to best suit your setup. If you prefer a conservative approach, disable all migration of virtual machines. Alternatively, if you have many virtual machines, but only several which are running critical workloads, select the option to migrate only highly available virtual machines.

The table below describes the settings for the Resilience Policy tab in the New Cluster and Edit Cluster windows.

Table 4.3. Resilience Policy Settings 

Field

Description/Action

Migrate Virtual Machines 

Migrates all virtual machines in order of their defined priority.

Migrate only Highly Available Virtual Machines 

Migrates only highly available virtual machines to prevent overloading other hosts.

Do Not Migrate Virtual Machines 

Prevents virtual machines from being migrated.

Cluster Policy Settings Explained

Cluster policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the cluster policy to enable automatic load balancing across the hosts in a cluster.

Figure 4.2. Cluster Policy Settings: Power_Saving 

Figure 4.3. Cluster Policy Settings: VM_Evenly_Distributed 

The table below describes the settings for the Edit Policy window.

Table 4.4. Cluster Policy Tab Properties 

Field/Tab

Description/Action

None 

Set the policy value to None to have no load or power sharing between hosts. This is the default mode.

Evenly_Distributed 

Distributes the CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined Maximum Service Level.

Power_Saving 

Distributes the CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that they can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value.

VM_Evenly_Distributed 

Distributes virtual machines evenly between hosts based on a count of the virtual machines.

· HighVmCount: Sets the maximum number of virtual machines that can run on each host. Exceeding this limit qualifies the host as overloaded.

· MigrationThreshold: Defines a buffer before virtual machines are migrated from the host. It is the maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside the migration threshold.

· SpmVmGrace: Defines the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines than other hosts it can run.

The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold.

CpuOverCommitDurationMinutes 

Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the cluster policy takes action. The defined time interval protects against temporary spikes in CPU load activating cluster policies and instigating unnecessary virtual machine migration. The field accepts a maximum of two characters.

HighUtilization 

Expressed as a percentage. If the host runs with CPU usage at or above the high utilization value for the defined time interval, oVirt migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold.

LowUtilization 

Expressed as a percentage. If the host runs with CPU usage below the low utilization value for the defined time interval, oVirt will migrate virtual machines to other hosts in the cluster. oVirt will power down the original host machine, and restart it when load balancing requires it or when there are not enough free hosts in the cluster.

Scheduler Optimization 

Optimize scheduling for host weighting/ordering.

· Optimize for Utilization: Includes weight modules in scheduling to allow best selection.

· Optimize for Speed: Skips host weighting in cases where there are more than ten pending requests.

Enable Trusted Service 

Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details.

Enable HA Reservation 

Enable oVirt to monitor cluster capacity for highly available virtual machines. oVirt ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly.

When a host's free memory drops below 20%, ballooning commands (for example, mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580) are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file.
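The VM_Evenly_Distributed balance test described in the table can be sketched as a small Python helper. This is a hypothetical illustration of the stated rule, not the engine's actual scheduler code; in particular, weighting the SPM host's count upward by SpmVmGrace is a simplification of how the reserved slots are accounted for.

```python
def cluster_is_unbalanced(vm_counts, spm_host, high_vm_count,
                          migration_threshold, spm_vm_grace):
    """Evaluate the VM_Evenly_Distributed rule described above.

    vm_counts maps host name -> number of running virtual machines.
    The SPM host reserves spm_vm_grace slots, so its count is weighted
    upward before comparing hosts (a simplification of the real policy).
    """
    # A host exceeding HighVmCount qualifies as overloaded.
    overloaded = any(count > high_vm_count for count in vm_counts.values())
    # Effective counts: the SPM host is expected to run fewer VMs.
    effective = {host: count + (spm_vm_grace if host == spm_host else 0)
                 for host, count in vm_counts.items()}
    # The cluster is balanced when every host falls inside the
    # migration threshold (maximum inclusive difference in VM count).
    spread = max(effective.values()) - min(effective.values())
    return overloaded and spread > migration_threshold
```

Note that both conditions must hold: a host over HighVmCount does not make the cluster unbalanced if every count still falls inside the MigrationThreshold.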

Cluster Console Settings Explained

The following table details the information required on the Console tab of the New Cluster or Edit Cluster window.

Table 4.5. Console settings 

Field Name

Description

Define SPICE Proxy for Cluster 

Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the User Portal) is outside of the network where the hypervisors reside.

Overridden SPICE proxy address 

The proxy by which the SPICE client will connect to virtual machines. The address must be in the following format:

protocol://[host]:[port]
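The address format above can be checked with a short parser. This is a hypothetical validator written for illustration; the regular expression reflects only the protocol://[host]:[port] shape stated in the table, not any additional forms the SPICE client may accept.

```python
import re

# Hypothetical validator for the protocol://[host]:[port] format above.
SPICE_PROXY_RE = re.compile(
    r"^(?P<protocol>[a-z][a-z0-9+.-]*)://(?P<host>[^:/\s]+):(?P<port>\d{1,5})$")

def parse_spice_proxy(address):
    """Split a SPICE proxy address into (protocol, host, port)."""
    match = SPICE_PROXY_RE.match(address)
    if match is None:
        raise ValueError("expected an address of the form protocol://[host]:[port]")
    return match.group("protocol"), match.group("host"), int(match.group("port"))
```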

Editing a Resource

Summary 

Edit the properties of a resource.

Procedure 4.2. Editing a Resource 

1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.

2. Click Edit to open the Edit window.

3. Change the necessary properties and click OK.

Result 

The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

Setting Load and Power Management Policies for Hosts in a Cluster

Summary 

Cluster policies allow you to specify acceptable CPU usage values, both high and low, and what happens when those levels are reached. Define the cluster policy to enable automatic load balancing across the hosts in a cluster.

A host with CPU usage that exceeds the HighUtilization value will reduce its CPU processor load by migrating virtual machines to other hosts.

A host with CPU usage below its LowUtilization value will migrate all of its virtual machines to other hosts so it can be powered down until such time as it is required again.
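The two rules above, together with the CpuOverCommitDurationMinutes guard described earlier, can be sketched as a per-host decision helper. This is a hypothetical illustration of the Power_Saving behavior, not oVirt's scheduler; the action names are invented for readability.

```python
def host_policy_action(cpu_pct, minutes_sustained,
                       low_utilization, high_utilization,
                       cpu_overcommit_duration_minutes):
    """Sketch of the Power_Saving decision described above.

    Thresholds are percentages; the duration guard models
    CpuOverCommitDurationMinutes (spike protection).
    """
    if minutes_sustained < cpu_overcommit_duration_minutes:
        return "no action"  # ignore temporary spikes in CPU load
    if cpu_pct >= high_utilization:
        return "migrate VMs away"  # shed load to other hosts
    if cpu_pct < low_utilization:
        return "evacuate and power down"  # host is under-utilized
    return "no action"
```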

Procedure 4.3. Setting Load and Power Management Policies for Hosts 

1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.

2. Click the Edit button to open the Edit Cluster window.

Figure 4.4. Edit Cluster Policy

3. Select one of the following policies:

None

VM_Evenly_Distributed - Enter the maximum number of virtual machines that can run on each host in the HighVmCount field.

Evenly_Distributed - Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization text field.

Power_Saving - Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization text field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization text field.

4. Specify the time interval in minutes at which the selected policy will be triggered in the CpuOverCommitDurationMinutes text field.

5. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool, select the Enable Trusted Service check box.

6. Click OK.

Result 

You have updated the cluster policy for the cluster.

Creating a New Logical Network in a Data Center or Cluster

Summary 

Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 4.4. Creating a New Logical Network in a Data Center or Cluster 

1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.

2. Click the Logical Networks tab of the details pane to list the existing logical networks.

3. From the Data Centers details pane, click New to open the New Logical Network window. From the Clusters details pane, click Add Network to open the New Logical Network window.

4. Enter a Name, Description and Comment for the logical network.

5. In the Export section, select the Create on external provider check box to create the logical network on an external provider. Select the external provider from the External Provider drop-down menu.

6. In the Network Parameters section, select the Enable VLAN tagging, VM network and Override MTU to enable these options.

7. Enter a new label or select an existing label for the logical network in the Network Label text field.

8. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.

9. If the Create on external provider check box is selected, the Subnet tab will be visible. From the Subnet tab enter a Name, CIDR and select an IP Version for the subnet that the logical network will provide.

10. From the Profiles tab, add vNIC profiles to the logical network as required.

11. Click OK.

Result 

You have defined a logical network as a resource required by a cluster or clusters in the data center. If you entered a label for the logical network, it will be automatically added to all host network interfaces with that label.

Note: When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.

Removing a Cluster

Summary 

Move all hosts out of a cluster before removing it.

Note: You cannot remove the Default cluster, as it holds the Blank template. You can however rename the Default cluster and add it to a new data center.

Procedure 4.5. Removing a Cluster 

1. Use the resource tabs, tree mode, or the search function to find and select the cluster to be removed in the results list.

2. Ensure there are no hosts in the cluster.

3. Click Remove to open the Remove Cluster(s) confirmation window.

4. Click OK 

Result 

The cluster is removed.

Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Summary 

Specify the traffic type for the logical network to optimize the network traffic flow.

Procedure 4.6. Assigning or Unassigning a Logical Network to a Cluster 

1. Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.

2. Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.

3. Click Manage Networks to open the Manage Networks window.

Figure 4.5. Manage Networks

4. Select the appropriate check boxes.

5. Click OK to save the changes and close the window.

Result 

You have optimized the network traffic flow by assigning a specific type of traffic to be carried on a specific logical network.

Note: Networks offered by external providers cannot be used as display networks.

Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.

Table 4.6. Manage Networks Settings 

Field

Description/Action

Assign 

Assigns the logical network to all hosts in the cluster.

Required 

A Network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational.

VM Network 

A logical network marked "VM Network" carries network traffic relevant to the virtual machine network.

Display Network 

A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller.

Migration Network 

A logical network marked "Migration Network" carries virtual machine and storage migration traffic.

Changing the Cluster Compatibility Version

Summary 

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Note: To change the cluster compatibility version, you must have first updated all the hosts in your cluster to a level that supports your desired compatibility level.
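As a minimal sketch, the "least capable host" rule in the summary above amounts to taking the minimum across hosts. The helper below is hypothetical, with compatibility levels modeled as (major, minor) tuples for illustration.

```python
def cluster_compatibility(host_levels):
    """The cluster compatibility version is capped by the least capable
    host, per the summary above. Levels are (major, minor) tuples, which
    Python compares element-wise, so min() picks the lowest version."""
    return min(host_levels)
```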

Procedure 4.7. Changing the Cluster Compatibility Version 

1. Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.

2. Click the Edit link. The Edit Cluster window will open.

3. Change the Compatibility Version to the desired value.

4. Click OK to open the Change Cluster Compatibility Version confirmation window.

5. Click OK to confirm.

Result 

You have updated the compatibility version of the cluster. Once you have updated the compatibility version of all clusters in a data center, you can then change the compatibility version of the data center itself.

Warning: Upgrading the compatibility will also upgrade all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.

Logical Networks

Introduction to Logical Networks

A logical network is a named set of global network connectivity properties in your data center. When a logical network is added to a host, it may be further configured with host-specific network parameters. Logical networks optimize network flow by grouping network traffic by usage, type, and requirements.

Logical networks allow both connectivity and segregation. You can create a logical network for storage communication to optimize network traffic between hosts and storage domains, a logical network specifically for all virtual machine traffic, or multiple logical networks to carry the traffic of groups of virtual machines.

The default logical network in all data centers is the management network, called ovirtmgmt. The ovirtmgmt network carries all traffic until another logical network is created. It is intended specifically for management communication between the oVirt engine and hosts.

A logical network is a data center level resource; creating one in a data center makes it available to the clusters in that data center. A logical network that has been designated as Required must be configured on all of a cluster's hosts before it is operational. Optional networks can be used by any host they have been added to.

Warning: Do not change networking in a data center or a cluster if any hosts are running as this risks making the host unreachable.

Important: If you plan to use oVirt nodes to provide any services, remember that the services will stop if the oVirt environment stops operating.

This applies to all services, but you should be especially aware of the hazards of running the following on oVirt:

· Directory Services

· DNS

· Storage

Port Mirroring

Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network.

The only traffic copied is internal to one logical network on one host. There is no increase in traffic on the network external to the host; however, a virtual machine with port mirroring enabled uses more host CPU and RAM than other virtual machines.

Enable and disable port mirroring by editing network interfaces on virtual machines.

Port mirroring requires an IPv4 address.

Hotplugging profiles with port mirroring is not supported.

As of oVirt 3.4, port mirroring has been included in vNIC profiles. Port mirroring cannot be altered when the vNIC profile associated with port mirroring is attached to a virtual machine. To use port mirroring, create a dedicated vNIC profile that has port mirroring enabled.


Important: Enabling port mirroring reduces the privacy of other network users.

Required Networks, Optional Networks, and Virtual Machine Networks

oVirt 3.1 and higher distinguishes between required networks and optional networks.

Required networks must be applied to all hosts in a cluster for the cluster and network to be Operational. Logical networks are added to clusters as Required networks by default.

When a required network becomes non-operational, the virtual machines running on the network are fenced and migrated to another host. This is beneficial if you have machines running mission critical workloads.

When a non-required network becomes non-operational, the virtual machines running on the network are not migrated to another host. This prevents unnecessary I/O overload caused by mass migrations.

Optional networks are those logical networks that have not been explicitly declared Required networks. Optional networks can be implemented on only the hosts that use them. The presence or absence of these networks does not affect the Operational status of a host.
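The distinction above can be sketched as a small helper: only required networks feed into a host's Operational status. This is a hypothetical illustration of the stated rule, not engine code, and the status strings are invented for readability.

```python
def host_network_status(networks):
    """Sketch of the rule above: a non-operational required network makes
    the host Non-Operational; optional networks never affect the status.

    networks is a list of (name, required, operational) tuples.
    """
    for name, required, operational in networks:
        if required and not operational:
            return "Non-Operational"
    return "Up"
```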

Use the Manage Networks button to change a network's Required designation.

Virtual machine networks (called a VM network in the user interface) are logical networks designated to carry only virtual machine network traffic. Virtual machine networks can be required or optional.

Note: A virtual machine with a network interface on an optional virtual machine network will not start on a host without the network.

vNIC Profiles and QoS

vNIC Profile Overview

A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in oVirt. vNIC profiles allow you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. vNIC profiles offer an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.

Note: Starting with oVirt 3.3, virtual machines now access logical networks only through vNIC profiles and cannot access a logical network if no vNIC profiles exist for that logical network. When you create a new logical network in oVirt, a vNIC profile of the same name as the logical network is automatically created under that logical network.

Creating a vNIC Profile

Summary 

Create a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups.


Procedure 5.1. Creating a vNIC Profile 

1. Use the Networks resource tab, tree mode, or the search function to select a logical network in the results pane.

2. Select the Profiles tab in the details pane to display available vNIC profiles. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results list.

3. Click New to open the VM Interface Profile window.

Figure 5.1. The VM Interface Profile window

4. Enter the Name and Description of the profile.

5. Use the QoS drop-down menu to select the relevant Quality of Service policy to apply to the vNIC profile.

6. Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.

7. Use the custom properties drop-down menu, which displays Please select a key... by default, to select a custom property and use the + and - buttons to add additional custom properties or remove existing custom properties.

8. Click OK to save the profile and close the window.

Result 

You have created a vNIC profile. Apply this profile to users and groups to regulate their network bandwidth.

Assigning Security Groups to vNIC Profiles

Summary 

You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Linux Bridge or Open vSwitch plug-ins. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile.

Procedure 5.2. Assigning Security Groups to vNIC Profiles 

1. Click the Networks tab and select a logical network in the results list.

2. Click the vNIC Profiles tab in the details pane.

3. Click New or select an existing vNIC profile and click Edit to open the VM Interface Profile window.

4. From the custom properties drop-down menu, select SecurityGroups.

5. In the text field to the right of the custom properties drop-down menu, enter the ID of the security group to attach to the vNIC profile.

6. Click OK.

Note: A security group is identified using the ID of that security group as registered in the OpenStack Networking instance. You can find the IDs of security groups for a given tenant by running the following command on the system on which OpenStack Networking is installed:

# neutron security-group-list

Result 

You have attached a security group to the vNIC profile and all traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group.

Explanation of Settings in the VM Interface Profile Window

Table 5.1. VM Interface Profile Window 

Field Name

Description

Network 

A drop-down menu of the available networks to which the vNIC profile can be applied.

Name 

The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters.

Description 

The description of the vNIC profile. This field is recommended but not mandatory.

QoS 

A drop-down menu of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC.

Port Mirroring 

A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default.

Device Custom Properties 

A drop-down menu to select available custom properties to apply to the VNIC profile. Use the + and - buttons to add and remove properties respectively.

Allow all users to use this Profile 

A check box to toggle the availability of the profile to all users in the environment. It is selected by default.
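The Name constraint in the table above (1 to 50 characters drawn from letters, numbers, hyphens, and underscores) can be checked with a one-line regular expression. This is a hypothetical validator for illustration; it is not part of oVirt.

```python
import re

# 1-50 characters: uppercase/lowercase letters, digits, hyphens, underscores.
VNIC_PROFILE_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,50}$")

def is_valid_vnic_profile_name(name):
    """Check the vNIC profile Name constraint from Table 5.1."""
    return VNIC_PROFILE_NAME_RE.match(name) is not None
```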

Removing a vNIC Profile

Summary 

Remove a vNIC profile to delete it from your virtualized environment.

Procedure 5.3. Removing a vNIC Profile 

1. Use the Networks resource tab, tree mode, or the search function to select a logical network in the results pane.

2. Select the Profiles tab in the details pane to display available vNIC profiles. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results list.

3. Select one or more profiles and click Remove to open the Remove VM Interface Profile(s) window.

4. Click OK to remove the profile and close the window.

Result 

You have removed the vNIC profile.

User Permissions for vNIC Profiles

Summary 

Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.

Procedure 5.4. User Permissions for vNIC Profiles 

1. Use tree mode to select a logical network.

2. Select the vNIC Profiles resource tab to display the VNIC profiles.

3. Select the Permissions tab in the details pane to show the current user permissions for the profile.

4. Use the Add button to open the Add Permission to User window, and the Remove button to open the Remove Permission window, to affect user permissions for the vNIC profile.

Result 

You have configured user permissions for a VNIC profile.

QoS Overview

Network QoS is a feature that allows you to create profiles for limiting both the inbound and outbound traffic of individual virtual NICs. With this feature, you can limit bandwidth in a number of layers, controlling the consumption of network resources.

Important: Network QoS is only supported on cluster compatibility version 3.3 and higher.

Adding QoS

Summary 

Create a QoS profile to regulate network traffic when applied to a vNIC profile, also known as VM Interface profile.

Procedure 5.5. Creating a QoS profile 

1. Use the Data Centers resource tab, tree mode, or the search function to display and select a data center in the results list.

2. Select the Network QoS tab in the details pane to display the available QoS profiles.

3. Click New to open the New Network QoS window.

4. Enter the Name of the profile.

5. Enter the limits for the Inbound and Outbound network traffic.

6. Click OK to save the changes and close the window.

Result

You have created a QoS Profile that can be used in a vNIC profile, also known as VM Interface profile.

Settings in the New Network QoS and Edit Network QoS Windows Explained

Network QoS settings allow you to configure bandwidth limits for both inbound and outbound traffic on three distinct levels.

Table 5.2. Network QoS Settings 

Field Name

Description

Data Center 

The data center to which the Network QoS policy is to be added. This field is configured automatically according to the selected data center.

Name 

A name to represent the network QoS policy within oVirt.

Inbound 

The settings to be applied to inbound traffic. Select or clear the Inbound check box to enable or disable these settings.

· Average: The average speed of inbound traffic.

· Peak: The speed of inbound traffic during peak times.

· Burst: The speed of inbound traffic during bursts.

Outbound 

The settings to be applied to outbound traffic. Select or clear the Outbound check box to enable or disable these settings.

· Average: The average speed of outbound traffic.

· Peak: The speed of outbound traffic during peak times.

· Burst: The speed of outbound traffic during bursts.
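The structure of the table above maps naturally onto a small data model: each direction is either disabled (check box cleared) or carries Average/Peak/Burst values. The sketch below is hypothetical; the table does not state the units for these fields, so they are left as bare integers here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BandwidthSettings:
    average: int  # sustained rate (units per the oVirt UI)
    peak: int     # rate allowed during peak times
    burst: int    # rate allowed during bursts

@dataclass
class NetworkQoS:
    name: str
    # None models a cleared Inbound/Outbound check box (no limit applied).
    inbound: Optional[BandwidthSettings] = None
    outbound: Optional[BandwidthSettings] = None
```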

Removing QoS

Summary 

Remove a QoS profile from your virtualized environment.

Procedure 5.6. Removing a QoS profile 

1. Use the Data Centers resource tab, tree mode, or the search function to display and select a data center in the results list.

2. Select the Network QoS tab in the details pane to display the available QoS profiles.

3. Select the QoS profile to remove and click Remove to open the Remove Network QoS window. This window will list what, if any, vNIC profiles are using the selected QoS profile.

4. Click OK to save the changes and close the window.

Result 

You have removed the QoS profile.

Logical Network Tasks

Creating a New Logical Network in a Data Center or Cluster

Summary 

Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 5.7. Creating a New Logical Network in a Data Center or Cluster 

1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.

2. Click the Logical Networks tab of the details pane to list the existing logical networks.

3. From the Data Centers details pane, click New to open the New Logical Network window. From the Clusters details pane, click Add Network to open the New Logical Network window.

4. Enter a Name, Description and Comment for the logical network.

5. In the Export section, select the Create on external provider check box to create the logical network on an external provider. Select the external provider from the External Provider drop-down menu.

6. In the Network Parameters section, select the Enable VLAN tagging, VM network and Override MTU to enable these options.

7. Enter a new label or select an existing label for the logical network in the Network Label text field.

8. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.

9. If the Create on external provider check box is selected, the Subnet tab will be visible. From the Subnet tab enter a Name, CIDR and select an IP Version for the subnet that the logical network will provide.

10. From the Profiles tab, add vNIC profiles to the logical network as required.

11. Click OK.

Result 

You have defined a logical network as a resource required by a cluster or clusters in the data center. If you entered a label for the logical network, it will be automatically added to all host network interfaces with that label.

Note: When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.

Explanation of Settings and Controls in the New Logical Network and Edit Logical Network Windows

Logical Network General Settings Explained

The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network window.

Table 5.3. New Logical Network and Edit Logical Network Settings 

Field Name

Description

Name 

The name of the logical network. This text field has a 15-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Description 

The description of the logical network. This text field has a 40-character limit.

Comment 

A field for adding plain text, human-readable comments regarding the logical network.

Create on external provider 

Allows you to create the logical network on an OpenStack Networking instance that has been added to oVirt as an external provider.

External Provider - Allows you to select the external provider on which the logical network will be created.

Enable VLAN tagging 

VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled.

VM Network 

Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box.

Override MTU 

Set a custom maximum transmission unit for the logical network. You can use this to match the maximum transmission unit supported by your new logical network to the maximum transmission unit supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Override MTU is selected.

Network Label 

Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label.
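The General-tab constraints in Table 5.3 can be collected into one validation sketch. This helper is hypothetical; the 1-4094 VLAN tag range is an assumption taken from the IEEE 802.1Q standard, as the table only says the tag must be numeric.

```python
import re

def validate_logical_network(name, description="", vlan_tag=None):
    """Check the General-tab constraints from Table 5.3 (sketch).

    Returns a list of error strings; an empty list means the
    settings pass these checks.
    """
    errors = []
    # Name: 15-character limit, letters/numbers/hyphens/underscores.
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,15}", name):
        errors.append("Name: 1-15 letters, numbers, hyphens, or underscores")
    # Description: 40-character limit.
    if len(description) > 40:
        errors.append("Description: 40-character limit")
    # VLAN tag: numeric; 1-4094 assumed from 802.1Q, not stated in the table.
    if vlan_tag is not None and not 1 <= vlan_tag <= 4094:
        errors.append("VLAN tag: numeric value between 1 and 4094")
    return errors
```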

Logical Network Cluster Settings Explained

The table below describes the settings for the Cluster tab of the New Logical Network and Edit Logical Network window.

Table 5.4. New Logical Network and Edit Logical Network Settings 

Field Name

Description

Attach/Detach Network to/from Cluster(s) 

Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters.

Name - The name of the cluster to which the settings will apply. This value cannot be edited.

Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster.

Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster.

Logical Network vNIC Profiles Settings Explained

The table below describes the settings for the vNIC Profiles tab of the New Logical Network and Edit Logical Network window.

Table 5.5. New Logical Network and Edit Logical Network Settings 

Field Name

Description

vNIC Profiles 

Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile.

Public - Allows you to specify whether the profile is available to all users.

QoS - Allows you to apply a network quality of service (QoS) profile to the vNIC profile.

Editing a Logical Network

Summary 

Edit the settings of a logical network.

Procedure 5.8. Editing a Logical Network 

1. Use the Data Centers resource tab, tree mode, or the search function to find and select the data center of the logical network in the results list.

2. Click the Logical Networks tab in the details pane to list the logical networks in the data center.

3. Select a logical network and click Edit to open the Edit Logical Network window.

4. Edit the necessary settings.

5. Click OK to save the changes.

Result 

You have updated the settings of your logical network.

Note: Multi-host network configuration is available on data centers with a compatibility version of 3.1 or higher, and automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running.

Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Summary 

Specify the traffic type for the logical network to optimize the network traffic flow.

Procedure 5.9. Assigning or Unassigning a Logical Network to a Cluster 

1. Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.

2. Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.

3. Click Manage Networks to open the Manage Networks window.

Figure 5.2. Manage Networks

4. Select the appropriate check boxes.

5. Click OK to save the changes and close the window.

Result 

You have optimized the network traffic flow by assigning a specific type of traffic to be carried on a specific logical network.

Note: Networks offered by external providers cannot be used as display networks.

Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.

Table 5.6. Manage Networks Settings 

Field

Description/Action

Assign 

Assigns the logical network to all hosts in the cluster.

Required 

A network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational.

VM Network 

A logical network marked "VM Network" carries network traffic relevant to the virtual machine network.

Display Network 

A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller.

Migration Network 

A logical network marked "Migration Network" carries virtual machine and storage migration traffic.

Adding Multiple VLANs to a Single Network Interface Using Logical Networks

Summary 

Multiple VLANs can be added to a single network interface to separate traffic on a single host.

Important: You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.

Procedure 5.10. Adding Multiple VLANs to a Network Interface using Logical Networks 

1. Use the Hosts resource tab, tree mode, or the search function to find and select in the results list a host associated with the cluster to which your VLAN-tagged logical networks are assigned.

2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.

3. Click Setup Host Networks to open the Setup Host Networks window.

4. Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned because of the VLAN tagging.

Figure 5.3. Setup Host Networks

5. Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Network window. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. Select a Boot Protocol from:

None,

DHCP, or

Static - provide the IP and Subnet Mask. (If the environment uses static addressing, assign static IP addresses here; otherwise the VLAN separation provides little benefit, as virtual machines can reach any VLAN.)

Click OK.

6. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.

7. Select the Save network configuration check box.

8. Click OK.

Add the logical network to each host in the cluster by editing a NIC on each host. After this is done, the network will become operational.

Result 

You have added multiple VLAN-tagged logical networks to a single interface. This process can be repeated multiple times, selecting and editing the same network interface on each host, to add logical networks with different VLAN tags to a single network interface.
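Under the hood, each VLAN-tagged logical network maps to an 802.1Q sub-interface of the physical NIC. The sketch below builds the iproute2 commands that correspond to the configuration described above. It is illustrative only: the NIC and network names are hypothetical, and on a real host VDSM performs the equivalent configuration for you.

```python
def vlan_commands(nic, networks):
    """Build the `ip` commands that create one VLAN sub-interface and
    one bridge per VLAN-tagged logical network on a single NIC
    (illustrative; VDSM does the real work on an oVirt host)."""
    cmds = []
    for name, vlan_id in networks.items():
        vlan_dev = f"{nic}.{vlan_id}"
        # Create the 802.1Q sub-interface carrying this VLAN tag.
        cmds.append(f"ip link add link {nic} name {vlan_dev} type vlan id {vlan_id}")
        # Bridge the sub-interface so virtual machines can attach to it.
        cmds.append(f"ip link add name {name} type bridge")
        cmds.append(f"ip link set {vlan_dev} master {name}")
        cmds.append(f"ip link set {vlan_dev} up")
    return cmds

# Hypothetical NIC "em1" carrying two tagged logical networks.
for cmd in vlan_commands("em1", {"storage": 100, "backup": 101}):
    print(cmd)
```

Because each logical network rides its own VLAN sub-interface, the single physical NIC can carry them all without their traffic mixing.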

Network Labels

Network labels can be used to greatly simplify several administrative tasks associated with creating and administering logical networks and associating those logical networks with physical host network interfaces and bonds.

A network label is a plain text, human readable label that can be attached to a logical network or a physical host network interface. There is no strict limit on the length of a label, but you must use a combination of lowercase and uppercase letters, underscores and hyphens; no spaces or special characters are allowed.
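The character rule above can be expressed as a simple check. This is a sketch: the regex is our reading of the stated rule (digits are not mentioned, so they are rejected here), and the function name is ours.

```python
import re

# Assumed encoding of the rule above: letters, underscores and
# hyphens only -- no spaces or special characters.
LABEL_RE = re.compile(r"^[A-Za-z_-]+$")

def is_valid_label(label: str) -> bool:
    """Return True if `label` satisfies the stated character rule."""
    return bool(LABEL_RE.match(label))

print(is_valid_label("storage-net"))   # True
print(is_valid_label("storage net"))   # False: contains a space
```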

Attaching a label to a logical network or physical host network interface creates an association with other logical networks or physical host network interfaces to which the same label has been attached, as follows:

Network Label Associations 

· When you attach a label to a logical network, that logical network will be automatically associated with any physical host network interfaces with the given label.

· When you attach a label to a physical host network interface, any logical networks with the given label will be automatically associated with that physical host network interface.

· Changing the label attached to a logical network or physical host network interface acts in the same way as removing a label and adding a new label. The association between related logical networks or physical host network interfaces is updated.

Network Labels and Clusters 

· When a labeled logical network is added to a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically added to that physical host network interface.

· When a labeled logical network is detached from a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically detached from that physical host network interface.

Network Labels and Logical Networks With Roles 

· When a labeled logical network is assigned to act as a display network or migration network, that logical network is then configured on the physical host network interface using DHCP so that the logical network can be assigned an IP address.
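The association rules above can be modeled as an index from labels to the networks and interfaces that carry them. This is a toy sketch with hypothetical names, not how oVirt stores the data internally.

```python
from collections import defaultdict

class LabelIndex:
    """Toy model of the association rules: a label links every logical
    network carrying it to every host NIC carrying it."""
    def __init__(self):
        self.networks = defaultdict(set)    # label -> logical networks
        self.interfaces = defaultdict(set)  # label -> host NICs

    def label_network(self, label, network):
        self.networks[label].add(network)

    def label_interface(self, label, nic):
        self.interfaces[label].add(nic)

    def associations(self, label):
        # Every labeled network is associated with every labeled NIC.
        return {(net, nic)
                for net in self.networks[label]
                for nic in self.interfaces[label]}

idx = LabelIndex()
idx.label_network("prod", "vm-net")
idx.label_interface("prod", "host1:em1")
idx.label_interface("prod", "host2:em1")
print(sorted(idx.associations("prod")))
```

Removing or changing a label simply updates the index, which is why re-labeling behaves like a remove followed by an add.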

Adding Network Labels to Host Network Interfaces

Summary 

Using network labels allows you to greatly simplify the administrative workload associated with assigning logical networks to host network interfaces.

Procedure 5.11. Adding Network Labels to Host Network Interfaces 

1. Use the Hosts resource tab, tree mode, or the search function to find and select in the results list a host associated with the cluster to which your VLAN-tagged logical networks are assigned.

2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.

3. Click Setup Host Networks to open the Setup Host Networks window.

4. Edit a physical network interface by hovering your cursor over it and clicking the pencil icon to open the Edit Interface window.

Figure 5.4. The Edit Interface Window

5. Enter a name for the network label in the Label text field and use the + and - buttons to add or remove additional network labels.

6. Click OK.

Result 

You have added a network label to a host network interface. Any newly created logical networks with the same label will be automatically assigned to all host network interfaces with that label. Also, removing a label from a logical network will automatically remove that logical network from all host network interfaces with that label.

Using the Networks Tab

The Networks resource tab provides a central location for users to perform network-related operations and search for networks based on each network's property or association with other resources.

All networks in the oVirt environment display in the results list of the Networks tab. The New, Edit and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers.

Click on each network name and use the Clusters, Hosts, Virtual Machines, Templates, and Permissions tabs in the details pane to perform functions including:

· Attaching or detaching the networks to clusters and hosts

· Removing network interfaces from virtual machines and templates

· Adding and removing permissions for users to access and manage networks

These functions are also accessible through each individual resource tab.

External Provider Networks

Importing Networks From External Providers

Summary 

If an external provider offering networking services has been registered in oVirt, the networks provided by that provider can be imported into oVirt and used by virtual machines.

Procedure 5.12. Importing a Network From an External Provider 

1. Click the Networks tab.

2. Click the Import button to open the Import Networks window.

Figure 5.5. The Import Networks Window

3. From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list.

4. Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list.

5. From the Data Center drop-down list, select the data center into which the networks will be imported.

6. Optionally, clear the Allow All check box for a network in the Networks to Import list to prevent that network from being available to all users.

7. Click the Import button.

Result 

The selected networks are imported into the target data center and can now be used in oVirt.

Limitations to Using External Provider Networks

The following limitations apply to using logical networks imported from an external provider in an oVirt environment.

· Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks.

· The same logical network can be imported more than once, but only to different data centers.

· You cannot edit logical networks offered by external providers in oVirt. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the OpenStack Networking instance that provides that logical network.

· Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers.

· If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from oVirt while the logical network is still in use by the virtual machine.

· Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported.

Important: Logical networks imported from external providers are only compatible with Red Hat Enterprise Linux hosts and cannot be assigned to virtual machines running on oVirt Node hosts.

Important: External provider discovery and importing are Technology Preview features. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

Subnets on External Provider Logical Networks

Configuring Subnets on External Provider Logical Networks

A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. The DHCP service provided by the Neutron instance on which the logical network is hosted is responsible for assigning these IP addresses.

While oVirt automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within oVirt.
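The assignment behavior described above can be illustrated with Python's ipaddress module. In a real deployment the Neutron DHCP service performs the actual assignment, so this is only a sketch of which addresses are candidates.

```python
import ipaddress

def assignable_addresses(subnets):
    """Yield candidate VM addresses from the defined subnets, skipping
    the network and broadcast addresses. With no subnets defined,
    nothing is yielded -- mirroring 'no subnets, no IP assignment'."""
    for cidr in subnets:
        yield from ipaddress.ip_network(cidr).hosts()

# Two small illustrative subnets defined on one logical network.
addrs = list(assignable_addresses(["192.168.10.0/30", "192.168.20.0/30"]))
print([str(a) for a in addrs])
```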

Adding Subnets to External Provider Logical Networks

Summary 

Create a subnet on a logical network provided by an external provider.

Procedure 5.13. Adding Subnets to External Provider Logical Networks 

1. Click the Networks tab.

2. Click the logical network provided by an external provider to which the subnet will be added.

3. Click the Subnets tab in the details pane.

4. Click the New button to open the New External Subnet window.

5. Enter a Name and CIDR for the new subnet.

6. From the IP Version drop-down menu, select either IPv4 or IPv6.

7. Click OK.

Result 

A new subnet is created on the logical network.
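Before submitting the New External Subnet form, the CIDR value can be sanity-checked with Python's ipaddress module. This is an illustrative client-side check, not part of oVirt itself.

```python
import ipaddress

def validate_cidr(cidr: str):
    """Return the parsed network if `cidr` is a valid subnet in CIDR
    notation, or raise ValueError. strict=True rejects values with
    host bits set below the prefix (e.g. 10.0.0.5/24)."""
    return ipaddress.ip_network(cidr, strict=True)

print(validate_cidr("10.0.0.0/24"))   # 10.0.0.0/24
print(validate_cidr("fd00::/64"))     # fd00::/64 (IPv6 works too)
try:
    validate_cidr("10.0.0.5/24")      # host bits set -> rejected
except ValueError as err:
    print("rejected:", err)
```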

Removing Subnets from External Provider Logical Networks

Summary 

Remove a subnet from a logical network provided by an external provider.

Procedure 5.14. Removing Subnets from External Provider Logical Networks 

1. Click the Networks tab.

2. Click the logical network provided by an external provider from which the subnet will be removed.

3. Click the Subnets tab in the details pane.

4. Click the subnet to remove.

5. Click the Remove button and click OK when prompted.

Result 

The subnet is removed from the logical network. 

Hosts

Introduction to oVirt Hosts

Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM).

KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by oVirt. An oVirt environment has one or more hosts attached to it.

oVirt supports two methods of installing hosts. You can use the oVirt Node installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux, CentOS, or Fedora installation. 

oVirt hosts take advantage of tuned profiles, which provide virtualization optimizations. For more information on tuned for Red Hat Enterprise Linux, please refer to the Red Hat Enterprise Linux 6.0 Performance Tuning Guide.

The oVirt Node has security features enabled. Security Enhanced Linux (SELinux) and the iptables firewall are fully configured and on by default. oVirt can open required ports on Red Hat Enterprise Linux, CentOS, and Fedora hosts when it adds them to the environment. For a full list of ports, see Section A.2, “Virtualization Host Firewall Requirements”.

A host is a physical 64-bit server with the Intel VT or AMD-V extensions, running the AMD64/Intel 64 version of Red Hat Enterprise Linux 6.1 or later, or CentOS 6.1 or later.

Important: Red Hat Enterprise Linux 5.4 and Red Hat Enterprise Linux 5.5 machines that belong to existing clusters are supported. oVirt Guest Agent is now included in the virtio serial channel. Any Guest Agents installed on Windows guests on Red Hat Enterprise Linux hosts will lose their connection to oVirt when the Red Hat Enterprise Linux hosts are upgraded from version 5 to version 6.

A physical host on the oVirt platform:

· Must belong to only one cluster in the system.

· Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions.

· Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation.

· Has a minimum of 2 GB RAM.

· Can have an assigned system administrator with system permissions.

oVirt Node Hosts

oVirt Node hosts are installed using a special build of Fedora, with only the packages required to host virtual machines. They run stateless, not writing any changes to disk unless explicitly required to.

oVirt Node hosts can be added directly to, and configured by, oVirt. Alternatively, a host can be configured locally to connect to oVirt; oVirt is then used only to approve the host for use in the environment.

Unlike Red Hat Enterprise Linux, Fedora, or CentOS hosts, oVirt Node hosts cannot be added to clusters that have been enabled for Gluster service for use as Red Hat Storage nodes.

Important: The oVirt Node is a closed system. Use a Red Hat Enterprise Linux, CentOS, or Fedora host if additional rpm packages are required for your environment.

Foreman Host Provider Hosts

Hosts provided by a Foreman host provider can also be used as virtualization hosts by oVirt. After a Foreman host provider has been added to oVirt as an external provider, any hosts that it provides can be added to and used in oVirt in the same way as oVirt Node hosts and Red Hat Enterprise Linux/CentOS hosts.

Important: Foreman host provider hosts are a Technology Preview feature. Technology Preview features are not fully supported, may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

Enterprise Linux Hosts

You can use a standard Red Hat Enterprise Linux 6 or CentOS 6 installation on capable hardware as a host. oVirt supports hosts running Red Hat Enterprise Linux 6 or CentOS server AMD64/Intel 64 version.

Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, creation of a bridge, and a reboot of the host. Use the Details pane to monitor the hand-shake process as the host and management system establish a connection.

Host Tasks

Adding an Enterprise Linux Host

Summary 

An Enterprise Linux host is based on a standard "basic" installation of Red Hat Enterprise Linux or CentOS. The physical host must be set up before you can add it to the oVirt environment.

oVirt logs into the host to perform virtualization capability checks, install packages, create a network bridge, and reboot the host. The process of adding a new host can take up to 10 minutes.

Procedure 6.1. Adding an Enterprise Linux Host 

1. Click the Hosts resource tab to list the hosts in the results list.

2. Click New to open the New Host window.

3. Use the drop-down menus to select the Data Center and Host Cluster for the new host.

4. Enter the Name, Address, and SSH Port of the new host.

5. Select an authentication method to use with the host.

o Enter the root user's password to use password authentication.

o Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.

6. You have now completed the mandatory steps to add a Red Hat Enterprise Linux or CentOS host. Click the Advanced Parameters button to expand the advanced host settings.

1. Optionally disable automatic firewall configuration.

2. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.

7. You can configure the Power Management and SPM using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure.

8. Click OK to add the host and close the window.

Result 

The new host displays in the list of hosts with a status of Installing. When installation is complete, the status updates to Reboot. The host must be activated for the status to change to Up.

Note: You can view the progress of the installation in the details pane.

Approving a Hypervisor

Summary 

It is not possible to run virtual machines on a Hypervisor until its addition to the environment has been approved in oVirt.

Procedure 6.2. Approving a Hypervisor 

1. Log in to oVirt Administration Portal.

2. From the Hosts tab, click on the host to be approved. The host should currently be listed with the status of Pending Approval.

3. Click the Approve button. The Edit and Approve Hosts dialog displays. You can use the dialog to set a name for the host, fetch its SSH fingerprint before approving it, and configure power management, where the host has a supported power management card. For information on power management configuration, refer to “Host Power Management Settings Explained”.

4. Click OK. If you have not configured power management you will be prompted to confirm that you wish to proceed without doing so, click OK.

Result 

The status in the Hosts tab changes to Installing. After a brief delay, the host status changes to Up.

Explanation of Settings and Controls in the New Host and Edit Host Windows

Host General Settings Explained

These settings apply when editing the details of a host or adding new Red Hat Enterprise Linux hosts and Foreman host provider hosts.

The General settings table contains the information required on the General tab of the New Host or Edit Host window.

Table 6.1. General Settings 

Field Name

Description

Data Center 

The data center to which the host belongs. oVirt Node hosts cannot be added to Gluster-enabled clusters.

Host Cluster 

The cluster to which the host belongs.

Use External Providers 

Select or clear this check box to view or hide options for adding hosts provided by external providers. Upon selection, a drop-down list of external providers that have been added to oVirt displays. The following options are also available:

· Provider search filter - A text field that allows you to search for hosts provided by the selected external provider. This option is provider-specific; see provider documentation for details on forming search queries for specific providers. Leave this field blank to view all available hosts.

· External Hosts - A drop-down list that is populated with the name of hosts provided by the selected external provider. The entries in this list are filtered in accordance with any search queries that have been input in the Provider search filter.

Name 

The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Comment 

A field for adding plain text, human-readable comments regarding the host.

Address 

The IP address, or resolvable hostname of the host.

Password 

The password of the host's root user. This can only be given when you add the host; it cannot be edited afterwards.

SSH PublicKey 

Copy the contents in the text box to the /root/.ssh/authorized_keys file on the host to use oVirt's ssh key instead of a password to authenticate with the host.

Automatically configure host firewall 

When adding a new host, oVirt can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter.

SSH Fingerprint 

You can fetch the host's SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter.
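For the SSH Fingerprint comparison described above, the classic MD5 colon-separated fingerprint can be computed from a base64 host key blob, matching the format printed by `ssh-keygen -l -E md5`. The key material below is hypothetical, for illustration only.

```python
import base64
import hashlib

def ssh_fingerprint(b64_key: str) -> str:
    """Compute the MD5 colon-separated fingerprint of a base64-encoded
    host key blob, for comparing an expected fingerprint with the one
    oVirt fetches (sketch; modern tools also offer SHA256 digests)."""
    raw = base64.b64decode(b64_key)
    digest = hashlib.md5(raw).hexdigest()
    # Group the hex digest into colon-separated byte pairs.
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Hypothetical (truncated) key material for illustration only.
fp = ssh_fingerprint("AAAAB3NzaC1yc2EAAAADAQABAAABAQC7")
print(fp)
```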

Host Power Management Settings Explained

The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows.

Table 6.2. Power Management Settings 

Field Name

Description

Primary/Secondary 

Prior to oVirt 3.2, a host with power management configured only recognized one fencing agent. Fencing agents configured on version 3.1 and earlier, and single agents, are treated as primary agents. The secondary option is valid when a second agent is defined.

Concurrent 

Valid when there are two fencing agents, for example in dual-power hosts in which each power switch has two agents connected to the same power switch.

· If this check box is selected, both fencing agents are used concurrently when a host is fenced. This means that both fencing agents have to respond to the Stop command for the host to be stopped; if one agent responds to the Start command, the host will go up.

· If this check box is not selected, the fencing agents are used sequentially. This means that to stop or start a host, the primary agent is used first, and if it fails, the secondary agent is used.

Address 

The address to access your host's power management device. Either a resolvable hostname or an IP address.

User Name 

User account with which to access the power management device. You can set up a user on the device, or use the default user.

Password 

Password for the user accessing the power management device.

Type 

The type of power management device in your host.

Choose one of the following:

· apc - APC MasterSwitch network power switch. Not for use with APC 5.x power switch devices.

· apc_snmp - Use with APC 5.x power switch devices.

· bladecenter - IBM BladeCenter Remote Supervisor Adapter.

· cisco_ucs - Cisco Unified Computing System.

· drac5 - Dell Remote Access Controller for Dell computers.

· drac7 - Dell Remote Access Controller for Dell computers.

· eps - ePowerSwitch 8M+ network power switch.

· hpblade - HP BladeSystem.

· ilo, ilo2, ilo3, ilo4 - HP Integrated Lights-Out.

· ipmilan - Intelligent Platform Management Interface and Sun Integrated Lights Out Management devices.

· rsa - IBM Remote Supervisor Adapter.

· rsb - Fujitsu-Siemens RSB management interface.

· wti - WTI Network PowerSwitch.

Port 

The port number used by the power management device to communicate with the host.

Options 

Power management device specific options. Enter these as 'key=value' or 'key'. See the documentation of your host's power management device for the options available.

Secure 

Tick this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols, depending on what the power management agent supports.

Source 

Specifies whether the host will search within its cluster or data center for a fencing proxy. Use the Up and Down buttons to change the sequence in which the resources are used.

Disable policy control of power management 

Power management is controlled by the Cluster Policy of the host's cluster. If power management is enabled and the defined low utilization value is reached, oVirt will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Tick this check box to disable policy control.
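The concurrent and sequential agent behavior described in the Concurrent row can be sketched as follows. The agent callables are hypothetical stand-ins for real fence agents.

```python
def fence_stop(agents, concurrent):
    """Return True if the host is considered stopped. `agents` is an
    ordered list (primary first) of callables returning True on success.
    Concurrent: every agent must confirm the Stop command.
    Sequential: the primary is tried first, the secondary on failure."""
    if concurrent:
        return all(agent() for agent in agents)
    for agent in agents:
        if agent():
            return True
    return False

# Hypothetical agents: the primary succeeds, the secondary fails.
primary_ok = lambda: True
secondary_fail = lambda: False
print(fence_stop([primary_ok, secondary_fail], concurrent=True))   # False
print(fence_stop([primary_ok, secondary_fail], concurrent=False))  # True
```

With concurrent agents, one unresponsive agent is enough to keep the host from being declared stopped; sequentially, the secondary agent is only a fallback.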

SPM Priority Settings Explained

The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window.

Table 6.3. SPM settings 

Field Name

Description

SPM Priority 

Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low, Normal, and High priority. Low priority means that there is a reduced likelihood of the host being assigned the role of SPM, and High priority means there is an increased likelihood. The default setting is Normal.

Host Console Settings Explained

The Console settings table details the information required on the Console tab of the New Host or Edit Host window.

Table 6.4. Console settings 

Field Name

Description

Override display address 

Select this check box to override the display addresses of the host. This feature is useful in a case where the hosts are defined by internal IP and are behind a NAT firewall. When a user connects to a virtual machine from outside of the internal network, instead of returning the private address of the host on which the virtual machine is running, the machine returns a public IP or FQDN (which is resolved in the external network to the public IP).

Display address 

The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP.

Configuring Host Power Management Settings

Summary 

Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal.

It is necessary to configure host power management in order to utilize host high availability and virtual machine high availability.

Important: Ensure that your host is in maintenance mode before configuring power management settings. Otherwise, all running virtual machines on that host will be stopped ungracefully upon restarting the host, which can cause disruptions in production environments. A warning dialog will appear if you have not correctly set your host to maintenance mode.

Procedure 6.3. Configuring Power Management Settings 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click Edit to open the Edit Host window.

3. Click the Power Management tab to display the Power Management settings.

4. Select the Enable Power Management check box to enable the fields.

5. The Primary option is selected by default if you are configuring a new power management device. If you are adding a second device, set it to Secondary.

6. Select the Concurrent check box to enable multiple fence agents to be used concurrently.

7. Enter the Address, User Name, and Password of the power management device into the appropriate fields.

8. Use the drop-down menu to select the Type of power management device.

9. Enter the Port number used by the power management device to communicate with the host.

10. Enter the Options for the power management device. Use a comma-separated list of key=value or key.

11. Select the Secure check box to enable the power management device to connect securely to the host.

12. Click Test to ensure the settings are correct.

13. Click OK to save your settings and close the window.

Result 

You have configured the power management settings for the host. The Power Management drop-down menu is now enabled in the Administration Portal.

Note: Power management is controlled by the Cluster Policy of the host's cluster. If power management is enabled and the defined low utilization value is reached, oVirt will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Tick the Disable policy control of power management check box if you do not wish for your host to automatically perform these functions.
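The Options field (step 10) accepts a comma-separated list of key=value or bare key entries. The parsing sketch below shows the assumed format; the example keys are illustrative, so see your fence agent's documentation for real option names.

```python
def parse_fence_options(options: str) -> dict:
    """Parse a comma-separated list of 'key=value' or bare 'key'
    entries, as accepted by the Options field. Bare keys map to None;
    empty entries are ignored."""
    parsed = {}
    for entry in filter(None, (e.strip() for e in options.split(","))):
        key, sep, value = entry.partition("=")
        parsed[key] = value if sep else None
    return parsed

# Illustrative option string; keys depend on the fence agent in use.
print(parse_fence_options("lanplus=1,power_wait=5,verbose"))
```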

Configuring Host Storage Pool Manager Settings

Summary 

The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host's available resources, it is important to prioritize hosts that can afford the resources.

The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority.
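The priority-to-likelihood relationship can be sketched by ordering candidate hosts. The numeric weights below are assumptions for illustration; the real scheduler treats priority as a likelihood rather than a strict order.

```python
# Assumed numeric weights for the three priority levels.
SPM_PRIORITY = {"Low": 1, "Normal": 5, "High": 10}

def spm_candidates(hosts):
    """Order eligible (name, priority) pairs so that higher-priority
    hosts are tried first when the SPM role must be (re)assigned.
    Sketch only: oVirt's selection is probabilistic, not strict."""
    return sorted(hosts, key=lambda h: SPM_PRIORITY[h[1]], reverse=True)

hosts = [("host1", "Normal"), ("host2", "High"), ("host3", "Low")]
print(spm_candidates(hosts))
```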

Procedure 6.4. Configuring SPM settings 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click Edit to open the Edit Host window.

3. Click the SPM tab to display the SPM Priority settings.

4. Use the radio buttons to select the appropriate SPM priority for the host.

5. Click OK to save the settings and close the window.

Result 

You have configured the SPM priority of the host.

Editing a Resource

Summary 

Edit the properties of a resource.

Procedure 6.5. Editing a Resource 

1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.

2. Click Edit to open the Edit window.

3. Change the necessary properties and click OK.

Result 

The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

Approving Newly Added oVirt Node Hosts

Summary 

You must install your oVirt Node hosts before you can approve them in oVirt. Read about installing oVirt Nodes in the oVirt Installation Guide.

Once installed, the oVirt Node host is visible in the Administration Portal but not active. Approve it so that it can host virtual machines.

Procedure 6.6. Approving newly added oVirt Node hosts 

1. In the Hosts tab, select the host you recently installed using the oVirt Node host installation media. This host shows a status of Pending Approval.

2. Click the Approve button.

Result 

The host's status changes to Up and it can be used to run virtual machines.

Note: You can also add this host using the procedure in “Adding an Enterprise Linux Host”, which utilizes the oVirt Node host's IP address and the password that was set on the oVirt Engine screen.

Moving a Host to Maintenance Mode

Summary 

Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. When a host is placed into maintenance mode oVirt attempts to migrate all running virtual machines to alternative hosts.

The normal prerequisites for live migration apply, in particular there must be at least one active host in the cluster with capacity to run the migrated virtual machines.

Procedure 6.7. Moving a Host to Maintenance Mode 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click Maintenance to open the Maintenance Host(s) confirmation window.

3. Click OK to initiate maintenance mode.

Result 

All running virtual machines are migrated to alternative hosts. The Status field of the host changes to Preparing for Maintenance, and finally Maintenance when the operation completes successfully.

Activating a Host from Maintenance Mode

Summary 

A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used.

Procedure 6.8. Activating a Host from Maintenance Mode 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click Activate.

Result 

The host status changes to Unassigned, and finally Up when the operation is complete. Virtual machines can now run on the host.

Removing a Host

Summary 

Remove a host from your virtualized environment.

Procedure 6.9. Removing a host 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Place the host into maintenance mode.

3. Click Remove to open the Remove Host(s) confirmation window.

4. Select the Force Remove check box if the host is part of a Red Hat Storage cluster and has volume bricks on it, or if the host is non-responsive.

5. Click OK.

Result 

Your host has been removed from the environment and is no longer visible in the Hosts tab.

Customizing Hosts with Tags

Summary 

You can use tags to store information about your hosts. You can then search for hosts based on tags.

Procedure 6.10. Customizing hosts with tags 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click Assign Tags to open the Assign Tags window.

Figure 6.1. Assign Tags Window

3. The Assign Tags window lists all available tags. Select the check boxes of applicable tags.

4. Click OK to assign the tags and close the window.

Result 

You have added extra, searchable information about your host as tags.

Hosts and Networking

Refreshing Host Capabilities

Summary 

When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in oVirt.

Procedure 6.11. To Refresh Host Capabilities 

1. Use the resource tabs, tree mode, or the search function to find and select a host in the results list.

2. Click the Refresh Capabilities button.

Result 

The list of network interface cards in the Network Interfaces tab of the details pane for the selected host is updated. Any new network interface cards can now be used in oVirt.

Editing Host Network Interfaces and Assigning Logical Networks to Hosts

Summary 

You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces.

Important: You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.

Procedure 6.12. Editing Host Network Interfaces and Assigning Logical Networks to Hosts 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click the Network Interfaces tab in the details pane.

3. Click the Setup Host Networks button to open the Setup Host Networks window.

Figure 6.2. The Setup Host Networks window

4. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface. Alternatively, right-click the logical network and select a network interface from the drop-down menu.

5. Configure the logical network:

1. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.

2. Select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Subnet Mask, and Gateway.

3. Click OK.

4. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.

6. Select the Verify connectivity between Host and Engine check box to check network connectivity; this action only works if the host is in maintenance mode.

7. Select the Save network configuration check box to make the changes persistent when the environment is rebooted.

8. Click OK.

Result 

You have assigned logical networks to and configured a physical host network interface.

Note: If not all network interface cards for the host are displayed, click the Refresh Capabilities button to update the list of network interface cards available for that host.

Bonds

Bonding Logic in oVirt

oVirt Administration Portal allows you to create bond devices using a graphical interface. There are several distinct bond creation scenarios, each with its own logic.


Two factors that affect bonding logic are:

· Are either of the devices already carrying logical networks?

· Are the devices carrying compatible logical networks? A single device cannot carry both VLAN tagged and non-VLAN tagged logical networks.

Table 6.5. Bonding Scenarios and Their Results 

NIC + NIC: The Create New Bond window is displayed, and you can configure a new bond device. If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

NIC + Bond: The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible. If the devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

Bond + Bond: If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond. If the bond devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.
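The compatibility rule underlying all three scenarios is the one stated above: a single device cannot carry both VLAN tagged and non-VLAN tagged logical networks. A minimal sketch of that check (an illustration only, not oVirt's actual validation code):

```python
def can_bond(nets_a, nets_b):
    """Check the VLAN compatibility rule: the combined device may carry
    either VLAN tagged networks or untagged networks, but not both.
    Each network is a (name, vlan_id) pair, vlan_id None when untagged.
    Illustration only, not oVirt's actual validation code."""
    tagged = {vlan_id is not None for _, vlan_id in nets_a + nets_b}
    return len(tagged) <= 1

print(can_bond([("ovirtmgmt", None)], [("storage", 100)]))  # False: mixed tagged/untagged
print(can_bond([("vm1", 100)], [("vm2", 200)]))             # True: all tagged
```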

Bonding Modes

oVirt supports the following common bonding modes:

· Mode 1 (active-backup policy) sets all interfaces to the backup state while one remains active. Upon failure on the active interface, a backup interface replaces it as the only active interface in the bond. The MAC address of the bond in mode 1 is visible on only one port (the network adapter), to prevent confusion for the switch. Mode 1 provides fault tolerance and is supported in oVirt.

· Mode 2 (XOR policy) selects the interface to transmit packets based on the result of an XOR operation on the source and destination MAC addresses, modulo the NIC slave count. This calculation ensures that the same interface is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in oVirt.

· Mode 4 (IEEE 802.3ad policy) creates aggregation groups for which included interfaces share the speed and duplex settings. Mode 4 uses all interfaces in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in oVirt.

· Mode 5 (adaptive transmit load balancing policy) ensures the outgoing traffic distribution is according to the load on each interface and that the current interface receives all incoming traffic. If the interface assigned to receive traffic fails, another interface is assigned the receiving role instead. Mode 5 is supported in oVirt.
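The Mode 2 selection rule described above can be sketched as follows. This is a simplification (the kernel's default layer2 hash also folds in the packet type), intended only to show why the same destination MAC always maps to the same slave:

```python
def xor_slave_index(src_mac, dst_mac, slave_count):
    """Mode 2 (XOR policy) slave selection, simplified: XOR the source and
    destination MAC addresses and take the result modulo the slave count.
    The kernel's actual layer2 hash also folds in the packet type."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % slave_count

# The same MAC pair always maps to the same slave:
print(xor_slave_index("00:11:22:33:44:55", "66:77:88:99:aa:bb", 2))  # 0
```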

Creating a Bond Device Using the Administration Portal

Summary 

You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability. You can bond multiple network interfaces, pre-existing bond devices, and combinations of the two.

A bond cannot carry both VLAN tagged and non-VLAN tagged traffic.

Procedure 6.13. Creating a Bond Device using the Administration Portal 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.

3. Click Setup Host Networks to open the Setup Host Networks window.

4. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu. If the devices are incompatible, for example one is VLAN tagged and the other is not, the bond operation fails with a suggestion on how to correct the compatibility issue.

Figure 6.3. Bond Devices Window

5. Select the Bond Name and Bonding Mode from the drop-down menus. Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.

6. Click OK to create the bond and close the Create New Bond window.

7. Assign a logical network to the newly created bond device.

8. Optionally select Verify connectivity between Host and Engine and Save network configuration.

9. Click OK to accept the changes and close the Setup Host Networks window.

Result 

Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab of the details pane for the selected host.

Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.

Example Uses of Custom Bonding Options with Host Interfaces

You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

Example 6.1. xmit_hash_policy 

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:

mode=4 xmit_hash_policy=layer2+3
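The layer2+3 policy folds both MAC and IP addresses into the slave selection, which is why it spreads traffic destined for many different IP addresses across the slaves. A simplified Python sketch of such a hash (illustrative only; the exact kernel formula is given in the bonding driver HOWTO):

```python
import ipaddress

def layer2_3_hash(src_mac, dst_mac, src_ip, dst_ip, slave_count):
    """Simplified sketch of a layer2+3 transmit hash: fold both the MAC
    and IP addresses into the slave selection, so flows to different IP
    addresses spread across slaves. Not the exact kernel formula."""
    h = int(src_mac.replace(":", ""), 16) ^ int(dst_mac.replace(":", ""), 16)
    h ^= int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= h >> 16  # fold the high bits down, as the kernel hash does
    h ^= h >> 8
    return h % slave_count
```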

Example 6.2. ARP Monitoring 

The ARP monitor is useful for systems that cannot or do not report link state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:

mode=1 arp_interval=1 arp_ip_target=192.168.0.2

Example 6.3. Primary 

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:

mode=1 primary=eth0

Saving a Host Network Configuration

Summary 

One of the options when configuring a host network is to save the configuration as you apply it, making the changes persistent.

Any changes made to the host network configuration will be temporary if you did not select the Save network configuration check box in the Setup Host Networks window.

Save the host network configuration to make it persistent.

Procedure 6.14. Saving a host network configuration 

1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.

2. Click the Network Interfaces tab in the details pane to list the NICs on the host, their addresses, and other specifications.

3. Click the Save Network Configuration button.

4. The host network configuration is saved and the following message is displayed on the task bar: "Network changes were saved on host [Hostname]."

Result 

The host's network configuration is saved persistently and will survive reboots.

Note: Saving the host network configuration also updates the list of available network interfaces for the host. This behavior is similar to that of the Refresh Capabilities button.

 

 
