SUN Zone Cluster Installation and Configuration Guide, Part 3

3. Creating the ZFS storage pools and ZFS file systems

Create the zpools on aptest:

bash-3.00# zpool create erpapppool c1t1d0

bash-3.00# zpool create erpdbpool c1t2d0

On aptest, export erpdbpool:

bash-3.00# zpool export erpdbpool

On dbtest, import erpdbpool:

bash-3.00# zpool import erpdbpool
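This export/import pair is the basic unit of manual pool migration between the two nodes. As a hedged sketch (the `zpool` utility only exists on the cluster nodes, so this helper merely prints the commands and notes where each must run):

```shell
#!/bin/sh
# Sketch: print the commands for moving a ZFS pool between two hosts.
# The pool name is the one used in this guide; nothing is executed here,
# since zpool itself is only available on the Solaris nodes.
migrate_pool_cmds() {
    pool=$1
    echo "zpool export $pool    # run on the node that currently owns the pool"
    echo "zpool import $pool    # run on the node that should take it over"
}

migrate_pool_cmds erpdbpool
```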

Check the pool list on dbtest (erpdbpool is now here):

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpdbpool 79.5G 78K 79.5G 0% ONLINE -

rpool 79.5G 6.17G 73.3G 7% ONLINE -

Check the pool list on aptest:

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpapppool 79.5G 76.5K 79.5G 0% ONLINE -

rpool 80.5G 6.47G 74.0G 8% ONLINE -

Create the directories:

bash-3.00# mkdir zonedir

bash-3.00# cd zonedir

bash-3.00# pwd

/zonedir

bash-3.00# mkdir erpapp

Create the ZFS file systems with their mount points:

bash-3.00# zfs create -o mountpoint=/zonedir/erpapp erpapppool/erpapp_fs

bash-3.00# zfs create -o mountpoint=/zonedir/erpdb erpdbpool/erpdb_fs
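`zfs create -o mountpoint=...` is shorthand for creating the dataset and then setting its mountpoint property. A sketch that only prints the equivalent two-step form (dataset names are the ones created above; `zfs` exists only on the nodes):

```shell
#!/bin/sh
# Sketch: print the two-step equivalent of `zfs create -o mountpoint=...`.
zfs_mount_cmds() {
    ds=$1
    mp=$2
    echo "zfs create $ds"
    echo "zfs set mountpoint=$mp $ds"
}

zfs_mount_cmds erpapppool/erpapp_fs /zonedir/erpapp
```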

Check again with zfs list:

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

erpapppool 112K 78.3G 21K /erpapppool

erpapppool/erpapp_fs 21K 78.3G 21K /zonedir/erpapp

rpool 7.00G 72.2G 33.5K /rpool

rpool@20120402 20K - 33.5K -

rpool@20120406 0 - 33.5K -

rpool/ROOT 5.46G 72.2G 21K legacy

rpool/ROOT@20120402 0 - 21K -

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.46G 72.2G 5.31G /

rpool/ROOT/s10x_u9wos_14a@20120402 77.2M - 4.63G -

rpool/ROOT/s10x_u9wos_14a@20120406 70.6M - 5.18G -

rpool/dump 1.00G 72.2G 1.00G -

rpool/dump@20120402 16K - 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/export 62K 72.2G 23K /export

rpool/export@20120402 18K - 23K -

rpool/export@20120406 0 - 23K -

rpool/export/home 21K 72.2G 21K /export/home

rpool/export/home@20120402 0 - 21K -

rpool/export/home@20120406 0 - 21K -

rpool/swap 553M 72.8G 6.38M -

rpool/swap@20120402 2.18M - 6.38M -

rpool/swap@20120406 0 - 6.38M -

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpapppool 79.5G 117K 79.5G 0% ONLINE -

rpool 80.5G 6.47G 74.0G 8% ONLINE -

Now create the zone:

bash-3.00# zonecfg -z erpapp

erpapp: No such zone configured

Use 'create' to begin configuring a new zone.

zonecfg:erpapp> create

zonecfg:erpapp> set zonepath=/zonedir/erpapp

zonecfg:erpapp> set autoboot=false

zonecfg:erpapp> remove inherit-pkg-dir dir=/lib

zonecfg:erpapp> remove inherit-pkg-dir dir=/platform

zonecfg:erpapp> remove inherit-pkg-dir dir=/sbin

zonecfg:erpapp> remove inherit-pkg-dir dir=/usr

zonecfg:erpapp> add net

zonecfg:erpapp:net> set address=192.168.0.42

zonecfg:erpapp:net> set physical=e1000g0

zonecfg:erpapp:net> set defrouter=192.168.0.1

zonecfg:erpapp:net> end

zonecfg:erpapp> info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1

zonecfg:erpapp> verify

zonecfg:erpapp> commit

zonecfg:erpapp> info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1

Add a memory cap:

zonecfg:erpapp> add capped-memory

zonecfg:erpapp:capped-memory> set physical=1.5G

zonecfg:erpapp:capped-memory> set swap=1.5G

zonecfg:erpapp:capped-memory> end

zonecfg:erpapp> info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1

capped-memory:

physical: 1.5G

[swap: 1.5G]

rctl:

name: zone.max-swap

value: (priv=privileged,limit=1610612736,action=deny)

zonecfg:erpapp> commit

zonecfg:erpapp> exit

bash-3.00# zonecfg -z erpapp info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1
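The interactive session above can also be replayed non-interactively: zonecfg accepts a command file via `zonecfg -z erpapp -f <file>`. A sketch of such a file, collecting the commands used above (this is a zonecfg configuration fragment, not a shell script):

```text
create
set zonepath=/zonedir/erpapp
set autoboot=false
remove inherit-pkg-dir dir=/lib
remove inherit-pkg-dir dir=/platform
remove inherit-pkg-dir dir=/sbin
remove inherit-pkg-dir dir=/usr
add net
set address=192.168.0.42
set physical=e1000g0
set defrouter=192.168.0.1
end
add capped-memory
set physical=1.5G
set swap=1.5G
end
verify
commit
```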

Zone erpapp has been created successfully, as shown below:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp configured /zonedir/erpapp native shared

Next, install the zone and go through its initial setup:

bash-3.00# zoneadm -z erpapp install

/zonedir/erpapp must not be group readable.

/zonedir/erpapp must not be group executable.

/zonedir/erpapp must not be world readable.

/zonedir/erpapp must not be world executable.

could not verify zonepath /zonedir/erpapp because of the above errors.

zoneadm: zone erpapp failed to verify

This error is a permissions problem; fix it as follows:

bash-3.00# chmod 700 erpapp

bash-3.00# ls -lrt

total 6

drwx------ 5 root root 5 Apr 12 10:16 erpdb

drwx------ 2 root root 2 Apr 12 17:25 erpapp
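zoneadm insists that the zonepath be accessible by root only, which is exactly what mode 700 (`rwx------`) gives. A small portable demonstration of the required permission bits, using a throwaway directory:

```shell
#!/bin/sh
# Demonstrate the permissions zoneadm requires on a zonepath:
# mode 700 = rwx for the owner only, nothing for group or world.
dir=$(mktemp -d)
chmod 700 "$dir"
perms=$(ls -ld "$dir" | cut -c1-10)
echo "$perms"    # prints drwx------
rmdir "$dir"
```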

bash-3.00# zoneadm -z erpapp install

Preparing to install zone <erpapp>.

Creating list of files to copy from the global zone.

Copying <169112> files to the zone.

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <1388> packages on the zone.

Initialized <1287> packages on zone.

Zone <erpapp> is initialized.

Installation of these packages generated errors: <SUNWpostgr-82-libs SUNWpostgr-83-server-data-root SUNWpostgr-82-server-data-root SUNWpostgr-82-client SUNWpostgr-82-server SUNWpostgr-82-contrib SUNWpostgr-82-devel>

Installation of <1> packages was skipped.

The file </zonedir/erpapp/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp installed /zonedir/erpapp native shared

bash-3.00# zoneadm -z erpapp boot

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

1 erpapp running /zonedir/erpapp native shared

bash-3.00# zlogin -C erpapp

[Connected to zone 'erpapp' console]

168/168

Reading ZFS config: done.

Select a Language

0. English

1. Simplified Chinese

Please make a choice (0 - 1), or press h or ? for help: 0

Select a Locale

0. English (C - 7-bit ASCII)

1. U.S.A. (UTF-8)

2. Go Back to Previous Screen

Please make a choice (0 - 2), or press h or ? for help: 0

What type of terminal are you using?

1) ANSI Standard CRT

2) DEC VT52

3) DEC VT100

4) Heathkit 19

5) Lear Siegler ADM31

6) PC Console

7) Sun Command Tool

8) Sun Workstation

9) Televideo 910

10) Televideo 925

11) Wyse Model 50

12) X Terminal Emulator (xterms)

13) CDE Terminal Emulator (dtterm)

14) Other

Type the number of your choice and press Return: 3

Creating new rsa public/private host key pair

Creating new dsa public/private host key pair

Configuring network interface addresses: e1000g0.

- Host Name for e1000g0:1 ------------------------------------------------------

Enter the host name which identifies this system on the network. The name

must be unique within your domain; creating a duplicate host name will cause

problems on the network after you install Solaris.

A host name must have at least one character; it can contain letters,

digits, and minus signs (-).

--------------------------------------------------------------------------------

- Confirm Information for e1000g0:1 --------------------------------------------

> Confirm the following information. If it is correct, press F2;

to change any information, press F4.

--------------------------------------------------------------------------------

Just a moment...

--------------------------------------------------------------------------------

- Configure Security Policy: ---------------------------------------------------

Specify Yes if the system will use the Kerberos security mechanism.

Specify No if this system will use standard UNIX security.

Configure Kerberos Security

---------------------------

[ ] Yes

--------------------------------------------------------------------------------

- Confirm Information ----------------------------------------------------------

> Confirm the following information. If it is correct, press F2;

to change any information, press F4.

--------------------------------------------------------------------------------

Please wait...

--------------------------------------------------------------------------------

- Name Service -----------------------------------------------------------------

On this screen you must provide name service information. Select the name

service that will be used by this system, or None if your system will either

not use a name service at all, or if it will use a name service not listed

here.

> To make a selection, use the arrow keys to highlight the option

and press Return to mark it [X].

Name service

------------

[X] NIS+

[ ] NIS

[ ] DNS

[ ] LDAP

[ ] None

--------------------------------------------------------------------------------


- Confirm Information ----------------------------------------------------------

> Confirm the following information. If it is correct, press F2;

to change any information, press F4.

--------------------------------------------------------------------------------

Just a moment...

--------------------------------------------------------------------------------

- NFSv4 Domain Name ------------------------------------------------------------

NFS version 4 uses a domain name that is automatically derived from the

system's naming services. The derived domain name is sufficient for most

configurations. In a few cases, mounts that cross domain boundaries might

cause files to appear to be owned by "nobody" due to the lack of a common

domain name.

The current NFSv4 default domain is: ""

NFSv4 Domain Configuration

----------------------------------------------

[X] Use the NFSv4 domain derived by the system

--------------------------------------------------------------------------------

- Confirm Information for NFSv4 Domain -----------------------------------------

> Confirm the following information. If it is correct, press F2;

to change any information, press F4.

--------------------------------------------------------------------------------

- Time Zone --------------------------------------------------------------------

On this screen you must specify your default time zone. You can specify a

time zone in three ways: select one of the continents or oceans from the

list, select other - offset from GMT, or other - specify time zone file.

> To make a selection, use the arrow keys to highlight the option and

press Return to mark it [X].

Continents and Oceans

----------------------------------

[ ] Africa

[ ] Americas

[ ] Antarctica

[ ] Arctic Ocean

[ ] Asia

[ ] Atlantic Ocean

[ ] Australia

[ ] Europe

[ ] Indian Ocean

--------------------------------------------------------------------------------


- Country or Region ------------------------------------------------------------

> To make a selection, use the arrow keys to highlight the option and

press Return to mark it [X].

Countries and Regions

------------------------

[ ] Afghanistan

[ ] Armenia

[ ] Azerbaijan

[ ] Bahrain

[ ] Bangladesh

[ ] Bhutan

[ ] Brunei

[ ] Cambodia

[ ] China

[ ] Cyprus

[ ] East Timor

[ ] Georgia

[ ] Hong Kong

--------------------------------------------------------------------------------


- Confirm Information ----------------------------------------------------------

> Confirm the following information. If it is correct, press F2;

to change any information, press F4.

--------------------------------------------------------------------------------

Please wait...

--------------------------------------------------------------------------------

- Root Password ----------------------------------------------------------------

Please enter the root password for this system.

The root password may contain alphanumeric and special characters. For

security, the password will not be displayed on the screen as you type it.

> If you do not want a root password, leave both entries blank.

Root password:

--------------------------------------------------------------------------------

rebooting system due to change(s) in /etc/default/init

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic_142910-17 64-bit

Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.

Hostname: erpapptest

Reading ZFS config: done.

erpapptest console login: root

Password: Apr 9 11:32:57 erpapptest sendmail[12381]: My unqualified host name (erpapptest) unknown; sleeping for retry

Apr 9 11:32:57 erpapptest sendmail[12390]: My unqualified host name (erpapptest) unknown; sleeping for retry

Apr 9 11:33:02 erpapptest login: ROOT LOGIN /dev/console

Oracle Corporation SunOS 5.10 Generic Patch January 2005

#

# bash

bash-3.00# export TERM=vt100

bash-3.00# vi /etc/default/login

#ident "@(#)login.dfl 1.14 04/06/25 SMI"

#

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.

# Use is subject to license terms.

# Set the TZ environment variable of the shell.

#

#TIMEZONE=EST5EDT

# ULIMIT sets the file size limit for the login. Units are disk blocks.

# The default of zero means no limit.

#

#ULIMIT=0

# If CONSOLE is set, root can only login on that device.

# Comment this line out to allow remote login by root.

#

# CONSOLE=/dev/console ---- comment out this line

# PASSREQ determines if login requires a password.

#


"/etc/default/login" 77 lines, 2260 characters

Copying the zone configuration to the other node

Shut down zone erpapp on aptest.

The configuration files for zone erpapp on aptest live in /etc/zones; two files are relevant:

index and erpapp.xml

Copy the erpapp line from the index file into the same file on dbtest.

Copy erpapp.xml to the same directory on dbtest.

bash-3.00# cd /etc/zones

bash-3.00# ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

3 erpapp running /zonedir/erpapp native shared

bash-3.00# more index

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.

# Use is subject to license terms.

#

# ident "@(#)zones-index 1.2 04/04/01 SMI"

#

# DO NOT EDIT: this file is automatically generated by zoneadm(1M)

# and zonecfg(1M). Any manual changes will be lost.

#

global:installed:/

erpapp:installed:/zonedir/erpapp:49cdd4a7-2f69-4838-c089-d25a168e1835

bash-3.00# ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

Transfer the file over with ftp:

bash-3.00# ftp 192.168.0.20

Connected to 192.168.0.20.

220 dbtest FTP server ready.

Name (192.168.0.20:root): root

331 Password required for root.

Password:

230 User root logged in.

Remote system type is UNIX.

Using binary mode to transfer files.

ftp> ls -lrt

200 PORT command successful.

150 Opening ASCII mode data connection for /bin/ls.

total 621

drwxr-xr-x 3 root sys 3 Apr 6 13:36 export

lrwxrwxrwx 1 root root 9 Apr 6 13:36 bin -> ./usr/bin

drwxr-xr-x 2 root sys 2 Apr 6 13:36 mnt

drwxr-xr-x 4 root root 4 Apr 6 13:36 system

drwxr-xr-x 2 root sys 54 Apr 6 13:45 sbin

drwxr-xr-x 18 root sys 19 Apr 6 13:48 kernel

drwxr-xr-x 5 root sys 5 Apr 6 13:48 platform

drwxr-xr-x 8 root bin 243 Apr 6 13:49 lib

drwxr-xr-x 4 root root 4 Apr 6 13:51 rpool

drwxr-xr-x 8 root sys 11 Apr 6 13:52 boot

drwxr-xr-x 6 root root 11 Apr 6 14:06 install

drwxr-xr-x 42 root sys 56 Apr 6 14:22 usr

drwxr-xr-x 3 root sys 3 Apr 6 14:23 global

drwxr-xr-x 45 root sys 45 Apr 6 14:24 var

drwxr-xr-x 42 root sys 42 Apr 6 14:26 opt

drwxr-xr-x 5 root sys 12 Apr 9 09:19 devices

dr-xr-xr-x 1 root root 1 Apr 9 09:19 net

dr-xr-xr-x 1 root root 1 Apr 9 09:19 home

dr-xr-xr-x 6 root root 512 Apr 9 09:20 vol

drwxr-xr-x 3 root nobody 4 Apr 9 09:20 cdrom

drwxr-xr-x 89 root sys 247 Apr 9 09:20 etc

drwxr-xr-x 23 root sys 447 Apr 9 09:20 dev

drwxrwxrwt 7 root sys 666 Apr 9 09:20 tmp

drwxr-xr-x 2 root root 2 Apr 9 09:38 erpdbpool

drwx------ 2 root root 2 Apr 9 09:52 zonedir

dr-xr-xr-x 77 root root 260032 Apr 9 11:59 proc

226 Transfer complete.

remote: -lrt

1601 bytes received in 0.096 seconds (16.35 Kbytes/s)

ftp> cd /etc/zones

250 CWD command successful.

ftp> ls -lrt

200 PORT command successful.

150 Opening ASCII mode data connection for /bin/ls.

total 14

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root sys 356 Apr 9 11:59 index

-rw-r--r-- 1 root root 356 Apr 9 11:59 index-0409

226 Transfer complete.

remote: -lrt

414 bytes received in 0.00028 seconds (1456.40 Kbytes/s)

ftp> !ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

ftp> put erpapp.xml

200 PORT command successful.

150 Opening BINARY mode data connection for erpapp.xml.

226 Transfer complete.

local: erpapp.xml remote: erpapp.xml

363 bytes sent in 0.02 seconds (17.93 Kbytes/s)

ftp> ls -lrt

200 PORT command successful.

150 Opening ASCII mode data connection for /bin/ls.

total 15

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root sys 356 Apr 9 11:59 index

-rw-r--r-- 1 root root 356 Apr 9 11:59 index-0409

-rw-r--r-- 1 root root 363 Apr 9 12:07 erpapp.xml

226 Transfer complete.

remote: -lrt

480 bytes received in 0.00063 seconds (745.40 Kbytes/s)

ftp> bye

221-You have transferred 363 bytes in 1 files.

221-Total traffic for this session was 3845 bytes in 4 transfers.

221-Thank you for using the FTP service on dbtest.

221 Goodbye.

bash-3.00# ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

This step is performed on dbtest: insert the erpapp line (the last line shown below).

bash-3.00# vi index

"index" 9 lines, 285 characters

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.

# Use is subject to license terms.

#

# ident "@(#)zones-index 1.2 04/04/01 SMI"

#

# DO NOT EDIT: this file is automatically generated by zoneadm(1M)

# and zonecfg(1M). Any manual changes will be lost.

#

global:installed:/

erpapp:installed:/zonedir/erpapp:49cdd4a7-2f69-4838-c089-d25a168e1835
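Appending the entry can also be scripted. A sketch that adds the line only if it is not already present, working on a stand-in copy of /etc/zones/index (the UUID must be the exact one copied from aptest):

```shell
#!/bin/sh
# Sketch: idempotently add the erpapp entry to a copy of /etc/zones/index.
idx=$(mktemp)
echo 'global:installed:/' > "$idx"
line='erpapp:installed:/zonedir/erpapp:49cdd4a7-2f69-4838-c089-d25a168e1835'
# Only append if no erpapp entry exists yet.
grep -q '^erpapp:' "$idx" || echo "$line" >> "$idx"
result=$(cat "$idx")
echo "$result"
rm -f "$idx"
```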

On aptest:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

3 erpapp running /zonedir/erpapp native shared

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

erpapppool 3.66G 74.6G 21K /erpapppool

erpapppool/erpapp_fs 3.66G 74.6G 3.66G /zonedir/erpapp

rpool 7.04G 72.2G 33.5K /rpool

rpool@20120402 20K - 33.5K -

rpool@20120406 20K - 33.5K -

rpool/ROOT 5.50G 72.2G 21K legacy

rpool/ROOT@20120402 0 - 21K -

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.50G 72.2G 5.31G /

rpool/ROOT/s10x_u9wos_14a@20120402 77.2M - 4.63G -

rpool/ROOT/s10x_u9wos_14a@20120406 76.3M - 5.18G -

rpool/dump 1.00G 72.2G 1.00G -

rpool/dump@20120402 16K - 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/export 80K 72.2G 23K /export

rpool/export@20120402 18K - 23K -

rpool/export@20120406 18K - 23K -

rpool/export/home 21K 72.2G 21K /export/home

rpool/export/home@20120402 0 - 21K -

rpool/export/home@20120406 0 - 21K -

rpool/swap 553M 72.5G 210M -

rpool/swap@20120402 2.18M - 6.38M -

rpool/swap@20120406 2.18M - 6.38M -

Exporting erpapppool

First, inside the erpapp zone (erpapptest), shut down the OS by running init 5.

Then, on aptest, run:

bash-3.00# zoneadm -z erpapp halt

bash-3.00# zpool export erpapppool

On dbtest, run:

bash-3.00# zpool import erpapppool

bash-3.00# zoneadm -z erpapp boot
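The manual failover sequence above can be summarized in order. A hedged sketch that only prints the commands and the node each must run on (zone and pool names are the ones from this guide; `zoneadm` and `zpool` exist only on the cluster nodes):

```shell
#!/bin/sh
# Sketch: the manual (non-cluster) failover sequence, in order.
manual_failover_cmds() {
    zone=$1
    pool=$2
    echo "zoneadm -z $zone halt   # on the node currently running the zone"
    echo "zpool export $pool      # same node"
    echo "zpool import $pool      # on the target node"
    echo "zoneadm -z $zone boot   # on the target node"
}

manual_failover_cmds erpapp erpapppool
```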

Operations on aptest:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

3 erpapp running /zonedir/erpapp native shared

bash-3.00# zpool export erpapppool

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp installed /zonedir/erpapp native shared

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

rpool 7.04G 72.2G 33.5K /rpool

rpool@20120402 20K - 33.5K -

rpool@20120406 20K - 33.5K -

rpool/ROOT 5.50G 72.2G 21K legacy

rpool/ROOT@20120402 0 - 21K -

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.50G 72.2G 5.31G /

rpool/ROOT/s10x_u9wos_14a@20120402 77.2M - 4.63G -

rpool/ROOT/s10x_u9wos_14a@20120406 76.3M - 5.18G -

rpool/dump 1.00G 72.2G 1.00G -

rpool/dump@20120402 16K - 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/export 80K 72.2G 23K /export

rpool/export@20120402 18K - 23K -

rpool/export@20120406 18K - 23K -

rpool/export/home 21K 72.2G 21K /export/home

rpool/export/home@20120402 0 - 21K -

rpool/export/home@20120406 0 - 21K -

rpool/swap 553M 72.5G 210M -

rpool/swap@20120402 2.18M - 6.38M -

rpool/swap@20120406 2.18M - 6.38M -

Operations on dbtest:

bash-3.00# zpool import -f erpapppool

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

erpapppool 3.66G 74.6G 21K /erpapppool

erpapppool/erpapp_fs 3.66G 74.6G 3.66G /zonedir/erpapp

erpdbpool 73.5K 78.3G 21K /erpdbpool

rpool 6.70G 71.6G 32.5K /rpool

rpool@20120406 19K - 32.5K -

rpool@201204061730 0 - 32.5K -

rpool/ROOT 5.14G 71.6G 21K legacy

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT@201204061730 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.14G 71.6G 5.05G /

rpool/ROOT/s10x_u9wos_14a@20120406 88.0M - 4.54G -

rpool/ROOT/s10x_u9wos_14a@201204061730 11.5M - 4.92G -

rpool/dump 1.00G 71.6G 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/dump@201204061730 16K - 1.00G -

rpool/export 62K 71.6G 23K /export

rpool/export@20120406 18K - 23K -

rpool/export@201204061730 0 - 23K -

rpool/export/home 21K 71.6G 21K /export/home

rpool/export/home@20120406 0 - 21K -

rpool/export/home@201204061730 0 - 21K -

rpool/swap 569M 72.1G 14.1M -

rpool/swap@20120406 10.2M - 14.1M -

rpool/swap@201204061730 0 - 14.1M -

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp installed /zonedir/erpapp native shared

bash-3.00# zoneadm -z erpapp boot

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

1 erpapp running /zonedir/erpapp native shared

At this point the zone is fully built and can be moved between aptest and dbtest via zpool export and import.

Next, configure the zone cluster.

Create the erpapprg resource group:

bash-3.00# clrg create -n aptest,dbtest erpapprg

Register the storage resource type and create the storage resource:

bash-3.00# clrt register SUNW.HAStoragePlus

bash-3.00# clrs create -g erpapprg -t SUNW.HAStoragePlus -x zpools=erpapppool erpappstg

Bring the resource group online:

bash-3.00# clrg online -emM erpapprg

The following warning indicates it is already online:

(C348385) WARNING: Cannot enable monitoring on resource erpappstg because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor erpappstg' and re-enable monitoring using 'clresource monitor erpappstg'.
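The resource-group setup above can be collected into one ordered list. A sketch that only prints the cluster commands (resource, group, and pool names are the ones used in this guide; the `clrg`/`clrt`/`clrs` utilities exist only on the cluster nodes):

```shell
#!/bin/sh
# Sketch: the HAStoragePlus resource-group setup, in order.
setup_rg_cmds() {
    rg=$1
    rs=$2
    pool=$3
    nodes=$4
    echo "clrg create -n $nodes $rg"
    echo "clrt register SUNW.HAStoragePlus"
    echo "clrs create -g $rg -t SUNW.HAStoragePlus -x zpools=$pool $rs"
    echo "clrg online -emM $rg"
}

setup_rg_cmds erpapprg erpappstg erpapppool aptest,dbtest
```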

Check the status:

bash-3.00# scstat

------------------------------------------------------------------

-- Cluster Nodes --

Node name Status

--------- ------

Cluster node: aptest Online

Cluster node: dbtest Online

------------------------------------------------------------------

-- Cluster Transport Paths --

Endpoint Endpoint Status

-------- -------- ------

Transport path: aptest:e1000g3 dbtest:e1000g3 Path online

Transport path: aptest:e1000g2 dbtest:e1000g2 Path online

------------------------------------------------------------------

-- Quorum Summary from latest node reconfiguration --

Quorum votes possible: 3

Quorum votes needed: 2

Quorum votes present: 3

-- Quorum Votes by Node (current status) --

Node Name Present Possible Status

--------- ------- -------- ------

Node votes: aptest 1 1 Online

Node votes: dbtest 1 1 Online

-- Quorum Votes by Device (current status) --

Device Name Present Possible Status

----------- ------- -------- ------

Device votes: /dev/did/rdsk/d2s2 1 1 Online

------------------------------------------------------------------

-- Device Group Servers --

Device Group Primary Secondary

------------ ------- ---------

-- Device Group Status --

Device Group Status

------------ ------

-- Multi-owner Device Groups --

Device Group Online Status

------------ -------------

------------------------------------------------------------------

-- Resource Groups and Resources --

Group Name Resources

---------- ---------

Resources: erpapprg erpappstg

-- Resource Groups --

Group Name Node Name State Suspended

---------- --------- ----- ---------

Group: erpapprg aptest Online No

Group: erpapprg dbtest Offline No

-- Resources --

Resource Name Node Name State Status Message

------------- --------- ----- --------------

Resource: erpappstg aptest Online Online

Resource: erpappstg dbtest Offline Offline

------------------------------------------------------------------

-- IPMP Groups --

Node Name Group Status Adapter Status

--------- ----- ------ ------- ------

IPMP Group: aptest ipmp1 Online e1000g1 Online

IPMP Group: aptest ipmp1 Online e1000g0 Online

IPMP Group: dbtest ipmp1 Online e1000g1 Online

IPMP Group: dbtest ipmp1 Online e1000g0 Online

-- IPMP Groups in Zones --

Zone Name Group Status Adapter Status

--------- ----- ------ ------- ------

Create the cluster parameter-file directory:

bash-3.00# mkdir /zonedir/erpapp/cluster-pfiles

bash-3.00# cd /zonedir/erpapp

bash-3.00# ls -lrt

total 11

drwxr-xr-x 19 root root 23 Apr 11 15:39 root

drwxr-xr-x 12 root sys 51 Apr 11 15:39 dev

drwxr-xr-x 2 root root 2 Apr 11 16:00 cluster-pfiles

Go to the directory below and edit the zone boot resource configuration file, sczbt_config:

bash-3.00# cd /opt/SUNWsczone/sczbt/util

bash-3.00# ls -lrt

total 28

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root bin 5520 Jul 27 2010 sczbt_config

Back up the file first:

bash-3.00# cp sczbt_config sczbt-config-0411

bash-3.00# ls -lrt

total 29

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root bin 5520 Jul 27 2010 sczbt_config

-rw-r--r-- 1 root root 5520 Apr 11 16:02 sczbt-config-0411

An example of the edited file follows; the assignments shown below are the values that were changed:

bash-3.00# vi sczbt_config

RS=erpappzone

RG=erpapprg

PARAMETERDIR=/zonedir/erpapp/cluster-pfiles

SC_NETWORK=false

SC_LH=

FAILOVER=true

HAS_RS=erpappstg

Zonename="erpapp"

Zonebrand="native"

Zonebootopt=""

Milestone="multi-user-server"

LXrunlevel="3"

SLrunlevel="3"

Mounts=""

"sczbt_config" 159 lines, 5593 characters

bash-3.00# ls -lrt

total 40

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root root 5520 Apr 11 16:02 sczbt-config-0411

-rw-r--r-- 1 root bin 5593 Apr 11 16:14 sczbt_config

bash-3.00# ./sczbt_register

sourcing ./sczbt_config

(C779822) Resource type SUNW.gds is not registered

Registration of resource erpappzone failed, please correct the wrong parameters.

Removing parameterfile /zonedir/erpapp/cluster-pfiles/sczbt_erpappzone for resource erpappzone.

If the error above appears, register SUNW.gds and then rerun the registration:

bash-3.00# clrt register SUNW.gds

bash-3.00# pwd

/opt

bash-3.00# cd /opt/SUNWsczone/sczbt/util

bash-3.00# ls -lrt

total 40

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root root 5520 Apr 11 16:02 sczbt-config-0411

-rw-r--r-- 1 root bin 5593 Apr 11 16:14 sczbt_config

bash-3.00# ./sczbt_register

sourcing ./sczbt_config

Registration of resource erpappzone succeeded.

Validation of resource erpappzone succeeded.

The zone cluster configuration is now complete.

Check the status:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

4 erpdb running /zonedir/erpdb native shared

5 erpapp running /zonedir/erpapp native shared

bash-3.00# scstat

------------------------------------------------------------------

-- Cluster Nodes --

Node name Status

--------- ------

Cluster node: aptest Online

Cluster node: dbtest Online

------------------------------------------------------------------

-- Cluster Transport Paths --

Endpoint Endpoint Status

-------- -------- ------

Transport path: aptest:e1000g3 dbtest:e1000g3 Path online

Transport path: aptest:e1000g2 dbtest:e1000g2 Path online

------------------------------------------------------------------

-- Quorum Summary from latest node reconfiguration --

Quorum votes possible: 3

Quorum votes needed: 2

Quorum votes present: 3

-- Quorum Votes by Node (current status) --

Node Name Present Possible Status

--------- ------- -------- ------

Node votes: aptest 1 1 Online

Node votes: dbtest 1 1 Online

-- Quorum Votes by Device (current status) --

Device Name Present Possible Status

----------- ------- -------- ------

Device votes: /dev/did/rdsk/d2s2 1 1 Online

------------------------------------------------------------------

-- Device Group Servers --

Device Group Primary Secondary

------------ ------- ---------

-- Device Group Status --

Device Group Status

------------ ------

-- Multi-owner Device Groups --

Device Group Online Status

------------ -------------

------------------------------------------------------------------

-- Resource Groups and Resources --

Group Name Resources

---------- ---------

Resources: erpdbrg erpdbstg erpdbzone

Resources: erpapprg erpappstg erpappzone

-- Resource Groups --

Group Name Node Name State Suspended

---------- --------- ----- ---------

Group: erpdbrg aptest Online No

Group: erpdbrg dbtest Offline No

Group: erpapprg aptest Online No

Group: erpapprg dbtest Offline No

-- Resources --

Resource Name Node Name State Status Message

------------- --------- ----- --------------

Resource: erpdbstg aptest Online Online

Resource: erpdbstg dbtest Offline Offline

Resource: erpdbzone aptest Online Online - Service is online.

Resource: erpdbzone dbtest Offline Offline

Resource: erpappzone aptest Online Online - Service is online.

Resource: erpappzone dbtest Offline Offline

Resource: erpappstg aptest Online Online

Resource: erpappstg dbtest Offline Offline

------------------------------------------------------------------

-- IPMP Groups --

Node Name Group Status Adapter Status

--------- ----- ------ ------- ------

IPMP Group: aptest ipmp1 Online e1000g1 Online

IPMP Group: aptest ipmp1 Online e1000g0 Online

IPMP Group: dbtest ipmp1 Online e1000g1 Online

IPMP Group: dbtest ipmp1 Online e1000g0 Online

-- IPMP Groups in Zones --

Zone Name Group Status Adapter Status

--------- ----- ------ ------- ------

The following section focuses on the steps and methods for switching resources between nodes.

A. There are two approaches: automatic switching via the cluster framework, and manual switching via zpool export/import.

B. Automatic switching with the cluster

Note: whichever approach you use, follow these steps.

Inside the zone: stop the application, then the database, then shut down the OS.

In the global zone:

Check which host currently owns the zpool:

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpapppool 81.5G 3.66G 77.8G 4% ONLINE /

erpdbpool 81.5G 3.67G 77.8G 4% ONLINE /

rpool 31.8G 6.32G 25.4G 19% ONLINE -
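When scripting checks like this, the `zpool list` output is easy to parse. Below is a minimal sketch (the `imported_pools` helper is my own illustration, not part of the cluster tooling) that extracts the pool names a host currently has imported:

```python
def imported_pools(zpool_list_output):
    """Return the set of pool names from `zpool list` output.

    The first whitespace-separated field of each data row is the
    pool name; the header row (starting with NAME) is skipped.
    """
    pools = set()
    for line in zpool_list_output.strip().splitlines():
        fields = line.split()
        if fields and fields[0] != "NAME":
            pools.add(fields[0])
    return pools

# Example with the output shown above:
sample = """\
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
erpapppool 81.5G 3.66G 77.8G 4% ONLINE /
erpdbpool 81.5G 3.67G 77.8G 4% ONLINE /
rpool 31.8G 6.32G 25.4G 19% ONLINE -
"""
print(imported_pools(sample))
```

If `erpdbpool` shows up in the result, this node currently owns the database pool.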

bash-3.00# clrg offline erpapprg --- take the resource group offline first; this effectively halts the erpapp zone

bash-3.00# clrg online erpapprg --- then bring it online again; this effectively boots the erpapp zone

Note, however, that because aptest is the cluster's first node, erpapprg comes back online on aptest. Even if you run these commands on the dbtest server, the resources will still attach to the aptest server when brought online, because dbtest is not the first node. Keep this in mind!

Performing a switch operation:

On the dbtest server:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

1 erpdb running /zonedir/erpdb native shared

- erpapp installed /zonedir/erpapp native shared

bash-3.00# clrg switch -n aptest erpdbrg --- switch resource group erpdbrg to the aptest server

After the switch, check on the aptest server:

The ZFS file systems have moved over:

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

erpapppool 3.66G 76.6G 21K /erpapppool

erpapppool/erpapp_fs 3.66G 76.6G 3.66G /zonedir/erpapp

erpdbpool 3.67G 76.6G 21K /erpdbpool

erpdbpool/erpdb_fs 3.67G 76.6G 3.67G /zonedir/erpdb

rpool 9.08G 24.1G 32.5K /rpool

rpool/ROOT 6.23G 24.1G 21K legacy

rpool/ROOT/s10x_u9wos_14a 6.23G 24.1G 6.23G /

rpool/dump 1.00G 24.1G 1.00G -

rpool/export 44K 24.1G 23K /export

rpool/export/home 21K 24.1G 21K /export/home

rpool/swap 1.85G 26.0G 16K -

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpapppool 81.5G 3.66G 77.8G 4% ONLINE /

erpdbpool 81.5G 3.67G 77.8G 4% ONLINE /

rpool 33.8G 7.24G 26.5G 21% ONLINE -

The zone has moved over as well, and it is running:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

3 erpapp running /zonedir/erpapp native shared

4 erpdb running /zonedir/erpdb native shared

In cluster mode, use the cluster commands whenever possible. Avoid moving pools manually with zpool import and zpool export, which can race with the cluster's own resource management and corrupt files.

If you have no choice and must use export/import, remember to take the nodes out of cluster mode first.
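To make the ordering explicit: a manual move is always an export on the current owner first, then an import on the target (the sequence shown earlier with erpdbpool). A small illustrative sketch that only builds the command list, without running anything (the helper name is my own):

```python
def manual_move_commands(pool, from_host, to_host):
    """Ordered (host, command) pairs for manually moving a zpool.

    The export on the current owner MUST complete before the import
    on the target, and per the warning above this is only safe with
    the nodes booted out of cluster mode.
    """
    return [
        (from_host, "zpool export %s" % pool),
        (to_host, "zpool import %s" % pool),
    ]

print(manual_move_commands("erpdbpool", "aptest", "dbtest"))
```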

Next, we move on to installing the Oracle database.

9. Installing the Oracle database

II) Create the user group and user
1) Add the groups:
groupadd oinstall
groupadd dba
2) Add the user:
useradd -g oinstall -G dba -d /export/home/oracle -s /bin/bash -m oracle
{-g sets the primary group, -G the supplementary group, -d the home directory, -s the login shell; -m creates the home directory automatically, and oracle is the username. To avoid trouble, do not create the home directory by hand.}
passwd oracle
{Sets the oracle user's password; after you enter the command, the system prompts for the password and its confirmation.}

III) Create the Oracle installation directories
Create the directories Oracle will be installed under:
mkdir /oradata/oracle
mkdir -p /oradata/oracle/product/11.2.0
Then change the owner of /oradata/oracle to oracle and its group to oinstall:
chown -R oracle:oinstall /oradata/oracle
{A quick guide to Solaris system directories:
/: the root file system
/bin: executables and basic commands
/usr: UNIX system files
/dev: device files (logical devices)
/devices: device files (physical devices)
/etc: system configuration and administration data files
/export: directories and files shared with other systems
/home: user home directories
/kernel: kernel modules
/lib: system libraries
/opt: add-on application software
/tmp: temporary files (tmpfs, backed by swap)
/var: system administration and log files}

IV) Modify the oracle user's environment variables
Log in as root, find the oracle user's environment file .bash_profile in its home directory, and edit it.

Add the following to .bash_profile:
-bash-3.00$ cat .bash_profile

export ORACLE_HOME=/oradata/oracle/product/11.2.0/dbhome_1

export ORACLE_BASE=/oradata/oracle

export ORACLE_TERM=vt100

export ORACLE_SID=TEST

LD_LIBRARY_PATH=/oradata/oracle/product/11.2.0/dbhome_1/lib:/usr/lib

PATH=/oradata/oracle/product/11.2.0/dbhome_1/bin:/usr/sbin:/usr/bin:/usr/cluster/bin

DISPLAY=192.168.15.157:0.0;export DISPLAY
{ORACLE_BASE is the Oracle base directory and ORACLE_HOME is the product directory: if a machine hosts two Oracle versions, they can share one ORACLE_BASE but need two separate ORACLE_HOMEs.}
Then add $ORACLE_HOME/bin at the front of the PATH.
For example: set path=($ORACLE_HOME/bin /usr/ccs/bin /bin /usr/bin ). Enter it exactly as shown; do not substitute an absolute path.
V) Modify Solaris kernel parameters
1) As root, make a backup copy of /etc/system, e.g.:
cp /etc/system /etc/system.orig
2) Edit /etc/system and append the following:
set noexec_user_stack=1
set semsys:seminfo_semmni=300
set semsys:seminfo_semmns=1050
set semsys:seminfo_semmsl=400
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=6400000000 (for a server with 8 GB of RAM; scale up or down proportionally for other sizes)
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=300
set shmsys:shminfo_shmseg=30
3) Reboot for the parameters to take effect:
/usr/sbin/reboot
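The note above says to scale shminfo_shmmax with installed RAM (6400000000 for 8 GB). A quick sketch of that proportional rule of thumb (the function name is my own, and "scale proportionally" is this article's guidance, not an Oracle requirement):

```python
def shmmax_for(ram_gb, baseline_gb=8, baseline_shmmax=6_400_000_000):
    """Scale shminfo_shmmax linearly with installed RAM, using the
    article's 8 GB -> 6400000000 baseline."""
    return int(baseline_shmmax * ram_gb / baseline_gb)

print(shmmax_for(8))   # 6400000000
print(shmmax_for(16))  # 12800000000
```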

The screenshots below are for reference only, to illustrate the order and method of installation; use your own judgment.
VI) Install the Oracle software
1) Log in to FTP as the oracle user and upload the installer 10gr2_db_sol[1].cpio to the oracle user's home directory.
2) Unpack it: cpio -idmv < 10gr2_db_sol[1].cpio. If unpacking reports errors, switch to the root account and retry.
3) Log in as oracle and run ./runInstaller

Step 1: be sure to leave "Create Starter Database" unselected.

image

Step 2: operating system prerequisite checks

Step 3: select the configuration options

image

Step 4: installation summary

Step 5: installation progress

Step 6: partway through, the installer prompts you to run scripts; execute them as the root user.

image

Step 7: the Oracle software installation complete screen appears.

Step 8: in the oracle/product/10gr2/bin directory run ./dbca to open the database creation wizard.

image

Step 9: choose the database template (default).

Step 10: configure the database service name.

Step 11: begin the database configuration.

image

Step 12: set passwords for the system accounts (for simplicity, you can use the same password for all of them).

Step 13: choose the storage mechanism. File system is used here because it is simple to configure (ASM could not be made to work despite much effort).

image

Steps 14, 15, and 16: accept the defaults.

Step 17: memory and related parameters.
*Memory: default; processes: adjust as needed; character set: ZHS16GBK; connection mode: Dedicated

image

Step 18: accept the default; the installation then proceeds.

Step 19: to configure the listener and service names, run netmgr to start the Net Manager program.
To stop or start the listener, run:
lsnrctl stop
lsnrctl start
To start the database instance, log in with sqlplus / as sysdba and run startup.

VII) Verify the installation
1) Verify the install succeeded:
sqlplus system/yourpassword@yoursid
SQL> select * from tab;
2) Shutdown and startup work normally:
sqlplus /nolog
SQL> connect / as sysdba
SQL> shutdown immediate
SQL> conn / as sysdba
SQL> startup
3) Check the listener status:
lsnrctl status

Vistor virtual tape library software

About Vistor:
Vistor is a virtual tape library solution from cofio for high-performance disk-based backup; it manages tapes with the same mechanisms as a physical library, which improves manageability. Vistor supports iSCSI and FC, can emulate several tape library models, allows multiple libraries to be created, and supports backup software such as NBU, Legato Networker, and BakBone.

image

The Vistor virtual tape library system architecture

There are two ways to set up a Vistor system: build your own Linux system and install Vistor from its tgz package, or download the ViStor VMware Image and run it under VMware for a quick setup. Aladuo uses the second approach here.

Preparing the Vistor installation environment:

VMware Workstation 6.5
VM 1: the VMware image downloaded from the Vistor website; it is actually a CentOS 5.2 Linux environment with Vistor 2.1.1 already integrated.
VM 2: Windows Server 2003, for the backup software and the Windows iSCSI initiator.

Vistor installation and configuration steps
1. First register an account on the Vistor site at http://www.cofio.com/Register/. After activation, go to the user area and choose ViStor Downloads in the upper left. (Note: AIMstor is another cofio backup product and is not the same thing as Vistor!) Download the ViStor VMware Image; this is the Vistor image file we need, 239 MB.
2. Unpack the downloaded ViStor VMware Image archive and you will see the familiar VMware files. Open them with VMware Workstation 6.5; by default the VM is allocated 1024 MB of memory and a maximum of 500 GB of disk. There is no need to change this even if you lack that much space: it is only a maximum, and you can later limit actual disk usage by setting the size and number of tapes in Vistor's library.
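Since the 500 GB figure is only a ceiling, the space the library can actually consume is bounded by tape size times tape count. A trivial sketch of that bound (the function name and the tape count of 20 are my own examples):

```python
def library_footprint_gb(tape_size_gb, num_tapes):
    """Worst-case disk space a virtual library can use: every tape
    fully written at its configured size."""
    return tape_size_gb * num_tapes

# e.g. 20 tapes resized to 1 GB each need at most:
print(library_footprint_gb(1, 20))  # 20
```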
3. A Linux virtual machine (our VM 1) boots; the default login is root/password. Log in and take a look at the system and the Vistor installation; as shown, Vistor is installed under /usr/cofio.

image

The Vistor installation directory

4. Change the IP address, gateway, and related settings

Since this is a Red Hat-style Linux system, use the following method, which differs slightly from Solaris:

ifconfig eth0 <new IP>

Then edit /etc/sysconfig/network-scripts/ifcfg-eth0 to make the IP change permanent:

a. Change the IP address

[aeolus@db network-scripts]$ vi ifcfg-eth0

DEVICE=eth0

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.0.50

NETMASK=255.255.255.0

GATEWAY=192.168.0.1

b. Change the gateway

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=vistor

GATEWAY=192.168.0.1

c. Change the DNS servers

[aeolus@db etc]$ vi resolv.conf

nameserver (omitted)

nameserver (omitted)

d. Restart the network service

#/etc/init.d/network restart

From another machine on the same network, point a browser at http://192.168.0.50:5050 and you will see the Vistor login screen below. The default password is empty.
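Before opening the browser, you can confirm from any machine that the management port answers. A small sketch using a plain TCP probe (the `port_open` helper is my own; 5050 is the port used above):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.0.50", 5050) should be True once Vistor is up
```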

image

The Vistor login screen

5. Enter the Vistor management interface.

image

The Vistor management interface

6. Go to Manage Library to configure the tape size. The default is 100 GB; choose Resize to change a tape's capacity. Here it has been changed to 1 GB.

image

image

Setting the tape capacity

7. Choose Configure Library from the top menu to set the virtual library's properties, such as its name, the emulated robot and drive models, the number of drives, and the number of slots, as shown below:

image

Setting the library properties in Vistor

8. The robot and tape drive types supported by Vistor are shown below:

image

Robots supported by Vistor

image

Tape drives supported by Vistor

9. At this point the Vistor virtual tape library is configured. Note that you must select Run manually, because the library is offline by default.

image

Installing the OSB (Oracle Secure Backup) server and client

bash-3.00# mkdir -p /usr/local/oracle/backup

bash-3.00# cd /usr

bash-3.00# ls

5bin appserver cluster games java lib net openwin postgres sadm snadm SUNWale ucblib X11R6

adm aset demo gnome jdk local news perl5 preserve sbin spool tmp vmsys xpg4

apache bin dict include kernel mail oasys pgadmin3 proc sfw src ucb X xpg6

apache2 ccs dt j2se kvm man old platform pub share sunvts ucbinclude X11

bash-3.00# cd local

bash-3.00# ls

oracle

bash-3.00# cd oracle

bash-3.00# ls -lrt

total 1

drwxr-xr-x 2 root root 2 May 4 11:17 backup

bash-3.00# cd backup

bash-3.00# ls

bash-3.00# /install/osb-10.4.0.1.0_solaris.x64_cdrom110923/setup

Welcome to Oracle's setup program for Oracle Secure Backup. This

program loads Oracle Secure Backup software from the CD-ROM to a

filesystem directory of your choosing.

This CD-ROM contains Oracle Secure Backup version 10.4.0.1.0_SOLARIS.X64.

Please wait a moment while I learn about this host... done.

- - - - - - - - - - - - - - - - - - - - - - - - - - -

1. solarisx86_64 administrative server, media server, client

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Loading Oracle Secure Backup installation tools... done.

Loading solarisx86_64 administrative server, media server, client... done.

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Oracle Secure Backup has installed a new obparameters file.

Your previous version has been saved as install/obparameters.savedbysetup.

Any changes you have made to the previous version must be

made to the new obparameters file.

Would you like the opportunity to edit the obparameters file

Please answer 'yes' or 'no' [no]:

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Loading of Oracle Secure Backup software from CD-ROM is complete.

You may unmount and remove the CD-ROM.

Would you like to continue Oracle Secure Backup installation with

'installob' now? (The Oracle Secure Backup Installation Guide

contains complete information about installob.)

Please answer 'yes' or 'no' [yes]: no

When you are ready to continue:

1. log in as (or 'su' to) root

2. cd to /usr/local/oracle/backup

3. run install/installob

bash-3.00# pwd

/usr/local/oracle/backup

bash-3.00# ls -lrt

total 25

drwxrwxrwx 7 root root 8 Sep 24 2011 apache

drwxrwxrwx 2 root root 4 Sep 24 2011 device

drwxrwxrwx 2 root root 4 Sep 24 2011 help

drwxrwxrwx 2 root root 8 Sep 24 2011 tools.solarisx86_64

drwxrwxrwx 2 root root 21 Sep 24 2011 samples

drwxrwxrwx 2 root root 93 Sep 24 2011 install

drwxr-xr-x 4 root root 4 May 4 11:18 man

bash-3.00# cd install

bash-3.00# ls -lrt

total 1404

-rwxrwxrwx 1 root root 3379 Sep 24 2011 S92OB

-rwxrwxrwx 1 root root 15840 Sep 24 2011 installhere

-rwxrwxrwx 1 root root 7259 Sep 24 2011 installdriver

-rwxrwxrwx 1 root root 8268 Sep 24 2011 initinstall.sh

-rwxrwxrwx 1 root root 2780 Sep 24 2011 hotdate.sh

-rwxrwxrwx 1 root root 1609 Sep 24 2011 dupcheck.sh

-rwxrwxrwx 1 root root 16280 Sep 24 2011 doinstall.sh

-rwxrwxrwx 1 root root 2238 Sep 24 2011 daemons.sh

-rwxrwxrwx 1 root root 5601 Sep 24 2011 checkspace.sh

-rwxrwxrwx 1 root root 2789 Sep 24 2011 checkdirs.sh

-rwxrwxrwx 1 root root 9286 Sep 24 2011 canbe

-rwxrwxrwx 1 root root 2061 Sep 24 2011 ayenay.sh

-rwxrwxrwx 1 root root 1572 Sep 24 2011 linkexists.sh

-rwxrwxrwx 1 root root 84 Sep 24 2011 killblanks.sed

-rwxrwxrwx 1 root root 112 Sep 24 2011 justtokens.sed

-rwxrwxrwx 1 root root 1258 Sep 24 2011 isnfssub.sh

-rwxrwxrwx 1 root root 1898 Sep 24 2011 isnfs.sh

-rwxrwxrwx 1 root root 2709 Sep 24 2011 iscwd.sh

-rwxrwxrwx 1 root root 6649 Sep 24 2011 instnet.sh

-rwxrwxrwx 1 root root 81520 Sep 24 2011 installob

-rwxrwxrwx 1 root root 10413 Sep 24 2011 installnet

-rwxrwxrwx 1 root root 3797 Sep 24 2011 installhost

-rwxrwxrwx 1 root root 8093 Sep 24 2011 make_sol.sh

-rwxrwxrwx 1 root root 10605 Sep 24 2011 make_hppa.sh

-rwxrwxrwx 1 root root 10375 Sep 24 2011 make_hp800.sh

-rwxrwxrwx 1 root root 7864 Sep 24 2011 makefoothld.sh

-rwxrwxrwx 1 root root 41796 Sep 24 2011 makedev

-rwxrwxrwx 1 root root 1735 Sep 24 2011 makealink.sh

-rwxrwxrwx 1 root root 1917 Sep 24 2011 makeadmdir.sh

-rwxrwxrwx 1 root root 19009 Sep 24 2011 machinfo.sh

-rwxrwxrwx 1 root root 6662 Sep 24 2011 loadlicense

-rwxrwxrwx 1 root root 5297 Sep 24 2011 hp8buses.sh

-rwxrwxrwx 1 root root 2753 Sep 24 2011 mymachinfo.sh

-rwxrwxrwx 1 root root 1029 Sep 24 2011 munghpver.sed

-rwxrwxrwx 1 root root 2441 Sep 24 2011 mintmpspace.sh

-rwxrwxrwx 1 root root 5366 Sep 24 2011 md_solaris.sh

-rwxrwxrwx 1 root root 3882 Sep 24 2011 md_sgi.sh

-rwxrwxrwx 1 root root 6952 Sep 24 2011 md_rs6000.sh

-rwxrwxrwx 1 root root 2776 Sep 24 2011 md_linux86.sh

-rwxrwxrwx 1 root root 1139 Sep 24 2011 md_linux86-glibc.sh

-rwxrwxrwx 1 root root 4312 Sep 24 2011 md_hppa.sh

-rwxrwxrwx 1 root root 7975 Sep 24 2011 md_hp800.sh

-rwxrwxrwx 1 root root 2564 Sep 24 2011 md_chkexist.sh

-rwxrwxrwx 1 root root 1410 Sep 24 2011 maketarlst.sed

-rwxrwxrwx 1 root root 46518 Sep 24 2011 makelinks.sh

-rwxrwxrwx 1 root root 3543 Sep 24 2011 protectwc.sh

-rwxrwxrwx 1 root root 3687 Sep 24 2011 protect.sh

-rwxrwxrwx 1 root root 13755 Sep 24 2011 probedev

-rwxrwxrwx 1 root root 3860 Sep 24 2011 prefer_rmt

-rwxrwxrwx 1 root root 3869 Sep 24 2011 prefer_ob

-rwxrwxrwx 1 root root 20859 Sep 24 2011 obparameters

-rwxrwxrwx 1 root root 6498 Sep 24 2011 obndf

-rwxrwxrwx 1 root root 1334 Sep 24 2011 obgserverfiles

-rwxrwxrwx 1 root root 2410 Sep 24 2011 obgclientfiles

-rwxrwxrwx 1 root root 1291 Sep 24 2011 obgadminfiles

-rwxrwxrwx 1 root root 2318 Sep 24 2011 obclientfiles

-rwxrwxrwx 1 root root 1447 Sep 24 2011 obadminfiles

-rwxrwxrwx 1 root root 251 Sep 24 2011 nfsmpars.sed

-rwxrwxrwx 1 root root 1673 Sep 24 2011 nfsmount.sh

-rwxrwxrwx 1 root root 4816 Sep 24 2011 tgttmproom.sh

-rwxrwxrwx 1 root root 2981 Sep 24 2011 tgtobroom.sh

-rwxrwxrwx 1 root root 2865 Sep 24 2011 tgtmachinfo.sh

-rwxrwxrwx 1 root root 4937 Sep 24 2011 stoprb

-rwxrwxrwx 1 root root 3789 Sep 24 2011 stopd.sh

-rwxrwxrwx 1 root root 1857 Sep 24 2011 sol_bus.sh

-rwxrwxrwx 1 root root 5176 Sep 24 2011 setupserver.sh

-rwxrwxrwx 1 root root 13151 Sep 24 2011 setupobc.sh

-rwxrwxrwx 1 root root 2667 Sep 24 2011 setlists.sh

-rwxrwxrwx 1 root root 4890 Sep 24 2011 setdevdfts.sh

-rwxrwxrwx 1 root root 1832 Sep 24 2011 selhosts.sh

-rwxrwxrwx 1 root root 1262 Sep 24 2011 quietstderr.sh

-rwxrwxrwx 1 root root 5795 Sep 24 2011 osbcvt

-rwxrwxrwx 1 root root 2472 Sep 24 2011 observerfiles

-rwxrwxrwx 1 root root 7136 Sep 24 2011 webconfig

-rwxrwxrwx 1 root root 5160 Sep 24 2011 webcheck.sh

-rwxrwxrwx 1 root root 8772 Sep 24 2011 valdrives.sh

-rwxrwxrwx 1 root root 27632 Sep 24 2011 updaterc.sh

-rwxrwxrwx 1 root root 2883 Sep 24 2011 updatendf.sh

-rwxrwxrwx 1 root root 1185 Sep 24 2011 updatecnf.sh

-rwxrwxrwx 1 root root 3889 Sep 24 2011 unmk_rs6000.sh

-rwxrwxrwx 1 root root 18932 Sep 24 2011 uninstallob

-rwxrwxrwx 1 root root 30555 Sep 24 2011 uninstallhere

-rwxrwxrwx 1 root root 4460 Sep 24 2011 uiparse.sh

-rwxrwxrwx 1 root root 58 Sep 24 2011 trexit

-rwxrwxrwx 1 root root 34 Sep 24 2011 trenter

-rwxrwxrwx 1 root root 2913 Sep 24 2011 tmp_room.sh

-rwxr-xr-x 1 root root 1215 Sep 24 2011 size_table.sh

-rwxr-xr-x 1 root root 20860 May 4 11:19 obparameters.savedbysetup

bash-3.00# ./installob

Welcome to installob, Oracle Secure Backup's installation program.

For most questions, a default answer appears enclosed in square brackets.

Press Enter to select this answer.

Please wait a few seconds while I learn about this machine... done.

Have you already reviewed and customized install/obparameters for your

Oracle Secure Backup installation [yes]?

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Oracle Secure Backup is not yet installed on this machine.

Oracle Secure Backup's Web server has been loaded, but is not yet configured.

Choose from one of the following options. The option you choose defines

the software components to be installed.

Configuration of this host is required after installation completes.

You can install the software on this host in one of the following ways:

(a) administrative server, media server and client

(b) media server and client

(c) client

If you are not sure which option to choose, please refer to the Oracle

Secure Backup Installation Guide. (a,b or c) [a]?

Beginning the installation. This will take just a minute and will produce

several lines of informational output.

Installing Oracle Secure Backup on aptest (solaris version 5.10)

You must now enter a password for the Oracle Secure Backup encryption

key store. Oracle suggests you choose a password of at least 8

characters in length, containing a mixture of alphabetic and numeric

characters.

Please enter the key store password:

Re-type password for verification:

You must now enter a password for the Oracle Secure Backup 'admin' user.

Oracle suggests you choose a password of at least 8 characters in length,

containing a mixture of alphabetic and numeric characters.

Please enter the admin password:

Re-type password for verification:

You should now enter an email address for the Oracle Secure Backup 'admin'

user. Oracle Secure Backup uses this email address to send job summary

reports and to notify the user when a job requires input. If you leave this

blank, you can set it later using the obtool's 'chuser' command.

Please enter the admin email address: [email protected]

generating links for admin installation with Web server

updating default library list via crle to include /usr/local/oracle/backup/.lib.solarisx86_64

updating secure library list via crle to include /usr/local/oracle/backup/.lib.solarisx86_64

checking Oracle Secure Backup's configuration file (/etc/obconfig)

setting Oracle Secure Backup directory to /usr/local/oracle/backup in /etc/obconfig

setting local database directory to /usr/etc/ob in /etc/obconfig

setting temp directory to /usr/tmp in /etc/obconfig

setting administrative directory to /usr/local/oracle/backup/admin in /etc/obconfig

protecting the Oracle Secure Backup directory

installing /etc/init.d/OracleBackup for observiced start/kill ops at

operating system run-level transition

installing start-script (link) /etc/rc2.d/S92OracleBackup

installing kill-script (link) /etc/rc1.d/K01OracleBackup

installing kill-script (link) /etc/rc0.d/K01OracleBackup

initializing the administrative domain

Is aptest connected to any tape libraries that you'd like to use with

Oracle Secure Backup [no]?

Is aptest connected to any tape drives that you'd like to use with

Oracle Secure Backup [no]?

Installation summary:

Installation Host OS Driver OS Move Reboot

Mode Name Name Installed? Required? Required?

admin aptest solaris no no no

Oracle Secure Backup is now ready for your use

This article comes from the "豫霸天下" blog; please contact the author before reposting!
