db2 "call get_dbsize_info(?,?,?,-1)"
Returns:
Parameter Name : SNAPSHOTTIMESTAMP
Parameter Value : 2014-04-08-19.43.00.127185
Parameter Name : DATABASESIZE
Parameter Value : 10976190464
Parameter Name : DATABASECAPACITY
Parameter Value : 171442124288
Return Status = 0
>>-GET_DBSIZE_INFO--(--snapshot-timestamp--,--dbsize--,--------->

>--dbcapacity--,--refresh-window--)----------------------------><
The schema is SYSTOOLS.
The GET_DBSIZE_INFO procedure calculates the database size and maximum capacity. The calculated values are returned as procedure output parameters and are cached in the SYSTOOLS.STMG_DBSIZE_INFO table. The procedure caches these values because the calculations are costly. The SYSTOOLS.STMG_DBSIZE_INFO table is created automatically the first time the procedure executes. If there are values cached in the SYSTOOLS.STMG_DBSIZE_INFO table and they are current enough, as determined by the snapshot-timestamp and refresh-window values, these cached values are returned. If the cached values are not current enough, new cached values are calculated, inserted into the SYSTOOLS.STMG_DBSIZE_INFO table and returned, and the snapshot-timestamp value is updated.

snapshot-timestamp
    An output parameter of type TIMESTAMP that returns the time at which dbsize and dbcapacity were calculated. This timestamp, along with the value of refresh-window, is used to determine when the cached values in the SYSTOOLS.STMG_DBSIZE_INFO table need to be refreshed.

dbsize
    An output parameter of type BIGINT that returns the size of the database (in bytes).

dbcapacity
    An output parameter of type BIGINT that returns the database capacity (in bytes). This value is not available on partitioned database systems.

refresh-window
    An input argument of type INTEGER that specifies the number of minutes until the cached values for database size and capacity are to be refreshed. Specify -1 for the default refresh window of 30 minutes. A refresh window of 0 forces an immediate refresh of the cached values.

Examples:

Example 1: Get the database size and capacity using the default refresh window of 30 minutes. The database size and capacity will be recalculated when the cached data is older than 30 minutes.

    CALL GET_DBSIZE_INFO(?, ?, ?, -1)

The procedure returns:

    Value of output parameters
    --------------------------
    Parameter Name  : SNAPSHOTTIMESTAMP
    Parameter Value : 2004-02-29-18.31.55.178000

    Parameter Name  : DATABASESIZE
    Parameter Value : 22302720

    Parameter Name  : DATABASECAPACITY
    Parameter Value : 4684793856

    Return Status = 0

Example 2: Get the database size and capacity using a refresh window of 0 minutes. The database size and capacity will be recalculated immediately.

    CALL GET_DBSIZE_INFO(?, ?, ?, 0)

The procedure returns:

    Value of output parameters
    --------------------------
    Parameter Name  : SNAPSHOTTIMESTAMP
    Parameter Value : 2004-02-29-18.33.34.561000

    Parameter Name  : DATABASESIZE
    Parameter Value : 22302720

    Parameter Name  : DATABASECAPACITY
    Parameter Value : 4684859392

    Return Status = 0

Example 3: Get the database size and capacity using a refresh window of 24 hours. The database size and capacity will be recalculated when the cached data is older than 1440 minutes.

    CALL GET_DBSIZE_INFO(?, ?, ?, 1440)

The procedure returns:

    Value of output parameters
    --------------------------
    Parameter Name  : SNAPSHOTTIMESTAMP
    Parameter Value : 2004-02-29-18.33.34.561000

    Parameter Name  : DATABASESIZE
    Parameter Value : 22302720

    Parameter Name  : DATABASECAPACITY
    Parameter Value : 4684859392

    Return Status = 0
To ensure that the data is returned by all partitions for a global table space snapshot, the database must be activated.
The SYSTOOLSPACE is used for the routine's operation tables to store metadata; that is, data used to describe database objects and their operation.
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp
The obvious part here is the data that resides in tables and indexes and the metadata that describes the database. The not-so-obvious areas include things like transaction logs, backup images, diagnostic data, and monitoring output, all covered below.
This is one of my favorite little functions for quick answers to the simple questions on database size. I have the syntax actually memorized. It is executed like this:
$ db2 connect to d01
Database Connection Information
Database server = DB2/LINUXX8664 9.7.4
SQL authorization ID = DB2INST1
Local database alias = D01
$ db2 "call get_dbsize_info(?,?,?,-1)"
Value of output parameters
--------------------------
Parameter Name : SNAPSHOTTIMESTAMP
Parameter Value : 2012-06-18-10.34.39.907630
Parameter Name : DATABASESIZE
Parameter Value : 1157869568
Parameter Name : DATABASECAPACITY
Parameter Value : 1307672571
Return Status = 0
The numbers returned are in bytes, so with a bit of math, we can see that this small database is a bit less than 1.1 GB. These numbers include only used pages in the tablespaces. This means that it essentially covers table data and index data and not much else. It also means that if you have DMS tablespaces with a large amount of free space, you may have more space actually taken up on disk than the value of DATABASESIZE as reported here.
The DATABASECAPACITY is calculated by adding up pages that have been allocated but not used in DMS tablespaces and filesystem free space for filesystems that hold SMS tablespaces. There are problems inherent in that, of course – you may have a lot of filesystem space where DMS containers reside that would not be reflected in DATABASECAPACITY until you extend the tablespace(s) to use them, and you may also have SMS tablespaces in a filesystem where you really shouldn’t use the filesystem up to 100%.
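To put those numbers in human terms, here's a quick bit of shell math using the sample values from the output above (substitute your own figures):

```shell
# Sample values from the get_dbsize_info output shown above (bytes)
DATABASESIZE=1157869568
DATABASECAPACITY=1307672571

# Convert the size to GB and show how full the database is relative to capacity
awk -v s="$DATABASESIZE" -v c="$DATABASECAPACITY" 'BEGIN {
    printf "size: %.2f GB\n", s / (1024^3)
    printf "used: %.1f%% of capacity\n", 100 * s / c
}'
```

For this sample that works out to 1.08 GB and about 88.5% of capacity, which matches the "a bit less than 1.1 GB" eyeball estimate.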
Be aware: if you use this procedure and have not created the SYSTOOLSPACE tablespace, DB2 will automatically create it for you. That can be handy, since calling the procedure lets you control when the tablespace gets created; on the other hand, if it does create the tablespace, your incremental backups will fail until the next full backup, because a full backup is required after creating a new tablespace before incremental backups will succeed.
If you want details on what those parameters are that you’re passing in, see the info center entry:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.sql.rtn.doc/doc/r0011863.html
A common cheat for finding approximate database size is looking at the backup size using a simple ls or whatever tool your OS has for listing files and their sizes. But if you're taking compressed backups, it doesn't work, and if you compress backups after they're taken, this can also be difficult.
It may also be difficult if you take backups directly to TSM or another tape library.
Personally, since I’m nearly always backing up to disk, I nearly always take compressed backups, so this is rarely an option for me.
This method is similar to the first one, of course, in what it includes – essentially all data in the database, though it includes things that are usually small like the history file, database configuration, etc. For online backups, this will also contain an unknown number of transaction log files.
This can be the most difficult, but is the most comprehensive. To gather information, you look in each area where the database stores things, and simply look at that area from a filesystem level. It is essential to have a more complete picture if you’re planning on say cloning the server or something like that. I have a standard set of filesystems that makes this easier. It’s not that it’s technically difficult – it just requires that high level of attention to detail that is critical for a DBA to have.
To look at this at the filesystem level, you first have to find all of the filesystems involved. One way to do that is:
$ db2pd -d d01 -tablespaces
Database Partition 0 -- Database D01 -- Active -- Up 0 days 00:41:41 -- Date 06/18/2012 11:15:00
Tablespace Configuration:
Address Id Type Content PageSz ExtentSz Auto Prefetch BufID BufIDDisk FSC NumCntrs MaxStripe LastConsecPg Name
0x00002B4FD762FE20 0 DMS Regular 4096 4 Yes 24 1 1 Off 1 0 3 SYSCATSPACE
0x00002B4FD7631580 1 SMS SysTmp 4096 32 Yes 192 1 1 On 1 0 31 TEMPSPACE1
0x00002B4FD7634CA0 2 DMS Large 4096 32 Yes 192 1 1 Off 1 0 31 USERSPACE1
0x00002B4FD7636400 3 DMS Regular 8192 32 Yes 192 2 2 Off 1 0 31 TAB8K
0x00002B4FD7637B60 4 DMS Regular 16384 32 Yes 192 3 3 Off 1 0 31 TAB16K
0x00002B4FD76392C0 5 SMS SysTmp 8192 32 Yes 192 2 2 On 1 0 31 TEMPSYS8K
0x00002B4FD763C9E0 6 SMS SysTmp 16384 32 Yes 192 3 3 On 1 0 31 TEMPSYS16K
0x00002B4FD7640100 7 SMS SysTmp 32768 32 Yes 192 4 4 On 1 0 31 TEMPSYS32K
0x00002B4FD7643820 8 DMS Large 4096 4 Yes 24 1 1 Off 1 0 3 SYSTOOLSPACE
Tablespace Statistics:
Address Id TotalPgs UsablePgs UsedPgs PndFreePgs FreePgs HWM Max HWM State MinRecTime NQuiescers PathsDropped
0x00002B4FD762FE20 0 81920 81916 67772 5704 8440 77192 77192 0x00000000 0 0 No
0x00002B4FD7631580 1 1 1 1 0 0 0 0 0x00000000 0 0 No
0x00002B4FD7634CA0 2 172032 172000 168448 64 3488 168544 168544 0x00000000 0 0 No
0x00002B4FD7636400 3 16384 16352 14688 0 1664 14688 14688 0x00000000 0 0 No
0x00002B4FD7637B60 4 6144 6112 4192 0 1920 4192 4192 0x00000000 0 0 No
0x00002B4FD76392C0 5 1 1 1 0 0 0 0 0x00000000 0 0 No
0x00002B4FD763C9E0 6 1 1 1 0 0 0 0 0x00000000 0 0 No
0x00002B4FD7640100 7 1 1 1 0 0 0 0 0x00000000 0 0 No
0x00002B4FD7643820 8 8192 8188 304 0 7884 304 304 0x00000000 1340030079 0 No
Tablespace Autoresize Statistics:
Address Id AS AR InitSize IncSize IIP MaxSize LastResize LRF
0x00002B4FD762FE20 0 Yes Yes 33554432 -1 No None None No
0x00002B4FD7631580 1 Yes No 0 0 No 0 None No
0x00002B4FD7634CA0 2 Yes Yes 33554432 -1 No None None No
0x00002B4FD7636400 3 Yes Yes 33554432 -1 No None None No
0x00002B4FD7637B60 4 Yes Yes 33554432 -1 No None None No
0x00002B4FD76392C0 5 Yes No 0 0 No 0 None No
0x00002B4FD763C9E0 6 Yes No 0 0 No 0 None No
0x00002B4FD7640100 7 Yes No 0 0 No 0 None No
0x00002B4FD7643820 8 Yes Yes 33554432 -1 No None None No
Containers:
Address TspId ContainNum Type TotalPgs UseablePgs PathID StripeSet Container
0x00002B4FD7631340 0 0 File 81920 81916 0 0 /db_data/db2inst1/NODE0000/D01/T0000000/C0000000.CAT
0x00002B4FD7632A40 1 0 Path 1 1 0 0 /db_data/db2inst1/NODE0000/D01/T0000001/C0000000.TMP
0x00002B4FD76361C0 2 0 File 172032 172000 0 0 /db_data/db2inst1/NODE0000/D01/T0000002/C0000000.LRG
0x00002B4FD7637920 3 0 File 16384 16352 0 0 /db_data/db2inst1/NODE0000/D01/T0000003/C0000000.USR
0x00002B4FD7639080 4 0 File 6144 6112 0 0 /db_data/db2inst1/NODE0000/D01/T0000004/C0000000.USR
0x00002B4FD763A780 5 0 Path 1 1 0 0 /db_data/db2inst1/NODE0000/D01/T0000005/C0000000.TMP
0x00002B4FD763DEA0 6 0 Path 1 1 0 0 /db_data/db2inst1/NODE0000/D01/T0000006/C0000000.TMP
0x00002B4FD76415C0 7 0 Path 1 1 0 0 /db_data/db2inst1/NODE0000/D01/T0000007/C0000000.TMP
0x00002B4FD7644D40 8 0 File 8192 8188 0 0 /db_data/db2inst1/NODE0000/D01/T0000008/C0000000.LRG
$ db2 connect to d01
Database Connection Information
Database server = DB2/LINUXX8664 9.7.4
SQL authorization ID = DB2INST1
Local database alias = D01
$ db2 "select substr(container_name,1,100) container_name from sysibmadm.container_utilization"
CONTAINER_NAME
----------------------------------------------------------------------------------------------------
/db_data/db2inst1/NODE0000/D01/T0000000/C0000000.CAT
/db_data/db2inst1/NODE0000/D01/T0000001/C0000000.TMP
/db_data/db2inst1/NODE0000/D01/T0000002/C0000000.LRG
/db_data/db2inst1/NODE0000/D01/T0000003/C0000000.USR
/db_data/db2inst1/NODE0000/D01/T0000004/C0000000.USR
/db_data/db2inst1/NODE0000/D01/T0000005/C0000000.TMP
/db_data/db2inst1/NODE0000/D01/T0000006/C0000000.TMP
/db_data/db2inst1/NODE0000/D01/T0000007/C0000000.TMP
/db_data/db2inst1/NODE0000/D01/T0000008/C0000000.LRG
9 record(s) selected.
You have to add up the sizes of all of the files, or if using SMS, all of the directories here. In my case, I can do:
[db2inst1@luxec2 cron]\-DEV-$ du -sh /db_data/db2inst1/NODE0000/D01
1.3G /db_data/db2inst1/NODE0000/D01
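That works here because all of the containers sit under one parent directory. If they're spread across several filesystems, du on a single path won't capture everything; one approach is to feed the container list itself to du. This sketch uses stand-in files so it's runnable anywhere (GNU du assumed for the -b and -c options); in real life you'd build containers.txt from the catalog query shown above:

```shell
# Stand-in container files so this sketch is runnable anywhere; in real life,
# build containers.txt from the catalog instead, e.g.:
#   db2 -x "select container_name from sysibmadm.container_utilization" > containers.txt
mkdir -p /tmp/demo/T0000000 /tmp/demo/T0000002
dd if=/dev/zero of=/tmp/demo/T0000000/C0000000.CAT bs=4096 count=4 2>/dev/null
dd if=/dev/zero of=/tmp/demo/T0000002/C0000000.LRG bs=4096 count=8 2>/dev/null
printf '%s\n' /tmp/demo/T0000000/C0000000.CAT \
              /tmp/demo/T0000002/C0000000.LRG > /tmp/demo/containers.txt

# Sum the apparent size of every listed container (GNU du: -b bytes, -c total)
xargs du -cb < /tmp/demo/containers.txt | tail -1
```

The last line is the grand total in bytes for exactly the containers DB2 knows about, regardless of where they live.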
Database home directory:
This is usually pretty small, but you have to be careful, because your database data could be here too – so be sure not to double count. You first have to find the whole path for the database you’re looking at, and then you can determine the space:
$ db2 list db directory
System Database Directory
Number of entries in the directory = 1
Database 1 entry:
Database alias = D01
Database name = D01
Local database directory = /db_data
Database release level = d.00
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
$ db2 list db directory on /db_data
Local Database Directory on /db_data
Number of entries in the directory = 1
Database 1 entry:
Database alias = D01
Database name = D01
Database directory = SQL00002
Database release level = d.00
Comment =
Directory entry type = Home
Catalog database partition number = 0
Database partition number = 0
$ du -sh /db_data/db2inst1/NODE0000/SQL00002
5.2M /db_data/db2inst1/NODE0000/SQL00002
To see at the filesystem level how much space the transaction logs are using, you first have to find the right path:
$ db2 get db cfg for D01 |grep "Path to log files"
Path to log files = /db_logs/D01/NODE0000/
$ db2 get db cfg for D01 |grep -i log
...
Log file size (4KB) (LOGFILSIZ) = 10000
Number of primary log files (LOGPRIMARY) = 12
Number of secondary log files (LOGSECOND) = 50
...
$ du -sh /db_logs/D01/NODE0000/
431M /db_logs/D01/NODE0000/
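The du above shows what's on disk right now; the configured ceiling for active log space can be worked out from the three settings shown, since LOGFILSIZ is expressed in 4 KB pages. (If LOGSECOND is -1, infinite logging is enabled and there is no fixed ceiling.)

```shell
# Values from the db cfg output above; LOGFILSIZ is in 4 KB pages
LOGFILSIZ=10000; LOGPRIMARY=12; LOGSECOND=50

# Ceiling for active log space = log file size x (primary + secondary) files
BYTES=$(( LOGFILSIZ * 4096 * (LOGPRIMARY + LOGSECOND) ))
awk -v b="$BYTES" 'BEGIN { printf "max active log space: %.2f GB\n", b / (1024^3) }'
```

With these settings, that's about 2.37 GB the active log path could grow to, well above the 431M currently in use.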
Assuming you’re archiving transaction logs to disk (if you are retaining them, you should always archive them somewhere else, even if it’s on the same filesystem), you can find where they’re going using this:
$ db2 get db cfg for D01 |grep ARCH
First log archive method (LOGARCHMETH1) = DISK:/db_arch_logs/D01/
Options for logarchmeth1 (LOGARCHOPT1) =
Second log archive method (LOGARCHMETH2) = OFF
Options for logarchmeth2 (LOGARCHOPT2) =
Failover log archive path (FAILARCHPATH) =
Number of log archive retries on error (NUMARCHRETRY) = 5
Log archive retry Delay (secs) (ARCHRETRYDELAY) = 20
$ du -sh /db_arch_logs/D01/
530M /db_arch_logs/D01/
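Archived logs accumulate until you prune them, so it's worth checking the count as well as the total size. A runnable sketch with stand-in files; note that in reality LOGARCHMETH1 writes into instance/database/node/chain subdirectories beneath the configured path, so point find at the top of that tree:

```shell
# Stand-in archived logs so the sketch is runnable; real archive paths have
# instance/database/node/chain subdirectories under the configured location.
mkdir -p /tmp/db_arch_logs/D01
for n in 0 1 2 3; do
    dd if=/dev/zero of="/tmp/db_arch_logs/D01/S000000$n.LOG" bs=4096 count=2 2>/dev/null
done

# How many archived logs are sitting there, and how much space do they take?
find /tmp/db_arch_logs/D01 -name 'S*.LOG' | wc -l
du -cb /tmp/db_arch_logs/D01/S*.LOG | tail -1
```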
You can take backups to any path on the server. In practice, you probably use just one or two locations. You can find the location for each individual backup using:
$ db2 list history backup all for d01
List History File for d01
Number of matching file entries = 3
Op Obj Timestamp+Sequence Type Dev Earliest Log Current Log Backup ID
-- --- ------------------ ---- --- ------------ ------------ --------------
B D 20120614135343001 F D S0000000.LOG S0000000.LOG
----------------------------------------------------------------------------
Contains 5 tablespace(s):
00001 SYSCATSPACE
00002 USERSPACE1
00003 TAB8K
00004 TAB16K
00005 SYSTOOLSPACE
----------------------------------------------------------------------------
Comment: DB2 BACKUP D01 OFFLINE
Start Time: 20120614135343
End Time: 20120614135404
Status: A
----------------------------------------------------------------------------
EID: 7 Location: /db_bkup
Op Obj Timestamp+Sequence Type Dev Earliest Log Current Log Backup ID
-- --- ------------------ ---- --- ------------ ------------ --------------
B D 20120614135833001 F D S0000001.LOG S0000001.LOG
----------------------------------------------------------------------------
Contains 5 tablespace(s):
00001 SYSCATSPACE
00002 USERSPACE1
00003 TAB8K
00004 TAB16K
00005 SYSTOOLSPACE
----------------------------------------------------------------------------
Comment: DB2 BACKUP D01 OFFLINE
Start Time: 20120614135833
End Time: 20120614135850
Status: A
----------------------------------------------------------------------------
EID: 9 Location: /db_bkup
Op Obj Timestamp+Sequence Type Dev Earliest Log Current Log Backup ID
-- --- ------------------ ---- --- ------------ ------------ --------------
B D 20120618094410001 N D S0000002.LOG S0000002.LOG
----------------------------------------------------------------------------
Contains 5 tablespace(s):
00001 SYSCATSPACE
00002 USERSPACE1
00003 TAB8K
00004 TAB16K
00005 SYSTOOLSPACE
----------------------------------------------------------------------------
Comment: DB2 BACKUP D01 ONLINE
Start Time: 20120618094410
End Time: 20120618094438
Status: A
----------------------------------------------------------------------------
EID: 12 Location: /db_bkup/full
$ du -sh /db_bkup
241M /db_bkup
There’s no way to determine the size of your monitoring data through database-level commands – you just have to know where you put it. Assuming you keep at least a bit of snapshot data, which I really recommend (especially if you’ve left your package cache at automatic), this can be of significant size. In my case, I know that all of my data of this nature is in /db_adm:
$ du -sh /db_adm
142M /db_adm
The diagnostic data directory is not technically part of the database size – it’s at the instance level. But if you’ve got scrolling errors, it can blow up quickly, and your archiving strategy here makes a difference as well.
$ db2 get dbm cfg | grep DIAG
Diagnostic error capture level (DIAGLEVEL) = 3
Diagnostic data directory path (DIAGPATH) = /db2diag/
Alternate diagnostic data directory path (ALT_DIAGPATH) =
Size of rotating db2diag & notify logs (MB) (DIAGSIZE) = 0
$ du -sh /db2diag
269M /db2diag
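Since DIAGSIZE is 0 here, the db2diag.log is not rotated and is usually the main space consumer. To see which files are eating the space, sort by size – shown against a stand-in directory so the example is runnable; point du at your real DIAGPATH (/db2diag in this configuration):

```shell
# Stand-in diagnostic directory so the sketch is runnable anywhere
mkdir -p /tmp/db2diag
dd if=/dev/zero of=/tmp/db2diag/db2diag.log  bs=4096 count=16 2>/dev/null
dd if=/dev/zero of=/tmp/db2diag/db2inst1.nfy bs=4096 count=2  2>/dev/null

# Largest files last (GNU du -b for apparent size in bytes)
du -b /tmp/db2diag/* | sort -n | tail -2
```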
You only need to worry about event monitor output if you’ve specified a location that is outside of the database path. If you specified a relative path, or only have the default deadlock event monitor in place, then it’ll be included in the database path info above. Remember that event monitors can write very large amounts of data, especially activity or statement event monitors – you can easily generate more data than you have in the database, since they track every statement or activity in the database along with metrics about each.
If you’re using db2audit or the new audit facility, you can use db2audit to specify or check the path. This data includes some instance level information as well.
$ db2audit describe
DB2 AUDIT SETTINGS:
Audit active: "FALSE "
Log audit events: "FAILURE"
Log checking events: "FAILURE"
Log object maintenance events: "FAILURE"
Log security maintenance events: "FAILURE"
Log system administrator events: "FAILURE"
Log validate events: "FAILURE"
Log context events: "NONE"
Return SQLCA on audit error: "FALSE "
Audit Data Path: ""
Audit Archive Path: ""
AUD0000I Operation succeeded.
If no path is set, like in the above, then the default path is sqllib/security/auditdata.
Data compression (both page-level and row compression) is a pay-for-use feature. Its effects are reflected in all of the methods shown above.
If you have space, it’s nice to track database growth over time. This can help you make somewhat accurate estimates of future growth and be ready for tables that may exceed the maximum size (less of an issue on v9 with large tablespaces). It can also help you answer questions like “When was all this data deleted?”. SYSIBMADM.ADMINTABINFO or ADMIN_GET_TAB_INFO_V97 can be useful for this. I generally write table sizes out to a table once a day – that granularity seems to work well for me.
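Here's a minimal sketch of that once-a-day capture. The history table name (DBADM.TABLE_GROWTH) is hypothetical, and the ADMINTABINFO physical-size columns used here report kilobytes – verify the column list against your DB2 version before relying on it:

```sql
-- Hypothetical history table for daily table-size samples
CREATE TABLE DBADM.TABLE_GROWTH (
    CAPTURE_DATE    DATE         NOT NULL,
    TABSCHEMA       VARCHAR(128) NOT NULL,
    TABNAME         VARCHAR(128) NOT NULL,
    TOTAL_P_SIZE_KB BIGINT
);

-- Run once a day (cron, or the DB2 administrative task scheduler)
INSERT INTO DBADM.TABLE_GROWTH
SELECT CURRENT DATE, TABSCHEMA, TABNAME,
       DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE
         + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE
  FROM SYSIBMADM.ADMINTABINFO;
```

Querying that table by TABNAME over a range of CAPTURE_DATE values then answers both the growth-trend and the "when was all this data deleted?" questions.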
So, readers, what methods and details have I missed here? Leave a comment, I want to hear from you!
Source: http://db2commerce.com/2012/06/19/how-to-find-the-size-of-a-db2-database/