Target OS: | HP-UX 11 for PA-RISC, Itanium |
Host OS: | HP-UX 11.x |
BY USING THIS SOFTWARE YOU AGREE TO McObject's LICENSE AGREEMENT |
Building the runtime and samples (source-code license only)
Your distribution package contains a number of pre-built runtime libraries installed into the target/bin directory. This readme file outlines the purpose of each library (see the What's included in this package section). Each runtime library in the target/bin directory is built with a particular set of options. For example, libmcolib_log.a is the core runtime library built with transaction logging support, while libmcolib.a is built without it. Each pre-built binary corresponds to a set of options defined in the include/mcocfg.h file. For instance, transaction logging support is turned on by the following define:
#define MCO_CFG_LOG_SUPPORT
You may build the eXtremeDB samples using the preinstalled schema compiler and pre-built libraries simply by executing 'make' from the root installation directory. If you decide to rebuild all binaries, please use the following make options:
$ make
or
$ make samples - to build or rebuild sample applications
$ make tools - to build mcocomp, pickmem and libraries
$ make all - to build samples and tools
$ make clean - to remove all temporary and intermediate files
$ make distclean - to remove all binary, temporary and intermediate files
The make script will build the DDL schema compiler (mcocomp), the tools (pickmem), the runtime libraries (libmcolib*, libmcoxml*, libmcolog*) and put them into the host/bin and target/bin directories respectively.
The make script will build the entire set of binaries regardless of the settings in the include/mcocfg.h file. If you wish to build an individual runtime with a particular set of options, define the desired options in the mcocfg.h file and run 'make' from the target/mcolib directory. For example, you may #define MCO_CFG_LOG_SUPPORT and run 'make', as illustrated after the list below. This will result in building three libraries:
libmcolib.a -- core runtime with transaction logging (TL) support
libmcolog.a -- the actual transaction logging
libmcoxml.a -- XML extensions
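For example, a minimal sequence for rebuilding the runtime with transaction logging enabled might look as follows (paths are relative to the installation root; use the editor of your choice):
$ vi include/mcocfg.h          # enable: #define MCO_CFG_LOG_SUPPORT
$ cd target/mcolib
$ make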
To build the debug version of the samples, you may edit the include/header.mak file, and remove the comment from the following line:
DEBUG=on
or simply type 'make DEBUG=on' instead of 'make'
Object-only installations:
For object-only installations, the runtime is pre-built and installed into the target/bin directory. To build all the samples, run make from the root installation directory.
Building the DDL compiler. This package contains native HP-UX and Microsoft Windows versions of the eXtremeDB DDL compiler. They are located in the host/bin directory. It is not necessary to re-build the compiler. However, if you decide to do so, please run 'make tools'. Please note that g++ version 3.0.2 or higher is required to build the native version of the compiler. In order to rebuild the Microsoft Windows version of the compiler, please contact our technical support department.
Setting the eXtremeDB shared memory pool. When a shared memory database is created, the eXtremeDB runtime allocates two shared memory segments: one for the eXtremeDB "registry" that keeps information about all database instances created on the machine, and another for the data repository itself. HP-UX shared memory segments are implemented via the System V IPC mechanism. System V IPC objects are identified by system-wide integers, called keys, which are associated with files. By default, the files associated with the keys created by the eXtremeDB runtime are placed into the HOME directory. You may, however, override this setting by setting the EXTREMEDB_DIR environment variable:
export EXTREMEDB_DIR=path
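The key/file association works the same way as for any System V IPC object. The sketch below is purely illustrative (it is not the eXtremeDB runtime's actual code, and the file name is hypothetical): it derives a key from a file with ftok() and obtains a segment for that key with shmget(), which is conceptually what happens for the files created under HOME or EXTREMEDB_DIR.

#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative only: derive a System V IPC key from an existing file,
       then create (or look up) a shared memory segment for that key.      */
    key_t key = ftok("/home/user/.mco_registry", 'M');   /* hypothetical file */
    if (key == (key_t)-1) { perror("ftok"); return 1; }

    int shmid = shmget(key, 1024 * 1024, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return 1; }

    printf("key=0x%08x shmid=%d\n", (unsigned)key, shmid);
    return 0;
}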
The kernel parameter shmmax controls the maximum size of a shared memory segment a process can attach. If the database is larger than the maximum allowed shared memory segment, please increase shmmax using SAM (/usr/bin/sam). Please remember that in the 32-bit model you cannot attach any single shared memory segment larger than 1GB; 1073741824 (0x40000000) bytes is the absolute maximum shared memory segment size (32-bit).
The runtime can also produce additional diagnostic (debug print) output, enabled by setting the following environment variable:
export MCO_DEBUG_PRT=1
Database memory pool anchor address
When a database is created via the mco_db_create() API, the application-specified start address of the database memory pool is used to map the shared memory segment into each process' address space. The eXtremeDB runtime performs the mapping via the shmat() system call:
void* shmat(int shmid, void* shmaddr, int shmflg)
This system call maps the shared memory segment identified by the shared memory id (shmid) into the data area of the calling process and returns the starting address of the segment. On an HP-UX system, the virtual address returned to any process will always be the same for any given segment. That is, a segment will have the same virtual address in all processes that map it. Therefore, the eXtremeDB runtime ignores the database anchor address passed to mco_db_open() and lets the OS choose the map address. Note that this is not the case on other UNIX systems.
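As a minimal illustration of this behavior (not eXtremeDB code), a process can pass a null map address and let the OS pick one; on HP-UX the returned address will be identical in every process that attaches the same segment:

#include <sys/shm.h>
#include <stdio.h>

void attach_example(int shmid)
{
    /* Pass 0/NULL as the map address so the OS chooses it; on HP-UX the
       same segment maps at the same virtual address in every process.   */
    void *addr = shmat(shmid, (void *)0, 0);
    if (addr == (void *)-1)
        perror("shmat");
    else
        printf("segment mapped at %p\n", addr);
}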
Cleaning-up orphaned shared memory segments. If the process is interrupted before the database is removed, the memory segments remain allocated. Please remove them manually using the ipcrm command. The following sample illustrates this process:
$ ipcs
Shared Memory:
m 0 0x411c07a8 --rw-rw-rw- root root
m 1 0x4e0c0002 --rw-rw-rw- root root
m 2 0x412006b0 --rw-rw-rw- root root
m 3 0x301c5a27 --rw-rw-rw- root root
m 2404 0x0c6629c9 --rw-r----- root root
m 5 0x06347849 --rw-rw-rw- root root
m 406 0x49182073 --rw-r--r-- root root
m 3407 0x5e10000a --rw------- root root
m 8 0x00000000 D-rw------- root root
m 809 0x00000000 D-rw------- www other
$ ipcrm -M 0x0c6629c9
resource(s) deleted
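Alternatively, the segment id for a given key can be extracted from the ipcs listing and the segment removed by id (a convenience sketch using the key from the example above):
$ ipcs | awk '$1 == "m" && $3 == "0x0c6629c9" { print $2 }'
2404
$ ipcrm -m 2404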
Starting with version 2.3, eXtremeDB adds the ability to log database transactions to persistent storage media such as a hard disk. Transaction Logging facilitates the durability of eXtremeDB in-memory databases by providing the Transaction Log API. In the TL-enabled version of eXtremeDB, every update action of a transaction is recorded in in-memory buffers. Upon the transaction commit these buffers are appended to the database log files. No records are added to the log if the transaction is read-only. For recovery, the log is read backwards. To avoid a complete backwards scan during recovery, a mechanism called "checkpointing" is used: periodically the application may request a complete backup of the database (a checkpoint). Once the in-memory image is created on disk, the transaction log created before the checkpoint is erased. In order to enable the transaction logging feature, in addition to the core eXtremeDB library, the application should be linked with the transaction log library libmcolog.a on UNIX platforms (mcolog.lib on Windows platforms).
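For example, a link line might look like the following sketch (the application name, object file and library path are placeholders; the library names are those shipped in this package, and the exact link order may need adjusting for your build):
$ cc -o tl_sample tl_sample.o -L./target/bin -lmcolog -lmcolib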
The communication channel types available to the framework sample (see the next section) are selected by the following configuration defines:
#define CFG_TCP_SOCKET_CHANNEL 1
#define CFG_UDP_SOCKET_CHANNEL 1
#define CFG_PIPE_CHANNEL 1
#define CFG_QNXMSG_CHANNEL 1
Running the framework sample application
-m[N] | Runs the application as a "main master" that replicates N "master" databases. "Main master" creates a "commit thread" that provides the context in which all (main and secondary) master node applications execute their database commits; N = 1,2,3 or 4, N = 1 by default |
-ms[N] | Runs the application as a "secondary master" that replicates N "master" databases. "Secondary master" database commits are executed in the context of the "main master" commit thread; N = 1,2,3 or 4, N = 1 by default |
-r[I] | Runs the application as a "replica" attached to the database with the index I. When multiple master databases are replicated, each "master" database is replicated by its own "replica" process; I = 1,2,3 or 4, I = 1 by default |
-s[I] | The same as -r[I], except the replica becomes the "master" if the current "master" process has failed; I = 1,2,3 or 4, I = 1 by default |
-sm[I] | Runs as a "replica". Synchronizes the database referenced by the index I and continues running as a "master" process. I = 1,2,3 or 4, I = 1 by default |
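For instance, a typical two-console session might look like the sketch below (the sample binary name 'framework' is a placeholder; use the name produced by your build):
# console 1: start the main master, replicating two master databases
$ ./framework -m2
# console 2: start a replica of database 1 that can take over if the master fails
$ ./framework -s1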
While running, both the master and the replica applications report their progress to the console. Multiple replicas can run at the same time. In order to terminate a replica, press Ctrl-C in the replica's console. You may run another replica from the same console and it will attach to the master. The master can be terminated "normally" by pressing "Enter". In this case, the master exits and all running replicas are detached.
The master can also be terminated "abnormally" by pressing Ctrl-C in the master's console. The replica started with the "-s" parameter will then assume the role of the master. All other replicas will re-synchronize with the new master and continue running.
When the framework is configured to run multiple 'master' processes, the replica will take over if the 'main' master is terminated abnormally. The other processes attached to the master database will also be terminated by the framework-provided software watchdog.
When all master processes are terminated, the master database must be assumed to be in an inconsistent state. In order to continue using the master node, it must be re-synchronized from the running replica via the HA procedures.
Furthermore, when the 'main master' process is terminated "normally", the other master processes are not notified by the sample application and will also be terminated via the watchdog. When developing your application, we recommend notifying the processes connected to the master database so they can gracefully detach themselves from the database via mco_db_disconnect() before the main master process is terminated.
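A minimal sketch of such a graceful detach is shown below. It assumes the process already holds an eXtremeDB connection handle obtained at startup, and uses a simple signal as the hypothetical notification mechanism; error handling is abbreviated:

#include <signal.h>
#include "mco.h"

static volatile sig_atomic_t shutdown_requested = 0;

/* hypothetical notification handler, registered at startup with
   signal(SIGTERM, on_shutdown)                                   */
static void on_shutdown(int sig)
{
    (void)sig;
    shutdown_requested = 1;
}

/* called from the application's main loop once shutdown_requested is set:
   detach from the master database before the main master terminates       */
static void graceful_detach(mco_db_h db)
{
    MCO_RET rc = mco_db_disconnect(db);
    if (rc != MCO_S_OK) {
        /* report the error and continue shutting down */
    }
}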
Phone: | +1-425-831-5964 |
Fax: | +1-425-831-1542 |
e-mail: | [email protected] |