eXtremeDB 3.1

eXtremeDB™ 3.1 from McObject®

Release Notes

Target OS: HP-UX 11 for PA-RISC, Itanium
Host OS: HP-UX 11.x

BY USING THIS SOFTWARE YOU AGREE TO McObject's LICENSE AGREEMENT

Build procedures

Note on using GNU tools
This package assumes that the GCC compiler, GNU make and GNU bash are installed and available on the host machine.

Building the runtime and samples (source-code license only)


Your distribution package contains a number of pre-built runtime libraries installed into the target/bin directory. This readme file outlines the purpose of each library (see the What's included in this package section). Each runtime library in the target/bin directory is built with a particular set of options. For example, libmcolib_log.a is a core runtime library that includes transaction logging support, while libmcolib.a does not. Each of the pre-built binaries corresponds to a set of options defined in the include/mcocfg.h file. For instance, transaction logging support is turned on by the following define:
#define MCO_CFG_LOG_SUPPORT

You may build the eXtremeDB samples using the preinstalled schema compiler and pre-built libraries simply by executing 'make' from the root installation directory. If you decide to rebuild all binaries, please use the following make targets:

$ make
or
$ make samples - to build or rebuild sample applications
$ make tools - to build mcocomp, pickmem and libraries
$ make all - to build samples and tools
$ make clean - to remove all temporary and intermediate files
$ make distclean - to remove all binary, temporary and intermediate files

The make script will build the DDL schema compiler (mcocomp), the tools (pickmem) and the runtime libraries (libmcolib*, libmcoxml*, libmcolog*), and put them into the host/bin and target/bin directories respectively.
The make script builds the entire set of binaries regardless of the settings in the include/mcocfg.h file. If you wish to build an individual runtime with a particular set of options, enable (define) the desired runtime options in the mcocfg.h file and run 'make' from the target/mcolib directory. For example, you may #define MCO_CFG_LOG_SUPPORT and run 'make'. This will result in building three libraries:

libmcolib.a -- core runtime with TL support 
libmcolog.a -- the actual transaction logging 
libmcoxml.a -- xml extensions

To build the debug version of the samples, you may edit the include/header.mak file and remove the comment from the following line:

DEBUG=on

or simply type 'make DEBUG=on' instead of 'make'.

Object-only installations:
For object-only installations, the runtime is pre-built and installed into the target/bin directory. To build all the samples, run make from the root installation directory.

Building the DDL compiler. This package contains native HP-UX and Microsoft Windows versions of the eXtremeDB DDL compiler. They are located in the host/bin directory. It is not necessary to re-build the compiler. However, if you decide to do so, please run 'make tools'. Please note that g++ version 3.0.2 or higher is required to build the native version of the compiler. In order to rebuild the Microsoft Windows version of the compiler, please contact our technical support department.

Performance optimizations

Database runtime locking. The runtime implements a number of latches to synchronize access to the runtime heap, transaction queue and registry from multiple threads. Although better parallelism is achieved with multiple latches, the overhead of applying the locks can bring overall performance down. The number of latches used by the runtime can be configured at compile time in four ways:
  • multiple latches (locks) for the transaction queue, separate locks for the heap and the registry. This option is best for applications with many concurrent threads running on multiple CPUs.
  • one lock for the transaction queue, separate locks for other internals. Usually best in a single CPU environment.
  • one latch to lock the entire runtime. Usually best with a low number of simultaneous threads.
  • no locks at all. (Single thread mode.)
By default the runtime is configured with a fully serialized transaction queue but separate latches for the registry and the runtime heap (the second option described above). If your application would benefit from another configuration, define the appropriate serialization method in the mcocfg.h file and rebuild the runtime. If the source code is not available, please contact technical support for assistance.

Runtime build options. The eXtremeDB runtime library is built with the +O2 +Oaggressive options. If you have access to the eXtremeDB source code, you may wish to further optimize runtime performance by using other compiler flags. Please consult the compiler documentation when choosing the appropriate options.
In order to optimize the runtime size (footprint), it can be built with the -Os option. It is also possible to compile out features that your application will not require, such as history support, save/load support, etc. If you need assistance making further optimizations, please contact our technical support department.

Single Threaded version

eXtremeDB is available in a Single-Threaded edition (eXtremeDB-ST). eXtremeDB-ST is a version of eXtremeDB that only permits the runtime to be accessed from a single thread. Since eXtremeDB-ST does not need to coordinate access from multiple threads, no locks are applied (see the Performance optimizations section above).

The following options cannot be combined with the ST edition:
  • High Availability option
  • XML support
  • Shared Memory option

Shared-memory databases issues

Synchronous event handles must be used only from the process that registered the event. The runtime does not verify which process calls the event handler; calling it from another process will cause an access violation.

Setting the eXtremeDB shared memory pool. When a shared memory database is created, the eXtremeDB runtime allocates two shared memory segments: one for the eXtremeDB "registry" that keeps information about all database instances created on the machine, and another for the data repository itself. HP-UX shared memory segments are implemented via the System V IPC mechanism. System V IPC objects are identified by system-wide integers, called keys, which are associated with files. By default, the files associated with the keys created by the eXtremeDB runtime are placed in the HOME directory. You may, however, override this setting by setting the environment variable EXTREMEDB_DIR:

export EXTREMEDB_DIR=path

The kernel parameter shmmax controls the maximum size of shared memory segment a process can have. If the database is larger than the maximum allowed shared memory segment, please adjust the parameter in SAM (/usr/bin/sam). Please remember that in the 32-bit model you cannot attach any single shared memory segment larger than 1GB; 1073741824 bytes (0x40000000) is the largest shared memory segment possible (32-bit).

Debugging shared memory databases initialization and shutdown. Sometimes, during development, it is useful to observe how the shared segments are being created and destroyed. This is done by setting the environment variable  MCO_DEBUG_PRT:

export MCO_DEBUG_PRT=1

Database memory pool anchor address

When a database is created via the mco_db_create() API, the application-specified address to the start of the database memory pool is used to map the shared memory segment into each process' address space. The eXtremeDB runtime performs the mapping via the shmat() system call:

void* shmat(int shmid, void* shmaddr, int shmflg)

This system call maps the shared memory segment identified by the shared-memory id (shmid) into the data area of the calling process and returns the starting address of the segment. On an HP-UX system, the virtual address returned to any process will always be the same for any given segment; that is, a segment has the same virtual address in all processes that map it. Therefore, the eXtremeDB runtime ignores the anchor database address passed to mco_db_open() and lets the OS choose the map address. Note that this is not the case on other UNIX systems.


Cleaning-up orphaned shared memory segments. If the process is interrupted before the database is removed, the memory segments remain allocated. Please remove them manually using the ipcrm command. The following sample illustrates this process:

$ ipcs
Shared Memory:
m 0 0x411c07a8 --rw-rw-rw- root root
m 1 0x4e0c0002 --rw-rw-rw- root root
m 2 0x412006b0 --rw-rw-rw- root root
m 3 0x301c5a27 --rw-rw-rw- root root
m 2404 0x0c6629c9 --rw-r----- root root
m 5 0x06347849 --rw-rw-rw- root root
m 406 0x49182073 --rw-r--r-- root root
m 3407 0x5e10000a --rw------- root root
m 8 0x00000000 D-rw------- root root
m 809 0x00000000 D-rw------- www other

$ ipcrm -m 2404
  resource(s) deleted

Transaction Logging

Starting with version 2.3, eXtremeDB adds the ability to log database transactions to persistent storage media such as a hard disk. Transaction Logging facilitates the durability of eXtremeDB in-memory databases by providing the Transaction Log API. In the TL-enabled version of eXtremeDB, every update action of a transaction is recorded in in-memory buffers. Upon transaction commit these buffers are appended to the database log files. No records are added to the log if the transaction is read-only. For recovery, the log is read backwards. To avoid a complete backwards scan during recovery, a mechanism called "checkpointing" is used. Periodically the application may request a complete backup of the database (a checkpoint). Once the in-memory image is created on disk, the transaction log created before the checkpoint is erased. In order to enable the transaction logging feature, in addition to the core eXtremeDB library, the application should be linked with the transaction log library libmcolog.a on UNIX platforms (mcolog.lib on Windows platforms).

High-Availability

Starting with version 2.2, eXtremeDB is available in a High Availability version. In the current version the High Availability Application Framework has been significantly improved. eXtremeDB-HA delivers the performance that only an in-memory database can provide, plus the added safety of a solution that can survive the failure of a component. eXtremeDB-HA maintains a master and one or more replica databases in perfect synchronization through a time-cognizant two-phase commit protocol that can be implemented over TCP, UDP, pipes or any other available transport.
Because the master and every replica are in sync at all points in time, you can distribute read requests to any copy and be assured that read requests will return identical results with no replication latency.

The High-Availability package includes the eXtremeDB-HA framework (the haframework directory) that demonstrates how to build HA-aware applications. The framework includes working examples of communication channels built over TCP/IP, UDP/IP, Named Pipes and QNX Messaging (QNX Neutrino only). It also includes an HA-aware sample application that can be configured to use any of the channels above. In addition, the sample application can be configured to use either the standard or the shared memory runtime.

HA communication protocol layers.
The eXtremeDB HA master and replica applications use user-defined communication channels to exchange data and communication messages during transaction processing. The HA subsystem exports an API that isolates an application from platform dependencies and the communication media. The communication protocol API used by the HA subsystem is divided into two layers:
  • The lower layer is referred to as the transport layer. The transport layer provides the means for guaranteed data transfer between the master and the replicas, isolating the application and the HA subsystem from platform dependencies and the communication media. The transport layer implements an independent point-to-point channel between the master and each replica.
  • The higher layer is called the interface layer and presents a simplified API that isolates an application from the transport layer implementation details. In general, you only need to use the transport layer for user-defined channel implementations. If your application uses one of the channels provided by us, you should use the interface layer API.

Interface layer C++ support
To facilitate the development of HA-aware applications using the C++ language, the HA framework includes C++ wrapper classes for the interface API. Please refer to the haframework/framework/framework_cpp.h and haframework/framework/framework.cpp files.

How to configure the framework sample communication channels.
To configure the framework sample to use different channels, open the interface.h file located in the eXtremeDB/haframework/include directory and declare one of the following channel definitions:

#define CFG_TCP_SOCKET_CHANNEL 1
#define CFG_UDP_SOCKET_CHANNEL 1
#define CFG_PIPE_CHANNEL       1
#define CFG_QNXMSG_CHANNEL     1  


Configuring the Framework application to support multiple processes connected to the master database
In order to build the framework application to support multiple processes connected to the master database, the following steps are necessary:
  1. CFG_SHARED_COMMIT should be declared in the interface.h file.
  2. The 'master' process that creates the database must set the MCO_MULTIPROCESS_COMMIT mode by calling the mco_HA_set_mode() function. In order to set this mode, assign the value MCO_MULTIPROCESS_COMMIT to the par.is_master field of the params structure.


Running the framework sample application
To run the application, open two or more command windows. The sample application, mcoha, takes the following command-line options:
 
-m[N]   Runs the application as the "main master" that replicates N "master" databases. The "main master" creates a "commit thread" that provides the context in which all (main and secondary) master node applications execute their database commits.
        N = 1, 2, 3 or 4; N = 1 by default
-ms[N]  Runs the application as a "secondary master" that replicates N "master" databases. "Secondary master" database commits are executed in the context of the "main master" commit thread.
        N = 1, 2, 3 or 4; N = 1 by default
-r[I]   Runs the application as a "replica" attached to the database with the index I. When multiple master databases are replicated, each "master" database is replicated by its own "replica" process.
        I = 1, 2, 3 or 4; I = 1 by default
-s[I]   The same as -r[I], except the replica becomes the "master" if the current "master" process has failed.
        I = 1, 2, 3 or 4; I = 1 by default
-sm[I]  Runs as a "replica". Synchronizes the database referenced by the index I and continues running as a "master" process.
        I = 1, 2, 3 or 4; I = 1 by default

While running, both the master and the replica applications report their progress to the console. There can be multiple replicas running at the same time. In order to terminate a replica, press Ctrl-C in the replica's console. You may run another replica from the same console and it will attach to the master. The master can be terminated "normally" by pressing "Enter". In this case, the master exits and all the running replicas are detached.
The master can also be terminated "abnormally" by pressing Ctrl-C in the master's console. The replica started with the "-s" parameter will assume the role of the master. All other replicas will re-synchronize with the new master and continue running.
When the framework is configured to run multiple 'master' processes, the replica will take over if the 'main' master is terminated abnormally. The other processes attached to the master database will also be terminated by the framework-provided software watchdog.

When all master processes are terminated, the master database must be assumed to be in an inconsistent state. In order to continue using the master node, it must be re-synchronized from the running replica via HA procedures.

Further, when the 'main master' process is terminated "normally", other master processes are not notified by the sample application and will also be terminated via the watchdog. When developing your application, we recommend notifying the processes connected to the master database so that they can gracefully detach themselves from the database via mco_db_disconnect() before the main master process terminates.

Known issues

  • When using the evaluation runtime library you could get an exception with the error code MCO_E_EVAL (100). This error code indicates that you have reached the 1,000,000-transaction limit of the evaluation version of the software. Please contact us for assistance with this matter.
  • The object history feature is not currently supported by the eXtremeDB-HA runtime.
  • The Named Pipe Channel does not work under Linux with remote replicas.
  • The functionality of the mco_HA_set_mode() API is extended to support multiple processes connected to the master database.

Documentation

Assuming you've installed your development suite, you'll find an extensive set of online documentation in Adobe PDF format. You will need Adobe Acrobat Reader to open these files; please go to the Adobe web site and download a free copy.

Technical support

We appreciate your support for McObject products. We have tried to make installing and using eXtremeDB as easy and trouble-free as possible. However, if you need to report a problem or need assistance with installing or using this software, please contact our Technical Support Department as follows: 

Phone: +1-425-831-5964
Fax: +1-425-831-1542
e-mail: [email protected]


Copyright Information

eXtremeDB is a trademark of McObject LLC and McObject is a registered trademark of McObject LLC. All other brands and product names may be trademarks or registered trademarks of their respective holders.
