Blackfish SQL
System Architecture

This chapter provides an overview of Blackfish SQL system architecture.

  • Compatibility
  • Windows Connectivity
  • Java Connectivity
  • Differences Between Local and Remote Drivers
  • Database Files
  • Database File System
  • Transaction Management
  • High Availability
  • Heterogeneous Replication Using DataExpress

Compatibility

Blackfish SQL for Windows and Blackfish SQL for Java are compatible in these ways:

  • The database file format is binary-compatible between the two
  • The database clients and servers are interchangeable
  • Windows clients can connect to Java servers and Java clients can connect to Windows servers

Compatibility is restricted in the following ways:

  • The Object type uses platform-specific serialization; therefore, serialized data cannot be shared between clients on different platforms:
    • An ADO driver cannot read a Java serialized object.
    • A Java driver cannot read a .NET serialized object.
    • A DBXClient driver cannot read Java or .NET serialized objects.
  • The maximum scale for a decimal is different in Java and .NET.
  • For Blackfish SQL for Java, the Timestamp type has two more digits in the fractional portion.

The following Blackfish SQL for Java features are not yet supported in the Windows version:

  • ISQL SQL Command Line Interpreter
  • High Availability features, including incremental backup and failover
  • Graphical tooling for some administrative capabilities
  • Access to file and object streams
  • Tracking and resolving of row-level insert, update and delete operations
  • Access to the Blackfish SQL File System directory

Windows Connectivity

Windows applications can use one or more of the following connectivity solutions to access a Blackfish SQL database programmatically:

  • DBXClient: A 100% Object Pascal dbExpress 4 database driver that enables Win32 Delphi and C++ applications to connect to a Blackfish SQL for Windows or Blackfish SQL for Java server.
  • ADO.NET: The Microsoft standard for database connectivity on the .NET platform. Blackfish SQL for Windows has the following ADO.NET providers:
    • Local ADO.NET 2.0 Provider: This 100% managed-code driver enables .NET applications to connect to a local Blackfish SQL server. The local ADO.NET driver executes in the same process as the Blackfish SQL database kernel, for better performance.
    • Remote ADO.NET 2.0 Provider: This 100% managed-code driver enables .NET applications to acquire a remote connection to either a Blackfish SQL for Windows or Blackfish SQL for Java server.

See Establishing Connections for instructions and code examples for using these drivers.

Java Connectivity

Java applications can use one or more of the following connectivity solutions to access a Blackfish SQL database programmatically:

  • JDBC Type 4 drivers: JDBC is the industry-standard SQL call-level interface for Java applications. Blackfish SQL for Java provides the following JDBC drivers:
    • Local JDBC driver: This 100% managed-code driver enables Java applications to connect to a local Blackfish SQL server. The local JDBC driver executes in the same process as the Blackfish SQL database kernel, for better performance.
    • Remote JDBC driver: This 100% managed-code driver enables Java applications to acquire a remote connection to either a Blackfish SQL for Windows or Blackfish SQL for Java server.
  • ODBC-to-JDBC Gateway: Provided by EasySoft Limited, this gateway enables native applications to access Blackfish SQL databases through ODBC, the industry-standard SQL call-level interface for native code.
  • DataExpress JavaBeans: These components provide additional functionality not addressed by the JDBC standard. See DataExpress JavaBeans for details.

See Establishing Connections for instructions and code examples for using these drivers.
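For orientation, opening a local JDBC connection follows the standard DriverManager pattern. In this sketch, the driver class name, URL format, and credentials are assumptions, and the code requires the Blackfish SQL for Java libraries on the classpath; see Establishing Connections for the authoritative details.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LocalConnectionSketch {
    public static void main(String[] args) throws Exception {
        // Assumed driver class name and URL format -- verify against
        // Establishing Connections before use.
        Class.forName("com.borland.datastore.jdbc.DataStoreDriver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:borland:dslocal:/data/mydb.jds", "user", "password");
             Statement stmt = con.createStatement()) {
            stmt.executeUpdate("CREATE TABLE t (id INT, name VARCHAR(30))");
        }
    }
}
```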

DataExpress JavaBeans

NOTE: This feature is available only with Blackfish SQL for Java. 

DataExpress is a set of JavaBean runtime components that provide functionality not addressed by the JDBC standard. JavaBean is an industry-standard component architecture for Java. The JavaBean standard specifies many important aspects of components needed for RAD development environments. JavaBean components can be designed in a visual designer and can be customized with the properties, methods, and events that they expose. 

DataExpress is included in the component palette of CodeGear JBuilder Visual Designer. For information on using DataExpress from within JBuilder, see the JBuilder Help. 

Because DataExpress is a set of runtime components, you need not use JBuilder to develop and deploy applications that use DataExpress.  

The majority of DataExpress JavaBean components are those required to build both server-side and client-side database applications. Client-side applications require high quality data binding to visual components such as grid controls, as well as support for reading and writing data to a database. 

Server-side applications require data access components to help with reading and writing data to a database, but presentation is typically handled by a web page generation system such as Java Server Pages (JSPs). Even though DataExpress has extensive support for client-side data binding to visual component libraries such as dbSwing and JBCL, the DataExpress design still separates the presentation from the data access layer. This allows DataExpress components to be used as a data access layer for other presentation paradigms such as the JSP/servlet approach employed by JBuilder InternetBeans Express technology. 

The DataExpress architecture allows for a pluggable storage interface to cache the data that is read from a data source. Currently, there are only two implementations of this interface, MemoryStore (the default), and DataStore. By setting just two properties on a StorageDataSet JavaBean component, a Blackfish SQL table can be directly navigated and edited with a StorageDataSet JavaBean. By setting the DataSet property of a dbSwing grid control, the entire contents of large tables can be directly browsed, searched, and edited at high speed. This effectively provides an ISAM-level data access layer for Blackfish SQL tables.
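The two-property setup described above reduces to a sketch like the following. The property and method names are assumptions based on the DataExpress API, and the code requires the Blackfish SQL for Java libraries.

```java
// Sketch: navigate and edit a Blackfish SQL table with a StorageDataSet.
DataStore store = new DataStore();
store.setFileName("orders.jds");     // assumed property: the database file

StorageDataSet dataSet = new StorageDataSet();
dataSet.setStore(store);             // use DataStore storage instead of the default MemoryStore
dataSet.setStoreName("ORDERS");      // assumed property: the table to open
dataSet.open();

// grid.setDataSet(dataSet);         // bind a dbSwing grid to browse, search, and edit
```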

Automating Administrative Functions with DataExpress JavaBeans

There are many DataExpress components that can be used to automate administrative tasks. Commonly-used components are: 

DataExpress Administrative Components

  • Custom server start and shutdown: com.borland.datastore.jdbc.DataStoreServer
  • Database backup, restore, and pack: com.borland.datastore.DataStoreConnection.copyStreams()
  • Security administration: com.borland.datastore.DataStoreConnection
  • Transaction management: com.borland.datastore.TxManager and com.borland.datastore.DataStore

 

DataExpress JavaBean Source Code

JBuilder provides a source code .jar file that includes a large portion of the DataExpress JavaBean components. This enables you to more easily debug your applications and gain a better understanding of the DataExpress JavaBean components.

Differences Between Local and Remote Drivers

The primary difference between using local and remote drivers with Blackfish SQL is:

  • Local driver: The Blackfish SQL database engine executes in the same process as the application.
  • Remote driver: The Blackfish SQL database engine can execute either in the same process as the application or in a separate process.

 

Advantages of Using a Local Driver to Access a Database

A local Blackfish SQL driver provides the following benefits:

  • High-speed interface to the database: Driver calls are made directly into the database kernel on the same call stack. There are no remote procedure calls to a database server running in another process.
  • Easy to embed in an application: The database server does not need to be configured or started. The executable code for the database kernel, the database driver, and the application all execute in the same process.

 

Advantages of Using a Remote Driver to Access a Database

You can use a remote Blackfish SQL driver to execute Blackfish SQL in a separate database server process. However, before the application can use a remote driver, the Blackfish SQL server process must be started. Executing the Blackfish SQL database kernel in a separate database server process provides the following benefits:

  • Multi-process access to a database: If multiple processes on one or more computers need to access a single Blackfish SQL database, a Blackfish SQL server must be started and the application must use the remote drivers.
  • Improved performance using multiple computers: If your application or web server consumes a large portion of the memory or CPU resources, you can often achieve better performance by running the Blackfish SQL server on a separate computer.
  • Improved fault tolerance: Applications that use a remote connection run in a separate process, so errant applications can be terminated without having to shut down the database server.

 

Advantages of Using Both Local and Remote Drivers to Access a Database

Using both the local and remote driver to access the same database can give you the best of both worlds. A Blackfish SQL database file can be held open by only one operating system process. When you connect using the local driver, the process that uses the local driver holds the database file open. When the remote driver makes the connection, the Blackfish SQL server process holds the database file open. 

Since the local driver causes the database file to be open in the same process, it prevents connections from the remote driver. However, if the process that uses the local driver also starts a Blackfish SQL server in the same process, then other processes using the remote driver can access the same database as the local driver. 

The Blackfish SQL server can be started inside an application by using a single line of Java code that instantiates a DataStoreServer component and executes its start method. The DataStoreServer runs on a separate thread and services connection requests from processes that use the remote driver to access a Blackfish SQL database on the computer on which the DataStoreServer was started. 

In addition, the local driver can be used by the application that launched the DataStoreServer for faster in-process driver calls into the Blackfish SQL database engine.
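The single-line pattern described above looks like this sketch (requires the Blackfish SQL for Java libraries; configuration and error handling omitted):

```java
// Start an in-process Blackfish SQL server on its own thread, so that other
// processes can reach this database through the remote driver while this
// application continues to use the faster local driver.
DataStoreServer server = new DataStoreServer();
server.start();
```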

Database Files

These files are created and used by Blackfish SQL:

  • file-name.jds: a single file storage for all database objects.
  • database-name_LOGA_*: transactional log files. If the database file is moved, the log files must be moved with it.
  • database-name_LOGA_ANCHOR: redundantly stores log file configuration information.
  • database-name_STATUS*: log files created if status logging is enabled for the database.

A Blackfish SQL database can still be used if the anchor or status log files do not exist. 

A non-transactional (read-only) database needs only the .jds database file. 

The specifications for Blackfish SQL database file capacity are: 

Blackfish SQL Database File Capacity

  • Minimum block size: 1 KB
  • Maximum block size: 32 KB
  • Default block size: 4 KB
  • Maximum Blackfish SQL database file size: 2 billion blocks. For the default block size, this yields a maximum of 8,796,093,022,208 bytes (about 8 TB).
  • Maximum number of rows per table stream: 281,474,976,710,656
  • Maximum row length: 1/3 of the block size. Long strings, objects, and input streams are stored as Blobs instead of occupying space in the row.
  • Maximum Blob size: 2 GB each
  • Maximum file stream size: 2 GB each

Database File System

A Blackfish SQL database file can contain these types of data streams:

  • Table streams: These are database tables typically created using SQL. A table stream can have secondary indexes and Blob storage associated with it.
  • File streams: There are two types of file streams:
    • Arbitrary files created with DataStoreConnection.createFileStream()
    • Serialized Java objects stored as file streams

A single Blackfish SQL database can contain all stream types. 

Streams are organized in a file system directory. The ability to store both tables and arbitrary files in the same file system allows all of the data for an application to be contained in a single portable, transactional file system. A Blackfish SQL database can also be encrypted and password protected.  

The specifications for Blackfish SQL database file systems are: 

Blackfish SQL Database File System Specifications

  • Directory separator character for streams: / (forward slash)
  • Maximum stream name length: 192 bytes
    • Best case (all single-byte character sets): 192 characters
    • Worst case (all double-byte character sets): 95 characters (one byte is lost to indicate DBCS)
  • Reserved names: Stream names that begin with SYS are reserved. Blackfish SQL has the following system tables:
    • SYS/Connections
    • SYS/Queries
    • SYS/Users
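The name-length limits above can be checked mechanically. This illustrative helper (not part of the Blackfish SQL API) tests whether a candidate stream name fits the 192-byte limit for a given encoding; note that the worst-case figure in the table also reserves one byte to flag a double-byte character set.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class StreamNameCheck {
    static final int MAX_NAME_BYTES = 192;

    // Returns true if the encoded name fits within the 192-byte limit.
    static boolean fits(String storeName, Charset charset) {
        return storeName.getBytes(charset).length <= MAX_NAME_BYTES;
    }

    public static void main(String[] args) {
        // Single-byte character sets: up to 192 characters fit.
        System.out.println(fits("SYS/Users", StandardCharsets.US_ASCII));
    }
}
```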

Blackfish SQL for Java Specific Streams

Some table streams and all file streams are currently only accessible from Java applications. 

If the resolvable property for the table is set, all insert, update, and delete operations made against the table are recorded. This edit tracking feature enables DataExpress components to synchronize changes from a replicated table to the database from which the table was replicated. 

File streams are random-access files. File streams can be further broken down into two different categories: 

  • Arbitrary files created with DataStoreConnection.createFileStream(): You can write to, seek in, and read from these streams.
  • Serialized Java objects stored as file streams: You can write to, seek in, and read from these streams.

Each stream is identified by a case-sensitive name referred to as a storeName in the API. The name can be up to 192 bytes long. The name is stored along with other information about the stream in the internal directory of the Blackfish SQL database. The forward slash (/) is used as a directory separator in the name to provide a hierarchical directory organization. JdsExplorer uses this structure to display the contents of the directory in a tree.
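A file stream created under a hierarchical name might look like this sketch. The createFileStream() call is named in this chapter, but the surrounding details (return type, open/close sequence) are assumptions, and the code requires the Blackfish SQL for Java libraries.

```java
// Sketch: create an arbitrary file stream under a hierarchical storeName.
DataStoreConnection con = new DataStoreConnection();
con.setFileName("mydb.jds");
con.open();

byte[] reportBytes = "quarterly report".getBytes();
FileStream out = con.createFileStream("reports/2008/q1.txt");  // "/" separates directories
out.write(reportBytes);   // file streams support write, seek, and read
out.close();
con.close();
```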

Advantages of Using the Blackfish SQL File System

For the simple persistent storage of arbitrary files and objects, using the Blackfish SQL file system has a number of advantages over using the JDK classes in the java.io package:

  • It is simpler, because one class is needed instead of four (FileOutputStream, ObjectOutputStream, FileInputStream, ObjectInputStream).
  • You can keep all your application files and objects in a single file and access them easily with a logical name instead of streaming all of your objects to the same file.
  • Your application can use less storage space, due to how disk clusters are allocated by some operating systems. The default block size in a Blackfish SQL database file is small (4KB).
  • Your application is more portable, since you are no longer at the mercy of the host file system. For example, different operating systems have different allowable characters for names. Some systems are case sensitive, while others are not. Naming rules inside the Blackfish SQL file system are consistent on all platforms.
  • Blackfish SQL provides a transactional file system that can also be encrypted and password protected.

 

Blackfish SQL Directory Contents

Note: Currently, the directory for the Blackfish SQL database can be accessed only from Java applications. Fortunately, most applications do not need to access the directory directly. 

The JdsExplorer tree provides a hierarchical view of the Blackfish SQL directory. The Blackfish SQL directory can also be opened programmatically with a DataExpress StorageDataSet component. This provides a tabular view of all streams stored in the Blackfish SQL file system. The directory table has the following structure: 

Blackfish SQL Directory Table Columns

  • State (constant DIR_STATE, short): Whether the stream is active or deleted
  • DeleteTime (DIR_DEL_TIME, long): If deleted, when; otherwise zero
  • StoreName (DIR_STORE_NAME, String): The store name
  • Type (DIR_TYPE, short): Bit fields that indicate the type of stream
  • Id (DIR_ID, int): A unique ID number
  • Properties (DIR_PROPERTIES, String): Properties and events for a DataSet stream
  • ModTime (DIR_MOD_TIME, long): Last time the stream was modified
  • Length (DIR_LENGTH, long): Length of the stream, in bytes
  • BlobLength (DIR_BLOB_LENGTH, long): Length of the Blobs in a table stream, in bytes

You can reference the columns by name or by number. There are constants defined as DataStore class variables for each of the column names. The best way to reference these columns is to use these constants. The constants provide compile-time checking to ensure that you are referencing a valid column. Constants with names ending with the suffix _STATE exist for the different values for the State column. There are also constants for the different values and bit masks for the Type column, with names ending with the suffix _STREAM. See the online help for the DataStore class for a listing of these constants.
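Referencing directory columns through the constants might look like the following sketch. The DIR_* constants are the ones described above; the navigation calls are assumptions based on the DataExpress DataSet API, and the code requires the Blackfish SQL for Java libraries.

```java
// Sketch: scan the Blackfish SQL directory table, referencing columns by constant.
// dir is a StorageDataSet opened on the database directory (see text above).
while (dir.inBounds()) {
    String name = dir.getString(DataStore.DIR_STORE_NAME);  // compile-time checked column
    short state = dir.getShort(DataStore.DIR_STATE);        // active or deleted
    dir.next();
}
```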

Stream Details

Time columns in the Blackfish SQL directory are Coordinated Universal Time (UTC). 

As with many file systems, when you delete a stream in Blackfish SQL, the space it occupied is marked as available, but the contents and the directory entry that points to it are not immediately reused for new allocations. This means you can sometimes restore a deleted stream if it has not been overwritten. 

For more information on deleting and restoring streams, see Deleting Streams, How Blackfish SQL Reuses Blocks, and Restoring Streams. 

The Type column indicates whether a stream is a file or table stream, but there are also many internal table stream subtypes (for example, for indices and aggregates). These internal streams are marked with the HIDDEN_STREAM bit to indicate that they should not be displayed. Of course, when you are reading the directory, you can decide whether these streams should be hidden or visible. 

These internal streams have the same StoreName as the table stream with which they are associated. This means that the StoreName alone does not always uniquely identify a stream. Some internal stream types can have multiple instances. The ID for each stream is always unique; however, the StoreName is sufficiently unique for the storeName parameter used at the API level. For example, when you delete a table stream, all the streams with that StoreName are deleted.

Directory Sort Order

The directory table is sorted by the first five columns. Because of the values stored in the State column, all active streams are listed first in alphabetical order by name. These are then followed by all deleted streams ordered by their delete time, oldest to most recent.  

NOTE: You cannot use a DataSetView to create a different sort order. 

Blackfish SQL File System Storage Allocation

Database contents are stored in a single file. If the database has transaction support enabled, there are additional files for transactional logs.  

A database file has a block size property that defaults to 4096 bytes. The database block size property is the unit size used for new allocations in the database. This size also determines the maximum storage size of a Blackfish SQL database. The formula for computing the maximum database file size is: 

<bytes-per-block> * 2^31 

For a block size of 4096 bytes, this is about 8.8 terabytes. 
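The formula can be checked with a line of arithmetic; this snippet (illustrative names) computes the maximum database file size for a given block size:

```java
// Maximum Blackfish SQL database file size: <bytes-per-block> * 2^31 blocks.
public class MaxDbSize {
    // Returns the maximum database file size in bytes for a given block size.
    static long maxBytes(int blockSizeBytes) {
        return (long) blockSizeBytes * (1L << 31);
    }

    public static void main(String[] args) {
        // Default 4 KB blocks: 4096 * 2^31 bytes, about 8.8 terabytes.
        System.out.println(maxBytes(4096)); // prints 8796093022208
    }
}
```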

A Blackfish SQL database file does not automatically shrink as data is deleted or removed from it. However, new allocations reuse the space from deleted allocations. Deleted space in the file system is made available to new allocations in two ways:

  • Deleted blocks: In this case, an entire block is reallocated from the list of deleted blocks.
  • Blocks that are partially full: In this case, free space can be reused only on a per-stream basis. Specifically, the free space in a block in Table A can be reused only by a new allocation for a row in Table A. From an allocation perspective, tables, secondary indices, Blobs, and files are all separate streams.

On average, partially allocated blocks are kept at least 50 percent full. The file system goes to great lengths to ensure this is true for all stream types in the Blackfish SQL file system. The one exception to this rule occurs when a stream has a small number of blocks allocated. 

A Blackfish SQL database file can be compacted to remove all deleted space and to defragment the file system so that blocks for each stream are located in contiguous regions. To compact a database using JdsExplorer, choose Tools > Pack. You can accomplish this programmatically by using the DB_ADMIN.COPY_USERS and DB_ADMIN.COPY_STREAMS methods.

Deleting Streams

Deleting a stream does not actually overwrite or clear the stream contents. As in most file systems, the space used by the stream is marked as available, and the directory entry that points to that space is marked as deleted. The time at which the stream was deleted is recorded in the directory. Over time, new stream allocations overwrite the space that was formerly occupied by the deleted stream, making the content of the deleted streams unrecoverable. 

You can use JdsExplorer to delete streams, or you can delete streams programmatically using the DataStoreConnection.deleteStream() method, which takes as an argument the name of the stream to delete.

How Blackfish SQL Reuses Blocks

Blocks in the Blackfish SQL database file that were formerly occupied by deleted streams are reclaimed according to the following rules:

  • Blackfish SQL always reclaims deleted space before allocating new disk space for its blocks.
  • If the database is transactional, the transaction that deleted the stream must commit before the used space can be reclaimed.
  • The oldest deleted streams (those with the earliest delete times) are reclaimed first.
  • For table streams, the support streams (those for indices and aggregates) are reclaimed first.
  • Space is reclaimed from the beginning of the stream to the end of the stream. This means you are more likely to recover the end of a file or table than the beginning.
  • Because of the way table data is stored in blocks, you never lose or recover a partial row in a table stream, only complete rows.
  • When all the space for a stream has been reclaimed, the directory entry for the stream is automatically erased, since there is nothing left to restore.

 

Restoring Streams

Blackfish SQL allows deleted streams to be restored if their space has not been consumed by new allocations, as described above. You can restore a stream either by using JdsExplorer or by calling the DataStoreConnection.undeleteStream() method. 

Because table streams have multiple streams with the same name, the stream name alone is not sufficient for attempting to restore a stream programmatically. You must use a row from the Blackfish SQL directory. The row contains enough information to uniquely identify a particular stream. 

The DataStoreConnection.undeleteStream() method takes such a row as a parameter. You can pass the directory dataset itself. The current row in the directory dataset is used as the row to restore. 

If you create a new stream with the name of a deleted stream, you cannot restore that stream while the same name is being used by an active stream.
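Programmatic restoration therefore follows this pattern (a sketch; requires the Blackfish SQL for Java libraries, and assumes dirDataSet is a StorageDataSet opened on the directory and positioned on the deleted stream's row):

```java
// Sketch: restore the deleted stream identified by the current directory row.
// The directory row uniquely identifies the stream; the name alone does not.
connection.undeleteStream(dirDataSet);
```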

Transaction Management

The lifecycle of a transaction begins with any read or write operation through a connection. Blackfish SQL uses stream locks to control access to resources. To read a stream or modify any part of a stream (e.g., a byte in a file, a row in a table), a connection must acquire a lock on that stream. Once a connection acquires a lock on a stream, it holds the lock until the transaction is committed or rolled back. 

In single-connection applications, transactions primarily provide crash recovery and allow an application to undo changes. You might also make a Blackfish SQL database transactional so that it can be accessed through JDBC. If you want to access that Blackfish SQL database using DataExpress, you must deal with transactions.

Transaction Isolation Levels

Blackfish SQL supports all four isolation levels specified by the ANSI/ISO SQL (SQL/92) standards.  

The serializable isolation level provides complete transaction isolation. An application can employ a weaker isolation level to improve performance or to avoid lock manager deadlocks. Weaker isolation levels are susceptible to one or more of the following isolation violations:

  • Dirty reads: One connection is allowed to read uncommitted data written by another connection.
  • Nonrepeatable reads: A connection reads a committed row, another connection changes and commits that row, and the first connection rereads that row, getting a different value the second time.
  • Phantom reads: A connection reads all of the rows that satisfy a WHERE condition, a second connection adds another row that also satisfies the condition, and when the first connection reads again it sees the new row that was not there before.

SQL-92 defines the four levels of isolation in terms of the behavior that a transaction running at a particular isolation level is permitted to experience: 

SQL Isolation Level Definitions

  • Read uncommitted: dirty reads possible; nonrepeatable reads possible; phantom reads possible
  • Read committed: dirty reads not possible; nonrepeatable reads possible; phantom reads possible
  • Repeatable read: dirty reads not possible; nonrepeatable reads not possible; phantom reads possible
  • Serializable: dirty reads not possible; nonrepeatable reads not possible; phantom reads not possible

 

Choosing an Isolation Level for a Blackfish SQL Connection

Guidelines for choosing an isolation level for a connection include: 

Isolation Level Guidelines

  • Read Uncommitted: Suitable for single-user applications and for reports that allow transactionally inconsistent views of the data. It is especially useful when browsing Blackfish SQL tables with dbSwing and DataExpress DataSet components. This isolation level incurs minimal locking overhead.
  • Read Committed: Commonly used for high-performance applications. It is ideal for data access models that use optimistic concurrency control, in which read operations are generally performed first and are sometimes performed in a separate transaction from write operations.
  • Repeatable Read: Provides more protection for transactionally consistent data access without the reduced concurrency of TRANSACTION_SERIALIZABLE. However, it incurs increased locking overhead because row locks must be acquired and held for the duration of the transaction.
  • Serializable: Provides complete serializability of transactions at the risk of reduced concurrency and increased potential for deadlocks. Although row locks can still be used for common operations at this isolation level, some operations cause the Blackfish SQL lock manager to escalate to table locks.
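Applications typically select one of these levels through the standard JDBC constants. The mapping helper below is illustrative; on a live connection you would call setTransactionIsolation() with the chosen constant.

```java
import java.sql.Connection;

public class IsolationDemo {
    // Maps the level names used above to the standard JDBC constants.
    static int levelFor(String name) {
        switch (name) {
            case "Read Uncommitted": return Connection.TRANSACTION_READ_UNCOMMITTED;
            case "Read Committed":   return Connection.TRANSACTION_READ_COMMITTED;
            case "Repeatable Read":  return Connection.TRANSACTION_REPEATABLE_READ;
            case "Serializable":     return Connection.TRANSACTION_SERIALIZABLE;
            default: throw new IllegalArgumentException(name);
        }
    }

    public static void main(String[] args) {
        // On a live connection: conn.setTransactionIsolation(levelFor("Read Committed"));
        System.out.println(levelFor("Serializable"));
    }
}
```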

 

Blackfish SQL Locking

The locks used by the Blackfish SQL Lock Manager are: 

Blackfish SQL Locking

  • Critical section locks: Internal locks used to protect internal data structures. They are usually held for a short period of time, and are acquired and released independent of when the transaction commits.
  • Row locks: Lock a row in a table. These locks support shared and exclusive modes and are released when the transaction commits.
  • Table locks: Lock an entire table. These locks support shared and exclusive modes and are released when the transaction commits.
  • DDL table locks: Acquired when database metadata is created, altered, or dropped. These locks support shared and exclusive modes:
    • Shared DDL locks are held by transactions that have tables open. They are held until the transaction commits and the connection closes the table and all statements that refer to the table.
    • Exclusive DDL locks are used when a table must be dropped or structurally modified, and are released when the transaction commits.

 

 

Controlling Blackfish SQL Locking Behavior

You can specify case-insensitive connection properties to control locking behavior. The property names are: 

Case-Insensitive Connection Properties for Controlling Locking Behavior

  • tableLockTables: Specifies the tables for which row locking is to be disabled, as a string of semicolon-separated, case-sensitive table names. Set this property to “*” to disable row locking for all tables.
  • maxRowLocks: Specifies the maximum number of row locks per table that a transaction acquires before escalating to a table lock. The default value is 50.
  • lockWaitTime: Specifies the maximum number of milliseconds to wait for a lock to be released by another transaction. When this timeout expires, an appropriate exception is thrown.
  • readOnlyTxDelay: Specifies the maximum number of milliseconds to wait before starting a new read-only view of the database. For details, see the discussion of read-only transactions in Tuning Blackfish SQL Concurrency Control Performance.
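These properties are typically supplied when the connection is opened. The sketch below builds a java.util.Properties object with illustrative values; the commented DriverManager call is an assumption (see Establishing Connections for the actual URL and driver).

```java
import java.util.Properties;

public class LockingProps {
    // Builds connection properties that tune Blackfish SQL locking behavior.
    static Properties lockingProperties() {
        Properties props = new Properties();
        props.setProperty("maxRowLocks", "100");   // escalate to a table lock after 100 row locks
        props.setProperty("lockWaitTime", "5000"); // wait up to 5 seconds for a lock
        props.setProperty("tableLockTables", "*"); // disable row locking for all tables
        return props;
    }

    public static void main(String[] args) {
        // On a live system: DriverManager.getConnection(url, lockingProperties());
        System.out.println(lockingProperties().getProperty("maxRowLocks"));
    }
}
```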

 

Blackfish SQL Locking and Isolation Levels

The use of table locks and row locks varies between the different isolation levels. The tableLockTables connection property disables row locking and affects all isolation levels. Critical section and DDL locks are applied in the same manner for all isolation levels. 

All isolation levels acquire at least an exclusive row lock for row update, delete, and insert operations. In some lock escalation scenarios, an exclusive table lock occurs instead. 

The row locking behavior of each Blackfish SQL connection isolation level is: 

Lock Use and Isolation Levels

  • Read Uncommitted: Does not acquire row locks for read operations, and ignores exclusive row locks held by other connections that have inserted or updated a row.
  • Read Committed: Does not acquire row locks for read operations. A transaction using this isolation level blocks when reading a row that has an exclusive lock held by another transaction for an insert or update operation. The block ends when the write transaction commits, a deadlock is detected, or the lockWaitTime limit expires.
  • Repeatable Read: Acquires shared row locks for read operations.
  • Serializable: Acquires shared row locks for queries that select a row based on a unique set of column values, such as a primary key or INTERNALROW column. In SQL, the WHERE clause determines whether unique column values are being selected. Exclusive row locks are also used for insert operations, and for update and delete operations on rows identified by a unique set of column values. The following operations escalate to a shared table lock:

    • Read operations that do not select rows based on a unique set of column values
    • Read operations that fail to find any rows
    • Insert and update operations performed on a non-uniquely specified row

 

Note that although lock escalation from row locks to table locks occurs in some situations for Serializable as described above, it also occurs for all isolation levels if the maxRowLocks property is exceeded.

Concurrency Control Changes

Blackfish SQL database files created with earlier versions of Blackfish SQL continue to use table locking for concurrency control. There are, however, some minor concurrency control improvements for older database files. These include:

  • Support for Read Uncommitted and Serializable connection isolation levels
  • Shared table locks for read operations; earlier versions of Blackfish SQL software used exclusive table locks for read and write operations.

 

Blackfish SQL for Windows Connection Pooling

In an application that opens and closes many database connections, it is efficient to keep unused Connection objects in a pool for future reuse. This saves the overhead of having to create a new physical connection each time a connection is needed.
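The pooling idea can be sketched generically (this is an illustration of the concept, not the Blackfish SQL implementation): a released connection goes back into the pool, so the next request reuses it instead of paying for a new physical connection.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int created = 0;

    public SimplePool(Supplier<T> factory) { this.factory = factory; }

    public T acquire() {
        if (!idle.isEmpty()) return idle.pop();  // reuse an unused connection
        created++;
        return factory.get();                    // create only when the pool is empty
    }

    public void release(T conn) { idle.push(conn); }

    public int createdCount() { return created; }
}
```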

Using the DBXClient dbExpress Driver

Use the TDBXPool delegate driver to provide connection pooling support for DBXClient connections.

Using ADO.NET Providers

By default, the .NET client drivers implement a connection pool that is always in effect. Each ConnectionString has its own pool. There are connection properties that affect the maximum number of connections in a pool, and other attributes. There is also a ConnectionPool property that provides access to all active connection pools.

Blackfish SQL for Java Connection Pooling and Distributed Transaction Support

Blackfish SQL provides several components for dealing with JDBC DataSources, connection pooling, and distributed transaction (XA) support. These features require J2EE. If you are running a JRE version earlier than 1.4, download the J2EE.jar file from java.sun.com and add it to the classpath.

Blackfish for Java Connection Pooling

The JdbcConnectionPool object supports pooled XA transactions. This feature allows Blackfish SQL to participate in a distributed transaction as a resource manager. Blackfish SQL provides XA support by implementing these standard interfaces specified in the Java Transaction API (JTA) specification:

  • javax.sql.XAConnection
  • javax.sql.XADataSource
  • javax.transaction.xa.XAResource

To acquire a distributed connection to a Blackfish SQL database from a JdbcConnectionPool, call JdbcConnectionPool.getXAConnection(). The connection returned by this method works only with the Blackfish SQL JDBC driver. XA support is useful only when combined with a distributed transaction manager, such as the one provided by Borland Enterprise Server. 

Under normal operation, all global transactions should be committed or rolled back before the associated XAConnection is closed. If a connection is participating in a global transaction that is not yet in a prepared state but is in a successful started or suspended state, the transaction will be rolled back during any crash recovery that may occur.

The Heuristic Completion JDBC Extended Property

Blackfish SQL provides heuristicCompletion, an extended JDBC property that allows you to control the behavior when one or more databases fail during a two-phase commit. When XA transactions are prepared but not completed (no commit or rollback has been performed), the behavior is determined by one of these property settings:

  • commit causes the transaction to be heuristically committed when Blackfish SQL reopens the database. This is the default.
  • rollback causes the transaction to be heuristically rolled back when Blackfish SQL reopens the database.
  • none causes Blackfish SQL to keep the transaction state when reopening a database. When this option is used, the locks that were held when the transaction was prepared are reacquired and held until the transaction is committed or rolled back by a JTA/JTS-compliant transaction manager.

The heuristic commit and rollback options allow for more efficient execution, because the locks can be released sooner and less information is written to the transaction log file.

NOTE: This feature is currently available only in Blackfish SQL for Java, not in Blackfish SQL for Windows. 
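The recovery rules above can be modeled as a small state machine. This is a toy illustration, not Blackfish SQL code; the state names are assumptions, but the outcomes follow the documented behavior: only a transaction that reached the prepared state is subject to heuristic completion, and anything merely started or suspended is rolled back during crash recovery.

```java
// Toy model of crash-recovery outcomes for an XA transaction.
enum TxState { STARTED, SUSPENDED, PREPARED, COMMITTED, ROLLED_BACK }

public class RecoveryModel {
    /** heuristicCompletion is "commit" (the default), "rollback", or "none". */
    public static TxState recover(TxState atCrash, String heuristicCompletion) {
        if (atCrash != TxState.PREPARED) {
            return TxState.ROLLED_BACK;               // not yet prepared: always rolled back
        }
        switch (heuristicCompletion) {
            case "commit":   return TxState.COMMITTED;   // heuristically committed on reopen
            case "rollback": return TxState.ROLLED_BACK; // heuristically rolled back on reopen
            default:         return TxState.PREPARED;    // "none": locks reacquired, wait for TM
        }
    }
}
```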

High Availability

One of the most important areas of concern for any database application is eliminating single points of failure. The Blackfish SQL server provides a broad range of capabilities for making a database application fail-safe by avoiding application down time and loss of critical data. The High Availability server uses database mirroring technologies to ensure data availability in the event of software or hardware failure, and to provide a method of routine incremental backup. While a more general database replication scheme could provide similar protection, a mirroring approach provides advantages in terms of simplicity, ease of use, and performance. 

A more general data replication solution could be employed to solve many of the same problems that the Blackfish SQL High Availability server addresses. Although such a solution would cover a broader variety of synchronization needs, it would do so at much higher cost, including greater complexity and slower performance. 

The Blackfish SQL database engine uses its transactional log files to maintain read-only mirror images of a database. The same TCP/IP database connections used for general database access are also used to synchronize mirrored databases.

Mirror Types

These mirror types can be used by an application:

  • Primary
  • Read-only
  • Directory

 

The Primary Mirror

The primary mirror is the only mirror type that can accept both read and write operations to the database. Only one primary mirror is allowed at any time.

Read-only Mirrors

There can be any number of read-only mirrors. Connections to these databases can only perform read operations. Read-only mirrors provide a transactionally consistent view of the primary mirror database. However, a read-only mirror database might not reflect the most recent write operations against the primary mirror database. Read-only mirrors can be synchronized with changes to the primary mirror instantly, on a scheduled basis, or manually. Instant synchronization is required for automatic failover. Scheduled and manual synchronization can be used for incremental synchronization or backup.

Directory Mirrors

Directory mirror databases mirror only the mirror configuration table and the tables needed for security definition. They do not mirror the actual application tables in the primary mirror.  

There can be any number of directory mirrors. Connections to these databases can perform only read operations. The storage requirements for a directory mirror database are very small, since it contains only the mirror configuration table and the security tables. Directory mirrors redirect read-only connection requests to read-only mirrors, and writable connection requests to the primary mirror.
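The redirection rule can be sketched as follows (a toy illustration, not the Blackfish SQL implementation; round-robin balancing across read-only mirrors is an assumption, though the load-balancing benefit is described later in this chapter):

```java
import java.util.List;

public class MirrorRouter {
    /** Route a connection request the way a directory mirror would. */
    public static String route(boolean readOnly, int requestSeq,
                               String primary, List<String> readOnlyMirrors) {
        if (!readOnly || readOnlyMirrors.isEmpty()) {
            return primary;                                      // writes go to the primary
        }
        // balance read-only requests across the read-only mirrors
        return readOnlyMirrors.get(requestSeq % readOnlyMirrors.size());
    }
}
```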

The Blackfish SQL Engine and Failover

Blackfish SQL Engine failover handling capabilities include the following:

  • Transaction log records
  • Automatic failover
  • Manual failover

 

Transaction log records

Blackfish SQL uses transaction log records to incrementally update mirrored databases. It transmits these log records to mirrors at high speed during synchronization operations. The same mechanism used for crash recovery and rollback is used to apply these changes. Existing code is used for all synchronization. The existing Blackfish SQL support for read-only transactions provides a transactionally consistent view of the mirrored data while the mirror is being synchronized with contents of more recent write transactions from the primary mirror.

Automatic failover

When a primary mirror that is configured with two or more automatic failover mirrors fails, one of the read-only mirrors that is configured for automatic failover is promoted to be the primary mirror. The application can be affected in one of two ways:

  • If an application has already connected to the primary before it failed, all operations attempted against the failed primary will encounter SQLException or IOException errors. The application can cause itself to be hot swapped over to the new primary by rolling back the transaction. This is identical to how database deadlock is handled in high-concurrency OLTP applications.
  • If an application has never connected to the primary before it failed, its connection attempt fails. Directory mirrors can be used to automatically redirect new connection requests to the new primary mirror.

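The first case above can be sketched as a generic retry skeleton (illustrative JDBC-style code, not Blackfish-specific): when an operation fails because the primary went away, the application rolls back and retries, and the retried transaction runs against the newly promoted primary.

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class FailoverRetry {
    public static <T> T runWithRetry(Callable<T> txn, Runnable rollback,
                                     int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return txn.call();                 // normal path: work committed inside txn
            } catch (SQLException failedPrimary) {
                rollback.run();                    // rolling back hot-swaps to the new primary
                if (attempt >= maxAttempts) throw failedPrimary;
            }
        }
    }
}
```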
 

Manual failover

Unlike automatic failover, manual failover is performed only on request. Any read-only mirror can become the primary mirror. This is useful when the computer on which the primary server is running must be taken offline for system maintenance.

Advantages of the High Availability Server

The Blackfish SQL High Availability server provides a broad range of benefits, including:

  • No single point of failure Since multiple copies of the same database are maintained across several computers, there is no need for a shared storage device. The High Availability server maintains the highest level of database availability with no single point of failure and with high speed failover and recovery, guaranteed data consistency, and transaction integrity.
  • Complete data and disaster protection By maintaining copies of the database on multiple servers, the High Availability server guarantees that data remains intact in the event of media failure, a server crash, or any other catastrophic event.
  • Single, highly tuned network transport layer The high-performance transport layer used for current database connections is also used for all synchronization operations.
  • Portability The file format and synchronization is portable across all platforms that are capable of executing a Java Virtual Machine.
  • Large cost savings The High Availability server provides significant savings on today's high availability equipment and labor costs. It runs on standard low-cost hardware, with no need for special technology such as shared disks, private LANs, or Fibre Channel, and no additional software beyond a standard operating system such as Linux, Windows, Solaris, or Mac OS X.
  • Easy to set up, administer and deploy The High Availability server provides a high-performance, easy-to-use solution for some common database problems. There is no need for clustering expertise. All configuration settings and explicit operations can be performed using the Blackfish SQL Server Console, SQL scripts, or Java code.
  • Increased scalability and load balancing Read-only operations can be performed against read-only mirrors, reducing the transaction work load of the primary mirror, which must be used for all transactions that perform write operations. By connecting to directory mirrors, new connection requests can be balanced across several read-only mirrors. This can dramatically reduce the work load of the primary server.
  • Synch delegation You can specify the mirror to be used to synchronize another mirror. This allows the primary mirror to synchronize just one or a small number of read-only mirrors. These read-only mirrors can then synchronize other mirrors. This reduces the workload of the primary mirror, which must service all write requests.
  • Incremental database backup Read-only mirrors can be synchronized with the primary mirror automatically by scheduling one or more synchronization time periods. Read-only mirrors can also be used for manual backup by making an explicit synchronization request.
  • Distributed directory Since this failover system supports the automatic and manual failover of servers, a distributed directory mechanism is useful for locating the primary mirror and available read-only mirrors. All mirrors maintain a table of all other mirrors. An application can make any type of connection request (read/write or read-only) to any existing mirror. The mirror uses the mirror table to determine where the current mirrors are located.

NOTE: This feature is supported only for Blackfish SQL for Java. 

Heterogeneous Replication Using DataExpress

The replication support provided by DataExpress for Blackfish SQL is easier to use and deploy than most replication solutions. This replication solution is also heterogeneous because it uses JDBC for database access. 

The replication topology provided is best described as a simple client-server relationship. Blackfish SQL does not require the server-side software or database triggers required for more complex publish-subscribe solutions. Complex multi-level hierarchies and network topologies are not directly supported. 

There are three distinct phases in the replication cycle when using DataExpress JavaBean components with Blackfish SQL for a disconnected or mobile computing model:

  • Provide Phase: Provides the client database with a snapshot of the server tables being replicated.
  • Edit Phase: The client application, which need not be connected to the database, reads/edits the client database.
  • Resolve Phase: The client database edits are saved back to the server database.

 

The Provide Phase

A StorageDataSet provider implementation initially replicates database contents from the server into a client. The client is always a Blackfish SQL database. The server is typically some server that can be accessed with a JDBC driver. The JDBC provider uses either a SQL query or stored procedure to provide data that will be replicated in the client-side Blackfish SQL database. Since there is no server-side software running in this architecture, there is no support for incremental updates from the server to the client. If the client needs to be refreshed, the same SQL query or stored procedure used to provide the initial replication must be re-executed.

A StorageDataSet provider is a pluggable interface. QueryDataSet and ProcedureDataSet are extensions of StorageDataSet that preconfigure JdbcProviders to execute SQL queries and stored procedures to populate a StorageDataSet. For memory-constrained clients such as PDAs, a DataSetData component can be used to provide data. The DataSetData component uses Java Serialization to create a data packet that can be easily transmitted between a client and server. 

The provide operation for a collection of database tables can be performed in a single transaction, so that a transactionally consistent view of a collection of tables can be replicated.

The Edit Phase

Once the provide phase is complete, both DataExpress and JDBC APIs can read and write to the Blackfish SQL tables that are replicating the database. All insert/update/delete write operations since the last provide operation are automatically tracked by the Blackfish SQL storage system. Part of the StorageDataSet store interface contract is that all insert/update/delete operations must be recorded if the StorageDataSet property resolvable is set.

The Resolve Phase

DataExpress provides an automatic mechanism for using SQL DML or stored procedures to save all changes made on the client back to a server, via a JDBC driver. An optimistic concurrency approach is used to save the changes back to the server. One or more tables can be resolved in a single transaction. By default, any conflicts, such as two users updating the same row, cause the transaction to roll back. However, there is a SqlResolutionManager JavaBean component that you can use to customize the handling of resolution errors. The SqlResolutionManager has event handlers that enable an application to respond to an error condition with an ignore, retry, abort, log, or other appropriate response. 
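The optimistic check at the heart of the resolve phase can be illustrated with a toy model (not DataExpress code; the row representation as a map is an assumption): a client edit is applied only if the server row still matches the snapshot taken during the provide phase, and a mismatch signals a conflict.

```java
import java.util.Map;
import java.util.Objects;

public class OptimisticResolve {
    /**
     * Apply a client edit to the server row using optimistic concurrency.
     * Returns false (conflict: another user changed the row) if the server
     * row no longer matches the snapshot taken when the row was provided.
     */
    public static boolean resolveUpdate(Map<String, Object> serverRow,
                                        Map<String, Object> snapshot,
                                        Map<String, Object> clientEdit) {
        if (!Objects.equals(serverRow, snapshot)) {
            return false;                 // conflict: caller should roll back or retry
        }
        serverRow.clear();
        serverRow.putAll(clientEdit);     // no conflict: apply the client's change
        return true;
    }
}
```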

There are also higher-level DataStorePump and DataStoreSync components that you can use to perform provide and resolve operations for a collection of tables. For details, see Administering Blackfish SQL.

Copyright(C) 2009 Embarcadero Technologies, Inc. All Rights Reserved.