Blackfish SQL
Optimizing Blackfish SQL Applications

This section discusses ways to improve the performance, reliability, and size of Blackfish SQL applications. Unless otherwise specified, DataStoreConnection refers to either a DataStoreConnection or DataStore object used to open a connection to a Blackfish SQL database file.

  • Loading Databases Quickly
  • General Recommendations
  • Optimizing Transactional Applications
  • Pruning Deployed Resources for Blackfish SQL for Java Applications
  • AutoIncrement Columns
  • Blackfish SQL Companion Components
  • Using Data Modules for DataExpress Components

Loading Databases Quickly

Here are some tips that can improve the performance of your application when loading databases:

  • Use prepared statements or commands whenever possible. If the number of parameters changes from one insert to the next, clear the parameters before setting the new parameters.
  • Create the table without primary keys, foreign keys, or secondary indexes. Load the table and then create any needed primary keys, foreign keys, or secondary indexes.
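The two tips above can be combined in a plain JDBC sketch. The JDBC URL, table name, column names, and index name below are illustrative placeholders, not part of the documented API:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class FastLoad {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL; use the connection URL appropriate for your driver.
        try (Connection con = DriverManager.getConnection(
                "jdbc:borland:dslocal:/data/orders.jds")) {
            con.setAutoCommit(false);
            // Prepare once, execute many times, so the statement is parsed only once.
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO items (id, name) VALUES (?, ?)")) {
                for (int i = 0; i < 10_000; i++) {
                    ins.clearParameters();   // reset bindings before the next row
                    ins.setInt(1, i);
                    ins.setString(2, "item-" + i);
                    ins.executeUpdate();
                }
            }
            con.commit();
            // Build secondary indexes only after the bulk load has finished.
            try (Statement s = con.createStatement()) {
                s.executeUpdate("CREATE INDEX items_name_ix ON items (name)");
            }
            con.commit();
        }
    }
}
```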

 

Java-specific Database Loading Optimizations

Use the DataExpress TextDataFile class to import text files. It has a fast parser and can load data quickly. Set the StorageDataSet store property to a DataStoreConnection, and set the StoreName property to the name of your table in the Blackfish SQL database. When loading a new database:

  1. First create the database as non-transactional.
  2. While the database is non-transactional, use a DataExpress StorageDataSet.addRow or TextDataFile component to load the database.
  3. After the database has loaded, use the DataStore.TxManager property to make the database transactional.

This technique should enable the database to load two to three times faster.
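The three steps above can be sketched as follows. This assumes the DataStore and TxManager class names used in this section; the file name is a placeholder and the setter names should be verified against your DataExpress version:

```java
import com.borland.datastore.DataStore;
import com.borland.datastore.TxManager;

public class LoadNonTx {
    public static void main(String[] args) throws Exception {
        // 1. Create the database non-transactional (no transaction manager set).
        DataStore store = new DataStore();
        store.setFileName("orders.jds");
        store.create();

        // 2. Bulk-load here with StorageDataSet.addRow or a TextDataFile ...

        store.close();

        // 3. Reopen with a transaction manager to make the database transactional.
        DataStore txStore = new DataStore();
        txStore.setFileName("orders.jds");
        txStore.setTxManager(new TxManager());
        txStore.open();
        txStore.close();
    }
}
```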

General Recommendations

This section provides some general guidelines for improving performance for Blackfish SQL applications.

Proper Database Shutdown

If a database is not properly shut down, the next time a process opens the database, there will be a delay. This is because Blackfish SQL needs about 8 to 10 seconds to ensure that no other process has the database open. To ensure that a database is shut down properly, make sure all connections are closed when they are no longer needed. If it is difficult to ensure that all connections are closed, a connection with Administrator rights can call the DB_ADMIN.CLOSE_OTHER_CONNECTIONS built-in stored procedure to ensure that all other connections are closed. 
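For example, an administrative connection could force the remaining connections closed before shutdown. This is a sketch; the JDBC URL and credentials are placeholders, and the call requires Administrator rights:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ForceClose {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:borland:dslocal:/data/orders.jds", "SYSDBA", "masterkey");
             CallableStatement call =
                 con.prepareCall("CALL DB_ADMIN.CLOSE_OTHER_CONNECTIONS()")) {
            call.execute();   // closes every connection except this one
        }
    }
}
```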

Another benefit to closing connections is that when they are all closed, the memory allocated to the Blackfish SQL cache is released. 

Currently, non-transactional databases can be accessed from SQL only if the database read-only property is true. However, DataExpress JavaBeans can perform write operations on a non-transactional database.  

Closing a non-transactional Blackfish SQL database ensures that all modifications are saved to disk. A daemon thread for all open DataStoreConnection instances constantly saves modified cache data. (By default, modified data is saved every 500 milliseconds.) If you exit the Java Virtual Machine without closing the database, the daemon thread might not have the opportunity to save the last set of changes, so there is a small chance that a non-transactional Blackfish SQL database could become corrupted.

A transactional Blackfish SQL database is guaranteed not to lose data, but the transaction manager rolls back any uncommitted changes.

Java-specific Database Shutdown

If your application is using DataExpress JavaBean components, close all StorageDataSets that have the store property set to a DataStoreConnection when you are finished with them. This frees up Blackfish SQL resources associated with the StorageDataSet and allows the StorageDataSet to be garbage collected. 

You can use the DataStore.shutdown() method to make sure all database connections are closed before an application terminates.
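As a safety net, a JVM shutdown hook can make that call at exit. This is a sketch; it assumes DataStore.shutdown() is invocable as shown in your version of the API:

```java
import com.borland.datastore.DataStore;

public class CleanExit {
    public static void main(String[] args) {
        // Safety net: close any remaining Blackfish SQL connections on JVM exit.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                DataStore.shutdown();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }));
        // ... application work ...
    }
}
```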

Optimizing the Blackfish SQL Disk Cache

The default maximum cache size for a Blackfish SQL database is 512 cache blocks. The default block size is 4096 bytes. Therefore, the cache memory reaches its maximum capacity at approximately 512*4096 (2MB). Note that this memory is allocated as needed. In some rare situations when all blocks are in use, the cache may grow beyond 512 cache blocks. You can use the DataStore.MinCacheSize property to specify the minimum cache size. 

NOTE: Do not arbitrarily change the database cache size. Be sure first to verify that doing so will improve the performance of your application. 

Keep in mind the following considerations when changing the Blackfish SQL cache size:

  • Modern OS caches are typically high performance. In many cases, increasing the Blackfish SQL cache size does not significantly improve performance, and simply uses more memory.
  • There is only one Blackfish SQL disk cache for all Blackfish SQL databases open in the same process. When all Blackfish SQL databases are shut down, the memory for this global disk cache is released.
  • For handheld devices with small amounts of memory, set the DataStore.MinCacheSize property to a smaller number, such as 96.

 

Optimizing File Access

Blackfish SQL databases perform the majority of read/write operations against the following four file types:

  • The Blackfish SQL database file itself (filename extension is .jds) as specified by the DataStore.FileName property
  • Blackfish SQL transactional log files (filename extension is LOGAnnnnnnnnnn, where n is a numeric digit) as specified by the TxManager.ALogDir property
  • Temporary files used for large sort operations as specified by the DataStore.TempDirName property
  • Temporary .jds files used for SQL query results as specified by the DataStore.TempDirName property

You can potentially improve performance by instructing Blackfish SQL to place the files mentioned above on different disk drives.

File Storage

The following are some guidelines for file storage handling that can improve performance of your applications:

  • It is especially important to place the log files on a separate disk drive. Note that log files are generally appended in sequential order, and their contents must be forced to disk in order to complete commit operations. Consequently, it is advantageous to have a disk drive that can complete write operations quickly.
  • On Win32 platforms, performance can be improved by placing Blackfish SQL log files in a separate directory. Storing numerous files other than the log files in the log file directory can slow down the performance of commit operations. This performance tip may also apply to platforms other than Windows NT/2000/XP.
  • Remember to defragment your disk drive file systems on a regular basis. This practice is especially important for the disk drive that stores the log files, because Blackfish SQL performs many sequential read/write operations to this file.
  • For Win32 platforms, consider using a FAT32 file system with a large cluster size such as 64KB for the disk drive to which your log files are written.

 

Non-transactional Database Disk Cache Write Options for Java

Note: This section applies only to Blackfish SQL for Java, which uses the DataExpress JavaBean components. 

Use the saveMode property of the DataStore component to control how often cache blocks are written to disk. This property applies only to non-transactional Blackfish SQL databases. The following are valid values for the property:

  • 0: Let the daemon thread handle all cache writes. This setting gives the highest performance, but the greatest risk of corruption.
  • 1: Save immediately when blocks are added or deleted; let the daemon thread handle all other changes. This is the default mode. Performance is almost as good as with saveMode 0.
  • 2: Save all changes immediately. Use this setting whenever you debug an application that uses a DataStore component.

Unlike other properties of DataStore, saveMode can be changed when the connection is open. For example, if you are using a DataStoreConnection, you can access the value through the dataStore property:

DataStoreConnection store = new DataStoreConnection();
...
store.getDataStore().setSaveMode(2);

Note that this changes the behavior for all DataStoreConnection objects that access that particular Blackfish SQL database file.

Tuning Memory

You can tune the use of memory in a number of ways. Be aware that asking for too much memory can be as bad as having too little.

  • Try increasing the ConnectionProperties.MinCacheBlocks property, which controls the minimum number of blocks that are cached.
  • The ConnectionProperties.MaxSortBuffer property controls the maximum size of the buffer used for in-memory sorts. Sorts that exceed this buffer size use a slower disk-based sort.

 

Java-specific Memory Tuning

The Java heap tends to resist growing beyond its initial size, forcing frequent garbage collection with an ever-smaller amount of free heap. Use the JVM -Xms option to specify a larger initial heap size. It is often beneficial to make the JVM -Xms and -Xmx settings equal.
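For example (the heap size here is illustrative, and the main class is a placeholder; tune both for your application):

```shell
# Fixed 256 MB heap: initial size (-Xms) equals maximum size (-Xmx), so the
# heap never has to grow and frequent GC in a cramped heap is avoided.
java -Xms256m -Xmx256m com.example.MyBlackfishApp
```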

Miscellaneous Performance Tips

Here are some tips that can help performance:

  • Setting the ConnectionProperties.TempDirName property, used by the query engine, to a directory on another (fast) disk drive can often help.
  • Try changing the check point frequency for the Transaction Manager. A higher value can improve performance, but might result in slower crash recovery. This can be updated from SQL by using the DB_ADMIN.ALTER_DATABASE built-in stored procedure. In Blackfish SQL for Java, you can use JdsExplorer to set this property by choosing TxManager > Modify.

Optimizing Transactional Applications

The increased reliability and flexibility you gain from using transactional Blackfish SQL databases comes at the price of some performance. You can reduce this cost in several ways, as described in the following sections.

Using Read-only Transactions

For transactions that are reading but not writing, significant performance improvements can be realized by using a read-only transaction. The connection readOnly property controls whether a transaction is read-only. The Blackfish SQL for Java DataStoreConnection JavaBean has a readOnlyTx property to enable read-only transactions. 

Read-only transactions work by simulating a snapshot of the Blackfish SQL database. This snapshot sees only data from transactions that were committed at the point the read-only transaction starts. This snapshot is created when the DataStoreConnection opens, and it refreshes every time a commit method is called. 

Another benefit of read-only transactions is that they are not blocked by writers or other readers. Both reading and writing usually require a lock. But because a read-only transaction uses a snapshot, it does not require any locks. 

You can further optimize the application by specifying a value for the property readOnlyTxDelay. The readOnlyTxDelay property specifies the maximum age (in milliseconds) for an existing snapshot that the connection can share. When the property is non-zero, existing snapshots are searched from most recent to oldest. If there is one that is under readOnlyTxDelay in age, it is used and no new snapshot is taken. By default, this property is set to 5000 milliseconds.
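A snapshot-based reporting connection can be sketched with the readOnlyTx and readOnlyTxDelay properties named above (the setter names are assumed from the JavaBean property names; the file name is a placeholder):

```java
import com.borland.datastore.DataStoreConnection;

public class ReportConnection {
    public static void main(String[] args) throws Exception {
        DataStoreConnection con = new DataStoreConnection();
        con.setFileName("orders.jds");
        con.setReadOnlyTx(true);       // snapshot view: no locks acquired
        con.setReadOnlyTxDelay(5000);  // share an existing snapshot up to 5 s old
        con.open();
        // ... run long report queries against the snapshot ...
        con.close();
    }
}
```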

Using Soft Commit Mode

If you enable soft commit mode through the SoftCommit property, the transaction manager still writes log records for committed transactions, but does not use a synchronous write mechanism for commit operations. With soft commit enabled, the operating system cache can buffer file writes from committed transactions. Typically the operating system ends up writing dirty cache blocks to disk within seconds. Soft commit improves performance, but cannot guarantee the durability of the most recently committed transactions. You can set the SoftCommit property by calling the DB_ADMIN.ALTER_DATABASE built-in stored procedure.

Disabling Status Logging for Transaction Log Files

You can improve performance by disabling the logging of status messages. To do this, set the RecordStatus property to false. You can set the RecordStatus property by calling the DB_ADMIN.ALTER_DATABASE built-in stored procedure.

Tuning Blackfish SQL Concurrency Control Performance

The following are guidelines for optimizing the performance of Blackfish SQL concurrency control operations:

  • Choose the weakest isolation level with which your application can function properly. Lower isolations tend to acquire fewer and weaker locks.
  • Batch multiple statements into a single transaction. Connections default to autocommit mode, which commits after every statement execution.
  • Commit transactions as soon as possible. Most locks are not released until a transaction is committed or rolled back.
  • Reuse statement or command objects whenever possible, or better yet, use prepared statements or commands when possible.
  • Close all statements or commands, all result sets or readers, and all connection objects when they are no longer needed. Single-directional result set or reader objects automatically close when the last row is read.
  • Use read-only transactions for long-running reports or online backup operations. Use the DB_ADMIN.COPYDATABASE method for online backups. Read-only transactions provide a transactionally consistent (serializable), read-only view of the tables they access. They do not acquire locks, so lock timeouts and deadlocks are not possible. See the section Using Read-only Transactions.
  • There is some overhead for maintaining a read-only view. Consequently, multiple transactions can share the same read-only view. The ConnectionProperties.ReadOnlyTxDelay property specifies how old the read-only view can be when a read-only transaction is started. Committing the transaction for a read-only connection refreshes the view of the database. Note that a read-only transaction uses the transactional log files to maintain views. Therefore, read-only connections should be closed as soon as they are no longer needed.
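The batching and early-commit advice above can be sketched with plain JDBC (the table and column names are placeholders):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferFunds {
    // Batch both updates into one transaction instead of relying on
    // autocommit, so locks are acquired once and released promptly at commit.
    static void transfer(Connection con, int fromId, int toId, long cents)
            throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement upd = con.prepareStatement(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            upd.setLong(1, -cents);
            upd.setInt(2, fromId);
            upd.executeUpdate();
            upd.setLong(1, cents);
            upd.setInt(2, toId);
            upd.executeUpdate();
            con.commit();              // releases the row locks
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}
```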

 

Using Multithreaded Operations

Write transaction throughput can increase as more threads are used to perform operations, because each thread can share in the overhead of commit operations through the “group commit” support provided by Blackfish SQL.

Pruning Deployed Resources for Blackfish SQL for Java Applications

When deploying a Blackfish SQL application, you can exclude certain classes and graphics files that are not used.

  • If Blackfish SQL is used without the JDBC driver, exclude the following classes:
    • com.borland.datastore.Sql*.class
    • com.borland.datastore.jdbc.*
    • com.borland.datastore.q2.*
  • If you are using DataExpress, and the StorageDataSet.store property is always set to an instance of DataStore or DataStoreConnection, exclude the following classes:
    • com.borland.dx.memorystore.*
  • If StorageDataSet is used, but not QueryDataSet, QueryProvider, StoredProcedureDataSet or StoredProcedureProvider, exclude the following classes:
    • com.borland.dx.sql.*
  • If DataExpress isn't using any visual components from the JBCL or dbSwing libraries, exclude the following classes:
    • com.borland.dx.text.*
  • If com.borland.dx.dataset.TextDataFile is not used, exclude the following classes:
    • com.borland.jb.io.*
    • com.borland.dx.dataset.TextDataFile.class
    • com.borland.dx.dataset.SchemaFile.class

AutoIncrement Columns

You can specify columns of type int and long as having AutoIncrement values.

The following attributes apply to all AutoIncrement column values:

  • They are always unique
  • They can never be null
  • Values from deleted rows can never be reused

These attributes make AutoIncrement columns ideal for single column integer/long primary keys. 

An AutoIncrement column is the internal row identifier for a row, and so provides the fastest random access path to a particular row in a Blackfish SQL table.  

Each table can have only one AutoIncrement column. Using an AutoIncrement column saves the space of one integer column and one secondary index in your table if you use it as a replacement for your primary key. The Blackfish SQL Query Optimizer optimizes queries that reference an AutoIncrement column in a WHERE clause. For instructions on using AutoIncrement columns with SQL, see “Using AutoIncrement Columns with SQL” in the SQL Reference.

AutoIncrement Columns Using Blackfish SQL for Java DataExpress JavaBeans

To create a table with an AutoIncrement column using DataExpress, set the Column.AutoIncrement property to true before opening a table. If you are modifying an existing table, you need to call the StorageDataSet.restructure() method.
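Both cases can be sketched as follows, assuming the DataExpress class and property names used in this documentation (the file, table, and column names are placeholders, and the Variant type constant should be verified against your version):

```java
import com.borland.datastore.DataStoreConnection;
import com.borland.dx.dataset.Column;
import com.borland.dx.dataset.StorageDataSet;
import com.borland.dx.dataset.Variant;

public class AutoIncTable {
    public static void main(String[] args) throws Exception {
        DataStoreConnection store = new DataStoreConnection();
        store.setFileName("orders.jds");

        Column id = new Column();
        id.setColumnName("id");
        id.setDataType(Variant.LONG);
        id.setAutoIncrement(true);   // only one AutoIncrement column per table

        StorageDataSet customers = new StorageDataSet();
        customers.setStore(store);
        customers.setStoreName("CUSTOMERS");
        customers.setColumns(new Column[] { id /*, other columns */ });
        customers.open();            // creates the table with the new column
        customers.close();
        // For an existing table, set the property on its column and then
        // call customers.restructure() instead.
    }
}
```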

Blackfish SQL Companion Components

The dbSwing component library provides two components (on the More dbSwing page of the Component Palette) that make it easier to produce robust Blackfish SQL applications.

  • DBDisposeMonitor automatically disposes of data-aware component resources when a container is closed. It has a closeDataStores property. When true (the default), it automatically closes any Blackfish SQL databases that are attached to components it cleans. For example, if you drop a DBDisposeMonitor into a JFrame that contains dbSwing components attached to a Blackfish SQL database, when you close the JFrame, DBDisposeMonitor automatically closes the Blackfish SQL database for you. This component is particularly handy when building simple applications for experimenting with Blackfish SQL.
  • DBExceptionHandler has an Exit button. You can hide it with a property setting, but it is visible by default. Clicking this button automatically closes any open Blackfish SQL database files it can find. DBExceptionHandler is the default dialog box displayed by dbSwing components when an exception occurs.

Using Data Modules for DataExpress Components

When using Blackfish SQL tables with StorageDataSets, consider grouping the StorageDataSets inside data modules. Make any references to these StorageDataSets through DataModule accessor methods, such as businessModule.getCustomer. You should do this because much of the functionality surfaced through StorageDataSets is driven by property and event settings.

Although most of the important structural StorageDataSet properties are persisted in the Blackfish SQL table itself, the classes that implement the event listener interfaces are not. Instantiating the StorageDataSet with all event listener settings, constraints, calculated fields, and filters implemented with events, ensures that they are properly maintained at both run time and design time.

Copyright(C) 2009 Embarcadero Technologies, Inc. All Rights Reserved.