Blackfish SQL
This section discusses ways to improve the performance, reliability, and size of Blackfish SQL applications. Unless otherwise specified, DataStoreConnection refers to either a DataStoreConnection or DataStore object used to open a connection to a Blackfish SQL database file.
Here are some tips that can improve the performance of your application when loading databases:
Use the DataExpress TextDataFile class to import text files. It has a fast parser and can load data quickly. You must set the StorageDataSet store property to a DataStoreConnection, and set the storeName property to the name of your table in the Blackfish SQL database. When loading a new database:
This technique should enable the database to load two to three times faster.
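As an illustration, the import described above might look like the following sketch. Only TextDataFile, StorageDataSet, and the store and storeName properties come from the text; the package names, file names, and table name are assumptions, so adjust them for your deployment:

```java
// Hedged sketch: load a text file into a Blackfish SQL table via DataExpress.
// Package names, file names, and the table name are assumptions.
import com.borland.datastore.DataStoreConnection;
import com.borland.dx.dataset.StorageDataSet;
import com.borland.dx.dataset.TextDataFile;

public class ImportText {
    public static void main(String[] args) throws Exception {
        DataStoreConnection con = new DataStoreConnection();
        con.setFileName("orders.jds");      // assumed database file name

        StorageDataSet ds = new StorageDataSet();
        ds.setStore(con);                   // store the data set in the database
        ds.setStoreName("ORDERS");          // table name inside the database

        TextDataFile text = new TextDataFile();
        text.setFileName("orders.txt");     // assumed import file
        ds.setDataFile(text);

        ds.open();                          // parses and loads the text file
        ds.close();
        con.close();
    }
}
```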
This section provides some general guidelines for improving performance for Blackfish SQL applications.
If a database is not properly shut down, the next time a process opens the database, there will be a delay. This is because Blackfish SQL needs about 8 to 10 seconds to ensure that no other process has the database open. To ensure that a database is shut down properly, make sure all connections are closed when they are no longer needed. If it is difficult to ensure that all connections are closed, a connection with Administrator rights can call the DB_ADMIN.CLOSE_OTHER_CONNECTIONS built-in stored procedure to ensure that all other connections are closed.
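For example, a connection with Administrator rights could invoke this procedure through JDBC. This is only a sketch: the driver URL, account, and password are assumptions; only the procedure name DB_ADMIN.CLOSE_OTHER_CONNECTIONS comes from the text above:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CloseOthers {
    public static void main(String[] args) throws Exception {
        // URL format and credentials are assumptions; use your own.
        try (Connection con = DriverManager.getConnection(
                "jdbc:borland:dslocal:/data/app.jds", "SYSDBA", "masterkey")) {
            try (CallableStatement cs = con.prepareCall(
                    "CALL DB_ADMIN.CLOSE_OTHER_CONNECTIONS()")) {
                cs.execute();  // closes every connection except this one
            }
        }
    }
}
```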
Another benefit to closing connections is that when they are all closed, the memory allocated to the Blackfish SQL cache is released.
Currently, non-transactional databases can be accessed from SQL only if the database read-only property is true. However, DataExpress JavaBeans can perform write operations on a non-transactional database.
Closing a non-transactional Blackfish SQL database ensures that all modifications are saved to disk. A daemon thread shared by all open DataStoreConnection instances continually saves modified cache data. (By default, modified data is saved every 500 milliseconds.) If you exit the Java Virtual Machine without closing the database, the daemon thread might not have the opportunity to save the last set of changes, and there is a small chance that a non-transactional Blackfish SQL database could become corrupted.
A transactional Blackfish SQL database is guaranteed not to lose data, but the transaction manager rolls back any uncommitted changes.
If your application is using DataExpress JavaBean components, close all StorageDataSets that have the store property set to a DataStoreConnection when you are finished with them. This frees up Blackfish SQL resources associated with the StorageDataSet and allows the StorageDataSet to be garbage collected.
You can use the DataStore.shutdown() method to make sure all database connections are closed before an application terminates.
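A minimal sketch of a clean exit, assuming DataStore.shutdown() can be called statically as the reference above suggests (the package name is also an assumption):

```java
import com.borland.datastore.DataStore;  // package name is an assumption

public class CleanExit {
    public static void main(String[] args) {
        try {
            // ... open connections and do application work ...
        } finally {
            // Ensure every database connection is closed before the JVM exits.
            DataStore.shutdown();
        }
    }
}
```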
The default maximum cache size for a Blackfish SQL database is 512 cache blocks. The default block size is 4096 bytes. Therefore, the cache memory reaches its maximum capacity at approximately 512*4096 (2MB). Note that this memory is allocated as needed. In some rare situations when all blocks are in use, the cache may grow beyond 512 cache blocks. You can use the DataStore.MinCacheSize property to specify the minimum cache size.
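The default figures above can be confirmed with simple arithmetic. This self-contained example only reproduces the calculation; it does not touch Blackfish SQL:

```java
public class CacheMath {
    public static void main(String[] args) {
        int maxBlocks = 512;     // default maximum number of cache blocks
        int blockSize = 4096;    // default cache block size in bytes
        long bytes = (long) maxBlocks * blockSize;
        System.out.println(bytes + " bytes");               // 2097152 bytes
        System.out.println(bytes / (1024 * 1024) + " MB");  // 2 MB
    }
}
```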
NOTE: Do not arbitrarily change the database cache size. Be sure first to verify that doing so will improve the performance of your application.
Keep in mind the following considerations when changing the Blackfish SQL cache size:
Blackfish SQL databases perform the majority of read/write operations against the following four file types:
You can potentially improve performance by instructing Blackfish SQL to place the files mentioned above on different disk drives.
The following are some guidelines for file storage handling that can improve performance of your applications:
Note: This section applies only to Blackfish SQL for Java, which uses the DataExpress JavaBean components.
Use the saveMode property of the DataStore component to control how often cache blocks are written to disk. This property applies only to non-transactional Blackfish SQL databases. The following are the valid values for this property:
0: Let the daemon thread handle all cache writes. This setting gives the highest performance, but the greatest risk of corruption.
1: Save immediately when blocks are added or deleted; let the daemon thread handle all other changes. This is the default mode. Performance is almost as good as with saveMode 0.
2: Save all changes immediately. Use this setting whenever you debug an application that uses a DataStore component.
Unlike other properties of DataStore, saveMode can be changed when the connection is open. For example, if you are using a DataStoreConnection, you can access the value through the dataStore property:
DataStoreConnection store = new DataStoreConnection();
...
store.getDataStore().setSaveMode(2);
Note that this changes the behavior for all DataStoreConnection objects that access that particular Blackfish SQL database file.
You can tune the use of memory in a number of ways. Be aware that asking for too much memory can be as bad as having too little.
The Java heap tends to resist growing beyond its initial size, forcing frequent garbage collection with an ever-smaller amount of free heap. Use the JVM -Xms option to specify a larger initial heap size. It is often beneficial to make the JVM -Xms and -Xmx settings equal.
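You can verify the effect of these flags from inside the application. This stdlib-only sketch prints the current and maximum heap sizes; the flag values in the comment are illustrative:

```java
// Example launch: java -Xms256m -Xmx256m HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory(); // heap currently reserved; starts near -Xms
        long max = rt.maxMemory();     // upper bound set by -Xmx
        System.out.println("total=" + total + " max=" + max);
        if (total < max / 2) {
            System.out.println("Initial heap is much smaller than the maximum; "
                    + "consider setting -Xms equal to -Xmx.");
        }
    }
}
```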
Here are some tips that can help performance:
The increased reliability and flexibility you gain from using transactional Blackfish SQL databases comes at the price of some performance. You can reduce this cost in several ways, as described in the following sections.
For transactions that are reading but not writing, significant performance improvements can be realized by using a read-only transaction. The connection readOnly property controls whether a transaction is read-only. The Blackfish SQL for Java DataStoreConnection JavaBean has a readOnlyTx property to enable read-only transactions.
Read-only transactions work by simulating a snapshot of the Blackfish SQL database. This snapshot sees only data from transactions that were committed at the point the read-only transaction starts. This snapshot is created when the DataStoreConnection opens, and it refreshes every time a commit method is called.
Another benefit of read-only transactions is that they are not blocked by writers or other readers. Both reading and writing usually require a lock. But because a read-only transaction uses a snapshot, it does not require any locks.
You can further optimize the application by specifying a value for the property readOnlyTxDelay. The readOnlyTxDelay property specifies the maximum age (in milliseconds) for an existing snapshot that the connection can share. When the property is non-zero, existing snapshots are searched from most recent to oldest. If there is one that is under readOnlyTxDelay in age, it is used and no new snapshot is taken. By default, this property is set to 5000 milliseconds.
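Putting the two properties together, a read-only connection might be configured as in this sketch. The setter names are inferred from the readOnlyTx and readOnlyTxDelay property names above; the package and file names are assumptions:

```java
import com.borland.datastore.DataStoreConnection;  // package name is an assumption

public class ReadOnlySnapshot {
    public static void main(String[] args) throws Exception {
        DataStoreConnection con = new DataStoreConnection();
        con.setFileName("orders.jds");  // assumed database file
        con.setReadOnlyTx(true);        // read from a snapshot instead of locking
        con.setReadOnlyTxDelay(5000);   // share any snapshot younger than 5 seconds
        con.open();
        // ... run queries against the snapshot; no read locks are taken ...
        con.close();
    }
}
```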
If you enable soft commit mode through the SoftCommit property, the transaction manager still writes log records for committed transactions, but does not use a synchronous write mechanism for commit operations. With soft commit enabled, the operating system cache can buffer file writes from committed transactions. Typically the operating system ends up writing dirty cache blocks to disk within seconds. Soft commit improves performance, but cannot guarantee the durability of the most recently committed transactions. You can set the SoftCommit property by calling the DB_ADMIN.ALTER_DATABASE built-in stored procedure.
You can improve performance by disabling the logging of status messages. To do this, set the RecordStatus property to false. You can set the RecordStatus property by calling the DB_ADMIN.ALTER_DATABASE built-in stored procedure.
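Both SoftCommit and RecordStatus are changed through the DB_ADMIN.ALTER_DATABASE built-in stored procedure. The exact parameter format of ALTER_DATABASE is not shown in this section, so the property-string argument below is an assumption (consult the SQL Reference for the real syntax); the driver URL and credentials are also assumptions:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class TuneDurability {
    public static void main(String[] args) throws Exception {
        // URL format and credentials are assumptions; use your own.
        try (Connection con = DriverManager.getConnection(
                "jdbc:borland:dslocal:/data/app.jds", "SYSDBA", "masterkey")) {
            try (CallableStatement cs = con.prepareCall(
                    "CALL DB_ADMIN.ALTER_DATABASE(?)")) {
                // Property string is an assumption; see the SQL Reference.
                cs.setString(1, "SoftCommit=true;RecordStatus=false");
                cs.execute();
            }
        }
    }
}
```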
The following are guidelines for optimizing the performance of Blackfish SQL concurrency control operations:
Write transaction throughput can increase as more threads are used to perform operations, because each thread can share in the overhead of commit operations through the “group commit” support provided by Blackfish SQL.
When deploying a Blackfish SQL application, you can exclude certain classes and graphics files that are not used.
You can specify columns of type int and long as having AutoIncrement values.
The following attributes apply to all AutoIncrement column values:
These attributes make AutoIncrement columns ideal for single column integer/long primary keys.
An AutoIncrement column is the internal row identifier for a row, and so provides the fastest random access path to a particular row in a Blackfish SQL table.
Each table can have only one AutoIncrement column. Using an AutoIncrement column saves the space of one integer column and one secondary index in your table if you use it as a replacement for your primary key. The Blackfish SQL Query Optimizer optimizes queries that reference an AutoIncrement column in a WHERE clause. For instructions on using AutoIncrement columns with SQL, see “Using AutoIncrement Columns with SQL” in the SQL Reference.
To create a table with an AutoIncrement column using DataExpress, set the Column.AutoIncrement property to true before opening a table. If you are modifying an existing table, you need to call the StorageDataSet.restructure() method.
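A sketch of creating a new table with an AutoIncrement column via DataExpress follows. The setter name setAutoIncrement is inferred from the Column.AutoIncrement property named above; the package names, data-type constant, file name, and table name are assumptions (for an existing table, you would instead call StorageDataSet.restructure() after changing the column):

```java
import com.borland.datastore.DataStoreConnection;
import com.borland.dx.dataset.Column;
import com.borland.dx.dataset.StorageDataSet;
import com.borland.dx.dataset.Variant;

public class AutoIncTable {
    public static void main(String[] args) throws Exception {
        DataStoreConnection con = new DataStoreConnection();
        con.setFileName("orders.jds");   // assumed database file

        Column id = new Column();
        id.setColumnName("ID");
        id.setDataType(Variant.INT);     // int or long columns support AutoIncrement
        id.setAutoIncrement(true);       // must be set before the table is opened

        StorageDataSet ds = new StorageDataSet();
        ds.setStore(con);
        ds.setStoreName("ORDERS");       // assumed table name
        ds.setColumns(new Column[] { id });
        ds.open();                       // creates the table with the column
        ds.close();
        con.close();
    }
}
```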
The dbSwing component library provides two components (on the More dbSwing page of the Component Palette) that make it easier to produce robust Blackfish SQL applications.
When using Blackfish SQL tables with StorageDataSets, consider grouping the StorageDataSets inside data modules, and make any references to them through DataModule accessor methods, such as businessModule.getCustomer. This is advisable because much of the functionality surfaced through StorageDataSets is driven by property and event settings.
Although most of the important structural StorageDataSet properties are persisted in the Blackfish SQL table itself, the classes that implement the event listener interfaces are not. Instantiating the StorageDataSet with all event listener settings, constraints, calculated fields, and filters implemented with events, ensures that they are properly maintained at both run time and design time.
Copyright(C) 2009 Embarcadero Technologies, Inc. All Rights Reserved.