Performance Tips¶
There are a number of simple, and some not-so-simple, things that you can do to improve performance. Start with the simple ones and proceed from there:
Turn off the Mertech driver trace - When initially testing the application, you may have used the driver trace. Be sure to turn this off when you’re finished; leaving it on causes a noticeable slowdown.
Pre-allocate space - Pre-allocate the database or tablespace if the back end allows it, leaving room for growth. Pre-allocating space for indexes and tables can reduce the number of fragments within those internal structures, which in turn reduces the random read/write IOPS the server must perform.
Separate data from logs - I/O contention can occur if transaction logs and your database are on the same drive in a write-heavy environment. Separate them if possible. Note that this isn’t a problem (or an option) when using a SAN or on most cloud-based databases that hide this sort of detail from you.
Indexing, Indexing, Indexing - Although it’s definitely possible to have too many indexes, most applications suffer from a lack of them. A SQL server will happily sort things however you request, no matter how long it takes; queries will be much faster, though, if you have indexes that match the queries you’re making. Conversely, remove indexes that aren’t being used, since the server incurs overhead maintaining them.
Read Replicas - A read replica is a synchronized read-only copy of your database on another server. Your server can hand off read-only queries to the read replica, improving performance. Of course, if you have a write-heavy application, a read replica won’t provide much benefit.
Optimize Transactions and Locking¶
Performance can be increased significantly by batching multiple DataFlex transactions into larger SQL transactions.
If you have a scenario where you repeatedly lock and unlock data in a loop, this generates a large number of small transactions on the SQL Server. Entering and exiting a transaction is not a lightweight operation, since the SQL Server has to ensure data consistency (often all the way to ensuring that data has actually been physically written to disk).
Wrapping the loop in BEGIN_TRANSACTION and END_TRANSACTION calls (for example every ten iterations of the loop) can have a very positive impact on performance of the process.
Of course, care should be taken not to make the transactions too big, since this can increase lock contention. The overall speed boost should outweigh the additional locking that takes place, but you should still evaluate the impact of the change on other users.
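A minimal sketch of the batching pattern (the OrderLine table, its columns, and the batch size of ten are illustrative; locks taken inside the transaction are released by END_TRANSACTION):

Integer iCount
Move 0 to iCount

Begin_Transaction
Repeat
    Find Gt OrderLine by Index.1
    If (Found) Begin
        Reread OrderLine  // lock and refresh the current row
        Move (OrderLine.Qty * OrderLine.Price) to OrderLine.LineTotal
        SaveRecord OrderLine
        Increment iCount
        If (iCount >= 10) Begin
            End_Transaction  // commit every ten iterations
            Move 0 to iCount
            Begin_Transaction
        End
    End
Until (not(Found))
End_Transaction  // commit the final partial batch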
Use REREAD¶
Use REREAD in your code to explicitly list the file(s) you’re interested in. If no files are named as arguments to the REREAD command, all opened tables are locked and their record buffers are refreshed.
Reread FILE
    Move some_value to NEWFILE.Field
    SaveRecord NEWFILE
Unlock
Use Data Dictionaries with Smart File Locking¶
DataFlex includes a data dictionary layer that stores business rules and validations. Besides providing database validation, data dictionaries employ “smart file mode” technology: before a save or a delete, the data dictionary determines which files actually need to be locked. Locking only those files can dramatically improve database processing speed.
Note
This requires that SMART_FILEMODE_STATE be set to true (the default). SMART_FILEMODE_STATE is a property that the DataDictionary class inherits from its superclass.
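As a hedged illustration of a DD-based save (oCustomer_DD is an assumed DataDictionary object name for the Customer table):

// With Smart_FileMode_State true (the default), the DD locks only the
// tables this save actually touches instead of every open table
Set Field_Changed_Value of oCustomer_DD Field Customer.Name to "New Name"
Send Request_Save of oCustomer_DD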
Move Filtering to the SQL Server¶
If you have code that loops through records in a table and you’re only interested in a subset of the data, an SQL filter will allow you to move the filtering mechanism from the client to the server. This cuts down on the amount of data that needs to be transferred to the client.
For example, say that you’re looping through your CUSTOMER table in alphabetical order to prepare a report, but you’re only interested in customers with a balance between $1,000 and $2,000. A standard way to accomplish this would be something similar to:
Repeat
    Find Gt Customer by Index.2 // By Customer.Name
    If (Found) Begin
        If ((Customer.Balance > 1000) and (Customer.Balance < 2000)) Begin
            // DO SOMETHING
        End
    End
Until (not(Found))
In this case all customer records are transferred to the client, when in reality we are only interested in records with a balance value in a certain range. We can move the filtering to the SQL Server:
Use mertech.inc

SET_SQL_CONSTRAINT of Customer.File_Number to "and BALANCE > 1000 and BALANCE < 2000"
SQL_CONSTRAINT of Customer.File_Number to ACTIVATE
Repeat
    Find Gt Customer by Index.2 // By Customer.Name
    If (Found) Begin
        // DO SOMETHING
    End
Until (not(Found))
SQL_CONSTRAINT of Customer.File_Number to DEACTIVATE
You may also choose to create indexes by hand on your SQL Server to support your filters or other embedded SQL commands. The Mertech drivers will co-exist with and use indexes that are not explicitly defined in DataFlex.
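For example, a supporting index for the balance filter above could be created once, directly on the server, through the same embedded SQL commands (a sketch; the index name is illustrative and the statement assumes Microsoft SQL Server syntax):

Use mertech.inc

SQL_SET_STMT to "CREATE INDEX IX_CUSTOMER_BALANCE ON CUSTOMER (BALANCE)"
SQL_PREPARE_STMT
SQL_EXECUTE_STMT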
Support for SQL Filters at the DDO Level¶
DataFlex 18.0 introduced support for SQL filters at the DDO level. Now, in addition to using custom commands in the driver, SQL filters can be used as a part of normal DataFlex programming. Flex2SQL v14.0 and higher fully supports this new DataFlex feature.
Identify Missing Indexes¶
A common cause of application performance problems is missing indexes. One reason for a missing index is that duplicate row errors occurred during the migration process. The Flex2SQL Classic Migration Utility Index Maintenance option can check whether the indexes for each file were created properly.
Defragment the Database¶
If a query is taking much longer to complete than before, even though nothing in it has changed, database fragmentation may be the issue. This is especially true if the SQL database is set to grow by a fixed number of megabytes instead of a percentage.
If finds are slow, but the indexes are reported to be fine by Flex2SQL, please consult the documentation for your SQL Server on how to rebuild your table and index spaces.
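As a hedged sketch of what a rebuild might look like, issued through embedded SQL (CUSTOMER is a sample table and the statement assumes Microsoft SQL Server; other back ends use different syntax):

Use mertech.inc

// Rebuild all indexes on the table to remove fragmentation
SQL_SET_STMT to "ALTER INDEX ALL ON CUSTOMER REBUILD"
SQL_PREPARE_STMT
SQL_EXECUTE_STMT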
Recompute Index Statistics¶
Index selectivity is a number that measures the effectiveness of an index. It is calculated as:

Total number of distinct values ÷ Total number of records
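For example, a CUSTOMER table with 10,000 rows whose STATE column holds only 50 distinct values has a selectivity of 50 ÷ 10,000 = 0.005.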
The best possible index selectivity is when all records in a table have a different value for the columns in the index (this is typical for primary keys and unique constraints).
It is inefficient to maintain an index that has a low selectivity value. Most SQL databases include an optimizer that determines when to use an index and when it is faster to ignore the index and do a scan instead. Indexes with low selectivity values may be ignored even though they are included in a WHERE clause.
If large chunks of data have been inserted into your table, ask your SQL Server to recompute the index statistics and then modify or remove superfluous indexes.
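As a hedged sketch, the statistics refresh can be issued through the same embedded SQL commands (CUSTOMER is a sample table; the statement is back-end specific):

Use mertech.inc

// Microsoft SQL Server; the PostgreSQL equivalent is "ANALYZE CUSTOMER"
SQL_SET_STMT to "UPDATE STATISTICS CUSTOMER"
SQL_PREPARE_STMT
SQL_EXECUTE_STMT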
Move Business Logic to the SQL Server¶
Significant performance improvement can be seen by rewriting some batch processes in SQL. This can be either as embedded SQL statements, or as a stored procedure or function for execution directly on the server.
A simple example is a business process that computes the outstanding balance for all customers. In a record-oriented scenario, all the records have to be brought over to the client in a loop that computes a running total.
This can instead be turned into a short snippet of SQL code that executes fully on the server, with minimal network traffic and interaction:
SQL_SET_STMT to "SELECT SUM(BALANCE) FROM CUSTOMER"
SQL_PREPARE_STMT
SQL_EXECUTE_STMT
SQL_FETCH_NEXT_ROW INTO iBalance
Please note that not all business logic will see a performance improvement by moving to embedded SQL. Embedded SQL that simply returns a large record set will not show significant gains over fetching the same records through a regular FIND GT loop.
Identify those business processes where a large portion of the logic can be handled by the SQL Server, keeping network traffic down.
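Where a stored procedure is the better fit, the same embedded SQL commands can invoke it. A hedged sketch, assuming Microsoft SQL Server EXEC syntax and a hypothetical procedure name:

Use mertech.inc

SQL_SET_STMT to "EXEC usp_ComputeCustomerBalances"
SQL_PREPARE_STMT
SQL_EXECUTE_STMT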
Consider Cache Performance¶
The Mertech driver monitors the database operations you perform in real time and adjusts the number of records and columns it fetches and caches. One scenario that can be detrimental to the cache is looping through records while doing individual lookups on the same table, which causes the cache to be flushed:
Repeat
    Find Gt Customer by Index.1
    // DO SOMETHING WITH THE CUSTOMER, LIKE LOOPING THROUGH CHILD RECORDS

    // BASED ON SOMETHING HERE, YOU REALIZE YOU WANT TO LOOK UP ANOTHER
    // CUSTOMER, AND THEN RETURN BACK TO THE LOOP
    Move Customer.Recnum to iSavedRecnum

    // Find the record of interest;
    // the previous cache from the FIND GT is discarded
    Move something to Customer.Recnum
    Find Eq Customer by Recnum
    // DO SOMETHING WITH THE CUSTOMER

    // Return to the previous position in the loop; no rows are cached
    Move iSavedRecnum to Customer.Recnum
    Find Eq Customer by Recnum
Until (not(Found))
Reseeding the key buffers during a FIND GT clears the cache. The workaround in this case may be to perform the separate customer lookup through embedded SQL, or to move the whole process to embedded SQL if it cannot be performed separately from the loop.
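A hedged sketch of such a lookup (the CUSTOMER column names and the iLookupRecnum variable are illustrative); because the single-row query runs as embedded SQL, the driver’s FIND GT cache for the loop is left intact:

Use mertech.inc

String sStmt sCustomerName

Move ("SELECT NAME FROM CUSTOMER WHERE RECNUM = " + String(iLookupRecnum)) to sStmt
SQL_SET_STMT to sStmt
SQL_PREPARE_STMT
SQL_EXECUTE_STMT
SQL_FETCH_NEXT_ROW INTO sCustomerName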
Use Lazy Open Mode¶
The lazy open mode feature is designed to improve an application’s initialization performance by deferring table setup. Lazy open mode delays the setup of table metadata (table, column, and index descriptions) in memory until a field is accessed, even if an OPEN command is issued.
Lazy open mode is particularly useful for applications that open a large number of files when the program is launched. For example, if a program opens 30 files but only 10 are accessed immediately, then the metadata for only those 10 tables is fetched from the server at application initialization. The remaining tables are not opened until an operation other than OPEN is performed on them.
Lazy open mode is disabled by default. It can be enabled either by calling a command or through the MERTECH.INI file.
Enabling Lazy Open Mode¶
Lazy open mode can be enabled in two ways: by calling a command (SET_OPEN_MODE) or by setting a flag in the MERTECH.INI file. Set lazy open mode in the .INI file only if you want to enable it globally for the entire run of the program. Mertech recommends that you evaluate file-open performance before deciding which approach is suitable for your application. It is important to note that lazy open mode is a global setting: once set, it remains in effect until the open mode setting is changed.
Lazy open mode can be enabled by calling the SET_OPEN_MODE command. See the command reference for details on using SET_OPEN_MODE.
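A hedged sketch of the command-based approach (the LAZY_OPEN_MODE argument is an assumed constant name here; consult the SET_OPEN_MODE command reference for the exact syntax):

Use mertech.inc

SET_OPEN_MODE to LAZY_OPEN_MODE  // assumed argument name; see the command reference
Open Customer  // only the .INT metadata is read at this point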
When lazy open mode is enabled, during OPEN command execution the database driver opens the .INT file to retrieve the basic table information (table name, owner, database, number of columns). The database request for the table metadata is delayed until an operation other than OPEN (such as a find, clear, update, or delete) is performed.
Lazy open mode can also be enabled by adding entries to the MERTECH.INI file. The following entries enable lazy open mode for the Microsoft SQL Server and PostgreSQL drivers:
[MSSQL]
LazyOpenMode yes

[PGSQL]
LazyOpenMode yes