Programmer’s Guide

Flex2SQL v17 Classic Edition Programmer’s Guide

Flex2SQL v17 Classic Edition for DataFlex Drivers

Programmer’s Guide v17.0 Classic Edition

October 16, 2019

© Copyright 2000-2021 Mertech Data Systems, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, recording, or otherwise, without the prior written permission of Mertech Data Systems, Inc.

DataFlex is a registered trademark of Data Access Corporation.

Excel, Microsoft, SQL Server, and Windows are registered trademarks of Microsoft Corporation.

Linux is a registered trademark of Linus Torvalds.

MySQL is a registered trademark of MySQL AB.

Oracle is a registered trademark of Oracle Corporation.

PostgreSQL is a registered trademark of PostgreSQL Inc.

Trademarks used in this book are purely for publication purposes and are the property of their respective owners.


DataFlex developers have been creating enterprise-level software since the 1980s. However, due to the proprietary nature of the DataFlex database, integrating their applications at an enterprise level was very difficult, if not impossible. As the demand for databases from Oracle, Microsoft, IBM, and others grew, DataFlex developers were confined to smaller market segments.

All this changed when Mertech introduced the first ISAM to SQL Database Connectivity (ISDBC) drivers for DataFlex in 1997. Mertech’s Oracle and MS SQL Server drivers for DataFlex allowed DataFlex developers to offer their solution on the two industry-leading database platforms. This was followed by drivers for MySQL, and PostgreSQL, allowing developers to offer their solutions from the desktop all the way up to mainframe class computers.

Mertech’s products were designed with an integration philosophy in mind that allowed developers to not only easily migrate their database to SQL backends, but also to seamlessly integrate their application with existing non-DataFlex files. Mertech’s products have helped DataFlex developers expand their business significantly, opening doors to new business!

All Drivers are not Created EQUAL!

Are drivers from two different companies the same? The answer is no. Writing SQL drivers requires a thorough understanding of the underlying database and optimization schemes. In addition, the driver must handle all DataFlex situations. A driver either works or it doesn’t. There is no middle ground. All data in the system goes through a driver, and even a minor bug renders a driver useless for mission critical applications.

Key Features of the Mertech Drivers

  • Seamless integration between DataFlex applications and their target database (without source code changes).

  • Seamless integration of non-DataFlex tables into DataFlex applications. All necessary files like .FD and .INT files are generated automatically and the file name is entered into the filelist automatically.

  • Access to existing data stored in non-DataFlex database systems.

  • Full support for DataFlex file relationships.

  • Full support for overlapped/underlapped fields.

  • Support for record locking instead of file locking.

  • Use of native database transaction logging and rollback capabilities for disaster recovery.

  • Support for alias files.

  • Support for database restructuring using Database Builder and restructuring commands.

  • Preservation of RECNUM values during data migration.

  • A full index maintenance tool that allows addition and deletion of indexes and verification of indexes on the backend.

  • A sophisticated file creation and restructuring tool that allows you to commit changes directly to the backend or to generate DataFlex restructure code for distribution to remote sites.

  • A complete migration report that shows you at a glance the creation of tables, indexes, and the number of records migrated.

  • High-speed data migration using native tools like SQL*Loader and Bulk Copy.

  • Storage analysis tool that allows you to evaluate storage availability before migration.

Supported Platforms and DataFlex Versions

Mertech products support DataFlex 3.1c and Visual DataFlex 5 through 19 on the Windows and Linux operating systems. The following table shows the availability of Mertech drivers on various platforms.

Client OS: Windows 7 – 10 & 2008 - 2019

  • Oracle: 10g thru 19c

  • MS SQL Server: 2012 thru 2019 including Azure and LocalDB

  • MySQL/MariaDB: 5.7 thru 8.0 & 10 thru 10.5

  • PostgreSQL: 9.0 thru 13.2

Whether you are installing for Windows or Linux, the installation program is run on a Windows machine. The Flex2SQL Setup Wizard guides you through the installation process. Follow the prompts, accepting the terms of the license agreement.

[Image: Flex2SQL Utility Install.png]

Note: See Installing for Linux below.

The install program installs a DataFlex (DF) single user distribution license for running Flex2SQL. Regardless of the version of DataFlex you are using, Flex2SQL (which previously was compiled for all available versions of DataFlex and used the already installed DataFlex runtime components) now deploys its own runtime component. This standardization was made possible by the introduction of a DataFlex Workspace Manager that allows Flex2SQL to select and work with any version of DataFlex on Windows.

Installing for Linux

Select the Complete setup type option in the Flex2SQL Setup Wizard to install the Linux drivers.

[Image: Flex2SQL Utility Install.png]

When the Complete option is selected, the linux subdirectory is included in the install tree under the <Program Files>\Mertech Data Systems\DB Drivers\DataFlex folder. The linux directory contains client, shared, and static subdirectories.
The client subdirectory contains the client libraries used by the MySQL and PostgreSQL drivers.

The shared subdirectory contains the drivers for Oracle, MySQL, and PostgreSQL. These drivers were compiled without any of their library dependencies.

The static subdirectory contains the drivers for Oracle, MySQL, and PostgreSQL. These drivers were compiled with the basic environment libraries.

Use the ldd command to list a driver's shared library dependencies. The runtime environment must provide the required libraries.
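For illustration, the command below is run against /bin/sh; substitute the driver file installed on your system (the driver file names vary by backend and installation):

```shell
# Print each shared library the binary requires and where it resolves
ldd /bin/sh
```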

Important Oracle Information

Before using the Oracle driver, make sure that the following Oracle environment variables are properly set.

  • ORACLE_HOME: The top level directory that contains the database client software or the entire database installation. The driver needs the Oracle client libraries and files that are located through this environment variable. Each user must have this environment variable properly set to use the driver.

  • TNS_ADMIN: The directory that contains the Oracle file tnsnames.ora, which defines the connection details used to reach the Oracle database servers.

If installing the Oracle driver using shared libraries, the library path environment variable must also be set. This variable lists the directories to search for dynamic-link libraries. We recommend that this variable be placed in each user’s profile after the ORACLE_HOME environment variable is set. If the library path environment variable is not set, the driver will not function properly since it will not be able to find the Oracle libraries. Set the library path environment variable:

  • PATH: on Windows

  • LD_LIBRARY_PATH: on Linux
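For example, the entries in a user's shell profile on Linux might look like this (the Oracle installation path is illustrative):

```shell
# Locate the Oracle client installation
export ORACLE_HOME=/opt/oracle/product/19c/client_1
# Directory holding tnsnames.ora
export TNS_ADMIN=$ORACLE_HOME/network/admin
# Let the dynamic loader find the Oracle client libraries
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
```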

Oracle License Count View

Server-side control scripts must be run before you connect with Flex2SQL for Oracle. These scripts must be run as SYSDBA.

  1. Select the Flex2SQL Utility shortcut from the Windows Start menu.

Note: It is recommended that you use the same version of the Flex2SQL GUI migration tool and Flex2SQL database drivers. If a mismatch is detected (for example, v12.0 migration tool and v12.1 drivers), a warning message displays.

The Login dialog box displays.

  2. In the Server Type field, select Oracle from the drop-down list.

  3. Select SYSDBA from the AutoMode drop-down list.

  4. In the Server Name field, type the name of the server you are logging into.

  5. In the User Name field, type the SYSDBA user name (for example, SYS or SYSTEM).

  6. In the Password field, type the user password.

  7. Click Connect to log into the server. Server-side license scripts run automatically.

  8. Log out, then log in as a DEFAULT user before performing any migration work.

Note: If you do not have DBA privileges, ask your DBA to run one of the following scripts to create the required Mertech view.

For a non-RAC Oracle database:

create or replace view mds_session_info as select username, machine, terminal, module from v$session where module like 'ORAFlex%';

grant select on mds_session_info to public;

drop public synonym mds_session_info;

create public synonym mds_session_info for sys.mds_session_info;


For an Oracle Real Application Cluster (RAC):

create or replace view mds_session_info as select username, machine, terminal, module from gv$session where module like 'ORAFlex%';

grant select on mds_session_info to public;

drop public synonym mds_session_info;

create public synonym mds_session_info for sys.mds_session_info;


Questions have been asked about the requirement to create the MDS_SESSION_INFO view within the SYS schema. The purpose of MDS_SESSION_INFO is to limit the v$session information available to the application user to ORAFlex connections only ("where module like 'ORAFlex%'").

Alternatively, you can create the view in the user's schema, but you must then add "grant select on v_$session" to the script. Since v_$session is the underlying table of the v$session view, this method opens up v$session to the user.

We recommend creating the view as the SYS user, since this provides the highest level of security.

ciText for PostgreSQL

The PostgreSQL ciText module provides ciText, a locale-aware, case-insensitive Text type. Essentially, LOWER() is called when comparing ciText values. Otherwise, ciText data types behave exactly like Text.

The Mertech driver for PostgreSQL supports the ciText data type to allow case-insensitive fields without the need for inverse key columns. Using the ciText SQL functions provides much better performance for case-insensitive data types than using inverse keys.

When you log in to a PostgreSQL server, the driver checks whether the PostgreSQL server has ciText support enabled. If ciText support is not enabled, a message displays explaining how to enable this feature on your PostgreSQL server.


Note: Install the ciText module by running pgsql/contrib/citext/citext.sql. For later versions of PostgreSQL you may have to run the CREATE EXTENSION citext command. Refer to PostgreSQL documentation for additional information.
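The underlying SQL behaves as in this sketch (table and column names are illustrative):

```sql
-- Enable the citext module (PostgreSQL 9.1 and later)
CREATE EXTENSION IF NOT EXISTS citext;

-- Comparisons against a citext column ignore case
CREATE TABLE accounts (email citext UNIQUE);
INSERT INTO accounts VALUES ('Alice@Example.com');
SELECT * FROM accounts WHERE email = 'alice@example.com';  -- matches the row inserted above
```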

License and Registration

When you purchase Mertech’s drivers, you receive license files with your order, which are also available for later download with your Mertech account. Please see the readme file in the license package for more information.

Changes were made to the DataFlex runtime engine in Visual DataFlex (VDF) and DataFlex 3.1c and above to allow external database drivers to be loaded as dlls and to work as a part of the runtime. These changes enable a driver to work with existing DataFlex programs without changing a single line of code. Flex2SQL takes advantage of this runtime environment, and provides database drivers for Oracle, MS SQL Server, MySQL, and PostgreSQL.

How Everything Works

The Flex2SQL product bundle consists of a GUI migration utility and a database driver DLL. The GUI migration utility takes existing DataFlex tables and indexes and migrates them to a target backend, creating the table structures, index structures, and (optionally) recnum values for all the files in the filelist. The database driver DLL works with the DataFlex runtime and handles all client/server connection and database operations.

Data Migration

After completing the installation and setup of the Flex2SQL GUI component and database driver(s), you are ready for data migration. Data migration is performed using the GUI component. The procedure consists of logging into the target database, choosing the filelist to be converted, and then running the Convert Database command.

Flex2SQL migrates the selected files to the target database and creates an intermediate file for each data file. Intermediate files are stored in the workspace working directory, or, if files were opened with the filelist option, where the .DAT file resides.

Running your Program

After all the files are converted to the target database, the next step is to add the appropriate login command to your program. This is the only change that is required. There are two ways to do this:

  • Mertech provides an automatic login dialog box that pops up if you are not logged onto the server and are trying to open files that are marked as Oracle, MS SQL Server, MySQL, or PostgreSQL files. This option is useful for testing purposes or running reports.

  • Another way is to add a separate login program (sample source code, login.src, is provided with the driver installation) that prompts for the user ID, password, etc. and then “chain waits” to the main program.

Integration with DataFlex

To give you an idea on how these pieces work together, let us look at a simple DataFlex program that opens a file.
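The program in question can be as small as this sketch (MYFILE is the sample table used in the discussion that follows):

```
// A minimal DataFlex program that opens a table
Open MYFILE
```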


The runtime engine calls the DataFlex API function dfFileOpen(), which searches the filelist entries for a file named MYFILE. If MYFILE resides on a server, the entry in the filelist is MYFILE.INT or of the form ORA_DRV:\servername\schema*tableName. The “INT” extension means that there is an intermediate file. The DataFlex API sees the .INT extension (or the embedded server name) and opens the .INT file. From this .INT file, the API reads the driver name entry. If the driver was previously loaded, the API finds it in the driver table. If not, it loads the driver and adds it to the driver table. Once the driver has been loaded, the API calls the driver FileOpen code and passes control to the driver.

The driver searches the system catalogs to find out if the table name (MYFILE) is a valid table name. If it is, the driver performs the operations that are necessary to open the file. Once the driver is finished it passes control back to the API, which returns control back to the runtime engine.

In a similar fashion, the API passes control to the driver to handle all major functions like find, save, edit, delete, transactions, locking, etc.


DF_FILE_TYPE Attribute

The DataFlex DF_FILE_TYPE attribute is a table attribute that identifies the database Connectivity Kit used to open the table. DF_FILE_TYPE is a numerical value that can be retrieved using the Get_Attribute command.
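A sketch of retrieving the value in DataFlex (the table and variable names are illustrative):

```
// Retrieve the Connectivity Kit type of an open table
Open MYFILE
Move MYFILE.File_Number to iFile
Get_Attribute DF_FILE_TYPE of iFile to iFileType
Showln 'DF_FILE_TYPE: ' iFileType
```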

The following DF_FILE_TYPE values are assigned to Mertech drivers.

Planning for Migration

There are many factors to consider prior to performing the actual database migration. These include but are not limited to hardware requirements, networking requirements, security, accessibility, and reliability. These all vary depending on your needs. Some considerations are outlined in this section.

Hardware Requirements

It is important to understand that the disk space used by a database engine is larger than that of native DataFlex. The speed at which data is processed, especially for write operations, may require faster disk drives and different RAID configurations than native DataFlex. For very large databases it may be advisable to investigate SAN solutions. In addition, if high database availability is mandatory, mirroring involving multiple MS SQL Server instances, and potentially multiple servers (physical or virtual), also increases the amount and type of hardware (and, in this particular case, Microsoft software) requirements.

See Also

High Availability

Networking Requirements

Normally, in all database migrations, multiple machines and devices have access to the database. These can be, at a minimum, connected via a local area network, but can also include wide area networks and almost invariably a certain amount of Internet access. Depending on your requirements, access to a central secure database may necessitate some changes, such as HTTP endpoints or virtual private networks. This should be discussed with your networking personnel or vendor.


Security

There are many security considerations and only a few are discussed here. This includes the type of access to the database. For MS SQL Server, access can be through trusted connections, such as with an Active Directory Domain, or through an SQL Server user login. Further security restrictions can be made via the user schemas and roles used. For additional information on how to best implement security in MS SQL Server to meet your specific needs, please see Windows Server Security and Microsoft Network Access Protection. The security you need is also determined by accessibility. If all your users work at a single site with no outside access, the security requirements are significantly different than if you have users scattered throughout the world.

The Flex2SQL Migration Utility

The Flex2SQL database migration utility contains all the features that a developer needs to migrate an existing DataFlex application to an SQL back-end or to integrate non-DataFlex tables into his/her application. The Flex2SQL utility lets you set global preferences for script generation, file casing, field mappings, date handling, and more. Custom preferences can also be selected for a specific migration. A built-in browser lets you examine both DataFlex and converted tables. The Flex2SQL utility can also create new tables and indexes or restructure existing tables and indexes.

Select the Flex2SQL Utility shortcut from the Windows Start menu to launch the migration utility.

[Image: Flex2SQL Utility.png]

Obtain context-sensitive help for options available in the Flex2SQL utility by selecting the Help button or by pressing <F1>.

Database Migration

Commands on the Flex2SQL utility Database menu let you perform the database conversion, create a text file for high-speed text migration, browse DataFlex or migrated tables, copy data between the DataFlex and target database, and perform other database modifications.

Table Creation and Restructuring

The Create and Restructure Table options are a very powerful and dynamic subset of the Flex2SQL utility. They allow you to create or restructure tables and/or indexes without having to interact with any backend specific utilities. All enhancements can be saved in the accompanying .INT and .FD files, maintaining the integrity between the frontend and backend.

The Create or Restructure Table dialog box contains a comprehensive range of edits to fields and field attributes, indexes, and table information.

There are two additional features in this tool that reduce developers’ time and effort. One is the ability to generate SQL scripts and restructuring code. This allows for changes to take place in a development environment and have them replicated in a production environment. The other is the ability to incrementally save restructure steps without losing any of the previous work.

[Image: Create Table.png]

Index Maintenance

The most common reason why a DataFlex program does not perform optimally after migration to an SQL database backend is that the user did not create the indexes, or did not create them properly. The driver can fail to create indexes for a number of reasons, but the two most common are duplicate data and running out of disk space. For example, a DataFlex table might contain dates like 12/12/99 and 12/12/1999 in two separate records. Since the Flex2SQL utility converts all dates to four digits during migration, such data causes a unique constraint violation if this date field is part of an index.

The Index Maintenance option allows the user to check whether the indexes for each file were created properly. To use the tool, select all the files in the Filelist dialog box, then right-click and select Index Maintenance. The Flex2SQL utility examines each DataFlex file and compares it with the indexes created on the target server. If an index was not created, the Flex2SQL utility displays a red check mark next to it.

Inverse Key Maintenance

The Flex2SQL utility may create additional columns, called inverse key columns, during migration to handle case sensitive indexes and descending index segments. This option is needed when an SQL backend (for example PostgreSQL) does not efficiently support case-insensitive index segments.

The Mertech driver makes sure that inverse key fields remain up to date if a table contains them. However, if non-DF tools are used to update a table, care has to be taken to keep the inverse key fields consistent with their corresponding field partners.

The Flex2SQL utility contains an option to create a trigger on a table to ensure that inverse key values are always updated. Right-click the table and then select Enable Trigger to Handle Inverse Key from the popup menu (this option is also available from the Database menu). It is recommended to use triggers to maintain inverse keys if any non-DF tools will update the table. Otherwise, the trigger can be omitted for a slight performance boost. Select the Disable Trigger to Handle Inverse Key option to disable the trigger.

The Flex2SQL utility also contains an option to examine and fix corrupted inverse key columns. Select the tables you want to check, and then choose Check Inverse Key Columns from the Database or popup menu. If any tables have corrupted inverse key values, the Check Inverse Key columns dialog box displays letting you know which tables are corrupted, and giving you the opportunity to fix them.

[Image: Check Inverse.tif]

The need for inverse keys has diminished with new features in later SQL backends. It is especially uncommon to require inverse key fields in MS SQL Server.


MySQL does not currently support ascending and descending index value storage. However, this feature is planned for future releases. Once this feature is available in MySQL, the Flex2SQL utility will no longer allow the Create Inverse Keys option to be enabled for MySQL backends.

Inverse Key Attributes

Two read-only attributes, DF_FIELD_INVERSE_KEY_TYPE and DF_FIELD_INVERSE_KEY_NAME, are available to retrieve inverse key information.



Inverse Key Constants

Four constants can be used to identify the segment type, among them INVK_CASE_SEG, INVK_DESC_SEG, and INVK_DESC_AND_CASE_SEG (used in the examples below).





Example: Display Inverse Key Name


Get_Attribute DF_FIELD_INVERSE_KEY_TYPE of iTable iField iInkTypes to iHasInvK
If (iHasInvK) Begin
    Get_Attribute DF_FIELD_INVERSE_KEY_NAME of iTable iField iInkTypes to sInkName
    Showln 'Ink Name: ' sInkName
End



Example: Display Field #, Field Name, Inverse Key Type, and Inverse Key Name

Open '' as invt
Move invt.File_Number to iFile
Get_Attribute DF_FILE_NUMBER_FIELDS of iFile to iFields
For iField from 1 to iFields
    Get_Attribute DF_FIELD_NAME of iFile iField to sFieldName
    Get_Attribute DF_FIELD_INVERSE_KEY_TYPE of iFile iField to iInverseKeyType

    If (iInverseKeyType = INVK_CASE_SEG) Move 'INVK_CASE_SEG' to sInverseKeyType
    If (iInverseKeyType = INVK_DESC_SEG) Move 'INVK_DESC_SEG' to sInverseKeyType
    If (iInverseKeyType = INVK_DESC_AND_CASE_SEG) Move 'INVK_DESC_AND_CASE_SEG' to sInverseKeyType

    Get_Attribute DF_FIELD_INVERSE_KEY_TYPE of iFile iField iInverseKeyType to iHasInvK
    If (iHasInvK) Begin
        Get_Attribute DF_FIELD_INVERSE_KEY_NAME of iFile iField iInverseKeyType to sInverseKeyName
        Showln 'Field#' iField ' - Field Name: ' sFieldName ' - Inverse Key Type: ' sInverseKeyType ' - Inverse Key Name: ' sInverseKeyName
    End
Loop




Relationship Synchronization

The Flex2SQL utility Synchronize Relationship option allows you to transfer DataFlex relationships to the server back-end or to retrieve the relationship information from the server into the .INT files.

After a database conversion, all the relationships are stored in the .INT file, but they are not propagated to the SQL database. Select Synchronize the SQL Foreign Key with the INT Relationship (the synchronize button in the left pane) to transfer the relationships to the SQL back-end. The right pane then shows the relationships that are created.

Synchronize Relationships.png

With the Synchronize Relationship option you can also use external tools to create database relationships and then update the .INT file using Synchronize the INT file relationship with the SQL table foreign key (the left-pointing synchronize button in the right pane).

Relationships need to be maintained in the INT file to ensure that DataFlex operates as expected. It is up to you to determine if you need relationships on the SQL side. The Mertech driver works the same with or without the relationships.

Scripting Options

The Flex2SQL utility lets the developer generate scripts that perform data migration and database maintenance and lets the DBA review the scripts before running them on the live server. The Flex2SQL utility Tools menu contains commands to generate SQL scripts to:

  • create SQL tables, including any constraints and indexes

  • restructure SQL tables

  • analyze the SQL database after a text file load and adjust the RECNUM values to run properly with DataFlex

  • initialize auto increment columns (in PostgreSQL) or sequences (in Oracle) to be the highest number in the table plus one

  • enable or disable a trigger so that inverse key values are properly updated

  • drop objects such as sequence definitions on Oracle or an index or table in any SQL database engine
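For example, the generated auto increment initialization for a PostgreSQL table might contain a statement along these lines (table and sequence names are illustrative):

```sql
-- Make the sequence's next value MAX(recnum) + 1
SELECT setval('customer_recnum_seq', (SELECT MAX(recnum) FROM customer));
```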

Propagating Changes to Customer Sites

A common scenario is for a developer to make changes to the database on his own machine and commit the changes to the SQL Server in the development environment.

The problem is how to deploy the changes to customer sites. The developer must send the following information to a customer site before updates can go live:

  • His new application

  • Updated INT file(s) and SQL script(s), or
    procedures for restructuring the database

The Flex2SQL utility saves restructure code in the Restructure Table > Results > Source Code tab. So, one option is for the developer to create a program that includes this restructuring code, test the code, and then run the program at each customer site.

The Flex2SQL utility has a new feature, available in the Restructure Table > Results > SQL Script tab, to create an SQL script containing the required database modifications. Now the developer can ship his new application and INT file(s) along with the SQL script to the customer site. The customer’s DBA (or whoever manages the customer database) can review the SQL script, and then run the script on the live server.

Another new feature in the Restructure Table > Results tab is the option to directly save the INT file for the updated table. The new INT file can be copied directly from the saved location.

Note: The INT file can also be copied from the development drive once the changes are committed and the new INT file is generated.

Macro Commands to Create Restructuring Code

The SQL update script and new INT file can also be created programmatically. Mertech provides the following new macro commands for a developer wishing to write their own restructuring code:





As an example, this sample reads a DEF file, and uses it to save the corresponding SQL script and INT files, without making any actual changes to the SQL database.

Procedure Generate
    Local Integer iFile iOffset iLen
    Local String sScriptLine

    // Load the DEF file
    Move 0 to iFile

    Structure_Start iFile sMainDriver
    Load_Def 'salesp.def' Onto iFile
    Structure_End iFile

    // Save the SQL script
    Move 0 to iOffset
    Direct_Output "salesp.sql"
    Repeat
        Get_Int_Sql_Script_Sql_Chunk iOffset to sScriptLine iLen
        If (iLen > 0) Begin
            Write sScriptLine
            Move (iOffset + iLen) to iOffset
        End
    Until (iLen = 0)
    Close_Output

    // Save the INT file
    Move 0 to iOffset
    Direct_Output ""
    Repeat
        Get_Int_Sql_Script_Int_Chunk iOffset to sScriptLine iLen
        If (iLen > 0) Begin
            Write sScriptLine
            Move (iOffset + iLen) to iOffset
        End
    Until (iLen = 0)
    Close_Output
End_Procedure





Tells the driver that the following restructuring calls should not actually write anything to the server or update the INT file on disk, but should instead, log the changes.



Stops logging and restores restructuring back to the normal state.


Syntax: GET_INT_SQL_SCRIPT_SQL_CHUNK <Offset> TO <Variable> <Returned length>

Offset

Starting position for storing the SQL script (offset based on position 0)

Variable

Internal variable for storing the SQL script

Returned length

Length of the stored portion

Stores a portion of the SQL script in an internal variable. The script is stored in chunks since a script can be larger than the maximum allowable string length.


Syntax: GET_INT_SQL_SCRIPT_INT_CHUNK <Offset> TO <Variable> <Returned length>

Offset

Starting position for storing the INT file (offset based on position 0)

Variable

Internal variable for storing the INT file

Returned length

Length of the stored portion

Stores a portion of the INT file in an internal variable. The INT file is stored in chunks since the file can be larger than the maximum allowable string length.


The MERTECH.INC file contains macro commands that can be called from a DataFlex program. MERTECH.INC describes each command and contains example code. A copy of the MERTECH.INC file is installed in <Program Files>\Mertech Data Systems\DB Drivers\DataFlex\lib.
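To make the macro commands available, a program includes the file with a Use statement:

```
// Pull in Mertech's macro commands and attribute definitions
Use MERTECH.INC
```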

Detailed information about each command in the MERTECH.INC file is available in a separate book later in this online help file.

Note: Significant changes were made to MERTECH.INC in v12.1. This includes overhauling documentation of the commands and attributes defined in MERTECH.INC and adding support for new features like ESQL and unified stored procedures. Code that is compiled using the v12.1 MERTECH.INC file will only work with the v12.1 drivers. Code that was compiled with an older version of MERTECH.INC will continue to work with the new drivers.

INT Files

The Flex2SQL migration utility adds a driver prefix (for example, Sql_drv) to the root file name entry in the filelist and generates an intermediate file (root_file_name.INT) when a DataFlex file is converted to the target database.

When an open command is issued for a file containing a driver prefix, the DataFlex runtime passes the open operation to the driver associated with the file.

When an open command is issued for a file containing an INT extension, the API searches the INT file for the DRIVER_NAME token, loads the appropriate driver and then passes control to the driver.

An INT file contains information required to allow DataFlex to connect to and open the converted SQL tables. INT options are listed below. A sample INT file follows.




DRIVER_NAME

Driver to load when opening this file.


SERVER_NAME

The name of the server with optional instance name (see Login Details).


DATABASE_NAME

The database name.


The table name (see Login Details).


SCHEMA_NAME

The database owner (see Login Details).


Number of fields in the DataFlex file.


The index number for the field in the DataFlex file that is used to create a primary key on the target server.


Primary index in the DataFlex file.


Enable (1) or disable (0) OEM character translation within the OLE-DB driver.


Fetch data a record (FOR_ONE_ROW) or a set of records (FOR_SET) at a time. When FOR_SET is specified, the driver limits the size of the query to the MAX_ROWS_TO_QUERY value. When FOR_ONE_ROW is selected, the driver generates a query that produces a result set containing only one record. Performance issues can arise if FOR_ONE_ROW is used.


MAX_ROWS_TO_QUERY

The number of records to be fetched in one network round-trip.


Enable (YES) or disable (NO) storage of fetched records in local cache.


FIELD_NUMBER

The DataFlex column number. All subsequent INT entries apply to this field until another FIELD_NUMBER (or INDEX_NUMBER) entry is listed.


The DataFlex main index for the column.


The number of bytes on the target database.


The number of bytes in the DataFlex field.


The DataFlex field type for the column (DataFlex DF_FIELD_TYPE enum value).


The starting column for an overlap field.


The ending column (inclusive) for an overlap field.


Filelist number of a related parent table.


Column number in the parent table that this column relates to.


Name for the inverse key column to handle a case sensitive index or descending index segment.


The DataFlex index number. All subsequent INT entries apply to this index until another INDEX_NUMBER entry is listed.


As an alternative to specifying the segments of an index, an INDEX_NAME entry can be added after an INDEX_NUMBER entry, giving the name of the index on the server that should be linked to the preceding INDEX_NUMBER. When INDEX_NAME is used, it is the only option that needs to be specified; all the remaining details are loaded from the server.

If segment details are also specified, Flex2SQL operates in a "strict" mode with regard to indexes and requires that both the name AND the index segment details match.


Number of segments in this index.


Index created by the Flex2SQL migration utility on the server.


The field number of the index segment column.


Whether case is used for this index segment (USED = case sensitive, IGNORED = case insensitive).


The direction for this index segment (ASCENDING or DESCENDING).

Sample INT file


SERVER_NAME testsys\sqlexpress
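As a rough sketch, a generated INT file combines the tokens described above along these lines (the database, field, and index details below are illustrative assumptions, not output from a real migration):

```
DRIVER_NAME SQL_DRV
SERVER_NAME testsys\sqlexpress
DATABASE_NAME example_db
SCHEMA_NAME dbo
MAX_ROWS_TO_QUERY 10

FIELD_NUMBER 15
FIELD_NAME obj_var_overlap

INDEX_NUMBER 1
INDEX_NAME example_idx1
```

An actual file generated by the migration utility will contain one FIELD_NUMBER group per column and one INDEX_NUMBER group per index, using the tokens listed above.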








































Login Details

The Flex2SQL migration utility allows you to choose whether login details (SERVER_NAME, DATABASE_NAME, and SCHEMA_NAME) are included in the INT file or not (Maintenance > Create Table or Restructure Table > Table tab and Tools > Preferences > Login tab).

If login details are included, these values are hardcoded for the table. If a token is not included in the INT file, then the assigned value comes from the user’s default information (DATABASE_NAME and SCHEMA_NAME), or from the server the user logged into (SERVER_NAME).

Note: If login details are included in the INT file and you move your tables to another server, database, or schema, the driver is no longer able to find the tables, and you have to recreate or manually edit the INT file.

Working Without an INT File

A table can be opened without an INT file, provided a string containing table information is passed as an argument to the OPEN command. The string format is driver-specific:

Flex2SQL for MySQL :

Open "mdsmysql:<serverName>\<database>*<tableName>=<numberFields>" as table

Flex2SQL for Oracle:

Open "ora_drv:<serverName>\<owner>*<tableName>=<numberFields>" as table

Flex2SQL for PostgreSQL:

Open "mdspgsql:<serverName>\<database>*<schema>*<tableName>=<numberFields>" as table

Flex2SQL for MS-SQL:

Open "sql_drv:<serverName>\<database>*<owner>*<tableName>=<numberFields>" as table




Server name used in a previous Login command


Database name where the table is stored


Owner of the table


PostgreSQL schema name


Case sensitive table name


Number of DataFlex fields the table contains (optional)

For example, the following line opens a table named Customer in the pubs database on a local MS SQL Server, with dbo as the table owner:

Open "sql_drv:\\localhost\pubs*dbo*Customer" as customer

There are certain DataFlex attributes (for example, main index, file relationship, overlap fields, and field length) that are not defined when a table is opened without an associated INT file. You can use Set_Attribute commands to set these attributes and fully define the table as a DataFlex data file. Keep in mind that settings made using Set_Attribute are only valid while the table is open. As soon as you close the table, the settings are lost.

Setting Table Attributes Dynamically

A new Flex2SQL macro command block, Structure_Start_Dynamic / Structure_End_Dynamic, completely eliminates the need for an INT file. Instead, this command block allows the full table structure (which is generally kept in the INT file) to be passed to the driver during program execution.

Table settings are assigned inside the Structure_Start_Dynamic and Structure_End_Dynamic block using Set_Attribute commands. Set_Attribute commands called inside the block affect the data file structure that is kept in memory by the driver. No physical change to the table takes place.

The syntax for these two commands is:

Structure_Start_Dynamic Of <tableName>
Structure_End_Dynamic Of <tableName>




DataFlex name of the table opened without an INT file

The customization that can be done inside the dynamic settings block is generally the same as what can be specified inside an INT file. A few examples below show how this works. The examples replicate information from real INT files.

Note: Make sure that you update the Set_Attribute commands if the physical table structure is changed.

Example: Opening a System File

The following INT file was created during the ORDSYS.DAT migration to an MS SQL Server database.

















To open the corresponding table without the INT file, while still retaining all the DataFlex file settings, we can do the following:

// Open the table without the INT file

Open "sql_drv:\\localhost\framework*dbo*ordsys=3" as ordsys

// set the DataFlex attributes using the dynamic settings

Structure_start_dynamic of ordsys

Move ordsys.File_Number to iFileNumber

Set_Attribute DF_FILE_IS_SYSTEM_FILE of iFileNumber To DFTRUE

Set_Attribute DF_FIELD_LENGTH of iFileNumber 1 To 6

Set_Attribute DF_FIELD_LENGTH of iFileNumber 2 To 6

Set_Attribute DF_FIELD_LENGTH of iFileNumber 3 To 6

Structure_end_dynamic of ordsys

// At this point the Ordsys DataFlex data file has the same settings as

// if it was opened using an INT file.

If you compare the information from the INT file with the information specified in the Set_Attribute command calls, you see that although most of the attributes are defined, a few tokens from the INT file are not specified. They are:





These attributes can also be set, but since these attributes are assigned their default values, the settings are not required.

Example: Opening a Table containing Index Definitions

Below is the INT file created when migrating the DAT file Salesp to an MS SQL Server database:
























The following code opens this table and sets its index segments as case-insensitive segments using dynamic settings:

// Open the table without the INT file

Open "sql_drv:\\localhost\framework*dbo*salesp=2" as salesp

// Set the DataFlex attributes using the dynamic settings

Structure_start_dynamic of salesp

Move Salesp.File_Number to iFileNumber

Set_Attribute DF_FIELD_INDEX of iFileNumber 1 to 1

Set_Attribute DF_FIELD_INDEX of iFileNumber 2 to 2

// Index.1

Set_Attribute DF_INDEX_NUMBER_SEGMENTS of iFileNumber 1 to 1

Set_Attribute DF_INDEX_SEGMENT_FIELD of iFileNumber 1 1 to 1

Set_Attribute DF_INDEX_SEGMENT_CASE of iFileNumber 1 1 to DF_CASE_IGNORED

// Index.2

Set_Attribute DF_INDEX_NUMBER_SEGMENTS of iFileNumber 2 to 2

Set_Attribute DF_INDEX_SEGMENT_FIELD of iFileNumber 2 1 to 2

Set_Attribute DF_INDEX_SEGMENT_CASE of iFileNumber 2 1 to DF_CASE_IGNORED

Set_Attribute DF_INDEX_SEGMENT_FIELD of iFileNumber 2 2 to 1

Set_Attribute DF_INDEX_SEGMENT_CASE of iFileNumber 2 2 to DF_CASE_IGNORED

// Refresh the internal file structure

Structure_end_dynamic of salesp

Example: Opening a Table with Overlap Fields and Relationships

Below is the INT file created when migrating the DAT file Object to an MS SQL Server database:





























FIELD_NAME obj_var_overlap












Use the following code to open this Object table and set the overlap and relationship info:

// Open the table without the INT file

Open "sql_drv:\\localhost\framework*dbo*object=15" as Object

Structure_start_dynamic of object

Move object.File_Number to iFileNumber

Set_Attribute DF_FIELD_INDEX of iFileNumber 1 to 1

Set_Attribute DF_FIELD_RELATED_FILE of iFileNumber 1 to 100

Set_Attribute DF_FIELD_RELATED_FIELD of iFileNumber 1 to 1

Set_Attribute DF_FIELD_INDEX of iFileNumber 2 to 1

Set_Attribute DF_FIELD_INDEX of iFileNumber 3 to 1

Set_Attribute DF_FIELD_NATIVE_LENGTH of iFileNumber 3 to 6

Set_Attribute DF_FIELD_LENGTH of iFileNumber 5 to 2

Set_Attribute DF_FIELD_RELATED_FILE of iFileNumber 6 to 150

Set_Attribute DF_FIELD_RELATED_FIELD of iFileNumber 6 to 1

Set_Attribute DF_FIELD_LENGTH of iFileNumber 8 to 2

Set_Attribute DF_FIELD_LENGTH of iFileNumber 10 to 2

Set_Attribute DF_FIELD_NAME of iFileNumber 15 to "obj_var_overlap"

Set_Attribute DF_FIELD_TYPE of iFileNumber 15 to DF_OVERLAP

Set_Attribute DF_FIELD_INDEX of iFileNumber 15 to 1

Set_Attribute DF_FIELD_OVERLAP_START of iFileNumber 15 to 1

Set_Attribute DF_FIELD_OVERLAP_END of iFileNumber 15 to 2

Set_Attribute DF_FIELD_RELATED_FILE of iFileNumber 15 to 101

Set_Attribute DF_FIELD_RELATED_FIELD of iFileNumber 15 to 15

// Index.1

Set_Attribute DF_INDEX_NUMBER_SEGMENTS of iFileNumber 1 to 3

Set_Attribute DF_INDEX_SEGMENT_FIELD of iFileNumber 1 1 to 1

Set_Attribute DF_INDEX_SEGMENT_FIELD of iFileNumber 1 2 to 2

Set_Attribute DF_INDEX_SEGMENT_FIELD of iFileNumber 1 3 to 3


// Refresh the internal file structure

Structure_end_dynamic of Object


The MERTECH.INI file contains settings used by the Mertech DataFlex to SQL drivers. The settings are described below. Refer to Customize MERTECH.INI to meet your needs for instructions on modifying the MERTECH.INI file.

By default, all available options are present in this file but are disabled (commented out with a semicolon (;) at the beginning of the line).

A setting is assigned by entering a value after the equal sign (e.g., Server = sqlsrv\sql2008). Leaving a value empty means the driver's default value is used. Spaces before and after the equal sign are ignored. All settings and values are case insensitive.

Entries under a specific heading (e.g., [MSSQL]) apply only to the driver named by that heading.
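For instance, the same setting name can appear under more than one heading, and each copy applies only to its own driver (a sketch; the [MYSQL] heading name and the values shown are assumptions):

```
[MSSQL]
Server = sqlsrv\sql2008

[MYSQL]
Server = mysqlhost
Port = 3306
```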

All settings are listed in the table below. Settings available for a particular driver are marked with an “X”. The value marked with an asterisk (*) is the default for the field.












The name of the server with optional instance name. Usually comes from the INT file or login dialog box.

Example: sqlsrv\sql2012





The database name. Usually comes from the INT file or login dialog box.

Example: modeldb




Port number where the server is listening for communication.

Default value PGSQL = 5432

Default value MYSQL = 3306



The database owner. Usually comes from the INT file or login dialog box.

Example: dbo



Use Windows “trusted” authentication. Usually comes from the INT file or login dialog box.

Values: <yes, *no>



Controls whether the client automatically trusts the certificate sent by the server. Must be yes if the server is using the default self-signed certificate. The default of yes is less secure but shouldn't require any changes from prior versions. This connection property is only available starting from build 17.0.9622 and only works when using the v19 MSOLEDBSQL client.

Values: <*yes, no>



Controls whether the client uses encrypted communication with the server. This setting defaults to no because older servers do not support encryption by default. This connection property is only available starting from build 17.0.9622 and only works when using the v19 MSOLEDBSQL client.

Values: <yes, *no>






If not using a trusted connection, the login ID for authentication. Usually comes from the login dialog box.

Example: mydomain\me






Encrypted password associated with the User. Use the MdsEncryptPassword utility to get this value (for example, MdsEncryptPassword MyPassword).

Example: 78870b4d0a2122226983074d8d2a3b50






Should Error and Login dialog boxes be disabled? If set to yes, no login dialog box pops up even if credentials are missing from the INI file or are incorrect. This is useful when using the WebApp server, where no desktop is available for a popup dialog box.

Values: <yes, *no>



If the SQL Server has a mirror partner, the server name is provided here (with optional instance name).

Example: sqlsrv\mirror2005






Should the driver try to re-establish stale and/or dead connections? If AutoReconnect is set to yes, the driver throws an exception when a query is issued on a stale or dead connection in the current transaction. The driver then attempts to re-establish the connection before the next query is issued on the connection in a new transaction. Not recommended unless the application can properly handle SQLExceptions.

Values: <yes, *no>.






If AutoReconnectDialog is set to yes and a connection fails, the driver pops up a dialog box asking the user whether it is okay to reconnect. This allows the user to reconnect the network cable or wait for the DSL connection to come back. Not recommended in a web application.

Values: <yes, *no>





Sets the amount of time (in seconds) a connection waits before it times out.

Default value MSSQL = 15

Default value MYSQL = 15

Default value PGSQL = 0 (which means to use the operating system timeout setting)




Sets the number of milliseconds a statement waits for a lock to be released.


* -1 = wait forever

0 = no wait - return error immediately

other number = milliseconds



Controls the locking and row versioning behavior of Transact-SQL statements issued by a connection to SQL Server.


Note: Only valid for MS SQL.





Maximum number of cursors to keep open per server.

Default value = 100



Enable/disable OEM character translation within the OLE-DB driver.

Values: <*yes, no>

UseRowCountInTransactions




Use the “rowcount” value during transactions?

The MAX_ROWS_TO_QUERY setting in the INT file controls how many rows are fetched at one time. Fetching multiple rows at a time can improve performance when reading records in a file. However, when multiple rows are fetched during a transaction, all these rows are locked. This causes unnecessary contention on the server if not all of the rows require updating. If contention is a concern for your database, set this option to no.

Values: <*yes, no>






If blank or zero is moved to a field, force it to have a value even if it is nullable.

If set to yes:

if a value is never moved to a field, the field gets the 0/default value

if a value is moved to a field (even if it is 0 or the default date), the field gets that value

Values: <yes, *no>






Location of the debug trace file. Specify the complete path and filename for the trace file. If no file is specified, no tracing occurs.

Example: c:\flex2sql.tra






Trace output level


-1: Errors only

0: Basic

1: Detailed, 1st level

2: Detailed, 2nd level

3: Detailed, 3rd level






Flush each line to disk?

To improve performance, flushing is turned off by default and the operating system's normal file caching mechanism is used. If the application crashes, the last few lines of a trace are not written to the trace file. Enable flushing to cause each line to be written to disk immediately.

Values: <yes, *no>






Determines how the driver manages table and field names: maintain the current case, convert to uppercase, or convert to lowercase.

Values: <*KeepCase, Uppercase, Lowercase>



Automatically remap numeric data types to native types (e.g. smallint, int, bigint). Turn this setting off to always migrate numeric data as numeric/decimal.

Values: <*yes, no>




Manages the use of the index hints optimization during a query. Only use this setting if you are having index performance issues.


Default: off in MSSQL, on in Oracle

NoIndexHint: do not use index hints

UseIndexHint: use index hints






Force date format initialization when stored procedure/function or trigger is executed. This is only needed if a stored procedure or trigger changes the date format.

Values: <yes, *no>

Sample settings in a MERTECH.INI file are shown below:

; Mertech.ini

; This file has settings for the Mertech DataFlex to SQL drivers

; It should be placed in the same location as your driver and license
; file (sql_drv.dll, mdsmysql.dll, sql_drv.cfg, mdsmysql.cfg, etc)


;; The name of the server with optional instance name

;; usually comes from the INT file or login dialog

;; example: sqlsrv\sql2008

Server = sqlsrv\sql2008

;; The database name

;; usually comes from the INT file or login dialog

Database = modeldb

;; If not using trusted connection, the login ID for authentication

;; usually comes from the login dialog

;; example: mydomain\me

User = mydomain\me

;; Encrypted password associated with the User

;; use the MdsEncryptPassword utility to get this value

;; example: MdsEncryptPassword MyPassword

Password = 78870b4d0a2122226983074d8d2a3b50

Customize MERTECH.INI to meet your needs

  1. Make a copy of the MERTECH.INI file, which is available at:

<Program Files>\Mertech Data Systems\DB Drivers\DataFlex\bin\mertech.ini

  2. Edit the copy to match your customer requirements.

Make sure you edit the section appropriate for your driver, for example the [MSSQL] section for the MS SQL Server driver.

  3. Save the edited file in the {DataFlex Install Path}\bin directory or in the folder where your FLX/EXE file is stored.

Database Maintenance

You can restructure and manage your database in a variety of ways.

  • One way is to use the Flex2SQL utility restructuring dialogs and your own restructuring code (refer to the Maintenance Menu section in the Flex2SQL online help for information).

  • Another option is to make SQL Server the main repository for your database structure and use available SQL modeling tools to manage your database.

  • You can also restructure and manage tables using DataFlex Studio or Database Builder.

SQL Server as the Database Repository

You can make SQL Server the main repository for your database structure and use available SQL modeling tools to manage your database. Then, you can run the Flex2SQL utility Generate .INT File from Table/View/Synonym feature to generate .INT and .FD files from the modified tables and bring your changes back into DataFlex.

Select Tools > Generate .INT > .INT File from Table/View/Synonym to display the Generate INT File dialog box.

[Screenshot: Generate INT File dialog box]

The grid contains the following columns:



Check Box

Select the check box to mark the table for importing into DataFlex.

List icon

A green arrow over the list icon indicates the table already exists in DataFlex.

Table Name

The name of the SQL table on the server.

File Number

The existing (if the table already exists in DataFlex) or proposed (for a new table) file number.

Note: You can change the file numbers in the grid. However, we recommend not changing the file numbers for existing tables, or you may end up with duplicate tables in the filelist but a single INT file.

INT File Name

The intermediate file name for this table.

FD File Name

The field offset definition file name for this table.

Select a Fetch radio button on the right to choose the type of object to be retrieved from the DBMS.

The following import options are available:



Import SQL relationships

Select to import the SQL relationships into DataFlex.

Note: Be sure to check this option. Relationships need to be maintained in the INT file to ensure that DataFlex operates as expected.

Show INT File info view

Select to display the Create Intermediate File dialog box, which allows you to choose additional import settings. Additional settings are not usually required.

Insert into Filelist

Select to insert the imported tables into the DataFlex filelist.cfg file.

Generate FD File

Select to generate the FD file.

Caution when updating previously migrated files

When a DataFlex file is migrated to an SQL backend, the Flex2SQL utility creates the SQL Table using the same DataFlex field names and the proper SQL data types. The Flex2SQL utility also creates an INT file that contains DataFlex information such as relationships, underlaps, special field lengths and data types. The INT file allows the driver to connect to the SQL table and to make sure that DataFlex works as you expect.

Recreating the INT file from a previously migrated table overwrites the current INT file, and DataFlex settings like underlaps and relationships are lost.

Note: Check the Import SQL relationships option to include the SQL relationships in the INT file. The Flex2SQL Relationship Synchronization option also allows you to retrieve the relationship information from the server into the .INT file.

Accessing Updated Tables in DataFlex

  1. Start the Flex2SQL utility and select Tools > Generate .INT > .INT File from Table/View/Synonym.

  2. Fetch the desired tables or views from the SQL database.

  3. Check the tables (or view) for which you want to import or update INT files (you can also use the Select buttons).

  4. Click OK. The .INT files are generated and added to your Filelist with the driver prefix.

  5. When prompted, select a Filelist slot for the file.

  6. Open DataFlex Studio or Database Builder and select the file. Make sure the filename includes the driver prefix.

Note: DataFlex includes connectivity wizards for directly importing existing SQL tables. This option is not compatible with Mertech’s drivers. INT files must first be created for existing SQL tables using the Flex2SQL utility Tools > Generate .INT > .INT File from Table/View/Synonym. Then these tables can be fully managed in DataFlex.

  7. The Mertech Login dialog box is displayed. Enter the login information. After login, Database Builder shows the file structure information.

  8. Generate the Data Dictionary files (.DD files) for the table.

  9. Open DataFlex Studio and generate a view using the view wizard and the .DD file generated in the previous step.

You are ready to compile and run your DataFlex program.

DataFlex as the Database Repository

Mertech also supports restructuring and management of tables using DataFlex Studio or Database Builder. You can use DataFlex as the master repository for your data and the Mertech drivers and DataFlex will ensure that all updates are reflected in the SQL backend, INT files, and any FD files.

The screenshot below shows a column being added to the Users table from the previously migrated order entry example.

[Screenshot: adding a column to the Users table in the DataFlex Table Editor]

Note: The DF_FILE_ROOT_NAME includes the Mertech driver prefix.

DataFlex 18.0 added support for declaring and editing native data types in the Table Editor. This means DataFlex developers can use the Table Editor to set the native (SQL) type for table columns instead of being limited to the default mapping from DataFlex types to native types.

For example, the Mertech driver natively supports GUID column types for some backends. The Table Editor now enables developers to set a column type to GUID even though DataFlex does not have a native GUID type. Prior to DataFlex 18.0, developers had to use the Flex2SQL Migration Utility to do this, or edit tables directly in the tools provided by the database vendor.

Flex2SQL v14.0 and higher fully support this new DataFlex feature.

Transaction Overview

When you transfer money from your savings account to your checking account, you expect that both accounts will reflect the updates. If the bank loses connectivity to its database before your money is deposited into your checking account, you expect that no money will be debited from your savings account. A database that supports transactions is able to guarantee that either both of these steps succeed or both steps fail.

A transaction is a unit of work that either completes or fails as a whole. A transaction has the following pattern:

  1. Begin transaction

  2. Query the database and perform the required updates

  3. If no errors occur, commit the transaction

If errors occur, roll back the transaction

  4. End the transaction

To be reliable, a transaction must have ACID properties.

  • Atomicity. A transaction must be an atomic unit of work. Either all the updates succeed or the database is left unchanged.

  • Consistency. When completed, a transaction must leave the database in a consistent state. Internal data structures such as indexes must be correct at the end of the transaction.

  • Isolation. Modifications made by the current transaction must be isolated from modifications made by other concurrent transactions. Updates can be lost if one transaction is in the process of modifying data but has not yet completed, and then a second transaction reads and modifies the same data. Isolation is enforced through locking.

  • Durability. After a transaction has completed, its effects are permanent. Once the user has been notified of a transaction's success, the transaction must not be lost. Many DBMSs implement durability by recording transactions in a transaction log that can be reprocessed to recreate the system state as it was just before a later failure.

Locking Overview

Transaction isolation is accomplished through the use of locking. Locking is a synchronization mechanism that guarantees exclusive access to an object. All databases that support multi-user access use some form of locking.

The example below shows a problem that arises when locking is not used. Two users simultaneously gain access to an account. The twenty dollar debit by the first user is overwritten when the second user stores their updated record.

Locking prevents the second user from accessing the account until the first user releases control, guaranteeing both updates are recorded.


Databases support a variety of locking mechanisms. Locks can be applied at the row, page, or table level. Using low-level locks, such as row locks, increases concurrency by decreasing the probability that two transactions will request locks on the same piece of data at the same time.

DataFlex Locking

DataFlex programs alter data in tables either by using data dictionary objects (DDOs) or procedurally through manual coding.

Using DDOs

DDOs use the REQUEST_SAVE or REQUEST_DELETE method to modify tables. Both methods automatically implement transactions, which fully support all of the ACID property requirements.


DataFlex also supplies the following commands to manually code procedures that perform database updates:

  • BEGIN_TRANSACTION to define the beginning of a transaction

  • END_TRANSACTION to commit a transaction

  • ABORT_TRANSACTION to rollback a transaction

  • LOCK to lock the database

  • REREAD to lock the database and refresh all active buffers

  • UNLOCK to unlock the database

  • SAVERECORD to update fields with new values

In either case, the native DataFlex database only supports table level locking. This means that a lock placed on a DataFlex table prevents anybody else from updating any rows in that table.

Native DataFlex allows other processes to have read access to data that has been locked by a process in a transaction/LOCK/REREAD scenario. SQL on the other hand, denies other processes access to rows locked inside a transaction block. Outside of a transaction block, SQL behaves like native DataFlex; locked rows can be read but not updated.
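Putting the manual commands together, a typical explicitly coded update follows this pattern (a sketch only; the table and field names are illustrative, and with a Mertech driver the REREAD locks just the re-found row rather than the whole table):

```
Begin_Transaction
    Reread Savings
        // The Savings buffer is refreshed and the current record is locked
        Move (Savings.Balance - 20) to Savings.Balance
        Saverecord Savings
    Unlock
End_Transaction
// On error, Abort_Transaction rolls the whole unit of work back
```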

Locking with Mertech Drivers

Mertech drivers use row-level locking. Since DataFlex locking is based on tables, this is an added benefit of using the client/server solution in medium to large multi-user environments. If a DataFlex program contains REREAD, the driver re-finds the record in the record buffer and locks the current record until the transaction is completed. Any record not explicitly locked can be updated by other users. This provides a great boost in concurrency over the native DataFlex database.

Dealing with Deadlocks

Deadlock occurs when one process holds a lock and then attempts to acquire a second lock. If the second lock is already held by another process, the first process is blocked. If the second process then attempts to acquire the lock held by the first process, the system has deadlocked.

The example below shows a deadlock that occurs when two processes attempt to transfer funds between the same two accounts.


Deadlocks are normally avoided by always locking in the same order. The above example violates this rule. The first process locks the checking account and then attempts to lock the savings account, while the second process locks the savings account and then attempts to lock the checking account.

Note: The entire table is locked in native DataFlex. Only an individual account needs to be locked when the Mertech drivers are used, lessening the chance for deadlock.

Deadlocks in the drivers are detected and passed back to DataFlex, which rolls back the transaction and retries it. If the deadlock persists, it is reported as an error back to the user.

Note: Different SQL databases have different options for dealing with deadlocks. If you encounter a deadlock, refer to the online documentation for the particular database. Trace files may be available to help identify which tables and queries were involved in the deadlock.

Some tips for avoiding deadlocks:

  • Set Smart_Filemode_State to true (the default) for all DDOs in the Data Dictionary, so smart table locking is used.

  • If using REREAD, explicitly list the tables you want to lock; otherwise all open tables are locked.

  • Open tables that are not updated as read-only.

  • Properly set up aliases; if an alias is set up incorrectly, the application can end up waiting for itself.

  • Always lock in the same order. If two applications access the same database, make sure the internal table numbers have the same relative order.

  • Use the Index Maintenance utility to verify that all indexes have been created. Missing indexes slow down finds and lead to longer-held locks.

  • Familiarize yourself with Mertech's macro commands, for example to use embedded SQL to move processing to the server side.

NULL Values

SQL allows records to be inserted into a table without a value in a field if the field is optional. This means that when a record is saved, the state of an optional field can be NULL. NULL is not a value; it is a placeholder for missing or inapplicable information.

DataFlex Applications

DataFlex applications have no concept of an SQL NULL value. A NULL value is always mapped to an empty value in a DataFlex application. For example, if you have a DF_ASCII field that is assigned to a VARCHAR table column, and the column is NULL, DataFlex reads the DF_ASCII field as a blank ASCII string. The following table shows how SQL NULL values are mapped in a DataFlex application:

DataFlex Data Type

SQL NULL value is mapped to


A blank ASCII string “”


A zero value 0


See Datetime for values.


An empty binary value “”


An empty text value 0x0


The default format for datetime fields depends on the configured date format (for example, military, US, or European) for the computer on which the application runs. The difference between these formats is the order of the fields. For example, yyyy-mm-dd vs. dd-mm-yyyy.

The default datetime value, shown as (yyyy-mm-dd) is:

0000-00-00 00:00:00 for MySQL Windows drivers v12.0 and above

0001-01-01 00:00:00 for earlier versions of the MySQL drivers

0001-01-01 00:00:00 for PostgreSQL drivers

0001-01-01 00:00:00 for Oracle drivers

For MS SQL drivers the default value depends on the native date format:

1753/01/01 00:00:00 for DATETIME

1900/01/01 00:00:00 for SMALLDATETIME

0001/01/01 00:00:00 for the other formats

Note: The DF_LOW value for the date data type is 0000-00-00 for MySQL Windows drivers v12.0 and above. The DF_LOW value for older drivers defaults to 0001-01-01. Applications using an old driver and performing a FIND GT/LT/GE/LE against an index with a Date segment will not find records with values of 0000-00-00.

Migration Options

Mertech always recommends migrating fields as NOT NULL since there is no way in DataFlex to differentiate between NULL and a blank value. Sometimes it may be necessary to use NULL for a certain column. If this is required, make sure that column is never used as an index segment. Using NULL for index segments may give unexpected results and can be detrimental to performance.

The Flex2SQL utility allows you to choose whether you want a non-index field to be NULLable when creating a new table or restructuring an existing table (Maintenance > Create Table or Restructure Table > Fields tab). If NOT NULL (the default setting) is selected, you can choose an appropriate default value for the field.

The Flex2SQL utility also allows you to set field handling preferences during migration. General preferences and user-defined default values for NOT NULL field types can be set in the Convert Database > Migration Options tab. Table-specific settings are made in the Convert Database > Fields tab. System defaults for NOT NULL fields are listed in the table below:

DataFlex Data Type          System default for NOT NULL columns

DF_ASCII                    ' ' (one blank space)

DF_BCD                      0 (zero)

DF_DATE                     See Datetime for values.

DF_BINARY                   An empty binary value

DF_TEXT                     ' ' (one blank space)

Note: Since indexed columns are always migrated as NOT NULL, a user-defined (if available) or system default value is assigned by the driver if none was previously assigned.

Macro Commands

Mertech provides two macro commands that you can use to customize field handling behavior. The first one is SET_FIELDS_NOT_NULL. Setting this macro to DFTRUE tells the driver that during an insert no columns of a given data type should be NULL. So even if a blank is moved to a column, the driver makes sure that the column has a value in it. The second macro command is SET_FIELDS_NULL. You can use this macro to tell the driver that columns of a given data type must be assigned NULL even if a value was moved to the column.

The following list shows the effective NULL value handling by the Mertech driver based on the values of SET_FIELDS_NULL and SET_FIELDS_NOT_NULL:

  • SET_FIELDS_NULL DFTRUE, SET_FIELDS_NOT_NULL DFTRUE (conflicting values): The SET_FIELDS_NULL value is ignored because the NOT_NULL value takes precedence.

  • SET_FIELDS_NULL DFTRUE, SET_FIELDS_NOT_NULL DFFALSE: The field is considered NULL regardless of whether a 0 or an empty string ("") is moved to the field or no value is assigned.

  • SET_FIELDS_NULL DFFALSE, SET_FIELDS_NOT_NULL DFTRUE: If a value is never moved to a field, the field is assigned 0 or the default value for that field. If a value is moved to a field (even if it is 0 or the default date), the field retains that value.

  • SET_FIELDS_NULL DFFALSE, SET_FIELDS_NOT_NULL DFFALSE (default values): If a value is never moved to a field, the field is NULL. If a value is moved to a field (even if it is 0 or the default date), the field retains that value.

Note: The syntax for these macros allows you to specify a different NULL handling setting for each of the data types supported by this operation.

For example:

// Table named T1 has the following numeric columns (c1, c2, c3)
SET_FIELDS_NOT_NULL DF_BCD to DFTRUE

Move 1 to T1.c1
Move 0 to T1.c2
Saverecord T1

The driver makes sure that c3 has a zero (0) value stored in it, instead of being NULL. If c3 was a string column (and SET_FIELDS_NOT_NULL DF_ASCII to DFTrue), then one blank space would be saved.
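Because the setting is applied per data type, several of these macros can be combined before a save. A sketch using the same syntax shown above (the data types chosen here are illustrative):

```
// Force string and numeric columns to always receive a value on insert
SET_FIELDS_NOT_NULL DF_ASCII to DFTRUE
SET_FIELDS_NOT_NULL DF_BCD to DFTRUE

// Force date columns to be stored as NULL when no value is assigned
SET_FIELDS_NULL DF_DATE to DFTRUE
```

With these settings in effect, a Saverecord stores a blank space and a zero in unassigned string and numeric columns, while unassigned date columns are stored as NULL.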

Mertech offers several reporting options:

  • A migration report to log the status of a migration.

  • A driver trace to log detailed driver calls that can help to identify bad data during the migration.

  • A driver trace to log driver calls while running any DataFlex program.

These options, along with the pathname for the generated files, are enabled in the Tools > Preferences > Reports tab.

Preferences Reports.png

Migration Report

The migration report gives users a complete picture of the migration process. It is no longer necessary to analyze the migration process on a table-by-table basis. This report displays any errors that may have occurred during the migration(s), the license and version of the migration tool, a table creation report, an index creation report, and a data migration report.

There is a summary at the top of the report that lets you know right away whether there were any errors in the migration, so you do not have to search through the whole report. You can print and save the report text file. The report provides details on table and index creation, along with a comparison of the number of records migrated against the number of records in the DataFlex files. If a process fails for any reason, the STATUS column shows the status as "FAILED" and the ERROR LOG column indicates the error generated.

Enabling the Migration Report

  1. Select Tools > Preferences > Reports tab.

  2. Select the Enable Migration Report checkbox.

  3. Click Browse to select a location for the migration report to be saved.

  4. Type the name for the report text file.

  5. Select the detail level needed; Detailed 2 is a good place to start.

  6. Click OK.

Once the migration is complete, select Reports > Migration Report to view the report.

Clearing the Migration Report

Each time you perform a migration for the selected file(s) a new migration report is appended to the end of the previous migration report. In order to start with a fresh, blank report, you need to select Reports > Clear Migration Report.

Sample Migration Report


Driver Trace

The driver trace log is useful in identifying bad data in the DataFlex database, which might result in incomplete migration. You can turn on this option in the Tools > Preferences > Reports tab. Mertech’s drivers have various levels to define tracing granularity.

The Errors Only option is especially useful during migration as it logs only the errors rather than all driver calls to the server. If bad data is encountered during migration, the target backend raises an error. On receiving this error, the driver logs the contents of the record buffer to the file and moves on to the next file. Migration for that file stops right after the error. The user can look at the trace file, correct the data in the DataFlex file, and then run the migration process on the file again. The most common problems are in data fields.

Note: In the event you are receiving an error from the server and the nature of the problem is unclear, Mertech recommends you run the driver trace first and then contact Mertech Technical Support. Any error returned by the server should first be looked up in the server documentation for clarification before contacting Mertech.

Enabling Driver Trace for Migration Logging

Select the Enable Migration Logging checkbox in the Flex2SQL utility Tools > Preferences > Reports tab to log errors during the migration process.

Enabling Driver Trace for Application Logging

If you are getting an error while running any DataFlex program and the error message or text is not clear, turn on driver tracing for applications.

Note: The application logging option severely affects performance if left on and generates very large trace files. Remember to turn this off once you have generated the trace you need.

There are several ways to enable application logging.

  1. Select the Enable Application Logging checkbox in the Flex2SQL utility Tools > Preferences > Reports tab.

  2. You can also turn on application logging by setting the TRACE_ON environment variable.

For Windows:

SET TRACE_ON=<TracefileName>

SET TRACE_LEVEL=1 (see also Selecting the Trace Level)

Note: There are no spaces after the equals sign.


For Linux:

TRACE_ON=<TracefileName>; export TRACE_ON

TRACE_LEVEL=1; export TRACE_LEVEL (see also Selecting the Trace Level)

Note: Variable names must be all uppercase because Linux is case sensitive.

  3. Alternatively, logging can be configured using Mertech's separate Trace Control program. Click the Windows Start button, open the Program menu, and select Mertech ISDBC Drivers for DataFlex > Trace Control.

Specify the trace file and trace level, then click Save.

Mertech Driver Trace.png
  4. A fourth option is to use the DataFlex macro command provided by Mertech:

ENABLE_TRACE ON TO “<tracefilename>” <tracelevel>


This option allows a developer to turn tracing on and off from inside an application. Custom trace messages can also be sent to the trace file:

SQL_TXT_MESSAGE “<trace message>”

These messages can help to pinpoint where the driver is in relation to the application when troubleshooting a particular function.

For example:



// Note that LOGIN or LOAD_DRIVER must be called

// before turning on the trace either explicitly or implicitly

// through an OPEN command of a migrated table



// Implicit login through OPEN of a migrated table
// (the table name, trace file, and trace level below are illustrative)
Open Customer
ENABLE_TRACE ON TO "C:\trace.log" 1

SQL_TXT_MESSAGE "Open completed"



  5. You can also use settings in the MERTECH.INI file (Trace_on, Trace_Level, and Trace_Flush) to control application logging. Refer to MERTECH.INI for additional information.

  6. If using WebApp, you may use either the INI file or code similar to that shown below to turn on tracing.

Note: Using the trace control tool or Flex2SQL to control tracing does not work with WebApp.

Object oWebApp is a cWebApp
    Load_Driver "SQL_DRV"
    ENABLE_TRACE ON TO "C:\WebApp_Trace.log" 1
    Login <server> <user> <password> <driver>
    If (Err) Begin
        // Your Code here
    End

    // Use ENABLE_TRACE OFF to stop tracing at the point you want

    Use Customer.wo
End_Object

Selecting the Trace Level

If you only want to log errors, select the Errors Only detail level in the Reports tab or set TRACE_LEVEL to -1. No other information is included in the log file. The messages are preceded by 'ERROR' or 'WARNING'. This option reduces the size of the log file.

The following trace levels are available. Each level includes the previous level’s information.

Errors Only: Errors and warnings only.

DEBUG_LEVEL (default): All the database calls issued by the application (such as FIND, SAVE, DELETE, LOGIN, and LOGOUT) and the embedded SQL calls, along with the call parameters.

Next level: Information about the MOVE commands from/to file field buffers and the calls to the Get_Attribute and Set_Attribute commands.

Next level: Text file generation through the driver.

Highest level: All calls to the driver, including when they start and when they finish.

Tracing to the Windows Debug Channel

Starting in version 15 of Flex2SQL, you can trace to the Windows debug channel instead of writing output to a file. To enable this mode, use the new trace utility, the "Mertech Trace Utility," which allows you to turn tracing on or off for all of Mertech's products.


The Trace utility can also be used to direct trace output to a file. To view the trace output sent to the debug channel, you need a third-party utility such as DebugView or DebugView++ (both free). You can also programmatically send output to the debug channel:



// Note that LOGIN or LOAD_DRIVER must be called

// before turning on the trace either explicitly or implicitly

// through an OPEN command of a migrated table



SQL_TXT_MESSAGE “Open completed”



The advantages of outputting to the debug channel include:

  1. Immediacy: You are able to see debug output in realtime as the messages are generated.

  2. Logging Options: The 3rd party utilities let you control how the output is saved in a variety of ways. For instance, you could set a maximum log file size, have the log file automatically compressed to a zip file after reaching a certain size, rotate the log file, and more!

  3. Improved Thriftly/WebApp Support: Using the debug channel, you can tell which process emitted the log messages, so you can debug a running web application that has process pooling turned on.

  4. Remote Debugging: Some of these debug channel viewers support sending the debug output over the network so you can remotely view debug output!

NOTE: Because this feature uses the standard Windows debug output, debug output generated inside the DataFlex Studio is captured by the Studio, so you will not be able to capture it with a third-party utility. Unfortunately, the DataFlex Studio captures this output but does not display it.

The driver uses the DataFlex error mechanism to display errors returned by the server. Since every error in DataFlex requires an error number, all errors returned by the server are mapped as DataFlex Error Number 25000 followed by the Server Error Number and Error Text as shown below.

<25000>[Server Error No] <Error Text>

You can look up the description of these error numbers and text in the database server documentation.

DataFlex-specific errors, such as Find Past End of File or Find Prior to Beginning of File, reflect DataFlex-specific behavior and are generated with the usual DataFlex error numbers.

Retrieving Native Error Messages

Native error messages can be retrieved without the DataFlex error handler by calling the command SQL_ERROR_MESSAGE.


This command is useful especially when using embedded SQL statements.
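For example, the native message can be fetched into a string after a failed embedded SQL call. The sketch below assumes SQL_ERROR_MESSAGE follows the `to <variable>` form used by the other Mertech commands:

```
String sNativeError

// Retrieve the last native error message reported by the server
SQL_ERROR_MESSAGE to sNativeError
If (sNativeError <> "") Showln "Server reported: " sNativeError
```

This is handy when you want to log or display the server's own wording rather than the mapped DataFlex error 25000.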

Basic Troubleshooting

Cannot Load DLL error

Oracle, PostgreSQL, and MySQL require that the client be installed (PostgreSQL and MySQL clients are shipped with the installation, in the bin folder). MS SQL Server will start, but might give a message that the native client is missing. The native MS SQL Server client is available as a download from Microsoft.

Note: If you are using Linux, you can see a list of missing libraries by running ldd against the driver library.

Unable to Initialize User Counting system <<status 28691>>

Indicates that the driver could not find the ORA_DRV.CFG, SQL_DRV.CFG, MDSMYSQL.CFG, or MDSPGSQL.CFG license file, that your license is invalid, or that your 30-day evaluation license has expired.

Obtaining License Information

You can obtain your license information in a few different ways:

  • Run Mertech’s separate Support Utility program. Click the Windows Start button, open the Program menu and select Mertech ISDBC Drivers for DataFlex > Mertech Support Utility

  • Perform a driver trace; the license information is located in the first line of the trace file.

  • Log into the Flex2SQL utility and select Help > About Flex2SQL.

License information is also available programmatically; consult Mertech Data Systems, Inc. for additional information.

With previous driver versions, Mertech allowed object names to be either all upper case or all lower case. The field names followed the convention selected for the table name. Now users are able to determine how the driver handles table and field names.

  • Convert to upper case.

  • Convert to lower case.

  • Keep all text case as it is currently stored in each DataFlex file.

Note: The default is to keep the existing case. The default in earlier versions was to convert to uppercase.

Setting Global Preferences for Flex2SQL

The behavior for case handling during creation, restructuring, and migration of tables using the Flex2SQL utility is selected by choosing the appropriate File Casing radio button in the Flex2SQL utility Tools > Preferences tab.


Note: When Flex2SQL is used, the setting selected in the Preferences > General tab overrides any TableCasing value specified in the INI file.

Setting Global File Casing

If databases and files are managed through DB Builder, the DataFlex Studio, or source code, the new TableCasing option should be added to the INI file in the DataFlex runtime folder.

Note: If TableCasing is not specified in the INI file, the driver keeps the existing case. The default in earlier versions was to convert to uppercase. Some servers have case-sensitive identifiers, so if the casing is changed, examine any embedded SQL to make sure the cases match.

Another option when files are managed through source code is to use the new DataFlex macro command provided by Mertech:

SET_DEFAULT_FILE_CASING to <filecasing>

The filecasing value specified by SET_DEFAULT_FILE_CASING takes precedence over a TableCasing setting in the INI file.

Setting File Specific Options

V10.x also includes a new file attribute, DF_FILE_CASING, which can be assigned the filecasing values listed above.

This setting is file-specific and overrides any global settings. The DF_FILE_CASING attribute controls how table and field casing are handled when restructuring or creating a new table. When the driver opens an existing table, it examines the casing in use for that table and assigns the DF_FILE_CASING attribute accordingly.
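A sketch of reading and overriding the attribute for a single table (the table name is illustrative, and the value assigned should be one of the filecasing values described above):

```
Integer iCasing

Open Customer
// Read the casing the driver detected for this table
Get_Attribute DF_FILE_CASING of Customer.File_Number to iCasing

// Override it before a restructure, using a previously obtained value
Set_Attribute DF_FILE_CASING of Customer.File_Number to iCasing
```

Because the attribute is re-evaluated when the driver opens an existing table, an override like this only matters when creating or restructuring the table.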

Mertech drivers support a standard OEM (DOS) to ANSI (Windows) translation based on the Windows API. When this feature is enabled, data is translated from the OEM character set to the ANSI character set before it is sent to the SQL database and then from ANSI back to OEM when the data is returned to the application.

Mertech version 11 drivers include a new feature that allows the user to customize the mapping values used for the OEM to ANSI translation. Custom translation can be enabled in two ways: a) by calling a macro command or b) by making an entry in the MERTECH.INI file. The difference between these two approaches is that the macro command must be added to each application and the application source code recompiled, whereas the MERTECH.INI file approach enables the custom OEM to ANSI character translation automatically for all applications.

If you plan to use customized OEM to ANSI translation, you must define the customized values before you read or write data from the SQL table. If you save data using the standard OEM to ANSI translation (or even if you do not select OEM to ANSI translation) and then subsequently enable customized translation, you may end up with unexpected character values.

Using the Macro Command

The syntax for the OEM to ANSI character translation macro is:





<CharEntry>      Character position in the code page (0 to 255)

<OEMValue>       Number to be assigned to the OEM character

<ANSIValue>      Number to be assigned to the ANSI character

For example, these macro commands map the characters in positions 135 and 225 to a blank space (’ ‘):


Load_Driver <driver>



Login <server> <user> <password> <driver>


Note: OEM_TRANSLATION_STATE must be set to true after logging in to use OEM to ANSI translation. If no custom character mappings are specified, the driver uses Win32 API OemToCharBuff/CharToOemBuff to do the conversion.

There is also a macro command to retrieve the current OEM and ANSI character translation settings. The syntax for this macro is:





<CharEntry>      Character position in the code page (0 to 255)

<OEMValue>       Variable to store the returned value for the OEM character

<ANSIValue>      Variable to store the returned value for the ANSI character

This macro can be used, for example, to retrieve the current OEM and ANSI values for the characters in positions 135 and 225.

Using the MERTECH.INI file

Add the new CustomOEMToANSI token to the MERTECH.INI file to define custom OEM to ANSI character translation using the INI file. The syntax for this token is:

CustomOEMToANSI <CharEntry>, <OEMValue>, <ANSIValue>




Character position in the code page (0 to 255)


Number to be assigned to the OEM character


Number to be assigned to the ANSI character

For example, the following MERTECH.INI entries show the three tokens required to customize the OEM to ANSI translation for characters 150, 151, and 152. For the MS SQL driver:

CustomOEMToANSI 150, 85, 117
CustomOEMToANSI 151, 115, 118
CustomOEMToANSI 152, 125, 113

And for the PostgreSQL driver:

CustomOEMToANSI 150, 85, 117
CustomOEMToANSI 151, 115, 118
CustomOEMToANSI 152, 125, 113

To include custom OEM to ANSI character translation in the migration process, you must set up the MERTECH.INI file before using the Flex2SQL database migration utility.


High Availability

A system that is continuously operational over a long period of time is referred to as a high availability system. High availability can be achieved through hardware and software monitoring and hardware redundancy. If a hardware or software fault is detected, an application can be automatically restarted on the backup system without any administrative intervention, a process known as failover.

The diagram below shows a 2 node high availability cluster. One server is active and the second server is a standby resource. The servers exchange heartbeat messages over redundant paths to monitor each other’s status.

Data storage is replicated using software or through disk mirroring (RAID1) onto separate physical hard disks in real time to ensure continuous availability. A logical address is assigned to the services performed by the cluster. If the active node goes down, the services and the network address used to access the server are brought back up on the passive node and the user is transparently redirected to the passive node.


Note: Many other high availability configurations (e.g., additional nodes, both servers active) are possible.

SQL Server Mirroring

Microsoft introduced database mirroring in SQL Server 2005. All SQL Server databases record data changes in a transaction log before any changes are made to the actual database. In database mirroring, the principal server writes the transaction log buffer to disk and simultaneously sends it to the mirror instance. The transaction log records are then replayed on the mirror to keep the two database instances in sync.

SQL Server Mirroring has three operational modes:


High Availability mode provides maximum database availability with full transactional safety and automatic failover to the mirror database if the principal database fails. A transaction is not completed on the principal server until the principal server receives an acknowledgement from the mirror server that the transaction log record has been recorded. Once this occurs, the client application gets confirmation that the transaction was committed. A witness server is used to monitor the status of the principal and mirror servers. If the principal server becomes unavailable, the witness server can initiate automatic failover.

High Protection mode also provides full transactional safety, but since there is no witness server only manual failover is possible.

High Performance mode uses asynchronous transfer of log records so transactional safety is OFF. The principal server does not wait for an acknowledgement from the mirror that all transaction log records have been recorded on the mirror. The mirror does its best to keep up with the principal, but it is not guaranteed that the two databases are in sync at any point in time. The only type of failover allowed is manual failover.

The following steps are required to set up SQL Server 2005 mirroring. For a full description, refer to articles on Database Mirroring in SQL Server 2005.

  1. Activate transaction log in the principal and mirror server.

  2. Enable FULL recovery mode in the principal server.

  3. Configure endpoints (port listeners) where the principal and mirror are going to establish communication.

  4. Make a backup of the principal database and restore it to the mirror server using the WITH NORECOVERY option of the RESTORE command.

  5. Set up the mirroring partnerships.
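Steps 4 and 5 can be sketched in T-SQL as follows (the database name, file paths, and endpoint address are illustrative; see the SQL Server documentation for the full procedure):

```
-- On the principal: back up the database and its transaction log
BACKUP DATABASE MyDb TO DISK = 'C:\Backup\MyDb.bak';
BACKUP LOG MyDb TO DISK = 'C:\Backup\MyDb_log.bak';

-- On the mirror: restore WITH NORECOVERY so mirroring can take over
RESTORE DATABASE MyDb FROM DISK = 'C:\Backup\MyDb.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'C:\Backup\MyDb_log.bak' WITH NORECOVERY;

-- Establish the mirroring partnership (run on each partner,
-- pointing at the other server's endpoint)
ALTER DATABASE MyDb SET PARTNER = 'TCP://mirror.example.com:5022';
```

The WITH NORECOVERY option leaves the mirror database in a restoring state so it can continue to apply the principal's log records.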

Flex2SQL for MS-SQL Mirroring Support

To add support for SQL Server Mirroring, Mertech added the SET_MIRROR_SERVER command to the Flex2SQL command set. To use the SQL Server Mirroring feature, developers add the SET_MIRROR_SERVER command to their login module. This identifies the mirrored server to the driver, which in turn negotiates communication with the server. If the principal server fails, the driver automatically connects to the mirror server and continues operation. No additional code is required in DataFlex to check for a lost connection.

Note: Any non-committed transactions need to be retried by the application. A DF application normally retries the transaction, but please consult the DF manual for details.


SET_MIRROR_SERVER to <Server name> <Main server>




String p1 p2

Load_Driver "SQL_DRV"

// Set the mirror server (the server names are illustrative)
SET_MIRROR_SERVER to "MACHINE\MIRROR" "MACHINE\MAIN"
If (Err) Showln "Error setting failover server"

// Log into the main server
Login "MACHINE\MAIN" "user" "password" "sql_drv"

cmdline p1

cmdline p2

chain (p1 + " " + p2)

Compile the login program and run it:

dfcomp loginchain.src

dfrun loginchain your_application_executable

To test the program, shutdown the MAIN server and start your application. The driver trace shows that your application logged into the MIRROR server.

Note: The GET_MIRROR_SERVER command is also available. Syntax: GET_MIRROR_SERVER to <VAR>.
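For example, GET_MIRROR_SERVER can be used to confirm the setting at runtime:

```
String sMirror

// Retrieve the currently configured mirror server
GET_MIRROR_SERVER to sMirror
Showln "Configured mirror server: " sMirror
```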

The Flex2SQL automatic login dialog also has an option to set the mirror server and database:


If you have a personalized login dialog, using the SET_MIRROR_SERVER command is the best option.

Note: The mirror server and mirror database can also be provided as settings in the Mertech INI file.

Note: Adjust the ConnectionTimeout setting to your environment for better reconnect reliability.

See Also

Connection Management


Connection Management

FLEX2SQL v10.x introduced an auto-reconnect feature. The driver code that handles the reconnect after a mirroring failover is the same code used to handle the auto-reconnect after other disconnects.

Previously, if a server timed out a client connection due to lack of activity, a DataFlex program encountered unexpected errors. With this new feature, the driver automatically reconnects to the server and continues from the same point where the disconnect occurred.

You can even put a machine to sleep, disconnect it from the network, then bring it back up, and it will not have missed a beat. Customers with Web applications, customers with unreliable connections (especially if they are connecting over the Internet), and customers who leave their applications idling overnight will find this feature useful.

To the end user, there is no visible impact of losing the connection to server in most instances. The only exception to this is in-progress transactions. Any non-committed transactions must be retried by the application.

Three settings in the MERTECH.INI file (AutoReconnect, AutoReconnectDialog, and ConnectionTimeout) affect the auto-reconnect feature.
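A hypothetical MERTECH.INI fragment is shown below. The setting names come from this guide, but the values are assumptions; consult the MERTECH.INI reference for the exact value syntax.

```
; Example values only
AutoReconnect yes
AutoReconnectDialog no
ConnectionTimeout 30
```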


Flex2SQL includes features that allow users of DAW (Data Access Worldwide) Connectivity Kits (CK) to switch to Mertech drivers. Why switch to Mertech drivers? Mertech drivers:

  • Overwhelmingly outperform DAW’s drivers in real world environments.

  • Have an intelligent data fetching mechanism that determines the optimal number of rows and columns to fetch, boosting performance while reducing network traffic.

  • Can handle recovery from connection issues.

  • Support MS SQL Server database mirroring.

  • Include optimized queries that eliminate driver cache files, which can result in stale structure caching.

  • Keep all opened file information in memory for faster switching between application views.

  • Provide the ability to store large amounts of data in LOB columns and to fetch it in pieces to bypass the 16K field limit in DataFlex.

  • Support SQL Server partitioning, which can speed up SQL Server performance when there is a large number of concurrent users.

Two options are available for switching to Mertech drivers:

  • Performing a complete migration, re-creating tables for Mertech drivers. This option is available for users currently running Pervasive, Oracle, MySQL, or PostgreSQL.

  • Simply regenerating the .INT files and replacing the DAW CK dll with Mertech’s database driver. This option is available for users who want to switch from DAW’s CK for MS SQL to Mertech’s MS SQL driver.

Performing a Complete Migration

The Flex2SQL GUI migration tool uses the DAW CK driver to read the current data files and migrates the data to the selected target SQL database. The key point to remember is that both the DAW CK driver and the Mertech drivers use INT files to store information. In order to do a safe migration, you have to ensure that your current INT files are properly backed up.

Important: Before starting the migration, make sure the .INT files currently in use with the DAW CK are properly backed up so they are not overwritten by the .INT files created by Flex2SQL. If the DAW CK .INT files are overwritten, you cannot revert to your old system.

The example below shows the steps required to migrate from an existing MS SQL database to a new MS SQL database using Mertech’s Flex2SQL driver. These same steps can be used to migrate from Pervasive.


Before starting the migration, copy the DAW CK dlls and license files for MS SQL or Pervasive into Mertech’s Flex2SQL bin directory. This ensures that Mertech’s Flex2SQL tool is able to load these drivers and read the source database correctly.

Note: Mertech distributes Flex2SQL with VDF 15.0. Some DAW CK licenses are specific to a VDF version. You need to ensure that you have a license file and version that works with VDF 15.0.

Migration Steps

  1. From the Flex2SQL GUI migration tool, log into your target database server using any of Mertech’s Flex2SQL drivers for that target database server.

  2. Open your DataFlex filelist (File > Open).

Your tables have an Mssqldrv: or DFBTRDRV: prefix since they were migrated with the DAW CK.

NonMertech Migration1.png
  3. Select the Migrate Non-DataFlex Files checkbox at the bottom of the Filelist dialog box.

  4. Checking this option enables the .INT File Directory text box. Browse to the location where your current .INT files reside.

  5. Double-click a table in the Filelist dialog box to verify that the setup and license files are correctly installed.

The table should open and you should be able to view the data. If there are any errors in loading the DAW CK or with licensing, the file will not open correctly and an error will be generated.

  6. Select all the tables you want to migrate.

  7. Click Convert Database.

  8. If you are migrating to SQL Server, a notification message displays recommending you use the Generate INT from DAT option. Click OK; you can ignore this message.

  9. The Convert DAT to Table dialog box displays. Modify the default settings if desired, and then click Convert All.

After the migration, the prefix in the Filelist dialog changes to a Mertech driver name.

  10. Double-click a migrated table to browse the data. You can also view the migration report (Reports > Migration Report) to verify that the migration was completed successfully.

Note: You can also use this option to migrate between SQL Servers using Mertech Drivers, for example, Oracle to PostgreSQL.

Switching MS SQL Drivers without Migration

You can switch the MS SQL database driver in two ways:

  1. By doing a complete migration as described in the previous section.

  2. By simply regenerating the .INT files and replacing the DAW CK dll with Mertech’s database driver. Mertech’s drivers will continue to work with DAW CK specific features such as computed columns and their table structures.

The second option is the simplest. You will see immediate performance gains (100-200% improvement) just by switching to the Mertech drivers. This option is good for situations where downtime must be avoided.

The first option is listed because you can see added performance gains (15-20%) if the tables are created using Mertech's Flex2SQL tool. The reason is that there are slight differences in the data types chosen and in how the tables are created. Remigration also ensures that the newer SQL Server 2008 date data types are used: DATE and DATETIME2 instead of DATETIME, which is the only date data type created by DAW's CK.

If you are using MS SQL and your tables have IMAGE or TEXT columns, it is recommended to change them to VARBINARY(MAX) and VARCHAR(MAX), respectively, since Microsoft has marked the original types as deprecated.
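For example, deprecated LOB columns can be converted with T-SQL (the table and column names here are illustrative):

```
-- Replace deprecated TEXT/IMAGE types with their (MAX) equivalents
ALTER TABLE Customer ALTER COLUMN Notes VARCHAR(MAX);
ALTER TABLE Customer ALTER COLUMN Photo VARBINARY(MAX);
```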

Important: Before you begin, back up the .INT files that were created by the DAW tools. Mertech's migration tool needs the DAW driver to read the database information and generate its own INT files. If the old files are not backed up or moved to a different location in the path, they are overwritten, causing you to lose your overlap and relationship information.


To use this feature, you need the Microsoft SQL Server Connectivity Kit from Data Access Corporation. Flex2SQL will generate new .INT files from the DAW .INT files, so you can use the data you previously migrated using DAW CK.

Migration Steps

  1. Perform steps 1 through 6 in the previous section to select all the tables you want to migrate.

  2. Select Tools > Generate .INT > Generate .INT from .DAT.

The Create Intermediate File dialog box displays.

Create Intermediate File.png

This dialog box is similar to the Convert DAT to Table dialog box except it only generates INT files, it does not migrate the tables. Options selected here are added to the INT files.

  3. Click Generate All.

Once the process is completed, the prefix in the Filelist dialog changes to a Mertech driver name.

Working in a Distributed Environment

The Mertech drivers work seamlessly in a distributed environment with multiple local or remote SQL Servers.

Consider the following scenarios:

  1. During the migration process, some files are left in DataFlex, while others are migrated to an SQL backend. This could be an interim step during the migration process to test the migration. Or, a decision may have been made to leave private tables in DataFlex on the local machine, while common tables are stored on a central SQL Server.

  2. In a second scenario, branch offices may have a local SQL Server, and also connect to a central master SQL Server, to access certain data that needs to be shared across branches.

The Mertech drivers support all of these scenarios.

Multiple database servers.jpg

Working with LocalDB

LocalDB is one of the new features available in Microsoft SQL Server Express 2012 Edition. LocalDB is a simplified version of SQL Server. LocalDB:

  • Has a smaller installation than SQL Express (approximately 30 MB)

  • Is targeted to developers and non-network single user applications

  • Is easy to install and deploy

  • Requires no management

  • Provides an alternative to DataFlex files

  • Is fully compatible with other SQL Server editions

LocalDB allows developers to specify a database file location. This means that developers can distribute a database as standard files by detaching the database from LocalDB or Express server, copying the database files to another computer, and then attaching to them using that computer’s LocalDB installation. This simplifies installations and allows for demo workspaces that use Mertech’s drivers to be distributed without a lot of groundwork to get them up and running.

Installing LocalDB

LocalDB is free to install and requires no licensing. LocalDB is available for download from Microsoft, along with the SQL Server Express edition installers.

When you run the LocalDB installer, a local instance is set up with the name "(localdb)\v11.0". One difference from the Express edition (and in fact a primary feature of LocalDB) is that the instance does not actually start a process or service on your PC, so a LocalDB instance uses no resources while it is sitting idle. Once you connect to the instance and attach or create a database, a process is started that lives until a few minutes after the last connection to it is closed.

Creating a Database

  1. Start a Windows Command Prompt.

  2. Launch the sqlcmd utility and enter an SQL statement to connect to the LocalDB server (default (localdb)\v11.0). A trusted connection is used in the example below.

  3. Enter an SQL statement to create a database (bar in the example below).

  4. Execute the SQL statements (go).

  5. Exit sqlcmd.
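The steps above might look like the following console session (a sketch assuming the default instance name; -E requests a trusted connection):

```
C:\> sqlcmd -E -S (localdb)\v11.0
1> CREATE DATABASE bar;
2> go
1> exit
```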


The database is created in the directory where the Command Prompt opens, your user profile folder.

Note: LocalDB also includes a command line tool, SqlLocalDB.exe, that enables users to create, delete, stop, and start a LocalDB instance.
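For example, the following SqlLocalDB.exe calls manage an instance from the command line (the instance name is illustrative):

```
C:\> SqlLocalDB.exe create MyInstance
C:\> SqlLocalDB.exe start MyInstance
C:\> SqlLocalDB.exe stop MyInstance
C:\> SqlLocalDB.exe delete MyInstance
```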

Note: An example using LocalDB ships with Flex2Crystal. The example database, MTSample, is installed in the Documents Library: Flex2Crystal > Example Application > Data.


The FLEX2SQL GUI provides full support for LocalDB. The attach command is added to the Select Database dialog box and the detach command is available in the Logout dialog box.

New Commands

Two new commands were added to MERTECH.INC to support LocalDB, SQLSERVER_ATTACH_DB and SQLSERVER_DETACH_DB.


Syntax: SQLSERVER_ATTACH_DB Database ON MDFFilename

When you attach a database, you pass the database name, along with the path to the master database file (MDF) that contains the database.




Database - The database name.

MDFFilename - An existing master database file. This is the name of the primary database file, and the name must include the full path. Any associated LDFs (log files) and NDFs (secondary data files) must be in the same folder.

DataFlex code example:


Login "(localdb)\v11.0" "" "" "SQL_DRV"
SQLSERVER_ATTACH_DB "MTSample" ON "C:\Mertech\Data\SampleDb.mdf"


If you call SQLSERVER_ATTACH_DB on a database that is already attached, no error is generated.

After you call SQLSERVER_ATTACH_DB, you should call SET_DATABASE_NAME if you want to make the attached database the currently selected database.
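A minimal sketch combining the two calls (the file path is illustrative; the Set_DataBase_Name usage follows the pattern shown later in this guide):

```
Login "(localdb)\v11.0" "" "" "SQL_DRV"
SQLSERVER_ATTACH_DB "MTSample" ON "C:\Mertech\Data\SampleDb.mdf"
// Make the attached database the currently selected database
Set_DataBase_Name to "MTSample"
```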


Detach closes ALL open files (even those not associated with this database or driver). A detach can only be performed if the database has no open connections.





Database - The database name.

Note: SQLSERVER_DETACH_DB may fail to execute if you are testing your application from DF Studio. This is because DF Studio normally has open connections to the database.

Windows Azure

Windows Azure is Microsoft's application platform that gives you the option to deploy in the cloud. Version 12.1 provides transparent support for running against Microsoft Azure SQL Database. Except for changing the connection string, no code changes are required on the Flex2SQL side.

Note: Contact Mertech sales or technical support regarding licensing.

BLOBs (binary large objects) are very large, variable length, binary or character data objects. BLOBs are typically documents (e.g., .pdf, .doc) or pictures (e.g., .jpeg, .gif, .bmp).

Note: Related terms, LOB (large object) and CLOB (character large object in Oracle), are used to refer to large objects consisting entirely of text data.

When a DF text field is too large for the native type, the Mertech drivers automatically use the BLOB data type during the migration. For example, in MS SQL Server the driver automatically switches from varchar(<string length>) to varchar(max) when database limitations are hit.

As long as the size of the object fits in a regular DataFlex string, the object is bound as any regular TEXT column. The Mertech drivers transparently handle this, defaulting to non-BLOB data types if possible.

Note: Mertech suggests using non-BLOB data types, whenever possible, if designing database tables outside of Mertech’s tools. For example, when using available SQL modeling tools to manage your database structure.

BLOBs can either be stored inside the database or in the file system. Storing BLOB data in the database offers a number of advantages:

  • Security measures used to control access to the database can also be used to control access to BLOBs.

  • Backup routines used to backup the database can be used to capture BLOB data as well, simplifying maintenance.

  • Transaction control, if required, is built into the database.

  • Search operations can be performed against formatted text-based data contained within BLOB fields—for example, Microsoft Word or Microsoft Excel documents.

The file system may be a better storage choice for BLOBs when:

  • The object requires significant overhead to process, e.g., streaming video.

  • The cost of storage space is a consideration. Storage on disks used in Web server farms is typically less expensive than on database SANs.

  • The object will be modified frequently. The file system may handle fragmentation better than the database server.

For all databases, there is a slight performance penalty fetching BLOB data, because BLOB data is not stored in the same internal page as the rest of the record.

MS SQL Server

There are special considerations when using MS SQL Server. If you use embedded SQL and access LOB columns, you have to specify a server-side cursor in your call to SQL_PREPARE_STMT. MS SQL Server enforces cursor limitations on LOB columns and transactions.


By default, the max_allowed_packet attribute is set to 4 MB for MySQL 5.7.


To use the improved LOB commands with MySQL 5.7, you must edit the my.ini file and set this attribute to at least 32 MB.
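For example, in the [mysqld] section of my.ini:

```ini
# Raise the packet limit so large LOB chunks fit in a single packet
[mysqld]
max_allowed_packet=32M
```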


Note: The MySQL Configuration Wizard places the my.ini file in the MySQL server installation directory.


The BLOB implementation in PostgreSQL uses the Large Object feature. When a BLOB is created, an OID (object identifier) is saved in a column in your table and the BLOB itself is stored separately. Physically, an OID is an unsigned 32-bit integer that references a binary file object.

Note: MERTECH.INC includes a definition for data type OID: Define ePgSQL_OID For 26

The contents referenced by an OID cannot be read using any SQL functions or directives. To read the contents of the BLOB, you have to read the OID from a database record, open the BLOB file, do the reading, and then close it.

The OID field has to be nullable. A NULL value means an empty BLOB. If you write something into that BLOB using LOB_WRITE or LOB_APPEND, the driver automatically creates a new LOB object and writes its OID into the corresponding database field.

There are several significant ways in which the PostgreSQL BLOB implementation differs from all other drivers:

  • BLOB columns cannot be converted from regular text columns like VARCHAR etc.

  • You cannot read the data stored within BLOBs using regular SAVE_RECORD / UPDATE_RECORD calls. You can only manipulate raw OIDs themselves.

  • All BLOB-related calls must happen within a transaction, even if no modifications are performed. In each BLOB API call the driver checks whether a transaction is in effect; if not, it starts one and commits it after the call.

PostgreSQL Example

Below is an example of turning a PostgreSQL TEXT column into an OID column.

Note: To turn a PostgreSQL TEXT column into an OID column you must delete and then recreate the column.



Move CUSTOMER.File_Number to iFile
Structure_Start iFile "MDSPGSQL"
Move 13 to iField // CUSTOMER.COMMENTS, note this is the last column in the SQL table
Delete_Field iFile iField
Create_Field iFile At iField
Set_Attribute DF_FIELD_NAME of iFile iField to "COMMENTS"
Set_Attribute DF_FIELD_TYPE of iFile iField to DF_ASCII
Set_Attribute DF_FIELD_NATIVE_TYPE of iFile iField to ePgSQL_OID
Set_Attribute DF_FIELD_LENGTH of iFile iField to 8016
Set_Attribute DF_FIELD_NULL of iFile iField to 1
Structure_End iFile DF_STRUCTEND_OPT_NONE "." 0


Improved LOB Access

In version 14, Mertech added new LOB commands that are independent of the field buffer and allow reading and writing a LOB field in chunks of up to 64 MB (the previous maximum was 16 KB). The new access method is also much faster (about 20x) than the old method. The LOB commands introduced in version 14.0 are:




Save data to a LOB field. If the field is larger than what is allowed by DataFlex, SQL_ADD_LOB_CHUNK is used in a loop to add all the data to the field. The save is performed separately.


Add additional chunks to the buffer before saving the record. The save is performed separately.


Retrieve a chunk of data from a specified LOB field.


Retrieve a chunk of data from a LOB field, specifying the offset and length of the chunk to be retrieved.


Get the length of the specified LOB field.


Set a LOB column to null.

Example reading a LOB field:

Procedure LoadImage
    Integer iImageLen
    Integer iOldVal iChunk iSize iSoFar iChunkSize
    Move 2097152 to iChunkSize
    SQL_GET_LOB_LENGTH salesp.picture to iImageLen
    If (iImageLen > 0) Begin
        Get_Argument_Size to iOldVal
        Set_Argument_Size iImageLen
        String sChunk sBitmap
        UChar[] ucFileAsArray
        Address pStr
        // The Repeat loop below demonstrates how chunking works. This loop could
        // have been replaced with a single call to SQL_GET_LOB salesp.picture sBitmap
        Repeat
            SQL_GET_LOB_CHUNK salesp.picture to sChunk OFFSET (iChunkSize * iChunk) Length iChunkSize
            Move (sBitmap + sChunk) to sBitmap
            Increment iChunk
            Add iChunkSize to iSoFar
        Until (iSoFar > iImageLen)
        // Code follows to convert the image from sBitmap to the type expected by the image control

Note: MERTECH.INC contains a full description and examples of each command. You can also find additional examples and information in the YouTube video Mertech Campfire – Introduction to Flex2SQL v14.

Note: The old LOB commands are still supported.

Mertech supports two methods for calling stored procedures that are uniform across all drivers: CALL_STORED_PROCEDURE and SQL_SET_PROCEDURE_NAME.

Note: SQL_SET_PROCEDURE_NAME (and related commands) is now the advised method to use. All other driver-specific commands are deprecated starting with v12.


This simple one-line server call (CALL_STORED_PROCEDURE) has the procedure name and parameters all on the same line. This method has a limit of 255 characters.

For example



The new SQL_SET_PROCEDURE_NAME (and SQL_SET_FUNCTION_NAME) splits the procedure name and parameters across multiple lines, potentially increasing the number of parameters that can be passed. This interface requires that you define OUT, IN, and IN_OUT parameters in your code if you wish to retrieve their value after a stored procedure call.

Note: In prior versions, defining a parameter as OUT was optional and you could still retrieve the returned value. Now you MUST define them as OUT or IN_OUT if you want the value returned.

Note: This interface eliminates the ability to access non-result set return values using the SQL_FETCH_NEXT_ROW command. This command should now ONLY be used for access to result set based return values.

DataFlex stored procedure code example:

Number iParam

String outparam

Move 1234 to iParam

SQL_SET_PROCEDURE_NAME "testinout" numpar 2





Showln "param " sDrv " 2 = " outparam

DataFlex stored function code example:

String sPar1

Number iPar2

Date dPar3

Move '* NOT CHANGED *' to sPar1

Move 1234 to iPar2

Sysdate4 dPar3







Showln 'sPar1: "' sPar1 '"'

Syntax is shown below for stored function calls. The same syntax is used for stored procedure calls. Refer to MERTECH.INC for additional information.


The SQL_SET_FUNCTION_NAME command allows you to execute a previously created stored function. Call SQL_SET_FUNCTION_NAME, passing the function name and the number of parameters the function expects. Next, pass each parameter individually using SQL_SET_FUNCTION_PARAMETER. Once all parameters are passed, execute the function with SQL_FUNCTION_EXECUTE. Finally, retrieve the result through a call to SQL_GET_FUNCTION_RETURN.





Syntax: SQL_SET_FUNCTION_NAME sFunctionName NUMPAR iNum

sFunctionName - The name of the stored function to call.

NUMPAR - Required keyword.

iNum - The number of parameters to be passed to the function.

Support for SQL_SET_FUNCTION_NAME and its sub commands was added to the Mertech v11 MS SQL Server driver. Previously this feature was only supported by the Oracle and MySQL drivers.


The SQL_SET_FUNCTION_PARAMETER command is called after SQL_SET_FUNCTION_NAME. Use SQL_SET_FUNCTION_PARAMETER to pass parameters to the named function. Call SQL_SET_FUNCTION_PARAMETER once for each parameter, indicating the parameter number (starting with 1, from left to right in the parameter list) and the parameter type.

Syntax: SQL_SET_FUNCTION_PARAMETER iParam to sValue type




iParam - The parameter number (from 1 to iNum).

sValue - The parameter value.

type - The parameter type: IN (input), OUT (output), IN_OUT (input and output), or RSET (output result set).

Note: You MUST define a parameter as OUT, IN, IN_OUT or RSET (Oracle and PostgreSQL only) if you want the value returned.

Note: The RSET type was added in v14.1 for Oracle and PostgreSQL. Stored procedures in MS SQL and MySQL return result sets directly; Oracle and PostgreSQL do not, so an output parameter of type RSET must be passed instead. See Using REF CURSORS To Return Result Sets.


SQL_FUNCTION_EXECUTE executes the function named in the call to SQL_SET_FUNCTION_NAME. You can call finderror afterwards to determine if the function executed successfully.



Call SQL_GET_FUNCTION_RETURN after successful execution to get the function result into a DataFlex variable.





Variable for the returned value

Fetching a Result Set using a Stored Procedure Call

The example below shows how to retrieve a result set using a stored procedure call.

Note: Example applies to MS SQL and MySQL only, see Using REF CURSORS To Return Result Sets for examples using Oracle and PostgreSQL.

Move 'customer' to sTableName

Move 'dbo' to sOwner






SQL_FETCH_NEXT_ROW INTO sQualifier sTableOwner sTableName sColumnName iDataType sDataType iDataScale iDataLength

If (Found) Begin

Show 'sQualifier: ' sQualifier ' '
Show 'sColumnName: ' sColumnName ' '
Show 'iDataType: ' iDataType ' '
Show 'sDataType: ' sDataType ' '
Show 'iDataScale: ' iDataScale ' '
Show 'iDataLength: ' iDataLength ' '



Until (not(Found))

Fetching Multiple Result Sets using SQL_NEXT_RESULT_SET


You can enter multiple SELECT statements in a stored procedure to retrieve multiple result sets and then use SQL_NEXT_RESULT_SET in the client code to position the reader to these result sets.

SQL_NEXT_RESULT_SET iterates through the result sets in order when multiple result sets are returned. SQL_NEXT_RESULT_SET returns 0 (FALSE) if there are no more result sets and a nonzero value (TRUE) otherwise.

For example, you might create a stored procedure with three SELECT statements.

Note: Example applies to MS SQL and MySQL only, see Using REF CURSORS To Return Result Sets for examples using Oracle and PostgreSQL.





SELECT COUNT(*) from orderhea;


SELECT * from salesp order by ID;


Then use the following client code to process the data.

Procedure TestRSFromSP

Local Integer iRS iColumn iNumCols

Local String sColumnValue sColumnName

Move 1 to iRS




If (Found) Begin

Showln 'RS#' iRS

Sql_Get_Num_Cols to iNumCols


For iColumn from 1 to iNumCols

Sql_Fetch_Column iColumn INTO sColumnValue

Show sColumnValue ' - '




Until (not(Found))

Indicate Found True





If (Found) Begin

Move (iRS + 1) to iRS


Until (not(Found))


Showln "Total RS:" iRS


Support for SQL_NEXT_RESULT_SET was added to the Mertech v11 MS SQL Server driver and to the v12 PostgreSQL driver. Previously this feature was only supported by the MySQL driver. MERTECH.INC version 3.0.8 describes this new feature.

Syntax: SQL_NEXT_RESULT_SET Of <FileNum|FileName>

Argument (optional):

FileNum - File number.

FileName - File name.

Note: Always use a stored procedure if you plan to return multiple result sets. Results are unpredictable if multiple select statements are separated by semicolons in inline SQL code.

Note: The new unified stored procedure interface, implemented in v12.1, eliminated the ability to access non-result set return values using the SQL_FETCH_NEXT_ROW command. This command should now ONLY be used for access to result set based return values.

Using REF CURSORS To Return Result Sets

Stored procedures in MS SQL and MySQL return result sets directly. Oracle and PostgreSQL do not. So, to retrieve a result set from an Oracle or PostgreSQL stored procedure/function you have to pass the procedure an output parameter of type REFCURSOR.

To support this functionality, a new direction parameter, RSET, was added for SQL_SET_PROCEDURE_PARAMETER/SQL_SET_FUNCTION_PARAMETER. RSET is an OUT parameter of type REFCURSOR. The following rules apply:

  • If a stored procedure or function has one or more RSET parameters, then SQL_FETCH_NEXT_ROW (following SQL_PROCEDURE_EXECUTE) opens the cursor and treats it as a result set.

  • If a stored function returns a REFCURSOR, then it is immediately available for traversing using SQL_FETCH_NEXT_ROW (following SQL_PROCEDURE_EXECUTE).

  • If a stored procedure or function returns a SET OF REFCURSOR, then the next result set can be accessed using SQL_NEXT_RESULT_SET.

IMPORTANT: Flex2SQL for Oracle and Flex2SQL for PostgreSQL do not support mixing ESQL calls and Stored Procedures/Functions that return result sets using an RSET parameter that returns a cursor.

IMPORTANT: PostgreSQL closes all REFCURSORs at the end of the transaction, so in order for the above functionality to work, all commands beginning with SQL_FUNCTION_EXECUTE and ending with the final SQL_CANCEL_QUERY_STMT must be enclosed in an explicit transaction (Lock/Unlock).
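On the PostgreSQL side, a function returning a REFCURSOR might be declared as follows (a hedged sketch; the function, table, and column names are illustrative):

```sql
-- Sketch: a PL/pgSQL function that opens and returns a cursor
CREATE FUNCTION customers_by_state(p_state varchar) RETURNS refcursor AS $$
DECLARE
    rc refcursor;
BEGIN
    OPEN rc FOR SELECT customer_number, name, state
                  FROM customer
                 WHERE state = p_state
                 ORDER BY name;
    RETURN rc;
END;
$$ LANGUAGE plpgsql;
```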

The Oracle example below shows how to retrieve a result set using a stored procedure call.

// Create the stored procedure




SQL_APPEND_STMT to "OPEN RC for SELECT customer_number,name,state FROM customer WHERE state=sState Order By Name; "




// New RSET parameter used to return a REFCURSOR

Move 'CA' to sState






SQL_FETCH_NEXT_ROW into iCustomer sName sState
If (Found) Showln iCustomer ' - ' sName ' - ' sState
Until (not(Found))


The Oracle example below shows how to retrieve multiple result sets using SQL_NEXT_RESULT_SET.

// Create the stored procedure




SQL_APPEND_STMT to " OPEN RC01 for SELECT customer_number,name,state FROM customer WHERE state=sPar1 Order By Name; "
SQL_APPEND_STMT to " OPEN RC02 for SELECT customer_number,name,state FROM customer WHERE state=sPar2 Order By Name; "




Move 'CA' to sState01
Move 'FL' to sState02




// New RSET parameter used




Showln 'Records from State=' sState01


SQL_FETCH_NEXT_ROW into iCustomer sName sState
If (Found) Showln iCustomer ' - ' sName ' - ' sState
Until (not(Found))



Showln 'Records from State=' sState02


SQL_FETCH_NEXT_ROW into iCustomer sName sState
If (Found) Showln iCustomer ' - ' sName ' - ' sState
Until (not(Found))

// You can use SQL_NEXT_RESULT_SET in a loop to retrieve the result sets.

// NOT FOUND is returned when there are no more result sets to retrieve.


If (not(Found)) ….



CONVERT_DAT_FILE is a Mertech macro command that can be called from a DataFlex program to migrate a DataFlex file to the target backend, creating table and index structures and copying data using the default Flex2SQL settings.

You can use the CONVERT_DAT_FILE macro to write your own custom migration routines to handle migrations at remote sites. A code snippet is provided below.

// Assumes iFileNumber is the file number of the file you wish to migrate


Get_Attribute DF_FILE_ROOT_NAME of iFileNumber to sPhysicalFileName


Showln "File: " sPhysicalFileName


Showln "Converting DF File …"


Open iFileNumber Mode DF_EXCLUSIVE

// Add the driver prefix in the filelist

Set_Attribute DF_FILE_ROOT_NAME of iFileNumber to ((UpperCase(sDriver)) + ":" + (Trim(sPhysicalFileName)))

Note: CONVERT_DAT_FILE has a few additional options to control handling of inverse keys, changing the SQL table name, converting to RowId, and NOT NULL handling. Please consult MERTECH.INC for additional information.


COPY_DATA is a Mertech macro command that can be used to copy data from an SQL table to a DAT file and vice versa.

Flex2SQL also has options to copy data from a DAT file to an SQL table and vice versa, but the COPY_DATA command provides improved performance by eliminating the overhead required by the interaction between the Flex2SQL application and the DataFlex runtime.

COPY_DATA opens the source and destination files, copies the data, and then closes the files. When the source file is an SQL table, a command line argument can be used to restrict the table rows that are copied to the DAT file, in the form of an SQL WHERE clause.

The COPY_DATA macro is based on the regular DataFlex command Copy_DB. Since COPY_DATA requires no user interaction, trace messages are provided to monitor the copy process and to display any error messages.

Flex2SQL uses the callback object to send the following messages to the application during the COPY_DATA process:




Sent when Flex2SQL starts the process.


Sent when Flex2SQL is about to copy the source data.


Sent when an error occurs.


Sent each time an additional 10% of the data is copied. The message format is "<number of records saved>,<total number of records>"

In addition to monitoring the copy progress, the user can cancel the process through the callback object as well.

Syntax: Copy_Data {source-table} To {dest-table} [Callback {callback-object}] ;
[Constrain {SQL-expression}]




{source-table} - Full pathname of the source table/file to be copied.

{dest-table} - Full pathname of the destination table/file.

{callback-object} - A handle to an object that will receive the callback messages during the operation.

{SQL-expression} - An SQL expression that restricts the rows to be copied from an SQL table to a DAT file. This option only applies when {source-table} is an SQL table.

Example: Copy Data

  1. The following code copies data from a table named DIARY into a DAT file named DIARY. Only the rows containing JobCode = '02-78CC' are copied.

Copy_Data "sql_drv:diary" to "diary" Callback 0 Constrain "JobCode = '02-78CC'"

  2. The following code copies data from a table named CREW into a DAT file named CREW2. Only the rows containing JobCode = '02-78CC' are copied. This code has a callback object that interacts with the driver during the migration.

Integer giCallBackObj

Object CallBackObj is a Array

Move Self to giCallBackObj

Function Callback String sText Integer iLogicalID Returns Integer

If (iLogicalID = DF_MESSAGE_ERROR) Begin

<Handle Error Message>
// Tell operation to Stop

// Function_Return DFTRUE



<Handle Data Migration Progress>
// Tell operation to Continue

// Function_Return DFFALSE


Function_Return DFFALSE



Copy_Data "sql_drv:crew" to "crew2" Callback (CallBackObj(Self)) Constrain "JobCode = '02-78CC'"

  3. The following code copies all the data from a DAT file named TIMECARD to an SQL table named TIMECARD.

Copy_Data "timecard" to "sql_drv:timecard" Callback 0

  4. The following code obtains the SQL server, user, password, driver, database, and table from the command line to copy records from a DAT file to an SQL table.

Note: This example could be used to append rows from multiple DAT files to the SQL table.


String sMainServer sMainUser sMainPasswd sMainDataBase sMainDriver sMainTable

cmdline sMainServer

cmdline sMainUser

cmdline sMainPasswd

cmdline sMainDataBase

cmdline sMainDriver

cmdline sMainTable

If (Err) Begin

Showln "Login Error"

Inkey Pageend



// Set the Database to Open and ESQL procedures

Set_DataBase_Name to sMainDataBase

SQL_Use_DataBase sMainDataBase


// Set the case for the objects created



Integer giCallBackObj

Integer iPercProcess

Move 0 to iPercProcess

Object CallBackObj is a Array

Move Current_Object to giCallBackObj

Function Callback String sText Integer iLogicalID Returns Integer

Local Integer iWorking ret# herr# liPos liLen liCount liTotal liPercent

Local String lsValue


Showln 'Starting Converting …'


Else If (iLogicalID = DF_MESSAGE_HEADING_1) Begin

Showln 'Starting Copying Records …'


Else If (iLogicalID = DF_MESSAGE_HEADING_2) Begin

Showln 'Copying Records Finished …'


Else If (iLogicalID = DF_MESSAGE_ERROR) Begin

Showln 'Converting Error …'



Move (iPercProcess + 10) to iPercProcess

Showln 'Converting Status … - ' iPercProcess '%'


Function_Return DFFalse



Procedure Test

Integer iRecords

String sDriverRevision

GET_DRIVER_REVISION to sDriverRevision

Showln "Driver Revision: " sDriverRevision


Open (sMainTable + ".dat") as customer

// Retrieve the current number of records in the SQL Table

Get NumRecords sMainTable to iRecords

Showln 'Current Number of Records: ' iRecords


Copy_Data (sMainTable + ".dat") to (sMainDriver + ":" + sMainTable) callback giCallBackObj

// Retrieve the actual number of records in the SQL Table

// after the Copy_Data process

Get NumRecords sMainTable to iRecords


Showln 'Number of Records after the process: ' iRecords


Function NumRecords String sTableName Returns Integer

Integer iSQLRecords

SQL_Set_STMT to ('select count(*) from ' + sTableName)



SQL_Fetch_Next_Row into iSQLRecords

Function_Return iSQLRecords


Send Test


Showln "ok…"

Inkey Pageend

CL2SQL - Command Line Tool for Unattended Migration

Mertech provides a command line tool to allow migration of data using Flex2SQL features without the GUI interface. The tool, called CL2SQL, is located in the Mertech\bin directory, and supports all the options of Flex2SQL through command line parameters.

CL2SQL can be used along with CONVERT_DAT_FILE to develop an unattended migration process.

Double-click cl2sql.exe to see a full description of the application. The application starts and an information window displays the syntax, command line options, and examples. The command line options are also listed below:


This information screen.

-d <drv>,

-driver <drv>

Use driver <drv>, where <drv> is one of the supported drivers: ORA_DRV, SQL_DRV, MDSMYSQL, or MDSPGSQL.

-f <fs>,

-fileset <fs>

The fileset passed as a parameter in <fs> on which you want to operate. The fileset is created from inside the Convert dialog in Flex2SQL.

-file <n>

The filenumber passed as a parameter in <n> on which you want to operate. You can only use this option with a fileset. This allows you to override the set of files on which you want to operate with another file.

-prefix <ABC>,

-postfix <ABC>,

The prefix <ABC> and postfix <ABC> options override the table name prefix or postfix defined in the fileset. You can only use these options along with a fileset.

-s <server>,

-server <server>

Database server <server> to connect to, where <server> can be either the server name or IP address.

-database <db>

If you want to use a database other than the one specified in the configuration file, you can override the setting with the value in <db>.

-u <user>, -user <user>

User ID to log in with. If you do not use this parameter, the driver connects using a trusted connection.

-p <pass>,

-password <pass>

Password to use for logging in.

-port <n>

Use an alternative port to connect to on the database server.

-c <cfg>,

-config <cfg>

Use the Flex2SQL configuration provided in <cfg> file



Automatically log in using the last used/saved parameters. This is the default behavior if no login details are passed.

-r <rpt>,

-report <rpt>

Write a migration report to file <rpt>.



Erase migration report file <rpt> before the new migration report is written.

Working with Computed Columns

A computed column is a column that is calculated from an expression that uses other columns in the same table. For example, a computed column could contain the total number of outstanding orders. Computed columns can be virtual (they are computed when referenced) or persisted (they are computed when the record is updated and stored as part of the record).
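For instance, a persisted computed column can be declared directly in T-SQL (a sketch; table and column names are illustrative):

```sql
-- line_total is recomputed when qty or price changes and stored with the row
ALTER TABLE orderdtl
    ADD line_total AS (qty * price) PERSISTED;
```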

Computed columns can be created using standard SQL tools. Then, you can run the Flex2SQL Generate .INT File from Table/View/Synonym feature to generate .INT and .FD files from the modified tables and bring your changes back into DataFlex.

See Also

SQL Server as the Database Repository

It is common in the DataFlex and Pervasive environments to create files with the same name residing in different directories. For example, the C:\data directory might contain subdirectories for the ACME company and the Widget company. Each company subdirectory might contain a list of sales people (salesp.dat).

Consolidating Directories.jpg

The problem on the SQL side is that different tables with the same name will collide if stored in the same database. Mertech provides several ways to resolve this problem. You can:

  • Migrate to multiple SQL Servers or databases

  • Migrate to different schema names

  • Attach a custom prefix or postfix to the table names

While all of these options solve the problem, you end up with as many SQL tables as you had .DAT files. It would be nice to be able to consolidate the data into one table.

Create the Table and Copy the Data

The following steps can be taken in Flex2SQL to consolidate matching files from multiple data folders.

Note: This assumes that the table structures in the different folders are identical.

  1. Add a new field to each file that can be used to identify the data source. Since the table is still a DAT file at this point, use the DF Studio or Database Builder to add the field.

We would add COMPANY_ID to each .DAT file in the above example.

  2. Populate the new field to uniquely identify the source.

We would put the value "ACME" in the new COMPANY_ID field in the c:\data\ACME\salesp.dat file, and "Widget" in the COMPANY_ID field in the second folder.

Note: Please contact Mertech if you would like input into how to automate this process based on your particular scenario.

  3. Migrate the first file to your SQL Server (Filelist, select file, Convert Database).

This creates the SQL table and copies the data from the first file into the table.

  4. Generate an .INT file for a matching file in another folder (Filelist, select file, Tools > Generate .INT > Generate INT from .DAT).

This will create an .INT file in your data directory, and allow you to access the table that was migrated earlier in SQL.

  5. Copy data from the .DAT to the SQL table (right-click the file, Copy Data from .DAT to Table).

Repeat steps 4 and 5 for any additional matching files in other data folders.

You now have one SQL table that contains the data from all the copies of the given table, with a field that identifies the source for each row.













[Table: sample rows from the consolidated salesp table; addresses include 25 Main, 11 Olive, 56 East 12th, 988 Ocean, 1 W Spring, and 22 Lexington]



Update your Application

Now that the files have been merged into one SQL table, there needs to be a way to easily distinguish the data from each original file, with no or minimal changes to the DataFlex application.

A new field attribute, DF_FIELD_PROGRAMMATIC_DEFAULT, is defined to store a default value for a given field. The driver will use this default value during inserts if no explicit value is moved to the field buffer.

Locate the point in your application where you allow the user to select the working data folder. Once a new data folder has been selected, issue a call to:

Set_Attribute DF_FIELD_PROGRAMMATIC_DEFAULT of salesp.file_number 1 to "ACME"

Where field number 1 is COMPANY_ID, and the user-selected data folder is c:\data\ACME.

When DF_FIELD_PROGRAMMATIC_DEFAULT is set, if no value is moved to the field marked as the programmatic default field, the value given in the attribute is saved to the field when a new record is created. In our example, any new data inserted when working in the c:\data\ACME folder will have “ACME” stored in the COMPANY_ID field. This is without making any other changes to your code.

The SET_SQL_CONSTRAINT macro command has been extended to constrain finds to only return rows that match the programmatic default value.

Syntax: SET_SQL_CONSTRAINT OF <FileName|FileNumber> TO <Constraint_Expression|FROM_PROGRAMMATIC_DEFAULTS>




FileName: Name of the file to be limited by Constraint_Expression.

FileNumber: Number of the file to be limited by Constraint_Expression.

Constraint_Expression: An SQL expression that limits the data view.

FROM_PROGRAMMATIC_DEFAULTS: Use previously specified programmatic defaults for the constraint.

Example: To constrain finds to only return data belonging to “ACME”

Set_Attribute DF_FIELD_PROGRAMMATIC_DEFAULT of salesp.file_number 1 to "ACME"

SET_SQL_CONSTRAINT of salesp From_Programmatic_Defaults



Repeat

Find Gt salesp by Index.2

Showln salesp.last_name " - " salesp.state

Until (not(Found))

This ensures that data belonging to other companies is not retrieved. Again, this is without modifying any source code outside of where tables are opened and the working directory is set.

Note: The FIELD_PROGRAMMATIC_DEFAULT token can also be specified in the INT file.

Best Practices

Good database design is the key to obtaining accurate information and achieving optimum database performance. The design process includes the following steps.


Determine the Purpose of the Database

Identify the purpose of the database. Is the database a simple order entry system or might it be used for sales forecasting in the future?

Organize the Information

Determine the types of information you want to store in the database, such as customer information, product information, and purchase orders. Organize this information into subjects. Each subject becomes a table.

Assign Data Elements to Each Table

Decide which items should be stored in each table. Each item becomes a column. For example, the customer table might include the customer name and customer address.

Follow these rules when defining table columns.


Use a Consistent Naming Convention

Follow an agreed-upon naming convention for table and column names to improve readability.

Do Not Duplicate Data

Duplicate or redundant information not only wastes space, but also increases the likelihood of errors and inconsistencies. Store the customer name in the customer table; do not duplicate the customer name in the purchase orders table.

Do Not Store Calculated Data

Do not store running totals (for example number of orders placed by a customer) in a table column. This requires that all add, modify, and delete code update this field each time a change occurs. The total can instead come from doing a count of the number of orders this customer has placed. Embedded SQL or even computed columns can also be used.

Store Information in its Smallest Logical Parts

Break information into logical parts so you can retrieve individual facts later. Create a separate field for first name and last name, instead of one name field.

Identify Primary Keys

Each table should include a column or set of columns that uniquely identifies each row stored in the table. Choose a primary key whose value will not change frequently. Often, a unique number (e.g., order number or customer ID) is used as the primary key. The primary key is normally the clustered index, and controls the physical layout of data in the table. Finding on a primary key is fast since it involves only one lookup. For other indexes the index data is first searched to find the location of the data row, followed by the actual data row lookup.

Determine Relationships

Relationships enable you to prevent redundant data. Look at each table and decide how the data in one table is related to the data in other tables. Add fields to tables or create new tables to clarify the relationships. Use foreign keys to identify relationships between tables. A foreign key is a column (or set of columns) in one table that references the primary key of another table.
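As a sketch, the customer/order relationship could be declared with a foreign key in SQL. The constraint and column names below are illustrative only:

```sql
-- Each order row must reference an existing customer
ALTER TABLE ORDERS
    ADD CONSTRAINT FK_ORDERS_CUSTOMER
    FOREIGN KEY (CUSTOMER_ID) REFERENCES CUSTOMER (CUSTOMER_ID);
```

With this constraint in place, the server itself rejects orders that point at a nonexistent customer.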

There are various types of relationships that can exist between two tables:




One to (zero or) one

A row in table A can have no more than one matching row in table B, and vice versa. This type of relationship is not common but might be used, for example, to hide private information in a separate table. Example: Each customer has only one social security number and each social security number is assigned to only one customer.


One to (zero or) many

A row in table A can have many matching rows in table B, but a row in table B can have only one matching row in table A. This is the most common relationship. Example: a customer can have many orders, but each order is associated with only one customer.


Many to Many

A row in table A can have many matching rows in table B, and vice versa. Example: A customer can have multiple addresses and an address can be shared between multiple customers.
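A many-to-many relationship is typically modeled with a linking table holding a pair of foreign keys, sketched below with illustrative table and column names:

```sql
-- Linking table: one row per customer/address pairing
CREATE TABLE CUSTOMER_ADDRESS (
    CUSTOMER_ID INT NOT NULL REFERENCES CUSTOMER (CUSTOMER_ID),
    ADDRESS_ID  INT NOT NULL REFERENCES ADDRESS (ADDRESS_ID),
    PRIMARY KEY (CUSTOMER_ID, ADDRESS_ID)
);
```

The composite primary key prevents the same customer/address pair from being recorded twice.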


Note: These relationship types are referred to later in this document. Different notations are available to designate these relationships.

Normalize Your Database

Examine your database to verify it is organized into tables that minimize redundancy and produce the intended and unambiguous results. This process is referred to as normalization.

The example below examines a poorly organized database and shows two steps that can be taken to improve the database design.

  1. Eliminate repeating elements or groups of elements. Each row in the table below contains entries for two ordered items. What if a customer orders three items, or twenty items?

Purchase Orders Table

Cust ID | Last Name | First Name | Address | Order No | Item ID | Item Desc | Item ID | Item Desc

[Table: sample rows not fully recoverable; the Address column contained values such as 10 Main, 151 Oak, and 2 Brook]

Creating separate entries for each ordered item and segregating the customer details from the order information improves the database design. However, our new tables still violate normalization rules.

  2. Eliminate redundant information. The customer name is stored in both tables, and the item description appears next to each order in the orders table. Not only does this waste space, it also increases the likelihood of errors and inconsistencies.

The solution to this problem is to create tables that only contain information for one subject and then to establish a relationship between the tables using primary and foreign keys.


Populating the new tables with our order information gives the following results.

[Figure: Relational DB (the populated, normalized tables)]

Refine and Test the Design

Once you have finished your database design, create and populate your tables with data and test the results. Refine the design as necessary.

Note: Refer to available documentation on normalization for a full description of the normalization rules and procedures. Refer to documentation for your SQL backend database to see what database design best practices are recommended.


Overlap Fields

Full overlaps have been deprecated since VDF 11, with the introduction of multi-segment relationships. The Mertech drivers work seamlessly with full overlaps, and treat them on the backend as multi-segment relationships and indexes.

Partial overlaps are a construct that allows you to create an index using only parts of a field. For example, you can create an index using the first two characters of one field, and combine it with another field, so that you can do searches on the combined data.

Mertech drivers fully support partial overlaps, but their use is discouraged. Partial overlaps have no native counterpart in the SQL world, and thus any SQL modeling tools will not be able to properly take them into account. Additionally, if non-DataFlex tools are used to access the database they are not able to use the partial overlap fields either.

A partial overlap field is only created as a native column in an SQL table if the field is part of an index. If the partial overlap field is not an index segment, it is not created in the SQL Table, it is simply controlled at the DataFlex side using the overlap field information in the INT file.

Mertech provides triggers that can be enabled to ensure that overlaps are taken care of if data is updated using non-DataFlex tools.

If a partial overlap field is created on the SQL server, Flex2SQL automatically creates the following functions:



  • FINVERSE_KEY_DATE2 (for MS-SQL only)



The objective of a relationship is to isolate data so that changes made in one table are propagated through the rest of the database via the defined relationships. The examples above show the importance of relationships in good database design.

Flex2SQL provides a utility to synchronize relationships between DataFlex tables and the SQL Server backend (Maintenance > Synchronize Relationship). Enforcing relationships on the SQL Server helps ensure data consistency.

See Also

Relationship Synchronization

UUIDs as Dynamic Primary Keys

Most database tables have a primary key that uniquely identifies each row in the table. This key can be a natural part of the database that is entered along with other attributes (for example, SSN). This key can also be dynamically assigned (for example, a dynamically generated employee ID).

Mertech has provided UUID (Universally Unique Identifier), also referred to as GUID (Globally Unique Identifier), support for several versions; starting with Flex2SQL v12.1, you can use a UUID as your primary key. This feature gives developers the ability to create UUID columns with server- or client-side generation and to use them seamlessly with DF applications. The generated key is passed back to the DF application so you can use it in a re-read to retrieve the record.

A UUID is a 128-bit quantity generated by an algorithm that makes it unique for all practical purposes. A UUID is displayed as a sequence of hexadecimal digits, in groups separated by hyphens. For example: {C0E61A92-BE06-4557-AABA-36AB263E0457}.

Note: After generating 70 trillion UUIDs, the odds of a collision are 1 in 25 million.

A UUID has a data type uniqueidentifier in MS SQL, RAW(16) in Oracle, and uuid in PostgreSQL. DF treats GUIDs as strings, making searching easy.
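As a sketch, the equivalent UUID column declarations on each backend look like the following (table and column names are illustrative, and each backend's default-generation function is shown):

```sql
-- MS SQL Server
CREATE TABLE CUSTOMER (CUST_UUID uniqueidentifier DEFAULT NEWID());

-- Oracle
-- CREATE TABLE CUSTOMER (CUST_UUID RAW(16) DEFAULT SYS_GUID());

-- PostgreSQL (uuid-ossp module installed)
-- CREATE TABLE CUSTOMER (CUST_UUID uuid DEFAULT uuid_generate_v4());
```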

Note: PostgreSQL requires installation of the OSSP UUID library to generate UUIDs. When you launch the Flex2SQL Migration Utility, a message displays if the uuid-ossp module is missing. This module can be installed by running uuid-ossp.sql, which can be found in the (PostgreSQL installation directory)\share\contrib folder. Additionally, the DPK radio button is disabled in the Flex2SQL Create Table dialog box if the OSSP UUID library is not installed. A tooltip hint over the disabled radio button explains how to install the library.
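On recent PostgreSQL versions (9.1 and later), the same module can also be enabled with CREATE EXTENSION instead of running the script manually. A sketch:

```sql
-- Enable the uuid-ossp module, then verify UUID generation works
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v4();
```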

Advantages of Using UUIDs

There are several reasons you might want to use a UUID as a dynamic primary key:

  • Unlike sequences, there is no lock contention when generating UUIDs.

  • Unlike RowIDs, UUIDs are not proprietary.

  • UUIDs work on the Web.

  • UUIDs are easy to set up.

  • UUIDs make it easier to distribute a database across multiple servers.

  • When you use a UUID, you do not have to worry about collisions if you need to merge rows from multiple versions of the same table.

  • UUIDs can be generated anywhere without a round trip to the server.

  • Replication often requires a UUID column.

Disadvantages of Using UUIDs

  • UUIDs are generated by executing a function, which is slower than incrementing an in-memory sequence.

  • UUIDs are not user-friendly to read, making them harder to debug.

  • UUIDs can cause index fragmentation (see Clustered Indexes).

  • With a fully randomized UUID, you cannot use the UUID column to determine insert order.

  • If you create your own function to generate UUIDs, the results might not be truly random and duplication can occur. It is recommended that you use Windows API calls if you want to generate your own UUIDs.

Clustered Indexes

A clustered index determines the physical layout of records in a table. Records are stored on disk in sort-order by this index. Using a clustered index can increase speed of retrieval if data is accessed sequentially or when a range of items is selected.

Ideally, a clustered index is unique, never changes, and increases by the same amount each time (it is monotonic).

If your table has a column or columns that are frequently used by queries, these columns are candidates for the clustered index. A table can have only one clustered index. The primary key is normally the clustered index.
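In SQL Server, for example, the clustered index is typically created as part of the primary key constraint. The sketch below uses illustrative names:

```sql
-- CUSTOMER_ID is both the primary key and the clustered index,
-- so rows are physically stored on disk in CUSTOMER_ID order
CREATE TABLE CUSTOMER (
    CUSTOMER_ID INT         NOT NULL,
    NAME        VARCHAR(60) NOT NULL,
    CONSTRAINT PK_CUSTOMER PRIMARY KEY CLUSTERED (CUSTOMER_ID)
);
```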

In the example below, we use a sequentially assigned customer ID as our clustered index, and add sixteen customer records (IDs from 123 through 138). Each record is inserted at the end of the file, no rearranging of data is required for records to remain in customer ID order.

[Figure: Pages 1 through 4 filled sequentially with customer IDs 123 through 138]

Over time, as records are deleted (124 in the example below) and new records are added in the empty space (139 in the example below), some fragmentation occurs. But in general, the file remains unfragmented.

[Figure: Pages 1 through 4 after record 124 is deleted and record 139 is inserted in the freed space]
If you choose a randomly assigned ID as your clustered index, the record insertion point is also random. Once a page is full, and a record is added that needs to go on that page, the page is split to make room for the new record. Half the rows remain on the initial page and the rest are moved to the new page. The new row is then inserted into one of the two pages.

If we add twelve records with randomly assigned IDs (287, 356, 823, 765, 783, 472, 795, 870, 717, 830, 489, 800), we might end up with the following:

[Figure: Pages 1 through 5 after twelve random-ID inserts; pages are half full and out of physical order]
This example demonstrates two problems:

  • Pages 1, 2, and 3 are only half full. Internal fragmentation occurs when space is wasted on index pages.

  • The logical order for the pages is 1, 4, 2, 5, 3. External fragmentation occurs when the next logical page as determined by the index order is not the next physical page in the data file.

Fragmentation slows down database access.

Sequential UUIDs

We saw in the previous section that using a random UUID column as your clustered index can result in high index fragmentation. What are the other options? 1) If you have a good natural primary key, use it. A good primary key never changes and is ideally a single column, either an integer or a short fixed-width string. 2) If you do not already have a natural primary key, consider using sequential UUIDs as a surrogate key. Sequential UUIDs are universally unique and they are also incremental (although not monotonic), so you can use them as your clustered primary index key.

In either case, you should use your UUID column for relationships.

Note: Sequential UUIDs are currently best supported in MS SQL.
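In SQL Server, sequential UUIDs can be generated with NEWSEQUENTIALID(), which may only be used as a column default. A sketch with illustrative names:

```sql
-- Sequential UUID primary key: new values sort after existing ones,
-- so page splits at random insert points are largely avoided
CREATE TABLE CUSTOMER (
    CUST_UUID uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID(),
    NAME      VARCHAR(60),
    CONSTRAINT PK_CUSTOMER PRIMARY KEY CLUSTERED (CUST_UUID)
);
```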

Alternate Primary Keys

Starting with v12.1, Mertech drivers not only support server-side functions to generate UUIDs; you can also specify any back-end function to generate a key value, and the value will be passed back to your application.


Frequently Asked Questions

Q: Is the dynamic primary key feature supported on RECNUM tables?

A: RECNUM-based tables use RECNUM as the primary key, so this feature is not applicable to them.

Q: What is the advantage of using a UUID over a standard sequence based primary key?

A: A sequence-based primary key can easily collide with values from a primary key column in another table if the tables need to be merged. From a security standpoint, a UUID also has such a low collision rate that guessing a valid UUID is practically impossible.

Q: Which database servers support dynamic primary keys?

A: Dynamic primary keys are supported on MS SQL, Oracle, and PostgreSQL. MS SQL can only make use of dynamic primary keys on tables that do not use insert triggers, because insert triggers prevent the server from returning the newly created primary key. MySQL does not have the needed interface to support dynamic primary keys.

Q: Does this mean I cannot use UUIDs in MySQL?

A: You can use UUIDs in MySQL; they just cannot be generated on the server side. You would need to generate them in your code.

Hardware Considerations

Utilize RAID 1+0

RAID 1+0 offers the best balance of performance and data integrity.

Defragment the Server Drives

If you have never defragmented a drive, you will be amazed at the improvement in access speed.

Have at least 16GB of RAM

More memory means better performance. For optimal performance, the working set of frequently accessed data should fit in the server’s cache with room to spare.

Turn off the Mertech Driver Trace

When initially testing the application, you may have used the driver trace. Be sure to turn this off when you are finished; leaving it on slows down performance.

Place Data, Index, and Transaction Logs on Different Physical Hard Drives

Place data, index, and transaction logs on different physical hard drives on the server to balance drive access.

Pre-allocate Database or Tablespace

Pre-allocate database or tablespace if the back-end allows it. Leave room for growth and once again, defragment the drive. Both of these steps help ensure that the tables are not fragmented across the hard disk.

Note: It is important to understand that the disk space used by a database engine is larger than that of native DataFlex.

Set the Recovery Model to BULK LOGGED

Set the Recovery Model to BULK LOGGED for Microsoft SQL Server.
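For example (the database name below is illustrative):

```sql
-- Switch the recovery model; bulk operations are then minimally logged
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
```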

Optimize Transactions and Locking

Batch Transactions

Performance can be increased significantly by batching multiple DataFlex transactions into larger SQL transactions.

If you have a scenario where you repeatedly lock and unlock data in a loop, this generates a large number of small transactions on the SQL Server. Entering and leaving transactions is not a lightweight operation, since the SQL Server has to ensure data consistency (often all the way to ensuring that data has actually been physically written to the hard drives).

Wrapping the loop in BEGIN_TRANSACTION and END_TRANSACTION calls (for example every ten iterations of the loop) can have a very positive impact on performance of the process.

Of course, care should be taken not to make the transactions too big, since this can increase lock contention. The speed boost of the overall process should offset the additional locking that takes place. However, you should evaluate the impact of the modification on other users.


Name Files in REREAD

Use REREAD in your code to explicitly list the file(s) you are interested in. If no files are named as arguments to the REREAD command, all opened tables are locked and their record buffers are refreshed.

Reread FILE

Move FILE.Field to NEWFILE.Field

Saverecord NEWFILE


Use Data Dictionaries with Smart File Locking

DataFlex includes a data dictionary layer to store business rules and validations. Besides providing database validation, data dictionaries employ “smart file mode” technology.

Before a save or a delete, the data dictionary determines which files need to be locked. Locking only those files can dramatically improve database processing speed.

Note: This requires that SMART_FILEMODE_STATE be set to true (the default).

Move Filtering to the SQL Server

If you have code that loops through records in a table and you are only interested in a subset of the data, an SQL filter will allow you to move the filtering mechanism from the client to the server. This cuts down on the amount of data that needs to be transferred to the client.

For example, say that you are looping through your CUSTOMER table in alphabetical order to prepare a report, but you are only interested in customers with a balance of between $1000 and $2000. A standard way to accomplish this would be through something similar to:


Repeat

Find Gt Customer by Index.2 // By Customer.Name

If (Found) Begin

If ((Customer.Balance > 1000) and (Customer.Balance < 2000)) Begin

// Process this customer record

End

End

Until (not(Found))

In this case all customer records are transferred to the client, when in reality we are only interested in records with a balance value in a certain range. We can move the filtering to the SQL Server:


SET_SQL_CONSTRAINT of Customer.File_Number to "and BALANCE > 1000 and BALANCE < 2000"

Repeat

Find Gt Customer by Index.2 // By Customer.Name

If (Found) Begin

// Process this customer record

End

Until (not(Found))


You may also choose to create indexes by hand on your SQL Server to support your filters or other embedded SQL commands. The Mertech drivers will co-exist with and use indexes that are not explicitly defined in DataFlex.

Support for SQL filters at the DDO level

DataFlex 18.0 introduced support for SQL filters at the DDO level. Now, in addition to using custom commands in the driver, SQL filters can be used as a part of normal DataFlex programming. Flex2SQL v14.0 and higher fully supports this new DataFlex feature.

Identify Missing Indexes

A common cause of application performance problems is missing indexes. One reason for a missing index is that duplicate row errors occurred during the migration process. The Flex2SQL Utility Index Maintenance option can check whether the indexes for each file were created properly.

Defragment the Database

If a query is taking a lot longer to complete than before, even though nothing in it has changed, database fragmentation may be an issue. This is especially true if the SQL database is set to grow by a fixed number of megabytes instead of a percentage.

If finds are slow, but the indexes are reported to be fine by Flex2SQL, please consult the documentation for your SQL Server on how to rebuild your table and index spaces.

Recompute Index Statistics

Index selectivity is a number that measures the effectiveness of an index. It is calculated as:

Total number of distinct values ÷ Total number of records

The best possible index selectivity is when all records in a table have a different value for the columns in the index (this is typical for primary keys and unique constraints).
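Selectivity can be estimated directly in SQL, for instance (the table and column names are illustrative):

```sql
-- Selectivity of the STATE column: 1.0 means every row has a distinct value
SELECT COUNT(DISTINCT STATE) * 1.0 / COUNT(*) AS selectivity
FROM CUSTOMER;
```

A low result (for example, two distinct states across a million rows) marks the index as a candidate for removal.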

It is inefficient to maintain an index that has a low selectivity value. Most SQL databases include an optimizer that determines when to use an index and when it is faster to ignore the index and do a scan instead. Indexes with low selectivity values may be ignored even though they are included in a WHERE clause.

If large chunks of data have been inserted into your table, ask your SQL Server to recompute the index statistics and then modify or remove superfluous indexes.
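In SQL Server, for example, statistics can be recomputed per table or database-wide (the table name is illustrative):

```sql
-- Recompute statistics for one table...
UPDATE STATISTICS CUSTOMER;
-- ...or for all tables in the current database
EXEC sp_updatestats;
```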

Move Business Logic to the SQL Server

Significant performance improvement can be seen by rewriting some batch processes in SQL. This can be done either with embedded SQL statements or with a stored procedure or function executed directly on the server.

A simple example is a business process that computes the outstanding balance for all customers. In a record oriented scenario, all the records have to be brought over to the client in a loop that computes a running total.

This can instead be turned into a short snippet of SQL code that executes fully on the server, with minimal network traffic and interaction:
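The original snippet is not reproduced here; a sketch of such a server-side aggregation, with illustrative table and column names, might be:

```sql
-- Compute every customer's outstanding balance in one server-side pass
SELECT CUSTOMER_ID, SUM(AMOUNT - AMOUNT_PAID) AS OUTSTANDING
FROM ORDERS
GROUP BY CUSTOMER_ID;
```

Only the per-customer totals travel over the network, instead of every order row.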





Please note that not all business logic will see a performance improvement by moving to embedded SQL. Embedded SQL that simply returns a large record set will not show significant gains over fetching the same records through a regular FIND GT loop.

Identify those business processes where a large portion of the logic can be handled by the SQL Server, to keep network traffic down.

Consider Cache Performance

The Mertech driver monitors the database operations you perform in real time, and adjusts the number of records and columns it fetches and caches.

One scenario that can be detrimental to the cache is looping through records while doing individual lookups on the same table, because each lookup causes the cache to be flushed.


Repeat

Find gt customer by Index.1

Move Customer.Recnum to iSavedRecnum

// Find the record of interest;
// the previous cache from the FIND GT is discarded

Move something to Customer.Recnum

Find eq customer by Recnum

// Return to the previous position in the loop; no rows are cached

Move iSavedRecnum to Customer.Recnum

Find Eq Customer by Recnum

Until (not(Found))


Reseeding of the key buffers during a FIND GT leads to cache clearing.

The workaround in this case may be to perform the separate customer lookup through embedded SQL, or to move the whole process to embedded SQL if it cannot be performed separately from the loop.

Use Lazy Open Mode

Mertech’s new Lazy Open mode feature is designed to improve an application’s initialization performance by deferring table setup. Lazy Open mode delays the setup of table metadata (table, columns, indexes description) in memory until a field is accessed, even if an OPEN command is issued.

Lazy Open mode is particularly useful for applications that open a large number of files when the program is launched. For example, if a program opens 30 files but only 10 are accessed immediately, then the metadata for only those 10 tables is fetched from the server at application initialization. The remaining tables are not opened until an action outside of open is performed on them.

Lazy Open mode is disabled by default. Lazy Open mode can be enabled by either calling a new macro command or through the MERTECH.INI file.

Enabling Lazy Open Mode

Lazy Open mode can be enabled in two ways: a) by calling a macro command or b) by setting a flag in the MERTECH.INI file. Set Lazy Open mode in the .INI file only if you want to globally enable it for the entire run of the program. Mertech recommends that you evaluate performance for file opens before deciding on which approach is suitable for your application. It is important to note that Lazy Open mode is a global setting and once set remains in effect until the open mode setting is restored to its default value.

Using the Macro Command

Lazy Open mode can be enabled by calling the macro command SET_OPEN_MODE. The syntax for this macro is:






SET_OPEN_MODE TO <open mode>

open mode: Is one of DEFAULT (regular open mode) or LAZY_MODE.

For example, the following code enables Lazy Open mode:


SET_OPEN_MODE TO LAZY_MODE

Open “” as customer


Find GT customer by Index.1

During OPEN command execution, the database driver opens the .INT file to retrieve the basic table information (table name, owner, database, number of columns). The database request for the table metadata is delayed until an operation other than OPEN (such as a find, clear, update, delete etc.) is performed.

Since Lazy Open mode is a global setting, every table is opened using the new Lazy Open mode after a SET_OPEN_MODE TO LAZY_MODE command is issued. Return to default open mode behavior by calling SET_OPEN_MODE TO DEFAULT.

Using the MERTECH.INI file

Lazy Open mode can also be enabled by adding the following line to the MERTECH.INI file.

LazyOpenMode yes

These MERTECH.INI entries enable Lazy Open mode for SQL and PostgreSQL drivers:


LazyOpenMode yes


LazyOpenMode yes

The following troubleshooting tips, new features, and other hints are discussed in detail on the Flex2SQL Technical Blog.

Server Side License Control View for the Oracle Driver

After downloading and installing the Flex2SQL Migration tool, you need to create the Server Side License Control View at your Oracle server before performing a migration.

What’s New in Version 17.0 Classic Edition

For a complete description of new features available in v17 Classic Edition, refer to the Release Notes, available under Windows All Programs > Mertech Flex2SQL v17 Classic > Release Notes.

For more detailed information on new features, refer to the Mertech channel on YouTube:


If you would like to know more about Mertech’s products, please visit our web site or contact us at:

Corporate Head Office

Mertech Data Systems, Inc.
18503 Pines Blvd. Suite 312
Pembroke Pines, FL 33029
Tel: +1 (954) 585-9016
Fax: +1 (866) 228-1213

California Office

Mertech Data Systems, Inc.
7621 N. Del Mar Ave. Suite 101
Fresno, CA 93710