Before You Begin

Before you begin the migration process, make sure that your database client is able to connect to the database server. If you are going to perform the migration on the same machine where the database server is installed, then all the pieces you need to establish the connection are already in place. However, if you are connecting to a database server from a client machine, then you must make sure that the client libraries are installed and the client is configured to connect to the server. For example, with Microsoft SQL Server, you use the Client Network Utility to set up a server alias and the connection protocol. With Oracle, you use the SQL*Net Configuration Assistant. See the documentation for those products for details.

After your client and server are set up properly and are able to communicate, then you can proceed with the migration.

Prerequisites

IMPORTANT: Be sure the appropriate 32-bit client is installed and that the client and server version numbers match and are supported by BTR2SQL. You can develop 64-bit applications; 64-bit libraries are available from Mertech. However, the Migration Utility requires the 32-bit client. If only a 64-bit client is installed, the Migration Utility may close unexpectedly when you attempt to log in.

Btrieve 6.15 or Pervasive.SQL 7.0 (or greater)

BTR2SQL contains database driver DLLs that replace your Btrieve 6.15 engine (wbtrv32.dll) or Pervasive.SQL 7.0 (or greater) engine (w3btrv7.dll). The replacement drivers allow your existing application to access the newly migrated SQL backend.

Btrieve data files with schema defined in Pervasive.SQL DDFs

The BTR2SQL GUI Migration Utility requires a Pervasive.SQL 7.0 (or greater) engine to read Data Definition Files (DDFs) during data migration. DDFs define the layout, or schema, of the tables in the Pervasive.SQL database. Without the DDFs, the data cannot be efficiently migrated to SQL.

If you are currently running Btrieve 6.15, you can obtain a later version of Pervasive.SQL and the DDFCONV tool from Pervasive (Actian). V2 Metadata (introduced in Pervasive.SQL 10) is not supported.

If your application is written in uniPaaS (now called Magic xpa Application Platform), Mertech provides MigrateUniPaasData, a command-line migration tool that works with Magic uniPaaS applications whose source files are stored in XML format. Contact Mertech for additional information.

If DDFs are not included with your database, see the Pervasive documentation and the many other resources and tools available online for information on building them.

Microsoft SQL, Oracle, PostgreSQL

MS SQL Server, Oracle, and PostgreSQL are the three SQL backends currently supported by the ISDBC drivers for Btrieve. One of these backends must be installed and running, and the client software must be configured on the computer where the migration and the application are run. Refer to your database server documentation for details.

MS SQL Server

  • The db_owner role should be set for the user ID used for the migration process (see the sketch after this list).

  • For the fullest support of data types, the SQL Server 2012 Native Client should be used. Alternatively, the 2008 or 2005 Native Client can be used. (Download the Native Client from the Feature Pack available here: http://msdn.microsoft.com/en-us/data/aa937733.aspx)

  • An MSSQL client is included with the BTR2SQL installation.
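
A minimal T-SQL sketch of granting the role; the database MigratedDb and user MigrationUser are hypothetical stand-ins for your own names:

USE MigratedDb;
EXEC sp_addrolemember 'db_owner', 'MigrationUser';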

Oracle

  • Using Oracle’s Security Manager, add the Select Any Table privilege to the user ID used for the migration process (an equivalent SQL statement is sketched after this list).

  • Requires OCI.dll, which may have other dependencies. All of these should be included with the client software.

  • Missing Oracle dependencies may be resolved by installing the Visual C++ 2010 Redistributable package. (https://www.microsoft.com/en-US/download/details.aspx?id=8328)
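
If you prefer SQL*Plus over Security Manager, a GRANT like the following has the same effect; migration_user is a hypothetical account name:

GRANT SELECT ANY TABLE TO migration_user;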

PostgreSQL

  • The server must be version 9.1 or above.

  • PostgreSQL does not support storing zero (null) bytes in a string. If your data needs to include a zero byte, you must set the Binary flag on the field in the DDFs (bit 12 / 0x1000 in the field flags).

  • The PostgreSQL citext module provides the case-insensitive character string type CITEXT, which allows for case-insensitive fields without the need for inverse key columns (see the sketch after this list).

  • DLLs are installed in <Program Files>\Mertech Data Systems\DB Drivers\Btrieve\clients\pgsql and can be copied to the <windir>\System32 directory. The version of the installed client must match the version of the server.
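
A minimal sketch of enabling citext for a case-insensitive column; the customers table and its cust_name column are hypothetical:

CREATE EXTENSION IF NOT EXISTS citext;

CREATE TABLE customers (
    cust_name CITEXT NOT NULL
);

-- Matches 'Acme', 'ACME', 'acme', and so on, with no inverse key column:
SELECT * FROM customers WHERE cust_name = 'acme';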

.NET Framework 3.5

The version 5 migration tool was rewritten to use the .NET Framework, resulting in an improved interface. It requires .NET Framework 3.5. The installer automatically installs .NET 3.5 if it is not already present, except on Windows 8.

NOTE: Windows 8 includes .NET 4, but not the required 3.5 version. When you run an application that requires an older .NET version, Windows 8 displays a dialog box asking you to download it.

Database Limitations

In some cases, the SQL servers have different limits or data type ranges than those supported by Btrieve. Most applications are not affected by these differences, but it is good to be aware of them.

The largest difference is in record size. Btrieve supports up to about 64K in the fixed portion of the record, while most SQL servers max out around 8K. The driver tries to compress records larger than 8K by converting large text fields to a blob form. For instance, on MSSQL it uses varchar(max) instead of varchar; a varchar(max) value is stored in a different manner and takes up very little space in the original record.
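
As an illustration on MSSQL, the two hypothetical column definitions below hold the same text but are stored differently; large varchar(max) values can be moved out of the record, leaving only a small pointer behind:

-- Counts fully against the roughly 8K in-row record limit:
CREATE TABLE notes_inrow ( body VARCHAR(7500) );

-- Values too large for the record can be stored out of row:
CREATE TABLE notes_blob ( body VARCHAR(MAX) );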

Microsoft SQL Server

Max 1024 columns

Max 8060 bytes per record

MSSQL 2005 limits the lower bound of dates to 1753-01-01.

Oracle

Max 1000 columns

PostgreSQL

Max 250 to 1600 columns, depending on column types

DDFs

If you are a Btrieve user, then you are probably aware of the challenges you face when integrating older and newer applications that do not have Data Definition Files, the infamous DDFs. These files are essential: they describe the entire layout of the data, or schema, within the Btrieve files. Building DDFs manually, however, can be frustrating and intensely time-consuming. Sifting through C header files for record definitions, deciding which data type is required for each field in the database, and a myriad of other details consume hours of valuable time, and the end result may still be plagued with inconspicuous errors.

Fortunately, Pervasive Software provides the DDF Builder tool with Pervasive.SQL, and many other tools of this type are available on the internet to help you with this daunting task. However, even though these tools can be remarkably helpful, identifying the definitions for a 300-table database is still prone to errors, and an initial visual inspection of the data in the Control Center may be misleading. It is easy to miss a field, a misaligned offset, or a wrong data type, such as zstring instead of string.

NOTE: Mertech also provides MigrateUniPaasData, a command-line migration tool that works with Magic uniPaaS applications where the source code is stored in XML format.

Handling variant records

Variant records live up to their name: their layout varies, depending on some known flag or tag. In C, this is known as a union, and typically a value in the first part of the record dictates the layout of the rest of the record. Before the advent of SQL and relational databases, this kind of storage was popular, and Btrieve is well known for supporting variant records.

Let’s examine a simple example in C:

struct Correspondence
{
  char name[30];
  BDATE date;
  int type;  // 1=email, 2=mail, 3=fax
  union {
    char emailAddr[150];
    AddrInfo addressInfo;  //struct size is 300
    char faxNum[15];
  };
}; //Correspondence

The emailAddr, addressInfo, and faxNum fields share the same space in the record and therefore cannot be defined as separate fields. When Pervasive introduced SQL access to the data, one way to expose this data was to define three different tables, all backed by the same MicroKernel file. All three tables start with name, date, and type. One table also includes emailAddr, the second includes addressInfo, and the third includes faxNum. When you use SQL to access the data, you must include an extra Where clause. For example, if you want all of the mail records, you might execute SELECT * FROM Coor_Mail WHERE type=2.

Currently, the Mertech drivers do not provide any special handling for variant records. If you tell the Migration Utility to migrate all three tables, you will end up with the same records in each of the three tables, and the records that do not belong will have garbage in the columns within the union portion. To allow this data to be used in a fully SQL backend, treat the union portion as binary data.

Think of the record as follows:

struct Correspondence
{
    char name[30];
    BDATE date;
    int type;  // 1=email, 2=mail, 3=fax
    unsigned char unionData[300];
}; //Correspondence

The union is replaced with an array of bytes. This does not need to be done in the C code; it is just another way to view the record. However, you should define the Pervasive.SQL table so that it appears as shown above. Note the size of the binary field: it must be at least as large as the largest member of the union.

Therefore, use syntax similar to this:

CREATE TABLE Correspondence IN DICTIONARY
USING 'correspondence.mkd' (
    name VARCHAR(29) not null,
    deliveryDate DATE not null,
    recType INTEGER not null,
    unionData BINARY(300) not null
)

The IN DICTIONARY clause

When you create DDFs for existing MicroKernel files, you must be careful not to overwrite the existing data file or its indexes. As shown in the variant record example above, the IN DICTIONARY clause can be used for this: it tells the Pervasive.SQL engine to update only the DDF files and not to create the associated data file.

The same logic applies to indexes. For example:

CREATE INDEX idxRecType IN DICTIONARY ON Correspondence (recType)

Nullable columns and the True-Null extra byte

Starting with Pervasive.SQL 2000, a new type of null indicator was introduced, called True Null. Refer to Key Attributes in the Pervasive.SQL Programmer’s Guide for details. In short, when a table is created with true nulls, each nullable column is preceded by an extra byte. This byte acts as a Boolean value and indicates whether the column is null or holds a real value.

For example, this table definition:

CREATE TABLE SomeNulls ( C1 integer, C2 integer )

requires a C struct as follows:

struct SomeNullsRec {
  bool nullC1; int C1;
  bool nullC2; int C2;
};

If you are creating table definitions for existing data files, it is unlikely that the records contain this extra Boolean byte. Therefore, make sure that you include the not null clause on each column, as demonstrated in the variant record example:

CREATE TABLE NoNulls
( C1 integer not null, C2 integer not null )

or:

SET TRUENULLCREATE=OFF;
CREATE TABLE LegacyNulls
( C1 integer, C2 integer )

Restart the Pervasive.SQL engine after modifying table definitions

Some versions of the SQL engine within PSQL may remember older definitions of tables after you have modified them, so the definitions may appear inaccurate when you visually examine the data in the Control Center. If you suspect that this is the case, we recommend restarting the PSQL engine after each major modification to the table structures.

Validate Data Definition Files using DDF Validator

Mertech’s DDF Validation Tool was developed to make sure that the DDFs are defined accurately. It searches for common mistakes, such as a total field length that does not match the Btrieve record size, or data files that have no table definition at all. It also checks for anomalies in the names of columns, indexes, and so on. After checking for table- and column-level issues, it reads every row from each table in the database and examines each field. When an error is detected, a message displays the error code and a suggestion to help you fix the problem.

Optimally, you should run the DDF Validation Tool before migrating your data to SQL. If you have already migrated the tables and problems are found in the DDFs, you can convert the tables again.

We recommend that you run a validation even if you have had DDFs for years. The validation does not take long to run, but you may want to limit the row scanning to save some time.

Any time that you make changes to the DDFs, we recommend that you run the DDF Validation Tool on the affected tables.

DDF Validator command-line syntax

Select Mertech’s ISDBC Drivers for Btrieve | DDF Validator from the Windows Start menu. Since this is a command-line utility, a Command Prompt is automatically opened.

Run DDFValidator by typing the following at the command prompt:

DDFValidator [-dblocation <DDF Folder>] [-log <log file name>]
[-table <table name>] [-max-warnings n] [-max-rows n]
[-ignore <error list>] [-?]

-dblocation: The full path to the Pervasive.SQL DDFs that define the database schema. If this is not provided, the current directory is assumed.

-log: Send the report (XML format) to the specified file. When -log is not included, the XML is sent to stdout. When -log is provided, the XML is sent to the file and only the error messages display so you can see that the tool is working and not hung.

-table: By default, all tables are tested. The -table parameter allows you to narrow the tests by specifying one or more tables. Use semicolons to separate the list, or include the -table parameter multiple times.

-max-warnings: Stops processing after n warnings have been reported.

-max-rows: Allows you to reduce testing time by limiting the number of rows checked. If your table has 10 million rows, it is likely that after approximately the first million, the rest of the rows will validate the same way.

-ignore: Error codes to ignore, separated by semicolons. If you receive many instances of the same warning and have verified that it is not actually a problem for your database, you can filter it out to avoid cluttering the output.

Parameters in brackets [] are optional. Surround names that contain spaces with quotes.

Example: Test the Pervasive.SQL Demodata database and send the report to an XML file:

DDFValidator -dblocation "c:\pvsw\demodata" -log DemoDataTest.xml
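
Example: Validate only two tables (the names are hypothetical) and limit scanning to the first 1000 rows of each, using the repeated -table form described above:

DDFValidator -dblocation "c:\pvsw\demodata" -table Billing -table Contacts -max-rows 1000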

DDF Validator errors

Database

  • (4000) More than one table defined for the same Btrieve file.

  • (4001) Btrieve file found in database folder that is not defined in the DDFs.

Table

  • (3000) Number of rows read did not match the number reported by B_STAT.

  • (3001) Table, Column, or Index name is invalid for SQL.

  • (3002) Obsolete.

  • (3003) Not all of the record is defined by fields.

  • (3004) Record length is too large or small.

  • (3005) ACS / ISR is not supported by the Mertech drivers at this time.

Columns

  • (2000) Column defined outside record length.

  • (2001) NumericSTS data type is not currently supported by the ISDBC drivers.

  • (2002) Length of field is too large or small for the data type.

  • (2003) Two or more fields are overlapping.

Indexes

  • (5000) A column appears more than once in an index definition.

  • (5001) A Key is defined in the Btrieve file but not in the DDFs.

  • (5002) An index is defined in the DDFs but not in the Btrieve file.

Rows

  • (1000) Field defined as ZString/String/LString but appears to have binary data.

  • (1001) Value not recognizable for the defined data type.

  • (1002) BIT columns should only be 0 or 1 (normally).

  • (1003) Field defined as ZString but did not have a null terminator.

  • (1004) Characters found beyond the terminator character of a string. These may be bytes from an adjacent field due to an alignment problem.

  • (1005) Value for Null-Indicator-Segment (NIS) should normally be either 0 or 1.

  • (1006) Length byte for LString is longer than defined field length.

  • (1007) Timestamp value looks suspicious. While almost any value is “valid”, a timestamp that is very old or far in the future may indicate a problem.

General errors

  • (9000) Unexpected Btrieve error.

  • (9001) This tool does not support … (a data type, etc.).

  • (9002) Btrieve returned an unexpected data length value.

Sample output from the DDF Validator tool

<report>
  <timestamp time="Mon Jan 02 13:27:17 2006" />
    <database location="D:\DDFVALIDATOR\TESTDATA\">
      <warning severity="1" code="4001">
        "Btrieve file 'NotATable.btr' found in database location
        but is not defined in the table list."
      <suggestion> "Should this file have a table definition?"
 </suggestion>
    </warning>
      <warning severity="2" code="4000">
        "Table T1 and T2 are both defined as Btrieve file
        D:\DDFVALIDATOR\TESTDATA\t1.mkd. This is illegal."
          <suggestion> "Remove one of the table
          definitions." </suggestion>
    </warning>
      <table name="T10" recordSize="42" flags="64" numRecords="1">
         <location path="D:\DDFVALIDATOR\TESTDATA\T10.mkd" />
         <column name="In va#l*id" type="4-TIME" offset="1"
 size="4" flags="4">
              <warning severity="1" code="3001" column="In va#l*id">
              "Column name is not a valid SQL identifier. This may
              cause problems with SQL queries later."
                <suggestion> "Names should only contain a space,
 underscore, or alpha-numeric characters" </suggestion>
           </warning>
         </column>
         <rows>
            <warning
 severity="3" code="1001" recordNum="1" column="In va#l\*id">
              "Error reading data - Bad time [145:01:145.1]."
              <suggestion> "Check the offset and length of the
 field." </suggestion>
            </warning>
            <warning
 severity="2" code="1000" recordNum="1" column="C1">
              "Field has binary data.  '1' found at offset 1 in
 the field."
              <suggestion>
                "This may mean the field should be defined as
 Binary instead of Text."
               </suggestion>
            </warning>
            <warning
 severity="3" code="1006" recordNum="1" column="C2">
                "Length byte indicates value (103) which is larger
 than defined field size of 42"
                <suggestion> "Either the data is corrupt or the
 field is misaligned." </suggestion>
            </warning>
         </rows>
         <results recordsTested="1" validRecords="0" />
      </table>
      <table name="T4" recordSize="0" flags="0" numRecords="0">
         <location path="D:\DDFVALIDATOR\TESTDATA\t4.mkd" />
         <column name="c1" type="1-INTEGER" offset="1" size="4"
 flags="4">
            <warning severity="3" code="2000" column="c1">
               "Column defined beyond Btrieve record size."
               <suggestion> "Columns must fit within defined record
 length." </suggestion>
            </warning>
         </column>
         <column name="field_0" type="0-STRING" offset="0"
 size="10" flags="0">
            <warning severity="3" code="2003" column="field_0">
              "Column overlaps with 'field_1'.  This is illegal
 for SQL access."
            </warning>
         </column>
         <warning severity="2" code="3003">
              "Some portions of the record are not defined by a
 column."
              <suggestion> "Ensure fields are defined for the entire
 Btrieve record." </suggestion>
         </warning>
         <results recordsTested="0" validRecords="0" />
      </table>
   </database>
   <results tablesTested="16" recordsTested="30" validRecords="13"
 />
</report>

Hardware Considerations

Utilize RAID 1+0

RAID 1+0 offers the best balance of performance and data integrity.

Defragment the server drives

If you have never defragmented a drive, you will be amazed at the improvement in access speed.

Have at least 2 GB of RAM

More memory means better performance. For optimal performance, the data that is accessed most often should fit in the server’s cache with room to spare.

Turn off the Mertech driver trace

When initially testing the application, you may have used the driver trace. Be sure to turn it off when you are finished; leaving it on slows performance.

Place data, index, and transaction logs on different physical hard drives

Place the data, index, and transaction log files on different physical hard drives on the server to balance drive access.

Pre-allocate database or tablespace

Pre-allocate the database or tablespace according to the current size of the Pervasive.SQL database, if the backend allows it. Leave room for growth and, once again, defragment the drive. Both of these steps help ensure that the tables are not fragmented across the hard disk.
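
For example, on MS SQL Server a database can be pre-allocated with T-SQL similar to the following; the database name, file paths, and sizes are hypothetical and should be scaled to your Pervasive.SQL data:

CREATE DATABASE MigratedDb
ON PRIMARY (
    NAME = MigratedDb_data,
    FILENAME = 'D:\SqlData\MigratedDb.mdf',
    SIZE = 4GB,          -- roughly the current size of the Btrieve data
    FILEGROWTH = 512MB ) -- room for growth
LOG ON (
    NAME = MigratedDb_log,
    FILENAME = 'E:\SqlLogs\MigratedDb.ldf',
    SIZE = 1GB,
    FILEGROWTH = 256MB );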

Set the Recovery Model to BULK LOGGED

Set the Recovery Model to BULK LOGGED for Microsoft SQL Server. This reduces transaction log overhead during the bulk-load phase of the migration.
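
The recovery model can be switched with a statement like the one below (MigratedDb is a hypothetical name); you may want to switch back to FULL after the migration completes:

ALTER DATABASE MigratedDb SET RECOVERY BULK_LOGGED;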

Utilizing Function Executor

Whatever language your application is written in, and whether it uses a high-level COM object, a class wrapper, or the BTRV function directly, every operation is eventually funneled into the BTRCALL (or BTRCALLID) function exported from w3btrv7.dll or wbtrv32.dll.
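
In C, that funnel point looks roughly like the sketch below. The declaration approximates the one in the Pervasive SDK headers, which use BTI_* typedefs and a platform-specific calling convention, so treat the exact types as an assumption; the operation codes shown are the standard Btrieve ones.

/* Approximate declaration of the exported entry point; the SDK's
   btrapi.h spells this with BTI_* typedefs. */
short BTRCALL(unsigned short operation, void *posBlock,
              void *dataBuffer, unsigned long *dataLength,
              void *keyBuffer, unsigned char keyLength,
              char keyNum);

enum { B_OPEN = 0, B_CLOSE = 1, B_GET_FIRST = 12 };  /* standard op codes */

void sketch(void)
{
    char posBlock[128];   /* position block; assumed initialized by a prior B_OPEN (not shown) */
    char record[4096];
    unsigned long len = sizeof(record);
    char key[255];

    /* Every higher-level wrapper ultimately reduces to a call like this: */
    short status = BTRCALL(B_GET_FIRST, posBlock, record, &len,
                           key, sizeof(key), 0);
    (void)status;  /* 0 means success; nonzero is a Btrieve status code */
}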

Pervasive provides a low-level tool for executing Btrieve commands called Function Executor. Since the tool runs directly off of w3btrv7.dll calling the standard BTRCALLID function, it is a good troubleshooting tool when an application is receiving an unexpected error or you would like to experiment with a particular scenario.

A simple technique to discern the difference between Btrieve and SQL behavior is to create two folders. In one, place wbexec32.exe (from pvsw\bin) and the original w3btrv7.dll. In the second folder, place another copy of wbexec32.exe along with the Mertech w3btrv7.dll and sql_btr.cfg, ora_btr.cfg, or pgs_btr.cfg. Then create shortcuts to each copy so that you can selectively execute Btrieve API calls against Btrieve and against SQL.

NOTE: Function Executor is just like any Btrieve application and will thus require the SQL client to be installed and on the system PATH (see Prerequisites).

Using the Function Executor History window, you can save a sequence of API calls and then play them back later, or ask the tool to repeat them any number of times. During playback, a small arrow in the lower right of the dialog box gives access to the time taken to execute the calls, which can be used for quick performance comparisons.