Release Notes

BTR2SQL v6.0.9558.1

BTR2SQL v6.0.9558.1 was released on 2022-12-14. This build contains the MS-SQL, PostgreSQL, and Oracle drivers.

Bug Fixes

  • BTR2SQL-117 B_DELETE and B_UPDATE outside a transaction were not being committed.

  • BTR2SQL-121 Skip index storage estimation for indexes without segments.

  • BTR2SQL-126 String Date value was being truncated at timestamp portion.

  • BTR2SQL-135 Field buffer was not updated when a record was retrieved from cache and Oracle indicated NULL.

  • BTR2SQL-138 Fix to MSSQLLocksLibrary.

  • PROD-331 IS_NULL columns were not updated properly.

  • PROD-332 B_GET Issue.

  • PROD-333 Documentation links were wrong.

  • PROD-334/PROD-336 Memory allocation bug in Oracle driver affecting DTI testing.

  • PROD-338 Fix to new lock logic in MS-SQL.

BTR2SQL v6.0.9526.0

BTR2SQL v6.0.9526.0 was released on 2022-10-24. This build contains the MS-SQL, PostgreSQL, and Oracle drivers.

New Features

This version now includes the Oracle driver as well. Oracle 21c is the highest supported release at this time. For MS-SQL there is a new lock mechanism to deal with a hash collision in MS-SQL server’s hash calculator. This feature is not enabled by default and can only be enabled at the instruction of Mertech support staff.

Bug Fixes

  • BTR2SQL-81 Fixed a Stat 19 with MS-SQL driver caused by an optimization that wasn’t quite correct.

BTR2SQL v6.0.9465.0

BTR2SQL v6.0.9465.0 was released on 2022-08-24. This build contains the MS-SQL and PostgreSQL drivers. The Oracle driver is not included in this build.

New Features

This version now includes the PostgreSQL driver as well. PostgreSQL v13 is the highest supported release at this time. Additional new features:

  • FET_DISABLE_REPLACEMENT_VALUES This new feature flag should only be used at the direction of Mertech support.

  • TREAT_INVALID_VAL_AS_ERROR This new flag should only be used at the direction of Mertech support.

  • BTR2SQL-72 Improved trace output with more details.

Bug Fixes

  • BTR2SQL-29 Allow applications to store values that would otherwise be interpreted as a special default value.

  • BTR2SQL-35 Performance issue related to cursors not being properly disposed of.

  • BTR2SQL-103 SQL LOB fields were not getting the default value assigned in the migration XML.

  • BTR2SQL-102 GetNextExtended was not caching rows, causing a performance issue.

  • BTR2SQL-87 Fixed path resolution to handle more pathing options (relative paths, UNC, etc).

  • BTR2SQL-92 Fixed MS-SQL migration to use DATETIME2 instead of DATETIME (more accurate).

BTR2SQL v6.0.8817.0

BTR2SQL v6.0.8817.0 was released on 2020-11-13. This section also contains update details for older versions.

New Features

Support for the latest RDBMS versions

Microsoft SQL Server 2019

Multi-LOB Support

In v5.3, we released Multi-LOB support for our MS-SQL driver. Now PostgreSQL and Oracle support Multi-LOB. This allows for the variable portion of a Btrieve record structure to support multiple binary segments that will be stored in SQL as independent columns, but addressable in Btrieve with the same offset based access that you’re familiar with.

Oracle Unicode Support

Some editions of Oracle utilize UTF-8 as the default character set. This default mode is now supported with the ability to read/write to nchar/nvarchar/nclob style columns.

XML Generation from DDFs

This allows you to save all migration settings (starting from our defaults that are loaded from your DDFs) so that any customizations for your migration can be easily reproduced and your migration can be automated.

Extended Fetch Filter Limit Removed

This used to have a Btrieve-imposed limit of 119. This artificial limit has now been lifted.

Trimmed Multi-segment Queries

This was first implemented in v5.3 for MS-SQL. It is now available in Oracle as well. This is a speed optimization that reduces roundtrips to the server for certain query operations. It is done silently in the background.

White Label Support

Our drivers no longer contain Mertech company info. This keeps your implementation details private for a competitive advantage.

Migration Tool for Magic users

Magic uniPaaS applications with multi-tenant data will find the MigrateUniPaasData tool much easier to use now that it ignores data files not defined in the folder being migrated.

Bugs Fixed

Oracle Fix

“ORA-01400: cannot insert NULL” during Insert when having a Manual Value in AutoInc field.

Cache Size Wrong

The driver was not returning NotFound because of incorrect cache variables; the cache size is now set correctly.

Triggers

Enabling triggers produced an error in the GUI when the Case setting differed on the table. Triggers were also created with the incorrect case for identifiers when the setting was not Upper.

Oracle Fix

“ORA-24795: Illegal COMMIT attempt made” error that might be caused when Load Data fails during migration.

E-SQL Fix

A transaction may be left active when generating Scripts, causing side effects for later Embedded-SQL commands.

Large Query Exception

An exception could occur for tables with a large number of fields due to improper internal buffer expansion.

Primary Index Fix

Was not always being set properly.

PostgreSQL string issue

Zeros are not allowed to be embedded in a string column; they are now converted to spaces instead.

Migration Fix

Empty default values are now allowed through the XML interface.

Version 5.3

MSSQL

BTR2SQL 5.3, a Microsoft SQL Server only release, includes the following new features:

Large Objects (LOBs) Now Split to Individual SQL Columns

The BTR2SQL migration now splits LOBs (type 21) into multiple SQL columns, rather than lumping all LOB data together. This split divides the distinct pieces of data you may be storing in the final segment of your Btrieve records (notes, media, etc.) into multiple, variably-sized columns that are easily accessible post-migration.

Specify Tables and Fields’ SQL Data Type and Collation as You Add Them

When you add a table or field while performing a migration, or when using our MdsAddTable or MdsAddField APIs, you can now specify the SQL data type and collation of the table or field you’re adding. For example, you could specify a SQL-specific datetime type for data that benefits from a unique datetime range.

Fetch Files Within Multi-tenant Structures More Easily

All commands that accept a file path, most notably B_OPEN, now also accept templated schema and table names, allowing you to easily retrieve specific files within multi-tenant structures that use multiple schemas and tables.

Improved Performance for a Variety of Operations

We’ve improved the speed of a number of Btrieve operations, including the ExtendedFetch, GetGE, and GetLE commands.

Access Data Defined Within Your Btrieve Records' Variable Portion

During migration, you can now split the defined data stored within the variable portion of your Btrieve records into separate, defined SQL columns, making this data accessible post-migration.

Save Users’ Server Login Credentials

New, SQL Server Management Studio-style support for remembering users’ server logins saves users and developers the trouble of entering their server credentials each time they access or test your application.

Use Your App With SQL Server 2016 and 2017, Microsoft Azure, and Microsoft’s 2018 OLE DB SQL Server Client

BTR2SQL 5.3 officially supports using your app alongside MS SQL 2016, MS SQL 2017, and Azure databases, as well as with Microsoft’s 2018 OLE DB SQL Server client.

Assigned Names for Default Constraints

BTR2SQL now assigns custom names to default constraints, rather than using SQL’s automatically assigned names. This allows you to more easily compare multiple versions of the same database when performing a database diff, since each version now uses the same names for the same constraints.

More Comprehensive View Support

BTR2SQL now more fully supports using views to manipulate and abstract data returned by a particular query or contained within a particular table.

Limit Data Fetch Queries to a Set Number of Records for Faster Performance

If you know you need to read only a handful of records, you can use the new B_SET_QUERY_CONSTR command to limit the number of records a fetch query reads.

Specify a Custom Path to Your BTR2SQL License

If our default license search logic doesn’t meet your needs, you can use the new LICENSE_FILE_PATH setting to manually specify the path to your BTR2SQL license. You can add this setting and specify your license file path within either BTR2SQL’s mds_global.ini file or within your application itself.

Create and Drop Triggers Using New APIs

New MdsCreateTriggers and MdsDropTriggers APIs allow you to create and drop triggers when adding or modifying data using outside, non-BTR2SQL applications. For example, if an outside application attempts to modify a LOB used by your application, you can use the MdsCreateTriggers API to ensure your application synchronizes these modifications with the data stored on your SQL database.

To take advantage of these new BTR2SQL for MS SQL features, download Version 5.3 of BTR2SQL for MS SQL from the Mertech website, and then install the application using your typical upgrade process. If you’re using BTR2SQL with a SQL database other than MS SQL or Azure, DO NOT DOWNLOAD AND INSTALL THIS VERSION OF BTR2SQL. Instead, continue using version 5.2. If you need help upgrading or implementing a feature, refer to BTR2SQL’s product documentation or contact us at support@mertechdata.com.

Version 5.2

MSSQL

Server Support

Support is available for MS SQL Server and LocalDB.

Oracle

Server Support

Support is available for Oracle 12c.

Lock Conflict

The extremely small namespace Oracle allowed in the DBMS_LOCK package (30 bits) can lead to a lock conflict. A custom lock package is provided that expands the namespace. This should eliminate false lock errors.

PostgreSQL

Server Support

Support is available for PostgreSQL 9.4.

GUI Migration Utility

Attach and Detach

Attach and detach options were added to the BTR2SQL GUI Select Database dialog box to allow users to specify the database file location. (MSSQL)

Command-line Migration Tools

MdsMigrateFile

A new command-line tool, MdsMigrateFile, is installed. MdsMigrateFile can be used to migrate Btrieve files, as a last resort, when DDFs are not available.

MigrateUniPaasData

Several improvements have been made to the MigrateUniPaasData command-line tool, which performs migrations for Magic applications.

APIs

New APIs, MdsSetSetting and MdsGetSetting, are available to programmatically set and retrieve global (mds_global.ini) and folder specific (mds.ini) settings.

The Btrieve APIs B_RENAME_FILE and B_DELETE_FILE are supported.

Support was added for NextInChunk and AppendToChunk sub functions of the B_GET_DIRECT and B_CHUNK_UPDATE functions.

When B_UPDATE or B_DELETE are called, a Status 80 is returned if any of the fields has changed since the last read.

General

A new setting in mds_global.ini, PERVASIVE_DLL_NAME, allows you to specify the name and optionally the path of the Btrieve access dll to be used for reading Btrieve data during migration.

The default for the IDENTIFIER_CASE setting was changed from “Upper” to “Original” (MSSQL and Oracle).
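These settings live in mds_global.ini. A minimal sketch follows; the values and any section layout are illustrative assumptions, so verify the exact format against the BTR2SQL documentation:

```ini
; mds_global.ini -- illustrative sketch only; the dll path and value
; shown here are assumptions, not shipped defaults.
PERVASIVE_DLL_NAME=C:\PVSW\bin\w3btrv7.dll
IDENTIFIER_CASE=Original
```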

Shortcuts were added for easy access to the deployment folder (where the Btrieve replacement dlls are located) and to the SDK.

New blog posts and white papers are available to describe multi-migration scenarios.

Examples showing how to use Embedded SQL to create and drop databases and users are provided in sdk\Samples\MdsApis\C\MdsApiSamples.cpp.

Version 5.1

License

BTR2SQL v5.1 comes with a fully functional license that is valid for 14 days after installation.

Support for Btrieve 6.15

A more explicit error message displays if Btrieve dlls are not accessible.

PostgreSQL client library

The required dependencies for the PostgreSQL client library are installed in <Program Files>\Mertech Data Systems\DB Drivers\Btrieve\clients\pgsql and can be copied to the <winsys32> directory.

FUJITSU COBOL

A new Btrieve type, B_TYPE_NUMERICSAF (100), was added, enabling the Mertech drivers to properly map the Fujitsu COBOL signed data type.

Support for Microsoft SQL Server Filegroups

The MS SQL Filegroups feature is used to group database objects and files together for allocation and administration purposes. The BTR2SQL GUI Migration Utility, mds.ini file, command-line migration tools and migration API were modified so separate Filegroups can be selected for tables, indexes, and text storage. Choosing separate Filegroups can streamline administration and increase data access speed.

Migration Utility

A Quick Start Guide and Video are provided to facilitate getting started as a new BTR2SQL user.

Three new fields were added to the Convert File dialog box for assignment of MS SQL Filegroups.

SQL

Locking

Two settings in the INI file, SERVER_SIDE_LOCKING and SQL_LOCKS (new) control how locking is implemented.

Execution

The dataBuffer parameter was added to the B_SQL_EXECUTE command so a call to B_SQL_PREPARE is no longer required before executing embedded SQL statements.

Support for SQL Case Identifier

A new setting in the INI file, IDENTIFIER_CASE, controls how table and field names are handled.

Tracing

Startup Tracing

Startup tracing was added to the mds_app_lock plugin for MySQL and PostgreSQL. This server-side trace flags problems with the mds_app_lock plugin during SQL server startup.

Trace Query Times

A new setting in the INI file, TRACE_QUERY_TIMES, allows the query time output to be suppressed during tracing. The default for this setting is No.

CACHE

A new setting in the INI file, CACHE_GETEQUAL, determines whether a record is fetched from local cache.

Version 5.0

Server Support

Supported SQL Server Version Embedded in the License File

The driver itself is no longer tied to a specific SQL server version. Instead, the supported SQL server version (minimum and maximum) is embedded in the license file. This means you can upgrade to the latest SQL server versions without changing your driver.

PostgreSQL

Version 5.0 introduces PostgreSQL 9.1 to the list of supported servers.

MSSQL 2012

Version 5.0 adds support for Microsoft SQL Server 2012 Edition.

SQL Native Client

By default the driver now looks for Microsoft SQL Server 2012 Native Client and then falls back to version 2008 or 2005 if version 2012 is not available.

Proper Locking Support in a High-Speed Multi-Process Environment

The SQL update command includes the call sequence lock, update, unlock. Although all the calls execute in the same network request, the lock and unlock are not tied to the transaction - they are performed immediately whereas the updated data is not available until the commit is performed.

The SQL update command was modified so that the unlock occurs after the transaction commit. This resolves problems observed in some high-speed multi-process environments.

Tracing

Startup Trace

A startup trace is now available to record errors detected during the early stages of driver initialization (before the regular trace begins), which can help to track down tracing and license file issues.

Migration Utility

Improved GUI Interface

The Migration Utility was rewritten to use the .NET framework resulting in an improved interface.

Support for Selecting Data Type Mappings from Btrieve to SQL

BTR2SQL v5.0 allows you to modify the field-type mappings used during migration. General preferences and table-specific settings are supported. A drop-down list of field-types ensures that only suitable mappings can be selected. Most of the time it is not necessary to change the assignment made by BTR2SQL, but there may be instances when this is useful. For example, you can map a string field to a date/time field to provide meaningful interpretation of date/time information.

Improvement in Oracle Data Migration Using DIRECT PATH Load

SQL*Loader provides two methods for loading data: conventional path load and direct path load. A conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables. A direct path load parses the input data according to the description given in the loader control file, converts the data for each input field to its corresponding Oracle column data type, and builds a column array structure (an array of <length, data> pairs). You can now choose the direct path load, which is faster than the conventional path load.

Renumbering Indexes

Indexes are automatically renumbered to eliminate gaps after an index is dropped. This feature may be particularly useful for Magic developers, where adding and dropping indexes occurs more frequently.

Improved Command-line Tools

A new command-line tool, MigrateUniPaasData, was added for direct migration from uniPaaS XML applications to the SQL backend. A second tool, MigrationValidator_uniPaas, was also added to verify this migration.

Version 4.5

Btrieve API Support

Manual assignment for autoinc fields

Normally an AUTOINC field is automatically assigned a value by the server – the application places a zero in the dataBuffer. Sometimes it is necessary for the application to give a specific value to such a field – the application fills the dataBuffer with the desired value. This is now supported.

Support for DOS apps

There are a few applications still left that are based on DOS. If your application is one of these, you are using Pervasive's DOS driver, which forwards the DOS interrupt call to the 32-bit access dll. Replacing the Pervasive w3btrv7.dll with the Mertech version will cause the application to utilize our SQL access. Migration and deployment are the same as for a Windows 32-bit application, except you must also deploy the Pervasive DOS driver.

Migration

Utilize the index list in Btrieve file

Previously, the indexes defined in Index.DDF were used to create the indexes on the SQL server. In many cases, these indexes did not accurately match those in the Btrieve file causing various issues at runtime. To be more compliant with what the application expects from the driver, the migration now utilizes the key definitions in the Btrieve file instead of those in the DDFs. If the Btrieve file is not available during migration, the DDFs will be used.

Supporting >8K record sizes

The SQL backends do not support a record size greater than 8K. To accommodate the rare Btrieve record larger than 8K, the migration process uses the varchar(max) and varbinary(max) data types rather than varchar and varbinary. We suggest splitting the file into smaller records, since these large data types come with a performance overhead.

SDK

“Selective Column Fetching” (B_SET_FIELD_FETCH)

B_SET_FIELD_FETCH allows the application to control which fields are retrieved during a B_GET or B_STEP function. This is a simple way to add huge performance benefits to sections of the application that read large records but only require a few of the fields.

MdsPrepareDB

A migration function that allows you to set up the server and/or database with the needed objects (stored procedures, views, etc.) for running your Btrieve application with the Mertech driver.

MdsEncryptPassword

If your application creates data folders on the fly (for a new customer for instance), you will probably want to create a new mds.ini file in the folder with server-specific credentials. The password in the INI is encrypted and given in hex format. To convert a plain text password to the required format, call MdsEncryptPassword (or B_ENCRYPT_PASSWORD).

mds_global.ini

A new INI file is introduced as an alternative to the registry and environment variables. All prior settings are now available in the INI file. The location of the mds_global.ini file is up to you. The first mds_global.ini found on the usual PATH is used. We recommend placing the mds_global.ini file in the same location as the access dll and license (.cfg) file.

Misc

Lowered memory consumption

Structure sizes and dynamic memory usage were reduced to lower the overhead of the driver.

Lower record size on server

Where possible and depending on the server backend, some data types on the SQL server have been reduced in size and utilize a more native data type. For instance, MDS_RECNUM changed from NUMBER(28) to a BIGINT. This will reduce the data storage and retrieval time on the server. Note, existing tables will remain with the previous definition.

Geometric Progression of cache

Applications often do a scan of a file or a portion of the file while at other times only individual records are read. In order to make both scenarios as fast as possible, the caching mechanism in the driver watches the series of function calls and determines when the application is doing a table scan. As this progresses, the driver increases the record cache to provide the fastest access to the records needed.
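The growth policy above can be sketched as follows. The threshold, growth factor, and cap here are invented for illustration; the driver's actual tuning is not documented:

```python
def next_cache_size(current, sequential_reads, cap=4096):
    """Grow the record cache geometrically once a table scan is detected.

    Hypothetical illustration of the idea described above: the scan
    threshold (4 sequential reads), doubling factor, and cap are
    assumptions, not the driver's real parameters.
    """
    if sequential_reads < 4:      # random access: keep the cache small
        return current
    return min(current * 2, cap)  # scan detected: double, up to a cap
```

Under this sketch, a sustained scan starting from a cache of 8 records grows through 16, 32, 64, and so on, while isolated single-record reads leave the cache untouched.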

Tools

Trace Control Tool installed

Have trouble finding just the right registry spot to set the trace file up? We now install a Trace Control tool that provides easy access to the settings.

Improved screen updates during migration

Small improvements to the interface during migration provide better, timelier feedback so you are not stuck wondering whether the tool has hung.

More checks in DDF Validator

The DDF Validation tool adds more checks to help ensure your migration is a smooth success.

License Registration

The license registration tool is removed. Rather than typing in a long registration code, you will be sent a license CFG file via email. All license codes are also always available for download from our website using your Mertech Data Systems Purchasing Account.

Version 4.2

Server Support

Oracle 11g

The 11g OCI client library is used internally now. This fixed a few issues found in the older library when accessing newer OCI dlls. The driver is also tested with the 11g server for full support.

UTF Character Set in Oracle

If the Oracle database uses a UTF character set, the migration now uses NVARCHAR2 instead of VARCHAR2.

Tracing

TRACE_BUFFERS

This new value (registry or environment variable) can contain any combination of 'pb', 'kb', 'db', or 'sql'. Often you want to see the activity of the application without logging the large data buffers. Setting TRACE_BUFFERS='kb,sql' causes the trace to log only the key buffer and SQL commands, so the trace file is much smaller.

Btrieve API Support

Expanded Legacy NULL support

Legacy null support expanded to properly filter records based on the null “value” provided in the Btrieve index definition. An extra field is added to the table ending in “_isnull” and this field is then included in the SQL index.

Improved support for Titan controls

Titan utilizes some “special” values in fields that are not normally valid as a float or date for instance. The driver checks for these values and reacts accordingly.

B_DATETIME (type 30)

B_DATETIME is now fully supported.

Milliseconds in B_TIMESTAMP

The string value stored in the SQL database is now expanded to include milliseconds and its format is changed to military style (YYYY-MM-DD-hh-mm-ss.nnn) so it sorts properly as a string.
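The reason the military-style format sorts correctly is that its fields run from most to least significant and are zero-padded, so lexicographic order matches chronological order. A quick illustration:

```python
# Timestamps in the YYYY-MM-DD-hh-mm-ss.nnn format described above.
stamps = [
    "2022-12-14-09-30-00.250",
    "2022-02-01-23-59-59.999",
    "2022-12-14-09-30-00.100",
]

# A plain string sort yields chronological order: every field is
# zero-padded and ordered most-significant first.
assert sorted(stamps) == [
    "2022-02-01-23-59-59.999",
    "2022-12-14-09-30-00.100",
    "2022-12-14-09-30-00.250",
]
```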

Btrieve Version Number

Sometimes an application checks the version of Btrieve to validate API support or to narrow the range of tested scenarios. The Mertech driver always reports itself as version 7.9 in the B_VERSION call, so some applications refuse to run. To override the information returned, you may add a registry entry: HKLM\SOFTWARE\[Wow6432Node]\Mertech Data Systems\DB Drivers\BTR\<Driver>\v4

Add a new key “BtrieveVersion”.

Add a string value named for the process, with the desired version as its data. For instance, "wbexec32.exe"=9.11.

Indexes defined with different lengths

Btrieve allows different lengths to be defined for the same “field” in multiple indexes. So you might have one index defined as 100 bytes for the address field but another index only defines 20 bytes. This kind of partial definition is not supported by most SQL backends and thus not supported by the driver. The index will be defined as the full length of the field. This may affect such operations as GET_EQUAL.

Related to this, it is possible to have a large field (a string of say 1000 bytes). The indexes can only define 255 bytes at most. In this instance, the “field” will be split into two when migrated to SQL. The original field name is used for a string of 255 and another field is added with “_extra” appended to the name to hold the remainder of the data.
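The 255-byte split can be sketched like this. The helper and its parameter names are hypothetical; only the "_extra" column naming comes from the text above:

```python
def split_indexed_field(name, value, max_index_len=255):
    """Split an oversized field into an indexable part and an '_extra' part.

    Sketch of the migration behavior described above. The '_extra' suffix
    matches the text; the function itself is an illustration, not the
    migration tool's actual code.
    """
    if len(value) <= max_index_len:
        return {name: value}
    return {
        name: value[:max_index_len],              # indexable 255-byte prefix
        name + "_extra": value[max_index_len:],   # remainder of the data
    }
```

For a 1000-byte string field named "address", this yields an "address" column of 255 bytes and an "address_extra" column holding the remaining 745 bytes.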

B_SET_DIR and B_GET_DIR

These two simple functions are implemented now.

Overlapping fields

The DDFs allow multiple fields to be defined for the same portion of a record. It is somewhat common to define one binary field that covers several smaller fields or to have one index segment defined for multiple fields rather than multiple segments (one for each field). This approach causes several problems.

When the dataBuffer is split out to individual fields, the data will be duplicated into SQL – the smaller fields and also the overlapping field. This is wasteful and depending on the exact definitions of the fields, may have conversion errors into the SQL data types.

When the dataBuffer is pieced back together for a read, there are two ways of looking at the data so the conversions could get corrupted.

An error will be given during migration for tables defined in this manner. To avoid the situation, the developer should decide which definitions are most correct. Normally, this means removing the definition of the overlapping field in favor of the smaller more accurate definitions.

Misc

Use HKEY_CURRENT_USER

HKEY_LOCAL_MACHINE is now only used for the initial information obtained during installation – for instance, the location where the product was installed. All other information is stored and retrieved from HKEY_CURRENT_USER. For instance, trace settings are now found at HKEY_CURRENT_USER\Software\Mertech Data Systems\DB Drivers\BTR\MySql Driver\v4 for the MySQL driver.

View SQL Button

A view needs to be created on the Oracle server during migration. The migration tool will pop up a dialog box asking for the SYS password in order to run the needed SQL command. Most administrators will not give you the password and will instead want to run the SQL themselves. So, the dialog box offers a button which will show you the SQL. Pass this SQL off to your Oracle administrator to be run.

SDK

Migration API

MdsMigrateOneTable, MdsMigrateAllTablesInDdfs, MdsMigrateAllTableInConfig

These new APIs give you full control over the migration process and allow you to automate it for seamless deployment.

A command-line migration tool is also provided which has more flexibility and automation than the GUI tool. Source is provided as a sample of the migration API.

Improved DOCs

The documentation has been broken into logical sections or “modules”. Overview and introduction pages are added to give you more information about the group of functions. And the information has been reviewed and expanded upon where needed. Let us know if you need more details for any of the functions.

Version 4

Server Support

MSSQL 2008

Version 4 adds support for MSSQL 2008 and removes support for MSSQL 2000.

SQL Native Client

Version 3 of the driver used the OLE-DB client distributed with MDAC. Version 4 now utilizes the newer “SQL Native Client” (2008 or 2005). This newer client provides the same interface but has improvements and fixes. It can be downloaded from Microsoft in the Feature Packs available here (http://msdn.microsoft.com/en-us/data/aa937733.aspx)

Support for new MSSQL 2008 data types DATE and DATETIME2

These two types have a larger range and thus better support the Btrieve Date. If the 2008 client is used during migration, these new types are chosen by default. If the 2005 client is used, the migration and runtime driver will treat date columns as the older DateTime type with 01-01-1753 minimum value.

Migration sets MSSQL database to use snapshot isolation

Snapshot isolation has always been needed for proper transaction control; the migration is being proactive now to ensure this happens. During runtime, the trace file will show a warning if it detects the database is not in snapshot mode.

ALTER DATABASE <db> SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE <db> SET READ_COMMITTED_SNAPSHOT ON

Tracing

Controlling trace in the application

Several APIs were added to the SDK to allow the application to have control over the trace file output and to log messages into the trace.

  • See the SDK documentation Modules tab > Driver Trace Control for details

Buffered file I/O utilized for the trace output

Buffering the output significantly increases the speed of tracing. A level 1 trace now has an almost unnoticeable impact on the application's performance.

Because the output is buffered, looking at the trace while the application is running will not show you the last few operations. And if the application exits unexpectedly, the last part of the trace will be lost – making it difficult to see what might have caused the crash. You can disable the buffered I/O by setting TRACE_FLUSH to “ON” in the registry or environment variable (along with the usual TRACE_ON and TRACE_LEVEL).

Trace file automatically breaks at 512MB size

A new file is started with a number added to the end of the filename when the file size reaches 512MB. As the trace grows you will thus see “mds.tra”, “mds_1.tra”, “mds_2.tra”, etc. This allows you to periodically remove pieces so the hard drive doesn’t fill up and makes viewing a section of the trace easier as most editors won’t handle 4GB files.
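The rollover naming above can be sketched with a small helper. This is illustrative only; the driver's actual implementation is not shown in these notes:

```python
def trace_file_name(base="mds.tra", rollovers=0):
    """Name of the active trace file after a number of 512MB rollovers.

    Hypothetical sketch of the naming scheme described above:
    mds.tra, then mds_1.tra, mds_2.tra, and so on.
    """
    if rollovers == 0:
        return base
    stem, ext = base.rsplit(".", 1)
    return f"{stem}_{rollovers}.{ext}"
```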

“Slow” queries are reported

Each query that is run is timed. If the query takes longer than a predefined time slice, it is reported as slow. You can search the trace file for “Performance:” to find these entries. Generally queries to find data will run in less than 0.01 seconds with an occasional “burp”. If you find queries taking many times longer, this could indicate a problem with an index.
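The reporting rule can be sketched as follows. The 0.01-second threshold matches the rough guidance above, but the actual time slice and message format in the driver may differ:

```python
def report_query_time(elapsed_seconds, threshold=0.01):
    """Return a trace line for a query, flagging slow ones.

    Illustrative sketch of the behavior described above; the real
    driver's threshold and trace-line format are assumptions here.
    """
    line = f"query completed in {elapsed_seconds:.3f}s"
    if elapsed_seconds > threshold:
        line = f"Performance: {line}"   # searchable marker for slow queries
    return line
```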

Btrieve API Support

True-NULLs

Version 4 now includes the “null indicator segment” field in the SQL database so this extra value is properly maintained in the dataBuffer.

Full support for Btrieve Client ID

Each “client” receives a new context inside of the driver including a separate login to the server, transaction support, cursors, etc.

Override flag for B_CREATE to force driver to ignore an existing INT file

Version 4 introduces a flag allowing you to ignore the existing INT file:

B_CM_OVERWRITE_IGNORE_INT_FILE = -6

Opposite this, perhaps you want the driver to always use the definition in the INT. Two more flags force this behavior (the FILESPEC in the dataBuffer is completely ignored):

B_CM_OVERWRITE_FORCE_INT_FILE = -99

B_CM_NOOVERWRITE_FORCE_INT_FILE = -100

  • See Creating tables on the fly for an example using the B_CM_OVERWRITE settings

Support for extra data types defined in fieldext.ddf

Version 4 now reads this information and makes a wiser choice for the type used in SQL.

B_EXTENDED_VERSION

It may be interesting, or even needed, for your application to know if the Pervasive version or the Mertech version of the access dll is loaded and you may want to know the version of the Mertech driver (B_VERSION reports 7.9 always). Or perhaps you need some version information for the SQL server currently connected. B_EXTENDED_VERSION can do all of this. Calling this function against the Pervasive dll returns a status 1 (unknown operation) making it easy to distinguish the two.

Version 3

Performance and Usability Improvements

Version 3 represents Mertech’s continuing commitment to create products that will completely fulfill your application’s needs. We have added several new features to the application API to improve performance and functionality. We have also added many new tools to help build the Data Definition Files (DDFs) and validate a successful migration.

Windows 16-bit support

Although Windows 3.1 is now a thing of the past, there are still many applications running that were written for 16-bit Windows. These applications use the Pervasive w1btrv7 dll instead of w3btrv7. An introductory version of this dll is now included in the Version 3 installation. At this time, it is only supported on Windows NT and above. It is not supported if the application uses the Chunk APIs. The dll uses thunking to forward the 16-bit calls to the w3btrv7 dll, so both dlls are required. You can test it by using the old Btrieve function executor, wbexec.exe.

Driver prefix no longer needed in DDF files

The Driver Prefix originated in the DataFlex product line. DataFlex uses a prefix in the table location to determine which driver to load for data access. When the Btrieve product was introduced, the same concept was used: either sql_btr: or ora_btr: is prepended to the file location column in FILE.DDF (the table list). Btrieve itself does not use or need this prefix, and it caused problems in applications that depend on the DDFs.

Therefore, we removed the driver prefix. Instead of sql_btr: appearing in front of the file location, BTR2SQL now displays a checkmark on the file’s icon in the File Selection grid. This checkmark indicates that the file has been migrated to an SQL table. It is easier to see, and you no longer need to use the Add/Remove Driver Prefix buttons.

MS SQL Server 2005 support

BTR2SQL Version 3 includes support for SQL Server 2005, covering both existing and future deployments to SQL 2005.

Improved database migration tool

Several small issues have been fixed or enhanced on the Migration Utility. Continued improvement of this tool is a priority for us in the months ahead.

Performance improvements

Overall improvements were made to find and update performance, and code that caused unexpected application aborts was cleaned up.

Documentation improvements

Documentation has moved from PDF distribution to a compiled help file and HTML.

Mertech SDK for BTR2SQL

In the past, the Mertech SDK simply consisted of the Btrieve API (the Btrieve SDK) with the Mertech w3btrv dll substituted. Version 3 implements several of the operations previously ignored, specifically the Chunk operations. We have also introduced some new functions that your applications can use to enhance performance and to allow on-the-fly table manipulation. For detailed explanations and examples of all of these functions, please see the SDK documentation. The documentation for the SDK and examples with project files are installed in the <Program Files>\Mertech Data Systems\DB Drivers\Btrieve\sdk\ subdirectories.

New functions to handle Chunks

Btrieve is famous for its ability to store large amounts of data quickly. Before the term BLOB was widely used, Btrieve supported extremely large records which could hold any object, picture, file, etc. However, the data length and buffer are limited to 16 bits, which means the standard API only supports about 64 KB. In order to access larger data records, Chunks were invented. The application inserts or fetches the fixed portion of a record, which is always less than 64 KB, using the standard B_Insert and B_Get/Step operations. Then the application uses the Put and Get Chunk APIs to insert and read the rest of the data in small pieces.

Many applications have used this feature to store pictures and other objects in their database, so the B_Get_Chunk(23) and B_Update_Chunk(53) operations have long been requested. Nothing in your application needs to change. If you were previously unable to use the Mertech driver due to the lack of Chunk support, give Version 3 a try.

Bulk inserts using the new FAST_INSERT functions

B_Insert is fast, but when you use an SQL database at the backend, there are more efficient ways to insert large numbers of records. This is especially true when the records themselves are large.

B_BEGIN_FAST_INSERT, B_FAST_INSERT and B_END_FAST_INSERT have been added to support this new method. Native SQL APIs are used to let you take advantage of the fastest loading method for the backend in use. As of March 2006, only MS SQL Server is supported.

While these are new operations not supported by Btrieve itself, they are implemented through the standard BTRV/BTRCALL function interface. Consult the SDK documentation for detailed examples and usage.

The Fast Insert operation supports a callback mechanism for inserting large records, similar to the Update Chunk functions discussed above. Fast Insert allows the application to insert the fixed portion of a record and then feed small pieces of the variable portion until the record is complete. This works well if the application is reading the data from a stream: read 16 KB from the stream, append it to the data record, and repeat. However, this is still not the fastest method of getting data into the SQL backend.

Since the Fast Insert operations are new, the full 32-bit data length is allowed. Thus, the application can read the entire record, including the variable portion (BLOB), prior to the insert, and then pass the entire data buffer, up to 2 GB, to the Fast Insert operation. The speed of this method is phenomenal compared to a B_Insert followed by multiple Update Chunk calls. A sample project under the sdk\samples directory shows a comparison of insert speeds.

Schema manipulation

Perhaps your application needs to create temporary data files during operation or add some new tables over time. You may need some new columns in a file. In the past, your only option was to update the schema in the DDFs and then perform another migration of the affected files.

Now, the application can use a set of new functions similar to Pervasive’s DTI functions: MdsAddTable(), MdsRenameTable(), MdsDropTable(), and MdsAddFields().

The functions do what their names imply. They are documented in the mds_btrieve.h header file located in the <Program Files>\Mertech Data Systems\DB Drivers\Btrieve\sdk\include directory.

Embedded SQL

You may occasionally want the application to communicate with the SQL backend directly in order to access features that previously did not exist in the Btrieve API.

The new Embedded SQL APIs were created to fill this gap. They are invoked through the normal BTRV/BTRCALL API using additional operation code constants with values of 10000 and above. For example, you can use B_SQL_EXECUTE to run an SQL command and B_SQL_FETCH to retrieve the results of the query.

New validation tools

Additional customer feedback involved the need to validate the steps of the migration process. You have many tables, fields, and even more rows of data. How do you know the DDFs are defined correctly and that the data was successfully migrated to the SQL backend?

DDF Validator

Building DDFs for your Btrieve database can be the largest undertaking of moving the application to SQL. This is listed as a prerequisite rather than a step in the migration process. Without DDFs, your data is just a conglomeration of meaningless bytes. Searching for headers and sifting through possibly thousands of field names, data types, sizes, flags, etc. can be exasperating and error-prone. Even when you think that you have fixed everything, a small quirk may remain that doesn’t appear unless you read every record and check each field.

This is exactly what the DDF Validation Tool does. It uses a set of tests to verify that the definitions that you created match the actual records in the file.

Even if your applications have contained DDFs for years, it is possible that a bad value may have been inserted into the tables over time, which can cause problems when you move the data to SQL. We recommend that you at least run the DDF Validator on a subset of the data rows to make the transition as smooth as possible.

Migration Validator

This tool validates the actual data migration. It uses the standard Btrieve API to read rows from Btrieve and from SQL simultaneously and then compares the data buffers returned. Any mismatch is displayed in the output.