Breck Carter
Last modified: February 23, 1998
mail to: bcarter@bcarter.com
It is possible to make significant database design and other changes without resorting to DBXTRACT or SYNCHRONIZE SUBSCRIPTION: you can use PASSTHROUGH START and STOP commands instead. Section Five shows how this is done via DBREMOTE and DBREMOTW for some DDL and related commands: CREATE TABLE, CREATE PUBLICATION and SUBSCRIPTION, CREATE FUNCTION, START SUBSCRIPTION and INSERT.
Step 58 - Run the DDL on the consolidated database (required).
This command is run on the central server to execute the commands shown in Figure 58a:
ISQL -c "DBN=consol;UID=DBA;PWD=SQL" READ c:\test\consddl.sql
Figure 58a shows that CONSDDL.SQL contains three sets of commands:
Figure 58a - CONSDDL.SQL - DDL Commands For The Consolidated Database
(This file is contained in rep.zip.)
IF EXISTS ( SELECT *
              FROM sys.systable
             WHERE table_name = 'replication4'
               AND table_type = 'BASE' ) THEN
   DROP TABLE replication4;
END IF;

CREATE TABLE replication4 (
   unique_id4       INTEGER   DEFAULT AUTOINCREMENT NOT NULL,
   creator_id4      CHAR(100) DEFAULT CURRENT PUBLISHER NOT NULL,
   subscription_id4 CHAR(100) NOT NULL,
   numeric_data4    INTEGER   NOT NULL,
   string_data4     CHAR(100) NOT NULL,
   unique_id3       INTEGER   NOT NULL,
   creator_id3      CHAR(100) NOT NULL,
   PRIMARY KEY ( unique_id4, creator_id4 ),
   FOREIGN KEY fk_replication3_4 ( unique_id3, creator_id3 )
      REFERENCES replication3 ( unique_id3, creator_id3 ) );

IF EXISTS ( SELECT *
              FROM sys.syspublications
             WHERE publication_name = 'pub4' ) THEN
   DROP PUBLICATION pub4;
END IF;

CREATE PUBLICATION pub4 (
   TABLE replication4
   SUBSCRIBE BY subscription_id4 );

CREATE SUBSCRIPTION TO pub4 ( 'Sub1' ) FOR site0001;
CREATE SUBSCRIPTION TO pub4 ( 'Sub2' ) FOR site0002;

// ...START SUBSCRIPTION commands are required but must be run later,
// and INSERT statements must also wait to follow the START SUBSCRIPTIONS.

IF EXISTS ( SELECT *
              FROM sys.sysprocedure
             WHERE proc_name = 'f_capitalize' ) THEN
   DROP FUNCTION f_capitalize;
END IF;

CREATE FUNCTION f_capitalize ( as_word CHAR ( 255 ) )
   RETURNS CHAR ( 255 )
BEGIN
   RETURN UCase ( Left ( as_word, 1 ) )
        + LCase ( Substr ( as_word, 2 ) );
END;
Tip: Keep foreign key columns out of the primary key.
In replication4 the foreign key columns are not part of the primary key. This is a common practice when artificial or surrogate keys are used instead of intelligent or business-related columns because it keeps the number of primary key columns to a minimum in lower tables. It is also important in a replication setup because it simplifies key value generation in two situations:
The creator_id4 column guarantees primary key uniqueness between different databases and unique_id4 guarantees uniqueness within each database. Keeping these columns separate from creator_id3 and unique_id3 allows different rows in replication3 and replication4 to be inserted on different databases while still allowing parent-child and sibling relationships between these rows.
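As a minimal sketch of why the composite key stays unique (the key values shown are illustrative, not taken from the demonstration data):

```sql
-- Sketch: run on the consolidated database, where CURRENT PUBLISHER is
-- 'consol'. DEFAULT AUTOINCREMENT fills unique_id4 and DEFAULT CURRENT
-- PUBLISHER fills creator_id4, giving a primary key like ( 1, 'consol' ):
INSERT INTO replication4 ( subscription_id4, numeric_data4, string_data4,
                           unique_id3, creator_id3 )
   VALUES ( 'Sub1', 1, 'inserted on consol', 1, 'consol' );

-- Sketch: run on a remote database, where CURRENT PUBLISHER is 'site0001'.
-- Even if AUTOINCREMENT produces the same number, the primary key becomes
-- ( 1, 'site0001' ), so there is no collision with the row above, and both
-- rows may still refer to the same parent row in replication3:
INSERT INTO replication4 ( subscription_id4, numeric_data4, string_data4,
                           unique_id3, creator_id3 )
   VALUES ( 'Sub1', 1, 'inserted on site0001', 1, 'consol' );
```

The foreign key columns ( unique_id3, creator_id3 ) are free to point at a parent row created on any database, which is exactly what keeping them out of the primary key makes possible.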
Like the other tables replication4 gets its own subscription_id4 column to allow for flexible replication rules. In Figure 58a a single publication with a SUBSCRIBE BY clause is used to send rows in replication4 to different sites depending on the value in subscription_id4. In this example the publication and subscriptions for replication4 are almost identical to the ones used for replication3 (see Figure 2c), but they could be different.
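A quick sketch of how the SUBSCRIBE BY clause routes rows (the data values are made up for illustration):

```sql
-- With SUBSCRIBE BY subscription_id4 and the subscriptions in Figure 58a,
-- this row matches the 'Sub1' subscription and replicates only to site0001:
INSERT INTO replication4 ( subscription_id4, numeric_data4, string_data4,
                           unique_id3, creator_id3 )
   VALUES ( 'Sub1', 2, 'goes to site0001', 1, 'consol' );

-- ...while this row matches the 'Sub2' subscription and replicates only
-- to site0002:
INSERT INTO replication4 ( subscription_id4, numeric_data4, string_data4,
                           unique_id3, creator_id3 )
   VALUES ( 'Sub2', 2, 'goes to site0002', 1, 'consol' );
```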
The simple function f_capitalize() in Figure 58a demonstrates one of the most common uses of passthrough processing: to send new executable code to remote databases. Application logic changes more often than database design, and if you use stored procedures, functions and triggers to implement this logic then you need a way to publish those changes. DBXTRACT will do it but replacing the whole database to change one procedure is very inefficient. DBREMOTE won't do it as part of normal processing because execution logic is not part of the publication and subscription architecture. Passthrough processing, on the other hand, forces DBREMOTE to send your commands as-is to the remote databases.
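For example, a later one-function fix could be pushed to the remote sites with a script like this sketch (the revised function body is hypothetical, not part of the demonstration):

```sql
PASSTHROUGH ONLY FOR site0001, site0002;

-- Replace just this one function on the remote databases; nothing else
-- travels, and no publications or subscriptions are involved.
DROP FUNCTION f_capitalize;
CREATE FUNCTION f_capitalize ( as_word CHAR ( 255 ) )
   RETURNS CHAR ( 255 )
BEGIN
   -- hypothetical revision: handle single-character words explicitly
   IF Length ( as_word ) <= 1 THEN
      RETURN UCase ( as_word );
   END IF;
   RETURN UCase ( Left ( as_word, 1 ) )
        + LCase ( Substr ( as_word, 2 ) );
END;

PASSTHROUGH STOP;
```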
Step 59 - Make a backup: MAKECBAK 59 and MAKERBAK 59 (optional).
Step 60 - Run the "Passthrough" commands on the consolidated database (required).
This command is run on the central server to execute the commands shown in Figure 60a:
ISQL -c "DBN=consol;UID=DBA;PWD=SQL" READ c:\test\pass.sql
Figure 60a - PASS.SQL - Passthrough Script For The Consolidated Database
(This file is contained in rep.zip.)
PASSTHROUGH ONLY FOR site0001, site0002;

CREATE TABLE replication4 (
   unique_id4       INTEGER   DEFAULT AUTOINCREMENT NOT NULL,
   creator_id4      CHAR(100) DEFAULT CURRENT PUBLISHER NOT NULL,
   subscription_id4 CHAR(100) NOT NULL,
   numeric_data4    INTEGER   NOT NULL,
   string_data4     CHAR(100) NOT NULL,
   unique_id3       INTEGER   NOT NULL,
   creator_id3      CHAR(100) NOT NULL,
   PRIMARY KEY ( unique_id4, creator_id4 ),
   FOREIGN KEY fk_replication3_4 ( unique_id3, creator_id3 )
      REFERENCES replication3 ( unique_id3, creator_id3 ) );

CREATE PUBLICATION "DBA"."pub4" ( TABLE "DBA"."replication4" );

CREATE SUBSCRIPTION TO "DBA"."pub4" FOR "consol";
START SUBSCRIPTION TO "DBA"."pub4" FOR "consol";

CREATE FUNCTION f_capitalize ( as_word CHAR ( 255 ) )
   RETURNS CHAR ( 255 )
BEGIN
   RETURN UCase ( Left ( as_word, 1 ) )
        + LCase ( Substr ( as_word, 2 ) );
END;

PASSTHROUGH STOP;
The commands in Figure 60a differ from the ones in Figure 58a in several ways. They are bracketed by PASSTHROUGH ONLY FOR and PASSTHROUGH STOP, so they are not executed on the consolidated database but are simply recorded for delivery to the two remote databases. The IF EXISTS checks are gone because the objects do not yet exist on the remote databases. The publication pub4 is created without a SUBSCRIBE BY clause, and a single subscription FOR "consol" is created and started immediately, so that changes made to replication4 on each remote database will replicate back up to the consolidated database.
Because passthrough processing is dangerous as well as powerful it is very important to test the commands before sending them to real remote databases. Running the commands on the consolidated database will catch some errors, but the passthrough script should also be checked via DBREMOTE on a test remote database: PASSTHROUGH ONLY commands are not executed on the consolidated database, so mistakes in them will not show up until DBREMOTW tries to apply them at the remote end.
The extra step of testing the passthrough script is omitted from this example because this whole demonstration is a test. In real life, however, testing of passthrough scripts is critically important.
Step 61 - Make a backup: MAKECBAK 61 and MAKERBAK 61 (optional).
Step 62 - Delete the old DBREMOTE output files from the central server (suggested).
This command is run on the central server to delete the old DBREMOTE output files that have already been sent to the remote computer:
rem This command only shows one remote computer site0001
erase c:\test\site0001\*.*
Figure 62a shows that all the replication subdirectories on the central server are empty.
Figure 62a - Files On The Central Server Before The "Passthrough" DBREMOTE
Directory of c:\test\consol
...empty
Directory of c:\test\site0001
...empty
Step 63 - Run DBREMOTE on the consolidated database to gather the "Passthrough" data (required).
This command runs DBREMOTE on the central server to gather the "Passthrough Only" data for the remote computers:
c:\test\consremo.bat
Figure 63a shows that a new file has been added to the SITE0001 subdirectory.
Figure 63a - Files On The Central Server After The "Passthrough" DBREMOTE
Directory of c:\test\consol
...empty
Directory of c:\test\site0001
consol   d     948  97 11 16  11:12a
Step 64 - Make a backup: MAKECBAK 64 and MAKERBAK 64 (optional).
Step 65 - Start replication of the new table on the consolidated database (required).
This command is run on the central server to execute the commands shown in Figure 65a:
ISQL -c "DBN=consol;UID=DBA;PWD=SQL" READ c:\test\startrep.sql
Figure 65a - STARTREP.SQL - Start Replication Of The New Table On The Consolidated Database
(This file is contained in rep.zip.)
START SUBSCRIPTION TO pub4 ( 'Sub1' ) FOR site0001;
START SUBSCRIPTION TO pub4 ( 'Sub2' ) FOR site0002;

TRUNCATE TABLE replication4;

BEGIN
   DECLARE parent_id INTEGER;
   INSERT INTO replication3 (
      subscription_id3, numeric_data3, string_data3 )
      VALUES ( 'Sub1', 0, 'DDL initial test' );
   SET parent_id = @@identity;
   INSERT INTO replication4 (
      subscription_id4, numeric_data4, string_data4,
      unique_id3, creator_id3 )
      VALUES ( 'Sub1', 0, 'DDL initial test',
               parent_id, CURRENT PUBLISHER );
END;
The START SUBSCRIPTION and INSERT commands have been held until this point to let the first DBREMOTE (Step 63) have a chance to gather the CREATE TABLE statements.
The START SUBSCRIPTION commands are necessary because DBXTRACT is not being used and therefore subscriptions on the consolidated database are not automatically started. These commands have been held until this point to avoid the danger of data being replicated before the table is created on the remote database. This is the reason the commands have been split into Figures 60a and 65a instead of all run at once: two separate runs of DBREMOTE force the remote database updates into the correct order.
INSERT commands to load test data have also been held up because they must be run after the START SUBSCRIPTION commands on the consolidated database in order for the data to be replicated.
The two INSERT statements in Figure 65a create parent and child rows in replication3 and replication4. The primary key columns have been omitted from the VALUES list because they're set by DEFAULT AUTOINCREMENT and DEFAULT CURRENT PUBLISHER specifications.
Tip: Use @@identity to fill cascading AUTOINCREMENT foreign keys.
The global variable @@identity is used to determine the DEFAULT AUTOINCREMENT value that was used by the preceding INSERT to fill in the parent primary key column replication3.unique_id3. The same value is then used in the second INSERT to fill in the child foreign key column replication4.unique_id3.
The special CURRENT PUBLISHER constant is used to fill replication4.creator_id3 because it is known to be the same as the DEFAULT CURRENT PUBLISHER value that was used to fill replication3.creator_id3. That doesn't always have to be the case, of course, since parent/child rows in replication3 and replication4 could be inserted on different databases.
Different values would then be required in replication4.creator_id4 and replication4.creator_id3. It's not a bug, it's a feature: the freedom to insert rows on different databases and still have referential integrity while guaranteeing unique primary keys is very important in some business situations.
Step 66 - Make a backup: MAKECBAK 66 and MAKERBAK 66 (optional).
Step 67 - Run CHECKDDL.SQL on the consolidated database (optional).
Figure 67a shows a SQL script that will display "Everything is OK" in ISQL if the commands in Figures 58a (DDL) and 65a (start replication) all worked. This script will also be used later to check the remote database via ISQLW.
Figure 67a - CHECKDDL.SQL - Check That Everything Is OK
(This file is contained in rep.zip.)
IF EXISTS ( SELECT *
              FROM sys.systable
             WHERE table_name = 'replication4'
               AND table_type = 'BASE' )
AND EXISTS ( SELECT *
               FROM sys.syspublications
              WHERE publication_name = 'pub4' )
AND EXISTS ( SELECT *
               FROM sys.syssubscriptions
              WHERE publication_name = 'pub4' )
AND EXISTS ( SELECT *
               FROM replication3, replication4
              WHERE string_data3 = 'DDL initial test'
                AND replication3.unique_id3 = replication4.unique_id3 )
AND EXISTS ( SELECT *
               FROM sys.sysprocedure
              WHERE proc_name = 'f_capitalize' ) THEN
   SELECT 'Everything is OK' AS status;
ELSE
   SELECT 'Something went wrong' AS status;
END IF;
Step 68 - Run DBREMOTE on the consolidated database to gather the "Start Replication" data (required).
This command runs DBREMOTE on the central server to gather the START SUBSCRIPTION data for the remote computers:
c:\test\consremo.bat
Figure 68a shows that there are now two files in the SITE0001 subdirectory, one for the passthrough commands and the second one holding the new replication4 data.
Figure 68a - Files On The Central Server After The "Start Replication" DBREMOTE
Directory of c:\test\consol
...empty
Directory of c:\test\site0001
consol   d     948  97 11 16  11:12a
consol   e     431  97 11 16  11:19a
Step 69 - Make a backup: MAKECBAK 69 and MAKERBAK 69 (optional).
Step 70 - Send the "Passthrough" and "Start Replication" files to the remote computer (required).
This command is run on the central server to copy the "Passthrough" and "Start Replication" files to the first remote computer:
rem This command only shows one remote computer site0001
copy c:\test\site0001\consol.* f:\test\site0001
Step 71 - Delete the old DBREMOTW output files from the remote computer (suggested).
This command is run on the remote computer to delete the old DBREMOTW output files that have already been sent to the central server:
erase c:\test\consol\*.*
Figure 71a shows that the "Passthrough" and "Start Replication" files are ready to be processed and the output subdirectory is empty and waiting.
Figure 71a - Files On The Remote Computer Before The "Passthrough and Start Replication" DBREMOTW
DIRECTORY OF C:\TEST\CONSOL
...empty
DIRECTORY OF C:\TEST\SITE0001
CONSOL   D     948  11-16-97  11:12a
CONSOL   E     431  11-16-97  11:19a
Step 72 - Run DBREMOTW on the remote computer to apply the "Passthrough and Start Replication" data (required).
This command runs DBREMOTW on the first remote computer to apply the "Passthrough" and "Start Replication" data from the central server:
DBREMOTW -c "DBF=c:\test\remote.db;UID=DBA;PWD=SQL" -o c:\test\dbremotw.txt
After DBREMOTW runs, the SITE0001 subdirectory is once again empty, and Figure 72a shows that another new file has been created in the CONSOL subdirectory.
Figure 72a - Files On The Remote Computer After The "Passthrough and Start Replication" DBREMOTW
DIRECTORY OF C:\TEST\CONSOL
SITE0001 3      46  11-16-97  11:24a
DIRECTORY OF C:\TEST\SITE0001
...empty
The new SITE0001.3 file doesn't actually contain any data to be replicated back to the consolidated database because no recent changes have been made to the remote database (except for the passthrough commands, and those came from the consolidated database). Even if there are no changes, however, the consolidated database still needs confirmation that its last output messages have been received and applied, and that confirmation is what this file contains.
Figure 72b shows what can happen if the START SUBSCRIPTION and INSERT commands are run on the consolidated database before DBREMOTE is run to send the associated CREATE TABLE to the remote database via passthrough mode. When DBREMOTE is finally run, the CREATE TABLE and INSERT commands are both sent to the remote database in one file, but in the wrong order. The solution is to run DBREMOTE twice, as is done in this demonstration: once to gather the passthrough commands, and again later to gather the new rows after the subscriptions have been started. The point is to get the data into two separate files so DBREMOTW will process them in the right order.
Figure 72b - DBREMOTW Fails When Passthrough DDL And Replicated Data Are In The Same File
Step 73 - Make a backup: MAKECBAK 73 and MAKERBAK 73 (optional).
Step 74 - Run CHECKDDL.SQL on the remote database (optional).
Figure 74a shows the final "Everything is OK" message produced by running CHECKDDL.SQL (see Figure 67a) through ISQLW on the remote database. This confirms that the changes from Figure 60a (passthrough) and Figure 65a (start replication) have been applied to the remote database.
Figure 74a - "Passthrough and Start Replication" Processing Worked OK On The Remote Database
At this point, everything is indeed OK, and this demonstration is over.