SQL Server Replication with a table with more than 246 columns

January 14, 2019 by Ranga Babu

Problem

In our environment we use SQL Server merge replication to replicate data to multiple subscribers. We had a situation where a table needed to be part of merge replication but had more than 246 columns, while merge replication supports tables with a maximum of 246 columns. In this article, let us see how to replicate a table with more than 246 columns.

Solution

Merge replication is used to replicate data both ways, i.e. data is sent from the publisher to the subscribers and vice versa.

In our database we have a few tables that have more than 246 columns and need to be part of replication. However, merge replication does not allow tables with more than 246 columns. So, we created an intermediate table, stored all the column data in the form of XML, and replicated the intermediate table instead of the main table.

To explain this, I am creating a sample table “TEST” with more than 246 columns and an intermediate table “TEST_INT”.

Main table: TEST
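
The exact column list does not matter for the demo. A minimal sketch, which generates a 250-column table dynamically so the definition does not have to be typed out by hand (the column names Col1 through Col250 and their varchar(10) type are placeholders), could look like this:

-- Build a wide demo table: an ID key plus 250 placeholder columns.
DECLARE @cols nvarchar(max) = N'';

SELECT @cols = @cols + N', Col' + CAST(n AS nvarchar(10)) + N' varchar(10) NULL'
FROM (SELECT TOP (250) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS nums
ORDER BY n;

EXEC (N'CREATE TABLE dbo.TEST (ID int NOT NULL CONSTRAINT PK_TEST PRIMARY KEY' + @cols + N');');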

Intermediate table: TEST_INT

Create intermediate table “TEST_INT”.
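
A minimal sketch of the intermediate table, assuming the main table has a single-column integer key named ID and that the whole row is stored in one XML column named RowData (both names are assumptions used in the examples below):

-- Intermediate table: the key of the main table plus the full row as XML.
-- If the table has no uniqueidentifier ROWGUIDCOL column, merge replication
-- adds one automatically when the article is published.
CREATE TABLE dbo.TEST_INT
(
    ID      int NOT NULL CONSTRAINT PK_TEST_INT PRIMARY KEY,
    RowData xml NULL
);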

Triggers:

Now we need to create three triggers on the main table “TEST” which insert, update, and delete records in the intermediate table “TEST_INT”.

All these triggers on the main table “TEST” should fire only when a user performs DML operations, not when the replication agent does. So, we need to mark these three triggers as “NOT FOR REPLICATION”, or we can use SESSIONPROPERTY('replication_agent') to determine whether it is a user operation or a replication agent operation.
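
As a rough sketch, the insert trigger on the main table could look like the following; the update and delete triggers follow the same pattern, and both safeguards (NOT FOR REPLICATION and the SESSIONPROPERTY check) are shown for illustration:

CREATE TRIGGER dbo.trg_TEST_Insert
ON dbo.TEST
AFTER INSERT
NOT FOR REPLICATION
AS
BEGIN
    SET NOCOUNT ON;

    -- Do nothing when the merge agent is performing the DML.
    IF ISNULL(CONVERT(int, SESSIONPROPERTY('replication_agent')), 0) = 1
        RETURN;

    -- Serialize each inserted row to XML and store it in the intermediate table.
    INSERT INTO dbo.TEST_INT (ID, RowData)
    SELECT i.ID,
           (SELECT i2.* FROM inserted AS i2 WHERE i2.ID = i.ID FOR XML RAW('row'), TYPE)
    FROM inserted AS i;
END;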

We need to create three triggers on the intermediate table “TEST_INT” for insert, update, and delete, which insert, update, and delete data back into the main table “TEST”. These three triggers on the intermediate table should fire only when the replication agent performs the DML operation, not the actual user. So, we can include SESSIONPROPERTY('replication_agent') in the trigger code to check whether it is the replication agent or not.
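
A corresponding sketch of the insert trigger on the intermediate table, which shreds the XML back into the main table, is below; only two placeholder columns are shown, and the real trigger would list every column of “TEST”:

CREATE TRIGGER dbo.trg_TEST_INT_Insert
ON dbo.TEST_INT
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Act only when the merge agent performed the insert, not a user.
    IF ISNULL(CONVERT(int, SESSIONPROPERTY('replication_agent')), 0) <> 1
        RETURN;

    -- Shred the XML payload back into the wide main table.
    INSERT INTO dbo.TEST (ID, Col1, Col2 /* , ... remaining columns */)
    SELECT i.ID,
           i.RowData.value('(/row/@Col1)[1]', 'varchar(10)'),
           i.RowData.value('(/row/@Col2)[1]', 'varchar(10)')
    FROM inserted AS i;
END;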

We could use a single trigger to do the same task, but to keep it simple I am using three separate triggers.

Basically, the flow for INSERT, UPDATE, and DELETE will be as shown below.

INSERT:

  • The user inserts data into the main table, which fires the trigger to insert the same record into the intermediate table
  • The insert on the intermediate table is replicated to the subscribers by the merge agent
  • At the subscriber, the insert on the intermediate table fires the insert trigger to insert the data back into the main table

UPDATE:

  • The user updates data in the main table, which fires the trigger to update the same record in the intermediate table
  • The update on the intermediate table is replicated to the subscribers by the merge agent
  • At the subscriber, the update on the intermediate table fires the update trigger to update the data back in the main table

DELETE:

  • The user deletes data from the main table, which fires the trigger to delete the same record from the intermediate table
  • The delete on the intermediate table is replicated to the subscribers by the merge agent
  • At the subscriber, the delete on the intermediate table fires the delete trigger to delete the data from the main table

Refer to the below image to understand the flow.

We need to make sure the main table is created manually on the subscriber; otherwise, the triggers on the intermediate table “TEST_INT” will fail.

Now we need to replicate the intermediate table “TEST_INT” instead of the main table “TEST”.

Create Merge Replication:

Before creating merge replication, we need to make sure the main table is created on both the publisher and the subscriber, along with its triggers.

The intermediate table is created on the publisher with its triggers.

Log in to SQL Server from SQL Server Management Studio. Assuming distribution is already configured, navigate to the Replication folder -> Local Publications -> right-click and select New Publication. Please refer to the below image.

Click Next -> select the database where you created the main table and the intermediate table. Click Next and select Merge publication as the publication type.

Click Next and check SQL Server 2008 or later -> add TEST_INT and click on Article Properties -> Set Properties of All Table Articles -> make sure “Copy user triggers” is enabled (the scripted equivalent is shown after these wizard steps).

Click Next -> Next -> add a filter if you want, or skip it and click Next -> check “Create a snapshot immediately” and uncheck “Schedule the Snapshot Agent to run at the following times”.

Click Next and add the credentials you want to use for the Snapshot Agent and for connecting to the publisher -> Next -> Next -> enter the publication name and click Finish.
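
If you prefer to script the publication instead of using the wizard, the “Copy user triggers” article property corresponds to the 0x100 bit of @schema_option in sp_addmergearticle. The publication name and the full bitmask below are only illustrative; what matters is that the 0x100 bit is set:

-- Illustrative only: add TEST_INT as a merge article with an @schema_option
-- whose 0x100 bit is set so that user triggers are copied to the subscriber.
EXEC sp_addmergearticle
     @publication   = N'Pub_TEST_INT',      -- hypothetical publication name
     @article       = N'TEST_INT',
     @source_owner  = N'dbo',
     @source_object = N'TEST_INT',
     @schema_option = 0x000000010C034FD1;   -- includes 0x100 (copy user triggers)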

Right-click the publication you just created, launch Replication Monitor, and make sure the snapshot has completed.

Once the snapshot is completed, we need to add the subscriber to the publication.

Navigate to the publication you just created, right-click and select New Subscriptions -> Next -> Next -> select the type of subscription you want (pull or push) as per your need -> Next -> add the subscriber server and select the subscriber database. Add the logins used to run the Merge Agent and to connect to the publisher, distributor, and subscriber -> Next -> select the agent schedule as per your need (in my case I selected run continuously) -> Initialize immediately -> select the subscription type as per your need -> check “Create subscription(s)” and click Next -> Finish.

Once the initial snapshot is applied, the intermediate table and the triggers on it are created on the subscriber.

Data Loading:

After adding the intermediate table to replication, we need to perform the steps below if the main table “TEST” already has data.

We can skip the steps below if the main table “TEST” is empty and we insert data only after the intermediate table “TEST_INT” has been added to replication and the snapshot has been applied to the subscriber.

Now, as part of the initial data load into the intermediate table “TEST_INT”, we can update any one column of the main table with its same value so that the update trigger on the main table fires and the data is inserted into the intermediate table. Alternatively, you can use any other method to load the initial data into the intermediate table.
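
For example, assuming Col1 exists on the main table, a simple “touch” update that rewrites one column with its own value is enough to fire the update trigger for every row:

-- Touch every row so the update trigger writes it into TEST_INT.
-- For a large table, consider doing this in batches.
UPDATE dbo.TEST
SET Col1 = Col1;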

These updates populate the intermediate table and are replicated to the subscribers. At the subscribers, the data is inserted back into the main table.

Now we have the same data in the main table and the intermediate table on both the publisher and the subscribers.

Any incremental inserts, updates, and deletes on the main table will now be replicated to the subscribers via the intermediate table.

Notes:

  • Any change to the table structure, such as dropping a column from or adding a new column to the main table “TEST”, requires the triggers to be modified.
  • Schema changes to the main table “TEST” must be made manually on both the publisher and the subscribers
  • Trigger changes on the main table “TEST” must be made manually on both the publisher and the subscribers
  • Make sure the triggers on the main table fire only on user operations, not on replication agent operations
  • Make sure the triggers on the intermediate table fire only on replication agent operations, not on user operations

In this article we have discussed the following points.

  • How to replicate a table with more than 246 columns using intermediate table
  • Triggers which should fire on user operation
  • Triggers which should fire on replication agent operation
  • Use of SESSIONPROPERTY('replication_agent')

Next Steps

  • Use this solution if you need to replicate a table with more than 246 columns
  • Add more error handling to these triggers or to the solution to make it more robust
  • Validate your data regularly, or create an alert that compares the row counts of the main and intermediate tables (a simple check is sketched after this list)
  • You can optimize the trigger code and use your own logic for the insert/update/delete operations
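
A simple row-count comparison, assuming the table names used in this article, can be scheduled as a job step or wrapped in an alert:

-- Compare row counts of the main and intermediate tables; a mismatch
-- suggests a trigger or replication problem that needs investigation.
SELECT
    (SELECT COUNT(*) FROM dbo.TEST)     AS MainTableRows,
    (SELECT COUNT(*) FROM dbo.TEST_INT) AS IntermediateTableRows;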
