Downgrading A SQL Server Database To An Older Version

2019-04-18

Dave Mason - Downgrade SQL Server Database

One of the recurring questions I see on Stack Overflow is "How do I restore a SQL Server backup to a previous version of SQL Server?" The answer, of course, is you don't. Upgrading a database to a newer (major) version is a one-way ticket--at least as far as the database files and subsequent backups go. I recently found myself in a position similar to all those hapless Stack Overflow questioners. I had a customer who had migrated to a newer version of SQL Server and wanted to roll back to the previous version. What to do?

A couple of thoughts immediately came to mind. There's the SQL Server Import and Export Wizard and the Generate and Publish Scripts Wizard. Neither of these sounded convenient. In particular, generating a script with both schema and 500 GB of data sounded like a fruitless endeavor. Two other options sounded much more appealing, so I focused on those.


The first was the Export Data-tier Application, which would create a BACPAC file from the newer version. In theory, I would be able to import that to a lower version of SQL Server. I started the wizard from SSMS, made two simple choices (name and path of the file, along with the list of tables I wanted) and clicked "Finish". Any optimism I had was short-lived. My database had cross-database references in numerous views and stored procedures, and the BACPAC export failed. Much to my surprise, I found no option to pick and choose which objects (other than tables) were to be included/excluded with the export. On to plan B.
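If you want to know ahead of time whether cross-database references will trip up an export, you can query `sys.sql_expression_dependencies`. Here's a rough sketch of the kind of check I mean (not the exact query I ran):

```sql
-- List objects that reference entities in other databases--the kind of
-- dependency that can cause a BACPAC export to fail.
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
    OBJECT_NAME(d.referencing_id)        AS referencing_object,
    d.referenced_database_name,
    d.referenced_schema_name,
    d.referenced_entity_name
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_database_name IS NOT NULL
  AND d.referenced_database_name <> DB_NAME();
```

If this returns rows, the Export Data-tier Application wizard will likely complain about those objects.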

Something Old, Something New

The option I settled on was to bulk load tables one at a time. I'd have to truncate tables in the old version database first. Truncating tables in a particular order due to foreign keys gave me pause. I didn't want to deal with that (go ahead and chuckle now--I had to revisit this later). The "older" version of SQL Server in my circumstance was SQL 2014. I enlisted the use of DBCC CLONEDATABASE to get an "empty" database as my starting point. This was merely the second time I'd ever used this new-ish feature. To connect to the "new" version of SQL Server that held the current source data, I opted for a linked server. Now all I had to do was figure out the steps to load the data. With a little trial and error I settled on this:

  1. Put the database into SIMPLE recovery mode
  2. Disable nonclustered indexes
  3. Disable FOREIGN KEY and CHECK constraints
  4. Disable triggers
  5. Iterate through the tables
    • Turn IDENTITY_INSERT on (if necessary)
    • Insert into the old version table from the new version table (excluding computed columns, if any)
    • Ensure rows are inserted in order of the cluster key column(s)
    • Turn IDENTITY_INSERT off (if necessary)
  6. Enable triggers
  7. Enable FOREIGN KEY and CHECK constraints
  8. Rebuild/enable nonclustered indexes
  9. Update non-index statistics?
  10. Revert recovery mode to FULL
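The steps above can be sketched in T-SQL. This is a rough sketch rather than the exact script I used: the linked server name (NEWSQL), database names, table, columns, and index name are all hypothetical placeholders, and DBCC CLONEDATABASE on SQL 2014 requires SP2 or later.

```sql
-- Hypothetical names throughout: NEWSQL (linked server to the new instance),
-- MyDB (source database), MyDB_Empty (clone), dbo.MyTable, IX_MyTable_Col1.

-- One-time setup on the old (SQL 2014) instance.
DBCC CLONEDATABASE (MyDB, MyDB_Empty);
-- Depending on version/build, the clone may be created READ_ONLY:
ALTER DATABASE MyDB_Empty SET READ_WRITE WITH ROLLBACK IMMEDIATE;

EXEC sp_addlinkedserver @server = N'NEWSQL', @srvproduct = N'SQL Server';

USE MyDB_Empty;

-- 1. SIMPLE recovery to keep log growth down during the load.
ALTER DATABASE MyDB_Empty SET RECOVERY SIMPLE;

-- 2. Disable nonclustered indexes (repeat per index).
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable DISABLE;

-- 3./4. Disable FOREIGN KEY and CHECK constraints, then triggers.
ALTER TABLE dbo.MyTable NOCHECK CONSTRAINT ALL;
DISABLE TRIGGER ALL ON dbo.MyTable;

-- 5. Load the table across the linked server, in cluster key order.
SET IDENTITY_INSERT dbo.MyTable ON;  -- only if the table has an identity column

INSERT INTO dbo.MyTable (Id, Col1, Col2)  -- explicit list; omit computed columns
SELECT Id, Col1, Col2
FROM NEWSQL.MyDB.dbo.MyTable
ORDER BY Id;                              -- cluster key column(s)

SET IDENTITY_INSERT dbo.MyTable OFF;

-- 6./7. Re-enable triggers and constraints (WITH CHECK revalidates the data).
ENABLE TRIGGER ALL ON dbo.MyTable;
ALTER TABLE dbo.MyTable WITH CHECK CHECK CONSTRAINT ALL;

-- 8. Rebuilding a disabled index also re-enables it.
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable REBUILD;

-- 9./10. Refresh statistics and return to FULL recovery.
UPDATE STATISTICS dbo.MyTable;
ALTER DATABASE MyDB_Empty SET RECOVERY FULL;
```

In practice you'd wrap the per-table steps in a cursor or loop over sys.tables, but the statements above cover each step in the list.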

This seemed to work fairly well for a smaller database of about 15 GB. In the end, all of this turned out to be an academic exercise. The customer ultimately decided to stay on the newer version of SQL Server, and I never did downgrade the large 500 GB database.

Hat Tips

When I first realized the challenge ahead of me, I sought out some advice from the community. Special thanks to Peter Schott, Constantine Kokkinos, Enrique Argüelles, and Patrick (aka "Paddyrick" on Slack) for all the healthy discussion and sharing.
