I have successfully used the following methodology, elaborated in Version Control and your Database:
- maintain a version number in metadata (I use a database extended property)
- code any schema change as a script that upgrades from the current version to the next version
- ship the application with all the scripts needed to upgrade from version 0 (initial deployment) all the way to the current version
- make every change through a script, including 'system' data changes like dictionary and lookup-table entries
- on deployment, the application checks the on-disk schema version, then runs all the upgrade steps needed to bring the schema to the current required version
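The deploy-time check-and-upgrade loop above can be sketched as follows. Everything here is illustrative (the `Database` class, the script names, the version constants are all made up for the example): in practice the version lives in a database extended property and each step is a T-SQL script shipped with the application.

```python
class Database:
    """Stand-in for a real database: tracks a schema version in metadata."""
    def __init__(self, version=0):
        self.version = version
        self.applied = []          # record of scripts run, for demonstration

    def run_script(self, script):
        self.applied.append(script)

# One upgrade step per version transition, keyed by the version it starts from.
UPGRADE_STEPS = {
    0: "upgrade_0_to_1.sql",       # initial deployment
    1: "upgrade_1_to_2.sql",
    2: "upgrade_2_to_3.sql",
}
CURRENT_VERSION = 3

def upgrade(db):
    """Run every step from the on-disk version up to the current version."""
    while db.version < CURRENT_VERSION:
        step = UPGRADE_STEPS[db.version]
        db.run_script(step)
        db.version += 1            # a real script would update the metadata itself

db = Database(version=1)           # an existing deployment at schema version 1
upgrade(db)
print(db.version)                  # -> 3
print(db.applied)                  # -> ['upgrade_1_to_2.sql', 'upgrade_2_to_3.sql']
```

Note that a fresh install is just the degenerate case: it starts at version 0 and runs every step, so there is no separate 'create database' path to keep in sync.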
I often hear the objection 'how is this different from just keeping the object definition scripts under source control?'. The difference is huge: when you deploy a new version of your app, you are not going to simply create a new database. Most of the time your app will have to upgrade an existing database, including the existing data. This is the crucial difference: your upgrade steps need to ensure the integrity and consistency of the existing data during the upgrade. Some operations are trivial in code (add a non-nullable column with a default value to the table's object definition script, done), but they are in fact hugely painful at actual deployment (the table has 1.5 billion rows, and the add column would run out of log space if done the 'simpleton' way).
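To make the large-table case concrete, here is a toy model (in Python, not T-SQL; the batch size and row count are invented) of what the upgrade script does instead: add the column as nullable, then backfill it in small batches so each transaction stays within log capacity, rather than rewriting every row in one giant transaction.

```python
BATCH_SIZE = 10_000

def backfill_in_batches(total_rows, batch_size=BATCH_SIZE):
    """Return the per-transaction batch sizes a batched backfill would use."""
    batches = []
    remaining = total_rows
    while remaining > 0:
        step = min(batch_size, remaining)
        # a real script would run something like:
        #   UPDATE TOP (step) t SET col = <default> WHERE col IS NULL;
        # committing between batches so the log can be truncated
        batches.append(step)
        remaining -= step
    return batches

batches = backfill_in_batches(25_000)
print(batches)        # -> [10000, 10000, 5000]
```

Only after the backfill completes would the script flip the column to NOT NULL. None of this structure is visible in the object definition script; it exists only in the upgrade step.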
How does this work with branching?
- when the branch is created, it snaps the current schema version, say version 1.6
- as the team starts working on the branch, it adds a new version, 1.7, and starts coding the upgrade step from 1.6 to 1.7
- the upgrade step gets changed as modifications are made in the branch. The branch always runs the script that upgrades from v1.6 to v1.7, but what exactly that script does is subject to the normal code iterations and check-ins in the branch
- when the branch finishes development, it prepares for the reverse integration (to be merged back into the baseline)
- it does a new forward integration from the baseline into the branch. If the integration does not bring any changes to the schema version, all is good and the branch can reverse integrate as-is; version 1.7 becomes the new baseline version
- the interesting case is when another branch has reverse integrated into the base in the meantime and the base schema version has changed to, say, 1.7. In that case our branch has to bump its deployment target schema version to 1.8 and review the upgrade step that formerly went from 1.6 to 1.7 to see how it operates in the new environment, upgrading from 1.7 to 1.8. Logical schema conflicts have to be resolved, the script may require changes, and testing has to be done. Once that is complete, the branch can reverse integrate into the base, and the deployed target version of the product becomes 1.8
- when yet another branch that forked at schema version 1.6 wants to reverse-integrate, it will have to bump its schema version to 1.9 and test the upgrade script from 1.8 to 1.9; then it can integrate back into the base
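The renumbering in the last two steps can be modeled as a tiny function. The versions and the +0.1 increment are hypothetical conventions for the example; the point is simply that a branch's upgrade step is re-targeted to start from whatever the baseline's head version has become by integration time.

```python
def retarget_step(branch_step, baseline_head):
    """Re-number a branch upgrade step (from_v, to_v) onto the current baseline."""
    from_v, to_v = branch_step
    if from_v == baseline_head:
        return branch_step                       # baseline unchanged: integrate as-is
    # another branch integrated first: the step now upgrades from the new head
    return (baseline_head, round(baseline_head + 0.1, 1))

# branch forked at 1.6 and coded an upgrade step 1.6 -> 1.7 ...
step = (1.6, 1.7)
# ... but meanwhile the baseline advanced, so the step must be re-targeted:
print(retarget_step(step, 1.6))   # -> (1.6, 1.7)  baseline unchanged
print(retarget_step(step, 1.7))   # -> (1.7, 1.8)  one branch integrated first
print(retarget_step(step, 1.8))   # -> (1.8, 1.9)  two branches integrated first
```

The renumbering itself is mechanical; the real work is the review and re-testing of the script body against the schema it now actually upgrades from.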
Notice that no tool is involved: no magic schema-diff scripting, no wizards, no right-click-generate-script. This is a 100% developer-driven process, based on source (scripts). Many find this whole process elaborate, but it works. In fact, as a SQL Server user, you have already leveraged the results of this process in your daily use of SQL Server: SQL Server itself uses a very similar database upgrade process, and, as you would expect, its development makes extensive use of branching, so the problem you mention is a very real one that has to be solved.
BTW, how the branching/integration actually occurs differs between source control products; I am using the terms familiar from the Perforce integrate mode of operation.