2

We are using Visual Studio 2013 with SSDT mainly to version T-SQL code: the SQL is developed on the dev server and then we use schema compare to transfer the scripts into Visual Studio (and check them into Git). Before deployment (which we currently also do with schema compare) we have to replace database and server references with SQLCMD variables (e.g. [$(database)]). If we then change the code on the dev server and compare again, these SQLCMD variables are lost. (I would expect schema compare to be smart enough to retain the SQLCMD variables, but I found no way to accomplish this.)
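To illustrate the replacement step, a cross-database reference has to be rewritten with SQLCMD variables before each deployment. A minimal sketch (the server, database, and table names here are made up):

-- Hard-coded reference as developed on the dev server:
SELECT *
FROM [DevServer].[StagingDb].[dbo].[Orders];

-- The same reference after replacing the names with SQLCMD variables;
-- the actual values are supplied per environment at deployment time:
SELECT *
FROM [$(StagingServer)].[$(StagingDb)].[dbo].[Orders];

It is this manual rewrite that schema compare undoes on the next comparison.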

The logical step is to develop the SQL in Visual Studio from the start. But so far it has been hard to convince anybody on the team to do that. One can write SQL and execute it in VS, no problem. One can also switch to SQLCMD mode and execute, all right. But when you create e.g. a view in VS, you must write a CREATE statement; this can be executed once, but executing the same CREATE statement again after altering the view yields an error because the object already exists.

So my question is whether anybody has some essential tips on how to do database development exclusively in Visual Studio. We were able to get the database references and all that straight, but not the development process.

John Saunders
  • 160,644
  • 26
  • 247
  • 397
TvdH
  • 1,088
  • 14
  • 29

4 Answers

5

I've been streamlining local database development and deployment using Visual Studio database projects for a few years now. Here are some tips.

In general...

Use local db instances: Each developer should have their own database instance installed locally. All scripts (tables, views, stored procs, etc.) should be developed in Visual Studio. Create a publish profile for deploying the project to the local db instance.

Use the Publish feature: Confusingly, Visual Studio provides both a Deploy and a Publish option, which ultimately do the same thing. I recommend using just Publish because it is more prominent in the UI and you can create profiles to configure the deployment process for various database instances.

Keep local dbs up to date: When a developer makes changes in the database project and checks them in to source control, the other developers should check out these changes and republish the project to their local databases.

Create vs. Alter statements

All of your statements should be Create statements. There is no need for Alter statements or existence checks. Everything should be scripted as if you are creating the database objects for the first time. When you deploy or publish, VS will know whether to issue Alter statements for existing objects.
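For example, a view lives in the project as a plain CREATE statement (the view and table names here are hypothetical):

-- ActiveCustomers.sql in the database project.
-- Always scripted as CREATE, never ALTER:
CREATE VIEW [dbo].[ActiveCustomers]
AS
SELECT CustomerId, Name
FROM [dbo].[Customers]
WHERE IsActive = 1;

When the project is published, SSDT compares this definition to the target database and emits CREATE VIEW or ALTER VIEW in the generated deployment script as appropriate.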

Data

Some ideas:

  • Script your data as a series of Insert statements. Include them in a post-deployment script in the database project. But this can be tedious and error-prone.

  • Keep a database backup that includes all of your test data. When setting up a development environment for the first time, create the database from the backup. After you make significant changes to the data, create a new backup and have your devs recreate their databases from the backup. In most cases it's ok if the backup is out of sync with the schema defined in the project -- simply republish the project (make sure to turn off the "Re-create database" setting so that only the differences are published and thus the data is not lost).

  • There may be 3rd party tools to do this, in which case they are worth looking into.

  • Create your own solution for deploying data. Mine involved the following and worked really nicely (but required a lot of time and effort!):

    1. All data stored in XML files - 1 file per table - whose structure resembled the table
    2. An executable to read the XML files and generate SQL merge (or insert/update) statements for each row of data and save them to a SQL script
    3. A pre-build event in the database project to run the executable and copy the resulting SQL script to a post-deployment script in the project
    4. Publish the project and the data will be pushed during post-deployment
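The statements generated in step 2 might look roughly like this (the table and column names are hypothetical); because the MERGE inserts or updates as needed, the post-deployment script is safe to run on every publish:

-- Post-deployment data script (generated or hand-written).
-- Idempotent: rows are inserted or updated as needed on each publish.
MERGE [dbo].[OrderStatus] AS target
USING (VALUES
    (1, N'Open'),
    (2, N'Shipped'),
    (3, N'Closed')
) AS source (StatusId, StatusName)
ON target.StatusId = source.StatusId
WHEN MATCHED THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName)
    VALUES (source.StatusId, source.StatusName);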

Test/Production Deployments

Publish feature: You can create publish profiles for your test and production environments. However, this approach will always run your pre- and post-deployment scripts, and you won't get the versatility that the other options provide.

dacpacs: Ed Elliott covered them in his answer. Advantages: no need for Visual Studio to deploy, they can be deployed via SQL Management Studio or the command line with sqlpackage.exe, they can be easier to work with than a T-SQL deployment script.

Schema Compare: Schema compare may be good if you can use Visual Studio for your deployments and you like to double check all of the changes being deployed. You can also selectively ignore changes which is useful when you aren't lucky enough to have a development environment that completely mirrors production.

Keith
  • 20,636
  • 11
  • 84
  • 125
  • Thank you for sharing your hard-earned (I mean it) insights. Our solution contains about 12 inter-referenced databases with about 2 TB of data, about 5% changed per day. Our dev machines are notebooks with 500 GB hard disks. If we want to develop locally, we need to carefully select test data, and since we are constantly connecting new sources, we would spend too much time preparing a suitable subset again; your XML approach would be far too much work for our solution. That is why I was hoping to get along with the dev server and objects developed in VS. – TvdH Mar 26 '15 at 22:29
  • After some hesitation I accept this answer, since it contains some really essential tips. In the meantime we managed to get along with VS and SQL projects, but there was no eureka moment. We just had to get used to it. What I found especially useful is to always have a schema compare open and to configure a toolbar with connect / change connection / SQLCMD mode / execute / cancel execution buttons; I can use that to test the marked SQL right in VS. – TvdH Oct 22 '15 at 14:30
1

An age-old challenge. We've tried to use the data projects as they were defined through the years, but ran into several problems, including the fact that it seemed that these projects changed with every release of Visual Studio.

Now, we use the data project only to integrate with TFS for work item management and source code control. To be able to build sprocs/views in Visual Studio, we write each script using the drop/create pattern. Our scripts also contain security (we made the mistake of using the default schema... if I could go back in time we'd segregate schemas and do schema-based role level security).

For table schema, we do schema compares to/from a versioned template database.

A typical stored proc looks like this:

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[sp_MyStoredProcedure]') AND type in (N'P', N'PC'))
    DROP PROCEDURE [dbo].[sp_MyStoredProcedure]
GO
CREATE PROCEDURE [dbo].[sp_MyStoredProcedure]
    @MyParameter int
AS
BEGIN
    -- Stored Procedure Guts
    select 1
END

Good luck... ultimately, it just has to work for your team.

Chris Steele
  • 1,343
  • 1
  • 9
  • 20
  • Good idea, but it raises an "SQL70001: This statement is not recognized in this context." error, therefore the developer would not only have to add the "IF EXISTS" manually but also comment it out once the editing is done. I was hoping there was a better way to develop scripts in VS. – TvdH Mar 26 '15 at 16:23
  • No, the idea is that the if exists/drop would stay in. I've edited my response to remove references to things you wouldn't have (like the UserRole1, etc). Try it again. – Chris Steele Mar 26 '15 at 20:45
  • I still receive error SQL70001. I can set the build action to "None" to avoid the error - but I guess this will also disable the error / reference checking for the whole file - which is undesired (refer to http://stackoverflow.com/questions/10884704/visual-studio-database-project-checking-if-sql-server-login-exists-before-creat). – TvdH Mar 27 '15 at 07:43
  • BTW: I could also just comment out the "CREATE ... AS" statement and test the sql. But - if I have SQLCMD variables in the script which we always have - SQLCMD mode needs to be activated to test directly against the server. But that deactivates intellisense... no happy crew on the VS ship. – TvdH Mar 27 '15 at 09:02
  • Ah, I see. I should have mentioned, we're not building via the project. We just execute the sql for the sproc that we're working on. We have set all of our procedures to 'None' in build action. Sounds like this won't work for you. – Chris Steele Mar 30 '15 at 19:53
1

We are currently on the way to move from SSMS to SSDT. I see that we are all facing the same problems, and it is very strange that there is no good tutorial on the net (at least I haven't found one yet).

First of all, about the variables: I think you need to update to the newest version of SSDT (2015.02) + DacFx. We are using it and we do not have any problems with variables. It also has some very good new features, such as an option not to drop objects on the target if they do not exist in the source.

However, we settled on using synonyms for all cross-database and linked-server objects. For example, we have a table AnotherDatabase.dbo.NewTable. We create the synonym [dbo].[syn_AnotherDatabase_dbo_NewTable] FOR [$(AnotherDatabase)].[dbo].[NewTable] and use it in the code instead of referencing the other database directly. The same goes for linked servers: CREATE SYNONYM [syn_LinkedDatabase_dbo_NewTable] FOR [$(LinkedServer)].[$(LinkedDatabase)].[dbo].[NewTable].
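Spelled out as full statements, the pattern looks like this (the names follow the example above):

-- Synonym for a table in another database on the same server:
CREATE SYNONYM [dbo].[syn_AnotherDatabase_dbo_NewTable]
    FOR [$(AnotherDatabase)].[dbo].[NewTable];

-- Synonym for a table on a linked server:
CREATE SYNONYM [dbo].[syn_LinkedDatabase_dbo_NewTable]
    FOR [$(LinkedServer)].[$(LinkedDatabase)].[dbo].[NewTable];

-- Application code references only the synonym, never the
-- three- or four-part name:
SELECT * FROM [dbo].[syn_AnotherDatabase_dbo_NewTable];

The SQLCMD variables then live only in the synonym definitions, so the rest of the code can be edited and executed without SQLCMD mode.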

Now about the development process. We set the debug target to our dev database in the project properties (later we are going to have separate databases for each developer). Then, when you are modifying stored procedures/views/functions/etc., you open the script, change CREATE to ALTER, and you can work the same way as you did in SSMS: you can modify the body, execute it, and execute queries in that window. When you finish, you change ALTER back to CREATE and save the file.

The problem here is with objects that do not support the ALTER statement. In that case you need to publish the code first. But in practice this happens infrequently, so I don't believe it is a big deal.

Dmitrij Kultasev
  • 5,447
  • 5
  • 44
  • 88
  • Yes, the synonyms make sense to develop SQL in VS without having to turn SQLCMD mode on and losing IntelliSense. Then one can temporarily change to ALTER (or just comment out CREATE to execute locally). On the downside, you lose deployment options when not using SQLCMD variables, since the deployment process uses them to alter database / server names. You would then have to switch your synonyms instead. – TvdH Mar 27 '15 at 10:26
  • One more comment about schema compare: yes, we are using the latest version and it works fine when comparing project (source) to database (destination); it will not find differences if the SQLCMD variables are set correctly. However, it is unable to get scripts from the server and keep the SQLCMD variables in the project intact (at least I would not know how to). – TvdH Mar 27 '15 at 10:42
  • Yeah. We do not have such problem as we are using synonyms. By the way, synonyms are great when you need to move database/databases to other places. – Dmitrij Kultasev Mar 27 '15 at 10:45
  • I've written up my experiences here - http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html - that might help you get started or at least follow along with what we did. I'd also check out Jamie Thomson's blog posts on SSDT, VS DB Projects, DataDude, and its other names. He's blogged extensively on the subject. – Peter Schott Mar 27 '15 at 13:14
0

SSDT is mature enough to use to create your scripts and deploy your changes, but you should move away from using schema compare for deployments and use sqlpackage.exe instead.

The process looks something like:

  • Write code in VS/SSDT.
  • Build the project, which results in a dacpac (either on your machine or a CI server).
  • Deploy the dacpac to the db instance, using variables if you need to, to bring the dbs up to date. Use sqlpackage.exe to deploy, or generate scripts which can be deployed manually.
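A sqlpackage.exe invocation for the last step might look roughly like this (the server, database, and variable names are placeholders); the /Variables: switch is how the SQLCMD variable values get supplied per environment:

rem Publish the dacpac directly to a target database:
sqlpackage.exe /Action:Publish ^
    /SourceFile:MyDatabase.dacpac ^
    /TargetServerName:TestServer ^
    /TargetDatabaseName:MyDatabase ^
    /Variables:AnotherDatabase=MyOtherDb

rem Or generate a T-SQL deployment script for manual review and deployment:
sqlpackage.exe /Action:Script ^
    /SourceFile:MyDatabase.dacpac ^
    /TargetServerName:TestServer ^
    /TargetDatabaseName:MyDatabase ^
    /OutputPath:Deploy.sql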

It should be pretty straight forward, but please ask if you are not sure on anything!

Ed

Ed Elliott
  • 6,666
  • 17
  • 32
  • You are mentioning a better deployment process, but in order to deploy you have to get the scripts inside VS first. If you work on the local dev instance, you usually have no full or current data to develop with. Developing locally also means that it becomes more effort to share objects with your team. I was hoping to be able to develop in VS with actual data but without the need to create or alter the objects in sql server but in VS instead. – TvdH Mar 26 '15 at 16:29
  • ok, I think I understand - when you develop in VS, running the project deploys it locally to the localdb instance of sql. I think you need to get your dev team onside and following the same process, having some people use SSDT and some SSMS doesn't go very well. If you can get everyone to use SSDT then it is just like working with code, you work on your localdb instance via VS then when you want the latest updates you do a pull or get latest then push that to your localdb instance (or vs does it for you) - maybe I don't understand! – Ed Elliott Mar 26 '15 at 16:48
  • You are right - mixing development in SSDT and SSMS does not go well. I want to move the team to SSDT, but I see no workflow that suits. Typically, devs will try out SQL on data as real as possible (they do that in SSMS today). Then they could move to SSDT (CREATE). Then they still need to test, fine-tune and troubleshoot in SSDT - but SSDT does not seem to support a workflow for that. You can connect to the server, highlight sections and execute, turn on SQLCMD mode (and disable error checking by that), but it's bulky to say the least. I may have to dig deeper into the localdb option. – TvdH Mar 27 '15 at 07:51