I built a fairly basic web application using Entity Framework code-first. After many hours of difficulty I realised that LocalDB, despite being SQL Server Express, is not intended to be used for production via the 'publish' command (the publish wizard). My stack: IIS 7, EF 6.1.3, SQL Server 2014, Visual Studio 2013.
If I've got this right, all you're actually doing when you choose to publish using LocalDB is copying your LocalDB database to your IIS 7 server. I couldn't figure out why my production server's database wasn't updated when I changed my LocalDB database using Visual Studio migrations. I now see (and think) that the data is not the same because there are two separate LocalDB instances involved.
Okay, fine: I'm done with LocalDB and have created a real SQL Server 2014 database on my machine. After hours of googling I still have no idea what to do. I have a few questions:
How do I manage this new database with EF? Say I want to add a new column: I add it to my local database, test it, run Add-Migration blah, then Update-Database. Do I then have to generate a SQL script in Visual Studio and run it manually against the production DB server?
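(For context on that last step: EF6 can generate the SQL script for you instead of applying changes directly. A sketch of the Package Manager Console commands, where the migration names are hypothetical placeholders:)

```powershell
# Run in the Visual Studio Package Manager Console.
# Generate a SQL script for all pending migrations instead of applying them:
Update-Database -Script -SourceMigration $InitialDatabase

# Or script only the changes between two known migrations
# (AddCustomerTable / AddOrderColumn are made-up names for illustration):
Update-Database -Script -SourceMigration AddCustomerTable -TargetMigration AddOrderColumn
```

The generated script can then be reviewed and handed to whoever runs it against production.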
I'm aware this isn't ideal, but can I point Entity Framework at a real staging SQL Server database (a test one) and bypass all the LocalDB nonsense? Could I then manage it and keep everything in sync using code-first migrations?
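(As far as I can tell this is just a connection-string change. A minimal Web.config sketch, assuming the context resolves its connection string by name; the server name, database name, and credentials are placeholders:)

```xml
<connectionStrings>
  <!-- Development: LocalDB -->
  <add name="PricedNotesContext"
       connectionString="Data Source=(LocalDb)\MSSQLLocalDB;Initial Catalog=PricedNotes;Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <!-- Staging: a real SQL Server. Only one entry per name may be active;
       typically this variant lives in a Web.Release.config transform instead.
  <add name="PricedNotesContext"
       connectionString="Data Source=STAGING-SQL01;Initial Catalog=PricedNotes;User Id=app_user;Password=..."
       providerName="System.Data.SqlClient" />
  -->
</connectionStrings>
```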
Should I use an initializer? I've seen contradictory recommendations for and against using one against a production database, like this one:

public class PricedNotesInitializer : MigrateDatabaseToLatestVersion<...>
It irritates me that the only way I can learn about these things is via other people's blogs. Why the documentation is so poor is beyond me.
I'm sorry for the rant and appreciate your assistance.
Use the MigrateDatabaseToLatestVersion initializer so that your application automatically applies schema changes to the database when it starts after a deploy, where this is feasible without causing data loss.
If a change would lose data, you will get an error and must take the necessary steps; it is possible to override this and continue even if data is lost.
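Concretely, a minimal sketch of such an initializer and its registration. The class name follows the question; PricedNotesContext and the Migrations namespace are assumptions about the project layout:

```csharp
using System.Data.Entity;
using PricedNotes.Migrations; // hypothetical namespace holding the migrations Configuration class

// Initializer that migrates the database to the latest migration on first use of the context.
public class PricedNotesInitializer
    : MigrateDatabaseToLatestVersion<PricedNotesContext, Configuration>
{
}

public static class Startup
{
    // Call once at application startup (e.g. from Global.asax Application_Start).
    public static void ConfigureDatabase()
    {
        Database.SetInitializer(new PricedNotesInitializer());
    }
}
```

The data-loss override mentioned above lives in the migrations Configuration class (`AutomaticMigrationDataLossAllowed = true`); leave it off unless you are sure you want destructive changes applied automatically.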
LocalDB is fine for development; in fact, I would even recommend it, because any new developer can have the database automatically created, initialised, and seeded with test data simply by building and running the application from source. As other answers have already mentioned, you need one connection string per environment.
The "set up a new development environment" experience should be as seamless as possible: nothing beyond loading the solution, building, and running the code base.
Your deploy and release pipeline should be automated and should contain no manual steps that could be mistyped, overlooked, or, worse, performed incorrectly.
If you want to be cautious or must meet certain compliance requirements, a simple yes/no approval of the staged changes can replace any manual step.
Ideally, automated tests in your staging environment would gate this and, on success, automatically promote the changes to production.
If you're running Update-Database interactively against production, that implies the machine it runs on has direct access to the production database, which is an unlikely and not-recommended situation. If you want greater control you can write the migration step yourself instead; the initializer appears best suited to smaller projects (which this one seemed to be).