Desktop application that can work offline when there is no connectivity with SQL Server

c# entity-framework entity-framework-6 sql-server wpf

Question

I am designing a WPF desktop application and using Entity Framework Code First to create and use a SQL Server database. My database will be hosted on one server machine and will be running 24/7.

I want to provide a feature where you can modify data offline (when you have no connectivity with the SQL Server database) and save it somehow. Then, whenever the application finds a connection to SQL Server, all changes can be pushed to the SQL Server database.

Is there any way to achieve this using Entity Framework? I want to emphasize that I am using Entity Framework. Is this type of functionality already implemented by EF, or do I have to do it manually, e.g. write the changes to the file system and then merge them into the database later?


Accepted Answer

You could figure out which specific exceptions are generated when the SQL Server connection is lost and wrap your calls in try-catch blocks. If the server is offline, then in your catch block, pass the entity to a method that serializes it to JSON and saves it to a special directory on the hard drive. On your next successful query, check that directory to see whether there are any saved entities that still need to be written to the database.
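A rough sketch of what that might look like with EF6 and Newtonsoft.Json (the AppDbContext, Order entity, and PendingWrites folder are placeholders for your own types and paths, not anything EF provides for you):

```csharp
// Sketch only: queue failed writes to disk as JSON and replay them later.
using System;
using System.Data.Entity;
using System.Data.SqlClient;
using System.IO;
using Newtonsoft.Json;

public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderRepository
{
    private const string PendingDir = @"C:\MyApp\PendingWrites";

    public void SaveOrder(Order order)
    {
        try
        {
            using (var db = new AppDbContext())
            {
                db.Orders.Add(order);
                db.SaveChanges();
            }
            FlushPendingWrites();   // the server is reachable, so replay anything queued earlier
        }
        catch (Exception ex) when (ex.GetBaseException() is SqlException)   // C# 6 exception filter
        {
            // EF usually wraps the SqlException (EntityException/DbUpdateException),
            // so inspect the innermost exception to react only to connectivity failures.
            Directory.CreateDirectory(PendingDir);
            var path = Path.Combine(PendingDir, "order_" + Guid.NewGuid().ToString("N") + ".json");
            File.WriteAllText(path, JsonConvert.SerializeObject(order));
        }
    }

    private void FlushPendingWrites()
    {
        if (!Directory.Exists(PendingDir)) return;

        foreach (var file in Directory.GetFiles(PendingDir, "*.json"))
        {
            var pending = JsonConvert.DeserializeObject<Order>(File.ReadAllText(file));
            using (var db = new AppDbContext())
            {
                db.Orders.Add(pending);
                db.SaveChanges();
            }
            File.Delete(file);
        }
    }
}
```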

Be specific with your catches - you don't want unrelated exceptions to trigger this code.

Some things to keep in mind - what if somebody else changed the data in the meantime? Are you intending to overwrite those changes? How did you get the data which needs to be saved in the first place if you are offline?


Popular Answer

While I have never tried this with SQL-based data, I have done it in the past with filesystem-based data, and it's a major can of worms.

First, you have to have some means of indicating which data needs to be stored locally so that it will be available when you're offline. That local copy will need to be updated either continuously or before you head out, and that can involve a lot of data transfer.
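For example, a pre-fetch step run before the user heads out might look roughly like this (the context and entity names are made up for the sketch):

```csharp
// Sketch: copy the rows a remote user will need into a local JSON cache before
// they go offline. AppDbContext, Document and ProjectId are placeholder names.
using System.Data.Entity;
using System.IO;
using System.Linq;
using Newtonsoft.Json;

public static class OfflineCache
{
    public static void Download(int projectId, string cacheDir)
    {
        Directory.CreateDirectory(cacheDir);
        using (var db = new AppDbContext())
        {
            // only pull the data this user's project actually needs
            var docs = db.Documents
                         .AsNoTracking()
                         .Where(d => d.ProjectId == projectId)
                         .ToList();
            File.WriteAllText(Path.Combine(cacheDir, "documents.json"),
                              JsonConvert.SerializeObject(docs));
        }
    }
}
```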

Second, once you're back online there is a lot of conflict resolution to be done. If there's a realistic chance that someone else changed the data while you were out, you need some way of detecting the conflict and prompting the user about what to do in that situation. This almost certainly requires a system that keeps a detailed edit trail for every unit of data that could reasonably be updated.
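If you stay inside Entity Framework, EF6 can at least detect this kind of conflict for you once you add a concurrency token; a minimal sketch (the Document entity here is invented for illustration):

```csharp
// Sketch: a rowversion column makes EF6 throw DbUpdateConcurrencyException when
// the row changed on the server after it was read - which is the point where you
// would prompt the user.
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class Document
{
    public int Id { get; set; }
    public string Body { get; set; }

    [Timestamp]                       // maps to a SQL Server rowversion column
    public byte[] RowVersion { get; set; }
}

public static class WriteBack
{
    public static void Save(DbContext db)
    {
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateConcurrencyException)
        {
            // somebody else changed the row while the user was offline;
            // surface the conflict to the user instead of silently overwriting
            throw;
        }
    }
}
```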

In my situation I was very fortunate: it was virtually certain that if the remote user edited file [x], overwriting the system copy was the right thing to do. Since remote users would only be carrying the files that pertained to their own projects, conflicts should never happen. Thus the write-back was based purely on timestamps, nothing more. Data that people in the field would not normally need to modify was handled by not even looking at it; files modified on the system were simply copied from the system to the laptop.

This leaves the middle step--saving the pending writes. I disagree with Elemental Pete's answer in this regard--simply serializing them and saving the result does not work, because what happens when you read that data back in again? You see the old copy, not the changed copy!

My approach to this was a local store of all the relevant data, accessed exactly the way the main system data was: all reads and writes worked normally.
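One way to get that effect with Entity Framework is to keep the same model in a local database (LocalDB, SQL Server Compact, SQLite, etc.) and run every read and write against it, with a separate step that pushes pending changes back to the central server. A rough sketch, where the connection string names and the IsDirty flag are assumptions rather than anything EF gives you:

```csharp
// Sketch: the application always works against a local copy of the data, so reads
// and writes behave normally whether or not the server is reachable. A separate
// sync step copies locally modified rows back to the central SQL Server.
using System.Data.Entity;
using System.Linq;

public class Document
{
    public int Id { get; set; }
    public string Body { get; set; }
    public bool IsDirty { get; set; }          // hypothetical flag marking pending local edits
}

public class LocalDbContext : DbContext
{
    public LocalDbContext() : base("name=LocalStore") { }        // e.g. LocalDB / SQL Compact
    public DbSet<Document> Documents { get; set; }
}

public class CentralDbContext : DbContext
{
    public CentralDbContext() : base("name=CentralSqlServer") { }
    public DbSet<Document> Documents { get; set; }
}

public static class Synchronizer
{
    // Push locally edited rows to the server; as described above, the local copy
    // simply overwrites the central copy (timestamp-style "last write wins").
    public static void Push()
    {
        using (var local = new LocalDbContext())
        using (var central = new CentralDbContext())
        {
            foreach (var doc in local.Documents.Where(d => d.IsDirty).ToList())
            {
                var target = central.Documents.Find(doc.Id);
                if (target != null)
                {
                    central.Entry(target).CurrentValues.SetValues(doc);
                    target.IsDirty = false;    // don't carry the local flag to the server
                }
                doc.IsDirty = false;
            }
            central.SaveChanges();
            local.SaveChanges();
        }
    }
}
```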

Something a lot fancier might be needed if you have data that has to be covered by transactions.

Note that we also hit a nasty human problem: the update process took several minutes (note: >10y ago) simply analyzing what needed to be done, not counting any actual copy time. The result was people bypassing it when they thought they could. Sometimes they thought wrong, oops!




Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow