The best way to retrieve large amounts of data from a SQL Server (or SQL Azure) database

Tags: azure-sql-database, database, entity-framework-6, sql-server


I have a table with 40 million rows. I'm looking for the best technology for retrieving this data and serving it as JSON via an API.

The performance is critical!

I think the best way is to use ADO.NET and a SqlDataReader, but I also want automatic object mapping (like EF provides).

Could you tell me what the best way is to retrieve large amounts of data (2, 3, … 40 million rows)?

5/5/2015 4:36:00 PM

Accepted Answer

You're right that you aren't going to get any faster than raw ADO.NET if you're trying to get data from SQL Server into .NET objects.
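As a rough sketch of the raw ADO.NET approach, the following streams rows with a SqlDataReader and writes JSON as it goes, so the full result set is never held in memory. The table, column names, and connection string are placeholders, and `Utf8JsonWriter` assumes .NET Core 3.0+ or the System.Text.Json NuGet package:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Text.Json;

class StreamRowsAsJson
{
    // Streams every row straight from the reader into the output stream.
    static void WriteAllRows(string connectionString, Stream output)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand(
            "SELECT Id, Name, Value FROM dbo.BigTable", connection); // placeholder schema
        // SequentialAccess avoids buffering each whole row; columns must then
        // be read in ordinal order, as below.
        using var reader = command.ExecuteReader(CommandBehavior.SequentialAccess);

        using var json = new Utf8JsonWriter(output);
        json.WriteStartArray();
        while (reader.Read())
        {
            json.WriteStartObject();
            json.WriteNumber("id", reader.GetInt64(0));
            json.WriteString("name", reader.GetString(1));
            json.WriteNumber("value", reader.GetDecimal(2));
            json.WriteEndObject();
        }
        json.WriteEndArray();
        json.Flush();
    }
}
```

The point of the design is constant memory use: rows flow reader → writer → HTTP response, which matters far more at 40 million rows than any per-row mapping cost.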

For simple relation-to-object mapping without the price you pay for the larger, more fully featured ORMs like EF and NHibernate, I've found that Dapper works very well: it's faster than NHibernate in my use cases, very easy to use, and it gives you the mapping that you're looking for.
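A minimal Dapper sketch, assuming a placeholder `Item` class and table: `buffered: false` makes `Query<T>` stream results instead of loading all rows into a list first, which is the setting you'd want at this scale.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Item
{
    public long Id { get; set; }
    public string Name { get; set; }
    public decimal Value { get; set; }
}

class DapperExample
{
    // Returns a lazily-streamed sequence; the connection must stay open
    // while the caller enumerates it, so the caller owns disposal here.
    static IEnumerable<Item> ReadItems(string connectionString)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();
        // Dapper matches column names to property names automatically.
        return connection.Query<Item>(
            "SELECT Id, Name, Value FROM dbo.BigTable", // placeholder schema
            buffered: false);
    }
}
```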

There are benchmarks comparing various ORMs' fetch performance that can give you an idea of what to expect; one from 2013, not surprisingly, found raw ADO.NET the fastest (although 2013 is just ancient in internet time).

You could do your own similar comparison fairly quickly with your data and environment to get a sense of what might work for your situation.

Also, to approach it from another angle: how often does the data change? Does it need to come out of SQL Server every time? You may be able to load it into an in-memory cache where retrieval is faster than querying SQL Server.
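One way to sketch that caching idea, using `System.Runtime.Caching.MemoryCache` (.NET Framework; the key, loader delegate, and 10-minute expiration are assumptions to tune for how often the data actually changes):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

class CachedQuery
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached result set if present; otherwise loads it from
    // the database (via the supplied delegate) and caches it.
    static IReadOnlyList<T> GetOrLoad<T>(
        string key, Func<IReadOnlyList<T>> loadFromDatabase)
    {
        if (Cache.Get(key) is IReadOnlyList<T> cached)
            return cached;

        var data = loadFromDatabase();
        Cache.Set(key, data, new CacheItemPolicy
        {
            // Hit SQL Server at most once per 10 minutes per key.
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
        });
        return data;
    }
}
```

Note that caching 40 million rows whole may not fit in memory; in practice you'd cache the hot subset or the serialized JSON for the most common queries.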

5/6/2015 3:16:03 AM

Popular Answer

Also give linq2db a try. Like Dapper, linq2db is a very lightweight micro-ORM, but it also supports LINQ queries for filtering, projections, etc.
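A small linq2db sketch of that LINQ support, with placeholder class/table names and a configuration name assumed to be defined elsewhere; the filter and projection are translated to SQL, so only matching rows and columns cross the wire:

```csharp
using System.Linq;
using LinqToDB;
using LinqToDB.Data;
using LinqToDB.Mapping;

[Table(Name = "BigTable")] // placeholder table
public class Row
{
    [Column] public long Id { get; set; }
    [Column] public string Name { get; set; }
    [Column] public decimal Value { get; set; }
}

class Linq2DbExample
{
    static void Demo()
    {
        // "SqlServer" is an assumed connection-configuration name.
        using var db = new DataConnection("SqlServer");
        var names = db.GetTable<Row>()
                      .Where(r => r.Value > 100m)   // becomes a SQL WHERE
                      .Select(r => r.Name)          // becomes a column list
                      .ToList();
    }
}
```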

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow