Why is there just one DbContext per web request?

asp.net c# dbcontext dependency-injection entity-framework

Question

There are a number of publications I have been reading that explain how to set up Entity Framework's DbContext with various DI frameworks so that only one is created and used per HTTP web request.

Why is this a good idea in the first place? What advantages does this approach give you? Are there situations where this is a wise move? Are there things you can do with this technique that you can't do when instantiating a new DbContext on each call to a repository method?

1
391
3/1/2019 11:22:45 AM

Accepted Answer

NOTE: This answer talks about Entity Framework's DbContext, but it is applicable to any sort of Unit of Work implementation, such as LINQ to SQL's DataContext and NHibernate's ISession.

First, let's agree with Ian: having a single DbContext for the whole application is a Bad Idea. It only makes sense when you have a single-threaded application and a database that is used solely by that single application instance. The DbContext is not thread-safe, and since the DbContext caches data, it gets stale quickly. This will get you into all sorts of trouble when multiple users or applications work on that database simultaneously (which is very common, of course). But I expect you already know that, and are just wondering why you can't simply inject a new instance (i.e., with a transient lifestyle) of the DbContext into whoever needs it. (For more details on why a single DbContext, or even one context per thread, is bad, read this answer.)

Let me start by saying that registering a DbContext as transient could work, but typically you want to have a single instance of such a unit of work within a certain scope. In a web application, it can be practical to define such a scope on the boundaries of a web request; this is known as a Per Web Request lifestyle. It allows a whole set of objects to operate within the same context. In other words, they participate in the same business transaction.
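As an illustration of such a per-web-request scope, here is a minimal registration sketch assuming ASP.NET Core with EF Core, where the Scoped lifetime maps to one instance per web request; the BlogDbContext and service names are assumptions:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // AddDbContext registers the context with a Scoped lifetime by
    // default: one BlogDbContext instance per HTTP request.
    services.AddDbContext<BlogDbContext>(options =>
        options.UseSqlServer(connectionString));

    // Both services resolved within a single request receive the
    // same BlogDbContext instance, so they share one unit of work.
    services.AddScoped<IOrderService, OrderService>();
    services.AddScoped<IInvoiceService, InvoiceService>();
}
```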

The transient lifestyle is fine if you don't need a set of operations to run within the same context, but there are a few things to watch out for:

  • Since every object gets its own instance, every class that changes the state of the system needs to call _context.SaveChanges() (otherwise changes would get lost). This can complicate your code, and it adds a second responsibility (the responsibility of controlling the context) to the class, which is a violation of the Single Responsibility Principle.
  • You need to make sure that entities [loaded and saved by a DbContext] never leave the scope of such a class, because they can't be used in the context instance of another class. This can complicate your code enormously, because you must reload those entities by id whenever you need them, which may also hurt performance.
  • Since DbContext implements IDisposable, you should Dispose all created instances. If you want to do this, you basically have two options. You can dispose them in the same method right after calling context.SaveChanges(), but in that case the business logic takes ownership of an object it got passed from the outside. The second option is to dispose all created instances on the boundary of the HTTP request, but in that case you still need some sort of scoping to let the container know when those instances need to be disposed.
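The first bullet can be sketched as follows; this is illustrative only, and the class, entity, and member names are assumptions:

```csharp
// With a transient lifestyle, each class receives its own DbContext,
// so each class must call SaveChanges itself.
public class ShipOrderService
{
    private readonly MyDbContext context; // a fresh, private instance

    public ShipOrderService(MyDbContext context) => this.context = context;

    public void Ship(int orderId)
    {
        var order = this.context.Orders.Find(orderId);
        order.Shipped = true;

        // Must save here: no other class shares this context instance.
        // Persistence plumbing is now mixed into the business logic.
        this.context.SaveChanges();
    }
}
```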

Another option is to not inject a DbContext at all. Instead, you inject a DbContextFactory that is able to create new instances (I used this approach in the past). This way the business logic controls the context explicitly. It could look like this:

public void SomeOperation()
{
    using (var context = this.contextFactory.CreateNew())
    {
        var entities = this.otherDependency.Operate(
            context, "some value");

        // Attach the new entities to the context and persist them.
        context.Entities.AddRange(entities);

        context.SaveChanges();
    }
}
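The contextFactory used in that snippet could be a small abstraction like the following sketch; the interface name and members are assumptions:

```csharp
public interface IDbContextFactory
{
    MyDbContext CreateNew();
}

public class DbContextFactory : IDbContextFactory
{
    private readonly string connectionString;

    public DbContextFactory(string connectionString) =>
        this.connectionString = connectionString;

    // The caller owns the returned instance and is responsible for
    // disposing it (typically with a using block, as shown earlier).
    public MyDbContext CreateNew() => new MyDbContext(this.connectionString);
}
```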

The advantage of this is that you manage the lifetime of the DbContext explicitly, and it is easy to set up. It also allows you to use a single context within a certain scope, which has clear benefits: for example, you can run code in a single business transaction and pass around entities, since they originate from the same DbContext.

The downside is that you will have to pass the DbContext around from method to method (which is termed Method Injection). Note that in a sense this solution is similar to the "scoped" approach, but now the scope is controlled in the application code itself (and is possibly repeated many times). The application is responsible for both creating and disposing the unit of work. Because the DbContext is created after the dependency graph is constructed, Constructor Injection is out of the picture, and you need to defer to Method Injection when passing the context from one class to the other.

Method Injection isn't that bad, but when the business logic gets more complex and more classes get involved, you will have to pass the context from method to method and class to class, which can complicate the code a lot (I've seen this in the past). For a simple application, this approach will do just fine, though.
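A sketch of what that passing-around looks like; the service, helper, and entity names are hypothetical:

```csharp
// Every private helper needs the context as an explicit parameter,
// and deeper call chains must keep forwarding it.
public class OrderService
{
    private readonly IDbContextFactory contextFactory;

    public OrderService(IDbContextFactory contextFactory) =>
        this.contextFactory = contextFactory;

    public void PlaceOrder(Order order)
    {
        using (var context = this.contextFactory.CreateNew())
        {
            Validate(context, order);    // context passed explicitly...
            UpdateStock(context, order); // ...and again, and so on
            context.SaveChanges();
        }
    }

    private void Validate(MyDbContext context, Order order) { /* ... */ }
    private void UpdateStock(MyDbContext context, Order order) { /* ... */ }
}
```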

Because of the downsides this factory approach has for bigger systems, another approach can be useful: the one where you let the container or the infrastructure code / Composition Root manage the unit of work. This is the style your question is about.

By letting the container and/or the infrastructure handle this, your application code is not polluted by having to create, (optionally) commit, and dispose a UoW instance, which keeps the business logic simple and clean (just a Single Responsibility). There are difficulties with this approach, however. For instance, where do you Commit and Dispose the instance?

Disposing a unit of work can be done at the end of the web request. Many people, however, assume that this is also the place to Commit the unit of work. But at that point in the application you simply can't determine for sure whether the unit of work should actually be committed. For example, if the business layer code threw an exception that was caught higher up the call stack, you definitely don't want to Commit.

The real solution is, again, to manage some sort of scope explicitly, but this time inside the Composition Root. By abstracting all business logic behind the command/handler pattern, you will be able to write a decorator that can be wrapped around each command handler to achieve this. Example:

class TransactionalCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly DbContext context;
    private readonly ICommandHandler<TCommand> decorated;

    public TransactionalCommandHandlerDecorator(
        DbContext context,
        ICommandHandler<TCommand> decorated)
    {
        this.context = context;
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        this.decorated.Handle(command);

        context.SaveChanges();
    }
}

Thanks to this, you need to write this infrastructure code only once. Any solid DI container allows you to configure such a decorator to be wrapped around all ICommandHandler<T> implementations in a consistent manner.
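Such a configuration might look like the following sketch; the API shown is Simple Injector's, but most mature containers offer an equivalent, and the assembly reference is an assumption:

```csharp
var container = new Container();

// The DbContext lives as long as the scope (e.g., the web request).
container.Register<DbContext, MyDbContext>(Lifestyle.Scoped);

// Batch-register all command handler implementations in the assembly...
container.Register(typeof(ICommandHandler<>),
    new[] { typeof(ICommandHandler<>).Assembly });

// ...and wrap every one of them in the transactional decorator, so
// SaveChanges is called once per handled command.
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(TransactionalCommandHandlerDecorator<>));
```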

559
10/27/2015 9:25:34 PM

Popular Answer

None of the answers here actually addresses the question. The OP did not ask about a singleton/per-application DbContext design; he asked about a per-(web)request design and what potential benefits could exist.

I'll reference http://mehdi.me/ambient-dbcontext-in-ef6/ since Mehdi is a fantastic resource:

Possible performance gains.

Each DbContext instance maintains a first-level cache of all the entities it loads from the database. Whenever you query an entity by its primary key, the DbContext will first attempt to retrieve it from its first-level cache before defaulting to querying it from the database. Depending on your data query pattern, re-using the same DbContext across multiple sequential business transactions may result in fewer database queries being made, thanks to the DbContext's first-level cache.
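A sketch of that first-level cache in action; the context and entity names are assumptions:

```csharp
using (var context = new BlogDbContext())
{
    // First lookup by primary key: queries the database and tracks
    // the resulting entity in the context's first-level cache.
    var a = context.Customers.Find(1);

    // Second lookup with the same key: served from the first-level
    // cache, so no second database query is issued.
    var b = context.Customers.Find(1);

    // Both variables reference the same tracked instance.
}
```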

It enables lazy-loading.

If your services return persistent entities (as opposed to returning view models or other sorts of DTOs) and you'd like to take advantage of lazy-loading on those entities, the lifetime of the DbContext instance from which those entities were retrieved must extend beyond the scope of the business transaction. If the service method disposed the DbContext instance it used before returning, any attempt to lazy-load properties on the returned entities would fail (whether or not using lazy-loading is a good idea is a different debate altogether which we won't get into here). In our web application example, lazy-loading would typically be used in controller action methods on entities returned by a separate service layer. In that case, the DbContext instance that was used by the service method to load these entities would need to remain alive for the duration of the web request (or at the very least until the action method has completed).
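A sketch of the failure mode described above, assuming EF6 with lazy-loading proxies; the names are hypothetical:

```csharp
public Order GetOrder(int id)
{
    using (var context = new ShopDbContext())
    {
        return context.Orders.Find(id);
    } // the context is disposed here
}

// Later, e.g., in a controller action:
var order = GetOrder(42);

// Throws ObjectDisposedException: the lazy-loading proxy behind the
// navigation property can no longer query the disposed context.
var lines = order.OrderLines;
```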

Keep in mind that there are downsides as well. That link contains several further reading materials on the subject.

Just posting this in case someone else comes across this question and doesn't get absorbed in answers that do not actually address it.




Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow