This repository was archived by the owner on Nov 17, 2018. It is now read-only.

Getting an endless HttpMessageHandler cleanup cycle #165

Closed
Eilon opened this issue Aug 28, 2018 · 29 comments

@Eilon
Member

Eilon commented Aug 28, 2018

From @masterjs on Tuesday, 28 August 2018 14:21:50

Hello,
I'm using the HTTP client factory with ASP.NET Core 2.1, but now, after my second HTTP request, I'm starting to see some logs that have me worried.

HttpMessageHandler expired after 120000ms for client 'default'
Starting HttpMessageHandler cleanup cycle with 1 items
BDC.ClientPortal.ApiManager> [10:11:49 DBG] Ending HttpMessageHandler cleanup cycle after 0.0342ms - processed: 0 items - remaining: 1 items
BDC.ClientPortal.ApiManager> [10:11:59 DBG] Starting HttpMessageHandler cleanup cycle with 1 items
BDC.ClientPortal.ApiManager> [10:11:59 DBG] Ending HttpMessageHandler cleanup cycle after 0.0149ms - processed: 0 items - remaining: 1 items
BDC.ClientPortal.ApiManager> [10:12:09 DBG] Starting HttpMessageHandler cleanup cycle with 1 items
BDC.ClientPortal.ApiManager> [10:12:09 DBG] Ending HttpMessageHandler cleanup cycle after 0.0025ms - processed: 0 items - remaining: 1 items
BDC.ClientPortal.ApiManager> [10:12:19 DBG] Starting HttpMessageHandler cleanup cycle with 1 items
BDC.ClientPortal.ApiManager> [10:12:19 DBG] Ending HttpMessageHandler cleanup cycle after 0.0017ms - processed: 0 items - remaining: 1 items ...

These logs are endless. Is that normal? (doesn't look normal)

thank you,
JS

***** dotnet --info *****
.NET Core SDK (reflecting any global.json):
Version: 2.1.400
Commit: 8642e60a0f

Runtime Environment:
OS Name: Windows
OS Version: 10.0.14393
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\2.1.400\

Host (useful for support):
Version: 2.1.2
Commit: 811c3ce6c0

.NET Core SDKs installed:
1.1.0 [C:\Program Files\dotnet\sdk]
1.1.4 [C:\Program Files\dotnet\sdk]
...
2.1.202 [C:\Program Files\dotnet\sdk]
2.1.400 [C:\Program Files\dotnet\sdk]

.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 1.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.0.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
....
Microsoft.NETCore.App 2.0.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]

Copied from original issue: dotnet/aspnetcore#3470

@Eilon
Member Author

Eilon commented Aug 28, 2018

From @rynowak on Tuesday, 28 August 2018 18:00:02

This means that there's a client/message handler that's been expired, but someone is still holding a reference to it. You can analyze what's keeping it alive by capturing a memory dump.

@mixandmatch025

I'm experiencing the same issue, in the memory dump I don't see any HttpClient kept alive. Any other pointers on where to look?

@mixandmatch025

After further investigation, the counts of the following objects increase on each call to my web service:

  • HttpClientHandler
  • SocketsHttpHandler
  • HttpConnectionSettings
  • DiagnosticsHandler
  • LoggingScopeHttpMessageHandler
  • ExpiredHandlerTrackingEntry

I am using custom ASP.NET Core middleware that, on each call, uses IServiceProvider to create an instance of a request handler, which in turn uses IHttpClientFactory to create an HttpClient.
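
Roughly, the setup described above has this shape (a simplified sketch with invented names, just to illustrate the pattern, not the actual code):

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.DependencyInjection;

    // Simplified sketch of the shape described above; the names are invented.
    public class RequestHandlerMiddleware
    {
        private readonly RequestDelegate _next;

        public RequestHandlerMiddleware(RequestDelegate next) => _next = next;

        public async Task InvokeAsync(HttpContext context)
        {
            // A fresh handler is resolved from the request's service provider on every call
            // (MyRequestHandler is registered as a transient service)...
            var handler = context.RequestServices.GetRequiredService<MyRequestHandler>();
            await handler.HandleAsync(context);
            await _next(context);
        }
    }

    public class MyRequestHandler
    {
        private readonly HttpClient _client;

        // ...and each handler asks IHttpClientFactory for a new HttpClient.
        public MyRequestHandler(IHttpClientFactory factory) => _client = factory.CreateClient("default");

        public Task HandleAsync(HttpContext context) => Task.CompletedTask;
    }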

@rynowak
Member

rynowak commented Aug 29, 2018

If you're looking at a memory dump you should be able to find the "path to root" of one of these objects - that will show you what's keeping it around in memory. What's the "path to root" of LoggingScopeHttpMessageHandler?

@mixandmatch025

The path to root is:
LoggingScopeHttpMessageHandler -> ExpiredHandlerTrackingEntry -> DefaultHttpClientFactory -> Dictionary<object, object> -> ServiceProviderEngineScope -> KestrelServerOptions -> Http1Connection -> AsyncTaskMethodBuilder... -> Dictionary<Int32, Task>

@rynowak
Member

rynowak commented Aug 29, 2018

Is there an instance of LifetimeTrackingHttpMessageHandler alive anywhere?

@rynowak
Member

rynowak commented Aug 29, 2018

It would be great to see the path to root of HttpClientHandler

@mixandmatch025

> Is there an instance of LifetimeTrackingHttpMessageHandler alive anywhere?

Doesn't seem to be.

> It would be great to see the path to root of HttpClientHandler

HttpClientHandler -> LoggingHttpMessageHandler -> LoggingScopeHttpMessageHandler -> ExpiredHandlerTrackingEntry -> DefaultHttpClientFactory -> Dictionary<object, object> -> ServiceProviderEngineScope -> KestrelServerOptions -> Http1Connection -> AsyncTaskMethodBuilder... -> Dictionary<Int32, Task>

@rynowak
Member

rynowak commented Aug 30, 2018

When you're collecting these logs, is your application mostly idle? Everything you've shown me so far seems to indicate that things are working correctly, but the weak reference hasn't been collected yet.

ExpiredHandlerTrackingEntry should have a strong reference to LoggingScopeHttpMessageHandler and a weak reference to LifetimeTrackingHttpMessageHandler

https://github.com/aspnet/HttpClientFactory/blob/master/src/Microsoft.Extensions.Http/ExpiredHandlerTrackingEntry.cs

Once LoggingScopeHttpMessageHandler goes out of scope, the weak reference will get expired and the 'real' handler should be eligible for disposal.
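
Paraphrasing that entry as a sketch (illustrative only, not the actual source; the linked file above has the real implementation):

    using System;
    using System.Net.Http;

    // Paraphrased sketch of the entry described above (not the actual source).
    // A strong reference keeps the inner handler chain around so it can be disposed
    // later; only a weak reference is kept to the outer lifetime-tracking handler
    // that callers hold indirectly through HttpClient.
    internal sealed class ExpiredTrackingEntrySketch
    {
        private readonly WeakReference _livenessTracker;

        public ExpiredTrackingEntrySketch(HttpMessageHandler outerHandler, HttpMessageHandler innerHandler)
        {
            _livenessTracker = new WeakReference(outerHandler);
            InnerHandler = innerHandler;
        }

        // Strong reference: this is what cleanup eventually disposes.
        public HttpMessageHandler InnerHandler { get; }

        // True once nothing references the outer handler anymore and the GC has collected it.
        public bool CanDispose => !_livenessTracker.IsAlive;
    }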

@mixandmatch025

My app is completely idle; it has finished handling the requests. Do you have any idea why LoggingScopeHttpMessageHandler is kept in scope, or how I can find out?

@mixandmatch025

It seems this is the expected behavior, as it happens with an empty app that just uses IHttpClientFactory to create an HttpClient. After a lot of requests, the items are cleaned up. Is there a certain number of items needed before cleanup starts processing them?

@mixandmatch025

Or maybe this has to do with the underlying connection handling by Kestrel?

@davidfowl
Member

Kestrel handles server resources, not client resources (incoming connections, not outgoing connections).

@mixandmatch025

Yes, I meant that maybe the connection that generated the request is still alive, even though my app has finished handling the request.

@davidfowl
Member

@mixandmatch025 what does your application look like? Can you share a sample that reproduces the problem?

@mixandmatch025

mixandmatch025 commented Sep 2, 2018

A simple app can reproduce what I'm referring to:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Logging;

    class Program
    {
        static async Task Main(string[] args)
        {
            var host = WebHost.CreateDefaultBuilder()
                .ConfigureLogging(l => l.SetMinimumLevel(LogLevel.Debug))
                // Short handler lifetime so the expiry/cleanup cycle kicks in quickly.
                .ConfigureServices(s => s.AddHttpClient("default").SetHandlerLifetime(TimeSpan.FromSeconds(1)))
                .Configure(a => a.Run(c =>
                {
                    // Create (and immediately drop) a client on every request.
                    var client = c.RequestServices.GetService<IHttpClientFactory>().CreateClient("default");
                    return Task.CompletedTask;
                }))
                .Build();

            await host.RunAsync();
        }
    }

The .SetHandlerLifetime() call is just there to speed up the process.
After sending several requests (I'm using Chrome), a cyclic log starts to appear and the items aren't cleaned up.

@rynowak
Member

rynowak commented Sep 2, 2018

Thanks, I'll try to reproduce this.

@davidfowl
Member

Yep, I can reproduce it...

@davidfowl
Member

The WeakReference isn't null and is pointing to a LifetimeTrackingHttpMessageHandler that seems to be captured by a TimerCallback.

Finalizer Queue:
    000001efedd82178
    -> 000001efedd82178 System.Threading.TimerHolder
    -> 000001efedd82118 System.Threading.TimerQueueTimer
    -> 000001efedd820c0 System.Threading.TimerCallback
    -> 000001efedd81fa0 Microsoft.Extensions.Http.ActiveHandlerTrackingEntry
    -> 000001efedd81f80 Microsoft.Extensions.Http.LifetimeTrackingHttpMessageHandler

@rynowak
Member

rynowak commented Sep 2, 2018

OK, well if it is the timer callback capturing it that's great because then we understand the issue.

@davidfowl
Member

Hmm, I'm not sure there's a leak; in my console app repro, things are cleaned up when I force a GC.

@davidfowl
Member

I think things are working as expected.

@davidfowl
Member

OK, after some investigation, everything here is by design. Here's what's happening:

  • Handler expiration happens based on the configured timeout (1s in this case).
  • We move it from the active dictionary to the expired queue. We store a weak reference to the underlying handler so that it gets removed when it is truly out of scope.
  • We have another timer that scans the expired handlers and disposes and removes them once they are out of scope (not being held by anything).

The logs seem to indicate that things never go out of scope, hence the supposed leak. What's actually happening is that the timer used to expire the handler has a reference to a callback, which seemingly keeps the HttpMessageHandler alive.

When a GC happens, things are cleaned up as expected. Until then, the background scanner will keep looping.

If anything we could look at changing how we scan to avoid spamming the logs in this situation.
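
To make the scanning step concrete, here's a rough, illustrative sketch of what each cleanup cycle amounts to (it reuses the ExpiredTrackingEntrySketch type from the sketch earlier in this thread; the real logic lives in DefaultHttpClientFactory):

    using System.Collections.Concurrent;

    // Illustrative only - reuses ExpiredTrackingEntrySketch from the earlier sketch.
    internal static class CleanupCycleSketch
    {
        public static void Run(ConcurrentQueue<ExpiredTrackingEntrySketch> expiredHandlers)
        {
            // "Starting HttpMessageHandler cleanup cycle with N items"
            int count = expiredHandlers.Count;
            for (int i = 0; i < count; i++)
            {
                if (!expiredHandlers.TryDequeue(out var entry))
                {
                    break;
                }

                if (entry.CanDispose)
                {
                    // Nothing holds the outer handler anymore: dispose the real pipeline.
                    entry.InnerHandler.Dispose();   // counted as "processed"
                }
                else
                {
                    // Still reachable (for example, the GC simply hasn't run yet), so put it
                    // back and try again next cycle - this is what produces the repeating
                    // "remaining: 1 items" log lines from the original report.
                    expiredHandlers.Enqueue(entry);
                }
            }
            // "Ending HttpMessageHandler cleanup cycle ... processed: X items - remaining: Y items"
        }
    }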

@masterjs

masterjs commented Sep 4, 2018

Thanks @davidfowl and @rynowak for the details! Very good to know! Indeed, after a long while it does seem to have processed them. Avoiding spamming the logs is indeed a good first step, but could this also hide real leaks? Any idea when this could be optimized?

thank you,
JS

@rynowak
Member

rynowak commented Sep 4, 2018

> If anything we could look at changing how we scan to avoid spamming the logs in this situation.

We log this at debug level on purpose because we don't want to spam you, but we want a way to figure out what's happening if you're curious.

@masterjs

masterjs commented Sep 4, 2018

Great! I like this answer ;o)
Meanwhile I've added "Microsoft.Extensions.Http.DefaultHttpClientFactory": "Information" to my Serilog configurations.
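
For anyone configuring Serilog in code rather than in settings, a rough equivalent of that override would be something like this (assuming the Serilog.Sinks.Console package is referenced for WriteTo.Console()):

    using Serilog;
    using Serilog.Events;

    internal static class LoggingSetup
    {
        // Rough equivalent of the settings entry above: keep Debug logging overall,
        // but raise the cleanup-cycle category to Information so it no longer logs
        // every cycle.
        public static void Configure()
        {
            Log.Logger = new LoggerConfiguration()
                .MinimumLevel.Debug()
                .MinimumLevel.Override("Microsoft.Extensions.Http.DefaultHttpClientFactory", LogEventLevel.Information)
                .WriteTo.Console()
                .CreateLogger();
        }
    }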

@davidfowl
Member

> Avoiding spamming the logs is indeed a good first step, but could this also hide real leaks? Any idea when this could be optimized?

If anything it can point out real leaks. If you're holding onto HttpClient instances, the handler may never go away and may in turn never get cleaned up.
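
For example, an invented illustration of the kind of pattern that would cause this:

    using System.Net.Http;

    // Invented illustration of the kind of leak being described: caching a
    // factory-created HttpClient in a static field keeps its handler chain reachable
    // forever, so the corresponding expired entry can never be disposed.
    public static class ClientCache
    {
        public static HttpClient Shared;   // strong reference that never goes away

        public static void Initialize(IHttpClientFactory factory)
        {
            Shared = factory.CreateClient("default");
        }
    }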

@mixandmatch025

@davidfowl @rynowak thanks for clearing things up.

@rynowak
Member

rynowak commented Sep 9, 2018

@mixandmatch025 thanks for sticking with us. For now this seems relatively harmless; the underlying issue (the timer capturing the service provider) will go away in 2.2.0-preview3.
