A long time with no posts, and a short post this time as well. Tons of work, too little time. Please bear with me.
I'd like to draw your attention to a boring yet important scenario, which is in fact a special case of finalizer-related problems. As you probably know, a .NET object that declares a finalizer has performance implications for our application. The most obvious of these is that an object with a finalizer survives at least one garbage collection cycle, i.e. reaches at least generation 1. This is a consequence of the fact that the GC cannot collect the object once it is no longer referenced, because when the object is discovered to be unreachable, its entry moves to the f-reachable queue, which keeps it alive. Only after the finalizer thread has processed that entry and another GC occurs can the object's memory finally be reclaimed.
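This is easy to observe with a long weak reference (one that tracks resurrection), which remains valid until the object's memory is actually reclaimed, not just until the object is queued for finalization. A minimal sketch, assuming a Release build so the JIT does not artificially extend the temporary's lifetime:

```csharp
using System;

class Finalizable
{
    // An empty finalizer is enough to put the object on the finalization queue.
    ~Finalizable() { }
}

class Program
{
    static WeakReference AllocateTracked()
    {
        // trackResurrection: true creates a "long" weak reference that stays
        // valid until the memory is reclaimed, not just until the object is
        // queued for finalization.
        return new WeakReference(new Finalizable(), trackResurrection: true);
    }

    static void Main()
    {
        WeakReference weak = AllocateTracked();

        GC.Collect();
        // The object is unreachable, but the f-reachable queue still
        // references it, so it survived the collection.
        Console.WriteLine("After 1st GC: alive = " + weak.IsAlive);                // typically True

        GC.WaitForPendingFinalizers();
        GC.Collect();
        // Only now, after the finalizer ran and another GC occurred,
        // is the memory actually gone.
        Console.WriteLine("After finalization + 2nd GC: alive = " + weak.IsAlive); // typically False
    }
}
```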
Another obvious implication of the finalization mechanism is that it is extremely dangerous to block within a finalizer, because there is just one finalizer thread per process. Performing expensive operations, or even blocking indefinitely, means that no finalizable object's memory will be reclaimed, causing a memory leak. This is covered in a great post by Tess from Microsoft. The less obvious scenario is when the rate at which finalizers run is slower than the rate of new object allocations (e.g. when it takes 5ms to allocate an object but 10ms to finalize it). This also causes a memory leak, though one that might take longer to manifest.
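Here is a deliberately pathological sketch of the second case: the finalizer below takes longer than the allocation, so the single finalizer thread falls further and further behind and dead objects pile up. The 10ms sleep and the payload size are arbitrary illustration values:

```csharp
using System;
using System.Threading;

class SlowFinalizable
{
    private readonly byte[] _payload = new byte[100 * 1024]; // makes the growth visible

    ~SlowFinalizable()
    {
        // Simulates a finalizer that is slower than the allocation rate.
        Thread.Sleep(10);
    }
}

class Program
{
    static void Main()
    {
        for (int i = 0; ; ++i)
        {
            new SlowFinalizable(); // allocated far faster than one per 10ms

            if (i % 1000 == 0)
                Console.WriteLine("Managed heap: {0:N0} bytes", GC.GetTotalMemory(false));
        }
    }
}
```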
Why am I reiterating the obvious, then? During one of my consultations, I encountered a scenario that was basically the same old finalization problem causing what seemed to be a memory leak. However, it involved CCWs, making the story more complicated.
If you're not already familiar with CCWs and RCWs (the building blocks of .NET COM Interop), you can familiarize yourself with them right away. In the specific scenario I am referring to, there is a COM object that exposes a standard COM connection point (i.e., a COM event). There is also a managed object that registers to that event using the automatically generated COM interop assembly (this is completely transparent to the managed code, because the nitty-gritty details of implementing the COM sink are wrapped, and all you need to do is += your delegate on the event).
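In code, the registration looks something like the following sketch. The interop assembly MyComLib, the coclass ComPublisherClass, and the SomethingHappened event are all hypothetical names standing in for whatever the generated interop assembly actually exposes:

```csharp
using MyComLib; // hypothetical auto-generated COM interop assembly

class Subscriber
{
    // This is the heavy memory consumer in the scenario.
    private readonly byte[] _lotsOfMemory = new byte[1024 * 1024];

    public void Register(ComPublisherClass publisher)
    {
        // All the managed code sees is a += on an event. Behind the scenes,
        // the interop layer implements the COM event sink, calls Advise on
        // the connection point, and hands the COM object a CCW that wraps
        // the managed sink - and thus keeps this object alive.
        publisher.SomethingHappened += OnSomethingHappened;
    }

    public void OnSomethingHappened()
    {
        // handle the COM event
    }
}
```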
In this scenario, an interesting case of the finalizer-related problem arises. The RCW, unless explicitly released using Marshal.ReleaseComObject or Marshal.FinalReleaseComObject, is subject to finalization; it is the RCW's finalizer that eventually calls Release on the underlying COM object. After all, COM requires strict lifetime management using reference counting, but the managed reference to the RCW is not reference counted, and could simply be left dangling in mid-air.
Now, assume that the managed object that registered to the event is in fact the heavy memory consumer in this application. The application simply creates these managed objects, creates the COM objects, and establishes the event registration between them, ad infinitum. In addition, the application maintains a cache of 1,000 managed objects.
It is reasonable to assume that the memory consumption of this application should be roughly 1,000 × (the total memory used by one COM object and one managed object). However, running the application and examining GC.GetTotalMemory reveals what looks like a memory leak in progress.
Why does this make sense after all? Consider that the managed object is registered to an event on the COM object; the COM object therefore holds a reference to the CCW wrapping the managed event sink, which keeps the managed object alive. However, the COM object is released only when the RCW is finalized, because our code does not explicitly release it as COM rules require. This means that it takes substantially longer to release it, and it takes the managed object down with it. Introducing a call to Marshal.FinalReleaseComObject makes memory consumption settle at a very reasonable level.
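Continuing the hypothetical sketch from above, the fix amounts to releasing the COM object deterministically when a pair is discarded, along these lines:

```csharp
using System.Runtime.InteropServices;
using MyComLib; // hypothetical, as above

static class ComCleanup
{
    public static void ReleasePair(ComPublisherClass publisher, Subscriber subscriber)
    {
        // Unhook the managed sink first, so the COM object lets go of the
        // CCW that was keeping the subscriber alive.
        publisher.SomethingHappened -= subscriber.OnSomethingHappened;

        // Then release the RCW deterministically: the COM object's reference
        // count drops to zero here, instead of whenever the finalizer thread
        // gets around to it.
        Marshal.FinalReleaseComObject(publisher);
    }
}
```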
You may ask yourself (and me), what's the risk? After all, the memory can be reclaimed, so if the system runs very low on memory, I won't get an OutOfMemoryException thrown - the GC will work overtime to free that memory for me. And that is generally true, if you disregard the fact that the application is consuming huge amounts of unneeded memory simply because we neglected the deterministic disposal rules that COM would like to enforce. And that is before we mention the remote but nonetheless possible case where virtual memory fragmentation is so bad (because of all these COM objects allocating memory) that the GC prematurely hands us an OOM even when everything seems very reasonable. Therefore, this non-deterministic disposal scenario is best avoided.
Sample code for this article is included below. Feel free to play around with it.
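The original download is not reproduced here, but a minimal sketch of the scenario, using the same hypothetical MyComLib types and the Subscriber class from the earlier sketches, could look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using MyComLib; // hypothetical auto-generated COM interop assembly

class Program
{
    const int CacheSize = 1000;

    static void Main()
    {
        var cache = new Queue<KeyValuePair<ComPublisherClass, Subscriber>>();

        for (int i = 0; ; ++i)
        {
            // Create the pair and establish the event registration.
            var publisher = new ComPublisherClass();
            var subscriber = new Subscriber();
            subscriber.Register(publisher);
            cache.Enqueue(new KeyValuePair<ComPublisherClass, Subscriber>(publisher, subscriber));

            if (cache.Count > CacheSize)
            {
                var evicted = cache.Dequeue();
                // Comment out the next two lines to reproduce the apparent
                // leak: the evicted RCW then waits for the finalizer thread,
                // and until it is released the COM object keeps the evicted
                // subscriber alive through the event sink's CCW.
                evicted.Key.SomethingHappened -= evicted.Value.OnSomethingHappened;
                Marshal.FinalReleaseComObject(evicted.Key);
            }

            if (i % 100 == 0)
                Console.WriteLine("Total managed memory: {0:N0} bytes", GC.GetTotalMemory(false));
        }
    }
}
```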