Multithreading
Now let’s talk about thin ice. In the previous sections about IDisposable we touched on one very important concept that underlies not only the design of Disposable types but the design of any type in general: the concept of object integrity. It means that at any given moment an object is in a strictly determined state, and any action on the object transitions it into one of the states that were defined when its type was designed. In other words, no action should be able to put the object into an undefined state. This reveals a problem with the types designed in the examples above: they are not thread-safe. There is a chance that the public methods of these types will be called while the destruction of the object is in progress. Let’s solve this problem and then decide whether it actually needs solving at all.
public class FileWrapper : IDisposable
{
    IntPtr _handle;
    bool _disposed;
    object _disposingSync = new object();

    public FileWrapper(string name)
    {
        _handle = CreateFile(name, 0, 0, 0, 0, 0, IntPtr.Zero);
    }

    public void Seek(int position)
    {
        lock(_disposingSync)
        {
            CheckDisposed();
            // Seek API call
        }
    }

    public void Dispose()
    {
        lock(_disposingSync)
        {
            if(_disposed) return;
            _disposed = true;
        }
        InternalDispose();
        GC.SuppressFinalize(this);
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    private void CheckDisposed()
    {
        lock(_disposingSync)
        {
            if(_disposed)
            {
                throw new ObjectDisposedException(nameof(FileWrapper));
            }
        }
    }

    private void InternalDispose()
    {
        CloseHandle(_handle);
    }

    ~FileWrapper()
    {
        InternalDispose();
    }

    /// other methods
}
The check of the _disposed flag in Dispose() should be wrapped in a critical section. In fact, the whole body of every public method should be a critical section. This solves the problem of concurrent access to a public method of an instance and to the method that destroys it. However, it brings other problems that become a time bomb:
- Intensive use of the instance methods of a type, as well as the creation and destruction of its objects, will lower performance significantly. This is because taking a lock takes time: time to allocate SyncBlockIndex tables, to check the current thread, and many other things (we will deal with them in the chapter about multithreading). That means we will have to sacrifice the object’s performance throughout its whole lifetime for the sake of the “last mile” of its life.
- Additional memory traffic for synchronization objects.
- Additional steps GC should take to go through an object graph.
Now let’s name the second and, in my opinion, the most important thing. We allow the destruction of an object and at the same time expect to be able to work with it again. What do we hope for in this situation? That it will fail? Because if Dispose runs first, any further use of the object’s methods will definitely result in ObjectDisposedException. So, you should delegate the synchronization between Dispose() calls and the other public methods of a type to the consuming side, i.e. to the code that created the instance of the FileWrapper class. Only the creating side knows what it is going to do with the instance and when to destroy it. On the other hand, a Dispose call should only ever produce critical errors, such as OutOfMemoryException, but never, say, an IOException. This follows from the requirements for the architecture of classes that implement IDisposable. It also means that if Dispose is called from more than one thread at a time, the destruction of the entity may happen from two threads simultaneously (we skip the if(_disposed) return; check). Whether that matters depends on the situation: if a resource can be released several times, no additional checks are needed. Otherwise, protection is necessary:
// I don’t show the whole pattern on purpose as the example will be too long
// and will not show the essence
class Disposable : IDisposable
{
    private volatile int _disposed;

    public void Dispose()
    {
        if(Interlocked.CompareExchange(ref _disposed, 1, 0) == 0)
        {
            // dispose
        }
    }
}
Two levels of Disposable Design Principle
What is the most popular pattern for implementing IDisposable that you can find in .NET books and on the Internet? What pattern is expected from you at interviews for a potential new job? Most probably this one:
public class Disposable : IDisposable
{
    bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if(disposing)
        {
            // here we release managed resources
        }
        // here we release unmanaged resources

        // mark the instance as disposed
        _disposed = true;
    }

    protected void CheckDisposed()
    {
        if(_disposed)
        {
            throw new ObjectDisposedException(nameof(Disposable));
        }
    }

    ~Disposable()
    {
        Dispose(false);
    }
}
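For completeness, here is a minimal sketch of how a derived class would typically plug into this pattern; the DerivedDisposable type, its _stream field and the file name are hypothetical, used only to illustrate why Dispose(bool) is virtual:

// requires System.IO
public class DerivedDisposable : Disposable
{
    // a managed resource owned by this level of the hierarchy
    private FileStream _stream = File.OpenRead("data.bin");
    private bool _derivedDisposed;

    protected override void Dispose(bool disposing)
    {
        if (!_derivedDisposed)
        {
            if (disposing)
            {
                // release managed resources of this level
                _stream?.Dispose();
                _stream = null;
            }
            // release unmanaged resources of this level (none here)
            _derivedDisposed = true;
        }

        // always let the base class release its own resources
        base.Dispose(disposing);
    }
}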
What is wrong with this classic pattern, and why haven’t we written code like this before? In fact, it is a good pattern suitable for all situations. However, its ubiquitous use is not good style in my opinion: in practice we almost never deal with unmanaged resources directly, which makes half of the pattern serve no purpose. Moreover, since it manages both managed and unmanaged resources at the same time, it violates the principle of separation of responsibility. I think this is wrong. Let’s look at a slightly different approach, the Disposable Design Principle. In brief, it works as follows:
Disposing is divided into two levels of classes:
- Level 0 types directly encapsulate unmanaged resources
    - They are either abstract or sealed
    - All their methods should be marked with:
        - PrePrepareMethod, so that the method can be compiled when the type is loaded
        - SecuritySafeCritical, to protect against calls from code running under restrictions
        - ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success / Cer.MayFail), to set up a CER for the method and all its child calls
    - They can reference other Level 0 types, but should increment the reference counter of the referenced objects to guarantee the right order of entering the “last mile”
- Level 1 types encapsulate only managed resources
    - They are inherited only from Level 1 types or directly implement IDisposable
    - They cannot inherit Level 0 types or CriticalFinalizerObject
    - They can encapsulate Level 1 and Level 0 managed types
    - They implement IDisposable.Dispose by destroying the encapsulated objects, starting with the Level 0 types and then moving on to Level 1
    - They don’t implement a finalizer, as they don’t deal with unmanaged resources
    - They should contain a protected property that gives access to the Level 0 types.
That is why I used the division into two types from the very beginning: one that contains a managed resource and one with an unmanaged resource. They should function differently. A condensed sketch of the two levels follows below.
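To make the two levels more tangible, here is a condensed, hypothetical sketch of the idea. The type names and the CloseHandle import are assumptions for illustration, and the PrePrepareMethod/SecuritySafeCritical/ReliabilityContract attributes are omitted for brevity; this is not a definitive implementation:

// requires System, System.Runtime.ConstrainedExecution, System.Runtime.InteropServices

// Level 0: directly wraps an unmanaged resource; sealed, finalizable.
// (In production code SafeHandle would usually be preferred.)
public sealed class UnmanagedFileHandle : CriticalFinalizerObject, IDisposable
{
    private IntPtr _handle;

    public UnmanagedFileHandle(IntPtr handle)
    {
        _handle = handle;
    }

    public void Dispose()
    {
        InternalDispose();
        GC.SuppressFinalize(this);
    }

    private void InternalDispose()
    {
        if (_handle != IntPtr.Zero)
        {
            CloseHandle(_handle);
            _handle = IntPtr.Zero;
        }
    }

    ~UnmanagedFileHandle()
    {
        InternalDispose();
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr handle);
}

// Level 1: encapsulates only managed resources, has no finalizer and
// exposes the Level 0 type to derived classes through a protected property.
public class FileReader : IDisposable
{
    private readonly UnmanagedFileHandle _handle;

    public FileReader(UnmanagedFileHandle handle)
    {
        _handle = handle;
    }

    protected UnmanagedFileHandle Handle => _handle;

    public void Dispose()
    {
        // destroy encapsulated objects, starting with the Level 0 types
        _handle.Dispose();
    }
}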
Other ways to use Dispose
The idea behind IDisposable was to release unmanaged resources. But as with many other patterns, it turns out to be very helpful for other tasks as well, e.g. for releasing references to managed resources. Releasing managed resources doesn’t sound very useful: they are called managed precisely so that we could relax and grin a little at C/C++ developers, right? However, it is not so. There can always be a situation where we lose a reference to an object but think that everything is OK: GC will collect the garbage, including our object. Yet it turns out that memory keeps growing. We open a memory profiler and see that something else is holding the object. The thing is that both the .NET platform and the architecture of external classes may contain logic that implicitly captures a reference to your entity. Since the capture is implicit, a programmer can miss the need to release it and then get a memory leak.
Delegates, events
Let’s look at this synthetic example:
class Secondary
{
    Action _action;

    public void SaveForUseInFuture(Action action)
    {
        _action = action;
    }

    public void CallAction()
    {
        _action();
    }
}

class Primary
{
    Secondary _foo = new Secondary();

    public void PlanSayHello()
    {
        _foo.SaveForUseInFuture(Strategy);
    }

    public void SayHello()
    {
        _foo.CallAction();
    }

    void Strategy()
    {
        Console.WriteLine("Hello!");
    }
}
Which problem does this code show? The Secondary class stores an Action delegate in the _action field, accepting it through the SaveForUseInFuture method. Next, the PlanSayHello method in the Primary class passes a pointer to the Strategy method to the Secondary class. It is curious, but whether you pass a static method or an instance method here, the call to SaveForUseInFuture looks exactly the same; however, in the instance-method case a reference to the Primary instance is captured implicitly, while in the static case nothing is referenced at all. Outwardly it looks like you merely indicated which method to call. But in fact a delegate is built not only from a method pointer but also from a pointer to an instance of a class: the calling party has to know for which instance it must call the Strategy method! That is, the instance of the Secondary class has implicitly accepted and now holds a pointer to the instance of the Primary class, though this is not indicated anywhere explicitly. For us this means that if we pass the _foo pointer somewhere else and lose the reference to Primary, GC will not collect the Primary object, because Secondary holds it. How can we avoid such situations? We need a deterministic way to release the reference to us, and a mechanism that fits this purpose perfectly is IDisposable:
// This is a simplified implementation
class Secondary : IDisposable
{
    Action _action;

    public event Action<Secondary> OnDisposed;

    public void SaveForUseInFuture(Action action)
    {
        _action = action;
    }

    public void CallAction()
    {
        _action?.Invoke();
    }

    public void Dispose()
    {
        _action = null;
        OnDisposed?.Invoke(this);
    }
}
Now the example looks acceptable. If an instance of the class is passed to a third party and the reference to the _action delegate is lost along the way, we set the delegate to null, and the third party is notified about the destruction of the instance so that it can delete its reference to it.
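For instance, the holding side could react to this notification like this (the Holder type is a hypothetical consumer, shown only to illustrate the idea):

class Holder
{
    private Secondary _secondary;

    public Holder(Secondary secondary)
    {
        _secondary = secondary;
        _secondary.OnDisposed += OnSecondaryDisposed;
    }

    private void OnSecondaryDisposed(Secondary sender)
    {
        sender.OnDisposed -= OnSecondaryDisposed;
        _secondary = null;  // the reference is released, GC is free to collect it
    }
}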
The second danger of delegate-based code lies in the way event works. Let’s look at what an event unfolds into:
// a private field of a handler
private Action<Secondary> _event;

// add/remove methods are marked as [MethodImpl(MethodImplOptions.Synchronized)]
// that is similar to lock(this)
public event Action<Secondary> OnDisposed
{
    add { lock(this) { _event += value; } }
    remove { lock(this) { _event -= value; } }
}
The C# syntax hides the internals of events: the event holds every object that has subscribed to it. If something goes wrong, a reference to a subscribed object remains in OnDisposed and keeps that object alive. This is a strange situation: architecturally we get the concept of an “event source” which logically shouldn’t hold anything, yet in fact the subscribed objects are held implicitly. Moreover, we cannot alter anything inside this array of delegates, even though the entity belongs to us. The only thing we can do is drop the whole list by assigning null to the event source.
The second way is to implement the add/remove accessors explicitly, so that we can control the collection of delegates ourselves (a sketch of this approach is shown below).
Another implicit behavior may appear here. It might seem that if you assign null to an event source, a subsequent subscription to the event would cause a NullReferenceException; I think this would be more logical. However, it is not true. If external code subscribes to the event after the event source has been cleared, FCL will create a new instance of the Action class and store it in OnDisposed. This implicitness in C# can mislead a programmer: a nulled field should trigger a sort of alertness rather than calm. Here, once again, the carelessness of a programmer can lead to a memory leak.
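A minimal sketch of the explicit add/remove approach could look as follows; the choice of a List and a private lock object are my assumptions, not the only possible design:

// requires System, System.Collections.Generic
class Secondary : IDisposable
{
    private readonly object _sync = new object();
    private readonly List<Action<Secondary>> _subscribers = new List<Action<Secondary>>();

    public event Action<Secondary> OnDisposed
    {
        add    { lock (_sync) { _subscribers.Add(value); } }
        remove { lock (_sync) { _subscribers.Remove(value); } }
    }

    public void Dispose()
    {
        Action<Secondary>[] toNotify;
        lock (_sync)
        {
            toNotify = _subscribers.ToArray();
            // after this point nothing is held implicitly anymore
            _subscribers.Clear();
        }

        foreach (var handler in toNotify)
        {
            handler(this);
        }
    }
}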
Lambdas, closures
Using syntactic sugar such as lambdas is especially dangerous.
A few words about syntactic sugar as a whole: I think you should use it carefully and only if you know the outcome exactly. Examples with lambda expressions include closures, closures in Expressions, and many other miseries you can inflict upon yourself.
Of course, you may say that you know a lambda expression creates a closure and may carry a risk of a resource leak. But lambdas are so neat, so pleasant, that it is hard to resist using one instead of declaring a whole method in a place different from where it will be used. In fact, you shouldn’t buy into this temptation, though not everybody can resist. Let’s look at an example:
button.Clicked += () => service.SendMessageAsync(MessageType.Deploy);
Agree, this line looks quite safe. But it hides a big problem: the button variable now implicitly references service (through the closure captured by the lambda) and holds it. Even if we decide that we don’t need service anymore, button will keep the reference alive as long as button itself is alive. One way to solve this problem is to use the pattern for creating an IDisposable from any Action (System.Reactive.Disposables):
// Here we create a delegate from a lambda
Action action = () => service.SendMessageAsync(MessageType.Deploy);

// Here we subscribe
button.Clicked += action;

// Here we wrap the unsubscription into an IDisposable
var subscription = Disposable.Create(() => button.Clicked -= action);

// ...and dispose it where necessary
subscription.Dispose();
Admittedly, this looks a bit lengthy, and we lose the whole point of using lambda expressions. It is much safer and simpler to use ordinary private methods instead of capturing variables implicitly.
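If you don’t want to depend on System.Reactive, a minimal equivalent of Disposable.Create is easy to sketch yourself (an illustrative implementation, not the library’s actual code):

// requires System, System.Threading
public static class Disposable
{
    public static IDisposable Create(Action onDispose)
    {
        return new AnonymousDisposable(onDispose);
    }

    private sealed class AnonymousDisposable : IDisposable
    {
        private Action _onDispose;

        public AnonymousDisposable(Action onDispose)
        {
            _onDispose = onDispose;
        }

        public void Dispose()
        {
            // run the cleanup action at most once
            Interlocked.Exchange(ref _onDispose, null)?.Invoke();
        }
    }
}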
ThreadAbort protection
When you create a library for third-party developers, you cannot predict its behavior in a third-party application. Sometimes you can only guess what a programmer did to your library to cause a particular outcome. One example is operating in a multithreaded environment, where the consistency of resource cleanup can become a critical issue. Note that when we write the Dispose() method, we can guarantee the absence of exceptions in our own code. However, we cannot ensure that no ThreadAbortException will occur while the Dispose() method is running and take down our thread of execution. Here we should remember that when a ThreadAbortException is raised, all catch/finally blocks are still executed; the abort is re-raised at the end of the catch/finally block. So, to guarantee that a certain piece of code runs to completion even under Thread.Abort, you need to wrap the critical section in try { ... } finally { ... }, as in the example below:
void Dispose()
{
    if(_disposed) return;
    _someInstance.Unsubscribe(this);
    _disposed = true;
}
This code can be aborted at any point with Thread.Abort. It partially destroys the object, while other code may still try to work with it afterwards. The following code, by contrast:
void Dispose()
{
    if(_disposed) return;

    // ThreadAbortException protection
    try {}
    finally
    {
        _someInstance.Unsubscribe(this);
        _disposed = true;
    }
}
is protected from such an abort and is guaranteed to run smoothly, even if Thread.Abort arrives between the call to the Unsubscribe method and the execution of its instructions.
Results
Advantages
Well, we learned a lot about this seemingly simple pattern. Let’s list its advantages:
- The main advantage of the pattern is the ability to release resources deterministically, i.e. when you need to.
- The second advantage is the introduction of a standard way to check whether a given instance requires the destruction of its resources after use.
- The last advantage: if the pattern is implemented correctly, the designed type will behave safely with respect to use by third-party components, as well as with respect to unloading and destroying resources when a process crashes (for example, due to lack of memory).
Disadvantages
In my opinion, this pattern has more disadvantages than advantages.
- Any type that implements this pattern effectively instructs all its consumers that by using it they accept a sort of public offer. This is so implicit that, as with public offers, a user of the type doesn’t always know that the interface is there. Thus you have to follow IDE hints (type a period, Dis… and check whether there is such a method in the filtered member list of the class). If you see a Dispose, you should implement the pattern in your own code. Sometimes this doesn’t happen straight away, and then you have to propagate the pattern through a system of types that adds functionality. A good example is that IEnumerator<T> entails IDisposable.
- Usually when you design an interface there is a need to insert IDisposable into a type’s system of interfaces whenever one of the interfaces has to inherit IDisposable. In my opinion, this damages the interfaces we design. When you design an interface, you create an interaction protocol first: a set of actions you can perform on something hidden behind the interface. Dispose() is a method for destroying an instance of a class. This contradicts the essence of an interaction protocol. In fact, these are implementation details that have infiltrated the interface.
- Despite being deterministic, Dispose() doesn’t mean direct destruction of the object. The object still exists after its “destruction”, just in another state. For this to work, CheckDisposed() must be the first statement of every public method. This looks like a temporary solution somebody handed us saying: “Go forth and multiply”.
- There is also a small chance of getting a type that implements IDisposable through explicit interface implementation, or a type that implements IDisposable without any way to determine who must destroy it: you, or the party that gave it to you. This gave rise to the antipattern of multiple Dispose() calls, which allows “destroying” an already destroyed object.
- The complete implementation is difficult, and it differs for managed and unmanaged resources. Here, the attempt to facilitate the work of developers through GC looks awkward. You can override the virtual void Dispose() method and introduce some DisposableObject type that implements the whole pattern, but that doesn’t solve the other problems connected with the pattern.
- As a rule, the Dispose() method is implemented at the end of a file, while the '.ctor' is declared at the beginning. If you modify a class or introduce new resources, it is easy to forget to add disposal for them.
- Finally, it is difficult to determine the order of destruction in a multithreaded environment when you use the pattern on object graphs whose objects fully or partially implement it. I mean situations when Dispose() can start at different ends of the graph. Here it is better to use other patterns, e.g. the Lifetime pattern.
- The wish of the platform developers to automate memory management collided with reality: applications interact with unmanaged code very often, and you also need to control the release of references to objects so that the Garbage Collector can collect them. This adds great confusion to questions such as: “How should we implement the pattern correctly?”, “Is there a reliable pattern at all?”. Maybe calling delete obj; delete[] arr; was simpler after all?
Domain unloading and exit from an application
If you got this far, you have probably become more confident about the success of your future job interviews. However, we haven’t discussed all the questions connected with this, seemingly simple, pattern. The last question is whether the behavior of an application differs between ordinary garbage collection and garbage collection during domain unloading or application exit. This question only touches upon Dispose() indirectly… However, Dispose() and finalization go hand in hand, and we rarely see a class that has finalization but no Dispose() method. So let’s describe finalization in a separate section and add just a few important details here.
During application domain unloading you unload both the assemblies loaded into the application domain and all the objects that were created as part of that domain. In effect this means cleaning up (collecting) these objects and calling their finalizers. If the logic of your finalizer waits for the finalization of other objects in order to be destroyed in the right order, pay attention to the Environment.HasShutdownStarted property, indicating that the application is being unloaded from memory, and to the AppDomain.CurrentDomain.IsFinalizingForUnload() method, indicating that this domain is being unloaded, which is the reason for the finalization. If these events occur, the order of resource finalization generally becomes unimportant: we cannot delay either the unloading of the domain or of the application, so we should do everything as quickly as possible.
This is how this task is solved in the LoaderAllocatorScout class:
// Assemblies and LoaderAllocators will be cleaned up during AppDomain shutdown in
// an unmanaged code
// So it is ok to skip reregistration and cleanup for finalization during appdomain shutdown.
// We also avoid early finalization of LoaderAllocatorScout due to AD unload when the object was inside DelayedFinalizationList.
if (!Environment.HasShutdownStarted &&
    !AppDomain.CurrentDomain.IsFinalizingForUnload())
{
    // Destroy returns false if the managed LoaderAllocator is still alive.
    if (!Destroy(m_nativeLoaderAllocator))
    {
        // Somebody might have been holding a reference on us via weak handle.
        // We will keep trying. It will be hopefully released eventually.
        GC.ReRegisterForFinalize(this);
    }
}
Typical implementation faults
As I have shown, there is no universal pattern for implementing IDisposable. Moreover, a certain reliance on automatic memory management misleads people, and they make confusing decisions when implementing the pattern. The whole .NET Framework is riddled with errors in its implementation. To prove the point, let’s look at these errors using examples from the .NET Framework itself. All implementations are available via: IDisposable Usages
FileEntry Class cmsinterop.cs
This code was written in a hurry, just to close the issue. Obviously, the author wanted to do something but changed his mind and kept a flawed solution:
internal class FileEntry : IDisposable
{
    // Other fields
    // ...
    [MarshalAs(UnmanagedType.SysInt)] public IntPtr HashValue;
    // ...

    ~FileEntry()
    {
        Dispose(false);
    }

    // The implementation is hidden and complicates calling the *right* version of the method.
    void IDisposable.Dispose() { this.Dispose(true); }

    // Choosing a public method is a serious mistake that allows for incorrect destruction of
    // an instance of the class. Moreover, you can no longer reach the *right* method from the outside
    public void Dispose(bool fDisposing)
    {
        if (HashValue != IntPtr.Zero)
        {
            Marshal.FreeCoTaskMem(HashValue);
            HashValue = IntPtr.Zero;
        }

        if (fDisposing)
        {
            if( MuiMapping != null)
            {
                MuiMapping.Dispose(true);
                MuiMapping = null;
            }

            System.GC.SuppressFinalize(this);
        }
    }
}
SemaphoreSlim Class System/Threading/SemaphoreSlim.cs
This error tops the list of .NET Framework errors regarding IDisposable: calling SuppressFinalize for classes that have no finalizer. It is very common:
public void Dispose()
{
    Dispose(true);

    // As the class doesn’t have a finalizer, there is no need for GC.SuppressFinalize
    GC.SuppressFinalize(this);
}

// The implementation of this pattern assumes the finalizer exists. But it doesn’t.
// It would have been possible to get by with just public virtual void Dispose()
protected virtual void Dispose(bool disposing)
{
    if (disposing)
    {
        if (m_waitHandle != null)
        {
            m_waitHandle.Close();
            m_waitHandle = null;
        }
        m_lockObj = null;
        m_asyncHead = null;
        m_asyncTail = null;
    }
}
Calling Close+Dispose Some NativeWatcher project code
Sometimes people call both Close and Dispose. This is wrong, though it will not produce an error, as the second Dispose doesn’t generate an exception.
In fact, Close is just another pattern intended to make things clearer for people. In practice it made everything less clear.
public void Dispose()
{
    if (MainForm != null)
    {
        MainForm.Close();
        MainForm.Dispose();
    }
    MainForm = null;
}
General results
- IDisposable is a standard of the platform, and the quality of its implementation influences the quality of the whole application. Moreover, in some situations it influences the safety of your application, which can be attacked via unmanaged resources.
- The implementation of IDisposable must be as efficient as possible. This is especially true of the finalization part, which runs in parallel with the rest of the code and puts load on the Garbage Collector.
- When implementing IDisposable you shouldn’t use Dispose() concurrently with the public methods of a class. Destruction cannot go along with usage. This should be taken into account when designing a type that will use an IDisposable object.
- However, there should be protection against calling Dispose() from two threads simultaneously. This follows from the statement that Dispose() shouldn’t produce errors.
- Types that wrap unmanaged resources should be separated from all other types. In other words, if you wrap an unmanaged resource, you should allocate a separate type for it. This type should contain the finalization and should be inherited from SafeHandle / CriticalHandle / CriticalFinalizerObject. This separation of responsibility results in better support of the type system and simplifies the implementation of instance destruction via Dispose(): the types that merely use such a wrapper won’t need to implement a finalizer.
- In general, this pattern is not comfortable to use or to maintain. We should probably move to an Inversion of Control approach, destroying the state of objects via the Lifetime pattern. However, we will talk about that in the next section.