Parallel and The C# Memory Model



Parallel programming can be tricky: both compiler and CPU optimizations can lead you into twilight-zone debugging.

Let's take the following code snippet:

Code Snippet
  1. class Program
  2. {
  3.     static void Main(string[] args)
  4.     {
  5.         Console.WriteLine("Start");
  6.         var u = new Util();
  7.         u.Exec();
  8.         Console.ReadKey();
  9.     }
  10. }
  12. public class Util
  13. {
  14.     private bool _stop = true;
  15.     public void Exec()
  16.     {
  17.         Task t = Task.Run(() =>
  18.             {
  19.                 bool b = true;
  20.                 while (_stop)
  21.                 {
  22.                     b = !b;
  23.                 }
  24.                 Console.WriteLine("Complete {0}", b);
  25.             });
  26.         Thread.Sleep(30);
  27.         _stop = false;
  28.     }
  29. }

Can you predict the outcome?
Will it ever reach the completion message at line 24?

You can download the snippet from here.

Now download the snippet and try to execute it in the following order:

  • Compile in Debug mode and double-click the exe file (without an attached debugger).
  • Compile in Release mode and double-click the exe file (without an attached debugger).
  • Compile in Release mode and run it with F5 (with an attached debugger).


So why does it execute predictably in Debug mode, while behaving quite strangely in Release mode?
Even stranger, why does it execute predictably in Release mode when a debugger is attached?

What is happening is a single-thread optimization that can occur at the compiler (JIT) or CPU level, based on an assumption that is not valid for parallel execution: since nothing inside the loop writes _stop, the JIT is free to read the field once, cache the value in a register, and spin on that cached copy, so the write from the main thread is never observed.
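As a sketch, in C#-flavored pseudocode (illustrative only, not the actual code the JIT emits), the optimized loop effectively becomes:

```
// what the JIT may effectively produce after hoisting the read
bool cached = _stop;   // _stop is read once, before the loop
while (cached)         // the loop now tests a register, not the field
{
    b = !b;            // _stop is never re-read, so the loop never exits
}
```

This is perfectly legal for single-threaded code, which is exactly why the compiler is allowed to do it.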

It can be fixed using dedicated APIs such as Thread.MemoryBarrier, or by marking the field volatile, among others.
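For example, here is one possible fix (my own variation on the snippet above, not the original post's code): marking _stop as volatile tells the JIT and CPU that every read must observe the latest write, which defeats the register-caching optimization. The t.Wait() call is added so the completion message is printed before the program returns.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class Util
{
    // volatile forces every read of _stop to go back to the field,
    // so the JIT cannot hoist the read out of the loop
    private volatile bool _stop = true;

    public void Exec()
    {
        Task t = Task.Run(() =>
        {
            bool b = true;
            while (_stop)
            {
                b = !b;
            }
            Console.WriteLine("Complete {0}", b);
        });
        Thread.Sleep(30);
        _stop = false; // this write is now guaranteed to be observed
        t.Wait();      // block until the loop exits and the message prints
    }
}

class Program
{
    static void Main()
    {
        new Util().Exec();
    }
}
```

With this change the program terminates in every one of the three configurations above.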

Instead of explaining it all here, I will refer you to part 1 and part 2 of a great article on these matters.


The optimization world still relies on a single-thread assumption (with some special instructions for parallel execution).
As the computing world becomes more parallel each day, the priority given to this single-thread assumption may have to change in the future.

In order to avoid such pitfalls, you should be aware of it and use the dedicated parallel APIs.
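For instance, one parallel-dedicated approach (a sketch of my own, not from the original post) is to drop the shared boolean entirely and use a CancellationToken, which is designed for exactly this kind of cross-thread signaling and carries the right memory-visibility guarantees:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        CancellationToken token = cts.Token;

        Task t = Task.Run(() =>
        {
            bool b = true;
            // IsCancellationRequested is safe to poll across threads;
            // no manual memory barriers or volatile fields are needed
            while (!token.IsCancellationRequested)
            {
                b = !b;
            }
            Console.WriteLine("Complete {0}", b);
        });

        Thread.Sleep(30);
        cts.Cancel(); // signal the task to stop
        t.Wait();     // wait for the completion message
    }
}
```

The intent of the code is also clearer: a reader immediately sees that the loop is cancellable, instead of having to reason about a raw flag.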
