April 2009 - Posts
I blogged about wait chain traversal (WCT) a while ago – we’ve seen that it’s quite a useful tool for analyzing system-wide synchronization issues such as multi-process deadlocks, and might actually be useful for intra-process analysis as well. What I have regretted ever since is that there was no Microsoft tool for displaying WCT information, except for some half-baked MSDN samples.
This comes to an end with the Windows 7 Resource Monitor. By the way, if you haven’t been introduced: Reader, Resource Monitor. Resource Monitor, Reader. Great, now that you’ve made the acquaintance, you can take it from here by either launching Resource Monitor directly or clicking the Resource Monitor button on the Performance tab of Task Manager (works in Vista too).
What I wanted to show you today is that you can go to the Overview or CPU tab in Resource Monitor, right-click any process (unresponsive processes will be shown in red to begin with) and choose “Analyze Process”. This opens a cool new dialog which shows you what the threads in the process are waiting for, and who holds the resources they are waiting for.
You even get the option to end one of the problematic processes if you see fit.
Another nice new thing in Resource Monitor is a visualization of physical memory usage.
Resource Monitor is evolving to become a very useful utility for analyzing system performance and stability at a glance; having this kind of tool packaged with Windows and available on any Windows system will make troubleshooting and debugging work much easier.
If you haven’t tried Resource Monitor yet, it’s time to get better acquainted.
Transaction scopes are a natural part of Windows Workflow Foundation; they provide transactional semantics to a set of activities bound together by their enclosing scope. The transaction scope provides ACID semantics to a group of workflow operations, ensuring that they either succeed as a group or abort as a group, with no partial results.
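To make this concrete, here is a minimal sketch of what such a scope looks like when a WF 3.x sequential workflow is composed in code rather than in the designer; the class name, the step names and the work they do are illustrative only, not taken from any real project:

using System;
using System.Workflow.Activities;
using System.Workflow.ComponentModel;

public class TransferWorkflow : SequentialWorkflowActivity
{
    public TransferWorkflow()
    {
        // Two steps grouped under a single transaction scope.
        var scope = new TransactionScopeActivity { Name = "transferScope" };

        var debit = new CodeActivity { Name = "debitAccount" };
        debit.ExecuteCode += (sender, e) => { /* e.g. a database update enlisted in the ambient transaction */ };

        var credit = new CodeActivity { Name = "creditAccount" };
        credit.ExecuteCode += (sender, e) => { /* a second update in the same transaction */ };

        scope.Activities.Add(debit);
        scope.Activities.Add(credit);

        // If either step throws, both are rolled back together when the scope aborts.
        Activities.Add(scope);
    }
}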
Out-of-the-box, however, the usability of workflow transaction scopes (at least for my recent application) was fairly low. When a transaction rolls back within a transaction scope, its owning workflow is brutally terminated by default, removing its state information from the workflow persistence service. Effectively, when a transaction aborts for some reason, the workflow goes away with it.
An alternative I discovered is the ability to provide a fault handler for the activity enclosing the transaction scope, and handle the TransactionAbortedException or any other exception that escapes the scope. However, after handling the exception, the workflow’s state is not magically restored to the previous persistence point – the workflow simply proceeds as if the transaction scope completed successfully, defeating the whole purpose of a transaction.
For me, this was the absolute opposite of what I wanted. I wanted a transaction scope to guard a set of operations which transition the workflow from one persistence point (savepoint) to another, so that the workflow state remains consistent after the transition. Either the transaction fails, restoring the state to the previous savepoint, or the transaction succeeds, moving the workflow forward to the next savepoint.
For example, the following workflow depicts my intent. The workflow begins by performing some initial work, and then stores a savepoint of its current state (by using an activity decorated with the [PersistOnClose] attribute). Then, the workflow prints a message and begins a transaction. If the transaction succeeds, the workflow is again persisted to a savepoint and goes on to do more work. If the transaction fails, the workflow should restore itself to the previously established savepoint and repeat the process.
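The savepoint marker itself can be as simple as the following sketch (WF 3.x; the activity name here is mine, and the download accompanying this post may structure it differently):

using System.Workflow.ComponentModel;

// [PersistOnClose] asks the runtime to persist the workflow as soon as this activity
// closes, which is what establishes the durable savepoint.
[PersistOnClose]
public class CreateSavepointActivity : Activity
{
    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // No work of its own - closing the activity is what triggers persistence.
        return ActivityExecutionStatus.Closed;
    }
}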
The mechanics of transitioning the workflow back to the previous savepoint are fairly simple: a call to the WorkflowInstance.Abort method followed by a call to WorkflowInstance.Resume. As a result, the workflow’s in-memory data is discarded, its latest durable (persisted) state is restored, and the workflow is then resumed. Unfortunately, it’s impossible to call the synchronous WorkflowInstance methods from within the workflow thread, so a creative solution was in order.
This is how my SavepointScope custom activity was born. This activity provides a scope for enclosing sensitive operations that are supposed to transition the workflow from one persistence point to another (it doesn’t have to be a transaction, but it often will be). The SavepointScope automatically restores the workflow to the most recent savepoint if an error occurs while executing the activities within the scope. This is accomplished by combining the HandleFault method of the SavepointScope itself with an external local service called ReturnToSavepointService, which asynchronously aborts and resumes the workflow from the latest savepoint. To ensure that the scope does not complete its execution before the workflow is aborted and resumed, the HandleFault method returns ActivityExecutionStatus.Faulting, causing the workflow runtime to stall while the local service aborts the workflow.
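In outline, the two pieces look roughly like this (a sketch of the approach just described, not the exact code from the download):

using System;
using System.Threading;
using System.Workflow.Activities;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime;

// The local service that rolls a workflow back to its latest savepoint. It does the work
// on a thread-pool thread, because the synchronous WorkflowInstance methods cannot be
// called from the workflow thread itself.
public class ReturnToSavepointService
{
    private readonly WorkflowRuntime runtime;

    public ReturnToSavepointService(WorkflowRuntime runtime)
    {
        this.runtime = runtime;
    }

    public void ReturnToSavepoint(Guid instanceId)
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            WorkflowInstance instance = runtime.GetWorkflow(instanceId);
            instance.Abort();   // discard in-memory state; the persisted savepoint is untouched
            instance.Resume();  // reload the instance from the savepoint and continue
        });
    }
}

// The scope itself: on a fault, ask the service to restore the savepoint and report
// Faulting so that the scope does not complete before the abort takes effect.
public class SavepointScope : SequenceActivity
{
    protected override ActivityExecutionStatus HandleFault(
        ActivityExecutionContext executionContext, Exception exception)
    {
        var service = executionContext.GetService<ReturnToSavepointService>();
        service.ReturnToSavepoint(this.WorkflowInstanceId);
        return ActivityExecutionStatus.Faulting;
    }
}

The service is registered with the workflow runtime (workflowRuntime.AddService(new ReturnToSavepointService(workflowRuntime))) so that the scope can retrieve it through GetService at runtime.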
To further refine the idea, the SavepointScope activity provides an event handler which can decide whether to restore the state or not, and a retry count property which indicates how many times the state should be restored before the workflow is allowed to terminate. (Note that a proper implementation would store the current retry count in persistent storage, to make sure it is shared with other workflow runtimes and that it survives a system restart. For expository purposes, an in-memory cache will suffice.)
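For illustration, the in-memory variant of that bookkeeping could be as simple as the following sketch (the class and method names are mine):

using System;
using System.Collections.Generic;

// Per-instance retry counter kept in memory. A production implementation would keep
// this in durable, shared storage alongside the workflow state.
public static class SavepointRetryTracker
{
    private static readonly Dictionary<Guid, int> retryCounts = new Dictionary<Guid, int>();
    private static readonly object sync = new object();

    // Returns true if the workflow should be restored to its savepoint once more,
    // false if the retry budget is exhausted and the workflow may terminate.
    public static bool TryConsumeRetry(Guid instanceId, int maxRetries)
    {
        lock (sync)
        {
            int count;
            retryCounts.TryGetValue(instanceId, out count);
            if (count >= maxRetries)
            {
                return false;
            }
            retryCounts[instanceId] = count + 1;
            return true;
        }
    }
}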
The implementation of the SavepointScope activity and the test workflow demonstrated in this post can be downloaded from my SkyDrive. Please note that this is not production-quality code, and some bugs might still be lurking :-)
A couple of days ago I was bitten by a (really stupid, in hindsight) error when working with WCF services. I had a service and a client set up to share a contract via a contract link (see my post from a few months ago about using contract links as a replacement for service references).
For some reason, I couldn’t get any data transmitted from the client to the service. Everything worked great and there were no exceptions, the bindings seemed to be absolutely identical, the contract was the same contract, and even the request message on the service side seemed to contain all the necessary information. Still, the data contract I was passing across the service boundary was reaching the service end empty – an instance initialized with all the default values (albeit not a null reference).
So instead of the fully populated object I was sending from the client, the service saw an instance with every member at its default value.
After spending about two hours trying to debug and reproduce the issue, I went home, and the next day, on my way to work, it hit me: the service assembly had an assembly-level [ContractNamespace] attribute indicating that the contracts should go into a non-default SOAP namespace! The client assembly didn’t have that [ContractNamespace] attribute (for some reason), so the namespaces of the service contract and the data contract were entirely off; the client and the service were talking in different dialects of the same language…
Placing the [ContractNamespace] attribute in the client assembly fixed the problem immediately. As I said, it was a stupid thing not to notice, but fairly annoying nonetheless. Hopefully reading this post will help you avoid this kind of mistake in the future :-)
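For reference, the attribute in question is an assembly-level attribute along these lines (the SOAP namespace and CLR namespace below are placeholders); it has to appear, with matching values, in both the service assembly and the client assembly:

using System.Runtime.Serialization;

// Maps every data contract in the given CLR namespace to an explicit SOAP namespace.
// If only one side of the wire carries this mapping, the contracts no longer match.
[assembly: ContractNamespace("http://schemas.example.com/2009/04/contracts",
                             ClrNamespace = "MyCompany.Contracts")]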
(The code required to reproduce this issue can be downloaded from my SkyDrive.)
Last Monday (March 30) I had the pleasure of presenting an MSDN event at Microsoft Raanana on the subject of Concurrent Programming. The idea was to show the design patterns, methodology and fundamentals of concurrency and parallelism in applications.
An opening line (which I also used for the summary) that I really liked was along the lines of “we resisted object-oriented programming 20 years ago, so it’s only natural that we resist concurrent programming now”. I really think that, given the design patterns, architectural differences and programming style imposed by concurrent applications, the paradigm shift awaiting all of us is no smaller than the shift imposed by object-oriented programming and design.
Anyway, I just wanted to use this opportunity to thank everyone who was there for coming, and to give you a way to download the presentation that I used, from my SkyDrive.
Two hours ago, I received an email presenting me with the Microsoft MVP award! The award is given for sharing expertise with the online and offline developer community, which is what I have been doing (or at least trying to do) for the last couple of years through this blog, through online forums and through my consulting and training work. (Here’s my MVP profile.)
I am very honored to be an MVP, and I give you my word that I will strive to continue contributing to the online and offline developer community in Israel and abroad.
This is also a good time and place to extend a great thank you to quite a few people who made this possible:
- Guy Burstein of Microsoft Israel (and a former MVP himself) who nominated me for the MVP program;
- Alon Fliess (C++ MVP) who has been a professional role model for me in my work;
- My current and past managers and coworkers at Sela Group, easily the best software company to work for that I can think of.
That’s it for now! I will continue my regular blog post series on everything technological in a couple of days.