Recently I started receiving the following error when trying to edit TFS 2010 build templates on my machine:
Type ‘IBuildDetail’ is not defined
This was exceptionally weird as TFS was running happily for quite a while. Turns out the culprit was the Developer Preview edition of Visual Studio 11 that I installed alongside the existing VS2010.
Anyway, in order to solve the problem:
1. Open your build template (the .xaml file) in code mode. (From the Source Control Explorer, right-click the file, choose 'View With…' and select 'XML (Text) Editor'.)
2. Locate the first line, which starts with <Activity mc:Ignorable="sap" … >
3. Replace the partial assembly references with fully qualified ones. For example, change:
assembly=Microsoft.TeamFoundation.Build.Client"
to:
assembly=Microsoft.TeamFoundation.Build.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
and do the same for the Workflow assembly:
assembly=Microsoft.TeamFoundation.Build.Workflow, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
That is, use the full assembly name from VS2010. Otherwise, the workflow editor gets confused between versions from VS2010 and VS11.
In this series of blog posts, I will show how to write a simple report that makes use of the integrated business intelligence (BI) capabilities that are available in TFS 2010. If you're unfamiliar with how to use this great feature, this series is for you! This first post will describe the final "product" we'd like to construct.
What We’re Aiming For
Our end result for this series is the aptly-named Bug’s Life report:
For each specific bug ID, the report shows us the various states that the bug went through, along with the date, reason and the person responsible for setting this state. Here are some things that I plan to cover:
1. How to pull information from the TFS 2010 relational data store
2. How to use the Report Builder tool to create a report and store it in a Report Definition Language (.rdl) file. In particular, we’ll see how to:
- Design the report
- Add a parameter for the user to fill in
- Change the appearance of a field based on its contents
3. How to upload the report to a TFS 2010 Team Project, so it is accessible from the Team Explorer window
Why Write a Report?
The question arises – why bother? We can view the history for each bug directly through Team Explorer. The answer is fairly simple – using reports we can precisely decide how to present the information. Proper use of charts, graphics and color can help us identify trends and bring out important characteristics of our data.
Reporting in TFS 2010
TFS 2010 is based on the enterprise-grade SQL Server 2008 database. As such, it also makes use of SQL Server Reporting Services (SSRS), the component that enables us to combine a visual layout with the relevant data to produce a useful report. This means that we can produce meaningful visualizations of the huge amount of data that we put into TFS. In fact, you can access SSRS reports directly from the Team Explorer window using the Reports node:
In the next post, we’ll start the actual work. The first thing to do is to design the actual query: we’ll see how we can pull the data out of TFS based on user input. Stay tuned!
Having attended BUILD in Anaheim, CA last month, I was looking forward to trying out the various technologies that were introduced: Windows 8, WinRT + XAML and of course, running it all on the Samsung tablet. This post is a summary of my experiences. In general, things went well – the tools are adequate, the APIs pretty much cover what I needed, and the end result is close (though not identical) to what I had in mind.
The first issue I encountered was setting up a comfortable development environment. While the Samsung tablet is a fully capable PC (Intel Core i5, 4GB memory, 64 GB HDD), I had a hard time working on it directly. I connected my own keyboard and mouse (instead of the Bluetooth keyboard that came with the tablet), but the screen was simply too small when working with Visual Studio and especially the XAML editor.
I then tried accessing the tablet via Remote Desktop. While this worked and was usable, it’s not an option I would recommend. The main problem is one that was mentioned in one of the Big Picture sessions at BUILD – Metro-style apps are full-screen, so when only a single monitor is available, it’s either the IDE or the application. The same session also discussed the fact that you can debug in an ‘application window’, which is a small window that does not take up the entire screen. Unfortunately, I couldn’t get this functionality to work – whenever I would click a text box, the application would jump to full-screen. The conclusion – separate screens for the IDE and the application.
At this point, it was pretty clear that I could use the tablet as my application screen – especially since that’s the only way that I can debug with touch capabilities. So I would write in one instance of VS11 (on a separate Win8 machine) and deploy to the tablet using VS’s remote debugging feature. I tried running the Win8 Developer Preview in VirtualBox on my laptop, but was not happy with the performance, especially where the display was concerned (nowhere near the ‘fast and fluid’ that was expected). Finally, I installed the image on a desktop machine (6 GB memory, quad-core i7) where it dual-boots with an existing copy of Windows 2008. This worked fine, and supplied a very pleasant development experience.
I wanted to write something useful that would take advantage of the tablet and the touch interface. Finally, I opted to use the API exposed by Wordnik, a great on-line dictionary and a site I'm very fond of. Using Balsamiq, I came up with this initial design for a Metro-style UI:
Things that I tried paying attention to (taking into account that I'm a, shall we say, *very mediocre* designer):
- Aligning everything to a grid
- Making use of font sizes for effective information display
- Showing images along with definitions (probably using the Flickr API)
Writing the Application
When it came down to actually writing the application, I decided to pass on the existing templates in VS, and write everything from scratch. Since this is basically just an exercise, I wanted to do everything myself and get a better feel for the tools and the platform. In order to do so, I used the code and .xaml files from the Metro-style app samples and basically just copied stuff across (testing and experimenting as I go).
Some interesting points came up during the development:
- Rather than using the System.Net.WebClient class which is available in .NET 4, I needed to use System.Net.Http.HttpClient instead.
- Integrating Windows 8’s Search contract was as easy as handling the OnSearchActivated event in the Windows.UI.Xaml.Application class (see the sketch after this list).
- I’m not familiar with Expression Blend at all, so I ended up writing the XAML manually. Therefore, I missed out on all the new Blend goodness that was integrated to VS11. And the XAML I DID write is probably quite bad.
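As a rough sketch of the Search contract hook-up (ShowDefinitions is a hypothetical helper of mine, and the Developer Preview signature may differ slightly from what's shown here):

using Windows.ApplicationModel.Activation;

partial class App : Windows.UI.Xaml.Application
{
    protected override void OnSearchActivated(SearchActivatedEventArgs args)
    {
        // args.QueryText holds the term the user typed into the Search charm
        ShowDefinitions(args.QueryText);   // hypothetical navigation helper
    }
}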
Things I did not do and would like to experiment with in the future:
- Use the async APIs – right now, all the web access is done via synchronous APIs. In order to really achieve ‘fast and fluid’, it’s necessary to use the async methods of HttpClient (a sketch follows this list).
- Create a more immersive tile – perhaps displaying a random word with a single definition.
- Learn Expression Blend and write a better UI!
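For reference, the async variant might look roughly like this (the Wordnik URL and method name are placeholders, not the app's actual code – the real API call also requires an API key):

using System.Net.Http;
using System.Threading.Tasks;

private async Task<string> FetchDefinitionsAsync(string word)
{
    var client = new HttpClient();
    // placeholder endpoint – substitute the real Wordnik definitions URL
    string url = "http://api.wordnik.com/v4/word.json/" + word + "/definitions";
    return await client.GetStringAsync(url);   // awaits instead of blocking the UI thread
}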
Some screenshots of the deployed app:
All in all, it was a fun experience. The entire application took around 4 hours to write, from the minute I created the project until I had something that I was reasonably happy with. During that time, I did not experience a single crash of VS, although it did freeze once for about a minute. I used my existing development knowledge, and apart from the new APIs there wasn’t much that I needed to learn. I personally am quite fond of the Metro design style (and have been since it appeared in WP7), so I’m looking forward to working on more applications of this kind in the future.
Just came back from the first day of BUILD. I have to say – wow. Just one big WOW. This day has absolutely blown my mind with all the new technologies and philosophies.
Rather than describing the actual details of the various sessions – and in particular the keynote (see one such description here) – I’d like to give my high-level impression of what I saw. First and foremost – Windows 8 is simply better. It uses less memory, runs fewer background processes and carries less overhead. That means you get more room for your apps on the one hand, and a better user experience on smaller, less powerful devices on the other. And speaking of devices – Win8 runs on everything. And I mean everything. The keynote showed an incredible variety of hardware that supports Windows 8, from handheld tablets all the way to fantastic water-cooled machines with a trio of high-end NVidia graphics cards doing DirectCompute workloads. And everything that ran on Windows 7 will continue to run.
And speaking of bridging the gaps – I was completely blown away by the concept of contracts. This has got to be one of the most useful metaphors I have ever seen. The ability to seamlessly communicate between disparate applications in such a useful, and most importantly, intuitive manner means that the experience of using a home computer is now at a totally different level. I honestly believe that we have reached a stage where a layperson can be just as productive with their PC as a computing professional (I am, of course, not referring to tasks such as software development).
The Windows 8 UI is absolutely stunning. Microsoft has really done an incredible job with Metro. I am very fond of the Windows Phone 7 UI, so this looks like the natural step forward. The UI is sleek, professional and responsive. I’m really looking forward to seeing what people will be doing with these UI capabilities.
And of course – I now have a super-cool Samsung tablet to play with…
Some pictures from the conference and the Sela guys:
Tomorrow is the second day – I’m especially looking forward to the Visual Studio and ALM sessions.
It is currently 6:45 in the morning in Anaheim, California. The Sela delegation (19 experts!) is all here and we are all anxiously waiting for the conference to begin. We had a great steak dinner last night (thanks Sasha!) to kick things off on the right foot.
The conference center is huge – which is probably a sign of things to come. In about an hour we’ll start heading out, hopefully to get some breakfast and grab some good seats. Currently, we have very little knowledge of what the agenda is – but it’s definitely going to be great.
Well, the time is finally here – tomorrow night I’m going to BUILD 2011 in Anaheim, CA. Looks like it’s going to be a blast! Apart from the fact that there will be a whole bunch of us from Sela (orange shirts galore!), I’m looking forward to seeing some great new technology. If you’re coming, do drop me a line so we can arrange to meet in person.
See you all there, and if you’re not coming – see you when we get back.
This morning we received some good news: Microsoft has released the TFS 2010 training kit that Assaf and I co-authored. Happy days!
You’ve probably heard of TFS and in particular of TFS 2010, the latest released version. It’s a powerful tool packed with features – but where do you begin? The Training Kit is designed to help you understand what TFS 2010 can do for your organization and software process. It applies to all members of the team – developers, testers, business people and of course, managers.
So if you thought TFS 2010 was just the next version of Visual Source Safe – this kit is for you!
Download the Introduction to TFS 2010 Training Kit here.
Today Assaf and I gave the talk above in front of ~20 people. People were quite receptive – and I believe we got them to understand not only how you do things in TFS 2010, but also why. Some of the things we talked about:
- What are work items and how to use them (including customizations and links)
- How to properly build a branching plan
- Where and how to apply automated builds and CI
Unfortunately, we didn’t have enough time to cover TFS 2010 reports in detail – which means another talk is in order!
A big thank-you to our audience – it was great having you!
Have you ever wanted to use the TFS 2010 data warehouse for retrieving the unit test pass/fail count for a specific build? Maybe you wanted a report showing the number of unit tests that ran during each nightly build last week. In any case, if you wanted this or something similar, you soon found out an interesting fact – picking out only unit tests from the warehouse is not trivial.
Here is the query for doing this:
SELECT BuildName AS 'Build',
       COALESCE([Passed], 0) AS Passed,
       COALESCE([Failed], 0) AS Failed
FROM
(
    SELECT Outcome, BuildName, COUNT(*) AS TestCount
    FROM dbo.FactTestResult ftr INNER JOIN dbo.DimTestResult dtr
        ON ftr.ResultSK = dtr.ResultSK
    INNER JOIN dbo.DimBuild db
        ON ftr.BuildSK = db.BuildSK
    WHERE dtr.TestTypeId = '13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b'
    GROUP BY Outcome, BuildName
) AS SourceTable
PIVOT
(
    SUM(TestCount) FOR Outcome IN ([Passed], [Failed])
) AS PivotTable
ORDER BY Build ASC
Some interesting points about this query:
It’s SQL, so obviously you can only run it on the relational data warehouse (TFS_Warehouse). What’s not so obvious is the TestTypeId field – seems it only exists in the relational store, and is not carried over into the OLAP cube. Sorry, MDX fans…
Where does the TestTypeId Guid value come from? This is the key to the entire query – it is what allows us to pull out only unit tests (and only MSTest unit tests, at that). The source of this value is the HKLM\Software\Microsoft\VisualStudio\TestTypes registry key and its subkeys. Using these, you can adapt the query to pull out specific types of tests (web, manual, generic, etc.).
This query will return results for all builds in your system. Add additional conditions to the WHERE clause in the inner query (the one with the joins) if you want to limit it further; one possible example follows.
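For instance, to limit the results to last week's nightly builds, you could append something like this to the inner WHERE clause (the column names here are assumptions based on the TFS 2010 warehouse schema – verify them against your own instance):

    AND db.BuildStartTime >= DATEADD(day, -7, GETDATE())
    AND db.BuildName LIKE '%Nightly%'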
I spent the last week in Montreal, attending (and speaking at) DevTeach 2011. This was my first time in Montreal (and in Canada in general), and I quite liked the city. It’s a North American city, to be sure, but does remind me of Europe. I also got to meet an old friend I haven’t seen for a number of years, so that just added to the fun.
DevTeach was great. The content was good, and what’s more, I got to meet a bunch of cool and interesting new people. This is the thing about conferences – what goes on in the hallways is just as engaging as what goes on in the sessions.
First and foremost, I’d like to thank the people who attended my session on Product Development Using Specifications and BDD – you guys were an excellent audience! I thoroughly enjoyed giving the talk and hope this will help you in your current and future projects.
Second, I’d like to mention a number of talks that really impressed me. Jerry Weinberg, in “Secrets of Consulting”, talks about ‘jiggling’ a system to ‘unstick’ it. While I don’t think I’m ‘stuck’ on anything in particular (well, at least I certainly hope I’m not!), the ‘jiggling’ I received on some topics has really inspired me to dig deeper and explore further.
These talks, in no particular order, are:
One speaker talked about the role of the architect in the organization. He eloquently explained why and how an architect’s job differs from that of a developer, and why a good development group needs both. His examples centered around different forms of abstraction, and I found the use of paintings and artwork to illustrate these examples very well thought-out.
William R. Vaughn described the ReportViewer control available with Visual Studio 2010 in quite some detail. I was aware of this control and had even played with it before, but had no idea of just how powerful it is. In fact, I never knew that it was capable of rendering reports purely on the client side – this opens up a lot of new possibilities for doing interesting things. What impressed me the most was his example of an LOB application whose UI was based almost completely on the ReportViewer control.
Greg Young talked about Command/Query Responsibility Segregation (CQRS). I’ve heard/read about CQRS before but never really ‘got it’. Hearing it from Greg – the guy who actually came up with the term – himself, including the reasoning behind the various concepts, was truly an epiphany and made the pieces fall into place. I’m definitely looking forward to applying CQRS in a real-life project and seeing how that turns out. In addition, I got to talk to Greg in person and got a live demonstration of his Mighty Moose testing tool – super cool! This is definitely one I will be trying out.
Edwin talked about Self-Service Analytics with SQL Server 2008 R2. This talk centered around using PowerPivot with Excel 2010 and just how powerful this tool is (to quote Edwin – “PowerPivot is the Analysis Services engine brought into Excel”). In addition to his being a great guy (Edwin and I had a long talk after his session), his presentation was entertaining and educational at the same time. Being an ALM guy and working with TFS 2010 and its data warehouses, I will be applying this new knowledge very soon.
I attended two talks by Joel Semeniuk – ‘Dash of Kanban’ and ‘Want Better Estimates? Stop Estimating!’. Once more, these talks helped me get a better handle on concepts that I was already familiar with. In particular, I had never really seen the ‘big picture’ with Kanban before – I believe I have a better grasp of it now. Joel is a fantastic speaker – he has his audience laughing, participating and asking questions left and right. His sessions were FUN.
Steve talked about data mining using SQL Server 2008 Analysis Services. I have an interest in machine learning, so I had a vague idea of the capabilities of SQL Server in this area, but had never tried it out. Steve showed how SSAS data mining can be used in financial applications and did a great job of explaining how it works and how to apply it to other domains. Here too, I look forward to putting it to use with the TFS 2010 data warehouses.
All in all – a very productive and enlightening week, and well worth the long trip. Hope to make it back to DevTeach next year!
One of the hallmarks of a good development organization is the free and unobstructed flow of information. It is therefore not surprising that many such organizations choose to expose up-to-the-minute project information to all employees using some form of visual dashboard in a public gathering place. For example, last week I attended a presentation where HP Software employees showed some screenshots of their appropriately-named ‘Kitchen Portal’, which shows iteration data such as the number of bugs, as well as other interesting statistics.
If you have TFS (preferably version 2010 – but 2008 is also good), then you have a fully enterprise-ready business intelligence (BI) environment which can be used to report on the state of your project. In this post, I will not delve into the many interesting and insightful things you can do using this capability. Rather, I’d like to point out a useful tool which can help you get started in showing this information on a dashboard. This tool was written by Offir Shvartz and is available here. Offir’s company is not in the business of writing dashboard software, and so he graciously agreed to make the code available to all.
The tool – named Generic Dashboard – is a WPF application for displaying either live web pages or PowerPoint-like slide templates with free text. Offir’s team uses it to periodically cycle through a multitude of up-to-date reports generated from TFS, as well as to display important milestone goals. The interesting thing I observed is the fact that people use the dashboard as a common (and by implication, a reliable) source for discussions and planning. The data is dynamic and changing, so any plan based on this data needs to adapt as well.
Thank you Offir!
Wow, it’s been almost a year since I last posted… Lots of things have happened in that time! In particular, I’ve changed positions within Sela, from development consulting to ALM. This means that I get to work alongside such talented people as Shai Raiten, Assaf Stone, Baruch Frei and Oshry Horn.
As a developer, I’ve always been one of those people who was interested in the means just as much as in the end product – I can’t remember how many times I've talked about unit tests, branch plans, pair programming, etc., etc. Fortunately, I was lucky enough to be in the right places at the right times (read: I had open-minded, far-seeing managers who thought that my views sounded reasonable) and got to implement some of these ideas in practice. It is a great feeling when a development or QA team suddenly ‘gets it’ and implements a time-saving procedure: morale soars, product quality increases and the end user is (usually) happy. Being able to do this is just as satisfying as writing the code itself. Well, to me at least. So now I get paid to do this full-time. Hooray!
I firmly believe that a good ALM practitioner needs to be a developer or tester themselves. You need to ‘feel the pain’, so to speak – and by feeling the pain you can understand what will make team members’ lives easier, not harder.
Since Sela is a Microsoft ALM Gold Partner, a lot of my time in the last months has been spent with Team Foundation Server 2010. TFS is great in that it brings together source control, automated builds and work item tracking (that is, bug tracking and requirements/task management) into a coherent package. In a previous job, I spent a large amount of time setting up custom solutions for integrating HP’s Quality Center with Rational’s ClearCase and the open-source CruiseControl.NET continuous integration (CI) server. These are all fine products, but focusing on the plumbing means not a lot of time for focusing on actual process improvements. If you haven’t played with TFS 2010 yet, I strongly urge you to – it might give your development process a boost. The Sela ALM team will be discussing this – and more – in the upcoming Dev Days. If you’re there, do come and say hello!
Code for this post is available here
As promised, here is an example of how to use the ReactiveQueue<T> from RxContrib. Imagine that you have a stateless WCF service that needs to handle a large number of client requests – perhaps, a distributed logging service. Clients need to send messages as quickly as possible and then be on their way. It is up to your service to then do something with those messages. So, assuming we have a logging service, we know several things at this point: it’s a singleton (i.e., InstanceContextMode = InstanceContextMode.Single in the ServiceBehavior attribute) and since requests are independent of each other, we do not need to maintain any shared mutable state. This last point hints at the fact that perhaps our service can be single-threaded (add ConcurrencyMode = ConcurrencyMode.Single to the ServiceBehavior).
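As a minimal sketch of such a declaration (the service and contract names are placeholders, not the sample's actual code):

using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class LoggingService : ILoggingService   // ILoggingService is a placeholder contract
{
    // single instance, one request at a time – no locking needed
    public void Log(string message)
    {
        // process the message
    }
}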
The Sample Code
The sample code included with this post contains 4 projects:
Client – This is a simple WCF client for accessing our service. Nothing special to say.
Common – This project contains our service interface, and the request objects used by the server. I chose to encapsulate each processing task carried out by the server in a separate object, since the service itself does not need to know the specifics of each operation. For this example, we have a ShortRequest which returns immediately (after supplying some diagnostic output) and a LongRequest which takes ~3 seconds to execute (you may think of these as perhaps a write request and some kind of analytic processing request). The client sends a total of 100 requests to the server, randomly mixed. (A rough sketch of the request objects appears after this list.)
ReactiveQueueServer and WCFServer – discussed below.
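The request-object idea, sketched with assumed names and bodies (the actual sample code may differ):

using System;
using System.Threading;

public interface IRequest
{
    void Execute();
}

public class ShortRequest : IRequest
{
    public void Execute()
    {
        Console.WriteLine("Short request handled");   // returns immediately
    }
}

public class LongRequest : IRequest
{
    public void Execute()
    {
        Thread.Sleep(3000);                           // simulate ~3 seconds of work
        Console.WriteLine("Long request handled");
    }
}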
Using the Plain WCF Server
If we use the two attributes above, we indeed get a singleton and single-threaded service. Of course, each client message is handled in sequence – so the client does not finish until all requests have been fully processed. On my machine this looks like this:
In fact, if I do nothing with the binding parameters, I get a timeout at some point – since the service remains unresponsive for longer than the binding’s timeout allows.
Obviously, what we would like to do is to receive the client requests as quickly as possible and free the service for the next request. We can do this by modifying the service to only add the requests to an in-memory queue, rather than run them immediately. In this way, we are then free to dequeue the requests at our leisure and still have a responsive service. This is a variation on a pattern known as Half-Async/Half-Reactive.
Using the ReactiveQueue Server
The ReactiveQueue server has the exact same WCF configuration as above – it’s a singleton and single-threaded. However, in this implementation we use a ReactiveQueue instance to queue requests, and subscribe an IObserver instance to it (which, for simplicity, is implemented by the ServiceImpl class itself). The ReactiveQueue is then responsible for dequeueing the requests – on a separate thread – and calling the OnNext method of the observer. Our implementation of the method performs the actual execution of the request.
One of the options that the ReactiveQueue gives us is the ability to decide how we’d like the OnNext method to be called; this is specified when subscribing in the ServiceImpl class of the ReactiveQueueServer project.
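Roughly, the wiring might look like this (a sketch only – the exact ReactiveQueue signatures in RxContrib may differ, and IRequestService/IRequest/Submit are placeholder names continuing the sketches above):

using System;

public class ServiceImpl : IRequestService, IObserver<IRequest>
{
    private readonly ReactiveQueue<IRequest> _queue = new ReactiveQueue<IRequest>();

    public ServiceImpl()
    {
        // each OnNext invocation runs in a separate TPL task
        _queue.Subscribe(this, ConcurrentPublicationBehavior.Async);
    }

    public void Submit(IRequest request)
    {
        _queue.Enqueue(request);   // return to the client immediately
    }

    public void OnNext(IRequest request) { request.Execute(); }
    public void OnError(Exception error) { }
    public void OnCompleted() { }
}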
The ConcurrentPublicationBehavior.Async value tells the ReactiveQueue to run each invocation of the OnNext method in a separate TPL task. Running the client with the ReactiveQueue server rather than the plain server gives us this:
Here each separate request is being handled in a separate task (that is, a separate thread from the CLR thread pool) – and we see that the end of the batch is composed of only long requests. This makes sense, since the tasks handling the short requests have all completed earlier.
Coming back to our hypothetical logging server, though, we see that this way of doing things does not fit our needs. We’d still like to make sure the service is available for further requests ASAP (i.e., the queue must still be there) but the requests must be processed in the same order in which they arrived. This is easy to do – simply change the value of the ConcurrentPublicationBehavior enum to Sync rather than Async. This has the effect of using a single task for calling all OnNext invocations, so we get the required order back. Note, however, that this thread is distinct from the thread on which the ServiceImpl runs and receives messages so that we still get high throughput.
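In terms of the sketch above, that is a single-argument change:

_queue.Subscribe(this, ConcurrentPublicationBehavior.Sync);   // one task calls OnNext, preserving arrival order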
In this post we saw how to use the ReactiveQueue class for implementing a simple WCF service with high throughput, using the Half-Async/Half-Reactive pattern. We have a simple and intuitive programming model for the service where no threading synchronization is necessary and we encapsulated the actual processing into stand-alone request objects. Applying this pattern to your own services should be fairly easy.
Bnaya has released a new version of the RxContrib project which includes the Reactive Queue – an RX interface above a message queue. This version has a provider for CCR ports as the underlying queue mechanism. This provider supports a Port<T> for OnNext notifications, and PortSet<T, Exception> for both OnNext and OnError notifications.
To compile, make sure that you have a version of the CCR available (you can download it from here) and change the appropriate references in System.Reactive.Contrib.Ccr and System.Reactive.Contrib.UnitTests.
I’ll be doing some posts in the near future on how to use the Reactive Queue – rest assured that we’re dog-fooding it quite extensively in our current project. Finally, if you’re using the Reactive Queue and are willing to share your experience – we’d love to hear from you!
The Rx world is most definitely on fire! Check out Jose’s short-and-sweet implementation of an Event Aggregator using Rx. And apparently, we’ll soon see it in Caliburn. Let the good times roll! (Also, check out Bnaya’s introduction and RxContrib.)
As the immortal Hannibal Smith once said - “I love it when a plan comes together”.