NDepend for Instructors

June 13, 2007


I’ve recently been offered a free NDepend Professional license in return for an evaluation post on NDepend.  Those of you not familiar with NDepend should probably familiarize yourselves with it: it is one of the most popular emerging code metrics tools available.  (As popular as code metrics tools can get nowadays, anyway.)

NDepend itself

My experience with NDepend goes back a couple of years, to when I used the freely downloadable version to show people on my team that their code was, well, sucky.  NDepend didn’t disappoint me: it pointed out all the “cyclomatic complexity”, “too many IL instructions”, “poor cohesion” and other issues their code really had.  Not that I cared much what the tool called them.  It was bad code, period.

Later on, I began to appreciate the power of NDepend’s integrated query language, CQL.  You simply have to like the idea of writing something along the following lines …

WARN IF Count > 0 IN SELECT TOP 10 METHODS WHERE NbLinesOfCode > 30 ORDER BY NbLinesOfCode DESC

… and having results flow back at you.  I also find NDepend’s dependency report (hence the name, I’d assume) quite useful for analyzing anything beyond the most trivial dependency scenarios.
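To give a flavor of what else CQL can express, here are a few queries in the same vein, aimed at exactly the issues NDepend flagged in my team’s code above.  The metric names (CyclomaticComplexity, NbILInstructions, LCOM) are the ones NDepend’s built-in rules use, as far as I remember; treat the thresholds as a sketch rather than gospel:

WARN IF Count > 0 IN SELECT METHODS WHERE CyclomaticComplexity > 20 ORDER BY CyclomaticComplexity DESC

WARN IF Count > 0 IN SELECT METHODS WHERE NbILInstructions > 200 ORDER BY NbILInstructions DESC

WARN IF Count > 0 IN SELECT TYPES WHERE LCOM > 0.8 AND NbFields >= 10 AND NbMethods >= 10 ORDER BY LCOM DESC

Each query either comes back empty (all is well) or hands you a concrete list of offending methods and types.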

Code metrics for instructors

One way or another, I’d probably be better off letting NDepend’s tutorials and screencasts speak for themselves.  Instead, I want to bring up an interesting usage scenario for NDepend.  I would love to hear from anyone who has used NDepend or another code metrics tool for this purpose.

I think code metrics tools such as NDepend can be spectacular for instructors teaching programming-related courses.  Granted, I’m biased on this point, because I’ve been teaching such courses for some time now (everything from C programming through Windows Internals, COM/COM+ and Win32 to all varieties of .NET technologies: .NET performance, C# and so on).  Anyway, let’s try to think of possible applications for this kind of tool:

  1. Comparing student submissions to identify plagiarism (using NDepend’s Compare Builds feature)
  2. Comparing student submissions to the “school solution”
  3. Analyzing student submissions using the standard code metrics (identifying potential problems even in large projects without having to look at every line of code)

And all that is available before you even consider the possibilities of writing custom CQL queries yourself!  It’s possible to imagine a perfect scenario in which your tests for student submissions would simply be the following:

  1. Run your automatic black-box tests on the student’s submission
  2. Subject the student’s submission to custom-tailored CQL queries, analyzing his/her code without having to look at every line (see the sketch after this list)!
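To make step 2 concrete, here is a sketch of what assignment-specific queries might look like, assuming, purely for illustration, an exercise where students are forbidden from using ArrayList and are expected to keep their methods short.  The IsDirectlyUsing condition is part of CQL; the type name and the thresholds are my invented assignment rules:

WARN IF Count > 0 IN SELECT METHODS WHERE IsDirectlyUsing "System.Collections.ArrayList"

WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 30 OR CyclomaticComplexity > 15

Run against each submission in turn, a handful of queries like these can flag the usual suspects before you read a single line of code.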

As I said, if you have any input on using NDepend or any other code metrics tool for these or similar purposes, I’d really love to hear from you in the comments.


One comment

  1. liorz, June 20, 2007 at 12:29 PM

    Hi,
    We are planning an event for bloggers and would like to send you an invitation.
    Can you please contact me with your email address at http://blogs.microsoft.co.il/blogs/liorz/contact.aspx
    Thanks,
    Lior Zoref
    Microsoft Israel
