June 2008 - Posts
I've been following Mono for quite some time now, and I'm amazed by the wonderful job they've done.
One of the more interesting libraries (in my POV) is Cecil.
This library is excellent for assembly analysis because it doesn't load the
assembly into the AppDomain (!) but rather parses the CIL byte codes; this way
you can handle multiple versions of the same assembly at once. In their FAQ you can see an example of
simple dynamic code emitting capabilities.
There's also a nice post comparing Mono.Cecil and System.Reflection.
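As a quick illustration of why not loading into the AppDomain matters, here is a minimal sketch using the current Mono.Cecil API (the API has changed since this was written; the assembly file name is a placeholder):

```csharp
using System;
using Mono.Cecil;

class CecilDemo
{
    static void Main()
    {
        // The file is parsed on disk - nothing is loaded for execution,
        // so you can open several versions of the same assembly at once.
        AssemblyDefinition asm = AssemblyDefinition.ReadAssembly("SomeLibrary.dll");

        foreach (TypeDefinition type in asm.MainModule.Types)
        {
            Console.WriteLine(type.FullName);
            foreach (MethodDefinition method in type.Methods)
            {
                // The CIL byte codes are available for inspection as well:
                if (method.HasBody)
                    Console.WriteLine("  {0} ({1} IL instructions)",
                        method.Name, method.Body.Instructions.Count);
            }
        }
    }
}
```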
Almost every application needs to store errors, debugging
output, warnings, audits and so on.
It's common to develop some kind of
library that will let us write all the logged information to one (or
more) repositories; the common ones are simple files and the event log.
There are two dominating open source libraries – one, more common in my
opinion, from Microsoft (the Logging
Application Block), and one ported from Apache's Java log4j – log4net.
I will focus on log4net (the Logging Application Block already has plenty of
documentation and examples across the web).
I'll briefly cover some of log4net's features:
Appenders are custom "writing classes" for various repository
types, such as:
Rolling file (the file name
changes as the log rolls over by size or date)
Each log "message" can be laid out in various ways:
In addition, log4net supports active filtering, multiple appenders, and the
ability to modify configuration at runtime without restarting the process (internally
using FileSystemWatcher) and more.
In my opinion it's worth checking out this framework...
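A minimal usage sketch, assuming log4net is referenced and configured via the application's XML config file (the class name here is a placeholder):

```csharp
using log4net;
using log4net.Config;

class LoggingDemo
{
    // The standard log4net pattern: one static logger per class.
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingDemo));

    static void Main()
    {
        // Reads the log4net section from the application's .config file.
        // ConfigureAndWatch can be used instead to pick up configuration
        // changes at runtime without restarting the process.
        XmlConfigurator.Configure();

        Log.Debug("Debugging output");
        Log.Warn("A warning");
        Log.Error("An error", new System.Exception("example"));
    }
}
```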
As promised, the *BONUS* – I wrote another appender that I believe many can use and benefit from
– I called it MultiThreadedBufferedAppender and it supports:
Buffered output (x writes
before an actual flush to file)
Rolling files, with an optional
process name prefix before the specified output file name
An automatic flushing thread
(in case you are in the middle of the buffer)
<appender name="MultiThreadedBufferedAppender" type="log4net.Appender.MultiThreadedBufferedAppender">
In the attached
file you'll find 3 files – FileAppender (I needed to expose its private stream
class and stream instance), MultiThreadedBufferedAppender.cs (the appender itself) and MultiThreadedBufferedAppenderTest.cs
(the NUnit tests for the appender). You'll need to download log4net from here (I used
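For readers who just want the flavor of the idea without the attachment, here is a sketch of a buffered appender built on log4net's built-in BufferingAppenderSkeleton. This is not the attached implementation – it omits the rolling files and the flushing thread, and the class and property names are illustrative:

```csharp
using System.IO;
using log4net.Appender;
using log4net.Core;

// BufferingAppenderSkeleton already handles the "x writes before an
// actual flush" part via its inherited BufferSize property; we only
// have to say what happens when the buffer is flushed.
public class SimpleBufferedFileAppender : BufferingAppenderSkeleton
{
    // Hypothetical property: where to write the buffered events.
    public string File { get; set; }

    protected override void SendBuffer(LoggingEvent[] events)
    {
        // Called once the buffer fills up (or on an explicit flush /
        // appender shutdown). Assumes a Layout has been configured,
        // since RenderLoggingEvent uses it to format each event.
        using (StreamWriter writer = new StreamWriter(File, true))
        {
            foreach (LoggingEvent e in events)
                writer.WriteLine(RenderLoggingEvent(e));
        }
    }
}
```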
As promised in my previous
post, I'm going to write about some of the insights I had while working on a
large scale project (man power, equipment, you name it).
I'll start with code generation: in
this project a lot of the code is automatically generated (about 70%-90%). As
such, there is a special team that creates these templates according to the development teams'
needs. I think it's a good idea because
creating templates requires a special type of developer, and when you have a
dedicated team for this job, the developers in that team constantly
improve their skills in this area of expertise and create high quality
templates in reasonable time.
On the other hand, when most of your code is generated for you and you only have to
implement or inject a function or two, it leads to poor code ownership and poor
understanding of the overall picture. When something breaks (and it does), the
developers of the higher application layers will have more
difficulty identifying the problem and nearly no chance of overcoming it.
Large scale applications usually take time, and as time goes by, the mobility of
the work force becomes a key factor in knowledge-loss scenarios. It's hard to
overcome this issue, so try to write more maintainable code – if most of your
code/logic is parsed/consumed at runtime and not compiled, it will be very hard to maintain,
especially when developers leave. So consider whether using a lot of configuration
and loading it dynamically at runtime is worth the maintainability penalty
– in most cases you can generate code from that configuration in a pre-build
event (or some other way with an equivalent effect); that way you'll be able to build it and run tests against
it. This level of compilable configuration will make your code more understandable
(especially if you put a remark in the header of the generated code
stating that it was auto generated).
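As a sketch of this "compilable configuration" idea, here is a tiny pre-build generator that turns a key=value file into a compiled class instead of parsing the file at runtime. All file and class names here are hypothetical:

```csharp
using System;
using System.IO;
using System.Text;

// Run as a pre-build event: reads settings.config and emits a C# file
// that is then compiled with the rest of the project, so typos in the
// configuration break the build instead of breaking at runtime.
class ConfigCodeGen
{
    static void Main()
    {
        var sb = new StringBuilder();
        sb.AppendLine("// <auto-generated> Generated from settings.config - do not edit. </auto-generated>");
        sb.AppendLine("public static partial class Settings");
        sb.AppendLine("{");
        // Assumes a well-formed key=value file, one entry per line.
        foreach (string line in File.ReadAllLines("settings.config"))
        {
            string[] parts = line.Split('=');
            sb.AppendFormat("    public const string {0} = \"{1}\";{2}",
                parts[0].Trim(), parts[1].Trim(), Environment.NewLine);
        }
        sb.AppendLine("}");
        File.WriteAllText("Settings.generated.cs", sb.ToString());
    }
}
```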
If you generate code, think
about who needs to implement it and how long the project will last (over long
periods you risk knowledge loss when people leave).
Consider using more OO instead
of code generation, and when generating, prefer generating partial classes
instead of regular classes.
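The partial-class advice in practice: the generator owns one file and the developer owns the other, so regeneration never overwrites hand-written code (names here are illustrative):

```csharp
// Generated file (Customer.generated.cs) - regenerated on every build:
public partial class Customer
{
    public string Name { get; set; }
}

// Hand-written file (Customer.cs) - survives regeneration because the
// compiler merges both halves into a single class:
public partial class Customer
{
    public override string ToString() { return "Customer: " + Name; }
}
```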
Try avoiding code
injections and other "hacking" methods; it becomes extremely
difficult to debug generated code (mostly because you didn't write any of it).
Don't overdo non-compiled
application logic; prefer compiled code that you can test and whose origin you
can understand.
Now regarding the builds: I
think that in the early stages of a project you tend to think more about
deployment than about builds (if at all); I disagree with this. You don't have
to be an agile "fan" in order to adopt CI (continuous integration).
You can achieve high flexibility in testing the product with all the latest bug
fixes and features, but only if you have CI (by dropping specific DLLs around and testing
them you can easily end up with a .NET-style DLL hell during integrated tests!).
As the project advances you have more projects/solutions to build, test and pack,
and all this takes a lot of very precious time. On more than one occasion we had forced unemployment
because developers were waiting for the completion of a specific build.
Merging between a lot of developers (and teams) can be very problematic (builds might break,
code might be overwritten or even forgotten to be merged) – you need to plan
and form a methodology based on the hierarchy of the teams, their development
responsibilities and the architectural design of the application.
Builds can take up most of the
time in the later stages. Plan towards fast incremental builds (by detecting that
no changes have been made to a specific assembly and not recompiling it).
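MSBuild already supports this kind of incremental build: a target that declares Inputs and Outputs is skipped when its outputs are newer than its inputs, so unchanged work is not redone. A sketch, with illustrative target, file and tool names:

```xml
<!-- The GenerateCode target runs only when settings.config is newer
     than Settings.generated.cs; otherwise MSBuild skips it entirely. -->
<Target Name="GenerateCode"
        Inputs="settings.config"
        Outputs="Settings.generated.cs">
  <Exec Command="codegen.exe settings.config" />
</Target>
```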
Form a methodology for
merging development content from various teams and developers.
And now a little
bit about management: in a
large scale application you (usually) have a lot of developers writing a
lot of (hopefully good) code. You need to do CRs (code reviews) constantly; developers
tend to write "temp" code during high-pressure periods, and that code
tends to become "const" as time passes. Eventually you even start joking about
the horrible code, and this leads to not-necessarily-good
decisions from a software engineering perspective, and so on. Refactor
wisely and you'll benefit from it in the long run.
When you have a lot of teams/layers/developers
you need some way to integrate all the features/changes/bug fixes
together. Integration teams are now becoming more popular, but I think it can be done
even better. In my opinion an integration team is made of "all knowing,
read-only" developers ("all knowing" from an application modules
and architecture perspective, and "read only" because they don't
modify the code and most of the time don't develop). They should be aware of
all the major features and special types of configuration in all the modules, perhaps
by integrating them as part of the architecture team or infrastructure team; this
way they will gain the application-specific knowledge more easily. Either
way, this team should be highly involved with the application deployment and integration.
Perform CRs; refactor when
needed, before the bad code becomes the de facto standard!
Integration between teams
is important – consider forming a special team for this.
I'll be writing about all kinds of interesting bits from my experience working on this project.
I'll be joining the MS DSL team in the UK by the end of this month, at last –
after a one month delay… My wife and I expect it to be very interesting.
In my current position I'm a Microsoft subcontractor on one of Elbit
Systems' command and control projects. I'm glad that all my employers (Taldor,
Elbit and Microsoft) support my decision to move forward and advance professionally,
and are really accommodating by letting me keep working even though I've already resigned
(some sort of a flexible resignation).
I learned a lot at Elbit. First of all, I started out in the IT industry and did
mostly IT-type projects – a WinForms or web frontend, a web service/WCF/COM+/other
transport layer, and business logic and data access layers with some sort of a
database at the back (usually SQL Server) – a standard data/user driven application.
At Elbit the story was completely different. The project I'm working on
is message driven (100% distributed) and I'm part of the SDK team (and the
extensibility team, but I'll write about this later on). This team consists of
highly skilled senior developers – when we had brainstorms it was really fun to
be able to receive great and quick feedback from various perspectives at once
– network, performance, configuration, distribution, product management and
deployment – all of which is crucial in order to be able to provide the best
solution for a defined problem. Besides the great team, we had a lot of
developer goodies – an implementation of a WCF-like layer (the project started on
VS2005 beta), a WF-like layer, a publish-subscribe mechanism, and distributed
configuration and state. Basically a developer's heaven!!!
I mentioned earlier that I was part of the extensibility/CM team; well,
at Elbit they created a specialized team that is responsible for builds and TFS/VS
extensibility. The main goal of this team is to provide the best conditions and
tools for the developers, so they can concentrate on developing a high quality
product rather than configuring builds or wondering what went wrong in case of a failure.
On the extensibility side, we created an extensibility framework
(very similar to the Power
Commands infrastructure) and various tools for developers – a merge utility
(by work item), deep history for an item (across branches) and much more.
In the next couple of posts I'll try to bring various takeaways from the
wonderful and educating experience I had:
Insights into development,
deployment and management of a large scale application.
MSMQ – how to detect
various problems and handle them.
Deep history tool.
Although it looks like a goodbye post, it's not goodbye yet (and even
when it is, it's more of a "we will probably meet at a convention, so see you there").