Though the Microsoft marketing drums have begun beating to the rhythm of Visual Studio 2010, most of us workaday code monkeys are still using Visual Studio 2008. And while VS 2008 is a great IDE for development — especially once you add ReSharper — it has a few configuration quirks that drive me up the wall.
Most of these quirks are hidden from the typical developer and only appear once you try to package and deploy your software. It’s the dreaded Works on My Machine syndrome.
And if there’s one Visual Studio build configuration setting that causes me to scream in anguish, it’s the CopyLocal property.
When you add a reference to another .dll in Visual Studio 2008, some default settings get applied.
Here’s how the settings look after I added log4net to one of my projects. As you can see, the CopyLocal setting is set to True. Or is it?
If you move your solution to your build server, you might be surprised to find that CopyLocal isn’t actually copying the .dll. I was certainly surprised to find my builds failing for inexplicable reasons.
It took me a while to figure out that Visual Studio 2008 is a dirty liar when it comes to CopyLocal. Let’s have a look at our .csproj file, shall we? You can load the XML in the .csproj file by following these directions.
Ah, there’s the contents of our csproj file. And there’s our reference to log4net, but…
The CopyLocal setting isn’t there! Within the log4net reference, we should see an XML element called Private. It should look like this:
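Something along these lines, nested inside the reference — the HintPath shown here is just illustrative, since the actual path depends on where the assembly lives in your project:

```xml
<Reference Include="log4net">
  <HintPath>..\lib\log4net.dll</HintPath>
  <Private>True</Private>
</Reference>
```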
But it’s clearly not there. Uh oh.
And because it’s not there, it might work on your machine but not on other machines. Even though the Visual Studio IDE represents CopyLocal as a Boolean value, it’s actually a ternary value. Where Booleans have two states, usually represented as True/False, Yes/No, or 1/0 pairs, ternary logic has three states: True, False, and a third, unset state that leaves the build to fall back on its own defaults.
Yikes! That’s a classic interface failure mode.
It turns out that the default for the CopyLocal setting is… something not quite True and not quite False. If you read the documentation for how to set the CopyLocal property, it mentions the weird logic Visual Studio uses to determine what the “default” should be. Argh.
To fix the problem, we reload our project in Visual Studio again. Then we toggle the CopyLocal setting from “not quite True, exactly” to “False” and then back to “totally, literally True”.
With apologies to the Violent Femmes, when I say CopyLocal, you best CopyLocal, motherf***er!!!
And now it’s really, truly TRUE. Honest. Take a look at our .csproj file now.
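After the toggle dance, the reference finally carries an explicit Private element (again, the HintPath is illustrative):

```xml
<Reference Include="log4net">
  <HintPath>..\lib\log4net.dll</HintPath>
  <Private>True</Private>
</Reference>
```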
And there it is, the CopyLocal setting. The way it should be. The way it should have been all along.
I don’t know whether Visual Studio 2010 fixes this problem. I haven’t looked at the VS 2010 Beta release to find out. I’m too busy manually editing all my .csproj files to get our Infovark builds working. But I really, really, really hope that the folks at Microsoft have done something to address the problem.
Here’s the simple interface design rule: If it isn’t a Boolean setting, it shouldn’t look like a Boolean setting.
Unless of course, you want to make the pages of The Daily WTF.
Every now and then I find a comment in my code that I’ve completely forgotten about.
When I run across one of these nuggets, I guess I have the same experience as someone who’s kept a journal for months or years. I read the note and think, was that me who wrote that? Did I know that I would come back here again? Was I sending a message to my future self?
// TODO: Improve the quality of these tests. DTHRASHER 6OCT2008
// We need to verify that the filter string is being generated properly.
// We need to add tests to verify sorting behavior.
// We need to separate the ActivityParameters unit tests from the MetaIndex integration tests.
// We need a staff of 10 developers to help us finish this product! ARGH.
These always give me a chuckle.
And then I think, what a jerk that DTHRASHER guy is! I can’t believe he left me all this work to do!
Fortunately, I was able to expand my programming knowledge by catching up on my blog reading, and particularly by watching Greg Young of IMIS give a presentation called Unshackle Your Domain at QCon in June.
If you’ve ever had to build a high-performance system or one that has strict auditing and reporting requirements, this presentation is for you. Greg’s company deals with financial systems, and you can tell he’s learned many best practices the hard way.
While I doubt we’ll need an architecture as robust as he describes for Infovark, I recognize many of the problems and patterns he describes from my old jobs in software companies making records management software (auditing) and real estate systems (transactions and reporting).
The key insight is that for certain software solutions, it’s important to model state transitions as part of the problem domain.
I’d explain in more detail, but it’d probably be easier to just watch the presentation yourself.
To a human, “I once met a man with a wooden leg named Smith” is the start of an old joke. To a computer, it’s a compile error.
Class 'WoodenLeg' has no 'Name' property or the property is not accessible.
If only computers had a sense of humor…
There’s a great discussion on Jeff Palermo’s blog about entity validation patterns. Jeff takes the position that your domain objects (or entities) should not have validation logic “baked-in” to the class itself. Instead, you should separate the validation routines into separate classes that you can use to validate the object on demand.
There are two advantages to this approach. The first is that you can use different validators for the same object in different circumstances. For example, the validation you might perform prior to storing the domain object in a persistence layer could be different than the validation routine used to validate input from the GUI layer.
The second advantage is that separating the validation logic from the data itself makes it easier to work with ORM or serialization frameworks. Most of these frameworks encourage the use of Plain Old Objects, that is, objects free of the special attributes or interfaces that such frameworks sometimes require for mapping and serialization. (See Wikipedia’s article on Plain Old Java Objects, for example.)
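To make the separation concrete, here’s a rough sketch of the kind of design Jeff advocates — the class and member names here are my own invention, not his:

```csharp
using System.Collections.Generic;

// Plain entity: data only, no validation baked in.
public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// One validator per context; swap them in as circumstances change.
public interface IValidator<T>
{
    IEnumerable<string> Validate(T entity);
}

// Stricter rules before handing the entity to the persistence layer...
public class CustomerPersistenceValidator : IValidator<Customer>
{
    public IEnumerable<string> Validate(Customer c)
    {
        if (string.IsNullOrEmpty(c.Name))
            yield return "Name is required.";
        if (string.IsNullOrEmpty(c.Email))
            yield return "Email is required.";
    }
}

// ...looser rules for half-finished input from the GUI layer.
public class CustomerDraftValidator : IValidator<Customer>
{
    public IEnumerable<string> Validate(Customer c)
    {
        if (string.IsNullOrEmpty(c.Name))
            yield return "Name is required.";
    }
}
```

The same Customer instance passes the draft validator but fails the persistence validator, which is exactly the flexibility Jeff is after.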
Those are powerful arguments, but I’m still not convinced.
As a practical matter, Jeff’s advice is sound. It’s much easier to move business logic into the helper classes that surround your entity model. You get better tool support and more flexibility. But there are two things about his approach that bother me. Judging by some of the excellent comments on his article, other programmers are bothered by them as well.
First, stripping away behavior from your domain objects is a recognized anti-pattern in object oriented code. Martin Fowler calls it the Anemic Domain Model. It harkens back to the days of procedural programming, where data and business logic were strictly separated. If you’re an OO purist, this is a red flag.
From an OO perspective, the need to validate the same object in different ways suggests that what you actually need to do is create more objects. Rather than pass a stripped-down data-transfer object (DTO) all the way from your data storage layer up to your GUI, you should have a bunch of intermediate objects to help transition the data and enforce proper behavior.
But I’m not an OO snob. Writing a whole bunch of extra classes to move information between tiers in my application is a hassle. I’ve done it before, and we’re doing it now with Infovark, but for most projects it just isn’t justified. Especially if you have to wrestle with various application frameworks to deal with correctly modeled but more complicated domain objects.
The second objection I have is that if we follow Jeff’s advice, we have to accept that bad data will creep into our domain. Jeff knows that this is a hard sell. It’s why he titled his article “The fallacy of the always-valid entity.”
Whew. That’s rough. That requires a whole different programming mindset. What about the problem of Garbage In, Garbage Out? Can we really create programs robust enough to handle business objects that might, at any moment, contain meaningless gibberish?
I don’t know. For now, as appealing as Jeff’s idea is, I’ll stick to the always-valid approach. What do you think?