One year ago it became clear that Infovark had outgrown the Windows Communication Foundation (WCF).
We’d decided to use WCF because we wanted Infovark to provide web services, and we liked the fact that we could deploy WCF to client machines. Since WCF can self-host on top of HttpListener, a core part of the .NET Framework, we wouldn’t need to write our own System.Net plumbing or install Microsoft IIS.
But we’ve struggled with WCF for a variety of reasons. First, we wanted to use a REST model for our web services, and WCF’s support for REST architectures lags behind its SOAP support.
Second, there’s no easy way to return HTML from WCF. We tried transforming our XML with XSLT and returning the XHTML results as a Stream. This works, but the programming experience is frustrating.
Last, because of those two issues, we were left with a website that was far too rigid and programmer-like. It didn’t feel organic. The tool we’d picked was forcing us to compromise on our website design goals.
Infovark’s primary mission is to help human beings, not other computers. That means that the look and feel of the web interface should be our number one priority. Awesome web services are nice to have, but happy users are more important.
So for the past few months, we’ve been hunting for an alternative web server. We can’t use IIS because its footprint is too heavy. Most IT departments won’t allow us to install IIS on client machines.
We could use Apache. It has a nice embeddable version, but interacting with it via C# is tricky. We’d prefer something a little more Microsoft-native.
That basically leaves us with one commercial option and two open source options.
(If you know of other web servers worth investigating, please let us know in the comments!)
More important than picking an alternative web hosting framework for Infovark is the timing of the switch. We don’t want to impede future development.
As a stopgap, we might try plugging in the Spark View Engine to replace our current XML-XSLT-XHTML rendering path. Who knows? If it improves our web development flow, we might be able to keep our WCF base after all.
Jon Skeet has a bone to pick with humanity. In his talk at the London DevDays conference, he asserts that we humans have made it much too difficult for ordinary programmers to get simple things done with computers. It’s an epic fail.
He lists a number of relatively simple things that have been made needlessly complex.
If only we spoke binary, as the Maker intended, none of this would be a problem. Sadly, humans discovered fire and invented the wheel long before they fabricated transistors.
He’s posted the slides and transcript of his talk, to educate and amuse, on his coding blog.
On the main Infovark blog — the “business” blog — I talked about how we threw the first version of Infovark away. Not the core idea, of course, but we dumped our initial database schema and restructured most of the code. Here’s the technical story behind that tough call.
We’d originally intended to build Infovark on top of an existing Enterprise Content Management platform. At the time, Gordon and I thought of Infovark mostly as an alternative ECM interface targeted at knowledge workers.
When we decided to strike out on our own, we realized that we couldn’t take the underlying ECM services for granted anymore. We’d have to build things like object storage and version control ourselves. And if we were going to create a mini-platform for our own use, we were going to do it right, by golly.
Drawing on my background with real estate MLS systems, database reporting tools, and electronic records management, I began to work on an amazingly flexible data storage tier. Gordon dove into the data access classes. We began creating the sort of system we’d always wanted to use as highfalutin IT consultants.
The system allowed us to do neat things like define metadata for an object on the fly and roundtrip it to the database. We could access those objects in code, or via XML or JSON web services. We could define multiple views for each object. Every object was searchable using a full text index or via SQL. We were quite proud of ourselves.
It was an utter waste of time.
What we’d done, in effect, was construct an entity-attribute-value (EAV) database model. EAV systems are complex beasts designed to solve one tricky problem: client domains that don’t have well-defined metadata.
The EAV model was initially developed for clinical records systems. These needed to accommodate a huge array of possible symptoms, complaints, effects, diagnoses, and interactions. A particular patient is unlikely to have more than a handful of issues at a time, however. It’s a rich set of metadata with a sparse array of values.
Think of those questionnaires you get at your doctor’s office. They normally give you a long form with lots of checkboxes on it. You tick a few boxes here and there to record your or your family’s medical history.
An EAV model is designed to store that type of information efficiently in a standard relational database, at the cost of doing some major code gymnastics.
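To make the shape of the model concrete, here’s a minimal EAV sketch in Python over SQLite. (Our actual tier was C# over a different database; the table and column names here are invented purely for illustration.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per object; the object's *type* is just another value.
cur.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT)")

# One row per (object, attribute, value) triple -- sparse by design:
# attributes an object doesn't use simply have no rows at all.
cur.execute("""CREATE TABLE attr_value (
    entity_id INTEGER REFERENCES entity(id),
    attribute TEXT,
    value     TEXT)""")

# New metadata "on the fly": recording a never-before-seen attribute
# requires no schema change, just another row.
cur.execute("INSERT INTO entity (kind) VALUES ('patient')")
pid = cur.lastrowid
cur.executemany(
    "INSERT INTO attr_value VALUES (?, ?, ?)",
    [(pid, "name", "Jane Doe"),
     (pid, "allergy", "penicillin")])

# The cost shows up on the way out: reassembling even one object
# means pivoting attribute rows back into a record.
cur.execute("SELECT attribute, value FROM attr_value WHERE entity_id = ?", (pid,))
patient = dict(cur.fetchall())
print(patient)
```

The flexibility is real, but notice that the database itself no longer knows anything useful about a “patient” — every query, constraint, and index has to be rebuilt in application code. That’s the code gymnastics.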
Except in very limited circumstances, an EAV design is considered a database smell. Some go so far as to list it as a SQL design error. Joe Celko, author of several books on SQL, has an article on how to avoid the EAV of destruction.
EAV remains popular, though, perhaps because of its close ties to Steve Yegge’s Properties pattern (his “universal design pattern”).
In fact, there’s a whole slew of alternative databases designed specifically to help with EAV-shaped problems: column-oriented databases like Vertica, meant for data warehousing; XML databases like MarkLogic, for structured documents; and document stores like CouchDB and MongoDB, for semi-structured content. But I digress.
All of the hoops we jumped through to store arbitrary data were overkill. We were making the problem harder than it needed to be.
The only positive thing I can say about the effort was that it was an itch we had to scratch. We had to get the old stuff from our previous jobs out of our system before we could focus on Infovark.
While the features we built were exactly the sorts of things a consultant or systems integrator might want, end users couldn’t care less about them. We’d unconsciously built a product for ourselves, not for our customers.
Our customers don’t want to define their own data structures. They don’t want to learn about metadata or record types. They just want a product that helps them remember stuff. Figuring out what data to store and which columns to index was our job.
So while the Alpha build was incredibly cool from a techie perspective, it wasn’t easy or fun for the typical knowledge worker to use.
We needed to do our homework. What do our customers need to get out of a personal information wiki? What items will they want to reference later?
How we manage that information under the hood should be completely invisible to them. As far as they’re concerned, Infovark is an actual animal that lives inside their computer and helps them find interesting things.
Once we started looking at the problem from the user’s perspective, things got much simpler. We threw out the EAV approach and went with a much simpler data model. We gathered requirements to figure out the bare minimum set of data types a typical knowledge worker would need. Then we began defining templates that let users interact with those types in what we hope will be natural ways.
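The contrast with the EAV approach is easiest to see in code. A rough sketch, with invented type names (the real templates and fields differ): each record type becomes an ordinary, fixed shape that we decide up front, rather than a bag of runtime-defined attributes.

```python
from dataclasses import dataclass
from datetime import date

# A small, fixed set of concrete record types, chosen by us
# during requirements gathering -- not defined at runtime by users.
@dataclass
class Contact:
    name: str
    email: str

@dataclass
class Note:
    title: str
    body: str
    created: date

# With fixed shapes, storage, indexing, and display templates can all
# be hard-coded per type -- simpler to build, and nothing for the
# user to configure.
note = Note("Kickoff meeting", "Discussed launch plans.", date(2009, 6, 1))
print(note.title)
```

Everything the EAV model deferred to runtime (what fields exist, how they’re indexed, how they’re displayed) is now an ordinary design decision made once, by us.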
I guess it’s another example of the write big to write small principle. We built a general framework at first, capable of handling nearly any sort of object we threw at it, then drastically edited it back to hold the bare minimum needed.
It wasn’t that the EAV approach was wrong. It worked. We could have built on it. But it was a huge framework and it consumed a lot of our engineering effort. That’s time much better spent on things that our customers actually care about.
I wish we’d started with the simple solution. But I’m not sure we would have understood or appreciated it without trying the EAV approach first. We needed to get it out of our system.
And then we needed to get it out of our system.
In our past jobs, Gordon and I worked as part of larger technical teams. As developers, we never had to worry about the installation routine. It’s a highly specialized area of software development. We had people to do that job for us.
Fortunately, I’d had a little experience working with InstallShield, but that mainly involved stepping through the wizard and trying not to adjust settings I didn’t understand. (Which meant most of the settings.)
Working on Infovark, we’ve had to absorb a crash course on Windows Installer, the official Microsoft-sanctioned technology for deploying applications to Windows. If you want the compatibility logo on your product, you must use Windows Installer or a tool that generates Windows Installer-compatible .msi files.
Windows Installer has been around for a long time; version 1.0 shipped with Office 2000, back in 1999. In the years since, it’s gone through many changes and revisions. If you didn’t “grow up” with the technology, getting up to speed is a daunting challenge.
We figured our best bet was to pick a software package to help us build our MSI files. But since we didn’t know Windows Installer very well, it was hard to evaluate which one to use.
The best place I found for information about Windows Installer and setup and deployment tools is InstallSite. Finding my way around was a bit tricky, but there’s lots of good information there.
One series of articles I found there describes the slow evolution of User Account Control and per-user settings from Windows 95 to the present. It helps put all the hacks and kludges in context.
This long history is what makes creating good software installation routines on Windows difficult, especially if you want to support multiple versions of the operating system. The differences between Windows XP and Windows Vista are particularly large.
So if you’re planning to deploy your software to the desktop, make sure to include a lot of time in your development budget for research, testing and troubleshooting. It’s harder than you think.
I’ve been a bystander in the Software Craftsmanship movement so far. I’m not sure why. I like the idea of software craftsmanship. I’m just not sure what it means in practice.
I’ve read the manifesto and considered signing it. I agree with the aims expressed there. I’ve also read the blogs of those skeptical or confused about the manifesto. I can’t decide what to do about it.
The best overview of the software craftsmanship idea is Mark Levinson’s Call to Arms article on InfoQ. It describes software craftsmanship as a response to the typical coding grind, where just-barely-good-enough software is shoveled out the door as rapidly as possible.
I understand and appreciate the feeling; I’ve been there. I know how much it hurts to release bad products that frustrate customers. But I’m not sure the software craftsmanship community has a solution to that problem yet. It’s early days, though, and over the past few months I’ve discovered some interesting ideas about software craftsmanship.
On the subject of continuous learning, I recently watched Mary Poppendieck discuss deliberate practice in a webcast on InfoQ. The summary: To become an expert in any field, you need to seek out coaches that teach the skills you need and spend focused time practicing those skills. Continuous learning is about gathering resources, understanding the material, and gaining experience through repeated effort.
After listening to these programming mavens, I remembered something I’d read a while back on Coding Horror about code kata. Dave Thomas, of pragmatic programming fame, coined the term code kata for exercises designed to improve programming skills. He has a list of code kata, but other code kata catalogs have appeared as well.
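For a concrete sense of what a kata looks like, here’s one pass (in Python, for brevity) at Dave Thomas’s “Karate Chop” kata: implement a binary search that returns the index of a target in a sorted list, or -1 if it’s absent. The point of the kata isn’t this particular solution; it’s re-solving the same small, well-specified problem repeatedly, in different styles, until the moves become second nature.

```python
def chop(target, values):
    """Return the index of target in the sorted list values, or -1."""
    lo, hi = 0, len(values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if values[mid] == target:
            return mid
        elif values[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid - 1   # target lies in the lower half
    return -1

# The kata comes with a specification to check against; making these
# (and the edge cases) pass is part of the exercise.
assert chop(3, []) == -1
assert chop(1, [1, 3, 5]) == 0
assert chop(5, [1, 3, 5]) == 2
assert chop(4, [1, 3, 5]) == -1
```

Tomorrow you might write it recursively, or with slices, or test-first; the repetition with variation is the practice.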
So maybe there’s hope for the software craftsmanship movement after all. We’ve moved from talking about abstract goals to ideas we can put into practice. There’s a slow consensus building as to what a professional looks like and how one becomes a professional. That’s encouraging.
Ultimately, software craftsmanship isn’t about signing a pledge. It’s about delivering quality product.