On the main Infovark blog — the “business” blog — I talked about how we threw the first version of Infovark away. Not the core idea, of course, but we dumped our initial database schema and restructured most of the code. Here’s the technical story behind that tough call.
We’d originally intended to build Infovark on top of an existing Enterprise Content Management platform. At the time, Gordon and I thought of Infovark mostly as an alternative ECM interface targeted at knowledge workers.
When we decided to strike out on our own, we realized that we couldn’t take the underlying ECM services for granted anymore. We’d have to build things like object storage and version control ourselves. And if we were going to create a mini-platform for our own use, we were going to do it right, by golly.
Drawing on my background with real estate MLS systems, database reporting tools, and electronic records management, I began to work on an amazingly flexible data storage tier. Gordon dove into the data access classes. We began creating the sort of system we’d always wanted to use as highfalutin IT consultants.
The system allowed us to do neat things like define metadata for an object on the fly and roundtrip it to the database. We could access those objects in code, or via XML or JSON web services. We could define multiple views for each object. Every object was searchable using a full text index or via SQL. We were quite proud of ourselves.
It was an utter waste of time.
Technically, what we’d done was construct an entity-attribute-value database model. EAV systems are technically complex beasts designed to solve one tricky problem: client domains that don’t have well-defined metadata.
The EAV model was initially developed for clinical records systems. These needed to accommodate a huge array of possible symptoms, complaints, effects, diagnoses, and interactions. A particular patient is unlikely to have more than a handful of issues at a time, however. It’s a rich set of metadata with a sparse array of values.
Think of those questionnaires you get at your doctor’s office. They normally give you a long form with lots of checkboxes on it. You tick a few boxes here and there to record your or your family’s medical history.
An EAV model is designed to store that type of information efficiently in a standard relational database, at the cost of doing some major code gymnastics.
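To make the "code gymnastics" concrete, here's a minimal sketch of an EAV table in SQLite (this is illustrative, not Infovark's actual schema; the table and attribute names are made up). Every attribute of every entity becomes its own row, which stores sparse data compactly, but reassembling an ordinary record means pivoting rows back into columns:

```python
import sqlite3

# Minimal EAV sketch: one table holds every attribute of every entity
# as a separate (entity, attribute, value) row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE eav (
        entity_id  INTEGER NOT NULL,
        attribute  TEXT    NOT NULL,
        value      TEXT,
        PRIMARY KEY (entity_id, attribute)
    );
""")

# Sparse data: each patient records only the few attributes that apply.
conn.executemany(
    "INSERT INTO eav VALUES (?, ?, ?)",
    [(1, "name", "Pat"), (1, "allergy", "penicillin"),
     (2, "name", "Sam"), (2, "smoker", "yes")],
)

# The gymnastics: rebuilding one "row" per entity requires a pivot,
# here one MAX(CASE ...) expression per attribute we want back.
rows = conn.execute("""
    SELECT entity_id,
           MAX(CASE WHEN attribute = 'name'    THEN value END) AS name,
           MAX(CASE WHEN attribute = 'allergy' THEN value END) AS allergy,
           MAX(CASE WHEN attribute = 'smoker'  THEN value END) AS smoker
    FROM eav
    GROUP BY entity_id
    ORDER BY entity_id
""").fetchall()
print(rows)  # [(1, 'Pat', 'penicillin', None), (2, 'Sam', None, 'yes')]
```

Note that every new attribute you want to query adds another branch to the pivot, and the database can't type-check or index the `value` column in any attribute-specific way.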
Except in very limited circumstances, an EAV design is considered a database smell. Some go so far as to list it as a SQL design error. Joe Celko, author of several books on SQL, has an article on how to avoid the "EAV of destruction."
EAV remains popular though, perhaps because of its close ties to Steve Yegge’s universal Properties pattern.
In fact, there’s a whole slew of alternative databases designed specifically to help with EAV-shaped problems: column-oriented databases like Vertica, meant for data warehousing; XML databases like MarkLogic, for structured documents; and document stores like CouchDB and MongoDB, for semi-structured content. But I digress.
All of the hoops we jumped through to store arbitrary data were overkill. We were making the problem harder than it needed to be.
The only positive thing I can say about the effort was that it was an itch we had to scratch. We had to get the old stuff from our previous jobs out of our system before we could focus on Infovark.
While the features we built were exactly the sorts of things a consultant or systems integrator might want, end users couldn’t care less about them. We’d unconsciously built a product for ourselves, not for our customers.
Our customers didn’t want to define their own data structures. They didn’t want to learn about metadata or record types. They just wanted a product that helps them remember stuff. Figuring out what data to store or which columns to index was our job.
So while the Alpha build was incredibly cool from a techie perspective, it wasn’t easy or fun for the typical knowledge worker to use.
We needed to do our homework. What do our customers need to get out of a personal information wiki? What items will they want to reference later?
How we manage that information under the hood should be completely invisible to them. As far as they’re concerned, Infovark is an actual animal that lives inside their computer and helps them find interesting things.
Once we started looking at the problem from the user’s perspective, things got much simpler. We threw out the EAV approach and went with a much simpler data model. We gathered requirements to figure out the minimum set of data types a typical knowledge worker would need. Then we began defining templates that let users interact with those types in what we hope are natural ways.
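For contrast with the EAV approach, here's what the simpler direction looks like: one fixed table per data type, with columns chosen up front (again a sketch; the `contacts` table and its columns are hypothetical, not Infovark's real schema):

```python
import sqlite3

# The simpler data model: a conventional table per type, with the
# columns decided by us up front instead of defined by end users.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contacts (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        email  TEXT,
        phone  TEXT
    );
""")
conn.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
             ("Pat", "pat@example.com"))

# No pivoting required: one entity is one row, and indexing a column
# the product actually searches on is a one-liner.
conn.execute("CREATE INDEX idx_contacts_email ON contacts (email)")
row = conn.execute("SELECT name, email FROM contacts WHERE email = ?",
                   ("pat@example.com",)).fetchone()
print(row)  # ('Pat', 'pat@example.com')
```

Deciding which columns exist is exactly the "figuring out what data to store was our job" part: the flexibility moves out of the schema and into the product design.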
I guess it’s another example of the write big to write small principle. We built a general framework at first, capable of handling nearly any sort of object we threw at it, then drastically edited it back to hold the bare minimum needed.
It wasn’t that the EAV approach was wrong. It worked. We could have built on it. But it was a huge framework and it consumed a lot of our engineering effort. That’s time much better spent on things that our customers actually care about.
I wish we’d started with the simple solution. But I’m not sure we would have understood or appreciated it without trying the EAV approach first. We needed to get it out of our system.
And then we needed to get it out of our system.