
Jorge Luis Borges on Software Architecture

The following, from Jorge Luis Borges, reminds me of some software projects I’ve seen…

“.. In that Empire, the Art of Cartography reached such Perfection that the map of one Province alone took up the whole of a City, and the map of the empire, the whole of a Province. In time, those Unconscionable Maps did not satisfy, and the Colleges of Cartographers set up a Map of the Empire which had the size of the Empire itself and coincided with it point by point. Less Addicted to the Study of Cartography, Succeeding Generations understood that this Widespread Map was Useless and not without Impiety they abandoned it to the Inclemencies of the Sun and of the Winters. In the deserts of the West some mangled Ruins of the Map lasted on, inhabited by animals and Beggars; in the whole Country there are no other relics of the Disciplines of Geography.”

— Jorge Luis Borges

One can easily substitute a few words in the above, and it sounds remarkably familiar….

“.. In that Product, the Art of software architecture reached such Perfection that the UML diagram for the delivery of one feature alone took up the whole of a wall, and the functional spec of the project, the whole of an office. In time, those Unconscionable Specs did not satisfy, and the software architects wrote up a Specification of the Application which had the size of the Application itself and coincided with it point by point. Less Addicted to the Study of software architecture, Succeeding Generations understood that this Widespread Specification was Useless and not without Impiety they abandoned it to the Inclemencies of the Executives and of the Board. In the conference rooms of the West some mangled Ruins of the Specification lasted on, inhabited by middle managers and Contractors; in the whole Company there are no other relics of the Discipline of Software Architecture.”

— Not Jorge Luis Borges

What kind of monumental event does it take to get me to revive my moribund blog?

Seeing Leigh Dodds wearing a tie. Apparently his children thought it was pretty odd too.

[Photo: Leigh Dodds wearing a tie]

Oh, yeah. And having somebody point out in their “Web 2.0” presentation that you haven’t updated your blog in half a year. /Me=shamed.

Abulafia

Way back in 1990, when I worked at Brown University, I wrote a hypertext application for the Macintosh called “Abulafia” (named after the computer in Umberto Eco’s novel Foucault’s Pendulum). Recently I found some old Zip disks onto which I archived my Brown work when I left the university in 1995. I asked a hardware-magpie friend of mine if he had a way of reading old 100MB Zip cartridges, and he did. Amazingly, the old Zip cartridges were still accessible (thanks, Iomega) and, even more amazingly, I was able to find an old binary of Abulafia and run it under OS X’s Classic emulation mode (thanks, Apple).

Over the past few years I had grown self-conscious about my periodic foam-at-the-mouth old-man rants concerning the poverty of the web’s hypertext model and about “how, when I was a lad (way before that dang InterWeb), we did real hypertext.” I was happy, therefore, to discover that my memories had not deceived me and that Abulafia did some pretty kick-ass stuff. It seemed like a good idea to document some of this via screencasts, as I doubt Abulafia will be runnable for much longer- particularly not with Apple’s move to Intel processors.

A Little History

At the time I initially wrote Abulafia, the web was still an experiment at CERN and Apple’s HyperCard had practically co-opted the term “hypertext,” despite really being an application development environment. Abulafia was partially a response to HyperCard and was inspired by IRIS’s industrial-strength research hypertext environment, Intermedia. To a lesser extent, Abulafia was also influenced by Eastgate Systems’ “StorySpace”- a hypertext system designed for the creation of interactive fiction. And, of course, all of these systems were, in turn, inspired by Ted Nelson’s Xanadu.

My first version of Abulafia was – ironically – written in HyperCard. I demonstrated that version to CHUG in October of 1990. Response to the HyperCard version of Abulafia was good. In fact, there was a brief period when Apple considered bundling it on Macintoshes sold to universities. Unfortunately- this was about the time that Apple decided that it wasn’t in the software business (Doh!) and spun out its software (including HyperCard) into a company called Claris.

Some time during this period (chronology is hazy) I had grown sick of the limitations of HyperCard and rewrote Abulafia in the pseudo-C++ that was distributed with “Think C” at the time. In about 1992-3, I realized that the web was going to take off (I was a NeXT developer and had seen early versions of the CERN web client) and dropped Abulafia development to focus on creating various web tools.

The binary that I found on the Zip disks is the C++ version of Abulafia. I think I had to leave the source at Brown, and I have no idea what happened to it.

The Current State of Abulafia

I was amazed when I managed to copy the binary off of the old Iomega disks, and I was floored when I double-clicked on the application and it actually launched. I didn’t have any good example hypertext “collections”, and it was kind of a challenge to create a demo collection because I had to recreate all sorts of old formats: old versions of RTF, the old Apple “PICT” graphics format, etc. Funnily enough, I had the least trouble with multimedia formats because all of Abulafia’s multimedia calls were done via the then-nascent QuickTime (although I can predictably crash Abulafia and the entire Classic environment if I close any QuickTime window). Almost everything in Abulafia still seems to work. The only things that I can’t get to work are links to external applications, “automatic” dictionary links (possibly because they were hard-coded to look up words on the dictionary service I had running on my NeXT cube) and links into/out of particular spans of audio/video. It is also evident that there are a number of memory leaks in the app- this becomes painfully clear when I play QuickTime movies that are larger than the entire hard drive of the Mac II that I developed Abulafia on. Ah, memories…

Abulafia supported single-user “collections”, storing all links and document information in a special file within the collection folder, but it also supported multi-user collections, where documents were stored on network drives (AppleShare, I’m afraid) and link and document status information (e.g. document locks) was stored in a SQL database (Sybase, running on my NeXT cube). I can’t get it to work in multi-user mode anymore…

Explanation and Demonstration of Abulafia’s Features

(Warning: the demo screencasts are large QuickTime movies.)

Abulafia supported links to and from text, graphics, sound and video/animation. In Abulafia, “links” were defined as connections between two sets of “spans” in two documents. A span could be a selection of text, an area of a graphic, or the “in” and “out” points of a piece of sound or video. Spans were encoded in what I called “lightweight SGML” (XML didn’t exist back then). Links could also have arbitrary metadata associated with them.
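Since the original source is long gone, here is a purely hypothetical sketch (in Python, with names of my own invention rather than anything Abulafia actually used) of the kind of link/span model described above: a link connects a set of origin spans to a set of target spans and carries arbitrary metadata.

    # Hypothetical sketch of a link/span model along Abulafia's lines (my names, not the original's).
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Span:
        document: str   # path or id of the containing document
        kind: str       # "text", "graphic" or "media"
        start: object   # e.g. character offset, (x, y) corner, or in-point in ms
        end: object     # e.g. character offset, (x, y) corner, or out-point in ms

    @dataclass
    class Link:
        name: str
        origins: list   # several origin spans -> a "multi-headed" link
        targets: list   # several target spans -> a "multi-tailed" link
        author: str = "unknown"
        created: datetime = field(default_factory=datetime.now)
        metadata: dict = field(default_factory=dict)   # annotations, conditions, etc.

This architecture allowed Abulafia to support the following advanced linking features: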

Saved document properties: You could set specified documents to open automatically when a collection was opened. You could also save the size and position of documents so that they always opened in the same place.

Basic linking: Links to and from text, graphics, sound and video/animation. Note that links from sound, video and animation don’t seem to be working, but this was a pretty cool feature. You could, for instance, have a video open up a text document when a certain point of the video was reached.

Demo 1: Here I launch Abulafia, set a default overview image, link from the overview to a text document and then link from the text document to a photograph. Finally, I save the collection and quit Abulafia.

Overlapping links: The same span of text or area of a document could link to several places at once. Clicking on a linked span would provide a popup menu of the relevant links, whilst double-clicking on an active span would launch a dialog box listing the relevant links. Think of this as the ability to support overlapping HREFs.

Renaming links: Links could be renamed and remain persistent.

Link annotation: Links could be annotated. All links were stamped with their author’s name and time of creation. Link authors could provide short explanatory text for each link; this explanatory text would appear only in the link dialog box (not in the link popup). This was actually all based on a generic ability to attach any metadata to a link. Note that, under OS X, Abulafia seems unable to determine a username and defaults to “Jane Doe”. Hardly a surprise that this doesn’t work, as I was probably grabbing the username from the AppleShare settings.

Demo 2: Here I launch Abulafia again to show that the overview document now opens automatically. I follow the link to the text document and then add two overlapping links to the same place that I linked from in Demo 1. I then rename and annotate the links to help disambiguate them. This example shows a link into a QuickTime movie (I don’t close the movie because doing so crashes everything).

Asynchronous linking: When creating links, the author could start and end links in any order. This was really just an authoring convenience, but other hypertext systems of the time made linking documents a pretty tedious process.

Multi-headed links: Links that could originate from several spans within the same document. Handy, for instance, if you wanted to link from all of the examples of X in document A to a detailed explanation of X in document B. Under OS X all link types except for multimedia in/out points seem to still work.

Multi-tailed links: Links that targeted several spans within a document. For instance, you might want to link from the definition of X in document A to all examples of X in document B.

Demo 3: I open the text document again, and start several links. I then open the target documents and end the links. I go to the initial document again and start a “multi-headed link” from the several instances of the word “HyperCard”. I then rename the link to show that both anchors point to the same link. Finally, to demo a “multi-tailed link” I open a document that defines the word “Adjective” and I link it to four examples of adjectives in a not-very-original sample sentence.

Auto links: Links to queries. So, for instance, you could link to a search for all instances of the word “foo” in any document. Again, this feature is broken under OS X- possibly because Abulafia can no longer find my NeXT cube ;-).

Conditional Links: Links that would be active or inactive depending on certain conditions. The conditions supported were “link state”, “date”, “time” and “random”. They worked like this:

  • State: Links activate or deactivate depending on whether or not other links have already been followed. For instance, you could make a user follow a link to the introduction before they were allowed to follow links to more advanced topics. This could also be used by the interactive fiction crowd to create stories that changed as you navigated them.
  • Date: Links activate or deactivate depending on a date condition. So, for instance, you could have links that are only active before date X, only active after date X, or only active between dates X and Y. This feature was put in to support learning-management features (e.g. you cannot access the answers to the problem sets until after their due date). It could also be used for creating date/time-sensitive interactive fiction.
  • Time: Similar to the above. You could set links to activate only at certain times. For some reason, this feature doesn’t seem to be working under OS X.
  • Random: Links are given a 1 in N chance of being active. This feature was put in to support interactive fiction applications, and it doesn’t seem to be working under OS X.

Of course, you could combine conditions: a certain link might have a 1 in 2 chance of being active in the morning and a 1 in 100 chance of being active in the afternoon, but only after December 29th, 1996, and only if the user had already followed the link to the narrative about the butler. Kinda cool, if you were into that sort of thing.
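For the curious, here is a rough, hypothetical sketch of how that sort of combined condition check might be evaluated. The original logic is lost along with the C++ source, so the condition shapes and field names below are my own invention, not Abulafia’s.

    # Hypothetical evaluation of a conditional link (not Abulafia's actual code).
    import random
    from datetime import date, datetime

    def link_is_active(conditions, followed_links, now=None):
        """Return True only if every condition attached to the link is satisfied."""
        now = now or datetime.now()
        for cond in conditions:
            if cond["type"] == "state":      # other links must already have been followed
                if not set(cond["requires"]) <= followed_links:
                    return False
            elif cond["type"] == "date":     # only active between two dates
                if not (cond["after"] <= now.date() <= cond["before"]):
                    return False
            elif cond["type"] == "time":     # only active during part of the day
                if not (cond["start_hour"] <= now.hour < cond["end_hour"]):
                    return False
            elif cond["type"] == "random":   # 1-in-N chance of being active
                if random.randrange(cond["n"]) != 0:
                    return False
        return True

    # A simplified version of the butler example above: a 1 in 2 chance of being active,
    # but only after 29 December 1996 and only if the "butler" link was already followed.
    conditions = [
        {"type": "state", "requires": {"butler-narrative"}},
        {"type": "date", "after": date(1996, 12, 29), "before": date(9999, 12, 31)},
        {"type": "random", "n": 2},
    ]
    print(link_is_active(conditions, followed_links={"butler-narrative"}))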

Demo 4: I set a “state condition” on the link to the “Abulafia’s Features” document that makes the link active only if the user has already followed the “Abulafia Development” link. I also show how, if one wanted to, one could add a date condition to the link.

Persistent links: Both originating and target documents could be edited, and link integrity would be maintained as long as said documents continued to contain at least one origin and one target span. When all relevant origin or target spans were deleted, the link would be deleted (after warning the user).
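In terms of the hypothetical Link sketch above, the rule might amount to something like this (again, a guess at the behavior rather than the original implementation):

    # Drop spans that no longer resolve after an edit; a link survives only while it
    # still has at least one origin span and at least one target span.
    def prune_link(link, span_still_resolves):
        link.origins = [s for s in link.origins if span_still_resolves(s)]
        link.targets = [s for s in link.targets if span_still_resolves(s)]
        return bool(link.origins and link.targets)   # False => warn the user, delete the link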

Demo 5: I open the text document, unlock it, and edit it. Finally, I show that all the anchors still exist and all the links persist.

Anyway, I’m happy that I have finally been able to document Abulafia. Perhaps now I will stop frightening youngsters with tales of the hypertext systems of yore…

Two graphs that explain most IT dysfunction (Part II)

In Part I, I described two graphs that I think help explain much IT dysfunction.

[Graph: perceived benefit and perceived risk vs. technical expertise]

I also noted that, typically:

  • People in group A will often talk to and solicit advice from people in group C. (think VC or CEO talking to technical guru)
  • There are relatively few people in group C. (some companies might not have anybody internal in group C- they hire consultants or read expert opinion)
  • Most of the people who actually have to implement and maintain new technologies are in group B.

Clearly there are lots of gradations between A, B and C, so I am using the groups as a convenient way to refer to the extremes. In the case of group B, the extreme is people with relatively solid technical credentials who are very cynical about technology and very risk-averse. There are a few things that one often finds with group B:

  • Group B are averse to risk because, when things go wrong, they are on the front lines having to deal with the aftermath.
  • Group B are rewarded for keeping things predictable and consistent- change and the risk that goes with it are anathema.
  • Group A and C perceive group B as being, at best, passive aggressive and, at worst, obstructionist. Sometimes this is true.
  • Group B views groups A and C as being out-of-touch and/or irresponsible. Sometimes this is true.
  • Even though both groups A and C are frustrated by group B, if there is ever any contention between groups A and B, group C will usually align with group B, because they share technical DNA and members of group C were once (at least briefly) members of group B.
  • A large percentage of the technical world never progresses beyond group B.
  • Group B makes up most of the total IT salary cost of an organization.
  • Individual members of group C cost significantly more than individual members of group B.

In your typical organization, you will often find that the helpdesk, QA, network administration, and facilities groups are at the apex/trough of the risk/benefit curves in group B. I have to emphasize that this is not meant to slam group B. As shown above, there are often very good reasons for group B’s world outlook.

So why am I faffing on about this? Well, having spent about ten years managing technical groups, I have found that a very large percentage of my time has been spent dealing with the tensions described in this chart. The superficially similar world outlook of senior management (group A) and the technorati (group C) can often lead to trouble when both groups are largely dependent on a third group- the technological Eeyores of the world- group B. The danger comes when group B completely dominates an organization and makes it impossible to innovate, or, conversely, when groups A and C underestimate the legitimate concerns of group B. I’ve witnessed a few different strategies for trying to manage the technology dysfunction caused by the different world views of groups A, B and C, and they generally fall into the following categories:

Ignore group B: This is probably the most common strategy that companies adopt when they feel paralyzed by IT dysfunction (though calling it a “strategy” probably just dignifies what is really a muddle). The typical scenario is: a major project is estimated and planned by senior management (A), with very occasional sanity checks of isolated elements of the project with some senior architects (C) and virtually no acknowledged input or buy-in from the poor sods who are going to implement and run the thing. The project radically misses its deadline, and there are recriminations all around. Does this sound familiar? I bet it does.

Hire only group C: This is often cited as the “Google strategy”, but it is common to many tech startups. In order to avoid the disconnects caused by the differing world views of groups A, B and C- just hire people in group C. The problem with this strategy is that, while it may work very well in the initial stages of a startup, it doesn’t scale very well. Either you have to eventually hire people in group B (even if you outsource it), or you are going to have a lot of pissed off senior engineers ineffectually doing things like product management, sales, helpdesk and basic systems administration. I guarantee you that Google has recently had to hire people in the A and B camps. I know because I’ve met them. Eventually Google too will have to deal with the ABC dynamic.

Outsource: This is probably the second most popular strategy for dealing with IT dysfunction. Without going into the debates about the ethics of outsourcing, it is important to note that, in the context of dealing with the disconnects between groups A, B and C, outsourcing is simply a displacement activity (in the psychological sense- obviously this is literally true as well). It has long been noted by outsourcing specialists that you are only likely to succeed in outsourcing something that you already know how to manage. If you are having trouble managing a process, then outsourcing it will probably exacerbate your management problems. If a company outsources IT, it is still going to need an informed IT strategy, and that strategy will still have to reconcile the differing world views of groups A, B and C. Outsourcing IT might temporarily mask IT management problems by making mistakes less costly, but I suspect that, in the longer term, the cost will creep back up as management takes advantage of the “cheapness” to launch even more ill-informed and mismanaged IT projects.

Turn A into B: This is the most original strategy that I’ve seen for dealing with the problems inherent in the ABC dynamic. The CTO (group C) of a small technical company made a habit of hiring people from group A and intensively training them to handle group B tasks. The advantage of this, from the company’s point of view, was that they had a group with the apparent expertise of group B but with none of the cynicism and risk aversion typically associated with the group. The result was that, for a while, the company in question was incredibly agile and innovative, and the company’s clients loved dealing with the IT group because their answer to everything was “Sure, that can be done. It will only take a few days.” In the long term, however, this strategy had some pernicious side effects. The reason the team was initially able to deliver on its “it will only take a few days” promise was that they took shortcuts everywhere, didn’t build to scale and released alpha-quality code that they would then spend months repairing and rewriting. Clients who were at first thrilled by the quick turnaround grew increasingly disillusioned when almost every project had to be re-released N times before it really worked. And, of course, the cumulative effect of all of these problems was that eventually many members of the team developed much of the cynicism and risk aversion typical of group B (though, to be fair, I should note that they remained a remarkably optimistic bunch, considering what they went through).

Triangulate: This is the dialectical process, but the key to it is recognizing that you have three world views in the first place. Most companies just muddle along, knowing that there are different views but not clearly understanding what informs those views or how they relate to each other. The problem is even worse for companies whose IT departments are made up exclusively of group Bs- to them there just seems to be an unbridgeable gap between the non-tech and tech sides of the organization. Recognizing the three world views is the first step to being able to manage how much influence each group has over a company’s technical strategy.

The two graphs illustrating the relationship between technical expertise and attitude toward the introduction of a new technology are descriptive, not prescriptive- but they have always seemed to me to serve as a useful model against which to compare the interactions of an organization and its technology group.

Chickpea meets a cow- and doesn’t have one.

[Photo: Chickpea sniffs a cow]

Two graphs that explain most IT dysfunction (Part I)

Inspired by reading about other people’s blogging weaknesses, I’ve decided to finally get this one off the back burner and post it. I’m pretty sure that this isn’t original, but I started thinking about it way back in 1996 (pre-social-bookmarking) and I’ve lost my pointer to whatever influenced it. If anybody can set me straight, I’d appreciate it.

So here goes.

There are two graphs which, when seen together, explain a hell of a lot about various forms of dysfunction that you see in the technology world.

In the first graph, X represents relative “technical expertise” and Y represents the “perceived benefit” of introducing a new technology:

[Graph: perceived benefit vs. technical expertise]

The summary is that technical neophytes (A) tend to see high potential benefit in new technologies, while people who have a bit of technology experience (B) grow increasingly cynical about technology claims and can rattle off the names of technologies that they have seen over-hyped and under-delivered. The interesting thing, though, is that as people become really expert in technology (C), their view of the potential benefits of new technology starts to increase again. At the far right of this scale I’m talking about the real experts- the alpha-geeks of the world.

In the second graph, X again represents technical expertise, but Y represents the “perceived risk” associated with the introduction of a new technology:

[Graph: perceived risk vs. technical expertise]

Here the curve is inverted, but the basic pattern is the same. The neophytes (A) are blissfully unaware of the things that can go wrong with the introduction of a new technology. The tech-savvy (B) are battle-scarred and have seen (and possibly caused) countless disasters. The alpha-geeks (C) have also seen their share of problems, but they have learned from their mistakes and know how to avoid them in the future. The alpha-geeks understand how to manage the risk.

Now things get interesting when you map these two dynamics against each other:

[Graph: perceived benefit and perceived risk vs. technical expertise]

You see that neophytes in group A have essentially the same world view as the alpha-geeks in group C, but for completely different reasons. The trouble starts when you realize that most senior executives, venture capitalists and members of the popular press are in group A. At the other extreme, most R&D groups, architecture groups, independent consultancies, technology pundits, etc. are in group C. There are a few problems with this:

  • People in group A will often talk to and solicit advice from people in group C.
  • There are relatively few people in group C.
  • Most of the people who actually have to implement new technologies are in group B.

So you can start to see the problem.

In Part II I’ll talk some more about group B and discuss some of the classic patterns that emerge when A, B and C try to work with each other.

The lazyweb works

After my trash-talk about “uber-geeks”, Leigh Dodds picked up the “Subscribe To My Brain” challenge and produced this within hours. He even produced a button, which- as everybody knows- automatically turns beta software into a production service…

It turns out that Phil Wilson was working on a similar concept and Danny Ayers was thinking along the same lines.

And I am happy to say that the phrase “subscribe to my brain” seems to have turned into a mini-meme (Play Austin Powers theme tune here, start maniacally laughing).

I promised to subscribe to the brain of whoever wrote the requested service, but the problem is I already subscribe to Leigh’s brain- so the only reward I can offer him is a tin of French sweets sporting an appropriate name:

[Photo: the appropriately named tin of French sweets]

Oh, yeah- and I added a “brain button” (amongst others) to this site.

I want to subscribe to your brain

The other day I was talking to a former colleague and I was trying to explain how I have gradually switched to using an assortment of social content tools as my primary mechanism for finding relevant and authoritative information on the web. With these tools, I can subscribe to an assortment of RSS feeds produced by people who I trust and think of as authorities in their respective subjects. In short, I said, “I can subscribe to their brains”.

Or at least I can in theory… At the moment, for those of non-geekly tendencies, the practicalities of “subscribing to somebody’s brain” are a little daunting. If you have an RSS-aware browser or have installed one of the useful bookmarklets provided by the likes of Bloglines, then subscribing to individual RSS feeds is relatively easy. The problem is that I might be interested in subscribing to:

  • What person X is blogging
  • What person X is bookmarking on several social bookmarking sites (e.g. del.icio.us, CiteULike, Furl)
  • What person X is listening to (e.g. AudioScrobbler)
  • What person X is taking pictures of (e.g. Flickr)
  • What person X’s travel schedule is (e.g. iCal)
  • What books X is reading or planning on reading (e.g. Amazon wish lists)

The first problem is finding out what feeds person X provides. Most of the time you have to ask them, or search through the individual services for the person’s name. If you are dealing with a relatively clued-in person, you might be lucky enough to find links to their various feeds on their home page or in the margins of their blog. If you are dealing with an uber-geek, then you might find this information encoded in their FOAF file. All that seems to be missing is a button titled “Subscribe to X’s Brain”.

<lazywebrequest>That is what I want- a bookmarklet or service that makes a best attempt to find all of the feeds that a person is publishing. If it detects a FOAF file, it will just use that to locate the feeds; if it doesn’t find a FOAF file, it will make a bunch of educated guesses using a combination of the user’s name and the handy RESTful interfaces that most of these services support. Once it has located the feeds, it will create a new, appropriately named folder in your favorite RSS reader and populate it with the feeds. Bonus points if it merges the feeds. A gold star if it periodically checks other services and auto-detects new feeds from that person.</lazywebrequest>
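For what it’s worth, here is a minimal, hypothetical sketch (in Python, which is obviously not what a bookmarklet would be written in) of the discovery half of the request. The feed URL templates and the FOAF handling below are illustrative assumptions, not the real services’ interfaces, and the folder-creation and merging halves are left out entirely.

    # A rough sketch of "subscribe to X's brain" feed discovery (illustrative only).
    import urllib.request
    import xml.etree.ElementTree as ET

    # Purely hypothetical per-service feed URL templates, keyed on a username.
    FEED_TEMPLATES = {
        "weblog":    "https://example.com/{user}/rss",
        "bookmarks": "https://bookmarks.example.com/rss/{user}",
        "photos":    "https://photos.example.com/feeds/{user}.rss",
    }

    RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    RDFS = "http://www.w3.org/2000/01/rdf-schema#"

    def feeds_from_foaf(foaf_url):
        """If the person publishes a FOAF file, prefer whatever links it advertises."""
        doc = ET.parse(urllib.request.urlopen(foaf_url))
        # rdfs:seeAlso is one plausible place a FOAF file might point at feeds.
        return [el.get(f"{{{RDF}}}resource")
                for el in doc.iter(f"{{{RDFS}}}seeAlso")]

    def guess_feeds(user):
        """No FOAF file? Guess feed URLs and keep only the ones that actually respond."""
        found = []
        for service, template in FEED_TEMPLATES.items():
            url = template.format(user=user)
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        found.append((service, url))
            except OSError:
                pass   # no such feed on this service
        return found

The real tool would then hand the discovered feeds to your RSS reader of choice; how to do that (and how to merge the feeds for the bonus points) is left as an exercise for whoever earns the brain subscription.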

And as for the person who eventually writes this wonderful tool- I want to subscribe to their brain.

Look who showed up

Allen Renear shows up (unannounced) at SSP. Several late nights of ranting about trust-metrics, document models, management headaches and consulting possibilities. It has been ten years since we last conspired like this.
[Photo: Allen Renear at SSP]

Talk goes well

It looks like attendance exceeded 200. Naturally- I crashed the pips and the hotel was so keen to get us out of the room that we didn’t have time for questions. However, a small crowd gathered around me afterwards- they were complimentary, asked good questions and were not shaking their fists. So a good result, I think.

Conference room just before talk…

[Photo: the conference room just before the talk]