Tuesday, April 19, 2005

Dan Brown, Franklin W. Dixon, and the Recipe

Last night I finished my second (and quite possibly my last*) Dan Brown book. It's not that the book was bad; on the contrary, it was a real page-turner, keeping me awake late into the night twice in a row, hooked. It's just that I have had my fill of that thread of repetitiveness, not just in the nature of the writing, but also in the overall theme. You could quite literally take a few chapters out of one of his books and put them in the other one that I've read. Location aside (one is primarily set in France, the other in Italy), chunks of the books are interchangeable. They even share the same protagonist.

Of course there should be a degree of repetitiveness; they're both written by the same guy, after all. Don't bestselling authors like Stephen King, Tom Clancy, John Grisham and Wilbur Smith write with some amount of repetitiveness? They do. But it's the degree of consistency in overall characterization, plot and drama in 'The Da Vinci Code' and 'Angels and Demons' that had me reminiscing about my fifth-grade reading.

Franklin W. Dixon does not exist. It is a pen name used by a group of ghostwriters working for the Stratemeyer Syndicate. In fact, Edward Stratemeyer (the founder of the Syndicate) is a name that very few have heard of, even though his Syndicate was responsible for the bestselling series 'The Hardy Boys' and 'Nancy Drew'. Franklin W. Dixon was the pen name chosen for the Hardy Boys' authors.

What Franklin W. Dixon managed was a recipe for pre-teen/early-teen literary success. By the time I was 12, I think I had read the first 100 (yes, 100!) Mysteries and was reading the then-newly launched Case Files as they came out (more than 50 by the time I outgrew Dixon, I recall).
Apart from having characters who appealed to a teenage boy's macho sense, the Mysteries also contained a surefire plot, assured in the sense that everything would turn out fine in the end, for *all* the good guys. It's a well-known fact that the Hardy Boys Mysteries have done far better than the Case Files. The Case Files were more adventurous in intent, getting rid of, as in killing, Joe Hardy's girlfriend early on, and often showcasing murder and not-so-naive 'bad' things happening to our favourite characters. They flopped, I think, largely because they tried to bridge that gap between 11- and 13-year-olds, boys and girls who were too old to read the Mysteries and too young to be into Forsyth, Smith, King, Grisham, Sheldon et al. Apparently that gap was not bridged by the Case Files. I have a feeling that it is a chasm that will remain extremely hard to cross, as boys and girls hitting their teens grope and feel their way into (and shortly after, out of) larger, more substantive works.

Dan Brown has managed pretty much the same thing as Dixon did with the Hardy Boys Mysteries, in *method*, not content (obviously). And that is no small compliment, if you consider the wide readership that the Hardy Boys have enjoyed. What Mr. Brown has in his two bestsellers is another formula: a recipe based on conspiracy theories, a degree of raciness not seen since Forsyth's glory days, plenty of controversy, and, above all, readability. What's more, the cornerstone of this recipe is something that can't be copied easily, as his works are immersed in a deep scholarship/reading (note: I am not saying 'understanding' here) of Catholic history and traditions. To people who know nothing about the historical traditions of the Catholic Church, it provides information. Contentious information, with a healthy dose of spice and speculation. Combine that with a handsome Harvard hero, pretty European scientists/mathematicians, action scenes and twists to rival the world's best rollercoaster ride, and it's got you hooked.

And the results are astonishing. One of Brown's books is the best-selling adult fiction novel of all time. Even people I know who would normally favor either (1) Dumb and Dumberer, or (2) doing anything but reading, have been enticed. This augurs well for an age in which my aunt and I have long conversations about how little the younger generation reads these days.
____________________________

*Just like the excessive clichés and smart-ass replies (which are really quite smart, actually) that have become part and parcel of 'The West Wing', Brown's recipe has fallen out of favor with me by always delivering on its promises. In Brown's case, you almost know what to expect, even if it is the unexpected, *all* of the time; in the West Wing, you'll be hard-pressed to take a toilet break and not miss out on some of the quips.

Wednesday, April 06, 2005

Management Education and Practice Revisited

I’ve just read one half of Henry Mintzberg’s ‘Managers, not MBAs: A Hard Look at the Soft Practice of Managing and Management Development’. As the name suggests, it provides quite a damning (and mostly relevant) account of the current MBA education process, and rightly criticizes the mistaken notion that anyone with an MBA degree is fit to be a manager.

One of the essential theses of Mintzberg’s book is that there is a fundamental mismatch between what the MBA program currently teaches (in the United States especially) and what is expected by, and of, an MBA student/graduate. He argues quite convincingly that the MBA is essentially a degree that teaches about various business functions, but does not prepare anyone to be a manager at all. Yet when MBAs graduate, they are automatically enrolled into the ranks of management (when they are not becoming consultants or investment bankers), often in industries where they have no prior experience. It is quite telling that they are expected to manage, let alone lead, entirely out of context.

I haven’t read the other half of Mintzberg’s book (the part where he recommends an ideal educational alternative to develop ‘practicing managers’ in context), partly because the book is long, and also because Mintzberg cannot escape his inherent academic bent of mind, which shows in his writing. Not that his writing lacks succinctness; rather, he falls prey to his precise nature, often elaborating and substantiating arguments long after his point has been made and accepted. But I guess he has a lot of convincing to do, considering the popularity of the degree, so he’s erred on the side of covering his bases. And there’s nothing wrong with that.

But, for the first part of the book alone, I would highly recommend it. Those who’ve read and appreciated Mintzberg before wouldn’t be disappointed with this one either.

Cradle to the Grave

I'm going to reveal (I think for the first time in these pages) the nature of my job, or at any rate the nature of the industry that I find myself in. While my understanding of much of what goes on in this world is more or less superficial, I have a better-than-average understanding of the underlying dynamics of my industry. And not just from the industry’s point of view, but also, critically (I feel), from the perspective of the entire lifecycle of the industry's products, from development to usage effects to obsolescence; in short, from the cradle to the grave.

Out with it, then. My adopted industry is enterprise software (yes, I know, I'm Indian, and an Indian software guy today is the poster boy of India's savvy middle class), the kind epitomized by companies like SAP, Oracle, the erstwhile PeopleSoft and, increasingly, Microsoft. Having devoted the dawn of my career to the development of software, I nowadays focus much of my energy on the marketing of it.

For me, one of the distinguishing aspects of the software industry is its tolerance for risk. This propensity for risk manifests itself in the attitudes of almost all the industry's participants: large organizational buyers, who seem to think that a $10 million capital expenditure on software is going to make their companies agile, streamlined, knowledge-based and seamless (despite the glut of research and survey results suggesting that this scenario is extremely unlikely without first incurring a tremendous amount of organizational pain); the enterprise software houses, which promote products that are often a vision of the future; system integrators and VARs (the perennial middlemen of my industry), who profess expertise in integrating any software into any organization; and 'consultants', who come on board in large enterprise software implementations to offer their expertise at rates that can at best be described as highway robbery. No one seems to think that any software implementation is beyond them. No one seems to be deterred by the fact that they have no visibility into the underlying building blocks of the products they claim to be experts in.

And really, who, apart from the originating company (or rather the engineers who wrote the code), really knows *what* went into the software? Most software code is proprietary, which means that you and I know almost nothing about what went into it. Now, this is not new; surely the layman does not know what chemicals go into the strengthening of the rubber in our car tires, what composites go into the manufacture of aircraft bodies, or the precise treatment that goes into the manufacture of semiconductors. (One notable exception is food processing, which is under mandate to publish the ingredients of each of its products. Another, for obvious reasons, is Big Pharma.)

However, I can posit with some credence that the sheer consequence of failure forces the other industries above to go to great lengths to prevent such failures. Really, how often does an aircraft blow up, or a computer crash because of faulty memory hardware, or tires explode on freeways? (Firestone aside, and see what happened to them in the wake of the Explorer scandal.) Enterprise software failure* is not only far more prevalent, but also fairly costly in terms of organizational resources. Why is such failure so common, and why are organizations willing to risk it?

One could argue that this is because the enterprise software industry is in its infancy, at least compared to food processing, let alone the automotive and aviation industries. Hence, quality assurance and standardization processes are nascent, despite the ISO 9001 software quality standards and the industry-defining SDLC (software development life cycle). The problem with these standards is that they address the processes and context of development, and not the developed software itself, largely because software is inherently malleable, more open to creative, alternative solutions to the same problem. No two codebases that achieve the same results with the same user interfaces need be similar. Thus, attempts at standardizing software development tend to address everything except the underlying building blocks (the code), often manifesting themselves as voluminous documentation requirements instead of establishing the robustness of the code itself.

Another reason for this lack of robustness in software is what I call the ‘functionality trap’, which is also related to the dynamics of the software market. As established software makers consolidate (often by acquiring competitors), they are only too keen to augment their software’s functionality with that of their latest acquisition, in order to command a higher price premium for the new, expanded set of functions. This sets the trend in an industry seduced by ‘comprehensive’ functionality, with smaller software houses left with no choice but to ‘claim’ similar functionality to remain competitive. The problem is that either the glut of functionality does not mesh well together, because it is a case of combining fundamentally different products that were never intended to be part of the same offering (2 + 2 = 1.5), or, more worryingly, the comprehensive functionality is a marketing vision, not reality. With enterprise software vendors biting off more than they can chew, reality lags behind promise by a fair stretch. What’s worse, this results in a vicious cycle, with customers demanding ever more ‘hypothetical’ functionality than what is currently being offered in spirit. And software vendors, as we know them, never say no.

A third reason is related to the twisted path that enterprise software has taken since its inception. The origin of enterprise software was in material requirements planning (MRP), a class of software that debuted on the manufacturing plant floor. MRP did reasonably well because it was rooted in the needs of a specific industry (manufacturing) and was focused on a specific set of problems (forecasting and planning resources). MRP led to MRP II and subsequently to enterprise resource planning (ERP). ERP systems deviated from the original spirit of MRP, abandoning the focus on a specific industry and looking instead to take on the whole world, offering finance, supply chain, manufacturing and human resource modules that could supposedly be ‘bolted on’ to any organization irrespective of the nature of its industry.

(In other words: we’re giving you an ocean liner; it’s got everything, even a swimming pool on board. What? You don’t have oceans? You’re in a desert? You need to navigate rivers? You’re in Iceland? Whoops, sorry! Why don’t you hire our certified consultant or VAR to help you strip this ocean liner down to what you need?) You get the drift. Not surprisingly, the success rate of ERP implementations is appalling, arguably the worst among all classes of enterprise systems.

The fourth reason is organizational context, or the blatant disregard of it. In offering the gamut of software functionality that they do, enterprise system vendors can lead one to assume that the inbuilt processes the software supports are ‘best-of-breed’, one size fits all. Nothing could be further from the truth. Each organization has its idiosyncrasies, nuanced workflows, structures and modus operandi. It draws one’s attention back to the days of Henry Ford’s “You can have any color, as long as it’s black”. Another case for bringing those certified consultants in, whose certification is, by the way, only one-dimensional, since they don’t know jack about your organization…but we’ll leave that for another day.

So why take such risks? I think it comes from a fundamental misunderstanding of the role of software in a firm. Software is an enabler of organizational initiatives, and yet it has repeatedly been cast as either *the* initiative that serves as a panacea for organizational inefficiencies or the magic bullet for enhanced performance. Enterprise software does not fit onto an organization like a skin, independent of organizational context, people and structure. Software is as much a social phenomenon as it is a technological one, not a quick-fix Viagra pill for enhanced organizational performance. The herd mentality does not help either, with organizations proudly claiming that they run on SAP. That statement is unwittingly also a testament to what it took to be able to run on SAP!

Nowadays, it is heartening to see that organizations are far more measured in their approach to enterprise software acquisitions, even though they remain bullish about software’s ability to live up to its lofty claims. Today's software also works much better than that of the past, though often in a limited way, and it still does not accomplish all that it is supposed to in the manner it is supposed to. Organizations still need to be sure that their seven-figure investment in the latest application that allows any kind of data to be hot-synced with nifty wireless gadgets does not end up being the cradle to their grave.

P.S. One could claim that this tolerance for risk is what spurs innovation. Yes, to the extent that innovators need believers to invest in them. But even innovation should have bounds to its irresponsibility; responsible innovation does not cost almost $60 billion a year in the United States alone.

* Defining failure of software implementations is tricky. For the purpose of this discussion, a software implementation is said to have failed if it did not deliver its promised functionality within the previously agreed-upon timeframe and budget. Has enterprise software even had a 1% success rate?