IT doesn’t matter

That’s “IT” as in “Information Technology”… doesn’t matter. This article by Nicholas Carr first appeared in the Harvard Business Review in May 2003. He reprinted it here on his blog Rough Type in January of 2007. It’s fairly lengthy (8 parts) but worth the investment.

Here’s one of his key punchlines: the commoditization of IT…

“Behind the change in thinking lies a simple assumption: that as IT’s potency and ubiquity have increased, so too has its strategic value. It’s a reasonable assumption, even an intuitive one. But it’s mistaken. What makes a resource truly strategic — what gives it the capacity to be the basis for a sustained competitive advantage — is not ubiquity but scarcity. You only gain an edge over rivals by having or doing something that they can’t have or do. By now, the core functions of IT — data storage, data processing, and data transport — have become available and affordable to all. Their very power and presence have begun to transform them from potentially strategic resources into commodity factors of production. They are becoming costs of doing business that must be paid by all but provide distinction to none.”

It’s an extremely well-written article, and very thought-provoking. I’ve been thinking a great deal lately about enrollments in Computer Science programs around the world, and wondering what role this sort of commoditization of information services plays in all of that. Obviously it may have a greater impact on programs like Information Systems or Information Technology, but all of these academic programs seem to be rising and falling together to some extent.

“The death of computing” & “Demise of computer science exaggerated”

The following two articles appeared on the British Computer Society web page over the past few weeks.

“The death of computing” by Neil McBride

“Demise of computer science exaggerated” by Keith Mander

The first article was written 22 January 2007 by Neil McBride, a principal lecturer in the School of Computing at De Montfort University, Leicester, United Kingdom.

The second is a response, posted in February 2007 by Keith Mander, Chair of the Council of Professors and Heads of Computing in the United Kingdom and Professor of Computer Science at the University of Kent.

McBride takes a generally dim view of things…

So where does that leave computing departments in universities? Do we pull up the drawbridge of the castle of computational purity and adopt a siege mentality: a band of brothers fighting to the last man? Or do we recognise that the discipline is dying if not actually dead, and breathing shallowly?

Hmmm… Seems as silly as suggesting that because cars are a commodity, we no longer need automotive engineering departments or automotive engineers (or mechanical engineers for that matter). Does the manner in which we train software engineers need to shift? Absolutely. Is the field shutting down? Not a chance. We haven’t even begun to scratch the surface on all the software systems the world will ultimately know. And the remaining pool of fundamental research questions in computing is huge and growing.

Prof. Mander’s rebuttal is well-written and thoughtful…

Suggestions that the teaching of computer science in universities is about to fade away are premature. While it is certainly true that applications for undergraduate courses in computer science have fallen by about 50 per cent since 2001, its value to the graduate remains as strong as ever, and will be so for the foreseeable future.

For those playing in the software industry at any level, I recommend these two articles — the first because it is thought-provoking, the second because it is balanced and rational. The issue of declining enrollments in CS is huge, especially for those of us who make our living training those majors. But the issue is even more critical to an industry that is once again expanding rapidly while university enrollments languish.

Oops! Technician’s error wipes out data for state fund

Back to some of the principles that Don Norman writes about in “The Design of Everyday Things”: “I was among a group of social and behavioral scientists who were called in to determine why the control-room operators had made such terrible mistakes [at Three Mile Island]. To my surprise, we concluded that they were not to blame: the fault lay in the design of the control room. Indeed, the control panels of many power plants looked as if they were deliberately designed to cause errors.”

Obviously these aren’t completely analogous situations. But a computer technician accidentally reformats disks containing data pertaining to $38 billion in oil dividends for Alaskans? How does that happen?

Whose fault is this? The poor schmuck who accidentally toasts off $38 billion of data on both primary and backup disks? Or the software system that let him do it?!

Norman identifies a rule of thumb for door design — if you have to label it “push” or “pull” you designed it wrong. Here’s another rule of thumb — if you have to warn people not to push that one big red switch… maybe the switch shouldn’t be out in the open.
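
To make the rule concrete in software terms, here’s a minimal sketch in Python of the kind of guard that was apparently missing. Everything in it is hypothetical (the function and volume names are mine, not details of the Alaska system): the destructive call refuses to run unless the operator retypes the target’s name, and backup volumes are fenced behind a separate procedure entirely.

    def reformat_volume(volume_name: str, confirm_name: str) -> None:
        """Erase the named volume -- destructive and irreversible.

        The operator must retype the volume name; any mismatch aborts.
        This is the software equivalent of a guard plate over the
        big red switch.
        """
        if confirm_name != volume_name:
            raise ValueError(
                f"confirmation {confirm_name!r} does not match "
                f"{volume_name!r}; aborting"
            )
        if volume_name.endswith("-backup"):
            # Backups get their own deliberate procedure; one command
            # should never be able to reach both copies of the data.
            raise PermissionError("backup volumes cannot be reformatted here")
        print(f"reformatting {volume_name} ...")  # stand-in for the real erase

    # A slip of the finger now fails loudly instead of destroying data:
    try:
        reformat_volume("fund-primary", "fund-primay")  # typo caught
    except ValueError as err:
        print(err)

The point isn’t this particular check, of course; it’s that the design, not the operator, absorbs the slip.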

I’d love to know more about just what happened to allow a single computer technician to innocently wreak such devastating havoc. I appreciate the fact that they weren’t holding the technician responsible, or conducting a witch hunt. But you’d think that someone (software designers? software architects?) should be responsible, and probably at the design level.

Stop surfing, make friends, Indian students told

“Stop surfing, make friends, Indian students told”

This is a fascinating article from a few days ago. Apparently administrators at several of the Indian Institute of Technology (IIT) campuses are becoming concerned with the toll that Internet addiction is taking on their students.

“The old hostel culture of camaraderie and socializing among students is gone. This is not healthy in our opinion,” said Prakash Gopalan, dean of student affairs at IIT-Mumbai.

They’ve consequently imposed policies aimed at redeeming the misguided souls.

Starting Monday, Internet access will be barred between 11 p.m. and 12.30 p.m. at IIT-Mumbai’s 13 hostels to encourage students to sleep early and to try and force them out of their “shells”, Gopalan said.

“There has been a decline in academic performance and also participation in sporting, cultural and social activities has gone down,” he said.

In case you aren’t familiar with the IIT system in India, these are top-notch academic institutions, where the very brightest students in India prepare for their careers. If this were America, think Stanford, Harvard, MIT, Berkeley (forgive me if I neglected your favorite top-ranked academic institution in this randomly ordered short list… okay, Carnegie Mellon).

I haven’t decided whether I agree with the new policy or not, but I absolutely agree that Internet addiction is increasingly taking a toll on the youth of the world, including bright engineering students who are otherwise committed to making something of their lives.

In our household we’ve gone through various iterations of Internet lockdown because of deleterious effects on the children as perceived by the parents. Obvious concerns include pornography in all its forms, Internet predators and the usual cast of foul characters. But even more innocent pastimes, like online gaming, can produce addictive behaviors.

Maybe what I’m about to say is hypocritical coming from a Computer Science professor, but I’d much rather my teenager were outside riding a motorcycle than inside surfing the net or playing World of Warquest (apologies to FoxTrot) or Run-escape (hyphen added by the author). I’ve watched individuals outside my own home throw away their lives, their education, their careers, their futures living 16 hours a day in a virtual world, while almost entirely ignoring the actual world around them.

Computers and technology play an important role in improving the quality of our lives. I enjoy the heck out of solving interesting problems in the software field and playing with the latest gadgets. But there’s a time and a place for appropriate use of technology, and it’s not all the time, and it’s not every place.

Never solve a personnel problem with a policy

This is the inaugural axiomatic observation for a reason. Violation of this fundamental principle is frighteningly widespread, and its effects are broadly demoralizing.

The common form looks something like this: You’re an employee in an organization (company, university, whatever) and someone in said organization (call him S) makes a pretty bad mistake (M). Management’s next move should look something like this: S is taken to the woodshed, a proper willow switch is selected, and appropriate counsel regarding M is applied to S’s hindquarters. Other employees become aware that S now walks with a limp, and hence become aware of said woodshed experience, thus learning from the mistakes of others.

The problem seems to be that this course of action requires making an exception out of S in a way that may make management uncomfortable. It also requires deep individual thinking about the particular circumstances surrounding S’s behavior (M). I suppose in our modern litigious society it also exposes management to a lawsuit if they don’t treat everyone precisely the same way in the same circumstances. The trouble is that the notion of identical circumstances is largely a myth. Each occurrence M_i of the mistake really has to be dealt with on an individual basis in order to actually be fair. That means time to discuss and deal subjectively, which makes many people nervous.

At this point, the easiest (read “most spineless”) way to manage is to craft a policy (P) which is then imposed on everyone in the organization, even though 99 out of 100 never had a problem with M. The real problem is when P has negative effects on innocent individuals, or (as is often the case) on the organization as a whole.

True story to back this up. Long long ago I was a manager in a company that will remain nameless to protect the guilty. I received a memo (which I still have in my files) pointing out that some managers were overspending their catering budget. Therefore… (drumroll please…) effective immediately, all catering will require a Vice President signature in order to be approved. I swear I’m not making this up.

What’s wrong with this picture? Gosh, where to start? How many managers were guilty? None that I personally knew. That’s not statistically pure empiricism, but I seriously doubt the problem was widespread. But even if it was, it’s not like they didn’t know who was blowing their budgets on donuts! Proper behavior would have been to systematically bring each offending manager (or both of them, or all three, whatever) in to the Vice President, who would select a suitable willow switch, etc.

Instead we now create the following scenario. A manager like me, who had never overspent any budget on anything ever, was punished in the following manner. First of all, do you know how much overhead there is in getting a VP signature on anything?! Could take weeks. So if I want donuts at my team meeting this Friday, I should have submitted a request in writing a week or so ago. But since the new policy was put into place, the VP’s desk has sprouted a signing pile of biblical proportions.

All that aside, we have to keep two things in mind. First, we’re talking about five bucks! Second, management is actually asserting with a straight face that the VP can make rational, informed judgments about 30 instances of $5 worth of donuts on several dozen teams over multiple weeks better than the managers of those teams could have?! Still makes me crazy just thinking about it.

After the first couple attempts, I found that I’d wait a month, and then get 3 approvals in the same day, two of which were too late because the meetings had already happened. We began to figure out that with the influx of paperwork to the VP of Signature Bottlenecks, the stack would more or less sit until the overall numbers looked good, and then he would sign the entire stack (or the top half of it) in one mad flourish.

Ultimately my course was fairly straightforward. On Fridays, on my way to work, I would just go by the local donut joint, buy a couple dozen donuts myself, out of my own pocket, and bring them to team meeting. As a manager I felt that donuts were a positive thing in a meeting, and the amount of pain inflicted on me and my team by the system exceeded the $5 it cost me to just pony up and buy the refreshments myself.

But of course, it was the beginning of the end for the anonymous company. What does it suggest to anyone involved in the communal effort of constructing world-class and industry-leading software products when a few goofy excesses by an isolated pod of individuals lead to such a wildly inefficient and universally punitive policy for all? Suggests that there may be a leadership void somewhere between you and the top.

In machine learning, when an algorithm becomes too adapted to a particular training set, they call that overfitting the data. It’s bad, because while the algorithm smokes the training data, it ultimately fails (often badly) in the general case. It’s a fundamental result in machine learning and folks in that community are very cautious about it. Establishing a universal policy to correct isolated personnel problems is a classic example of overfitting the data. It’s like a conductor attaching radio tags to the ears of all orchestra members because the 2nd chair in the oboe section missed practice three times.
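
For the curious, here’s a tiny, self-contained illustration in Python (made-up data, numpy only). A degree-9 polynomial through ten noisy points nails the training set and falls apart on held-out points, while a modest degree-3 fit behaves sensibly:

    import numpy as np

    rng = np.random.default_rng(0)

    # Ten noisy training points drawn from a simple underlying curve.
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)

    # Held-out points from the same curve.
    x_test = np.linspace(0.05, 0.95, 50)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)  # least-squares fit
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

The degree-9 fit passes through every training point, so its training error is essentially zero, yet it swings wildly between them. That’s the overfit policy: perfect on the incidents that inspired it, terrible everywhere else.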

“The Design of Everyday Things” by Donald A. Norman

This is the inaugural post in the “Best Books” forum. I selected “The Design of Everyday Things” by Don Norman for three reasons: 1) It’s excellent; 2) The principles expressed within it are relatively simple but profoundly impactful; 3) I read it recently, so it’s been on my mind lately.

[Cover image: “The Design of Everyday Things” by Donald A. Norman]

This book was originally published in 1988, and has since sold more than 100,000 copies (as the cover proudly asserts). Somehow I managed to miss reading it for 18 years, but finally stumbled onto my own copy just last year. I found it moving, motivating, and life changing. Even more than that I found it affirming as a user of the world around me.

“Most accidents are attributed to human error, but in almost all cases the human error was the direct result of poor design.”

“When you have trouble with things — whether it’s figuring out whether to push or pull a door or the arbitrary vagaries of the modern computer and electronics industry — it’s not your fault. Don’t blame yourself: blame the designer. It’s the fault of the technology, or, more precisely, of the design.”

As someone with a Ph.D. in Computer Science I’ve had the repeatedly uncomfortable experience of helping someone with some random, poorly designed program or device, and they look at me like my background and education will allow me to divine the purpose and processes that underlie a pitifully designed piece of whatever. Alas, my doctoral research was not in bad design or horrible user interfaces. But alack, much of my experience as a user is!

Even more demoralizing is witnessing a user being systematically dehumanized by software, or by some device. I watched a desk clerk stumble through a pitifully designed program, running on Windows, and then watched her look up at me, completely defeated, and apologetically murmur, “I’m just not good with computers.” I looked back at her and said, “Maybe it’s the program that’s stupid. Maybe it was poorly designed, and that’s why you’re struggling to do something so simple.” Her look was priceless, like this idea had never before dawned on her in her entire life.

Some tidbits that I found particularly meaningful:

“… good design is also an act of communication between the designer and the user, except that all the communication has to come about by the appearance of the device itself.”

“Rule of thumb: when instructions have to be pasted on something (push here, insert this way, turn off before doing this), it is badly designed.”

Norman talks about door knobs, light switches, keyboards, and lots of other everyday devices that routinely drive us nuts.

“Designers go astray for several reasons. First, the reward structure of the design community tends to put aesthetics first. Design collections feature prize-winning clocks that are unreadable, alarms that cannot easily be set, can openers that mystify. Second, designers are not typical users. They become so expert in using the object they have designed that they cannot believe that anyone else might have problems; only interaction and testing with actual users throughout the design process can forestall that. Third, designers must please their clients, and the clients may not be the users.”

You get the idea. I don’t care what you do for a living. Read this book!

Axiomatic observations

I’ve had a number of conversations recently in which I became acutely aware that my deeply held beliefs about design, leadership, quality — really anything else in life that I value — were anchored in convictions so firmly ingrained in my thinking and feeling that they function axiomatically. This is very much a “no duh” realization, since it’s not like I’ve never had those thoughts before. But in light of this recently launched blog, I realized that I needed to create a new place to record and discuss these heartfelt and strongly held axioms and their impact on other important issues.

Ultimately these axioms form patterns (axiomatic patterns? idiomatic patterns?) that are essential or foundational to the higher-level patterns of excellence (like well-functioning teams, excellent design, and quality of whatever form). Having said that, I’m also aware of the risk of digressing to platitudes in the name of principle. You know, “Work smarter, not harder!” Or, “Think outside the box!” What in the heck do these things mean?! Do they actually capture something fundamental and axiomatic, or are they just smoke screens that stand in for deeper thinking? Perhaps I need to launch the anti-axiom forum as well and call it “Platitudes.” Hmmm… Not a bad idea… 🙂

In any case, at this point I’m not trying to overanalyze the impact of these axioms, but just let the ideas flow and be captured. It may provide a resource in the future for deeper analysis. And of course, a counterpoint to the impending “Platitudes” forum…

Namin’ the Lab

One of my all-time favorite a cappella groups, The Bobs, did a brilliant song in 1989 called “Naming the Band” which includes the following lyrics:

“We’re lookin’ for a drummer
Or someone with a van
Our hair is getting longer
But the most important thing is namin’ the band
Namin’ the band.”

It totally captures the dilemma of naming inanimate objects like bands and labs. With my recent research transition from wirelessness to software engineering, we’ve been going through the pain. The old entity was the “Mobile Computing Lab.” Pretty clean, reasonably catchy, only 862 Google hits — and the first hit is us at BYU!

But who wants to be the “Software Engineering Lab”?! Apart from being polysyllabic and terribly boring and generic, it generates 48,500 Google hits. Who can throw their support behind something that vanilla?! Besides, the TLA (“three-letter acronym”) is SEL. It begs the question, “At what price?!” You could refine it to “Software Engineering Research Lab,” which adds two syllables, generates a slightly silly ETLA (“extended three-letter acronym,” a.k.a., “four-letter acronym”), SERL, and more than 2,000 Google hits. So far no good.

We toyed with “Software Engineering Research Group,” which is also polysyllabic, generates 45,900 Google hits, and sports an acronym (SERG) that suggests a sycophantic relationship with one of the Google co-founders. No good.

“We should be writing tunes
and learning where to stand
Instead we’re spending all our time
Doing nothing but … naming the band”

er… lab…

Refusing to accept long-winded mediocrity, we struggled tremendously with naming the lab. We went through “Software Quality Research Lab” (SQRL, pronounced “squirrel”), and the extended version, “Software Quality Research Lab Big Basic Questions” (SQRL BBQ — draw your own conclusions).

“We were gonna call ourselves Elvis Hitler
But someone beat us to the punch”

We had an inspired idea to call it LASER (“Laboratory for Advanced Software Engineering Research”) until we realized that Lori Clarke and Lee Osterweil at UMass Amherst had already stolen our idea. We then toyed with settling for “Laboratory for *Ordinary* Software Engineering Research” (LOSER). Despite its draw, we rejected it for obvious reasons.

“We’ve got our own equipment
and a great rehearsal space
All we need’s a heavy name
to throw in your face”

Like Archimedes, I had my “Eureka!” moment in the tub while struggling in desperation, scribbling ideas on a partially soaked notepad. Unlike Archimedes, I did not consequently run through the streets of Syracuse (or Salem for that matter) naked. Best for everyone involved really.

Okay. Here we go… (Cue the drummer…)

The Sequoia Lab. SEQUOIA — Software Engineering Quality: Observation, Insight, Analysis.

Everyone in the lab immediately jumped on board. Unanimous consent. One explanation is that the idea was brilliant. Another is that the lab members were sick of namin’ the lab and would have agreed to just about anything I threw myself behind. Another is that a rumor had begun to circulate that I was seriously considering going back to SQRL BBQ.

For the record, “Sequoia Lab” generates only 282 Google hits. Also for the record, all of those labs involve forestry (go figure). Not a single software hit. Looks like it’s ours for better or worse. If we begin to be pestered by the spotted owl people, we can always talk to Mr. Brin for potential lab sponsorship and a convenient name change.

Visitors, this way

[Photo: the “Visitors, this way” sign]

What does it mean?! Perhaps it makes more sense in Chinese.

This picture was taken at the Emperor’s Summer Palace near Beijing in May 2006.

I guess I always presumed that directional signs were there for clarification, helping you to avoid the non-obvious bad path. But what’s the alternative here? Straight over the wall and into the creek? Perhaps.