“I Spy for the CIO”
January 1, 2007
I recently came across two interesting articles that got me thinking about the validity of my own work with Agile Project Management (APM).
“Open-Source Spying,” by Clive Thompson, published in the December 3, 2006, issue of The New York Times Magazine.
“The Political Brain: A recent brain-imaging study shows that our political predilections are a product of unconscious confirmation bias,” by Michael Shermer, publisher of Skeptic Magazine and contributor to Scientific American.
APM’s two tenets are collaboration (community development) and evidence-based (just-in-time) decision-making. For any meaningful collaboration to occur, and to reap the benefits of peer review, all stakeholders must be aware of any and all activity underway on the project. Eric S. Raymond declared “Linus’ Law” as “given enough eyeballs, all bugs are shallow,” or more formally, “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” While this has traditionally been applied to software development, I see the same principle as extremely valuable to project management in general: Aren’t your faculty, staff and students beta-testers, and your facilities, finance and student life departments co-developers? Sharing everyone’s issues (bugs) will not only expose them to more brains, brains that might help resolve those issues, but also help to assess their priority (insignificant to show-stopper).
Clive Thompson pushes Linus’ Law into reality in his article, “Open-Source Spying.” The article follows a Defense Intelligence Agency (DIA) analyst, whose “mind boggled at the futuristic, secret spy technology he would get to play with: search engines that can read minds. Desktop video conferencing with colleagues around the world. If the everyday Internet was so awesome, just imagine how much better the spy tools would be.” But when he got to his cubicle, his high-tech dreams collapsed. “The reality was a colossal letdown.”
The problem: Lack of inter-departmental communication and collaboration.
Intelink — the spy agencies’ secure internal computer network — was developed to provide searching, contacts and resources, but the search engine was a pale shadow of Google, flooding him with thousands of useless results. Locating departmental experts through the personnel directories was futile. Instant messaging with colleagues was impossible: every three-letter agency used different discussion groups and chat applications that couldn’t connect to one another. “In a community of secret agents supposedly devoted to quickly amassing information, nobody had even a simple blog [or wiki], that ubiquitous tool for broadly distributing your thoughts.”
Something had gone horribly awry. Theoretically, the intelligence world ought to revolve around information sharing. With the Internet, information flourished; within spy agencies, however, networks were developed to keep secrets safe. This control over the flow of information, as the 9/11 Commission noted in its final report, was a crucial reason American intelligence agencies failed to prevent [the 9/11] attacks. All the clues were there, but none of the agents knew about the existence of the other evidence. The report concluded that the agencies failed to “connect the dots.”
The answer: Intellipedia, a wiki that any intelligence employee with classified clearance could read and contribute to.
Intelligence heads wanted to try to find some new answers to this problem. So the C.I.A. set up a competition, later taken over by the D.N.I., called the Galileo Awards: any employee at any intelligence agency could submit an essay describing a new idea to improve information sharing, and the best ones would win a prize. The first essay selected was by Calvin Andrus, chief technology officer of the Center for Mission Innovation at the C.I.A. In his essay, “The Wiki and the Blog: Toward a Complex Adaptive Intelligence Community,” Andrus posed a deceptively simple question: How did the Internet become so useful in helping people find information?
|NOTE: Andrus’ abstract, with some slight alterations, would make for a great introduction on the merits of APM within academic institutions: US policy-makers [senior campus administrators], war-fighters [faculty & staff], and law-enforcers [students] now operate in a real-time worldwide [campus, even system-wide] decision and implementation environment. The rapidly changing circumstances in which they operate take on lives of their own, which are difficult or impossible to anticipate or predict. The only way to meet the continuously unpredictable challenges ahead of us is to match them with continuously unpredictable changes of our own. We must transform the Intelligence [IT] Community into a community that dynamically reinvents itself by continuously learning and adapting as the national security [campus and academic] environment changes.|
Andrus argued that the real power of the Internet comes from the boom in self-publishing: everyday people surging online to impart their thoughts and views. He was particularly intrigued by Wikipedia, the “reader-authored” encyclopedia, where anyone can edit an entry or create a new one without seeking permission from Wikipedia’s owners. This open-door policy, as Andrus noted, allows Wikipedia to cover new subjects quickly. The day of the London terrorist bombings, Andrus visited Wikipedia and noticed that barely minutes after the attacks, someone had posted a page describing them. Over the next hour, other contributors — some physically in London, with access to on-the-spot details — began adding more information and correcting inaccurate news reports. “You could just sit there and hit refresh, refresh, refresh, and get a sort of ticker-tape experience,” Andrus told me. What most impressed Andrus was Wikipedia’s self-governing nature. No central editor decreed what subjects would be covered. Individuals simply wrote pages on subjects that interested them — and then like-minded readers would add new facts or fix errors.
This same process is at work in APM: needs analysis, requirements gathering, resource analysis and allocation, prioritizing, etc. can all take place through this “open-door policy.” Once campus stakeholders define a problem, other contributors — all with different backgrounds, needs and expertise, yet with access to on-the-spot real-world details — can add information and correct inaccurate reports, assumptions and expectations. The result is a clearly defined issue developed from the perspectives of all potential end-users. The wiki provides a forum where needs analysis can occur (the gap between current operations and new services/systems). Requirements can even be gathered based on the real-world descriptions and use cases described in the wiki by contributors.
The activity within the wiki can serve as a barometer to identify where pressure is coming from on campus; or, as Andrus states, the market has an “invisible hand” that decides which services or systems survive over time and which do not. Using this invisible hand, priorities can be set. Consider the following: faculty members post a request to use the same user name and password for the LMS as for the SIS; students want course events from the LMS to appear in their personal calendar; “which one,” another contributor may ask, “the email or portal calendar?” On and on this discussion may go, with various people from different areas contributing their unique needs and ideas. The result of the dialog here may be the recognition of the need for an identity management solution on campus, single sign-on, a portal, or other integration work. Devoting resources to fix each individual request (a custom LDAP person attribute for SIS and LMS users, a custom XML adapter from the LMS to Outlook — oh, and the campus calendar as well!) may never actually solve the larger issues. The larger issues (the forest) may never be undertaken due to all of the narrowly defined, specific needs (the trees).
This bottom-up approach to determining where to invest in development is based on real-world needs collected from actual stakeholders and can be presented as evidence to the administration for future enterprise development. This contrasts dramatically with the top-down approach traditionally employed by administrators. Unfortunately, the administration’s vision may not represent the real-world needs affecting campus life. This bottom-up approach is exactly the process proposed for the nation’s intelligence agencies:
Imagine having tools that could spot emerging patterns for you and guide you…
Spies, Andrus theorized, could take advantage of these rapid, self-organizing effects. If analysts and agents were encouraged to post personal blogs and wikis on Intelink — linking to their favorite analyst reports or the news bulletins they considered important — then mob intelligence would take over. In the traditional cold-war spy bureaucracy, an analyst’s report lived or died by the whims of the hierarchy. If he was in the right place on the totem pole, his report on Soviet missiles could be pushed up higher; if a supervisor chose to ignore it, the report essentially vanished. Blogs and wikis, in contrast, work democratically. Pieces of intel would receive attention merely because other analysts found them interesting. This grass-roots process, Andrus argued, suited the modern intelligence challenge of sifting through thousands of disparate clues: if a fact or observation struck a chord with enough analysts, it would snowball into popularity, no matter what their supervisors thought.
Just as transparency and inclusion are the keys for success in collaboration, facts—not opinions—are required in evidence-based decision-making.
Intellipedia also courts the many dangers of wikis — including the possibility of error. What’s to stop analysts from posting assertions that turn out to be false? Fingar admits this will undoubtedly happen. But if there are enough people looking at an entry, he says, there will always be someone to catch any grave mistakes [Linus’ Law?]. Rasmussen notes that though there is often strong disagreement and debate on Intellipedia, it has not yet succumbed to the sort of vandalism that often plagues Wikipedia pages, including the posting of outright lies. This is partly because, unlike with Wikipedia, Intellipedia contributors are not anonymous. Whatever an analyst writes on Intellipedia can be traced to him. “If you demonstrate you’ve got something to contribute, hey, the expectation is you’re a valued member,” Fingar said. “You demonstrate you’re an idiot, that becomes known, too.”
With all of the activities exposed to project stakeholders, many will argue for a particular direction or solution based on their own needs. Departments may already have significant investments in disparate technologies, there may be staff with specific skills, other priorities may be influencing participation, etc. For example, when discussing what support should be given to a new computer lab, the network administrator may take the following position within the wiki discussion: “Windows is better than Linux. We should deploy Windows.” This is an opinion. Evidence that would help to define the new service (supporting the computer lab) would be: “Windows is already deployed across the campus and in the data center.” The former is an opinion that cannot help in evidence-based decision-making; the latter is a fact and should be used as evidence when considering software/hardware deployment, network architecture, etc.
Windows vs. Linux is a simple example. Obviously, most debates that may influence the direction of not only an IT project but potentially academic/campus technology as a whole are much more complex. Michael Shermer, in the second article mentioned above, states, “no matter the issue under discussion, both sides are equally convinced that the evidence overwhelmingly supports their position. This surety is called ‘confirmation bias.’”
His article relates recent medical studies using functional magnetic resonance imaging (fMRI) which indicated that we unconsciously (and emotionally) seek and find confirmatory evidence in support of already existing beliefs and ignore or reinterpret disconfirmatory evidence. While the study focused on political beliefs, Shermer wonders if there are implications beyond politics: “A jury assessing evidence against a defendant, a CEO evaluating information about a company, or a scientist weighing data in favor of a theory will undergo the same cognitive process?” I wonder: how about a Technology Director, CIO, CFO or even campus president assessing IT needs?
Shermer then asks, “What can we do about it?” He suggests using the same standards employed within the scientific community, its “built-in self-correcting machinery,” or rather, as Andrus suggests, publishing. “Colleagues are rewarded for being skeptical. Extraordinary claims require extraordinary evidence.” Everything, including data collection, experimentation, and even disconfirmatory and contradictory evidence, should be included in the documentation. This allows vetting, peer review, replication and, finally, acceptance and adoption.
I would hope these are the same processes we undertake when determining the direction for technology on our campuses.