November 30, 2006

Information Architecture 3.0

Ok, I know any post that has something with the format "xxxxxxx n.0" makes us all want to poke our eyes out with a dull spoon, but bear with me on this one.

Peter Morville, librarian, information architect and co-author of the classic Information Architecture for the World Wide Web, has an interesting article, Information Architecture 3.0, at his Semantic Studios site where he makes a plea of sorts for web designers to pay more attention to design considerations when they cobble together their shiny new 2.0 web sites.

[T]his future is self-evident in the undisciplined, unbalanced quest for sexy Ajaxian interaction at the expense of usability, findability, accessibility, and other qualities of the user experience.

Of course, user hostile web sites are only the tip of the iceberg. Beneath the surface lurk multitudes of Web 2.0 startups and Ajaxian mashups that are way behind schedule and horribly over budget. Apparently, nobody told the entrepreneurs about the step change in design and development cost between pages and applications.

Followed by an interesting definition of Information Architecture:
Perhaps we should take a moment, before proceeding, to review the definition of information architecture:
  1. The structural design of shared information environments.
  2. The combination of organization, labeling, search, and navigation systems within web sites and intranets.
  3. The art and science of shaping information products and experiences to support usability and findability.
  4. An emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.

He goes on to explore the discipline of IA, the role that information architects play and the community of practice they belong to.
Over the past decade, information architecture has matured as a role, discipline, and community. Inevitably, we’ve traded some of that newborn sparkle for institutional stability and a substantive body of knowledge. It’s for this reason that some of the pioneers feel restless. And, while I applaud their courage and entrepreneurial zeal, as they step beyond the role and the discipline, I hope (for their sake and ours) that they stay connected to the information architecture community.

For those of us who continue to embrace the role and discipline, there’s so much going on already, and the world of Information Architecture 3.0 will only bring more challenges, more opportunities, and more work.
The post has attracted a number of comments which Morville addresses very directly and honestly. Good stuff.

November 29, 2006

Back to the Basics on Science Education

That's the title of an article by Paul D. Thacker a few days ago in InsideHigherEd.

The best approach to teaching science is to understand not education, but the scientific method, according to Carl Wieman. In a speech on this idea Friday night, he began with a hypothesis: “We should approach teaching like a scientist,” he said. The outcome will rely on data, not anecdote. “Teaching can be rigorous just like doing physics research.”

*snip*

During the talk on Friday, Wieman said that traditional science instruction involves lectures, textbooks, homework and exams. Wieman said that this process simply doesn’t work. He cited a number of studies to make his point. At the University of Maryland, an instructor found that students interviewed immediately after a science lecture had only a vague understanding of what the lecture had been about. Other researchers found that students only retained a small amount of the information after watching a video on science.

*snip*

While Wieman said that he does not have all the answers for restructuring how science is taught, and added that he is still trying to figure out the best way to teach, he did offer suggestions. First, reduce cognitive load in learning by slowing down the amount of information being offered, by providing visuals, and by organizing the information for the student as it is being presented. Second, address students’ beliefs about science by explaining how a lecture is worth learning and by helping the students to understand how the information connects to the world around them.

Finally, actively engage with students, so that you can connect with them personally and help them process ideas. “We have good data that the traditional does not work, but the scientific approach does work,” he said. He added that it is important that members of a technologically advanced nation dealing with difficult topics such as global warming and genetic modification begin to think like scientists.
The talk was given at the recent Carnegie Foundation for the Advancement of Teaching’s centennial celebration. I think it's valuable that science faculty are engaging with the needs of the students in their classes -- the need to engage and, yes, entertain students rather than just try to open up the tops of their heads and pour it all in. One of the ways to attract and retain good science students is to make it seem like fun to be a scientist, even fun to learn to be a scientist; more fun than whatever a given student's second or third choice might have been.

As we can all imagine, the comments section for the article was pretty lively, with at least one low blow:
Like Humanists

Well well well. So scientists are going to have to begin teaching like Humanists: smaller classes, real discussion, close reading, theoretical underpinnings. About time, too.

Joseph Duemer, Professor at Clarkson University
Ouch. Like no one's ever been in a boring history class? Or an overcrowded psych or poly sci? Probably no one should be too smug:
Science not the only problem

The difficulties students encounter in learning science have been well documented, and Carl Wieman has certainly been one of the heroes in this story. But we should also note that we have not done so well in other very important areas as well. For example, Derek Bok in Our Underachieving Colleges refers to extensive research showing that universities and colleges have depressingly little effect on critical thinking and postformal reasoning — areas that we claim to be very good at teaching. And our lack of success in these important areas seems to be independent of major, type of institution, etc. This would seem to indicate that we all — scientist and humanist — need to pay a lot more attention to the research in teaching, as suggested by Wieman.

Lloyd Armstrong, Professor

November 25, 2006

Stop me before I post on science books again!

There seems to be something in the water as lists are everywhere these days. It must be the holiday shopping season. Well, the Globe and Mail has joined the fun with their annual Globe 100 list of the most notable books they've reviewed in the last year. There's a Science & Nature section as well as relevant selections in the Biography and History sections. The list seems pretty good, if more than a little heavy on the environmental and cognitive science books this year, but I guess that's just the way it is with book reviewing practices in newspapers. Hot topics rule the day rather than any kind of balanced coverage that would indicate real editorial direction.

In any case, if you're buying a present for a science-y person this year, you probably can't go wrong with one of these but I'm sure there are other lists with a bit more variety:


  • Reluctant Genius: The Passionate Life and Inventive Mind of Alexander Graham Bell by Charlotte Gray
  • The Reluctant Mr. Darwin: An Intimate Portrait of Charles Darwin and the Making of His Theory of Evolution by David Quammen
  • Heat: How to Stop the Planet From Burning by George Monbiot
  • The Revenge of Gaia: Why the Earth is Fighting Back -- and How We Can Still Save Humanity by James Lovelock
  • The Creation: An Appeal to Save Life on Earth by E. O. Wilson
  • Darwinism and Its Discontents by Michael Ruse
  • Pandemonium: Bird Flu, Mad Cow Disease, and Other Biological Plagues of the 21st Century by Andrew Nikiforuk
  • Theatre of the Mind: Raising the Curtain on Consciousness by Jay Ingram
  • This Is Your Brain On Music: The Science of a Human Obsession by Daniel J. Levitin
  • The Weather Makers: How We are Changing the Climate and What It Means for Life on Earth by Tim Flannery
  • Field Notes from a Catastrophe by Elizabeth Kolbert
  • Being Caribou: Five Months on Foot with an Arctic Herd by Karsten Heuer
  • Bringing Back the Dodo: Lessons in Natural and Unnatural History by Wayne Grady
  • Stumbling on Happiness by Daniel Gilbert
  • Thunderstruck by Erik Larson
A few other non-science books caught my eye as well such as The Immortal Game: A History of Chess or How 32 Carved Pieces on a Board Illuminated Our Understanding of War, Art, Science, and the Human Brain by David Shenk, A Writer at War: Vasily Grossman with the Red Army, 1941-1945, edited and translated by Antony Beevor and Luba Vinogradova and The Library at Night by Alberto Manguel.

These types of lists always beg the question of what's missing. As far as I can tell, the most glaring omissions this year are the David Suzuki memoir and the Donald Coxeter biography, both of which should have made the cut, at the very least based on an important Canadian connection. Of course, the lack of any science fiction or fantasy books in the list was particularly galling for me -- the Globe's reviewing decisions are generally quite shameful in their ignorance of fantastic fiction.

Update:
In the comments, Richard Akerman points to the running science books list for the CBC Radio show Quirks & Quarks. There are about a dozen books recommended based on 2006 shows (and many more from older shows), including another strong book with a Canadian connection that probably should have made the Globe list: Lee Smolin's The Trouble With Physics. Most of the books (and all the recent ones) have links to the audio of the show where they were discussed. Podcasts of Quirks & Quarks are also available.

November 24, 2006

Friday Fun: The last man vs machine match?

Tomorrow Undisputed World Chess Champion Vladimir Kramnik begins a six-game match with the ChessBase program Deep Fritz 10.

Given the history of these types of matches, this might be the last time the human has a decent chance to win or draw. ChessBase has an English translation of a long article from Der Spiegel, The last match man vs machine? by André Schulz, talking about the match.

Much depends on preparation. Kramnik is being assisted by the German grandmaster and openings specialist Christopher Lutz. In addition he has included a chess programmer in his team, one who will, he hopes, be able to explain to him how his opponent “thinks”.

For the preparation phase Kramnik received in May this year the latest version of Deep Fritz. The final version, the one against which he will play in Bonn, was sent to him in the middle of October. Since then he and his seconds have been able to search for weaknesses in the real thing.

That is exactly what Kramnik did in the Bahrain match. At the time he discovered that Deep Fritz 7 was not playing well in positions that included doubled pawns. As a result Kramnik played a Scotch opening against the machine, one that gave Black doubled pawns on c7 and c6.

In earlier days the youthful Deep Fritz would often be manoeuvred into positions with an isolated centre pawn by its human opponents. This is normally a weakness, but the program would defend this pawn like a tiger its cub, cleverly using the adjacent open files to do so. The weakness became a strength.

For the opening preparation against Kramnik the Deep Fritz team has hired a top grandmaster, who is a great openings specialist. But his name is a secret. This is normal in important chess tournaments, where players don’t want their opponents to know what they are planning. The exact speed of the computer and the modification to the openings book are the two unknown factors for Kramnik in this match.
The rules are a bit bizarre, as Kramnik gets to follow along with Fritz on his own computer while Fritz is in its pre-determined opening book.
As long as Deep Fritz is “in book”, that is playing moves from memory and not calculating variations, Mr. Kramnik sees the display of the Deep Fritz opening book. For the current board position he sees all moves, including all statistics (number of games, ELO performance, score) from grandmaster games and the move weighting of Deep Fritz. To this purpose, Mr. Kramnik uses his own computer screen showing the screen of the Deep Fritz machine with book display activated.

As soon as Deep Fritz starts calculating variations during the game the operator informs the arbiter. The arbiter confirms this on the screen of the playing machine and then shuts down the second screen.
This eliminates one of the computer's advantages, the ability to completely and perfectly "memorize" extremely long opening variations (customized by ChessBase's team of grandmasters to be as effective as possible against Kramnik), something a human can't do. Once Fritz is out of the book, they get down to real tactics and strategy. It should be interesting to see who prevails in this kind of struggle. Mig has some nice commentary here.
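For the non-chess-programmers out there, an opening book is essentially a big lookup table: positions mapped to prepared moves along with their statistics and weightings, consulted until the game wanders out of known theory. Here's a minimal, purely hypothetical Python sketch of the idea (the moves, numbers and names are invented for illustration and have nothing to do with the actual Deep Fritz book):

    # Hypothetical sketch of an engine opening book: a lookup table keyed by
    # position, consulted until the game leaves prepared theory.
    from dataclasses import dataclass

    @dataclass
    class BookMove:
        move: str      # move in algebraic notation
        games: int     # number of grandmaster games in which it was played
        score: float   # average score for the side to move, 0.0 to 1.0
        weight: int    # hand-tuned weighting set by the book's authors

    # Positions would really be keyed by something like a FEN string;
    # a single entry is shown here just to illustrate the structure.
    OPENING_BOOK = {
        "startpos": [
            BookMove("e4", 450_000, 0.54, 100),
            BookMove("d4", 400_000, 0.55, 80),
        ],
        # ... many thousands of positions, deep into prepared lines ...
    }

    def choose_move(position):
        """Play instantly from the book if the position is known;
        otherwise signal that the engine has to start calculating."""
        candidates = OPENING_BOOK.get(position)
        if candidates:
            # Still "in book": pick the most heavily weighted prepared move.
            return max(candidates, key=lambda m: m.weight).move
        return None  # out of book -- hand over to the search engine

    print(choose_move("startpos"))    # a prepared move, no calculation at all
    print(choose_move("novel line"))  # None: now the real game begins

The point is that while the engine is "in book" it isn't thinking at all, just recalling, which is exactly the advantage the screen-sharing rule above is meant to neutralize.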

Update: Game one was a draw, you can follow the live action here.

Update 2006.11.26:
Game two was a brutal loss by Kramnik in the worst blunder of his pro career. People are already calling it the worst move ever made by a sitting champion. Commentary here, here and here. To show that there are no hard feelings, however, I may post one of my own really brutal blunders a bit later on.

November 23, 2006

Enough with the science books, already

Via Critical Mass, a two part interview (one, two) with science author & journalist Michael Lemonick in the Kenyon Review.

A great interview with lots of interesting bits on the life of the science writer. A taste from each of the two parts:

LL: Is it difficult for you to write science stories for things you don’t necessarily have a background in?

ML: It’s certainly harder. What that just means is that I have to ask more questions, and ask for more basic explanations than I might for other areas I’m more familiar with, but my strong belief is that with a bit of effort, I can understand pretty much any area of science at the level I need to in order to explain it to—well, to you. Except mathematics, which I don’t think is possible to write about in a coherent way. Mostly. There’s a very small number of things.

*snip*

LL: You also blog. How does writing for the blog differ for you?

ML: It’s a more informal form, and also the choice of topics is much looser. In the magazine, everything is done by committee. Everything is done by getting a group of people to agree at many levels to do the story. But the blog—I don’t get permission, I just do what I feel like. Some things I write about are very silly, some are serious, and some are argumentative. It’s more informal in writing style and also in the way I think about the whole process. I think of a blog as a conversation, and the sort of thing like where I run into a friend and say, “Oh, you’ll never guess what I just heard! Did you know they are doing such and such?”
Horgan seems to have started a bit of a meme on science books, with posts turning up all over. Horgan's third post on the worst books is here. If you poke around in the various search engines, you'll also get a lot of interesting hits. Some examples: Technorati, ScienceBlogs, Google Blog Search. From what I can tell, Richard Akerman of Science Library Pad is the only other one from the biblioblogosphere to weigh in.

November 22, 2006

While we're on the topic of science books...

Following up on yesterday's post, I thought I'd mention a couple of my favourite science book resources.


  • First of all, the Science Book Reviews by Philip Manning is great. He reviews a fair number of books, as well as listing the new books he sees every week. He also has compiled "best of the year" lists for the last few years. I find this site handy for my own interest and for collection development.

  • The Science Books Blog by Jon Turney grew out of the Royal Institution's attempt a little while back to select the Best Science Book Ever, which turned out to be Primo Levi's Periodic Table. There are lots of good lists and discussion on the blog, making it a useful addition to any science person's blogroll. The posting frequency seems to have declined a bit since the contest ended, but I do hope that Turney will keep up the good work with commentary and reviews about good science writing.

  • LabLit doesn't review a lot of non-fiction, but they do review some, such as Dawkins' God Delusion. Their mission is mostly to promote fiction about science and scientists (as opposed to science fiction), so a lot of the stuff they talk about is peripherally related to public perceptions of science and the place of science in society, topics all covered in science non-fiction as well. They did also post an article about the Royal Institution's contest mentioned above.

The book sitting on the table beside me right now is Kings of Infinite Space: Donald Coxeter, the Man Who Saved Geometry by Siobhan Roberts. I'm about 100 pages into it and am enjoying it tremendously.

Invitation to Digiblog

An interesting new member of the biblioblogosphere has just come on stream. Digiblog describes itself this way:

Until the ALCTS Midwinter Symposium in Seattle begins, Digiblog will be the home for discussing controversial statements relevant to all those interested in future of library collections, technical services, and services to users. Statements and opinions expressed on Digiblog represent their authors' views only and do not represent the viewpoint of ALCTS.
ALCTS stands for the Association for Library Collections & Technical Services. While it seems that this blog has only been envisioned as a temporary place to start some pre-conference discussions, I hope that the members of ALCTS find a way to make it a permanent blog; collections and technical services are vitally important issues for libraries and librarians and I don't think they get quite the play in the biblioblogosphere that they deserve. Drop by and take a look at their two controversial statements posts to see what I mean. (Via Cindy Hepfer on ERIL-L)

November 21, 2006

Best and worst science books

John Horgan is helping us all set up our reading lists for the coming holiday season by highlighting a couple of lists of best science books. First he has some critical comments on the recently published Discover Magazine list of 25 best science books. As an antidote to the flaws he sees in that list, he also turns us in the direction of The Center for Science Writings of the Stevens Institute of Technology, where there's a list-in-progress of the 100 Greatest Science Books. That list is up to number fifty and still accepting nominations.

Perhaps even more interesting, Horgan gives us a list of the Ten Worst Science Books.


  1. Capra, Fritjof, The Tao of Physics
  2. Drexler, Eric, Engines of Creation
  3. Edelman, Gerald, Bright Air, Brilliant Fire
  4. Gladwell, Malcolm, The Tipping Point
  5. Gould, Stephen Jay, Rocks of Ages
  6. Greene, Brian, The Elegant Universe
  7. Hamer, Dean, The God Gene
  8. Kramer, Peter, Listening to Prozac
  9. Kurzweil, Ray, The Age of Spiritual Machines
  10. Murray, Charles, and Richard Herrnstein, The Bell Curve
  11. Wilson, Edward, Consilience
Actually, I guess it's eleven.

Luckily, I haven't read any of those yet, although I do own the Gould and Wilson and may get around to reading at least the Wilson. I'm afraid I don't do that much better on the list of good books, but the main reason for that is that the science books I have tended to read over the years have been mostly on computing or engineering topics, neither of which is terribly well covered in the Stevens or Discover lists. (Yes, I did nominate some good computing books.)

In the interests of self-improvement, I'll list a bunch (11!) of the science books that are in the two lists that I'd like to get around to reading. If and when I do get around to reading them, I'll certainly review them on the other blog where I have been trying to review more science books during my sabbatical this year.

  1. Dawkins, Richard, The Selfish Gene
  2. Diamond, Jared, Guns, Germs, and Steel
  3. Gleick, James, Chaos
  4. Hofstadter, Douglas, Gödel, Escher, Bach
  5. Pais, Abraham, Subtle Is the Lord
  6. Penrose, Roger, The Emperor’s New Mind
  7. Rhodes, Richard, The Making of the Atomic Bomb
  8. Watson, James, The Double Helix
  9. Weinberg, Steven, The First Three Minutes
  10. Sagan, Carl, The Cosmic Connection
  11. Gould, Stephen Jay, The Mismeasure of Man
A few of these I already have lying around the house, so there's a pretty good chance they'll turn up sooner rather than later.

Update: Horgan follows up with another post, expanding on his reasons for putting The Bell Curve and Listening to Prozac on the worst book list.

November 17, 2006

Technology Leaders: Scientific American 50

Via the SciAm Blog, a story on the magazine's latest list of top scitech leaders in various walks of life.

The subsections of the article are:


All interesting reading. The first 3 links profile the three most significant leaders. The last link profiles all the rest of those listed in the SA 50 Winners and Contributors. Perhaps not surprisingly, the top policy leader is Al Gore -- if you haven't seen An Inconvenient Truth yet, rush out before it's too late.

November 15, 2006

The computer book market

Tim O'Reilly of O'Reilly Books had a two part post a couple of weeks ago on the state of the computer book market.

In Part One, he takes a look at the overall trends in the market:

There's little to say about this picture that will cheer up computer book publishers or authors. The market continues to bump around at about the same level as it has for the past three years. Some publishers express hope that the release of Microsoft Vista and the next release of Office will boost results going into next year, but so far, no new technology release has been able to move the needle for long. We suspect that the combination of increasingly sophisticated online information, easier to use Web 2.0 applications, and customer fatigue with new features of overly complex applications, combined with the consolidation of the retail book market, mean that the market will never return to its pre-2000 highs, despite new enthusiasm for Web 2.0 and the technology market in general. In addition, new distribution channels (including downloadable PDFs) are growing up as retailers allocate less space to computer books.


In Part Two, he looks at how individual technologies are doing in the book market.

His broad comments on the overall trends:
  • Web Design and Development has been the most substantial bright spot in the market, with 22% year-on-year growth in this category. This might well be expected in a period in which Web 2.0 is the buzzword du-jour. In addition to breaking topics like Ruby on Rails, AJAX, Javascript, and ASP.Net, there's been nice growth in books on web design and web page creation. Books on blogging and podcasting have also finally caught on, after several prior false starts.
  • Microsoft's server release earlier in the year is still driving strong sales of books on C#, Visual Basic, and SQL Server. However, other database topics are also up modestly.
  • The growth in books on digital photography has slowed considerably. If not for the inclusion of the iPod category, the Digital Media supercategory would be flat.
  • The hardest-hit part of the market was books on consumer operating systems, down 17% from the same period a year ago.
  • The professional development and administration segment was down 2%, but might have been worse but for the strong performance of Microsoft languages, Python, Ruby, software project management, and database topics.

I'll briefly summarize for each technology he covers:

  • Computer Languages -- Java is down, while web programming languages like Ruby, PHP and Javascript are up, sometimes way up.
  • Databases -- Oracle down, SQL Server, MySQL are up
  • Operating Systems -- Linux, especially Ubuntu, is up a bit, but not a fast moving category.
  • Systems and Programming -- Art of Project Management by Scott Berkun and Jennifer Tidwell's Designing Interfaces are really driving this category. Data Warehousing, Data Analysis and Agile Development are also hot topics.
  • Web Design and Development -- Books on Ruby, AJAX and ASP are hot as are Blogging and Podcasting.
  • Digital Media Applications -- Photoshop is cold, digital photography is hot.
Lots of interesting stuff here, a good view into what the general public wants to read. Similarly, this should give us an idea of what kind of books our students will be wanting to read. If the jobs are in AJAX and Ruby, those are the books they're going to want as they prepare for the job search.

As a point of interest, O'Reilly does this kind of review every quarter or so.

Hey, we knew that already

A controversially titled piece over at InsideHigherEd is causing a bit of a ruckus in the comments area.

Are College Students Techno Idiots? by Paul D. Thacker is on a report by the Educational Testing Service basically stating that students in higher education rely on Google way too much when they search, and that they only use the first couple of results in a search without giving much thought to issues of accuracy, bias or recency.

Few test takers demonstrated effective information literacy skills, and students earned only about half the points that could have been awarded. Females fared just as poorly as males. For instance, when asked to select a research statement for a class assignment, only 44 percent identified a statement that captured the assignment’s demands. And when asked to evaluate several Web sites, 52 percent correctly assessed the objectivity of the sites, 65 percent correctly judged for authority, and 72 percent for timeliness. Overall, 49 percent correctly identified the site that satisfied all three criteria.

Results also show that students might even lack the basics on a search engine like Google. When asked to narrow a search that was too broad, only 35 percent of students selected the correct revision. Further, 80 percent of students put irrelevant points into a slide program designed to persuade an audience.
Of course, we librarians knew all this, and have been trying to make the case to faculty that we can help with this situation. It's actually nice to see an article like this in a publication like IHE since it helps raise the issue with faculty and also clearly makes the case that librarians and libraries can help get students using the resources that the faculty want them to use.

I like the comment by Ross Hunt:
One would expect ETS to get this wrong end first: the idea that this is some sort of short-term processing problem (which will be addressed by teaching them “how to evaluate information”) ignores what the real problem is. Virtually none of my students have any notion of the ecology of texts — the fact that every text exists in a context out of which it comes and to which it speaks. They don’t know what a journal is, they don’t know what a scholarly article is, they don’t know what a magazine is — in the sense, in all those cases, that they don’t know how texts get where they are and why. For them, texts drop from Mars. And that’s because that’s the way they’ve been taught to see them by textbooks and isolated photocopies. My main job as a teacher of English — as I see it, anyway — is to introduce them into the world of texts and help them learn to survive in it. For ETS to tell us that they’re “Techno-Idiots” because they don’t know what nobody’s ever shown them is about what we should expect.

I find as I do more and more IL sessions for scitech students, the main thing I try to teach them is what kind of documents are available, what each type is used for and how to find each type.

This is a great article to pass around to faculty -- the title will certainly get them reading while the content should get them thinking about their friendly neighbourhood librarian.

November 14, 2006

Two from O'Reilly Radar

Two interesting posts via Tim O'Reilly:


  • O'Reilly quotes Sarah Milstein, co-author and editor of Google: The Missing Manual about the State of Search:
    Assuming search winds up lasting 100+ years, it's still in its infancy. Still, it surprises me that the presentation of Google's main search results pages barely changed in the two years from one edition of the book to the next. The main difference is that now, onebox results with specialized information appear more frequently (though randomly) at the top of results listing. At this point, I'm ready for a better results interface.

    *snip*

    Moreover, it's no longer clear that Google is a search company. They're certainly an ad-brokering network (I'm sure everyone saw the announcement last week that they're reaching into newspaper ads now, with plans for basically all major media). And they're a provider of (mostly) Web-based productivity tools of all kinds. But a lot of those activities seem to have little to do with their mission of "organizing the world's information and making it universally accessible and useful." They do seem to have organized the world's top search experts. But as a customer, I'm not sure I'm feeling the benefit of that in my everyday searching.
    And quite a few more interesting points, mostly about Google.

  • Web 2.0 Principles and Best Practices is a post about a book of the same name prepared by O'Reilly and John Musser.
    Web 2.0 is here today—and yet its vast, disruptive impact is just beginning. More than just the latest technology buzzword, it's a transformative force that's propelling companies across all industries towards a new way of doing business characterized by user participation, openness, and network effects.

    What does Web 2.0 mean to your company and products? What are the risks and opportunities? What are the proven strategies for successfully capitalizing on these changes?

    O'Reilly Radar's Web 2.0 Principles and Best Practices lays out the answers—the why, what, who, and how of Web 2.0. It's an indispensable guide for technology decision-makers—executives, product strategists, entrepreneurs, and thought leaders—who are ready to compete and prosper in today's Web 2.0 world.
    There's an excerpt here. Sounds interesting, doesn't it? Well, the PDF will cost you US$375 and print+online US$395. The comments on the post are quite interesting, mostly about how the whole Web 2.0 thing is just a marketing bandwagon invented by O'Reilly.

    O'Reilly responds:
    I'm sorry that folks reading this blog are upset about the price -- it's a good sign that you guys want to read what we have to say -- but those commenters who noted that you aren't the target audience are exactly right. The document is targeted at the Forrester/Gartner customer, and believe me, the report is cheap by those standards.

    One of the things that we've noticed at O'Reilly is that because of our exclusive focus on the "alpha geeks," we tend to abandon markets as they mature -- just as the money starts coming in with corporate adoption. We're trying to pitch more products to this audience, so we don't remain solely early stage. It's still a work in progress.

    That being said, those of you who said we'd have done much better at, say $99, might well have been right. Pricing is always a bit of a crapshoot, where you're trading off volume against price.

    And those of you who are looking for something free, please note that my What is Web 2.0? article has been read by hundreds of thousands of people. (And Steve (the first commenter), you clearly need to read that article, since it makes clear that Web 2.0 is NOT Ajax.)

    It's an interesting idea. Good information costs money and if you really need to know what somebody has to say, you may very well be willing to pay a lot of money for the privilege. Kinda like scholarly publishing used to be...
More from Tim O'Reilly, maybe later today or tomorrow.

Review of Countdown: A history of space flight by T.A. Heppenheimer

From the other blog:

The decision to read this book was certainly not rocket science, even if it is a book about rocket science. An engaging and fascinating read, you don't have to be a brain surgeon to understand it either...
Full review here.

November 10, 2006

Science funding in Canada

Earlier this week, Ian Urquhart of The Toronto Star had a very illuminating article, T.O.'s research crossroads, commenting on the future of scientific research funding here in Toronto and in Canada as a whole:

"For all its current research and industrial strengths, it is by no means certain that the Toronto region can continue to prosper," says the report by a team of researchers at the University of Toronto.

"The remarkable growth in global competition in advanced technology industries, together with the major investments being made by governments around the world to strategically support research and innovation, present a major challenge to the Toronto region."

*snip*

Meanwhile, says the report, under the Conservative government in Ottawa, key research agencies "have either reached the end of their terms or have received no word as to their future funding."

In this respect, the report names the Canada Foundation for Innovation (which funds research infrastructure), Genome Canada (which supports genetic research), CANARIE (a national high-bandwidth network for research), and the Canada Research Chairs program (under which research talent is recruited to Canada).

"Undoubtedly, Canada is at a crossroads," says Ross McGregor, president of the Toronto Region Research Alliance, in a preface to the report. "Will our national government choose to make the dramatic investments which will move us into the top tier of innovation-intensive countries in the world? Or will we be satisfied with the status quo, which will effectively mean falling behind in the international R&D arena?"
The Globe and Mail's James Rusk weighs in as well:
Toronto ranked second only to the Boston area when measured by the number of science- and engineering-related papers published.

But it fell to fifth place in terms of patent application in the United States, which the researchers took as the measure of the Toronto region's performance in commercializing research.

The researchers also found that Canada does not support research and development as strongly as other countries in the study, such as Sweden, the United States and Singapore.

They also found that, while the Toronto region is on the cusp of becoming "one of the world's true megacentres of research and advanced technologies," it gets only 21 per cent of federal funding for research and development -- even though 35 per cent of all R&D in Canada is done in the area.
The report, prepared by the Toronto Region Research Alliance, mentioned in the article is here.

I've also had an article hanging around by David Crane in The Star, from October 22nd, Canada must find, exploit new talents, which is more directly about the patents issue.
One measure of our ability to turn the results of our efforts in research and development into potentially commercial possibilities is the rate at which we generate new patents. Patents are a legal recognition that an idea is unique and deserves protection, allowing the inventors either to proceed to commercialize the invention themselves or to license it to others. It is one way, though not the only way, of measuring the results of our investments in research and development.

This past week the World Intellectual Property Office, known as WIPO, published its annual report, which showed that we may be getting a poor return for our research investments and that these numbers should be a cause for concern in Canada.

*snip*

We live in what is known as a knowledge economy, where ideas are the new currency — represented by talented people and the discoveries they make. In Canada, companies such as Research in Motion and CAE Inc., the maker of aircraft flight simulators, are examples where the ideas and knowledge, not the factory buildings, are the real assets of the enterprise. Indeed, in many businesses today, the real value is not in physical assets but in what we call intangibles such as ideas, skills and reputation. Microsoft is an example. This is the way of the future.

The future will also be a world of much greater competition, much faster development of new ideas, and where brainpower and capacity for risk will be the hallmarks of success. Today's marvel will quickly become tomorrow's commodity — the cellphone is a prime example and the laptop computer another. This is why The Economist recently had a whole section on the global search for talent. The level of risk aversion in our investing community and among public policy makers does not augur well for Canada's future.


The WIPO report is here.

My point here, and I do have one, is that as a society we have to come to grips with how we value science and technology. Do we want to continue to be hewers of wood and drawers of water or do we want to step up and take our place in a new world? The wood and water (and fish and oil) tend not to provide large numbers of high-paying jobs, especially when the Canadian operations are merely branch plants of foreign-owned multinationals. If we don't put our brains to work, we'll fall behind those that do. There are positive signs, as mentioned in the articles above: a growing acceptance that the environment is an important issue, that science has provided very good insight into what is happening in our ecosystem, and that many of the things we need to do going forward will grow out of science and engineering as well as politics and economics. It would be nice if scientists and engineers had as much clout and respect and were valued as much as politicians, economists, journalists, lawyers and all the rest. When was the last time you saw a TV drama about scientists or engineers?

It all comes down to the idea that we must challenge our governments to take scientific and technical issues seriously and to value the scientists and engineers who do the research in a way that has never really happened here. An awful lot depends on it.

What faculty think of libraries

Or at least one particular faculty member. An interesting post at Slaves of Academe. The post is kind of rambling and disjointed, and mixes up musings about public and academic libraries and their very different missions, but it also raises a lot of interesting points:


Still, the library in its edutainment mode fulfills vital social functions, as places where children can be relatively safe, where immigrants and pensioners can check out the material that speaks to their needs, where those without home internet access go to check email or buy something online (or look at porn, when they can), therefore lessening the vaunted digital divide. But will the library eventually lose the books and reading and just become a social centre, even on university campuses? This is an interesting question. It is far too easy to err on the side of entertainment rather than education in the synthesis of 'edutainment.' I do think we lose something when the line between the two becomes too blurry. And this is a shame, for the library as a site of self-formation has been central to my own development as an intellectual, albeit in strange and unconventional ways. The New Library does not really appeal to me. If I wanted to spend time with loud schoolchildren and the odd pensioner, I would go to the Mall. Then again, the digital explosion in library resources means that academic misanthropes such as myself no longer have to actually go to a place and see people whilst pursuing books or articles. Remember photocopying? Ha! Some things are indeed not missed. But the new libraries, whatever their paradigmatic limitations, are packed with people. So obviously, a need is being met. But the old library, the shushing and the schoolmarms and the old maids and the cubicles and the card catalogues, gone oh so long ago, will always be central to my understanding of the role and function of the library, even as seemingly this vision withers away or rather transforms itself into another creature entirely.
Via InsideHigherEd.

November 9, 2006

I will become an active participant in moving my library forward

A great manifesto by Laura Cohen over at Library 2.0: An Academic's Perspective. As Walt notes, what's refreshingly different is that the points are positive and affirming rather than snide or sneering.

Great stuff. A couple of highlights:


  • I will recognize that libraries change slowly, and will work with my colleagues to expedite our responsiveness to change.
  • I will be courageous about proposing new services and new ways of providing services, even though some of my colleagues will be resistant.
  • I will not fear Google or related services, but rather will take advantage of these services to benefit users while also providing excellent library services that users need.
  • I will validate, through my actions, librarians' vital and relevant professional role in any type of information culture that evolves.

What would be cool is if we could all turn this into one of those deranged memes, and all add one or two affirmations of our own.

So, my 2 cents in a typically cautionary vein:

  • I will recognize the diversity in technological aptitudes, curiosity and interests in my users and try to make sure products and services are delivered flexibly enough to meet them where they are, not where I would like them to be.
  • I will understand that not everyone sees change the way I do, nor do they want it to happen as quickly, and that resistance can come from unexpected places.

What percentage of librarians have never even heard of Library 2.0? What percentage don't read any blogs at all or really know what a wiki or a podcast is? These may be higher than we think.

November 8, 2006

Giving good presentations using PowerPoint

Chad Orzel of Uncertain Principles has a couple of recent posts on how to use PowerPoint (or other presentation software) to give presentations & lectures without causing too much pain to the audience.

The first post is A Good Craftsman Never Blames His Tools. The idea is that using PowerPoint is really independent of giving a good presentation. You can give good or bad presentations with or without PowerPoint.

Here's the thing: PowerPoint is a tool, nothing more. It doesn't make bad speakers into good speakers, or good speakers into bad speakers. The people you see giving boring and incoherent PowerPoint presentations? They'd be giving boring and incoherent presentations with regular slides or overhead transparencies. The people you see giving clear and inspiring presentations with PowerPoint? They'd give clear and inspiring presentations if they had to chisel their figures into stone tablets while they talked.

PowerPoint doesn't make presentations bad. It enables bad speakers to do a certain type of bad presentation very easily, but getting rid of PowerPoint won't change these people into good speakers. It just changes the mode of their badness slightly.
I agree with this sentiment completely and in my own presentations I always try to speak as naturally as possible, even when using a PowerPoint outline for my talk. In fact, I never use PowerPoint for IL sessions. I only use an outline-style webpage to guide me and provide a summary for the students. This has generally worked very well for me, as I can also save a few trees and hope that the web page replaces the need for me to give handouts that would rehash the same info. Obviously, if the prof posts the URL of my page to a course website, that's great too. As a bit of an aside, I once had a prof tell me that he didn't feel he needed to get me back to talk to new sessions of his class every year because he could just point his students to my page. Oh well.

Of course, it's easier said than done for most of us. As Chad says, PowerPoint enables a certain kind of badness which can be easy to slide into. I once had a prof who used a couple of hundred slides for a 2.5 hour class. Torture. In the next post in his series, How to Do a Good PowerPoint Lecture, he lists a bunch of suggestions to make our presentations work well. Many of his suggestions are definitely geared towards his field of physics, but most are still applicable to librarians and beyond.

  1. Know Your Audience -- style, level, focus are all important considerations.
  2. Limit Your Material -- an absolute max of one slide/minute, less if possible
  3. Equations Are Death -- library analogy would be "buzzwords & acronyms are death."
  4. Text Is Death -- lean and mean slides, avoid complete sentences.
  5. Explain Your Graphics -- why is that screen shot here? Will anybody in the audience even be able to see it clearly enough?
  6. Define Your Terms -- he means constants & variables in equations, but could also mean acronyms or buzz words.
  7. Keep the Background Simple -- As in slide background.
  8. Keep the Animation Simple. Ooooh, I hates slide animation. If anyone ever catches me using it, please just shoot me.
  9. Keep Multimedia to a Minimum -- If multimedia can blow up on Bill Gates, it can blow up on you too.
  10. Prompt Yourself -- Prepare & rehearse well enough that you don't lose your place or get confused. Don't add to the problem by having slides that confuse even you.
  11. Structure Matters -- Your presentation should make sense.

These are all very sensible suggestions, and the comments at the end of Orzel's posts are also very illuminating, especially in the "yeah, but..." sense; there's a good discussion of presentation techniques there.

I'm presenting at the Ontario Library Association's 2007 SuperConference at the end of January (on blogs, natch) and I'm starting to work on my presentation. It should be interesting to see how well I do in following Orzel's suggestions. If you're there, let me know how I do. The program is here, you can find me by searching on Dupuis. Cory Doctorow is one of the keynotes, so it should be a hoot.

Update: For what it's worth, I actually plan on using the OpenOffice module Impress to create my presentation.

Update:
Orzel has added some of his own sample presentations for our perusal.

Yet another Update: Janet Stemwedel of Adventures in Ethics and Science has a related post where she discusses the dreaded read-a-paper style at philosophy conferences. Lots of good comments too.

November 7, 2006

Net Neutrality

Net Neutrality is not an issue I've posted on before, but nevertheless it's one I think is vitally important to the future of the internet.

From Net Neutrality in Canada:

The debate about net neutrality can get complex. However, it boils down to the role of Internet Service Providers [ISPs] in the operation of the internet. In a neutral network, ISPs provide non-discriminatory access to all types of traffic. In this model, ISPs compete on the factors of speed (bandwidth) and throughput (bandwidth * time). This is, for the most part, how internet service has been delivered in the past and is a model that works well for the internet as a whole, as ISPs are not involved in controlling the content that you access on the internet.

New technologies are coming to the market which enable ISPs to monitor and "shape" traffic based on its content; that is, ISPs are now able to determine what content you are accessing on the internet and modify your service accordingly. At the most ethical of times, this ability to control traffic, commonly called Quality of Service [QoS], has been applied to protocols such as the Domain Name Service [DNS] allowing DNS lookups to be resolved at a higher priority than HTTP traffic. The management of this type of QoS is typically a matter of internal network operations at an ISP and has no associated cost to the consumer.

Clearly, libraries and librarians should prefer a neutral net, one that gives fair and equal access to all content, regardless of its origin. The network itself should not prefer one source to another or one application to another. Packets are packets and should be passed along impartially. Clearly, from a scientific point of view, data and information should be fairly and openly available to all. Can you imagine a network that would block information on evolution or stem cell research? Neither can I, so we should make sure we keep this issue at the front of our minds, perhaps even including the basic concepts in some of our instruction.
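To make the traffic-shaping idea in the quote above a little more concrete, here's a toy Python sketch contrasting neutral, first-come-first-served handling with protocol-based prioritization. The protocols and priority numbers are invented for illustration; real QoS happens in routers and ISP gear, not in a script like this:

    # Toy contrast between neutral packet handling and protocol-based shaping.
    import heapq
    from collections import deque

    PRIORITY = {"dns": 0, "http": 1, "video": 2}  # lower number = served first

    def neutral_order(packets):
        """A neutral network: packets go out in the order they arrived."""
        queue = deque(packets)
        return [queue.popleft() for _ in range(len(queue))]

    def shaped_order(packets):
        """A shaped network: the ISP inspects each packet's protocol and
        reorders traffic according to its own priority table."""
        heap = [(PRIORITY.get(proto, 99), i, proto, payload)
                for i, (proto, payload) in enumerate(packets)]
        heapq.heapify(heap)
        ordered = []
        while heap:
            _, _, proto, payload = heapq.heappop(heap)
            ordered.append((proto, payload))
        return ordered

    arriving = [("video", "frame 1"), ("http", "page A"), ("dns", "lookup X")]
    print(neutral_order(arriving))  # same order they arrived in
    print(shaped_order(arriving))   # DNS first, video last

The worry, of course, is what happens when that priority table starts being set by who has paid the ISP rather than by innocuous technical considerations like DNS lookups.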

There's a petition to sign at the bottom of the Neutrality.ca site for those who are interested. The Wikipedia entry I cite above has a quite comprehensive coverage of the issue, focusing mostly on the USA.

Thanks to Kyenta Martins of library Monkey for bringing this issue to the top of my mind.

November 6, 2006

He's Back....

Yes, Chris Leonard is back with a new blog, Egg - like a bird's egg. Back in the day when Chris was an editor at Elsevier, working on theoretical CS journals, he had a great blog called Computing Chris. Well, he left Elsevier about a year ago and the science blogosphere has been much the poorer without his contributions. (Note to self: Chris's Elsevier blog, and all those great posts, are now lost; not a good idea to blog on a platform controlled by your employer. Chris is now on Blogger.)

Chris's new job is at BioMed Central where he's developing their new PhysMath Central, which will apparently have quite a bit of CS content.

Chris is already posting good stuff on his new blog and I hope he continues for many moons to come.

November 3, 2006

FSOSS: Web 2.0, eLearning, and Open Source

Web 2.0, eLearning, and Open Source by Kevin Pitts, eLearning Faculty Advisor - Seneca College and James Humphreys, eLearning Faculty Advisor - Seneca College

The evolving “Web” (some have referred to the next generation of web use as Web 2.0) brings with it a need to rethink the wants and needs of the individual, and how that individual collaborates and shares with the communities to which he or she belongs. In this presentation we’ll look at the implications of Web 2.0 on elearning, speculate on the design and development of open source tools and services to meet the elearning needs of Web 2.0 users, and demonstrate some early innovations that may take hold as we move forward in the elearning web space.
This was another great session, also the one closest to my heart as a librarian as it was about the intersection between education and technology and how the web can be a platform for education directly, not just as an adjunct to another delivery method. Great stuff.

The presenters began by defining Web 2.0, something that they did very quickly given that the audience was generally quite tech-savvy. They note that W2 is all about dynamic social networking and that it fits well with constructivist pedagogy. They propose a kind of emergent elearning model, noting that in the past elearning has mostly been about course management software (CMS) or course web pages. This new model would include things in addition to CMSes such as blogs, wikis and virtual environments. This is symptomatic of a shift in the use of technology in education from the administrative side to the academic side. The tech begins to be used as an extra to a course, like a blog or wiki or a social network, feeding the long tail of niche markets in knowledge.

This is where FOSS comes in handy. With the merging of a lot of the proprietary systems companies, like Blackboard & WebCT, there's a lot of opportunity for FOSS products to get in the door, as they become part of the review process as product decisions are made. The niche markets in educational needs, the long tail, can be well served by FOSS products as they move out of backoffice apps like operating systems to mainstream elearning applications. They mentioned there is a kind of "sweet spot" in FOSS products where they combine a specific niche and product maturity. Some examples are Moodle and Sakai for CMSes.

Some future directions? The most fascinating one was a promotional video they showed of a college that's established a campus in the Second Life virtual world, demonstrating that people are actually teaching in a game-like virtual world. Next they talked about the Seneca social networking product ELGS which uses a "friends" metaphor to provide blogs & wikis. This led to a discussion of the CLEA environment, short for Create/learn/explore/activities.

To sum up, FOSS operating systems seem to be at the tipping point leading to greater acceptance; we are entering the "teaching and learning" era of elearning systems, which leads to a more distributed and democratized learning experience, making institutional/technological cultures more open and surrendering some of the need for ownership & control.

(TOC of my FSOSS posts, FSOSS agenda, video recordings of sessions)

(There were 3 presenters, not two like in the agenda. Unfortunately, I don't have the third name; does anyone else have it so I can include it?)

Friday Fun

It's been a while, lots to catch up on:

November 2, 2006

FSOSS Keynote: The selling of Linux and Open Source : Do we suck at this or what?

The selling of Linux and Open Source : Do we suck at this or what? by
Marcel Gagné, Author of "Moving to Linux: Kiss the Blue Screen of Death Goodbye!" and Linux Journal Columnist

If Linux desktops and Open Source solutions are the answers, then we might just be doing a terrible job at getting the message out. By anybody's count, the 'year of the Linux desktop' has come and gone a few times now. Arguably, the open source community has a better product, but its impact in the marketplace is uninspiring. Marcel will explore the techniques we've used to sell Linux (including FOSS), analyzing what works and what doesn't. He'll present some outlandish ideas for improving our image and, hopefully, making a bigger splash.
Marcel Gagné's keynote was clearly the highlight of the conference for me. Almost a therapy session for the FOSS community, it was basically about how to spread the word about the good things that are going on out there. The talk certainly had a lot of resonance for me, as I think self-promotion is something libraries and librarians could do a much better job of on campus and in academia as a whole these days. Gagné used a lot of humour to make his points; he was by far the best speaker of the day. A lot more people like him would go a long way to making the problem he highlights go away.

Gagné started by mentioning that every year for the past 5 or 6 someone has declared it "The Year of the Linux Desktop" and that Linux would finally make the breakthrough into mainstream computing. Well, every year, it doesn't quite happen that way. And Gagné wants to know why. The first problem he sees is that FOSS proponents are always apologizing for the shortcomings of the products. On the other hand, Microsoft and those guys never apologise for the shortcomings of their products. Perhaps FOSS advocates should also adopt the same damn-the-torpedoes attitude.

But how to do it? Adopt the same strategies as professional marketers (and sellers, too; he makes a distinction between marketing and selling). Use the tried and true marketing strategies to market what is definitely a superior product. First of all, the Product: emphasize Linux, the poster child of free software. Next, Price: he makes sure to note that FOSS isn't really totally free and we need to reassure business and others that we understand the costs of support, training and the like. Place: how do we distribute FOSS? Online, stores, magazines, install fests. And finally, Promotion: two things need to be done here. First, FOSS advocates need to educate the public and second, they need to understand the market.

He points to one of the main things holding FOSS back, and that's fear. Fear of Microsoft taking over any market, fear of being sued, fear of dumb laws like the DMCA, fear of the unknown and, mostly, FUD -- a general fear, uncertainty and doubt that prevents businesses and consumers from taking the plunge.

What have some successes been so far? Gagné mentions the famous Firefox ad in the NYT, the Ubuntu billboard and the IBM prodigy commercials, the Red Hat truth happens ads; real attempts to mobilize & inform the public. Has it helped? Server side, there's 20% growth for FOSS; desktop has been very disappointing.

What is to be done? Gagné says we need to do market research, to find out what potential customers really want, like www.betterdesktop.org does. If you or I could put up a billboard in downtown Toronto, what would it say? We need a Linux & FOSS marketing board, like the ones Canadian commodities such as wheat or dairy products have. We need more crazy ideas. And most of all, we need to stop talking to each other about how our marketing efforts suck and just get out and start talking to potential customers, developers and evangelists.

(Update: TOC of my FSOSS posts, FSOSS agenda, video recordings of sessions)

(Update: fixed some accent problems on the é)

FSOSS: Curious George and the $100 Million Supercomputer

Curious George and the $100 Million Supercomputer by Phil Schwan

For the last four years I helped build storage systems for the largest government, corporate, and academic clusters in the world. This morning we'll do a whirlwind tour of some of the corner cases nobody talks about: not just how to store petabytes of data and transfer tens of gigabytes per second (that part is easy?), but our struggles with Linux kernel disasters, physicists who think they're software developers, POSIX-induced migraines, and debugging a 10,000-node state machine on hardware nobody else has from the wrong side of a classified network.
The most gloriously techie of all the sessions I attended, so much so that quite a bit of the hard-core Unix/Linux stuff was a bit beyond me. In any event, it was still very interesting. Schwan co-founded the company behind Lustre, a file system for very large cluster computing systems. The talk was about the challenges of creating these highly parallel cluster systems.

The aim of the Lustre project he discussed was to create a petabyte storage system that was also very fast, supporting 10 GB/sec transfer rates. Some of the other goals of the system were to support the largest supercomputer systems, including the top 500 cluster computing sites, to provide POSIX file-system semantics and to have a 100% recovery rate.
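
Just to put those two numbers together, a quick back-of-the-envelope calculation (my own arithmetic, not a figure from the talk): even at the full 10 GB/sec, writing an entire petabyte takes more than a day.

    # Back-of-the-envelope: how long does it take to write 1 PB at 10 GB/sec?
    petabyte = 10**15             # bytes (decimal petabyte)
    rate = 10 * 10**9             # 10 GB/sec, in bytes per second
    hours = petabyte / rate / 3600
    print(round(hours, 1))        # about 27.8 hours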

There were a couple of uncomfortable lessons from this effort: don't bother writing your own operating system for a project like this, as it will only cause a lot of delays, and the API exposed by the Linux kernel is far too unstable to build a high-profile project on (this part, in particular, was very techy). One good thing they learned: providing POSIX file-system semantics was a good choice, as it made the whole cluster behave like one machine.
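
To make that "behaves like one machine" point concrete: on a client node, a Lustre file system is just a mount point, so ordinary POSIX file calls work exactly as they would on a local disk. A minimal Python sketch (the /mnt/lustre path is a made-up example, not something from the talk):

    import os

    # Hypothetical mount point for the cluster file system; on a client node
    # it looks like any other directory.
    path = "/mnt/lustre/scratch/results.dat"

    # Plain POSIX calls -- open(), write(), fsync(), close() -- no special API.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.write(fd, b"simulation output\n")
    os.fsync(fd)   # flush to the storage servers, same call as for a local file
    os.close(fd)

    # Any other node that mounts the same file system sees the same bytes,
    # which is what "the whole cluster behaves like one machine" means here.
    with open(path, "rb") as f:
        print(f.read())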

The next part of the talk was called "Software Sucks," and was basically about the perils of very large software projects. A bit naive about such things, they learned many lessons (perhaps they should have read the Fred Brooks book I mention in my post on the Shaver keynote... or perhaps any book on software engineering?). First of all, the software industry is a disaster, tolerating error rates no other industry would accept. The typical comparison: what if bridges or planes failed at the same rate as software does? They did discover the Personal Software Process methodology, which saved their bacon, but I suspect virtually any methodology would have done the same.

(Update: TOC of my FSOSS posts, FSOSS agenda, video recordings of sessions)

FSOSS Keynote: You Should Be Giving This Keynote

You Should Be Giving This Keynote by Mike Shaver, Co-founder, Mozilla Project

In addition to the too-good-to-be-true economics created by the upsurge in software released under increasingly liberal licenses, open development practices and "loosely coupled" projects are demonstrating the power inherent in large and diverse communities. Mike will convince you, through examples, analogies and speaking really, really quickly that the ability to capture even "trivial" contributions from all directions will be more important to the success of modern open source projects than version control, usability guidelines, review processes, licensing, marketing or a really awesome T-shirt design. You will be tested on this material.
This was a very good keynote, a great way to start off the conference. Shaver started by emphasizing the importance of project management in any software development endeavor, namechecking Fred Brooks' classic The Mythical Man-Month, especially the human skills needed by software developers and their managers. He then went on to discuss the overall theme of his keynote: that small is beautiful. Small in the sense that FOSS projects can survive and thrive on a lot of little contributions from a lot of different people, and that small can be a good way to get started. In fact, Shaver says that we should beware of people who jump into a project with grand, world-changing ideas, as they are often undoable in FOSS projects. Starting small means people with only limited interest or limited skill levels can jump in and make a contribution right away. The danger of small can be that you risk loss of knowledge and continuity when the one advocate/developer of an idea or feature drifts away.

To make small work, you have to make it easy for interested people to go from consumers of a product to producers of that same product, and find ways to draw people in. Similarly, you need to make it easy for people to get out, so that they don't risk burning out. People should be able to make their contribution and withdraw, if that's what they want. Small also needs a culture of testing, to make sure that "small" contributions don't cause the whole system to fail. You also need to make your needs clear on a project, possibly via a public wish list, so people know immediately if they're interested in contributing. Related to this is managing expectations: almost an anti-wish list, so people know what you don't want, i.e. what's out of scope or clearly for a later version.

Finally, Shaver made a plea for everyone to contribute to FOSS projects: everyone has some X-Factor, something they're really good at, that they can contribute. The kinds of competencies he mentioned include writing skills, math, single-mindedness & focus, artistic ability, reading, and listening (e.g. tech support, bug finding).

(Update: TOC of my FSOSS posts, FSOSS agenda, video recordings of sessions)

Free Software and Open Source Symposium

I spent last Friday at the Free Software and Open Source Symposium held at the Seneca@York campus. It was a fascinating day of presentations that I'm very glad I attended. It was the 5th annual symposium, which was interesting since I somehow managed to miss the first four, even though they were held just a stone's throw from my office!

Overall, a very stimulating conference with lots of ideas for the free & open source software (FOSS) community. The entire agenda is here; recordings of all the sessions are here. Some of the sessions I missed but am looking forward to catching the videos for:


In any case, the next bunch of posts will include the abstracts and some impressions from a few of the sessions I attended.

Update: Here are links to the rest of my posts:

November 1, 2006

A Halloween anniversary

For what it's worth, yesterday was the 4th anniversary of CoaSL. It's been fun and rewarding to be here, part of the extended families of librarian and science bloggers, and I hope to be able to keep it going for many more years.

Yesterday was also Halloween, of course. Last night, I was out and about in the neighbourhood for 1h45m with my younger son (grade 6) and one of his friends. The haul was about 3 bags of candy each! My older son (grade 8) went over to one of his friends after school and a gang of them went out for a couple of hours. They seemed a bit less dedicated to the pursuit of candy and he only came back with about 2 bags worth. In case you're wondering what the science/librarian connection is here, check out this post at The World's Fair on the taxonomy of Halloween candy. Mmmmmm, candy.

Recently in ACM's Ubiquity & Interactions

From Ubiquity (all OA):


  • AI Re-Emerging as Research in Complex Systems by Kemal A. Delic, Umeshwar Dayal. Yet another renaming of the AI field to something that sounds less impossible?
  • Ubiquity Interviews USC'S Dr. Alice Parker
    UBIQUITY: And that seems like a good place to end the interview. Is there anything else you might like to say to our readers?

    PARKER: Well, I think the great frontier is software and applications at this point, and not technology, even though the technology is very exciting. I look at the demand for complex software systems that we're going to be required to produce, and I'm awestruck by how difficult it's going to be to produce these systems and have them be reliable and fail-safe and just generally safe. So, I think that's the looming frontier and one that technologists have often ignored. They've said, "Oh, yes, software -- that's over there somewhere." I think some of the technologists are going to need to focus on software and applications to figure out how can we best support the software enterprise, because it's a critical one and it's one that is very difficult to envision proceeding the same direction it's been going. Things are getting larger and more complex and harder and harder to construct so that they function in a fail-safe manner. That's the true frontier.

  • Books without Boundaries: A Brief Tour of the System-wide Print Book Collection by Brian F. Lavoie and Roger C. Schonfeld
    As the digital transformation reshapes the nature of print collections, these and many other issues will require the attention of librarians and other decision makers. As we learn how the system-wide collection contextualizes local collections, we might be able to develop new strategies for print-collection management that reflect system-wide, rather than purely local, considerations. The observations and findings discussed in this paper are only a first step in this direction, but we hope they may set direction for discussions about the future of print books in the digital age.
    Pretty good article, with a lot of sensible things to say about the long term future of print & ebooks in academic environments.

From Interactions (all require a subscription), the one on usability testing sample sizes seems especially relevant; see the quick sample-size sketch after this list:

  • Fresh: pushing the envelope: Whither the web? by Fred Sampson
    We hear calls for innovation so often, and from so many sources, that the word is in danger of losing meaning. Innovation does not happen in a vacuum. Innovation feeds on shared knowledge and experience, collaboration, interactions. Even Isaac Newton (borrowing from earlier writers) acknowledged the sources of his inspiration when he said, "If I have seen further it is by standing on the shoulders of giants." The new Web, the new Internet, helps us not only to stand on the shoulders of giants, but to find more and more giants, and to become giants ourselves.

    All this innovation is going to affect how we develop and distribute information. From the perspective of an information developer (you might know us as technical writers), findability and delivery are just as important as the content: Facts that can't be found are useless. If the younger users of our information expect content in small chunks, we should be prepared to deliver information that way. I shudder to think of the challenges of delivering software-installation instructions via cell-phone text message, but that might well be a useful delivery option, providing exactly what the user needs, exactly where and when the user needs it. Or maybe we'll provide instruction by podcast, updated by RSS subscription, and put task-oriented instruction in your ear. Or, at the risk of being lost among the ordinary and profane, we might provide tutorials in video posted to YouTube.com. (Shudder again.)

  • Forum: under development: How do you manage your contacts if you can't read or write? by Jan Chipchase
    The mobile phone enables personal, convenient synchronous and asynchronous communication—in essence allowing its users' communication to transcend time and space, at a time and in a context of his or her choosing. It is therefore unsurprising that with these almost superhuman characteristics, many people consider their mobile phone to be one of the essential objects to carry when leaving home. These benefits (and associated costs) apply equally to an urban city dweller in London and a rural farmer in Bangladesh.
    An important thought. The web is great, technology is great, but masses of humanity are just left by the wayside.

  • Sample sizes for usability tests: mostly math, not magic by James R. Lewis
    Why do we keep talking about appropriate sample sizes for usability tests?

    Perhaps the most important factor is the economics of usability testing. For many practitioners, usability tests are fairly expensive events, with much of the expense in the variable cost of the number of participants observed (which includes cost of participants, cost of observers, cost of lab, and limited time to obtain data to provide to developers in a timely fashion). Excessive sampling is always wasteful of resources [9], but when the cost of an additional sample (in usability testing, an additional participant) is high, it is very important that the benefit of additional sampling outweighs the cost.

    Another factor is the wide range of test and evaluation situations that fall under the umbrella of usability testing. Usability testing includes three key components: representative participants, representative tasks, and representative environments, with participants' activities monitored by one or more observers [2]. Within this framework, however, usability tests have wide variation in method and motivation. They can be formal or informal, think-aloud or not, use low-fidelity prototypes or working systems. They can have a primary focus on task-level measurements (summative testing) or problem discovery (formative testing). This latter distinction is very important, as it determines the appropriate general approach to sample-size estimation for usability tests.

  • Bridge the gap: Toward a common ground: practice and research in HCI by Avi Parush
    There is indeed a gap between research and practice in HCI. Primarily, practitioners express difficulties in benefiting from research. It was proposed here that there are different types of research in HCI, and those types were delineated as a taxonomy. The ability to utilize and benefit from any of the research types depends on how a practitioner defines his practical problem as a research question. The abstraction of the question on different levels can lead one to search and find potentially beneficial research that can be applied in the practical arena.

  • People: fast forward: SeniorCHI: the geezers are coming! by Aaron Marcus
    What do seniors want and need from human-computer interaction and communication? What are the long-term effects on them with mobile/computing devices? How late in their lives can and should we expose them to the latest technology?

    *snip*

    As we all grow older, the time to begin thinking about user interfaces for the elderly becomes an issue to which all can relate. Perhaps that portends a boom time for SeniorCHI. We shall soon see.
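
Since the Lewis piece is behind a subscription wall, here's the gist of the math it alludes to. The standard problem-discovery model in this literature assumes each participant independently detects a given problem with probability p, so n participants detect it with probability 1 - (1 - p)^n; solving for n tells you how many participants you need to reach a discovery goal. A small sketch of that calculation (the numbers below are my own illustrations, not taken from the article):

    import math

    def participants_needed(p, goal):
        """Participants needed so that a problem occurring with per-participant
        probability p is seen at least once with probability >= goal.
        Rearranged from: 1 - (1 - p)**n >= goal."""
        return math.ceil(math.log(1.0 - goal) / math.log(1.0 - p))

    # Illustrative only: a 90% chance of catching a problem that affects 1 in 4 users.
    print(participants_needed(p=0.25, goal=0.90))   # -> 9

    # What the oft-quoted "five users" buys you for fairly common problems
    # (p = 0.31 is a figure frequently cited in this literature).
    print(round(1 - (1 - 0.31)**5, 2))              # -> 0.84

Note that this kind of formula only covers the problem-discovery (formative) side of the formative/summative distinction Lewis draws; summative, measurement-focused tests call for conventional statistical sample-size methods instead.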