October 31, 2006

Canadian Bloggers take note: Blog River episode of Corner Gas coming up

Next week's (Nov 6) episode of CTV's absolutely brilliant sitcom Corner Gas (official site) features, among other things, dim bulb Hank starting up a blog.

Brent regrets convincing Hank to start an on-line blog when he realizes that he's actually going to have to read it. And when Lacey gets bummed out over a friend's recent success, the food at the Ruby goes downhill. To take Lacey's mind off it, Emma invites her to a BBQ but things turn ugly when Lacey takes Oscar on in a friendly game of horseshoes. Meanwhile Wanda gives Davis a hand with his taxes with expensive results.
At least here in Toronto, it's on Mondays at 8pm. CTV has started airing commercials for the show.

Google, or not

My recentish spate of Google-related posts seems to have attracted the attention of Jimmy Atkinson of the Online Education Database, who pointed out to me their page Research Beyond Google: 119 Authoritative, Invisible, and Comprehensive Resources, which lists a bunch of pretty good non-Google sources. The general areas are: Deep Web Search Engines, Art, Books Online, Business, Consumer, Economic and Job Data, Finance and Investing, General Research, Government Data, International, Law and Politics, Library of Congress, Medical and Health, Science and Transportation. I'm sure that he would be more than happy to take some suggestions; I know I'm going to send a few.

And while I'm on the subject of Google, I also want to draw attention to the Curious Cat Science and Engineering Search using the new Google Co-op Custom Search Engine functionality. Thanks to Curious Cat for pointing it out in the comments a few posts ago.

October 30, 2006

WILU 2007: Teach Every Angle

WILU (Workshop on Instruction in Library Use) is at my institution, York University, in 2007.

York University, a multi-cultural metropolitan campus in Toronto, is well known for its unique, interdisciplinary approach to teaching and learning, coupled with an academic environment where students and faculty are encouraged to think critically and test the boundaries and structures of knowledge. The WILU 36 theme at York University - "Teach Every Angle" - has been conceived to reflect these core values and ideals. At this year's conference, presenters and participants alike will be challenged to think beyond the traditional parameters of information literacy theory and practice in sessions which will centre around progressive and alternative approaches to teaching, learning and inquiry.

The call for papers is here.
The theme of this conference is Teach Every Angle. York University is known for its multicultural and diverse student population that reflects the broader Canadian mosaic, its tradition of interdisciplinarity, and an emphasis on social justice in its approach to teaching, learning and inquiry. We aim to offer a conference program that encourages delegates to think beyond the traditional boundaries of theory and practice to progressive and alternative approaches to information literacy.

Topics may include but are not limited to:

  • Information literacy research/practices for interdisciplinary programs
  • Information literacy in multicultural and diverse environments
  • International and national advocacy for information literacy
  • Intellectual freedom, intellectual property and information literacy
  • Critical pedagogy and fostering critical thinking skills
  • Best practices in faculty liaison and developing partnerships with educational colleagues and centres
  • Integration of information literacy competencies within the curriculum
  • Assessment of information literacy and impacts on academic success and lifelong learning
  • Research on learning theory and alternative educational theory as it applies to information literacy practices
  • Relationship between information literacy and other forms of literacy - visual, cultural, media, etc
  • Teaching and learning applications of new internet-based technologies in academe e.g. Web 2.0 technologies (wikis, podcasts, blogs, folksonomies, social bookmarking etc.); coursewares; clicker technology etc.


Presentation Information & Formats

Submissions, in either English or French, for 90-minute research papers, 90-minute case studies and 3-hour workshops that relate to the theme of the conference are welcomed.

  • Research Paper: A 90-minute presentation based on scholarly research. The time period will include space for questions and discussion.
  • Case Study: A description of an activity or project with reflections and implications. The 90-minute presentation will include space for questions and discussion.
  • Hands-on Workshop: A 3-hour session that encourages active learning on the part of the participants via exercises, guided discussions, activities and so forth. Includes summary and conclusions by the organizers and time for questions and discussion.

The deadline for submissions is November 24th, 2006.

Celebrating Engineering in the Globe and Mail

This morning's Globe and Mail newspaper contained an insert from the Ontario Society of Professional Engineers entitled "Celebrating Engineering: From Nuclear to Life Sciences" with some very nice promotional articles on engineering.

The front page article is on Prof Molly Shoichet of UofT and how she applies engineering to spinal cord research. Inside there are also articles on engineers at Toronto's innovative and multidisciplinary MaRS Centre, Engineers without Borders and the role of engineers in today's complex energy mix. I applaud this effort, especially as it clearly makes the case that engineering isn't the boring old techy image of the past, that it can be about improving our health, saving the environment and making the world a better place. The insert doesn't appear to be online. I've emailed the OSPE to see if they have something I can link to and will update this post if necessary.

On that note, I'd also like to point out that there's an article in the Professional Engineers of Ontario's magazine Engineering Dimensions, What can diversity bring to engineering, which is clearly in the same vein as the OSPE insert.

Update: Cleared up some OSPE/PEO linkage confusion on my part. Thanks David & Elizabeth!

Update 2006.12.20: Added link to the pdf of the supplement.

October 28, 2006

LISZEN: Coolest. Thing. Ever.

Okay, maybe not ever, but still plenty cool. Garrett Hungerford has used the brand new Google Co-op Customizable Search Engine feature to create a search engine just for library blogs. It's called LISZEN and is well worth fiddling around with.

Using listings such as the one at LISWiki, he's compiled a list of over 500 blogs to search. You can even suggest new blogs for the engine to pick up.

Interestingly, it's a rather modest search engine, as it has not picked up any discussion of itself yet. As of right now, if you do a search on "liszen" you will get no hits (for some reason, I can't create a link to the search results page in the engine, perhaps a bug with Google CSE?). Compare that with 16 hits on Google Blog Search. I wonder how long the Google CSE database stays out of sync with what should be more-or-less the same info in Google Blog Search. via LISNews.

It would be great if other communities out there were able to create their own blog search engines -- science blogs for example would be a great idea for an engine, but there must be way more than 500 science blogs out there.

October 25, 2006

I hates dichotomies

As in the way Yosemite Sam always says "I hates rabbits."

Anyways, one of the things that I find constantly annoying about the whole Library 2.0 thing is how some (not all, not even many) advocates manage to turn everything into an either/or proposition rather than a more measured "and" approach. I find I often agree with the spirit of the message but am put off by the wording and tone.

All this gets us to a post on the Free Range Librarian I read in the latest Carnival of the Infosciences.

The post is titled L1 vs. L2: Adapted from O'Reilly, getting its inspiration from a famous essay by Tim O'Reilly of O'Reilly Books. The post is basically a list of old-school L1 concepts and their new-and-improved L2 counterparts. My purpose here is to take the "or" out of Library 2.0 and talk a little about how we could improve our libraries with "and" -- adding, not subtracting or replacing. A lot of these ideas have been around for a long time and are only just finding a new expression and manifestation with new technology.

So, from the FRL's dichotomies:

Closed stacks --> Open stacks

This is the only one I really think is a silly comparison, both literally and figuratively. When was the last time you went into a library with closed stacks? Not that often, I bet. The implication that more traditional library spaces are, by definition, closed and inhospitable is not even worth refuting.

Collection development --> Library suggestion box
Don't most libraries already have a suggestion box? Don't most libraries already encourage their communities to make suggestions for the collection? Personally, I almost always buy items that are suggested, even (especially?) from undergrads who are at the front lines of information needs. Two-minute ref desk interactions have sent me into buying flurries to fill gaps in the collection. On the other hand, I also use my professional judgement to make sure we have a solid just-in-case collection, because we also need to be able to satisfy immediate needs, for people who can't wait for just-in-time delivery, or for people who aren't sure what they want but need to browse and look at a bunch of things we already have. Notice how this paragraph hasn't used the word "book." You can't suggest something if you don't know what it is yet.

Preorganized ILS --> User tagging
What I really want is a preorganized ILS with user tagging. Only a relatively small portion of records will ever get a tag, so we're still going to need a taxonomy to support the folksonomy. I think the 80/20 rule will apply here -- 20% of the records will receive 80% of the user tags.
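
To make the "and" concrete, here's a minimal, purely hypothetical sketch (the class and field names are mine, not any real ILS's data model) of a record that keeps its controlled subject headings and simply gains user tags on top of them:

```python
# Hypothetical sketch only: controlled headings always exist; tags are optional.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CatalogueRecord:
    title: str
    subject_headings: List[str]                          # taxonomy, assigned by cataloguers
    user_tags: List[str] = field(default_factory=list)   # folksonomy, added by users

    def search_terms(self) -> List[str]:
        # Tags enrich the record when present; the headings remain the fallback
        # for the (likely large) majority of records that never get tagged.
        return self.subject_headings + self.user_tags

record = CatalogueRecord(
    title="Introduction to Algorithms",
    subject_headings=["Computer algorithms", "Computer programming"],
)
record.user_tags.append("clrs")   # a user adds a tag; nothing is removed or replaced
print(record.search_terms())
```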

Walk-in services --> Globally available services
Yes, I agree. Both. But we have to understand that our physical and virtual spaces are both important, and not ignore one in favour of the other.

“Read-only” catalog --> Amazon-style comments
See "Preorganized ILS --> User tagging" Same idea.

Print newsletter mailed out --> Team-built blog
It's a great idea to have a blog. On the other hand, if I were a public library I would also recognize that I have a very substantial audience that isn't as online as I'd like them to be, either by choice or necessity. We have to meet our communities where they are, not where we'd like them to be. We have to serve all our constituencies, not just the ones we think are the coolest or the ones we identify with the most.

Easy = dumb users --> Easy = smart systems
I think it's a good idea to have our systems as easy to use as possible, while still having a good range of functionality. Hitting that sweet spot is really hard, and it's not like library systems are the only ones that have trouble finding it. The question here is why do we want our systems to be easy to use? Is it because we think our users are dumb? No, of course not, and using the word "dumb" here is silly. Certainly, we understand that some of our users will be inexperienced with online systems, so for them easy is better. We also understand that an "easy" system is harder to misunderstand or misinterpret, so fewer of our patrons will make mistakes in using it. Do we want our systems to be easy because the systems are also smart/cool/sexy/2.0/PHP/AJAX? I guess so, but I think that the old-school reasoning is sufficient to motivate us to make our systems easy to use.

Limited service options --> Broad range of options
Yes, I agree. I'm not sure what this has to do with Library 2.0, though.

Information as commodity --> Information as conversation
Some information is a commodity and some is a conversation; I don't think one kind is better than the other, and I see no reason the very same users can't be interested in both at different times. Some people may be exclusively interested in one or the other, and that's ok with me.

Monolithic applications --> Flexible, adaptive modules
Sure, this is part of the evolution of computer systems over the last 50-60 years. I'll blame the ILS vendors for this one, though.

Mission focus is output --> Mission focus is outcome
Not sure what this one means, but it sounds like we want our patrons to be happy and satisfied with their experience in our library spaces rather than just happy with the stuff we can lend them. Sure, I'll agree with that, noting that I don't think this is a new value.

Focus on bringing ‘em in --> Focus on finding the user
Reach out to under-represented patron groups, expand our potential patron base, go where the user is instead of relying on them to come to you. Sure, I'll go with that. Again, I don't think this concept is new -- book trucks have been around for a while, for example.

ILS is core operation --> User services are core
User services always have been and always will be at the core of library operations. Have all libraries and librarians completely and perfectly delivered on that core since the dawn of time? Of course not. Are there new services to add to the mix? Sure. But last I checked, most libraries have fairly limited resources, so they might be understandably (if unfortunately) wary about experimenting with the newest and shiniest.

October 24, 2006

Another nail in the coffin...

...for traditional A&I services? Or at least a wake-up call to the more farseeing of them. Time to scramble, add value, innovate, make a difference. Justify your cost to your customers.

Google has just announced their new customizable search engine product. The idea is that you can create an engine that searches a select number of sites and nothing else. This, of course, eliminates the tons of false hits most searches return from unrelated sites. Essentially, I could create my own Computer Science Google Scholar (in fact, I just might do that...) and to heck with all the other engines.
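
For a rough sense of the idea, here's a small hypothetical sketch that approximates "search only my selected sites" with the ordinary site: operator rather than the Google Co-op interface itself; the site list and function name are my own invention:

```python
# A rough, hypothetical approximation of the "select sites only" idea using
# the standard site: operator -- not the Google Co-op CSE product itself.
from urllib.parse import urlencode

# The hand-picked list is the whole point (and, as discussed below, the hard part).
CS_SITES = ["ieee.org", "acm.org", "arxiv.org", "citeseer.ist.psu.edu"]

def restricted_query(terms: str) -> str:
    """Build a Google query URL limited to the selected sites."""
    site_filter = " OR ".join(f"site:{s}" for s in CS_SITES)
    return "https://www.google.com/search?" + urlencode({"q": f"{terms} ({site_filter})"})

print(restricted_query("cache-oblivious algorithms"))
```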

This is an incredible opportunity for both scholars and libraries. It allows us to harness our own expertise to create fast and efficient research tools.

The downsides? Of course, with great power comes great responsibility. The big problem I see right away is that you have to know the best sites to select for your engine. It allows you to search the entire web along with your selected sites, but that seems to limit the power of being selective. So, if I'm a CS grad student creating my own engine and I'm not bothering to consult with my advisor and/or librarian, maybe I'll just select a few tech report sites or something. Will I know to add IEEE or ACM or Elsevier? Even if I'm somewhat sophisticated, will I know how useful the SIAM journals could be to me? Even if I know that I want to add all those digital collections to my engine, am I sure I know what URL to add to make sure the proper metadata is searched? Do publishers have to do something special? Same with services like arXiv or NCSTRL. You really have to know what you want to make this work well. Great for knowledgeable power users but probably worse than full Google if you don't know what to choose.

So, some first impressions. An exciting product with lots of possibilities but still some potential liabilities.

Communications of the ACM, Nov 2006

As usual, some interesting stuff from the latest CACM, v49i11. In particular, there's a special section on what they call "Entertainment Networking" which includes some articles that may be relevant to those interested in the educational aspects of gaming.

October 19, 2006

Space, the final frontier

What are some of the kinds of physical spaces scitech students need on campus?

  • Classrooms
  • Labs
  • Informal indoor space, such as common rooms, pubs, restaurants, lounges, cafes. This is a very important kind of space, as it's where a lot of the actual learning happens. So much of what we take from our educational experiences we learn from our fellow students and these spaces are where students gather to just hang out and talk. A lot of the collaboration and team work that is so important takes place in these spaces.
  • Informal outdoor spaces, such as parks, fields, walks, benches. Ditto with the above. On a nice campus, these spaces can really add to the experience of being at school. Relaxing and collaborative at the same time.
  • Computer labs. Places where students can work on online research and on preparing their assignments. We always think of students as being hyper-connected, but lots (more than we think) still don't have off-campus access to good computers. They're definitely expected to hand in or perform work (presentations, media, papers) that is generated on computers and institutions of higher education must provide access to these tools.
  • Quiet space. Often forgotten. Students, especially science students, need quiet space to read, study and absorb the complex material they have to master. Whether they are reading books, paper journals, printouts or off a screen, they need quiet. This is still true in our online world. To do assignments, to think and reflect.
  • Formal collaborative space. Also very important. Science and engineering are almost by definition collaborative these days, both in industry and academia. Students need to have spaces that will model the kind of work they will be doing later in their careers. To work on projects, to study together, to have informal tutorial/bull sessions. These spaces need to have good access to computers and software suites that meet the students' needs. Black/white boards, cork boards, all that stuff is still relevant.
  • And of course, lots of other spaces too, that will vary from place to place, like departmental lounges, faculty offices, drop in centres, student services, etc.
  • I'm not forgetting virtual spaces, like blogs, wikis, virtual communities, gaming environments and so on.


So, what kinds of spaces should libraries be in the business of providing to students?

A tough question, but one that is vitally important to the future of academic libraries. As content becomes less and less dependent on physical space, I believe our roles will become much more tied to the kinds of physical spaces we can provide to supplement and enhance students' experiences. We have to use our physical spaces to provide services to students that they find valuable, that they will come back to, that they will recommend to their friends, that they will remember fondly later in life when the fundraisers come calling. We need to care enough about them to provide them with the spaces nobody else will.

To me, the most neglected one of those spaces is quiet study space. Sure, it's important for us to provide computer labs, group study space, relaxing space and classrooms. And to provide the staff to support and assist students in the activities they engage in within those spaces.

What falls through the cracks? Quiet.

At my library, whenever the ambient noise levels would rise too much, we would start getting complaints. All the staff work hard to make sure that we balance the need to collaborate with the need for quiet. It's hard in a relatively small space, but students need it and want it and they complain bitterly when they don't get it.

The challenge? Using our limited physical spaces, often in older buildings, with limited renovation budgets, to find a balance between those competing space needs. It's not going to be easy.

(For those that are interested, it was reading A Place to Read by Terry Caesar over at InsideHigherEd that got me thinking about these issues.)

(Update: I swear, this posting was totally not inspired by this article from York's student newspaper, Excalibur, about the noise levels in one of the other libraries in the York system, Scott Library. But the article is also a perfect example of the kinds of space that students want us to provide. )

October 18, 2006

I was a professional COBOL programmer

Via the O'Reilly Radar, a Computerworld article on the computer language that will not die.

Cobol, that mainstay of business programming throughout the 1960s, ’70s and ’80s, is not going away anytime soon. In a Computerworld survey early this year of IT managers at 352 companies, 62% of the respondents reported that they actively use Cobol. Of those, three quarters said they use it “a lot” and 58% said they’re using it to develop new applications.

Nevertheless, with a few exceptions, companies aren’t enthusiastically expanding their use of Cobol. In the survey, of those who use Cobol, 36% said they are “gradually migrating away” from it, 16% said they will replace it “every chance we get,” and 25% said they’d like to replace Cobol with something else but have found that too difficult or too expensive.

The persistence of Cobol — welcome or not — presents a dilemma for many companies. Their legacy code will require significant resources for years to come, yet younger software developers often don’t want to work with Cobol, and in most cases, they’re no longer learning it in school. And while there are thousands of Cobol coders still in the workplace, a large percentage of them are nearing retirement age.

*snip*

For years, pundits have said that the way to avoid the headaches of maintaining Cobol — and mainframes, green screens and other legacy paraphernalia — is to replace them. But that hasn’t happened, even in the massive Y2k remediation effort.

Indeed, Cobol promises to be around for many more years, challenging the IT managers who must support it. “A lot of people have said they were going to get rid of the mainframe, but that hasn’t happened,” says Mark Washik, a consultant at Schneider Electric SA in Palatine, Ill. “And for us, all that code is working. There’s no sense in rewriting it.”
A very interesting article on a growing niche job market: maintaining old code. It's interesting, because we certainly don't train new programmers/developers/SEs to maintain old code or to re-engineer old systems, but that's often what they end up working on at the beginning of their careers. And schools certainly don't teach COBOL anymore. Fortunately it's easy to learn. I took two courses in it way back in my CS days at Concordia and even then I realized that the second, advanced course in COBOL programming was a waste of time. I really wanted it to be an advanced database/systems course, but it was just plain old COBOL. A wasted opportunity from the prof (whose name I still remember), who was just calling it in for a course that I'm sure was viewed as low priority by the school.

Of course, a small chunk of that old code out there may very well be mine. For 5 or 6 years, I did a lot of COBOL coding as part of the Wang PACE 4GL system. It was the back-end language for the PACE UI and data dictionary functionality. You also had to do any really tricky reports involving specialized calculations in COBOL.

My first, and favourite, programming language was FORTRAN.

On physics conferences

Chad Orzel of Uncertain Principles gives some thoughts/musings/advice on attending and presenting at physics conferences.


  • Preparing a talk
    One of the biggest misconceptions about science and engineering is that scientists and engineers don't need to be able to write well, or speak well. The popular image of a scientist is a sort of socially retarded obsessive, thoroughly enraptured by odd details of science, but shy and mumbling and inarticulate when talking to other people. There's a little bit of truth to this, mostly in the "obsessive" part, but the reality is that communication skills are at least as important in science as in other disciplines. You can have Nobel Prize-worthy data, but if you can't explain the results, in print and in person, well enough to convince other people of their worth, you'll never shake hands with the King of Sweden. There's a lot of writing involved in science, and a lot of public speaking, though not the same sort of public speaking done by people giving oral reports to their high-school English class.

  • Compared to non-science fields
    The other key difference between science meetings and humanities meetings is in the area of visual aids. Absolutely every talk at a scientific meeting will have some sort of visuals associated with it-- mostly PowerPoint these days, though overhead transparencies used to be the rule-- even if they're just pretty pictures put up on the screen while the speaker natters on about something else. Scientists expect pictures. If you're a humanist asked to give a presentation to scientists, bring some pictures. They don't even have to be all that relevant, but the audience will get very antsy if you don't put something on the screen or chalkboard.
    Good stuff to know. I like these kinds of stories as they help me understand the people I see at work, the grad students and faculty, and get a glimpse into their work lives.

Spinning A Web of Libraries + computing in physics courses

From What's New @ IEEE in Computing, October 2006, v7i10:

2. SPINNING A WEB OF LIBRARIES
The growth of digital, personal libraries is examined in the latest issue of the "IEEE Computing in Science and Engineering" magazine. Obstacles for these digital libraries still include working out how to pay royalties to publishers, the question of how to scan the books into an OCR program without ruining any physical aspects, which is especially important for older and rarer books, and cost issues, which could run into the millions. It is speculated that once these obstacles are overcome, e-books will likely be downloaded onto devices smaller than those that run portable video games. Because the text is searchable via metatags, online libraries will improve both research and sales drastically, making it easier for people to find what they are looking for. They may even pave the way for online programs that store all sorts of works, from paintings to songs to sculptures, that could be reproduced with startling accuracy into real life objects. Read more, including information on how the IEEE community is involved in these efforts:
Link here.
The issue is IEEE Computing in Science and Engineering, v8i5.

While we're at it, there are some other very interesting articles in that issue related to the use of computing in physics courses.

October 17, 2006

Robert Wright interviews Edward O. Wilson.

An hour-long interview with biologist Edward O. Wilson here. The topics he covers include Being good without God, Consciousness, Death, Emergence, Free will, Intelligent Design, Passion, Science and religion and The biology of religion.

Other interviews available via the Meaning of Life site: Daniel Dennett, Freeman Dyson, Francis Fukuyama, Owen Gingerich, Ursula Goodenough, Steven Pinker and others. Many of the speakers answer similar questions. via BookSlut.

October 16, 2006

Two on Computer Science

Are there going to be any computer scientists in the future? And if so, how are they going to communicate their research?


  • Universities see sharp drop in computer science majors. via Topix.
    Computer science majors make some of the country's highest starting salaries for college graduates, at nearly $50,000 a year. Computer science and computer engineering jobs are some of the fastest-growing occupations in the nation, according to the U.S. Department of Labor.

    Despite that, universities all across the country are watching enrollments drop in their computer science programs - at almost the exact time employers are saying they can't find enough qualified candidates.

  • What Happened to Departmental Tech Reports?
    Imagine back to the early 90's before we had a world-wide web. You had a new result, a nice result but not so important that you would do a mass email. You wanted to mark the result with a time-stamp and make it available to the public so you created a departmental technical report, basically handing a copy of the paper to a secretary. You would get a report number and every now and then a list of reports was sent out to other universities who could request a copy of any or all of the reports. Eventually the paper would go to some conference and journal but the technical report made the paper "official" right away.

    As the web developed CS departments started putting their tech reports online. But why should you have to go to individual department web sites to track down each report? So people developed aggregators like NCSTRL that collected pointers to the individual paper and let you search among all of them. CiteSeer went a step further, automatically searching and parsing technical reports and matching citations.

    But why have technical reports divided by departments? We each live in two communities—our university and our research field. It's the latter that cares about the content of our papers. So now we see tech report systems by research area, either area specific systems like ECCC or very broad report systems like arXiv that maintain specific lists in individual subareas that bypass the department completely.

    What's next? Maybe I won't submit a tech report at all letting search engines like Google Scholar or tagging systems like CiteULike organize the papers. Departmental tech reports still exist but don't play the role they once did and who can predict how we will publish our new results even five or ten years down the road.
    Interesting that he blows off the whole concept of institutional repositories in one sentence. Probably too harsh on Fortnow's part, but perhaps an insight into why it's hard to get profs to deposit into IRs.

October 14, 2006

Friday Fun, a day late

Some Saturday fun:

October 12, 2006

The Ongoing Struggle of Free vs. Fee

So, how to make "kids today" recognize that some information is worth paying for rather than being happy to find just anything "good enough" on the free web?

Over at Search Engine Watch, a two part special report from the ASIDIC Fall Meeting. Part two here.

Some of the questions the articles ask:

Does information really want to be free? If so, how can traditional information publishers and aggregators deal with shifting value propositions and revenue models of premium content and survive in the era of free web content?...How can traditional information industry companies survive in the world of free web content? How can they appeal to "digital natives" who question the value of paying for information?
Some very important questions. Let's look at some more from the articles. I'm going to excerpt a good chunk of each article here, but really the whole thing is very much worth reading as it has a lot of implications for the scholarly world.
Over the course of the two-day conference, an attendee mentioned that they thought the fear of change in the industry came from Stewart Brand's often-quoted statement "information wants to be free." Understandably, such a statement would be intimidating to a long established industry that has based its entire existence on the model of selling information.

This fear isn't paranoid: New business and revenue models based on new distribution methodology are arising almost daily. The value proposition has been shifted from information itself to the organization, credibility and trust of information. Information itself may want to be free—but an overabundance of free information is causing a shift in the value proposition associated with content. Content is king, but it seems that everyone is now a newly crowned monarch. It is no longer valuable to be a king—value now comes from organizing, and reviewing which content is most credible, has the least bias, and offers the most value to its specifically targeted user segment. These fundamentals will be critical to the new monetization of content.

*snip*

One of the other very heavily discussed topics of the conference was the idea of digital natives and digital emigrants presented by Matthew Hong of Thomson Gale publishing, expanding on ideas originally developed by visionary/futurist Mark Prensky.

According to Hong, digital immigrants are individuals who were not born into the digital world, but have emigrated to it. Digital natives, by contrast, were born into technology. These groups of people, including "Generation Y," "Millennials," and the "MyPod Generation" are individuals born between 1978 and 1998, and number approximately 76 million in the U.S.

The discussion of transitioning between the two age demographics of digital immigrants vs. natives seemed to be a key component of the strategies these large publishing companies will use, and will ultimately determine whether they survive or not.

The key takeaway here is that there is a rapidly growing disconnect between traditional information solutions, which tend to cater to digital immigrants, and user behavior of digital natives. While digital immigrants are willing to purchase traditional information services, the internet is clearly the primary research tool of digital natives. Over 71% of students reported using the internet as the major source of information for recent school projects, with 73% reporting using the Internet more frequently than the library.

*snip*

Content may be king, but accessibility to that content and finding new models for the monetization of information will be the only things that keep it from being free. Some content (how to bandage a wound) needs only to be "good enough", where other content (how to perform open heart surgery) must be very precise. Expertise, credibility, and organization is what separates "good enough" from premium content.

The prevalence of "good enough" information has shaken the premium content industry to its core, but also serves to increase the overall value of expert information and reduce the overall noise level. There is a fundamental need for traditional information providers to shift to more creative revenue models embodied by the new distribution channel of the web as it reaches mass adoption.


The articles also discuss the implications of the "long tail," federated & metasearching and other topics. Stimulating reading, stuff that's been floating around in all our heads for a while but this is a very good summary & discussion of the main issues.

In the scholarly world, for-fee information may be better than for-free information, but if nobody cares enough to cough up the cash, does it really matter? Or more precisely, if we build expensive collections of for-fee information that will be less and less used over time, are we allocating our limited resources properly?

Does fee-based information have a future? In the short and medium term future, sure, no question. In the long term, looking 10+ years into the future, I'm not so sure. The challenge to really add value to something comparable that is free will keep on getting harder and harder. The kids that are late-teens/early 20s right now will be the gatekeepers of scholarly information in 20-30 years -- will they continue to place the importance on scholarly, subscription-based, peer-reviewed journals and databases that their predecessors did? I doubt it. I think that they already are chomping at the bit, unable to understand why Wikipedia and Google aren't better than good enough. The explanations we give in our IL classes will only get more and more strained as time goes by. We can't make them care about the same things we do. How about the millions we pay in acquisition and licensing fees for our content? If usage steadily declines over the next decades, will we just lose that money or will we just reorganize our priorities? What does this mean for scholarly societies and publishers? Evolution.

October 11, 2006

Review of Voodoo Science by Robert Park

From the other blog:

This year, during my sabbatical, I'm really trying to read a lot of science non-fiction, as opposed to my usual diet of science fiction. And so far, it's been great.
Full review here.

So Where Are The Academic Librarian Bloggers

Steven Bell reminds us to add our academic librarian blogs to the Academic Blog Portal section for University Librarians. Unless there are some I don't recognize, I seem to be the only scitech one in the list so far.

Scalzi on the Big Post

Admit it, we're all at least somewhat obsessed by our hit and subscription statistics. I am, you are, we all are. Science fiction writer John Scalzi is obsessed too, and a little while back he had a great post on The Big Post. You know, getting Slashdotted or BoingBoinged, driving up your stats by several orders of magnitude for a day or two.

Scalzi starts with a couple of ideas about things we can do to increase our traffic:

1. Update frequency: Updating daily matters in terms of readership.
2. Enabling comments: People who comment feel attached to the site; people who don't comment get updated content when they click through.
3. Quality of content: Putting in interesting stuff so people have a reason to click into the site daily.
Which is nice but pretty basic; I've certainly made an effort to post a bit more regularly the last month or so and the numbers have gone up a bit in high-post weeks. Next he talks, at great length, about those things that we can't control that can boost our traffic, the Big Post.
A big post, very simply, is a post that more than the usual number of people link to, thus bringing in an entirely different audience of readers. Most of these readers will be one-time readers -- they click through to the link, see it, and click out, never to return -- but some small proportion will root around, enjoy what they see (due to you working on the factors you can control), and put you on their daily reading list. Bang, you've got new readers.

Big posts can happen when one or more of the following conditions exist:

1. You write or create something unusually well-written about a current news event or other hot topic.
2. You do something unusually stupid and/or funny on your site.
3. You are linked to by one or more high-traffic sites (Fark, Slashdot, Digg, Boing Boing, Instapundit, Daily Kos, etc).
He talks a lot about some of his own big posts, ranging from sincere posts about his own life that became popular to obvious gag posts (i.e. taping bacon to his cat) that must have been at least somewhat calculated to try and generate a big post.

This is a really interesting post on a generally excellent blog, with lots of good points on different types of Big Post, gaming the system and turning Big Post traffic into regular traffic. What are the lessons for our own scitech library blog community? Hard to know; we're a small tidal pool in an already small pond of library-related blogs. Scalzi's cat bacon post generated 67,000 hits in one day, several times more hits than the last few years of this blog put together. Personally, I think you just have to concentrate on your core mission and let the traffic stuff work itself out; doing lots of gimmicky stuff has to be a bad idea. If the Big Post comes, so be it. If not, presumably we're not in this for publicity anyway but to contribute to our profession, and if we do good work on our blogs, the readers will find us.

My own Big Post? The closest I've come was when I was mentioned in the Internet Resources Newsletter a few years ago, and that generated about 250-300 hits. The "My Job in 10 Years" series was fairly popular, causing small spikes (yes, I will finish it one day). The summer reading poll, 100+ hits, but I didn't really publicise it that much on sf-related sites that might have generated more visits.

Some suggestions related to my own experiences:

  • Put the most important information about what your blog is about in the blog title. This blog is about being a science librarian, and I get lots of hits on just that search term.
  • Have the word "confessions" in your blog title. I get a lot of hits from people just searching on the word "confessions." Not a great strategy long term, as most people who get here via that search are probably pretty disappointed. But hey, a hit is a hit.
  • Link to the Official Google Blog, as that's a really high traffic site and anything that shows up in the "links to this post" is bound to generate a few hits. I found this out with a couple of recent links to that blog. Let's see how this link to the post of their word processing/spreadsheet apps does...
  • Misspell an actor's name in a common way. A couple of years back I mentioned The Librarian: Quest for the Spear with Noah Wyle. And spelled his name "Wiley," you know, like the publisher. Believe it or not, that generated a few hundred hits from other people who can't spell. Of course, it's hard to know if spelling his name properly would have generated more or fewer hits.
  • That Academic Blogs Wiki is already generating a steady trickle, so getting in any kind of relevant directory or listing is a good idea.

Added extra: read about Chad Orzel's experience getting Slashdotted.

October 7, 2006

Grace Hopper Celebration of Women in Computing

Jane was at the Grace Hopper Conference and posts her impressions here:



Some other related links for the conference:

October 6, 2006

Friday Fun, Ig Nobel Edition

The Ig Nobel prizes (Wiki) were awarded last night and, as usual, there's lots of very funny stuff this year.

My two favourites (and really, how do you choose a favourite for one of these?):

PEACE: Howard Stapleton of Merthyr Tydfil, Wales, for inventing an electromechanical teenager repellant -- a device that makes annoying noise designed to be audible to teenagers but not to adults; and for later using that same technology to make telephone ringtones that are audible to teenagers but not to their teachers.
REFERENCE: http://www.compoundsecurity.co.uk

PHYSICS: Basile Audoly and Sebastien Neukirch of the Université Pierre et Marie Curie, in Paris, for their insights into why, when you bend dry spaghetti, it often breaks into more than two pieces.
REFERENCE: "Fragmentation of Rods by Cascading Cracks: Why Spaghetti Does Not Break in Half," Basile Audoly and Sebastien Neukirch, Physical Review Letters, vol. 95, no. 9, August 26, 2005, pp. 95505-1 to 95505-1.
REFERENCE: video and other details at <http://www.lmm.jussieu.fr/spaghetti/index.html>
I also wanted to choose the hiccups one, but I really don't want to get the keyword search hits that one would probably bring. The mind boggles.

October 5, 2006

Google & Source Code

On the GoogleBlog, there's a post on More developer love with Google Code Search. Google Code Search is a search engine for all the open source code they can find on the net. (FAQ)

This is a great idea that I'm sure will make a lot of developers' lives easier. Searches on "prime number", "differential equation" and "climate modelling" all give big hit counts. (Even "library catalog circulation") The sharing of knowledge for solving scientific problems is always a good thing.

A couple of potential problems, of course. First of all, the hit counts on these kinds of searches are huge, making it tough to separate the wheat from the chaff. A typical Google complaint is that they don't give a list of the major code repositories they crawl, so we really don't know what is (and isn't) in there. You can specify which package you want to search but that's going to be a lot less useful to neophytes without more information. I can see how useful it might be in the future to allow people to connect articles in Google Scholar with the relevant open source code via Code Search.

Plagiarism of programming assignments will be a big academic issue for this search engine, both making it easier for students to find programs to copy and for profs to detect that plagiarism. In that sense, I would be a bit leery of putting this resource in one of my CS pathfinders without first checking with the profs involved.

October 4, 2006

Academic Blog Portal

Via ACRLog, there's a new wiki out there that keeps track of academic blogs, just like the LISWiki keeps track of all library blogs.

The Academic Blog Portal has sections for academic librarians, library science scholars, scientists in all their diversity, information technology and engineering as well as a bunch of other sections. Still not that much in there yet, but certainly worth checking out and, for those of us who are academic bloggers of one sort or another, adding our own blogs to the list.

Note to ISI: Please, dear god, stop trying to predict the Nobels

First, the facts related to the science Nobels:

Physics

ISI Predictions


  • Guth/Linde/Steinhardt
  • Fert/Gruenberg
  • Desurvire/Nakazawa/Payne

Actual Winner: Mather & Smoot


Medicine or Physiology

ISI Predictions

  • Chambon/Evans/Jensen
  • Jefferys
  • Capecchi/Evans/Smithies

Actual Winner: Fire & Mello

Chemistry

ISI Predictions

  • Marks
  • Evans/Ley
  • Crabtree/Scheiber

Actual Winner: Kornberg

Update: Economics

ISI Predictions

  • Bhagwati/Dixit/Krugman
  • Hart/Holmstrom/Williamson
  • Jorgenson

Actual Winner: Phelps


So, they got every single one completely wrong (including the economics prize, not initially part of my post). Now, I don't think they do this badly every year with their predictions, but hopefully their completely, ridiculously, stupidly, ignorantly, shamefully awful performance this year will convince them that the science Nobels aren't awarded on the basis of citation analysis. The main point is that influence is not necessarily reflected in raw citation counts. Different fields and subfields can have varying scholarly communication & citation patterns that make it foolish to try and compare different scholars purely on the basis of citation counts. Citation counts do not equal impact. In the same way that librarians would not just use journal impact factors to make journal subscription decisions, I am sure that citation counts play next to no role in the Nobel Committees' decisions.
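
To see why raw counts mislead across fields, here's a toy sketch with completely invented numbers (it has nothing to do with ISI's actual methodology): field-normalizing the counts flips the ranking that the raw numbers suggest.

```python
# Toy illustration with invented numbers: raw citation counts can invert the
# picture once you account for how heavily different fields cite.
field_average_citations = {"cell biology": 40.0, "mathematics": 6.0}

scholars = [
    {"name": "Scholar A", "field": "cell biology", "citations_per_paper": 55.0},
    {"name": "Scholar B", "field": "mathematics", "citations_per_paper": 18.0},
]

for s in scholars:
    # Field-normalized score: performance relative to the scholar's own field.
    s["normalized"] = s["citations_per_paper"] / field_average_citations[s["field"]]

# By raw counts A looks stronger (55 vs 18); normalized, B comes out well
# ahead (3.0 vs ~1.4) -- one reason raw counts are a poor proxy for impact.
for s in sorted(scholars, key=lambda x: x["normalized"], reverse=True):
    print(s["name"], round(s["normalized"], 2))
```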

Please, please: bibliometrics is a good and useful pursuit that can tell us many things. But don't try and tell us that it's any good at measuring true research impact (or true research quality, any more than the literature prize should be decided by book sales); you only damage your own credibility.

October 3, 2006

Review of David Suzuki: An Autobiography

From the other blog:

This is a great book, moving and impassioned, and yet still very human. Suzuki is clearly not overly impressed with himself, not caught up with his own celebrity, and this makes his memoirs so engaging. There's lots of gentle humour here, often at his own expense. He also balances the story of his public life with the story of his private life. He gives enough insight into his personal life to give us a good feeling of who he is without so much that it feels intrusive or exploitative.
Full review here.

Remote labs

From ACM Computing Surveys, v38i3, Hands-on, simulated, and remote laboratories: A comparative literature review by Jing Ma and Jeffrey V. Nickerson:

Laboratory-based courses play a critical role in scientific education. Automation is changing the nature of these laboratories, and there is a long-running debate about the value of hands-on versus simulated laboratories. In addition, the introduction of remote laboratories adds a third category to the debate. Through a review of the literature related to these labs in education, the authors draw several conclusions about the state of current research. The debate over different technologies is confounded by the use of different educational objectives as criteria for judging the laboratories: Hands-on advocates emphasize design skills, while remote lab advocates focus on conceptual understanding. We observe that the boundaries among the three labs are blurred in the sense that most laboratories are mediated by computers, and that the psychology of presence may be as important as technology. We also discuss areas for future research.
I've posted about this topic before here. This is a very interesting area to me as it seems to be a way to truly bring so much of the possibilities of science education to people who, for whatever reason, can't make it to a real lab. Think of the possibilities.

October 2, 2006

Audio/Visual

I must admit to getting addicted to all the A/V stuff going on out there in the science-y web. Nothing better than kicking back and watching or listening to an interesting lecture on a subject that I'm into.

Some of the ones I've dipped into lately:


  • Via The Daily Transcript, Jonathan Miller's BBC series A Brief History of Disbelief and The Atheist Tapes. The first is a three-parter totalling three hours. The second includes 5 of the 6 shows that included interview sessions not used in the main series. The interviewees include Colin McGinn, Steven Weinberg, Denys Turner, Daniel Dennett and Richard Dawkins.
  • An audio of a talk by Bruce Schneier on privacy and security in the internet age. Basically, he says that we shouldn't have to trade privacy for security, that we should be able to have both. Great stuff. via BoingBoing.
  • Another audio, this time by Bruce Sterling. This is a fantastic talk by Sterling, about being an author in the modern age. How to get compensated for your work these days is a challenge that Sterling takes up with gusto -- and no pat answers either. via BoingBoing
  • Finally, a video of Bruce Sterling on The Spime Meme Map at Ubicomp 2006. via BoingBoing
  • UPDATE: Just this morning I watched a presentation by Al Gore from the TED (Technology, Entertainment, Design) conference in February 2006. Although the presentation was before An Inconvenient Truth came out, he actually follows up on the themes in the film and talks about a bunch of things ordinary people can do to help with the climate crisis. Good stuff. Al's actually quite funny at the beginning. Also, check the drop down on the page and try a couple of the other presentations from the TED conference, including Jimmy Wales, Daniel Dennett, Nicholas Negroponte, Richard Dawkins and others. via Scientific Indian.


As well, Curious Cat has a couple of recent posts pointing to other video lectures here and here.