This is big right now... Thomas Mann wrote a paper responding to the Working Group's paper on RDA, which they presented back in November. There's another response to his work from the autocat listserv here.
I’m just going to take the things I highlighted from his work and talk about them. Not very systematic, but hopefully it acts as a good guide for me personally when I go back through the work. This is a long post, although I think it's worth it for anyone who hasn't read the thing yet.
Please keep in mind that I just love anecdotal evidence, unlike many people who do not believe a thing unless it has a graph. I think that a reference librarian with 30+ years of experience has the right to make observations about users, without doing a study on it first. But I'm very "unscientific" that way. So here goes!
Pg. 11: “the goal of cataloging is not merely to provide researchers with ‘something quickly’…its purpose, first and foremost, is to show ‘what the library has’—i.e. in its own local collections, onsite.”
Amen, Thomas. I don’t understand when or how the idea came about that libraries are responsible not just for their own holdings, but for the entire scope of human knowledge everywhere. If that were the case, we wouldn’t keep physical books at all; we’d be... um, OCLC? Google? A wish list?
Pg.16: “a major weakness of word clouds is that they cannot show cross-references, scope notes, or further subdivisions of their own terms…we must remain clear about the differences between catalog search environments and Web search environments.”
I think that this is important…we get so excited, as librarians, to see word clouds that we forget that we are not the user. The user will see a word cloud and think “oh, more words like what I just typed.” A librarian sees a word cloud and thinks “oh, they took the subdivisions from LCSH that relate to my broad term and made those into a word cloud.” Users don’t see relationships in the way that we do.
Pg. 17: paraphrasing here: the Working Group is not being academically rigorous in its research. They are not using the scholarship that already exists, and are reinventing the wheel when it comes to thinking about subject access. We should probably all read the reports that Mann has put out there in these pages.
Also, he makes the point that OCLC has been funding a lot of the research that results in the support of facetization, “whose own WorldCat cannot display either cross-references or browse-menus of precoordinated terms. Why…should the rest of us naively accept OCLC’s oversimplified software to begin with?” Why indeed.
Pg.18: “the first responsibility of LC is to catalog its own—and the nation’s—unique copyright-deposit collection.” This is like page 11, but it bears repeating.
Pg. 20: “contrary to the widely touted mantra, facetization does not ‘make the data work harder’; it makes the user work harder…it is a stunning violation of the Principle of Least Effort in information-seeking behavior. ‘Least effort’ is supposed to refer to the level of work done by the user, not the catalogers.”
Wow. Just…wow. He’s right, he really is. Yes, he ignores the basic funding issues that all libraries have (although he does talk more about the cost of cataloging in other places, and makes good arguments against downsizing at LC), but he’s still right. Our job is not to make ourselves as lazy as possible about cataloging and foist all the effort onto the user. That’s actually the opposite of what we’re supposed to do.
Pg. 21: “Anyone who has ever done a Google search knows that Google’s search mechanism exacerbates rather than solves…problems of information overload that are now created and aggravated by computer and web-environment retrievals.”
His argument is that LCSH avoids those problems. I agree, at least a little. LCSH is certainly better than Google, with the caveat that you have to learn LCSH to use it, while with Google you can get away with never learning about it at all.
Pg. 24: Accuse me of soundbites; I don’t care. This is gold: “it is undeniably true that the LCSH system is complex—but so is the literature of the entire world, on all subjects and in all languages and from all time periods, that it has to categorize, standardize, and inter-relate….the complexity of the world’s book literature is a rock-bottom reality that will not vanish simply because neither the Working Group nor LC management wishes to pay for professional catalogers.”
Pg. 34: He moves on to talk about reference work and the user, to great effect, I think: “Most researchers, when left to their own devices, are quite unsophisticated in doing computer searches…what [the user] prefers [keyword searching]…is based on a serious misunderstanding of what their “preferred” search technique is actually capable of delivering.” I think this is another case of librarians not being users, and of some librarians not understanding that. The average user does not understand that a keyword search does not bring up everything. They don’t even understand the difference between a browse search and a keyword search. You may think I’m kidding, but I’ve talked to enough college students to know that.
And the last sentence: “If the Library of Congress succeeds in dumbing down its own subject cataloging operations through this reorganization, there will be serious negative consequences for all American scholars who wish to pursue their topics comprehensively and at in-depth research levels, and for libraries in every Congressional District whose financial constraints make them more dependent than ever on the continued supply of quality subject cataloging from the Library of Congress.”