(It’s old, but I just stumbled into it again…) Karen Calhoun’s report, The Changing Nature of the Catalog and its Integration with Other Discovery Tools, included a lot of things I agree with, but it also touched on something I’m a bit skeptical about: automated metadata production.
Some interviewees noted that today’s catalogs are put together mainly by humans and that this approach doesn’t scale. Several urged building or expanding the scope of catalogs by using automated methods.
And she highlighted this quote in particular:
If you put the money you’re spending on LCSH in automatic classification, you might get something more competitive in the Google world and get better subject access too.
Now, I’m not saying that we shouldn’t look carefully at LCSH and our cataloging norms, but the notion of entirely giving up on them is a bit dramatic for me.
For the moment, our rich metadata — primarily the LCSH — is one of the best (and least tapped) assets in our catalogs. If the goal is competing with Google or getting better subject access, then what we should start with is building OPACs that leverage this data first, then figure out how our cataloging practice should evolve to serve that new need.
Our systems aren’t hard to use because our cataloging is bad, they’re hard to use because we’ve not invested in their development and ease of use.
Is the goal really to compete with Google? I’m more interested in how we can leverage those search engines to improve service to our users.
[tags]automation, computer generated metadata, metadata, libraries, lib20, library 2.0, OPAC, library catalogs, ease of use[/tags]