The Texas Regional Group of Catalogers & Classifiers (TRGCC) has started a weblog. Nice layout. It uses LiveJournal, a tool I've not played with enough. Now I have a reason to explore it more.
Friday, March 17, 2006
Topic Maps for e-Learning (TM4L) is a new free tool for educators from Winston-Salem State University in North Carolina.
Towards Reusable and Shareable Courseware: Topic Maps-based Digital Libraries
Digital course libraries are educational Web applications that contain instructional materials to assist students' learning in a specific discipline. They play a vital role in out-of-class learning, especially in project-based and problem-based learning, as well as in lifelong learning. Digital course libraries are expected, on one side, to provide learners with powerful and intuitive search tools that allow them to efficiently access learning resources, and on another, to support instructors with powerful authoring tools for efficient creation and updating of instructional materials.... We address the problems of findability, reusability, and shareability of learning materials in digital course libraries by suggesting the use of Semantic Web technologies in creating them.... Further on, we propose that the implementation of such libraries is based on the ISO XTM standard - XML Topic Maps. Topic Maps (TM) are an emerging Semantic Web technology, that can be used as a means to organize and retrieve information in e-learning repositories in a more efficient and meaningful way.
OK, topic maps are a kind of metadata, so how would we catalog these things? Or reuse the metadata as OAI-PMH, or MODS, or whatever? Will RDA make this easier?
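For the curious, here is a rough idea of what the XTM standard the paper mentions looks like in practice. This is a minimal sketch using Python's standard xml.etree; the topic id and name are made up for illustration, and a real XTM file would also carry xlink attributes, associations, and occurrences.

```python
import xml.etree.ElementTree as ET

# XTM 1.0 namespace; real topic maps also use the xlink namespace.
XTM_NS = "http://www.topicmaps.org/xtm/1.0/"
ET.register_namespace("", XTM_NS)

def add_topic(topic_map, topic_id, name):
    """Append a <topic> with one <baseName> to an XTM <topicMap>."""
    topic = ET.SubElement(topic_map, f"{{{XTM_NS}}}topic", {"id": topic_id})
    base_name = ET.SubElement(topic, f"{{{XTM_NS}}}baseName")
    name_string = ET.SubElement(base_name, f"{{{XTM_NS}}}baseNameString")
    name_string.text = name
    return topic

topic_map = ET.Element(f"{{{XTM_NS}}}topicMap")
add_topic(topic_map, "metadata", "Metadata")  # hypothetical topic
xtm = ET.tostring(topic_map, encoding="unicode")
```

Since it is all XML, the cataloging question above is a fair one: the structure is machine-readable, but nothing about it maps obviously onto MARC or MODS.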
Thursday, March 16, 2006
Open WorldCat now supports ContextObjects in Spans (COinS).
On March 12, 2006 OCLC added COinS to its Open WorldCat web pages. COinS is an acronym that stands for ContextObjects in Spans, which represent a standardized way to embed citation metadata into a web page. COinS are actually included in the HTML code on the web page using OpenURLs. This allows other processors (such as your web browser) to find the citation metadata and generate links to other resources that are accessible via OpenURLs.
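The mechanics are simple enough to sketch. A COinS is an empty span with class Z3988 whose title attribute carries a URL-encoded OpenURL ContextObject in key/encoded-value form. Here is a minimal sketch in Python; the book title and author values are illustrative placeholders, not a real WorldCat record.

```python
from html import escape
from urllib.parse import urlencode

def coins_span(**kev):
    """Build an empty COinS span: the citation travels entirely in the
    title attribute as a key/encoded-value OpenURL ContextObject."""
    fields = {
        "ctx_ver": "Z39.88-2004",                    # ContextObject version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",  # book metadata format
    }
    fields.update(kev)
    return '<span class="Z3988" title="%s"></span>' % escape(urlencode(fields))

# Hypothetical book citation for illustration.
span = coins_span(**{"rft.btitle": "Example Book", "rft.aulast": "Author"})
```

Because the span itself is empty, the page looks unchanged to ordinary readers; only COinS-aware tools notice the metadata and offer OpenURL links.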
at 2:38 PM
Wednesday, March 15, 2006
Search Engine Coverage of the OAI-PMH Corpus by Frank McCown, Xiaoming Liu, Michael L. Nelson, and Mohammad Zubair has been submitted to IEEE Internet Computing.
The major search engines are competing to index as much of the Web as possible. Having indexed much of the surface Web, search engines are now using a variety of approaches to index the deep Web. At the same time, institutional repositories and digital libraries are adopting the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to expose their holdings, some of which are indexed by search engines and some of which are not. To determine how much of the current OAI-PMH corpus search engines index, we harvested nearly 10M records from 776 OAI-PMH repositories. From these records we extracted 3.3M unique resource identifiers and then conducted searches on samples from this collection. Of this OAI-PMH corpus, Yahoo indexed 65%, followed by Google (44%) and MSN (7%). Twenty-one percent of the resources were not indexed by any of the three search engines.
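For anyone who has not harvested OAI-PMH themselves, the protocol side is just HTTP GET requests and XML responses. The sketch below, assuming a hypothetical repository base URL, covers the core of what a harvester does: building the ListRecords request and pulling identifiers out of a response. A real harvester would also follow resumptionToken pages and handle error responses.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix="oai_dc", token=None):
    """Build a ListRecords request URL; a resumptionToken, when present,
    replaces all other arguments on follow-up requests."""
    if token:
        args = {"verb": "ListRecords", "resumptionToken": token}
    else:
        args = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return base_url + "?" + urlencode(args)

def record_identifiers(response_xml):
    """Extract the <identifier> from each record header in a response."""
    root = ET.fromstring(response_xml)
    return [el.text for el in root.iter(OAI_NS + "identifier")]

# A stripped-down ListRecords response for illustration.
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><header><identifier>oai:example.org:1</identifier></header></record>
  </ListRecords>
</OAI-PMH>"""

url = list_records_url("http://repository.example.org/oai")  # hypothetical base URL
ids = record_identifiers(sample)
```

The identifiers pulled out this way are what the authors sampled and searched against Yahoo, Google, and MSN.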
A preprint of Thomas B. Hickey, Jenny Toves and Edward T. O'Neill's paper NACO Normalization: A Detailed Examination of the Authority File Comparison Rules is now available. It is soon to be published in Library Resources & Technical Services (LRTS).
Normalization rules are essential for interoperability between bibliographic systems. In the process of working with NACO Authority Files to match records with IFLA's Functional Requirements for Bibliographic Records (FRBR) and developing the Faceted Application of Subject Terminology (FAST) subject heading schema, the authors found inconsistencies in independently created NACO normalization implementations. Investigating these, the authors found ambiguities in the NACO standard that need resolution, and came to conclusions on how the procedure could be simplified with little impact on matching headings. To encourage others to test their software for compliance with the current rules, the authors have established a Web site that has test files and interactive services showing our current implementation.
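To give a feel for what normalization means here: before two headings are compared, each is reduced to a canonical form so that differences in diacritics, case, and punctuation don't block a match. The sketch below is a much-simplified illustration in that spirit, not the actual NACO rules, which (as the paper shows) treat individual characters and subfields far more specifically.

```python
import re
import unicodedata

def normalize_heading(heading):
    """Simplified heading normalization: strip diacritics, uppercase,
    reduce punctuation to spaces (keeping commas), collapse whitespace.
    The real NACO rules are considerably more detailed than this."""
    decomposed = unicodedata.normalize("NFD", heading)
    no_marks = "".join(c for c in decomposed if not unicodedata.combining(c))
    cleaned = re.sub(r"[^\w,]+", " ", no_marks.upper())
    return re.sub(r"\s+", " ", cleaned).strip()

# Two headings match when their normalized forms are identical.
same = normalize_heading("Dvořák, Antonín") == normalize_heading("DVORAK, ANTONIN")
```

The ambiguities the authors found live exactly in the details this sketch glosses over: which punctuation marks become blanks, which are deleted, and how subfield delimiters are handled.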
at 3:27 PM
Name Authority Challenges for Indexing and Abstracting Databases by Denise Beaubien Bennett and Priscilla Williams appears in the first issue of Evidence Based Library and Information Practice.
The traditional objective of name authority files is to determine precisely when name variations belong to the same individual. Manually-maintained authority files have served library catalogues reasonably well, but the burden of upkeep has made them ill-suited to managing the volume of items and authors in all but the smallest I&A databases. To meet the access needs of the 21st Century, both catalogues and I&A databases may need to implement options that present a high degree of probability that items have been authored by the same individual, rather than options that provide high precision with the expense of manual maintenance. Striving for name disambiguation rather than name authority control may become an attractive option for catalogues, I&A databases, and digital library collections.
at 2:56 PM
Tuesday, March 14, 2006
Creative Commons has announced ccPublisher 2 "which we believe to be generally usable."
ccPublisher is Creative Commons's tool for generating license information for a file and optionally uploading it to the Internet Archive for free hosting. ccPublisher also serves as a platform for development of other CC enabled tools.
at 10:15 AM
The updated Bulgarian translation of the IFLA Statement of International Cataloguing Principles (based on the Sept. 2005 draft) has been posted on the IFLA website.
at 9:48 AM
Lots of news from the Dublin Core Metadata Initiative. Check out their news page for all the latest. The item I find most useful is that an agreement has been signed on hosting the DCMI Conference Paper Repository.
An agreement was signed in March 2006 concerning the hosting of the DCMI Conference Paper Repository. The repository is managed by Joe Tennis of the University of British Columbia, Canada, and hosted at Simon Fraser University Library in Canada. The collection includes all papers and posters of the DCMI conferences in Florence (2002), Seattle (2003), Shanghai (2004) and Madrid (2005). The Repository is built on open source software (Linux and Apache Tomcat) and displayed through Siderean Software's Seamark faceted navigation. Access is available through the link to 'Conference Papers' in the left-hand navigation bar on the DCMI Home page.
It includes both search and faceted access. Nice.
Monday, March 13, 2006
Tuesday I'll be presenting my poster at LPSC. I'm hoping some of the scientists go back to their libraries to clean up their names and get them submitted to the LC Name Authority File. This is a larger problem for women who marry, change their name, divorce, change their name again, etc. It would also be nice for some of the guys to get headings that distinguish them from other writers with the same name.
An Extensible Approach to Interoperability Testing: The Use of Special Diagnostic Records in the Context of Z39.50 and Online Library Catalogs by William E. Moen, Sebastian Hammer, Mike Taylor, Jason Thomale and Jung Won Yoon appears in the Proceedings of the 68th Annual Meeting of the American Society for Information Science and Technology (ASIST), volume 42.
Assessing interoperability in the networked information services and applications environment presents difficult challenges due in part to the multi-level and multi-faceted aspects of interoperability. Recent research to establish an interoperability testbed in the context of Z39.50 protocol clients and servers and online catalog applications identified threats to interoperability and defined a question space for interoperability testing. This paper reports on follow-up research to develop an alternative approach for interoperability testing in the context of networked information retrieval that uses specially designed diagnostic records. These records, referred to as radioactive records, enable interoperability assessment at the protocol and semantic levels. This approach appears to offer an extensible method for interoperability testing for other metadata and protocol application environments. The resulting interoperability testbed incorporates additional components to exploit automatic processes for interoperability testing and assessment, thus improving the efficiency of interoperability testing.
at 3:12 PM