Evaluation of Indexation Consistency in Publisher Subject Metadata

This work evaluates the consistency of subject-metadata indexing on the platform. First, the tools the platform offers for assigning keywords were analyzed; then, publishers on the platform were documented and an intrinsic evaluation of inter-indexer consistency was run, comparing the indexing of a single work issued by four publishers: Ciranda Cultural, IBEP, Excelsior Editora, and Via Leitura. The chosen book was “The Alienist” by Machado de Assis, a classic of Brazilian literature. The analysis showed that the publishers assigned keywords such as the title, the author’s name, characters’ names, titles of other books, and overused, repeated terms, the last category further varying in accentuation and in singular or plural form. Terms related to university entrance exams were also assigned. It can therefore be concluded that the absence of vocabulary control hinders retrieval of a work, both because the assigned terms inadequately define the book’s subject and because the terms lack semantic, syntactic, and morphological standardization.
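The morphological variants described above (with or without accentuation, singular or plural) can be detected mechanically. The sketch below, a minimal illustration with hypothetical keyword variants rather than the study's actual data, groups terms under a normalized key that folds case, strips combining accents, and trims a naive Portuguese plural "s":

```python
import unicodedata

def normalize(term: str) -> str:
    """Collapse case, accentuation, and a naive trailing-'s' plural
    so variant spellings of one term map to a single key."""
    stripped = "".join(
        c for c in unicodedata.normalize("NFD", term.casefold())
        if unicodedata.category(c) != "Mn"  # drop combining accent marks
    )
    return stripped[:-1] if stripped.endswith("s") else stripped

# Hypothetical variants of the kind the study reports.
keywords = ["Alienista", "alienistas", "alienísta", "Machado de Assis"]

groups: dict[str, list[str]] = {}
for kw in keywords:
    groups.setdefault(normalize(kw), []).append(kw)

for key, variants in groups.items():
    if len(variants) > 1:
        print(key, "->", variants)  # prints the inconsistent cluster
```

With a controlled vocabulary, all three "alienista" variants would have been a single authorized term; the grouping merely surfaces the inconsistency.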

Institutional Repository Keyword Analysis with Web Crawler

This study investigates procedures for the semantic and linguistic extraction of keywords from the metadata of documents indexed in the Unesp Institutional Repository. For that purpose, a web crawler was developed that collected 325,181 author-assigned keywords, across all fields of knowledge, from documents deposited between February 28, 2013 and November 10, 2021. The collection, extraction, and analysis environment was built in Python with three libraries: requests, which fetches the pages the crawler visits through their hyperlinks; BeautifulSoup, which extracts data from the HTML of those pages; and pandas, an open-source (free software) library that provides tools for high-performance data manipulation and analysis. The final listing consisted of 273,485 keywords, a reduction of 15.9% from the listing initially collected. Results indicated that the most recurrent problem was keyword duplication: 51,696 duplicated keywords, an indicator of inconsistencies in document retrieval. It is concluded that refining author-assigned keywords eliminates spurious symbols that are not part of the keywords themselves, as well as terms with the same spelling that differ only in upper/lower case or lexical variation yet index different documents.
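The refinement step can be pictured with pandas alone. The fragment below is a minimal sketch, not the study's pipeline, using a hypothetical six-term sample in place of the 325,181 collected keywords; it normalizes only whitespace and case, then counts and drops the duplicates that normalization exposes:

```python
import pandas as pd

# Hypothetical sample showing the case and spacing variations the
# study reports among author-assigned keywords.
raw = pd.DataFrame({"keyword": [
    "Educação", "educação", " Educação ",
    "Metadados", "metadados", "Web Crawler",
]})

# Normalize presentation only: trim whitespace and fold case.
raw["norm"] = raw["keyword"].str.strip().str.casefold()

duplicated = int(raw["norm"].duplicated().sum())   # rows beyond the first
refined = raw.drop_duplicates("norm")["keyword"]   # one spelling per term

print(f"{duplicated} duplicates removed, {len(refined)} keywords kept")
# → 3 duplicates removed, 3 keywords kept
```

Keeping the first spelling seen per normalized form, as `drop_duplicates` does by default, mirrors the idea that variants such as "Educação" and "educação" should index documents under one term rather than several.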