Search Results

  • Institutional Repository Keyword Analysis with Web Crawler (pp. 54-59)

    This study investigates procedures for the semantic and linguistic extraction of keywords from the metadata of documents indexed in the Unesp Institutional Repository. To that end, a web crawler was developed that collected 325,181 author-assigned keywords, across all fields of knowledge, covering documents from February 28, 2013 to November 10, 2021. The collection, extraction, and analysis environment was built in the Python programming language using three libraries: requests, which retrieves the hyperlinked webpages visited by the crawler; BeautifulSoup, used to extract data from the HTML of each page; and pandas, an open-source (free software) library that provides high-performance data manipulation and analysis tools. The final list consisted of 273,485 keywords, a 15.9% reduction from the list initially collected. Results indicated that the most recurrent problem was keyword duplication, with 51,696 duplicated keywords, an indicator of inconsistencies in document retrieval. It is concluded that refining author-assigned keywords eliminates stray symbols that do not represent the authors' keywords, as well as keywords with the same spelling but upper/lower-case or lexical variations that index different documents.
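
    As a rough illustration of the pipeline the abstract describes, the sketch below combines the three libraries in the way stated: requests fetches each item page, BeautifulSoup parses its HTML, and pandas flags keywords that differ only in upper/lower case. The example URL and the DC.subject meta-tag markup are assumptions made for the sketch, not details taken from the study.

    ```python
    import requests
    from bs4 import BeautifulSoup
    import pandas as pd


    def collect_keywords(item_urls):
        """Visit each repository item page and extract author-assigned keywords."""
        keywords = []
        for url in item_urls:
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            soup = BeautifulSoup(response.text, "html.parser")
            # Assumption: item pages expose author keywords in Dublin Core
            # <meta name="DC.subject" content="..."> tags; the real Unesp
            # repository markup may differ.
            for tag in soup.find_all("meta", attrs={"name": "DC.subject"}):
                keyword = tag.get("content", "").strip()
                if keyword:
                    keywords.append(keyword)
        return keywords


    def refine_keywords(keywords):
        """Drop duplicates that differ only by upper/lower case."""
        df = pd.DataFrame({"keyword": keywords})
        df["normalized"] = df["keyword"].str.lower()
        n_duplicates = int(df.duplicated(subset="normalized").sum())
        refined = df.drop_duplicates(subset="normalized")
        return refined["keyword"].tolist(), n_duplicates


    if __name__ == "__main__":
        # Hypothetical item URL; a real crawl would first walk the repository's
        # browse listings (or an OAI-PMH endpoint) to discover item pages.
        urls = ["https://repositorio.unesp.br/items/example-handle"]
        raw = collect_keywords(urls)
        refined, n_duplicates = refine_keywords(raw)
        print(f"collected {len(raw)} keywords, kept {len(refined)}, "
              f"removed {n_duplicates} case-variant duplicates")
    ```

    Lower-casing is only one of the normalizations implied by the abstract; handling lexical variations (e.g. singular/plural or accented forms) would require additional rules beyond this sketch.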