The Impacts of Bibliometrics Measurement in the Scientific Community: A Statistical Analysis of Multiple Case Studies

In recent years, the use of statistical methods such as bibliometrics to analyse books, articles, and other publications has steadily intensified. Bibliometric methods, as techniques to measure models of information distribution, are frequently used in information science and social research. The main purpose of this article is to offer scholars a general framework for comparing the positive and negative aspects of bibliometrics, with attention to the methods and tools used. Both the strengths and the critical points will therefore be highlighted, to obtain a complete and detailed overview of the entire argument. In the methodological part, a bibliometric analysis will be applied to various case studies, such as the Generalized Error Distribution, analysing and commenting on the data using the Bibliometrix software. The results suggest that bibliometrics will consolidate further in the future, as the introduction of increasingly advanced technologies will create new tools and methods characterized by a high degree of automation and speed.


Introduction
Measurement, understood as the quantification of attributes of an object or event, is normally used to compare and evaluate phenomena. The scope and application of measurement depend on the context and discipline. In statistics, as well as in the social and behavioural sciences, measurements can have multiple levels, including nominal, ordinal, interval, and ratio scales. Furthermore, measurement and metrics are fundamental activities for the managerial sciences (such as marketing and retailing) and in the creation and evaluation of value creation processes (Basile, 2019). The development of modern information technologies and the significant changes affecting the politics and organization of modern scientific research have led to a more decisive development of a quantitative approach to evaluative analysis. This approach is offered by Bibliometry, which in some way represents a guarantee of objectivity of the results, making the discipline itself an effective and ever-expanding tool both in scientific evaluation and in library documentation policy. The definition of bibliometrics naturally approaches other methodologies such as Scientometry and Infometry which, while not differing in methodology, are distinguished by different purposes and areas of research. Therefore, Bibliometry is also evaluated in relation to the other methodologies, making it a discipline as fascinating as it is complex. Although the coining of the term Bibliometry dates to 1969, by Alan Pritchard, its use and practice can be traced back to 1890. Bibliometry is a discipline that uses mathematical and statistical techniques to study the distribution models of scientific productions and to verify their impact and effect on the entire scientific community. This article, after having dealt with all aspects of the discipline in detail, highlights its lights and shadows.
Conceptually, this article follows a specific scheme of different parts, which refer to two different objectives. In the first part, we clarify the definition of Bibliometry and its objectives, the historical evolution of the discipline, bibliometric indexes and methods, the Impact Factor and the H-index, and we define the theoretical approach. In particular, the definition of the Bibliometry discipline and its objectives will be examined in depth, also in relation to other methodologies. Our attention will focus on the historical and social evolution of the discipline, starting from its birth in the first decades of the twentieth century and reaching the role it covers today. Although the term was coined only in 1969, in this section reference will be made to clear examples testifying that the first raw forms of this discipline existed long before. Moreover, we will deepen the discussion of bibliometric methods and the differences between traditional and modern methods, defining the elements that have characterized this change. In addition, space will be given to the most important bibliometric methods, namely citation analysis and content analysis. Continuing, we find the bibliometric indices, analysing first what they are and what they are for, and highlighting the most important bibliometric indicators in terms of quality, namely the Impact Factor and the H-index. Furthermore, we define the application approach with a brief introduction to the work carried out, indicating the tools used for the research, and an in-depth analysis of the Bibliometrix R package, which allowed us to analyse the data obtained. Subsequently, different case studies will be analysed: on the Lp-norm, on the Ged-Garch models, on the Skewed Ged distribution, and finally on the inferential aspects of the Generalized Error Distribution.
We decided to select a collection of data obtained from Scopus and to analyse it using the Bibliometrix R package for each case study. The results obtained, in the form of figures, will be analysed and commented on, providing the reader with a complete interpretive key to the opportunities and possibilities offered by these tools. Furthermore, as evidence of the vast breadth of possibilities, we have decided to use and comment on different graphs, to obtain different and more significant information for each case study. Finally, the last section of this article will be dedicated to our considerations on the role that Bibliometry can acquire in the future, together with an overall reflection on the various bibliometric analyses.

Definition of Bibliometry and Its Objectives
Bibliometry can be defined as the science that makes it possible, using mathematical and statistical techniques, to investigate scientific production from a purely statistical point of view, analysing a very vast set of data, which can be modelled by referring to a given interval of time or to a specific sector of interest to our research. Considering the wide range of applications of Bibliometry, it must also be analysed in relation to other methodologies, such as Scientometry and Infometry. The definition of Bibliometry naturally approaches the methodologies just mentioned, which, while not differing in methodology, are distinguished by different purposes and areas of research. In particular, the main purpose of Bibliometry is to derive quantitative relationships between documents and between the elements that compose them, regardless of the area. Scientometry, whose founding father is the English information specialist Derek John De Solla Price, instead has as its main purpose the evaluation and measurement of the contribution of scientists, institutions, and nations to the advancement of knowledge; to do this it can resort to a qualitative approach (through, for example, peer review, panels, or the degree of internationalization) and a quantitative approach (through the counting of publications and citations). In this second aspect, Bibliometry and Scientometry become, in other words, two indistinguishable entities (De Bellis, 2005). Finally, Infometry, which derives from the term "informetrics" and was first proposed in 1979 by Nacke to cover the part of information science that deals with the application of mathematical methods to the discipline, studies information in any of its forms and areas. It is considered "the set of sets of all other metrics since they all count some kind of information" (De Bellis, 2005).
What distinguishes these methodologies, therefore, is the object of the analysis: if the field of investigation concerns problems relating to documents, we are in the field of Bibliometry; if the investigation concerns information, we are in the field of Infometry; if, finally, the investigation concerns science, we are in the field of Scientometry. Bibliometry helps to describe the history and general state of the art of a specific field or research topic, considering written production as the main channel of formal communication between scientists (Bellardo, 1980). Over time, different definitions have been developed that have made it possible to move from a raw and primitive concept to a more refined and modelled one, which adapts more easily to the technological, political, scientific, statistical, and mathematical developments that have characterized the different phases of our society. Regarding the objectives, among the primary ones we can commonly delineate the search for, and retrieval of, useful information obtained only after analysing entire collections and bibliographic services, and the extraction of quantitative relationships between documents and the elements that compose them (for example words, citations, authors, institutions, etc.). The use of a bibliometric approach makes it possible to provide more objective and reliable analyses based on statistical techniques (Pritchard, 1969; Broadus, 1987; Diodato and Gellatly, 2013), with the ability to carry out both basic and advanced analyses of large volumes of documentation. The key procedures that allow this can be traced back to performance analysis (Peters and Van Raan, 1991; White and McCain, 1998) and scientific mapping (Börner et al., 2003; Noyons et al., 1999).
The purposes of these procedures differ: on the one hand, the aim is to evaluate on the basis of bibliographic data, for example by measuring the influence of certain actors on a specific area; on the other hand, they seek to analyse cognitive models through a synchronic (Callon et al., 1983; Noyons and Van Raan, 1998) or diachronic (Cobo et al., 2011; Garfield, 1994) vision.

Historical Evolution of the Discipline
The term Bibliometry was coined by Alan Pritchard in 1969, who proposed to replace the little-used and somewhat ambiguous term "statistical bibliography" with Bibliometry (bibliometrics), to indicate the application of mathematical and statistical methods to books and other forms of written communication. The phases that have characterized the historical evolution of Bibliometry are four: a first phase characterized by the very first forms of statistical bibliography, which is the basis of current Bibliometry; a second phase characterized by the increasingly accentuated development of this statistical approach, witnessed by the processing and publication of what the entire scientific community regards as the fundamental laws of Bibliometry; a third phase that refers to the post-war period and the enormous contribution of Garfield; and finally a last phase that coincides with the last decade, in which new indexes and databases were developed. Historically, Bibliometry originated in the West and derived from statistical studies of bibliographies (Egghe and Rousseau, 1990). Although the term Bibliometry was coined in 1969, its use and application date back to the 1890s. Campbell's work (1896), which used statistical methods to study the scattering of subjects in publications, is probably the first concrete attempt at bibliometric studies (Sengupta, 1992). At this point of the historical analysis, it is natural to ask whether there were primitive forms of Bibliometry before the twentieth century; the answer is yes: it is enough to think, for example, of the complex phases of counting, conservation, and archiving that characterized the most ancient times, starting from the Romans up to the Royal Library of Paris, which can be regarded as the earliest forms of this science.
Only at the end of the nineteenth century did we begin to witness a process that would lead to the application of bibliometric techniques to books and other products of written communication, to obtain useful information for acquisition policy in the library (De Bellis, 2005). In some way this represented the first phase, that of statistical bibliography, which over time would change into the definitive Bibliometry. Several studies carried out in the various fields of knowledge made an evolution of this magnitude possible: think for example of Frank Campbell, who in an 1896 publication used statistical calculation to summarize the thematic breakdown of the publications belonging to the most important bibliographies of the time, or of the work of F. J. Cole and Nellie B. Eales, who applied statistical analysis to the literature of their sector, broken down by country and referred to a time interval, to measure the degree of growth of the reference discipline. A real turning point in the discipline, however, is represented by three works that appeared between 1926 and 1935, which contributed to the formulation and publication of the empirical laws on the behaviour of literature that are the basis of Bibliometry, and which in some way started the second phase. Among the main milestones of its development (Tague-Sutcliffe, 1992) are Lotka's method of measuring the productivity of scientists (1926), Bradford's law of the dispersion of scientific knowledge (1934), and the model of the distribution and frequency of words in a text given by Zipf's law (Ferrer i Cancho and Solé, 2001). To understand the concept behind these three fundamental laws we use Table 1, which summarizes the key ideas and relationships of Lotka, Bradford, and Zipf (Chen and Leimkuhler, 1986).
Table 1. Bibliometric bases, findings/main contributions, and main authors and references

Lotka's law (Lotka, 1926): a statistical observation offered by Alfred Lotka, according to which a relatively small number of scientists is responsible for a large part of the contributions produced by the entire scientific community.
Bradford's law (Bradford, 1934): if journals are ranked by the number of articles they publish on a given subject, they can be divided into a small core and successive zones, each yielding the same number of articles, with the number of journals in each zone growing geometrically.
Zipf's law: if you take a text, which can be more or less long, and draw up a ranking of the words based on decreasing frequency of appearance, it is possible to obtain an approximately constant value by multiplying the position number of a word in the ranking by the number of times it appears in the text.

The third historical phase of the discipline opens with the end of the Second World War, in the new structure, not only political but also socio-economic, of the organization of knowledge, especially of a scientific nature. In the immediate post-war period, however, the evaluation of research was still conducted without the use of bibliometric indicators, using instead either the tested system of peer review or macro-level economic indicators. Basically, in the post-war period the idea was recovered that scientific activity can be kept under control and planned towards specific objectives, especially once it was shown that science could be a lever for economic growth. At this point the figure of Eugene Garfield becomes very important in the history of Bibliometry: he began working at the Johns Hopkins Welch Medical Library Project, where he realized the inadequacy of the existing tools for retrieving useful information. Garfield first presented his project for an interdisciplinary citation index in 1955, in the journal Science. Thanks to Lederberg, Garfield's project found full realization only in 1963, with the publication of the Science Citation Index (SCI). Citations were the fundamental element for Garfield, as they created a network of connections between documents, which became what he had called the association-of-ideas index, capable of linking together documents that can hardly be placed under the same heading. Garfield also, albeit unwittingly, managed to develop citations into an evaluation tool, and this evolution was received not only in the United States but in all other countries around the world.

Finally, we move on to the fourth and last phase of the evolution process, the one that coincides with the last decade. In this period there have been important innovations, such as the creation of new citation indices, including open-access ones, the development of increasingly sophisticated bibliometric indicators, and the application of the methodology in increasingly vast areas such as the World Wide Web. In addition, we must remember that over time various databases have been born, such as Scopus, Web of Science, and Google Scholar, which allow us to analyse bibliographic collections in a short time and in a completely automatic way, obtaining a series of charts to use for our analyses. In other words, in the last decade there has been a change that has made bibliometric analysis an easy-to-use tool characterized by considerable speed of results.
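As a toy numerical illustration of how two of these laws work, the following Python sketch applies Lotka's inverse-square rule and checks Zipf's rank-frequency product. The figures are invented for illustration, not drawn from any real bibliography:

```python
def lotka_authors(single_paper_authors: int, n: int) -> float:
    """Lotka's law: the number of authors with n papers is roughly the
    number of authors with a single paper divided by n squared."""
    return single_paper_authors / n ** 2

def zipf_products(ranked_frequencies):
    """Zipf's law: rank * frequency should be roughly constant when words
    are ranked by decreasing frequency of appearance."""
    return [rank * freq for rank, freq in enumerate(ranked_frequencies, start=1)]

# If 100 authors wrote exactly one paper each, Lotka's law predicts
# about 25 authors with two papers and about 11 with three.
print(lotka_authors(100, 2))  # 25.0
print(lotka_authors(100, 3))  # about 11.1

# Invented word frequencies that follow Zipf exactly (120 / rank):
print(zipf_products([120, 60, 40, 30, 24]))  # [120, 120, 120, 120, 120]
```

Real bibliographies only follow these relationships approximately, but the constancy of the products is the pattern the laws describe.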

Bibliometric Methods
Bibliometry guarantees a wide application in the field of knowledge, where bibliometric methods are used to verify the impact and effect of a group of researchers, or of a particular publication, on the entire scientific community. The most important and well-known bibliometric methods are certainly citation analysis and content analysis; other bibliometric applications also exist, such as the measurement of term frequencies and the exploration of grammatical structures, which have a less significant impact than those just mentioned. Citation analysis is one of the main bibliometric methods: it uses the citations in scientific productions to create a series of connections to other works or other researchers. Within citation analysis, a series of specific models operate, namely co-citation analysis and bibliographic coupling. Citation analysis has undergone numerous changes over time: although for many decades the Science Citation Index (now owned by Clarivate Analytics) was considered the main tool for measuring citations, for some time now web services have been challenging its dominance. The reason for this change is that studies carried out in recent years provide rather different results, so it seems essential to resort to different sources to evaluate the citational impact of scientific production. The Web has been decisive in this sense, allowing the birth of various databases, such as Scopus and Google Scholar, which have introduced new methods of evaluating citations. Content analysis, on the other hand, born as textual analysis when conducted exclusively on texts, is a research technique in the social sciences applied to the study of the content of communication.
In practice, content analysis provides researchers with a method for analysing large bodies of text, for example by examining the most used keywords to identify the supporting structures of their content, or by detecting which words recur most often within a publication. Although these are the most important bibliometric methods, we must consider that, compared to the past, they have acquired a speed and automation that allow a researcher, or a group of researchers, to use them with exceptional ease. The key element that distinguishes traditional bibliometric methods from modern ones is therefore automation, which has become pervasive in today's society and has influenced all its aspects.
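The keyword-counting side of content analysis can be sketched in a few lines of Python. The two abstracts and the stopword list below are invented for illustration; in practice the texts would come from a bibliographic export:

```python
import re
from collections import Counter

# Invented abstracts standing in for a real bibliographic collection.
abstracts = [
    "Bibliometric analysis of citation networks in information science.",
    "A citation analysis of bibliometric indicators and citation impact.",
]

# A tiny stopword list; real analyses use much larger ones.
STOPWORDS = {"a", "of", "in", "and", "the"}

def keyword_frequencies(texts):
    """Count how often each non-stopword term appears across all texts."""
    counts = Counter()
    for text in texts:
        counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                      if w not in STOPWORDS)
    return counts

# The most frequent terms hint at the supporting structures of the content.
print(keyword_frequencies(abstracts).most_common(3))
```

On these two abstracts, "citation" appears three times and "bibliometric" and "analysis" twice each, so they top the ranking.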

The Bibliometrics Indexes
The evaluation criteria of scientific research can be of different types: quantitative, qualitative, and of other kinds. The quantitative criteria, i.e., bibliometric indices, consist of the extrapolation of quantitative relationships between the various documents, using a quantitative analysis of bibliographic citations. The qualitative criteria rely on peer review, a procedure for selecting articles and projects following an evaluation by specialists who verify the suitability and correctness of the information. Finally, the criteria of other kinds take various forms, for example participation by invitation at conferences, the awarding of prizes and honours, publications on Wikipedia, and open-access systems. What interests us here are mainly the quantitative criteria, and therefore the bibliometric indexes. Bibliometric indices make it possible to quantitatively assess the impact of research within the disciplinary community to which it belongs. They are usually based on the weight of citations, but some of a different nature are not based on citation algorithms. Bibliometric indicators can be applied to the production of a single researcher, a periodical, a working group, a scientific community, a university, an entire country, etc. The elements considered for scientific research are various, for example the number of citations of an author's publications, the number of citations received by journals (Impact Factor), and the productivity and impact of a single author's publications (H-index). Furthermore, not all indexes are easily usable: some may be covered by copyright that limits their use to those who request appropriate authorization, as in the case of the Impact Factor, while others can be used freely, through online programs and databases, by all those who need them; a clear example is the H-index.

The Impact Factor (IF)
The Impact Factor (IF) is considered one of the most important bibliometric indices, developed in 1955 by the American Eugene Garfield. The IF is contained in the Journal Citation Reports (JCR), first published in 1975. In this sense it is appropriate to make a clarification: the IF is not the only index contained in the JCR; there are other indicators, such as the Immediacy Index, created to detect the speed with which an article is cited, the Cited Half-Life, which in some way measures the stability over time of the citations received, and the 5-Year Impact Factor. The IF is obtained by dividing the number of citations received each year by the number of articles published in the same period in the same journal. Over time, a heated discussion has developed around this index regarding its effectiveness in obtaining acceptable results. The main criticisms are that the two-year time base proved penalizing and not very representative for some disciplines; that it is impossible to check the data used to calculate the indicator; that there is a risk of distorted results due to uncertainty in the definition of the quantities; the abuse of self-citations; and the failure to exclude articles that present clear errors, recognized as such both by the authors themselves and by the scientific community. These are only some of the criticalities that characterize this index; as in all things, alongside the weak points there are strengths, more precisely the simplicity of the index, the stability of the measure over time, and the constant updating of the data. After this more theoretical overview, let us move on to the practical one with the help of Table 2 (below).
In the case under consideration, we just need to divide the total number of citations, equal to 450, by the total number of articles, equal to 120, obtaining an IF of 3.75. In conclusion, as the example just given shows, we are faced with an index characterized in its entirety by a high degree of simplicity, which makes it one of the most used indices.
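The arithmetic of the worked example can be written out as a one-line Python function (a minimal sketch of the standard two-year IF formula, using the figures quoted above):

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Classic two-year Impact Factor: citations received in a year to items
    published in the two preceding years, divided by the number of items."""
    return citations / citable_items

# Figures from the worked example above: 450 citations to 120 articles.
print(impact_factor(450, 120))  # 3.75
```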

H-Index
The H-Index, also commonly called the Hirsch index, is one of the most used indexes in the bibliometric field. This index, compared to the one discussed above, was born and works differently. The name derives from its inventor, the physicist J. E. Hirsch of the University of California, San Diego, who devised the index to quantify the prolificacy and scientific impact of an author, based on both the number of publications and the number of citations received. According to Hirsch (2005), a researcher has an index h if h of his Np published papers have received at least h citations each, and the remaining (Np − h) papers have each received no more than h citations. Practically, a researcher with an index equal to 5 has published at least 5 articles that have each received at least 5 citations, while the remaining articles have obtained at most 5 citations each. The H-Index was designed to make comparable the production of less prolific authors with a high number of citations and that of very prolific authors with a low number of citations (Piazzini, 2010). What perhaps makes this index highly appreciated by scientific critics is the fact that it is freely accessible, unlike the IF, and therefore usable by all those who need it without further authorization. It is both an indicator of the quantity of a researcher's scientific production (i.e., the number of articles published in each period) and of its quality/impact (the citational impact totalled in each period) (Hirsch, 2005). In addition to the advantages offered by bibliometric indices in general, such as the objectivity of the results and the ease of calculation, this index, albeit a summary measure of the quality and quantity of scientific production, is better suited than other indices (Hirsch, 2005).
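Hirsch's definition translates directly into a short computation. The following Python sketch (citation counts invented for illustration) ranks a researcher's papers by citations and finds the largest h such that h papers have at least h citations each:

```python
def h_index(citations: list[int]) -> int:
    """Hirsch's h-index: the largest h such that the researcher has
    h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:
            h = position
        else:
            break
    return h

# Seven papers; the five most cited each have at least 5 citations,
# so h = 5, matching the worked example in the text.
print(h_index([10, 9, 7, 6, 5, 2, 1]))  # 5
```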
Review of European Studies, Vol. 14, No. 3; 2022

Furthermore, being an objective provider of information, it could be used to allocate funds to universities or to decide researchers' promotions (Costas and Bordons, 2007). Another important benefit is its robustness (Vanclay, 2008), given its insensitivity to rarely cited articles and the ease with which it can be calculated from simple internet databases. Studies have shown that it is effective above all for comparisons between scientists working in the same field, particularly physicists and mathematicians. Having said that, let us now indicate the steps that allow us to derive the H-Index, using one of the many databases, namely Scopus. Once you have logged into Scopus through your browser, select the search by authors ("Author Search") and enter the name of the author in question. Once the search has run, select the documents relevant to your survey and click on "View Citation Overview". The h-index will then be shown on the right of the screen, with a link to the corresponding graph. This procedure can also be repeated using other databases, as previously mentioned, or using the Bibliometrix R package, where, by going to the "Authors" tab, we can open a figure that shows, based on the selected data, the authors with the highest h-index values, as in the figure relating to the Ged-Garch case study.

Bibliometrics Analysis
The use of bibliometrics is gradually spreading to all disciplines. It is mainly adopted for science mapping, given the emphasis on empirical contributions in producing voluminous, fragmented, and controversial research streams. After the study design phase, analysed in the previous paragraphs, the second step involves the use of open-source statistical software. In the data collection phase, we used the Scopus database to create the .bib file, ready for the third phase of data analysis. Our choice fell precisely on the Scopus database, founded by the publishing house Elsevier in 2004, as it is one of the most widespread databases covering various scientific fields and frequently used for research in the literature (Guz and Rushchitsky, 2009). A further advantage of Scopus is its frequent periodic updating, which allows us to obtain very recent data, the wide range of articles from more than about 5000 publishers, and above all the presence of open-access journals. That said, the case studies examined will focus on the Generalized Error Distribution, the Lp-norm estimators, the asymmetric Generalized Error Distribution, and finally the Ged-Garch models.

Bibliometrix R-tool
Science mapping is complex and laborious because it is multi-step and frequently requires numerous software tools, not all of which are freeware (or freemium), although automated workflows that integrate these tools into an organized data flow are emerging. After selecting the publications relevant to our research, we used the Bibliometrix R package (Aria and Cuccurullo, 2016). Bibliometrix is an open-source tool for quantitative research in Scientometry and Bibliometry that includes all the major bibliometric methods. As a package for bibliometric analysis written in R, Bibliometrix works within the R ecosystem and operates in an integrated environment consisting of open libraries, open algorithms, and open graphical software. The Bibliometrix package allows you to import data from the major databases, perform bibliometric analyses, and build data matrices for co-citation, bibliographic coupling, scientific collaboration, and co-word analysis. Furthermore, Biblioshiny, a web interface for Bibliometrix, was used for the creation of a concept map and a co-citation network (Aria and Cuccurullo, 2016). Bibliographic data are processed through a workflow: study design, data collection, data analysis, data visualization, and interpretation. Aria and Cuccurullo (2017) stated that bibliometric analysis is a cumbersome activity involving many procedures. However, according to Guler et al. (2016), there are automated software tools that are used by information scientists and practitioners. By extracting descriptive and network data from the bibliographic literature, one can perform citation analysis. Citation analyses are used to reveal scientific growth in a specific field at three levels: micro, macro, and meso. The conventional methods used in citation analysis are bibliographic coupling, co-citation, co-author, and co-word analysis (Derviş, 2019). The analysis of the results then continued with their visualization using data reduction techniques.
Bibliometrix supports a recommended workflow to perform bibliometric analyses. The tool is flexible and can be rapidly upgraded and integrated with other statistical R packages, since it is programmed in R; it is therefore useful in a constantly changing science such as bibliometrics. Data importing and conversion in Bibliometrix follow a common workflow to search and export bibliographic documents, mainly based on three steps:

1. Write and submit a query. A query is usually based on a set of terms linked by Boolean operators; the search engine queries the database to identify the records matching the query (e.g., with TI = (bibliometric AND analysis), the search engine retrieves all the records whose title contains the words 'bibliometric' and 'analysis' simultaneously).

2. Refine the search results. The results can be refined by applying filtering criteria on additional fields (e.g., selecting Document Type = 'Journal Article' AND Language = 'English' AND Timespan = 1990:2020 AND Subject Category = 'Management').

3. Export the search results. In this step, the user chooses which metadata to download (e.g., authors' names, title, journal title, affiliation, publication year, etc.) and the export file format in which to save the results. To work, Bibliometrix requires a minimum set of mandatory metadata. Our advice is always to select all the metadata fields, to be sure you can perform all the analyses implemented in Bibliometrix. Many databases support a variety of export formats, some proprietary (e.g., EndNote, Mendeley) and some standard (e.g., HTML, plain text, BibTeX).
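The export step above produces a BibTeX file whose entries must carry the mandatory metadata. As a language-neutral illustration (the article's own workflow uses R and Bibliometrix), the following Python sketch parses a tiny invented BibTeX excerpt with a deliberately rough splitter, not a full BibTeX parser, and flags entries missing mandatory fields:

```python
import re

# A tiny, invented excerpt standing in for a real Scopus BibTeX export.
BIB = """
@article{smith2020,
  author  = {Smith, J.},
  title   = {Bibliometric analysis of something},
  journal = {Some Journal},
  year    = {2020},
}
@article{rossi2019,
  author  = {Rossi, M.},
  title   = {Another study},
  year    = {2019},
}
"""

MANDATORY = {"author", "title", "journal", "year"}

def parse_entries(bibtex: str):
    """Very rough BibTeX splitter: one dict of fields per @article entry.
    Enough for a metadata sanity check before analysis, nothing more."""
    entries = []
    for chunk in re.split(r"@article\{[^,]*,", bibtex)[1:]:
        fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", chunk))
        entries.append(fields)
    return entries

for entry in parse_entries(BIB):
    missing = MANDATORY - set(entry)
    if missing:
        print("entry missing mandatory fields:", sorted(missing))
```

Running the check on this excerpt reports the second entry, which lacks the journal field; such entries would limit the analyses Bibliometrix can perform.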

Generalized Error Distribution
The Generalized Error Distribution is a family of statistical distributions that generalizes the Gaussian curve: it contains an infinite series of probability distributions, all symmetric (Giacalone, 2021). The first phase of our analysis consists of searching for the topic on the Scopus website and selecting the publications relevant to our research. After a series of unsuccessful attempts, due to the lack of relevance of the retrieved topics, and after the appropriate modifications, we obtained an acceptable result in terms of quality and quantity. We decided to search for the more specific "Generalized Error Distribution parameter estimation" instead of the simple "Generalized Error Distribution", which returned a total of 1104 documents, from which we selected 160. We then downloaded the .bib file and, using RStudio and the Bibliometrix R package, obtained a large amount of information to analyse. In the "Dataset" tab, for example, it is interesting to examine the main information, such as the reference time span or the types of document selected. All this information is summarized in the table below, which shows that our search selected 160 documents, covering the period from 1983 to 2021, with a total of 350 authors. Source: own elaboration.
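Before moving to the figures, it may help to make the GED family itself concrete. In a common parameterization its density is f(x) = p / (2 sigma Gamma(1/p)) * exp(-(|x - mu|/sigma)^p); p = 2 (with sigma = sqrt(2)) recovers the standard Gaussian and p = 1 the Laplace distribution. A minimal Python sketch under this parameterization (our own illustration, not the estimation procedure studied in the surveyed papers):

```python
import math

def ged_pdf(x, mu=0.0, sigma=1.0, p=2.0):
    """Generalized Error Distribution density:
    f(x) = p / (2*sigma*Gamma(1/p)) * exp(-(|x - mu|/sigma)**p)."""
    z = abs(x - mu) / sigma
    return p / (2.0 * sigma * math.gamma(1.0 / p)) * math.exp(-z ** p)

# p = 2 with sigma = sqrt(2) coincides with the standard normal density;
# p = 1 gives the Laplace density 1/(2*sigma) * exp(-|x|/sigma).
normal = ged_pdf(0.7, sigma=math.sqrt(2), p=2.0)
```

The single shape parameter p thus interpolates an infinite family of symmetric densities around the Gaussian, which is why parameter estimation for p is the recurring theme of the retrieved literature.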
In the Sources tab, we can identify the most relevant source using a dedicated chart entitled "Most Relevant Sources". For our search, figure 1 shows that, by number of documents, the most relevant source is Communications in Statistics - Simulation and Computation, with more than 8 documents, while the position of least relevant source is shared among several sources holding fewer than 2.5 documents each.

Figure 1. Most relevant sources
Source: own elaboration.

Figure 2. Country collaboration map
Source: own elaboration.
Figure 2 shows the Country Collaboration Map, which allows us to analyse the collaboration network on a geographic map. Countries take on a more or less intense blue colour depending on their number of publications: the brightest blue is seen in the United States, while a less intense blue is noted in Europe. It is also important to follow the red lines, which represent the collaboration links; the most important one connects the United States with China. This figure matters because it reveals the links underlying the individual publications within the data collection. As regards content analysis, the R Bibliometrix package provides much other information in graphical form. A concrete example is figure 3 below, the so-called "TreeMap", a tree map that displays hierarchical data using nested rectangles. This map can be computed for 4 types of field: title, abstract, keywords and authors' keywords. Here we consider the title field and set the maximum number of words to 200. As the map shows through the size and percentage of the rectangles, the term "generalized" appears 84 times, corresponding to 8% of the analysed title words. Below it we find, for example, "estimation", "parameter", "model", "regression", "distribution", etc., with their values and percentages, which can be useful for identifying the keywords used by authors in titles. In general, analysing the trend of scientific production on this topic shows significant growth, favoured by the speed with which these data can now be obtained.
Compared to the past, this speed has not only brought a degree of automation but has also made this type of information more attractive to all those whose objective is scientific production. Source: own elaboration.
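The word counts behind a TreeMap of title terms can be reproduced with a few lines of Python. This is a hypothetical sketch with invented titles, not the actual Scopus data; it only shows how a term's absolute frequency and its percentage share of all title words are computed:

```python
from collections import Counter

def title_word_shares(titles, top=5):
    """Count word occurrences across titles and return each word's
    absolute frequency and its rounded share of all counted words."""
    words = [w for t in titles for w in t.lower().split()]
    counts = Counter(words)
    total = sum(counts.values())
    return [(w, n, round(100.0 * n / total)) for w, n in counts.most_common(top)]

# Invented example titles standing in for the 160 retrieved documents.
titles = [
    "generalized error distribution parameter estimation",
    "parameter estimation for the generalized gaussian model",
]
shares = title_word_shares(titles, top=3)
```

A production tool would additionally strip stop words ("for", "the") before counting, which is why the TreeMap in figure 3 is dominated by substantive terms such as "generalized" and "estimation".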

Lp-Norm
Lp-Norm estimators are characterized by an exponent p and generalize the method of least squares: when p = 2, the Lp-Norm estimators coincide with the least squares method (Giacalone, 2020). The first necessary step is to search for Lp-Norm on the Scopus website, selecting the articles and publications that are relevant to our research. We initially searched only for the term, without any limitation on the time span or subject area, obtaining 2,369 documents (see figure 4). However, analysing a collection of 2,369 documents, some of which are fundamentally not relevant to our research, would be as complex as it is unhelpful. We therefore modified the search, looking for "LP-NORM REGRESSION" instead of the broader "Lp-NORM" and manually selecting the relevant fields (Giacalone et al., 2018). This yielded 150 documents, an acceptable number with which to start phase 2 of the analysis, which consists of selecting the items and exporting them. Subsequently, using RStudio and the R Bibliometrix package (Aria and Cuccurullo, 2017) and loading the previously obtained file, we obtained a series of information filtered and catalogued according to different criteria. In the "Dataset" tab, for example, we can inspect the "Main Information" (see table 4). This table consists of several parts: a first part with general information on the collection (1005); a second part with information on the document types, in this case 33 articles, 3 conference papers and 1 review; a third part concerning the contents of the documents; a fourth part summarizing the information on the authors, for example their total number (63) and the author appearances (72); and finally a last part concerning author collaboration.
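To illustrate the estimator itself: for a location parameter, the Lp-Norm estimate minimizes the sum of |x_i - m|^p, and p = 2 yields the sample mean. A minimal Python sketch (our own illustration, not code from the cited papers), using ternary search on the objective, which is convex for p >= 1:

```python
def lp_location(data, p=2.0, tol=1e-9):
    """Lp-norm location estimate: argmin over m of sum(|x - m| ** p).
    For p >= 1 the objective is convex, so ternary search converges."""
    lo, hi = min(data), max(data)
    obj = lambda m: sum(abs(x - m) ** p for x in data)
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if obj(m1) < obj(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

data = [1.0, 2.0, 3.0, 10.0]
# p = 2 coincides with the sample mean (here 4.0); exponents closer to 1
# down-weight the outlier 10.0 and pull the estimate towards the median.
```

The same idea carries over to Lp-Norm regression, where the minimization runs over the regression coefficients instead of a single location value.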
Continuing our analysis, in the "Authors" tab we may be interested in the most relevant authors (Most Relevant Authors section, see figure 5). Furthermore, in the data collection we have selected, it is also interesting to note which keywords are most used by the authors, by selecting the "author's keywords" field as the parameter and entering 5 as the number of words, to obtain a more precise result (see figure 6). The result obtained (figure 6) confirms what we mentioned previously, with the keyword "LP-NORM" occupying the first position and the keyword "Exponential Power Function" the last. Finally, figure 7 (historical direct citation network) shows the network of historical citations from 1973 to 2019, allowing us to follow, along the grey lines, the passages over time that built up the citation network as a whole.

Figure 7. Historical direct citation network
Source: own elaboration.
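A historical direct citation network such as the one in figure 7 simply links each document in the collection to the earlier documents in the same collection that it cites. A hypothetical Python sketch of the edge-building step (the paper identifiers are invented):

```python
def direct_citation_edges(references):
    """Build directed edges (citing -> cited) restricted to papers
    inside the collection, as in a historical direct citation network."""
    in_collection = set(references)
    return sorted(
        (citing, cited)
        for citing, cited_list in references.items()
        for cited in cited_list
        if cited in in_collection
    )

# Invented identifiers: each paper maps to the list of works it cites.
# "X_outside" is cited but not in the collection, so it produces no edge.
refs = {
    "A1973": [],
    "B1990": ["A1973", "X_outside"],
    "C2019": ["B1990", "A1973"],
}
edges = direct_citation_edges(refs)
```

Restricting edges to the collection is what makes the network "historical": it traces only the lineage internal to the retrieved literature, which is why the chart spans the collection's own timespan (here 1973 to 2019).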

GED-GARCH Model
This paragraph analyses the data obtained by querying the Scopus database on the GED-GARCH model on 5 November 2020. First, it is important to define the concept of GED-GARCH, in order to better understand what follows. GARCH models are time series models created to predict the volatility of a time series, while GED-GARCH refers to the case in which the residuals of the model come from the Generalized Error Distribution. After following all the usual steps, we obtained a file of 59 documents (Cerqueti et al., 2020); this information can be found in the "Main Information" table in the Dataset sheet. In addition to the information already seen (see the Lp-Norm case study), it is important to note that the time interval analysed is 2000:2020, i.e., only publications and articles from 2000 to 2020 were taken into consideration. The reason for this choice is that selecting data that are too old would have somehow distorted the results of the research.
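To fix ideas on the model itself: a GARCH(1,1) recursion is sigma2_t = omega + alpha * eps2_{t-1} + beta * sigma2_{t-1}, and in a GED-GARCH the standardized innovations are drawn from a GED rather than a Gaussian. The following Python sketch is our own illustration of the model definition (not the estimation performed on the Scopus data); it uses the gamma representation of the GED, in which |z|^p follows a Gamma(1/p) law, and the parameter values are invented:

```python
import math
import random

def simulate_ged_garch(n, omega=0.05, alpha=0.1, beta=0.85, p=1.5, seed=42):
    """Simulate a GARCH(1,1) series whose standardized innovations follow
    a GED with shape p (p = 2 would give Gaussian innovations)."""
    rng = random.Random(seed)
    # Scale making the GED innovation unit-variance: Var = Gamma(3/p)/Gamma(1/p).
    scale = math.sqrt(math.gamma(1.0 / p) / math.gamma(3.0 / p))

    def ged_variate():
        # |z|**p ~ Gamma(1/p, 1): draw g and take a signed p-th root.
        g = rng.gammavariate(1.0 / p, 1.0)
        return rng.choice((-1.0, 1.0)) * scale * g ** (1.0 / p)

    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        eps = ged_variate() * math.sqrt(sigma2)
        returns.append(eps)
        sigma2 = omega + alpha * eps ** 2 + beta * sigma2
    return returns

series = simulate_ged_garch(1000)
```

With p below 2 the innovations are heavier-tailed than Gaussian, which is the empirical motivation for GED-GARCH in the financial time series literature retrieved by this query.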

Figure 8. Source impact by H-Index
Source: own elaboration. As figure 8 shows, the H-index of the sources is roughly constant across all of them: a sort of objective balance among the sources can therefore be noted. The situation is different for the H-index of the authors (see figure 9), where there is no clear equilibrium and some authors have a markedly higher index than others. In both figures we limited the maximum number of sources and authors shown, to avoid a chart that is too dispersive and imprecise. Although the two figures have common elements, we cannot fail to highlight the differences in the results: while the H-index of the sources is roughly constant, owing to their more or less equal importance, the H-index of the individual authors tends to be more differentiated, owing to its highly individual character. Usually, comparing two figures that refer to two different objects is fruitless; in this case, however, our main goal is indeed to comment on and analyse the data obtained, but we also aim to allow the reader to fully understand the opportunities and possibilities on offer. For example, these figures can be useful to a researcher who wants to carry out a cross-analysis of the authors and sources on a specific topic, taking the H-index as the reference value. In bibliometric analysis, even elements that apparently have little value may become the subject of investigation.
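Recall that the H-index of a source or an author is the largest h such that h of its publications have received at least h citations each. A minimal Python sketch of this definition (our own generic illustration, not tied to the Scopus data):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A source whose papers are cited 10, 8, 5, 4 and 3 times has H-index 4:
# four of its papers each have at least four citations.
```

Because the index combines productivity and citation impact in a single integer, it is the natural common scale on which figures 8 and 9 compare sources and authors.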
To illustrate what has just been said, let us look at figure 9, which examines the scientific production of the different countries, and table 6, which summarizes the corresponding frequencies. Table 6. Frequency by region. Source: own elaboration. Table 6 clearly shows that China holds the highest number of scientific documents produced in the time interval from 2000 to 2020, equal to 60; this information is reproduced in figure 9 in the form of a geographical map, where China appears in a more marked blue. The most relevant finding is undoubtedly the enormous gap between the leader, China, and the following countries, including the USA, which stands at only 5 documents. Furthermore, the quantitative analysis of scientific production can also be examined through a figure showing the progressive growth of the different countries. The different figures can then be compared to understand, for example, what progress has been made within the entire scientific community, or to analyse how China's growth has unfolded from 2000 to today. Figure 9. Country scientific production. Source: own elaboration.

Skewed GED
The latest bibliometric analysis of this article has as its research object the skewed GED, a family of asymmetric probability distributions. Among its special cases, we recall the asymmetric Gaussian distribution, the asymmetric uniform distribution and the asymmetric Laplace distribution. For the skewed GED we obtained a data file consisting of 75 documents from 65 different sources (Cerqueti, 2021). Furthermore, in this research we refer exclusively to 3 types of document: 68 articles, 2 book chapters and 5 conference papers. Based on these data and using the R Bibliometrix package, we obtained several interesting figures, including figure 10 below, which shows an overview of the average citations per article per year. Another interesting figure, covering 2000 to today, is the one showing the annual scientific production. Figure 11. Annual scientific production. Source: own elaboration.
As shown in figure 11, over time there has been a progressive growth in scientific production; what stands out most is the difference between the 2000s and the previous years. This gap could be explained by the introduction of new technologies, which have made it possible to develop new methods and tools characterized by a certain basic speed and a certain degree of automation. In the "Conceptual Structure" tab it is interesting to study the information contained in figure 12, which shows the thematic evolution of the selected data from 1984 to today.
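One standard way to obtain a skewed GED from the symmetric one analysed in this section is the two-piece (Fernández and Steel) construction, which rescales the two halves of a symmetric density by a skewness parameter gamma > 0, with gamma = 1 recovering the symmetric case. A hedged Python sketch of this construction (an illustration only; the surveyed papers may use different parameterizations):

```python
import math

def ged_pdf(x, p=2.0):
    """Symmetric GED density with unit scale:
    f(x) = p / (2*Gamma(1/p)) * exp(-|x|**p)."""
    return p / (2.0 * math.gamma(1.0 / p)) * math.exp(-abs(x) ** p)

def skewed_ged_pdf(x, p=2.0, gamma=1.0):
    """Two-piece skewed GED: the right half is stretched by gamma and the
    left half compressed by 1/gamma; gamma = 1 gives the symmetric GED."""
    norm = 2.0 / (gamma + 1.0 / gamma)
    if x >= 0:
        return norm * ged_pdf(x / gamma, p)
    return norm * ged_pdf(x * gamma, p)
```

With p = 2 this yields an asymmetric Gaussian and with p = 1 an asymmetric Laplace, matching the special cases recalled above; the normalizing factor 2 / (gamma + 1/gamma) keeps the density integrating to one.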

Limitations
The present study has some limitations. First, the application is based only on the Scopus database, excluding other useful databases such as Google Scholar, ResearchGate and Academia. Second, the thematic analysis performed by the authors is qualitative and, consequently, subjective; other researchers' analyses might result in different themes and areas (Kokol et al., 2021). We must note that, by using only the Scopus database, we have excluded other papers that are important for the research topics considered in our application study and are not indexed in this database. For example, we have not included papers such as conference proceedings, and this is an important limitation. Therefore, we advise that the interpretation of the results take these limitations into consideration. For this reason, we suggest considering data from different databases to confirm our application and compare the results obtained on the research topics considered.

Conclusions
The need for an objective approach to research evaluation, long desired by the entire scientific community in order to count on tools allowing impartial evaluations, matches that of librarians, who have adopted bibliometrics among the most widely used methodologies to reduce the difficulties inherent in the subjectivity of assessments. This need will allow bibliometrics to enjoy ample room for growth and to redefine its role within society in the future. In our opinion, there will be greater consolidation of bibliometrics in the future, as the introduction of increasingly advanced technologies will create new tools and methods characterized by a high degree of automation and speed. Furthermore, we believe it is extremely important, in order to favour this type of approach, to introduce over time new open-access systems, or in any case new tools that are both easy to use and, above all, free. As for the results obtained from the figures, it is important to underline that they cannot be generalized, because we used Scopus as the reference database rather than others. What we are sure of, or rather what we expect, is that over time there will be progressive growth in scientific production as a whole, because the community will physiologically tend to progress, and that this growth will lead to an improvement in the results obtained. In conclusion, we believe we are dealing with a discipline capable of guaranteeing enormous satisfaction over time, both from the point of view of objective evaluation and from the point of view of research, representing one more piece of the complicated mosaic of scientific literature.