Assessing publication output at the country level in the research field of Communication using Garfield's Impact Factor

The ever-increasing evaluation of science has led to the development of indicators at different levels of aggregation. Our objective is to describe and analyze the publication output of the countries that were most active in the field of Communication between 2011 and 2020, according to the data retrieved for this category in the Web of Science Core Collection. To this purpose, we use Garfield's Impact Factor, applying the indicator to countries instead of journals. Our results show that the countries with the highest publication activity are not those that make the most impact. We also confirm that English-speaking countries dominate the field in terms of number of publications and that states such as Spain and the Netherlands benefit from the Emerging Sources Citation Index. Furthermore, we find that, for most countries, at least 30% of scientific production involves international collaboration, and that the United States of America is the collaborator of choice in "Communication Studies". Our study corroborates that our "country-based Impact Factor" provides a quick and valuable bibliometric picture in good agreement with the results supplied by other indicators such as the Category Normalized Citation Impact (CNCI), the 5-year Impact Factor, and the percentage of publications in the top 10%.


Introduction
In order to understand how the communication research environment is developing at the international level, it is necessary to study publication activity at the country level. For this reason, nations have been established as the unit of analysis in this study, from which we aim to identify the contribution made by different countries to the category of Communication. To date, little work has been done in this regard, although such studies are essential for analyzing how communication research evolves globally.
"Communication" is a subject category frequently used in bibliometric studies (Leydesdorff and Probst 2009; Repiso et al. 2019) because it is a model case of cross-disciplinary research. It is a field that, according to Craig (1999), gathers seven traditions: rhetorical, semiotic, phenomenological, cybernetic, socio-psychological, socio-cultural, and critical. Thus, under this subject category we find publications from the Sciences, the Humanities, and most of the Social Sciences, dealing both with the study of communicative elements within their own areas and with their application to other ones.
The main function of bibliometrics is to synthesize and describe complex information through mathematical methods in order to analyze the scientific process, reveal patterns, and understand how it develops. Pritchard (1969) defined bibliometrics as "the application of mathematics and statistical methods to publications and other media of communication".
Scientometric methods are applied to the evaluation of science at different levels of aggregation: the macro-level, for countries and fields of study; the meso-level, for institutions and journals; and the micro-level, for research teams and individual researchers (Glänzel and Moed 2002). At the macro level, science indicators are in high demand as national economies are increasingly knowledge-based and science has been organized on a grand scale, as well as receiving substantial economic investment (Leydesdorff et al. 2016).
Scientometric indicators to evaluate science have emerged, and their number has increased in step with institutional and governmental demands for evaluation. These indicators are widely debated, as they have developed into management tools applied at different levels. Leydesdorff et al. (2016) distinguish between four groups of agents that use the same indicator in different ways: producers (of indicators), bibliometricians, science policy makers, and scientists. From different standpoints, each agent provides their own interpretation of results, which may have different implications in different contexts. While one group may consider a particular methodology justified, another may disagree. Furthermore, indicators like university rankings have a significant global audience. Even in countries like the United States of America, the media often report their results (González-Riaño et al. 2014), and university administrators, staff, and even prospective students frequently rely on them (Meredith 2004).
Thus, the evaluation of science remains a topic of debate with an ever-growing audience, but a system that meets all needs and addresses all issues remains a utopian dream. No single approach can possibly fit all realities. Nonetheless, we can approach scientific reality via existing methods and tools that have already been validated and accepted by the scientific community.
Bibliometric indicators approximate scientific results that have been recorded in publications. Like any indicator, they can be used in different forms: "frequencies, percentages, ranks, means, rates, ratings" (Schmitz 1993). "Their use is based on the important role that publications play in the dissemination of new knowledge, a role assumed at all levels of the scientific process" (Gómez Caridad and Bordons 2009). Indicators used for scientific evaluation often generate classifications, like informal rankings based on the number of documents or citations. They synthesize and reduce information about a given phenomenon, which renders them inaccurate and flawed. However, they provide data that can trigger important decisions about the allocation of resources, student admissions, staffing, curriculum validation, and other issues.
The most widely recognized bibliometric indicator is the Impact Factor (IF), calculated and published in the Journal Citation Reports (JCR). It is so well known that it has given birth to a range of other indicators based on production and citations received: the so-called impact indicators. Other indicators, like the h-index, are popularly used to characterize the publication activity of individual researchers (Hirsch 2005); the h-index has also been applied to journals and to other collections of publications.
Garfield's Impact Factor has served as a model for the more recent journal impact measures, such as the SCImago Journal Rank, the Eigenfactor Score and Article Influence Score, the h5-index, CiteScore, and SNIP, which are essentially corrected or more sophisticated versions of it. Work is currently under way to standardize citation impact (Bornmann and Marx 2015), especially with regard to fields of study, and to overcome the limitations of the IF through new metrics (Glänzel and Moed 2002).
Since 1955, when Garfield first described the IF (Garfield 1955), and despite the malicious or biased use that can be made of it, it remains an essential quantitative instrument for assessing the impact and prestige of journals in each area and, by association, a means of evaluating research performance in the most recent years, where the citation window is too short for the sound use of other citation indicators (Hoeffel 1998; Glänzel et al. 2016; Gorraiz et al. 2020). At the same time, the IF is widely misused as a measure of quality because it can be molded to match expert opinions as to which specialized journals are the "best" in any given field (Hoeffel 1998). In this regard, the San Francisco Declaration on Research Assessment (DORA) emerged to improve the ways in which the output of scientific research is evaluated. DORA's general recommendation is not to use "journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions".1
The first JCR was published by the Institute for Scientific Information, now Clarivate Analytics, in 1975. The JCR provides quantitative tools that classify, evaluate, categorize, and compare journals. Of these tools, the IF is the most important. It is a measure of the frequency with which the "'average' article" in a journal has been cited in a given year or period (Garfield 1976). The annual IF published in the JCR relates citations to recently published cited articles which, according to its creator, tends to diminish bias caused by journal age, size or frequency of publication. Following Garfield (1976), "… the 1979 impact factor of journal X would be calculated by dividing the number of all the SCI source journals' 1979 citations of articles journal X published in 1977 and 1978 by the total number of source items it published in 1977 and 1978."
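In symbols (our notation, not Garfield's own), the quoted example corresponds to:

```latex
\mathrm{IF}_{1979}(X) \;=\; \frac{C_{1979}\!\left(X;\,1977\text{--}1978\right)}{N_{1977\text{--}1978}(X)}
```

where \(C_{1979}(X;\,1977\text{--}1978)\) is the number of 1979 citations from SCI source journals to the items that journal \(X\) published in 1977 and 1978, and \(N_{1977\text{--}1978}(X)\) is the number of source items \(X\) published in those two years.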
Hence, the IF relativizes the size of journals. It should be stressed that Garfield used this indicator as one of a whole battery of indicators assessing journal impact, alongside the total number of citations accumulated by each journal in the study year, the speed with which the journal is cited (the immediacy index), and the half-life of the citations attracted by each journal.
However, despite being one of the most widely used indicators, the IF is frequently criticized because of how it is used, rather than as an indicator in itself. Glänzel and Moed (2002) collated many of its defects previously identified elsewhere. These include the lack of any discipline- or field-related standardization; the absence of any distinction between the nature or merits of the journals cited; bias in favor of journals with long articles or high citation frequencies; the absence of any indication of statistical deviations; and errors made in calculating the JCR IF due to incorrect identification of the journals cited, among others. Furthermore, when calculating the IF, the asymmetry between the numerator and denominator cannot be ignored: while the numerator includes all citations received, for all document types, in a given period, the denominator consists only of those documents considered citable, i.e. articles, reviews, and proceedings papers. Add to this the fact that the data from which the JCR is generated are not reproducible (Glänzel and Moed 2002). The literature contains numerous proposals to supplement the IF and minimize these biases, although hardly any of them have been put into practice. Only one defect, the short (2-year) citation window, has been corrected, giving rise to the 5-year Impact Factor. Glänzel and Moed (2002) also highlight the IF's strengths, arguing that these lie in its comprehensibility, stability, and apparent reproducibility. Garfield (1972) points out that the IF on its own cannot be used as a sole measure for any purpose. Perhaps the most important application of citation analysis is in science policy studies and research evaluation (Garfield 1972). Despite its failings, the IF is widely accepted by the community and, through IF-generated rankings, is undeniably important both in bibliometrics and in science management.
This work proposes a different use of Garfield's Impact Factor: to analyze the scientific activity of nations. This type of study is useful for quantitative analyses of production, for evaluating research activity policies, and for developing curricular plans. Additionally, it is helpful for understanding and explaining the efficiency of the investment made by national and international organizations and for evaluating and developing past, present and future science policies.
Studies of scientific performance draw on widely studied and analyzed databases that, despite the well-known biases described in the literature, facilitate our approach to scientific reality. Archambault et al. (2006) point to geographical deficiencies in the Web of Science Core Collection (WoS CC), and especially in the Social Sciences Citation Index and the Arts & Humanities Citation Index (SSCI, A&HCI). They warn that any country-based comparison is impossible because English-speaking countries like the USA, England and Canada are favored over Germany, Spain, France and other non-English-speaking countries, a bias that could affect publication counts and citation analysis. To diminish this bias, the WoS included the Emerging Sources Citation Index (ESCI) in 2015. The ESCI covers all SSCI and Science Citation Index Expanded disciplines and includes both wide-ranging international publications and those that provide regional or more specialized coverage2 (De Filippo and Gorraiz 2020; Repiso and Torres-Salinas 2016; Somoza-Fernández et al. 2018).
The Web of Science (WoS) and Scopus provide access to classifications based on simple record counts. Indicators such as the h-index or the h5-index have also been applied for this purpose. For example, the SCImago group's SCImago Journal & Country Rank applies the h-index, among other indicators, to countries. The WoS also facilitates classification on the basis of record counts, and Clarivate Analytics' InCites provides indicators that can be applied to countries: the h-index, the Category Normalized Citation Impact (CNCI), or the percentage of publications in the top 10%, among others. Using these comparisons, it has been shown elsewhere that the most productive countries are not those appearing in the higher ranks of citation impact classifications (Bornmann and Leydesdorff 2012; Trabadela-Robles et al. 2020).
In "Communication Studies", such evaluations have been scarcely developed but, potentially, they constitute a starting point for the study of research output and performance. Trabadela-Robles et al. (2020) analyzed the scientific production of the 27 most productive countries in the field for the period 2003-2018. Previous studies comprised bibliometric analyses of "Communication Studies" at the journal level (Lauf 2005; Park and Leydesdorff 2009). They identified and generated collaboration networks linking disciplines or showcased the dominant position of English-speaking countries.
From an academic perspective, several authors have approached the field by studying university doctoral programs. Barnett et al. (2010) proposed systems that measure quality by studying programs and their tenured professors through the recruitment of recent PhD graduates and faculty. Barnett and Feeley (2011) compared the National Research Council data with previously studied recruitment data, with results indicating the importance of reputation, publications, and scholarships, among other factors relevant to recruitment. Cervi et al. (2020) analyzed the study programs in Communication Studies and Journalism of the highest-placed European universities in the QS World University Rankings. Most such rankings are linked to publication output because their statistical calculations include the number of publications and the number of citations, among other elements.
University rankings, such as the Academic Ranking of World Universities, the Global Ranking of Academic Subjects or the QS World University Rankings by Subject, distinguish between fields of study. This facilitates analysis of the countries of origin of the institutions being studied. These rankings are based on bibliometric indicators: citations per article, number of publications, h-index, articles in top journals, Category Normalized Citation Impact, quartiles, percentiles, or research collaboration between countries.
Collaboration between authors, institutions and countries is an important part of scientific evaluation. According to Kwiek (2018), scientific collaboration implies international recognition, the possibility of being eligible for more funds, and improved career opportunities in the academic world. Research studies supported by a number of institutions are more frequently cited than those coming from a single affiliation. Moreover, when the institutions are in different countries, the impact of their production surpasses that of studies from a single country (Kwiek 2018). Hence, collaboration is more than an individual matter, as it impacts on funding and on institutional prestige; it is even positively weighted in rankings like the SCImago Institutions Rankings. Earlier studies pointed to an increase in international collaboration and in the number of countries with which any given country collaborates (Arunachalam and Doss 2000). Furthermore, the USA has been described as the collaborating country of choice (Arunachalam and Doss 2000). Not only is collaboration taken into account in the development and implementation of scientific policies, it is rewarded with both funding and academic recognition. Governments need evaluations to optimize research allocations, re-orient research support, rationalize research organizations, restructure research in specific fields, or increase research productivity (Moed 2016). For these reasons, this study also includes an analysis of scientific cooperation in this research field, measured through co-publication.
To summarize, this study, on the one hand, suggests a further application of the IF as an indicator and, on the other, scrutinizes the evolution of the publication output in Communication at the country level. Finally, it identifies the most successful countries in terms of publication activity and impact, which will help to consolidate and enlarge study programs, as well as to amend existing collaboration strategies and develop new ones.

Research questions
The objective of the present study is to characterize the scientific impact of the top 25 countries in Communication using the Impact Factor calculated for each country as an indicator. Furthermore, to complete and provide a more accurate interpretation of these data, we examine the raw production of each country, the collaboration between countries, and the relationship of that collaboration to each country's scientific impact. This study aims to answer the following questions:

Data collection
Our sample comprises the 25 countries with the highest publication activity (number of research articles, reviews, and proceedings papers) in the category of "Communication Studies" (SSCI) between 2011 and 2020 in the WoS Core Collection (SCI, SSCI, and A&HCI). However, the records for 2020 were still not complete at the time of writing, as the WoS continues to update these data, so the results for this most recent year require caution. That is why we have used 2019 as the latest year for the calculation of the 5-year IF and for subsequent comparison with the CNCI and the top 10% indicator. The documents needed to calculate the IFs from 2011 to 2020 were published between 2009 and 2019. We have also included 2020, the last year for which we had complete WoS records. We identified 38,801 records for this period (2011-2020). However, the total number of documents listed is 46,152, which is clearly in excess of the number mentioned earlier. This discrepancy is due to collaboration between countries, as the table counts a co-published record once per participating country. Taking this into consideration, our sample, the top 25 countries in the field, generates 91.65% of all publications (Table 1). The country-based Impact Factor is calculated as explained in Fig. 1, treating countries as if they were journals.
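As a minimal sketch of the method (with invented example numbers, not real WoS data), the country-based IF for a given year can be computed exactly like Garfield's 2-year journal IF, substituting a country's citable output for a journal's:

```python
def country_impact_factor(citations, publications, country, year):
    """Two-year country-based IF, computed like Garfield's journal IF
    but with a country's output taking the place of a journal's.

    citations[(country, pub_year)]    -> citations received in `year` by
        the country's papers published in `pub_year`
    publications[(country, pub_year)] -> number of citable items (articles,
        reviews, proceedings papers) the country published in `pub_year`
    """
    cites = citations[(country, year - 1)] + citations[(country, year - 2)]
    items = publications[(country, year - 1)] + publications[(country, year - 2)]
    return cites / items

# Illustrative (invented) numbers, not taken from the study's data:
citations = {("Netherlands", 2018): 900, ("Netherlands", 2019): 600}
publications = {("Netherlands", 2018): 400, ("Netherlands", 2019): 350}
print(country_impact_factor(citations, publications, "Netherlands", 2020))
# 1500 citations / 750 citable items = 2.0
```

The only departure from the journal IF is the aggregation unit; the citation and publication windows are unchanged.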

Analysis of results
Excel (Figs. 2, 3, 4 and 5) and Inkscape software were used to analyze and display our results. Inkscape superimposes different aspects of the results in a single figure, thus facilitating their comprehension and visualization.
To calculate the Category Normalized Citation Impact (CNCI), WoS Core Collection records (SCI, SSCI and A&HCI; articles, reviews and proceedings papers) in "Communication Studies" for 2013-2018 were exported to InCites. We then conducted 2-year searches in InCites (2013-2014, 2014-2015, 2015-2016, 2016-2017, 2017-2018) to calculate the CNCI, with and without ESCI citations, and the percentage of publications in the top 10%, also with and without the ESCI data. These normalized indicators were then correlated with the country-based IF. The CNCI of a document is calculated by dividing the actual count of citing items by the expected citation rate for documents with the same document type, year of publication and subject area. When a document is assigned to more than one subject area, an average of the ratios of actual to expected citations is used. The CNCI of a set of documents, for example the collected works of an individual, institution or country/region, is the average of the CNCI values of all the documents in the set.3 The percentage of publications in the top 10% is based on the number of citations attracted by each document and is calculated within the corresponding subject category, publication year, and document type.
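The CNCI logic just described can be sketched as follows; the expected baseline values here are invented for illustration (in practice they come from InCites):

```python
def document_cnci(actual_citations, expected_by_category):
    """CNCI of a single document: the ratio of actual to expected
    citations, averaged over its subject categories when the document
    is assigned to more than one."""
    ratios = [actual_citations / expected for expected in expected_by_category]
    return sum(ratios) / len(ratios)

def set_cnci(documents):
    """CNCI of a document set (e.g. a country's output in a category):
    the mean of the individual document CNCI values."""
    values = [document_cnci(cites, expected) for cites, expected in documents]
    return sum(values) / len(values)

# Illustrative (invented) values: each tuple is (citations received,
# expected rates for the subject categories the document belongs to).
papers = [(10, [5.0]),        # CNCI = 10/5 = 2.0
          (6, [4.0, 2.0])]    # CNCI = mean(6/4, 6/2) = 2.25
print(set_cnci(papers))       # (2.0 + 2.25) / 2 = 2.125
```

A CNCI of 1.0 means a document set is cited exactly at the world average for its category, year, and document type.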

Network analysis
Pajek software was used to create the social network (Batagelj 2008). It shows collaboration between countries through the number of co-publications, as well as the documents published by each country alone (loops). The Kamada-Kawai layout algorithm (Kamada and Kawai 1988) was applied, and the Louvain clustering algorithm (Blondel et al. 2008) was used for vector size. The network and the corresponding vector file generated in Pajek were then opened in VOSviewer (van Eck and Waltman 2010) (Table 2).
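The co-publication counting that underlies such a network (before the layout and clustering steps performed in Pajek) can be sketched as follows; the paper list is invented for illustration:

```python
from collections import Counter
from itertools import combinations

def copublication_network(papers):
    """Count co-publication links from a list of papers, where each
    paper is represented by the set of countries in its affiliations.
    A single-country paper is recorded as a loop (country, country)."""
    edges = Counter()
    for countries in papers:
        if len(countries) == 1:
            (c,) = countries
            edges[(c, c)] += 1  # loop: purely national output
        else:
            # one undirected edge per country pair appearing on the paper
            for a, b in combinations(sorted(countries), 2):
                edges[(a, b)] += 1
    return edges

# Invented example: three papers and their author countries
papers = [{"USA"}, {"England", "USA"}, {"Spain", "USA", "England"}]
net = copublication_network(papers)
print(net[("England", "USA")])  # 2 co-publications
print(net[("USA", "USA")])      # 1 purely national paper
```

Edge weights produced this way correspond to the line thicknesses in the network figure, and loops to each country's non-collaborative output.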
All the data obtained for this work are available in open access in Zenodo. 4

Results
In "Communication Studies" research, the publication activity of the 25 most active countries increased over the 10 years studied. We have calculated and ranked the IFs of the countries studied for each year from 2011 to 2020. These data show slight changes in rank order (Fig. 3), with Switzerland, the Netherlands and Austria recording the highest IFs in 2019 and 2020, relegating the most productive countries to lower positions. Switzerland has improved since 2011, while Austria and the Netherlands have maintained the second and third positions, respectively, that they held in 2011. Singapore has climbed from 12th in 2012 to 4th in 2020. In 2020, only three countries maintained their positions of 2011: the Netherlands (3rd) at the top of the table, Denmark (10th) in the middle, and New Zealand (22nd) at the bottom. The countries with the highest levels of publication activity, the USA, England and Australia, ranked 11th, 7th and 15th, respectively, by 2020 IF.
The 5-year IF for 2019 shows a strong correlation with the 2-year IF for 2019 (R² = 0.886), as Fig. 4 depicts.
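The correlations reported in this study are plain Pearson correlations between indicator values across countries; a minimal sketch (with invented IF values, not the study's data) is:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Invented 2-year and 5-year IF values for five hypothetical countries:
if2 = [1.2, 0.8, 2.1, 1.5, 0.9]
if5 = [1.6, 1.0, 2.4, 1.9, 1.1]
r = pearson_r(if2, if5)
print(round(r, 3), round(r * r, 3))  # r and the R^2 reported in such plots
```

Squaring the coefficient gives the R² value quoted for the regression in Fig. 4.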
We have also analyzed collaboration between countries, identifying how much of each country's publication output in this research field has been entirely the work of national institutions and how much has been the result of international collaboration (Fig. 5). At least 30% of the scientific production of most countries results from international collaboration, the exceptions being Taiwan (29.54%), Israel (26.89%), Australia (24.8%), Spain (21.93%) and the USA (16.86%). In contrast, in a few countries international collaboration outweighs production involving no collaboration or only national collaboration: Austria (54.13%), Switzerland (53.85%), France (52.33%), South Korea (51.32%) and Singapore (51.23%). Overall, when measured in terms of citations, international collaboration makes a greater impact than the country's mean. Between 2011 and 2020, in 87.6% of the 250 cases analyzed (25 countries × 10 years), the calculated impact of collaborative papers is higher than that of non-collaborative papers. The most notable exception is Switzerland, whose IF fell in 2015, 2018, 2019 and 2020 due to the low mean number of citations obtained by articles produced in international collaboration; even so, it must be borne in mind that Switzerland has a high IF overall.
To better understand the impact of international collaboration, we have identified which countries collaborated with which during the period 2011-2020. To do so, we generated a social network that identifies 9 distinct (color-coded) groups (Fig. 6). Of these, the largest is the red cluster, which is made up of 40 countries and led by the USA, the most important node in both size and connections, followed by England. Other groups also appear: the Nordic and central European countries (green), and a group of Ibero-American countries led by Spain (yellow). Line thickness indicates the strength of collaboration, and in almost all cases collaboration with the USA is the strongest. Highly independent countries with substantial levels of production generate their own groups, as is the case of Taiwan, Israel, Belgium, France and South Africa. In order to validate the results obtained, correlations were computed between the country-based IF, the CNCI, and the percentage of publications in the top 10%. The IF shows a strong correlation with the InCites CNCI (0.834), especially when ESCI data are included (Table 3, Fig. 7). The correlation with the percentage of publications in the top 10% is lower but quite similar when ESCI data are included. In contrast, the CNCI and the percentage of publications in the top 10% correlate strongly (> 0.8).

Fig. 5 Country-based ranking of the evolution of the IF calculated by country over 10 years (2011 to 2020) and international collaboration. Note: Yellow circles indicate the percentage of national publications; green or red circles indicate the percentage of international collaboration, red marking those countries whose collaborative IF is lower than their non-collaborative IF. Green lines denote countries that rose in the ranking between 2011 and 2020; yellow lines denote those that fell; gray lines denote those that remained unchanged.
Countries vary little when we analyze those cases (countries and periods) for which the difference between the IF and the CNCI is greatest, regardless of whether ESCI data are included. Differences appear between countries rather than between time periods. Spain differs most when the IF is compared with the normalized impact and with the percentage of publications in the top 10%, with and without ESCI data. This suggests that Spain's impact in "Communication Studies" depends significantly on publications in ESCI-indexed journals. To a lesser extent, something similar happens with the Netherlands.

Discussion
Our results provide an accurate picture of the development of research in Communication by nation. This study analyzes the scientific production in the area of Communication of each country, both in total numbers and in terms of scientific impact. This work has multiple applications, ranging from the in-depth analysis of successful and leading roles in academic curricula to strategic decision-making and portfolio selection at the collaborative level. Furthermore, it proposes another valuable and interesting application of the Impact Factor indicator.
The fundamental issue of debate in this paper is whether or not the introduced country-based IF satisfactorily reflects the scientific impact of countries. For this purpose, other indicators have been used, such as the h-index, which was originally designed to describe individual researchers (Csajbók et al. 2007; Jacsó 2009). For example, the SCImago Journal & Country Rank calculates the h-index of countries, as well as the mean number of citations, as Csajbók et al. (2007) pointed out, highlighting the leadership of the USA. However, when using the IF, country size is normalized, providing a more accurate perspective; this can be observed in our results, in which the USA ranks first in terms of raw production but eleventh in terms of impact. When describing countries with the h-index, size is not relativized, which generates very unequal values. The Impact Factor has become so deeply embedded in the academic consciousness that, although its weaknesses have been studied in depth and dozens of alternatives have appeared, it remains the preferred reference point in scientific evaluation, the benchmark and, for many applications, the only indicator many users know and pay attention to.
Our results resemble those of King (2004), who analyzed citation production and the number of studies in the 1st percentile in all fields of study between 1993 and 2001. The dominant role of the English-speaking countries, the leadership of the USA, and the good performance figures of central and northern European countries remain unchanged more than a decade later. In our sample, the USA contributes 41.84% of production during the study period, a leading position that can be explained by the over-representation of English-speaking countries' publications indexed in the WoS Core Collection. Archambault et al. (2006) warn that any country-based comparison is impossible because English-speaking countries, like the USA, England, Australia and Canada, are over-represented in the SSCI, while non-English-speaking countries such as Germany, Spain and France are adversely affected. In our study, the USA together with England and Australia contributes 56.18% of the total number of records analyzed between 2011 and 2020.
This bias could affect publication counts and citation analysis. However, the WoS remains one of the most important, comprehensive and widely used databases worldwide. Furthermore, it is the foundation on which the JCR, and consequently the IF calculation, is built.
To limit this bias, we need to include databases like the ESCI, which record data from local publications in all fields of study. Doing so led to an increase in the number of journals from peripheral regions such as Latin America. Including their citations when calculating the JCR IF increased the impact of Spanish journals and placed one, Comunicar, in the first quartile of the Communication Studies category. Similarly, country-based IF scores for Spain vary most when comparing the normalized impact and the percentage of publications in the top 10%, with and without ESCI data, which demonstrates the extent to which the impact of Spanish research depends on ESCI-indexed journals. Furthermore, research in "Communication Studies" has developed unevenly, and countries like Spain joined the field relatively late. In this sense, the results of this study are in very good agreement with those published recently by De Filippo and . However, the Asian regions analyzed, South Korea, Japan, China and Taiwan, do not improve their results with the ESCI. In agreement with the results of Huang et al. (2017), this is mainly because only a few Asian journals have been added to the Emerging Sources Citation Index. In fact, only South Korea has one journal added to the category, entitled "Science Editing"; the largest country, China, indexes its journals in the Chinese Citation Index, because inclusion in the ESCI requires publication using the Latin alphabet.
Our data show a strong correlation between the country-based Impact Factor and the InCites CNCI (0.834), which is stronger when the ESCI data are included. The correlation with the percentage of articles in the top 10% of publications is lower (0.686), but quite similar when ESCI data are included (0.704). The top 10% indicator is normally used to assess the level of excellence.
In their previous analysis of "Communication Studies" journals indexed by the WoS, De Filippo (2013) identified the countries that publish the highest number of journals as the most productive. However, as our results show, the most productive country does not make the most impact. Although English-speaking countries dominate the field in our data set, higher numbers of publications do not equate with being considered "better", as Trabadela-Robles et al. (2020) also noted. In this regard, Switzerland, Austria and the Netherlands rank first, second and third, respectively, in the ranking generated on the basis of the calculated impact. Thus, a country-based IF classification provides us with a more comprehensive and more accurate view of the current state of research in "Communication Studies".
Aside from the Scimago Journal & Country Rank, no other classification evaluates the country-based impact of research by subject area. Only rankings of institutions by field of study can bring us closer to the reality of countries. A comparison of our results with the number of universities per country in the Shanghai Ranking shows strong similarity in terms of size. In our study, 56.18% of papers are published by the USA, England and Australia; in the Shanghai Specialties Ranking (Communication studies), 60% of the universities belong to these countries, but this does not reflect their impact. Again, we witness the dominance of the English-speaking countries in university rankings, although this may be expected given that one of the main indicators used to generate these classifications is WoS publications.
Across the academic world, international collaboration is seen as a means of obtaining greater impact. Earlier studies reported an increase in production worldwide and greater international collaboration, as well as a rise in the number of countries with which any given country collaborates (Arunachalam and Doss, 2000). These authors described the status of the USA as collaborator-of-choice across all fields, and Trabadela-Robles et al. (2020) described this specifically with reference to Communication studies; both findings are supported by the present study. Our data also show that the USA is the preferred collaborator for the other countries, although it is the country with the lowest percentage of articles in collaboration, at 16.86%. Gingras and Khelfaoui (2018) found that the very presence of the USA in the WoS means collaborating countries benefit from citation. The data shown in this paper highlight that collaboration contributes to improving a country's IF in "Communication Studies": between 2011 and 2020, in 87.6% of the cases studied, the calculated impact of collaborative papers is higher than that of non-collaborative papers. In a bibliometric study of research in astronomy in the Netherlands, Van Raan (1998) considers it reasonable that international collaboration should lead to an increase in impact beyond that resulting from self-citation, as internationalization expands readership. Sud and Thelwall (2016) do not identify international collaboration as necessarily advantageous, but they do stress that collaboration with certain countries, the USA among others, increases impact.
Our results allow us to observe the evolution of the IF calculated over 10 years, which is a novel feature in the field, since the IF had never before been applied to countries and, therefore, its evolution had never been studied on the basis of this indicator. The implementation of this tool allows us to study the factors that modify impact; to understand them, future investigations are needed to analyze in depth the different factors that may modify the impact of the countries.
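The country-based IF described here transposes Garfield's journal formula to the country level: citations received in year Y by a country's papers published in the two preceding years, divided by the number of those papers. A minimal sketch, with hypothetical counts and field names:

```python
# Sketch of a country-based Impact Factor following Garfield's formula:
# citations in year Y to the country's papers from Y-1 and Y-2, divided
# by the number of those papers. All data below are hypothetical.

def country_if(citations_in_year, papers, year):
    """citations_in_year[y1][y2]: citations made in year y1 to papers from y2;
    papers[y]: number of the country's papers published in year y."""
    cites = citations_in_year[year][year - 1] + citations_in_year[year][year - 2]
    pubs = papers[year - 1] + papers[year - 2]
    return cites / pubs

# Hypothetical example for one country:
papers = {2018: 120, 2019: 150}
citations = {2020: {2019: 180, 2018: 160}}
print(round(country_if(citations, papers, 2020), 3))
```

Computing this value for every country and every year of the window yields the ten-year evolution discussed above.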

Conclusions
In response to RQ1, we have found that in "Communication Studies" the publication activity of the 25 countries studied increased over the ten years analyzed. The average growth rate is 13.93 ± 5.56%, with Austria (34 ± 47.82%), Scotland (20.01 ± 40.48%) and China (19.37 ± 17.16%) making up the top three. At the other end, Taiwan (4.48 ± 10.58%), Spain (7.07 ± 15.22%) and the United States (6.26 ± 8.92%) rank as the countries with the lowest growth rates. In addition, we also observed that 61.45% of the analyzed records are the product of English-speaking countries; the USA alone accounts for 41.84%; and the USA, England and Australia together, the three countries making the largest contributions, account for 56.18%. The least productive countries are Scotland, France and Japan.
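The growth rates above are reported as a mean ± standard deviation of year-over-year percentage changes in publication counts. A minimal sketch of that calculation, with hypothetical counts:

```python
# Sketch: average annual growth rate (mean ± standard deviation) computed
# from a country's yearly publication counts. Counts are hypothetical.
from statistics import mean, stdev

counts = [40, 44, 50, 48, 60]  # publications per year, hypothetical
growth = [(b - a) / a * 100 for a, b in zip(counts, counts[1:])]
print(f"{mean(growth):.2f} ± {stdev(growth):.2f}%")
```

With the study's WoS records, `counts` would hold each country's annual output for 2011-2020.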
In response to RQ2, in 2020 the countries with the highest Country-based IF and 5-year Country-based IF scores are Switzerland, Austria and the Netherlands. At the lower end, Japan, Taiwan and South Africa score lowest in 2020, and the lowest 5-year IF scores are those of Scotland, South Africa and Japan. Between 2011 and 2020, the most significant changes in IF ranking positions are those of Norway, rising from 19th to 6th in 2020; Belgium, climbing from 17th in 2011 to 8th; Singapore, rising from 12th to 4th; Spain, up from 25th to 12th; Australia, climbing from 24th to 15th; Italy, falling from 1st to 14th; and Finland, dropping from 5th to 20th.
In response to RQ3, the inclusion of the ESCI has most benefited Spain, followed by Norway and Switzerland. In contrast, the Asian countries-South Korea, Japan, China and Taiwan-have benefited the least. Singapore alone ranks 8th among those that have most benefited.
In response to RQ4, the Country-based IF shows a strong correlation with the InCites-calculated normalized impact, especially when ESCI data are included (0.834). Though weaker, the correlation with the Percentage of publications in the top 10% also approximates more closely when ESCI data are included. There is also a strong correlation (0.880) between the CNCI and the Percentage of publications in the top 10%. Spain, followed by the Netherlands, shows the greatest difference when the IF is compared with normalized impact and the Percentage of publications in the top 10%, both with and without ESCI data. This suggests that Spain's impact in "Communication Studies" depends significantly on publications in ESCI-indexed journals.
In response to RQ5, at least 30% of the scientific production of most countries involves international collaboration, and the favourite collaborating country is the USA. The impact of collaborative papers has improved over the years: in 2020, only two countries show a lower impact for their collaborative articles than for their non-collaborative papers. Switzerland is an exception, as its collaborative IF is lower in 7 of the 10 years studied.
Future research in "Communication Studies" should compare the results of this study with the positions of the corresponding countries in rankings such as the Shanghai Ranking of Universities, determine whether there are strong divergences and analyze the reasons, e.g., that, owing to the field's interdisciplinary nature, publications are often the work of researchers from other fields.