Eck, N.J.P. van (Nees Jan)
http://repub.eur.nl/ppl/633/
List of Publications
http://repub.eur.nl/
RePub, Erasmus University Repository
Methodological Advances in Bibliometric Mapping of Science
http://repub.eur.nl/pub/26509/
Thu, 13 Oct 2011 | Eck, N.J.P. van
Bibliometric mapping of science is concerned with quantitative methods for visually representing scientific literature based on bibliographic data. Since the first pioneering efforts in the 1970s, a large number of methods and techniques for bibliometric mapping have been proposed and tested. Although this has not resulted in a single generally accepted methodological standard, it did result in a limited set of commonly used methods and techniques.
In this thesis, a new methodology for bibliometric mapping is presented. It is argued that some well-known methods and techniques for bibliometric mapping have serious shortcomings. For instance, the mathematical justification of a number of commonly used normalization methods is criticized, and popular multidimensional-scaling-based approaches for constructing bibliometric maps are shown to suffer from artifacts, especially when working with larger data sets.
The methodology introduced in this thesis aims to provide improved methods and techniques for bibliometric mapping. The thesis contains an extensive mathematical analysis of normalization methods, indicating that the so-called association strength measure has the most satisfactory mathematical properties. The thesis also introduces the VOS technique for constructing bibliometric maps, where VOS stands for visualization of similarities. Compared with well-known multidimensional-scaling-based approaches, the VOS technique is shown to produce more satisfactory maps. In addition to the VOS mapping technique, the thesis also presents the VOS clustering technique. Together, these two techniques provide a unified framework for mapping and clustering. Finally, the VOSviewer software for constructing, displaying, and exploring bibliometric maps is introduced.
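The association strength measure mentioned above, and the set-theoretic measures it is compared with in the normalization papers listed below, can be illustrated with a short sketch. The formulas follow the definitions commonly given in the scientometrics literature and are an assumption of this sketch, not a quotation from the thesis; notation assumed here: c_ij is the co-occurrence count of items i and j, and s_i and s_j are their total occurrence counts.

```python
import math

def association_strength(c_ij, s_i, s_j):
    # Probabilistic measure: observed co-occurrences relative to the
    # number expected under independence (up to a constant factor).
    return c_ij / (s_i * s_j)

def cosine(c_ij, s_i, s_j):
    # Set-theoretic measure.
    return c_ij / math.sqrt(s_i * s_j)

def inclusion_index(c_ij, s_i, s_j):
    # Set-theoretic measure: co-occurrences relative to the rarer item.
    return c_ij / min(s_i, s_j)

def jaccard_index(c_ij, s_i, s_j):
    # Set-theoretic measure: co-occurrences relative to the union.
    return c_ij / (s_i + s_j - c_ij)

# Example: two terms occurring 40 and 10 times, co-occurring 5 times.
print(association_strength(5, 40, 10))  # 0.0125
print(cosine(5, 40, 10))                # 0.25
print(inclusion_index(5, 40, 10))       # 0.5
```

Note how the probabilistic measure scales with the product of the occurrence counts, while the set-theoretic measures scale with their square root, minimum, or union; this difference is what the theoretical analysis below turns on.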
An evolutionary model of price competition among spatially distributed firms
http://repub.eur.nl/pub/22805/
Thu, 24 Mar 2011 | Waltman, L.; Eck, N.J.P. van; Dekker, R.; Kaymak, U.
Various studies have shown the emergence of cooperative behavior in evolutionary models with spatially distributed agents. We investigate to what extent these findings generalize to evolutionary models of price competition among spatially distributed firms. We consider both one- and two-dimensional models, and we vary the amount of information firms have about competitors in their neighborhood. Our computer simulations show that the emergence of cooperative behavior depends strongly on the amount of information available to firms. Firms tend to behave most cooperatively if they have only a very limited amount of information about their competitors. We provide an intuitive explanation for this phenomenon. Our simulations further indicate that three other factors in our models, namely the accuracy of firms’ information, the probability of experimentation, and the spatial distribution of consumers, have little effect on the emergence of cooperative behavior.
A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS
http://repub.eur.nl/pub/21979/
Wed, 01 Dec 2010 | Eck, N.J.P. van; Waltman, L.; Dekker, R.; Berg, J. van den
VOS is a new mapping technique that can serve as an alternative to the well-known technique of multidimensional scaling (MDS). We present an extensive comparison between the use of MDS and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. In our empirical analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on MDS, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a dataset than maps constructed using well-known MDS approaches.
A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS
http://repub.eur.nl/pub/21747/
Fri, 01 Oct 2010 | Eck, N.J.P. van; Waltman, L.; Dekker, R.; Berg, J. van den
VOS is a new mapping technique that can serve as an alternative to the well-known technique of multidimensional scaling (MDS). We present an extensive comparison between the use of MDS and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. In our empirical analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on MDS, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a dataset than maps constructed using well-known MDS approaches.
Automatic term identification for bibliometric mapping
http://repub.eur.nl/pub/19551/
Mon, 01 Mar 2010 | Eck, N.J.P. van; Waltman, L.; Noyons, E.C.M.; Buter, R.K.
A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. It turns out that in general the proposed methodology performs quite well.
Automatic term identification for bibliometric mapping
http://repub.eur.nl/pub/19808/
Thu, 11 Feb 2010 | Eck, N.J.P. van; Waltman, L.; Noyons, E.C.M.; Buter, R.K.
A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. It turns out that in general the proposed methodology performs quite well.
Software survey: VOSviewer, a computer program for bibliometric mapping
http://repub.eur.nl/pub/20358/
Fri, 01 Jan 2010 | Eck, N.J.P. van; Waltman, L.
We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer's functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer's ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.
Some comments on Egghe's derivation of the impact factor distribution
http://repub.eur.nl/pub/16886/
Thu, 01 Oct 2009 | Waltman, L.; Eck, N.J.P. van
In a recent paper, Egghe [Egghe, L. (in press). Mathematical derivation of the impact factor distribution. Journal of Informetrics] presents a mathematical analysis of the rank-order distribution of journal impact factors. The analysis is based on the central limit theorem. We criticize the empirical relevance of Egghe's analysis. More specifically, we argue that Egghe's analysis relies on an unrealistic assumption and we show that the analysis is not in agreement with empirical data.
On the proper understanding of the limiting behavior of generalizations of the h- and g-indices
http://repub.eur.nl/pub/16890/
Thu, 01 Oct 2009 | Eck, N.J.P. van; Waltman, L.
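The record above concerns generalizations of the h- and g-indices. For reference, a minimal sketch of the ordinary h-index, which the related abstracts in this list take as their starting point (an illustration of the standard definition, not code from the papers):

```python
def h_index(citations):
    """Largest h such that at least h publications have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five publications with 10, 8, 5, 4, and 3 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4: four publications have at least 4 citations each
```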
How to normalize cooccurrence data? An analysis of some well-known similarity measures
http://repub.eur.nl/pub/18647/
Sat, 01 Aug 2009 | Eck, N.J.P. van; Waltman, L.
In scientometric research, the use of cooccurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this article, we theoretically analyze the properties of similarity measures for cooccurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely, set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that cooccurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.
A simple alternative to the h-index
http://repub.eur.nl/pub/16556/
Wed, 22 Jul 2009 | Waltman, L.; Eck, N.J.P. van
The h-index is a popular bibliometric performance indicator. We discuss a fundamental problem of the h-index, which we refer to as the problem of inconsistency. There turns out to be a very simple bibliometric indicator that has properties similar to those of the h-index and that does not suffer from the inconsistency problem. We argue that the use of this indicator is preferable to the use of the h-index.
Economic Modeling Using Evolutionary Algorithms: The Effect of a Binary Encoding of Strategies
http://repub.eur.nl/pub/16014/
Wed, 20 May 2009 | Waltman, L.; Eck, N.J.P. van; Dekker, R.; Kaymak, U.
We are concerned with evolutionary algorithms that are employed for economic modeling purposes. We focus in particular on evolutionary algorithms that use a binary encoding of strategies. These algorithms, commonly referred to as genetic algorithms, are popular in agent-based computational economics research. In many studies, however, there is no clear reason for the use of a binary encoding of strategies. We therefore examine to what extent the use of such an encoding may influence the results produced by an evolutionary algorithm. It turns out that the use of a binary encoding can have quite significant effects. Since these effects do not have a meaningful economic interpretation, they should be regarded as artifacts. Our findings indicate that in general the use of a binary encoding is undesirable. They also highlight the importance of employing evolutionary algorithms with a sensible economic interpretation.
Some Comments on Egghe’s Derivation of the Impact Factor Distribution
http://repub.eur.nl/pub/15184/
Wed, 18 Mar 2009 | Waltman, L.; Eck, N.J.P. van
In a recent paper, Egghe [Egghe, L. (in press). Mathematical derivation of the impact factor distribution. Journal of Informetrics] provides a mathematical analysis of the rank-order distribution of journal impact factors. We point out that Egghe’s analysis relies on an unrealistic assumption, and we show that his analysis is not in agreement with empirical data.
A Taxonomy of Bibliometric Performance Indicators Based on the Property of Consistency
http://repub.eur.nl/pub/15182/
Thu, 12 Mar 2009 | Waltman, L.; Eck, N.J.P. van
We propose a taxonomy of bibliometric indicators of scientific performance. The taxonomy relies on the property of consistency. The h-index is shown not to have this important property.
A Mathematical Analysis of the Long-run Behavior of Genetic Algorithms for Social Modeling
http://repub.eur.nl/pub/15181/
Mon, 09 Mar 2009 | Waltman, L.; Eck, N.J.P. van
We present a mathematical analysis of the long-run behavior of genetic algorithms that are used for modeling social phenomena. The analysis relies on commonly used mathematical techniques in evolutionary game theory. Assuming a positive but infinitely small mutation rate, we derive results that can be used to calculate the exact long-run behavior of a genetic algorithm.
Using these results, the need to rely on computer simulations can be avoided. We also show that if the mutation rate is infinitely small the crossover rate has no effect on the long-run behavior of a genetic algorithm. To demonstrate the usefulness of our mathematical analysis, we replicate a well-known study by Axelrod in which a genetic algorithm is used to model the evolution of strategies in iterated prisoner’s dilemmas. The theoretically predicted long-run behavior of the genetic algorithm turns out to be in perfect agreement with the long-run behavior observed in computer simulations. Also, in line with our theoretically informed expectations, computer simulations indicate that the crossover rate has virtually no long-run effect. Some general new insights into the behavior of genetic algorithms in the prisoner’s dilemma context are provided as well.
VOSviewer: A Computer Program for Bibliometric Mapping
http://repub.eur.nl/pub/14841/
Wed, 11 Feb 2009 | Eck, N.J.P. van; Waltman, L.
We present VOSviewer, a computer program that we have developed for constructing and viewing bibliometric maps. VOSviewer combines the VOS mapping technique and an advanced viewer into a single easy-to-use computer program that is freely available to the bibliometric research community. Our aim in this paper is to provide an overview of the functionality of VOSviewer and to elaborate on the technical implementation of specific parts of the program.
Robust evolutionary algorithm design for socio-economic simulation: some comments
http://repub.eur.nl/pub/18660/
Sun, 01 Feb 2009 | Waltman, L.; Eck, N.J.P. van
How to Normalize Co-Occurrence Data? An Analysis of Some Well-Known Similarity Measures
http://repub.eur.nl/pub/14528/
Wed, 07 Jan 2009 | Eck, N.J.P. van; Waltman, L.
In scientometric research, the use of co-occurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this paper, we theoretically analyze the properties of similarity measures for co-occurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that co-occurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.
Automatic Term Identification for Bibliometric Mapping
http://repub.eur.nl/pub/14056/
Wed, 03 Dec 2008 | Eck, N.J.P. van; Waltman, L.; Noyons, E.C.M.; Buter, R.K.
A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. It turns out that in general the proposed methodology performs quite well.Generalizing the h- and g-indices
http://repub.eur.nl/pub/14298/
Wed, 01 Oct 2008 | Eck, N.J.P. van; Waltman, L.
We introduce two new measures of the performance of a scientist. One measure, referred to as the hα-index, generalizes the well-known h-index or Hirsch index. The other measure, referred to as the gα-index, generalizes the closely related g-index. We analyze theoretically the relationship between the hα- and gα-indices on the one hand and some simple measures of scientific performance on the other hand. We also study the behavior of the hα- and gα-indices empirically. Some advantages of the hα- and gα-indices over the h- and g-indices are pointed out.
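The abstract above does not spell out the definitions of the hα- and gα-indices. Assuming the hα-index takes the natural form "the largest h such that h publications each have at least α·h citations" (an assumption made for illustration only; the paper is the authoritative source), it can be sketched as follows, with α = 1 recovering the ordinary h-index:

```python
def h_alpha(citations, alpha=1.0):
    # Assumed generalization: largest h such that h publications each
    # have at least alpha * h citations. With alpha = 1 this reduces
    # to the ordinary h-index.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= alpha * rank:
            h = rank
    return h

# Example: five publications with 10, 8, 5, 4, and 3 citations.
print(h_alpha([10, 8, 5, 4, 3], alpha=1))  # 4 (the ordinary h-index)
print(h_alpha([10, 8, 5, 4, 3], alpha=2))  # 2
```

Under this assumed form, increasing α tightens the citation threshold per publication, so hα decreases in α, which matches the abstract's framing of the hα-index as a parameterized family containing the h-index.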