L. Waltman (Ludo)
http://repub.eur.nl/ppl/632/
List of Publications
http://repub.eur.nl/
RePub, Erasmus University Repository

Computational and Game-Theoretic Approaches for Modeling Bounded Rationality
http://repub.eur.nl/pub/26564/
Thu, 13 Oct 2011 00:00:01 GMT
L. Waltman
This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic agents are assumed to be boundedly rational. Abandoning the assumption of full rationality has a number of consequences for the way in which economic reality is modeled. Traditionally, economic models are mostly of a static nature, have a strong focus on deriving equilibria, and are usually analyzed mathematically. In models of boundedly rational behavior, dynamic elements play a much more prominent role and there is less emphasis on equilibrium behavior. Moreover, to analyze models of boundedly rational behavior, researchers not only use mathematical techniques but also rely heavily on computer simulations.
This thesis presents four studies into the modeling of boundedly rational behavior of economic agents. Two studies are concerned with investigating the emergence of cooperation among boundedly rational agents. One study focuses on cooperation among firms in a Cournot oligopoly market, while the other study examines cooperation in a spatial model of price-competing firms. The other two studies in this thesis are concerned with methodological issues in the use of evolutionary algorithms for economic modeling purposes. One study shows how evolutionary algorithms can be analyzed mathematically rather than using computer simulations. The other study criticizes the use of a so-called binary encoding in evolutionary algorithms.
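The dynamic, simulation-based flavor of such models can be illustrated with a minimal sketch. This is not one of the thesis's actual models, and all parameter values are hypothetical: boundedly rational firms in a Cournot market imitate the quantity of the most profitable firm and occasionally experiment with a random quantity.

```python
import random

def simulate_cournot(n_firms=4, a=100.0, b=1.0, c=10.0,
                     rounds=2000, experiment_rate=0.05, seed=42):
    """Imitate-the-best Cournot dynamics with occasional experimentation."""
    rng = random.Random(seed)
    q = [rng.uniform(0.0, a / b) for _ in range(n_firms)]  # initial outputs
    for _ in range(rounds):
        price = max(a - b * sum(q), 0.0)          # linear inverse demand
        profit = [(price - c) * qi for qi in q]   # constant marginal cost c
        best = q[profit.index(max(profit))]
        # every firm copies the most profitable quantity, but with a small
        # probability it experiments with a random quantity instead
        q = [rng.uniform(0.0, a / b) if rng.random() < experiment_rate
             else best for _ in range(n_firms)]
    return sum(q) / n_firms  # average output per firm in the final round

# standard static benchmark for comparison (textbook formula):
# Cournot-Nash output per firm is (a - c) / (b * (n_firms + 1))
avg_q = simulate_cournot()
```

There is no equilibrium derivation here: the outcome is read off from the simulated dynamics, which is exactly the methodological shift the thesis describes.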
An evolutionary model of price competition among spatially distributed firms
http://repub.eur.nl/pub/22805/
Thu, 24 Mar 2011 00:00:01 GMT
L. Waltman, N.J.P. van Eck, R. Dekker, U. Kaymak
Various studies have shown the emergence of cooperative behavior in evolutionary models with spatially distributed agents. We investigate to what extent these findings generalize to evolutionary models of price competition among spatially distributed firms. We consider both one- and two-dimensional models, and we vary the amount of information firms have about competitors in their neighborhood. Our computer simulations show that the emergence of cooperative behavior depends strongly on the amount of information available to firms. Firms tend to behave most cooperatively if they have only a very limited amount of information about their competitors. We provide an intuitive explanation for this phenomenon. Our simulations further indicate that three other factors in our models, namely the accuracy of firms’ information, the probability of experimentation, and the spatial distribution of consumers, have little effect on the emergence of cooperative behavior.

A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS
http://repub.eur.nl/pub/21979/
Wed, 01 Dec 2010 00:00:01 GMT
N.J.P. van Eck, L. Waltman, R. Dekker, J. van den Berg
VOS is a new mapping technique that can serve as an alternative to the well-known technique of multidimensional scaling (MDS). We present an extensive comparison between the use of MDS and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. In our empirical analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on MDS, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a dataset than maps constructed using well-known MDS approaches.

A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS
http://repub.eur.nl/pub/21747/
Fri, 01 Oct 2010 00:00:01 GMT
N.J.P. van Eck, L. Waltman, R. Dekker, J. van den Berg
VOS is a new mapping technique that can serve as an alternative to the well-known technique of multidimensional scaling (MDS). We present an extensive comparison between the use of MDS and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. In our empirical analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on MDS, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a dataset than maps constructed using well-known MDS approaches.

Automatic term identification for bibliometric mapping
http://repub.eur.nl/pub/19551/
Mon, 01 Mar 2010 00:00:01 GMT
N.J.P. van Eck, L. Waltman, E.C.M. Noyons, R.K. Buter
A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. It turns out that in general the proposed methodology performs quite well.

Automatic term identification for bibliometric mapping
http://repub.eur.nl/pub/19808/
Thu, 11 Feb 2010 00:00:01 GMT
N.J.P. van Eck, L. Waltman, E.C.M. Noyons, R.K. Buter
A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. It turns out that in general the proposed methodology performs quite well.

Software survey: VOSviewer, a computer program for bibliometric mapping
http://repub.eur.nl/pub/20358/
Fri, 01 Jan 2010 00:00:01 GMT
N.J.P. van Eck, L. Waltman
We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer's functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer's ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.

Some comments on Egghe's derivation of the impact factor distribution
http://repub.eur.nl/pub/16886/
Thu, 01 Oct 2009 00:00:01 GMT
L. Waltman, N.J.P. van Eck
In a recent paper, Egghe [Egghe, L. (in press). Mathematical derivation of the impact factor distribution. Journal of Informetrics] presents a mathematical analysis of the rank-order distribution of journal impact factors. The analysis is based on the central limit theorem. We criticize the empirical relevance of Egghe's analysis. More specifically, we argue that Egghe's analysis relies on an unrealistic assumption and we show that the analysis is not in agreement with empirical data.

On the proper understanding of the limiting behavior of generalizations of the h- and g-indices
http://repub.eur.nl/pub/16890/
Thu, 01 Oct 2009 00:00:01 GMT
N.J.P. van Eck, L. Waltman
How to normalize cooccurrence data? An analysis of some well-known similarity measures
http://repub.eur.nl/pub/18647/
Sat, 01 Aug 2009 00:00:01 GMT
N.J.P. van Eck, L. Waltman
In scientometric research, the use of cooccurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this article, we theoretically analyze the properties of similarity measures for cooccurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely, set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that cooccurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.

A simple alternative to the h-index
http://repub.eur.nl/pub/16556/
Wed, 22 Jul 2009 00:00:01 GMT
L. Waltman, N.J.P. van Eck
The h-index is a popular bibliometric performance indicator. We discuss a fundamental problem of the h-index, which we refer to as the problem of inconsistency. There turns out to be a very simple bibliometric indicator that has properties similar to those of the h-index and that does not suffer from the inconsistency problem. We argue that the use of this indicator is preferable to the use of the h-index.

Economic Modeling Using Evolutionary Algorithms: The Effect of a Binary Encoding of Strategies
http://repub.eur.nl/pub/16014/
Wed, 20 May 2009 00:00:01 GMT
L. Waltman, N.J.P. van Eck, R. Dekker, U. Kaymak
We are concerned with evolutionary algorithms that are employed for economic modeling purposes. We focus in particular on evolutionary algorithms that use a binary encoding of strategies. These algorithms, commonly referred to as genetic algorithms, are popular in agent-based computational economics research. In many studies, however, there is no clear reason for the use of a binary encoding of strategies. We therefore examine to what extent the use of such an encoding may influence the results produced by an evolutionary algorithm. It turns out that the use of a binary encoding can have quite significant effects. Since these effects do not have a meaningful economic interpretation, they should be regarded as artifacts. Our findings indicate that in general the use of a binary encoding is undesirable. They also highlight the importance of employing evolutionary algorithms with a sensible economic interpretation.

Some Comments on Egghe’s Derivation of the Impact Factor Distribution
http://repub.eur.nl/pub/15184/
Wed, 18 Mar 2009 00:00:01 GMT
L. Waltman, N.J.P. van Eck
In a recent paper, Egghe [Egghe, L. (in press). Mathematical derivation of the impact factor distribution. Journal of Informetrics] provides a mathematical analysis of the rank-order distribution of journal impact factors. We point out that Egghe’s analysis relies on an unrealistic assumption, and we show that his analysis is not in agreement with empirical data.

A Taxonomy of Bibliometric Performance Indicators Based on the Property of Consistency
http://repub.eur.nl/pub/15182/
Thu, 12 Mar 2009 00:00:01 GMT
L. Waltman, N.J.P. van Eck
We propose a taxonomy of bibliometric indicators of scientific performance. The taxonomy relies on the property of consistency. The h-index is shown not to have this important property.

A Mathematical Analysis of the Long-run Behavior of Genetic Algorithms for Social Modeling
http://repub.eur.nl/pub/15181/
Mon, 09 Mar 2009 00:00:01 GMT
L. Waltman, N.J.P. van Eck
We present a mathematical analysis of the long-run behavior of genetic algorithms that are used for modeling social phenomena. The analysis relies on commonly used mathematical techniques in evolutionary game theory. Assuming a positive but infinitely small mutation rate, we derive results that can be used to calculate the exact long-run behavior of a genetic algorithm. Using these results, the need to rely on computer simulations can be avoided. We also show that if the mutation rate is infinitely small the crossover rate has no effect on the long-run behavior of a genetic algorithm. To demonstrate the usefulness of our mathematical analysis, we replicate a well-known study by Axelrod in which a genetic algorithm is used to model the evolution of strategies in iterated prisoner’s dilemmas. The theoretically predicted long-run behavior of the genetic algorithm turns out to be in perfect agreement with the long-run behavior observed in computer simulations. Also, in line with our theoretically informed expectations, computer simulations indicate that the crossover rate has virtually no long-run effect. Some general new insights into the behavior of genetic algorithms in the prisoner’s dilemma context are provided as well.

VOSviewer: A Computer Program for Bibliometric Mapping
http://repub.eur.nl/pub/14841/
Wed, 11 Feb 2009 00:00:01 GMT
N.J.P. van Eck, L. Waltman
We present VOSviewer, a computer program that we have developed for constructing and viewing bibliometric maps. VOSviewer combines the VOS mapping technique and an advanced viewer into a single easy-to-use computer program that is freely available to the bibliometric research community. Our aim in this paper is to provide an overview of the functionality of VOSviewer and to elaborate on the technical implementation of specific parts of the program.

Robust evolutionary algorithm design for socio-economic simulation: some comments
http://repub.eur.nl/pub/18660/
Sun, 01 Feb 2009 00:00:01 GMT
L. Waltman, N.J.P. van Eck
How to Normalize Co-Occurrence Data? An Analysis of Some Well-Known Similarity Measures
http://repub.eur.nl/pub/14528/
Wed, 07 Jan 2009 00:00:01 GMT
N.J.P. van Eck, L. Waltman
In scientometric research, the use of co-occurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this paper, we theoretically analyze the properties of similarity measures for co-occurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that co-occurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.

Automatic Term Identification for Bibliometric Mapping
http://repub.eur.nl/pub/14056/
Wed, 03 Dec 2008 00:00:01 GMT
N.J.P. van Eck, L. Waltman, E.C.M. Noyons, R.K. Buter
A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. It turns out that in general the proposed methodology performs quite well.Q-learning agents in a Cournot oligopoly model
http://repub.eur.nl/pub/15935/
Wed, 01 Oct 2008 00:00:01 GMT
L. Waltman, U. Kaymak
Q-learning is a reinforcement learning model from the field of artificial intelligence. We study the use of Q-learning for modeling the learning behavior of firms in repeated Cournot oligopoly games. Based on computer simulations, we show that Q-learning firms generally learn to collude with each other, although full collusion usually does not emerge. We also present some analytical results. These results provide insight into the underlying mechanism that causes collusive behavior to emerge. Q-learning is one of the few learning models available that can explain the emergence of collusive behavior in settings in which there is no punishment mechanism and no possibility for explicit communication between firms.
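The kind of model described in this abstract can be sketched in a few lines. The sketch below is a stateless, bandit-style simplification with hypothetical parameter values, not the paper's exact formulation: two Q-learning firms repeatedly choose quantities from a discrete grid in a linear-demand Cournot duopoly.

```python
import random

def q_learning_duopoly(a=40.0, b=1.0, c=4.0, n_actions=10, gamma=0.9,
                       rounds=20000, alpha=0.1, epsilon=0.1, seed=0):
    """Two stateless Q-learning firms in a repeated Cournot duopoly."""
    rng = random.Random(seed)
    # quantity grid from 0 up to the Walrasian per-firm output (a - c) / (2b)
    grid = [i * (a - c) / (2 * b * (n_actions - 1)) for i in range(n_actions)]
    tables = [[0.0] * n_actions for _ in range(2)]  # one Q-table per firm
    for _ in range(rounds):
        acts = []
        for qt in tables:
            if rng.random() < epsilon:                        # explore
                acts.append(rng.randrange(n_actions))
            else:                                             # exploit
                acts.append(max(range(n_actions), key=qt.__getitem__))
        price = max(a - b * (grid[acts[0]] + grid[acts[1]]), 0.0)
        for qt, act in zip(tables, acts):
            reward = (price - c) * grid[act]                  # one-period profit
            # single-state Q-learning update rule
            qt[act] += alpha * (reward + gamma * max(qt) - qt[act])
    # each firm's greedy quantity after learning, averaged over the two firms
    return sum(grid[max(range(n_actions), key=qt.__getitem__)]
               for qt in tables) / 2

q_learned = q_learning_duopoly()
```

Whether the learned quantities settle below the Cournot-Nash level, i.e., whether collusion emerges, depends on the learning parameters; the paper's analysis of that mechanism uses a richer formulation than this sketch.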