A. de Bruin (Arie)
http://repub.eur.nl/ppl/40/
List of Publications
http://repub.eur.nl/
RePub, Erasmus University Repository

A Modular Agent-Based Environment for Studying Stock Markets
http://repub.eur.nl/pub/1929/
Sun, 03 Apr 2005
K. Boer-Sorban, U. Kaymak, A. de Bruin
Artificial stock markets are built with diffuse priors in mind regarding trading strategies
and price formation mechanisms. Diffuse priors are a natural consequence of the
unknown relation between the various elements that drive market dynamics and the large
variety of market organizations. Findings, however, might hold only within the specific market
settings. In this paper we propose a framework for building agent-based artificial stock
markets. We present the mechanism of the framework based on a previously identified list
of organizational and behavioural aspects. Within the framework, experiments with arbitrarily
many trading strategies, acting in various market organizations, can be conducted in a
flexible way, without changing the framework's architecture. In this way, experiments of other artificial
stock markets, as well as theoretical models can be replicated and their findings compared.
Comparisons of the different experimental results might indicate whether findings are due
to traders' behaviour or to the chosen market structure and could suggest how to improve
market quality.

On the Design of Artificial Stock Markets
http://repub.eur.nl/pub/1900/
Fri, 18 Feb 2005
K. Boer-Sorban, A. de Bruin, U. Kaymak
Artificial stock markets are designed with the aim to study and understand market dynamics
by representing (part of) real stock markets. Since there is a large variety of real
stock markets with several partially observable elements and hidden processes, artificial
markets differ regarding their structure and implementation. In this paper we analyze to
what degree current artificial stock markets reflect the workings of real stock markets. In
order to conduct this analysis we set up a list of factors which influence market dynamics
and are as a consequence important to consider for designing market models. We differentiate
two categories of factors: general, well-defined aspects that characterize the organization
of a market and hidden aspects that characterize the functioning of the markets and the
behaviour of the traders.

Trends in game tree search
http://repub.eur.nl/pub/459/
Thu, 26 Jun 2003
A. de Bruin, W.H.L.M. Pijls
This paper deals with algorithms searching trees generated by two-person, zero-sum games with perfect information. The standard algorithm in this field is alpha-beta. We will discuss this algorithm as well as extensions, like transposition tables, iterative deepening and NegaScout. Special attention is devoted to domain knowledge pertaining to game trees, more specifically to solution trees. The above-mentioned algorithms implement depth-first search. The alternative is best-first search. The best-known algorithm in this area is Stockman's SSS*. We treat a variant equivalent to SSS* called SSS-2. These algorithms are provably better than alpha-beta, but it takes a lot of tweaking to show this in practice. A variant of SSS-2, cast in alpha-beta terms, will be discussed which does realize this potential. This algorithm is, however, still worse than NegaScout. On the other hand, applying an idea similar to the one behind NegaScout to this last SSS* variant yields the best (sequential) game tree searcher known up to now: MTD(f).

A structured design technique for distributed programs
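As a toy illustration of the alpha-beta procedure discussed in "Trends in game tree search" above (hypothetical code, not taken from the paper), the following sketch searches an explicit tree in which a leaf is an integer value and an interior node is a list of child subtrees:

```python
# Minimal alpha-beta sketch on an explicit game tree. Values are seen
# from the max player's point of view.

def alphabeta(node, alpha, beta, maximizing):
    """Minimax value of `node`, pruning subtrees that fall outside (alpha, beta)."""
    if isinstance(node, int):            # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cut-off
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:                # alpha cut-off
            break
    return value
```

For the tree `[[6, 9], [3, 5]]` (max at the root), calling with an infinite window returns 6; once the first subtree establishes alpha = 6, the second subtree is cut off as soon as the leaf 3 is seen, so the leaf 5 is never visited.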
http://repub.eur.nl/pub/464/
Thu, 26 Jun 2003
M. Polman, M.R. van Steen, A. de Bruin
This report contains a non-formal motivation and description of ADL-d, a graphical design technique for parallel and distributed software. ADL-d allows a developer to construct an application in terms of communicating processes. The technique distinguishes itself from others by its use of highly orthogonal concepts, and support for automated code generation. Without being committed to one particular design method, ADL-d as a technique can be used from the early phases of application design through phases that concentrate on algorithmic design, and final implementation on some target platform. In this report, we discuss and motivate all ADL-d components, including recently incorporated features such as support for connection-oriented communication, support for modeling dynamically changing communication structures, and a formal semantical basis for each ADL-d component. Also, we discuss our ADL-d implementation, and place ADL-d in context by discussing some related work.

Finding a feasible solution for a class of distributed problems with a single sum constraint using agents
http://repub.eur.nl/pub/14408/
Tue, 01 Apr 2003
A. de Bruin, G.A.P. Kindervater, T. Vredeveld, A.P.M. Wagelmans
In this paper, we describe a Multi-Agent System which is capable of finding a feasible solution of a class of distributed problems, in which the subproblems share a single sum constraint. Emphasis is given to correctness issues and termination detection.
An Introduction to Paradigm
http://repub.eur.nl/pub/161/
Thu, 31 Jan 2002
S.C. van der Made-Potuijt, A. de Bruin
By using Paradigm, it is possible to model cooperating processes and to make the communication between these processes very clear. This report gives a formal description of this modeling method, using state-transition diagrams to model processes, and homomorphisms and interleavings to model the cooperation and synchronization of the processes involved. Paradigm has been used successfully as the modeling language of Socca, a software process modeling method.

Game tree algorithms and solution trees
http://repub.eur.nl/pub/72029/
Wed, 01 Aug 2001
W.H.L.M. Pijls, A. de Bruin
In this paper a theory of game tree algorithms is presented, entirely based upon the concept of a solution tree. Two types of solution trees are distinguished: max and min trees. Every game tree algorithm tries to prune as many nodes as possible from the game tree. A cut-off criterion in terms of solution trees will be formulated, which can be used to eliminate nodes from the search without affecting the result. Further, we show that any algorithm actually constructs a superposition of a max and a min solution tree. Finally, we will see how solution trees and the related cut-off criterion are applied in major game tree algorithms like alpha-beta and MTD.

Finding a Feasible Solution for a Simple LP Problem using Agents
http://repub.eur.nl/pub/7722/
Wed, 26 May 1999
A. de Bruin, G.A.P. Kindervater, T. Vredeveld, A.P.M. Wagelmans
In this paper we will describe a Multi-Agent System which is capable of finding a feasible solution of a specially structured linear programming problem. Emphasis is given to correctness issues and termination detection.

Game tree algorithms and solution trees
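A hypothetical sketch of the feasibility question behind "Finding a Feasible Solution for a Simple LP Problem using Agents" above; the paper's actual agent protocol and its termination detection are not reproduced here. Suppose agent i owns a variable x_i with local bounds l_i <= x_i <= u_i and all agents share the single constraint sum(x) == B; a feasible point exists iff sum(l) <= B <= sum(u), and one can then be built by handing out the remaining slack agent by agent:

```python
def feasible_point(bounds, B):
    """Return x with l <= x <= u componentwise and sum(x) == B, else None."""
    low = sum(l for l, u in bounds)
    high = sum(u for l, u in bounds)
    if not (low <= B <= high):           # no assignment can meet the sum constraint
        return None
    x, slack = [], B - low               # start everyone at the lower bound
    for l, u in bounds:
        take = min(u - l, slack)         # give this agent as much slack as fits
        x.append(l + take)
        slack -= take
    return x
```

The interesting part of the paper is doing this without a central coordinator; the sketch only shows the arithmetic that any such protocol must realize.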
http://repub.eur.nl/pub/763/
Thu, 01 Jan 1998
W.H.L.M. Pijls, A. de Bruin
In this paper, a theory of game tree algorithms is presented, entirely based upon the concept of a solution tree. Two types of solution trees are distinguished: max and min trees. Every game tree algorithm tries to prune as many nodes as possible from the game tree. A cut-off criterion in terms of solution trees will be formulated, which can be used to eliminate nodes from the search without affecting the result. Further, we show that any algorithm actually constructs a superposition of a max and a min solution tree. Finally, we will see how solution trees and the related cut-off criterion are applied in major game tree algorithms like alpha-beta and MTD.

Kahn's fixed-point characterization for linear dynamic networks
http://repub.eur.nl/pub/520/
Wed, 01 Jan 1997
S-H. Nienhuys-Cheng, A. de Bruin
We consider dynamic Kahn-like dataflow networks defined by a simple language L containing the fork-statement. The first part of the Kahn principle states that such networks are deterministic on the I/O level: for each network, different executions provided with the same input deliver the same output. The second part of the principle states that the function from input streams to output streams (which is now defined because of the first part) can be obtained as a fixed point of a suitable operator derived from the network specification. The first part has been proven by us in an earlier publication. To prove the second part, we will use the metric framework. We introduce a nondeterministic transition system NT from which we derive an operational semantics On. We also define a deterministic transition system DT and prove that the operational semantics Od derived from DT is the same as On. Finally, we define a denotational semantics D and prove D = Od. This implies On = D.

A theory of game trees, based on solution trees
http://repub.eur.nl/pub/468/
Mon, 01 Jan 1996
W.H.L.M. Pijls, A. de Bruin, A. Plaat
In this paper a complete theory of game tree algorithms is presented, entirely based upon the notion of a solution tree. Two types of solution trees are distinguished: max and min solution trees respectively. We show that most game tree algorithms construct a superposition of a max and a min solution tree. Moreover, we formulate a general cut-off criterion in terms of solution trees. In the second half of this paper four well-known algorithms, viz., alpha-beta, SSS*, MTD and Scout are studied extensively. We show how solution trees feature in these algorithms and how the cut-off criterion is applied.

An object oriented approach to generic branch and bound
http://repub.eur.nl/pub/511/
Mon, 01 Jan 1996
A. de Bruin, G.A.P. Kindervater, H.W.J.M. Trienekens, R.A. van der Goot
Branch and bound algorithms can be characterized by a small set of basic rules that are applied in a divide-and-conquer-like framework. The framework is about the same in all applications, whereas the specification of the rules is problem dependent. Building a framework is a rather simple task in sequential implementations, but must not be underestimated in the parallel case, especially if an efficient branch and bound algorithm is required. In generic branch and bound models, the basic rules can be clearly identified within the framework, and, hence, it can be developed independently from the application. Furthermore, it gives the user the opportunity to concentrate on the actual problem to be solved, without being distracted by user-irrelevant issues like the properties of the underlying architecture. In this paper, we will discuss an object oriented approach to generic branch and bound. We will show how object orientation can help us to build a flexible branch and bound framework, that is able to perform like any branch and bound algorithm that fits into some powerful taxonomies known from the literature. We will define an interface for the specification of the problem dependent parts, and we will give a first indication of how the user can tune the framework if a non-default behavior is desired.

Asynchronous parallel branch and bound and anomalies
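An illustrative skeleton of the generic view described in "An object oriented approach to generic branch and bound" above (hypothetical code, not the paper's interface): the framework fixes the search loop, while `branch`, `bound`, `value` and `is_complete` are the problem-dependent rules plugged in by the user.

```python
import heapq

def branch_and_bound(root, branch, bound, value, is_complete):
    """Minimize `value` over complete subproblems, best-first on `bound`."""
    best, incumbent = float("inf"), None
    tie = 0                               # unique tiebreaker keeps heap entries ordered
    heap = [(bound(root), tie, root)]
    while heap:
        lb, _, sub = heapq.heappop(heap)
        if lb >= best:                    # elimination by lower-bound test
            continue
        if is_complete(sub):
            if value(sub) < best:         # improve the incumbent solution
                best, incumbent = value(sub), sub
            continue
        for child in branch(sub):         # branching rule: split the subproblem
            tie += 1
            heapq.heappush(heap, (bound(child), tie, child))
    return best, incumbent
```

A toy instantiation, say picking exactly two of the weights [4, 2, 7] with minimum total, only has to supply the four rules; the loop itself is untouched, which is the point of the generic approach.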
http://repub.eur.nl/pub/1438/
Sun, 01 Jan 1995
A. de Bruin, G.A.P. Kindervater, H.W.J.M. Trienekens
The parallel execution of branch and bound algorithms can result in seemingly unreasonable speedups or slowdowns. The speedup is almost never equal to the increase in computing power. For synchronous parallel branch and bound, these effects have been studied extensively. For asynchronous parallelizations, little is known.
In this paper, we derive sufficient conditions to guarantee that an asynchronous parallel
branch and bound algorithm (with elimination by lower bound tests and dominance) will be
at least as fast as its sequential counterpart. The technique used for obtaining the results seems to be more generally applicable.
The essential observations are that, under certain conditions, the parallel algorithm will
always work on at least one node that is branched from by the sequential algorithm, and
that the parallel algorithm, after elimination of all such nodes, is able to conclude that
the optimal solution has been found.
Finally, some of the theoretical results are brought into connection with a few practical
experiments.

Towards an abstract parallel branch and bound machine
http://repub.eur.nl/pub/1439/
Sun, 01 Jan 1995
A. de Bruin, G.A.P. Kindervater, H.W.J.M. Trienekens
Many (parallel) branch and bound algorithms look very different from each other at first
glance. They exploit, however, the same underlying computational model. This phenomenon
can be used to define branch and bound algorithms in terms of a set of basic rules that are applied in a specific (predefined) order.
In the sequential case, the specification of Mitten's rules turns out to be sufficient for
the development of branch and bound algorithms. In the parallel case, the situation is a
bit more complicated. We have to consider extra parameters such as work distribution and
knowledge sharing. Here, the implementation of parallel branch and bound algorithms can be
seen as a tuning of the parameters combined with the specification of Mitten's rules.
These observations lead to generic systems, where the user provides the specifications of
the problem to be solved, and the system generates a branch and bound algorithm running on
a specific architecture. We will discuss some proposals that appeared in the literature.
Next, we raise the question whether the proposed models are flexible enough. We analyze
the design decisions to be taken when implementing a parallel branch and bound algorithm.
This results in a classification model, which is validated by checking whether it captures
existing branch and bound implementations.
Finally, we return to the issue of flexibility of existing systems, and propose to add an
abstract machine model to the generic framework. The model defines a virtual parallel
branch and bound machine, within which the design decisions can be expressed in terms of
the abstract machine. We will outline some ideas on which the machine may be based, and
present directions of future work.

A new paradigm for minimax search
http://repub.eur.nl/pub/1440/
Sun, 01 Jan 1995
A. Plaat, J. Schaeffer, W.H.L.M. Pijls, A. de Bruin
This paper introduces a new paradigm for minimax game-tree search algorithms. MT is a memory-enhanced version of Pearl's Test procedure. By changing the way MT is called, a number of best-first game-tree search algorithms can be simply and elegantly constructed (including SSS*).
Most of the assessments of minimax search algorithms have been based on simulations.
However, these simulations generally do not address two of the key ingredients of high-performance
game-playing programs: iterative deepening and memory usage. This paper
presents experimental data from three game-playing programs (checkers, Othello and chess),
covering the range from low to high branching factor. The improved move ordering due to
iterative deepening and memory usage yields results significantly different from those
portrayed in the literature. Whereas some simulations show alpha-beta expanding almost
100% more leaf nodes than other algorithms [Marsland, Reinefeld & Schaeffer, 1987],
our results showed variations of less than 20%.
One new instance of our framework, MTD(f), outperforms our best alpha-beta searcher
(aspiration NegaScout) on leaf nodes, total nodes and execution time. To our knowledge,
these are the first reported results that compare both depth-first and best-first algorithms given the same amount of memory.

SSS* = AB+TT
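A compact sketch of the MTD(f) driver introduced in "A new paradigm for minimax search" above (hypothetical code; the transposition table that lets the repeated searches share work, and which is essential to the real algorithm's speed, is omitted for brevity). Each iteration runs a zero-window alpha-beta test around the current guess, narrowing the bounds until they meet:

```python
INF = float("inf")

def alphabeta(node, alpha, beta, maximizing=True):
    """Fail-soft alpha-beta on an explicit tree (leaf = int, node = list)."""
    if isinstance(node, int):
        return node
    value = -INF if maximizing else INF
    for child in node:
        v = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if alpha >= beta:                 # cut-off
            break
    return value

def mtdf(root, first_guess=0):
    """Home in on the minimax value with a sequence of zero-window tests."""
    g, lower, upper = first_guess, -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g # zero window just around the guess
        g = alphabeta(root, beta - 1, beta)
        if g < beta:
            upper = g                     # test failed low: g is an upper bound
        else:
            lower = g                     # test failed high: g is a lower bound
    return g
```

A good `first_guess` (e.g. the value of the previous iterative-deepening pass) shortens the sequence of tests, which is why the paper pairs MTD(f) with iterative deepening and memory.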
http://repub.eur.nl/pub/1441/
Sun, 01 Jan 1995
A. Plaat, J. Schaeffer, W.H.L.M. Pijls, A. de Bruin
In 1979 Stockman introduced the SSS* minimax search algorithm that dominates alpha-beta
in the number of leaf nodes expanded. Further investigation of the algorithm showed that it had three serious drawbacks, which prevented its use by practitioners: it is difficult to understand, it has large memory requirements, and it is slow. This paper presents an alternate formulation of SSS*, in which it is implemented as a series of alpha-beta calls that use a transposition table (ABSSS). The reformulation solves all three perceived drawbacks of SSS*, making it a practical algorithm. Further, because the search is now based on alpha-beta, the extensive research on minimax search enhancements can be easily integrated into ABSSS.
To test ABSSS in practice, it has been implemented in three state-of-the-art programs: for checkers, Othello and chess. ABSSS is comparable in performance to alpha-beta on leaf node count in all three games, making it a viable alternative to alpha-beta in practice.
Whereas SSS* has usually been regarded as being entirely different from alpha-beta, it
turns out to be just an alpha-beta enhancement, like null-window searching. This runs
counter to published simulation results. Our research leads to the surprising result that
iterative deepening versions of alpha-beta can expand fewer leaf nodes than iterative
deepening versions of SSS* due to dynamic move re-ordering.

Towards a proof of the Kahn principle for linear dynamic networks
http://repub.eur.nl/pub/1455/
Sat, 01 Jan 1994
A. de Bruin, S-H. Nienhuys-Cheng
We consider dynamic Kahn-like data flow networks, i.e. networks consisting of deterministic processes each of which is able to expand into a subnetwork. The Kahn principle states that such networks are deterministic, i.e. that for each network we have that each execution provided with the same input delivers the same output. Moreover, the principle states that the output streams of such networks can be obtained as the smallest fixed point of a suitable operator derived from the network specification.
This paper is meant as a first step towards a proof of this principle. For a specific
subclass of dynamic networks, linear arrays of processes, we define a transition system
yielding an operational semantics which defines the meaning of a net as the set of all
possible interleaved executions. We then prove that, although on the execution level there
is much nondeterminism, this nondeterminism disappears when viewing the system as a
transformation from an input stream to an output stream. This result is obtained from the
graph of all computations. For any configuration such a graph can be constructed. All
computation sequences that start from this configuration and that are generated by the
operational semantics are embedded in it.

Solution trees as a basis for game tree search
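A loose illustration of the stream-processing networks treated in the two Kahn-principle papers above, rendered with Python generators (hypothetical code, not the papers' semantics, and a static linear pipeline only: the dynamic expansion via the fork-statement is not modeled). Each process maps an input stream to an output stream, and a composition of such processes is deterministic on the I/O level by construction:

```python
def produce(xs):
    yield from xs                         # source process: emits a fixed stream

def scale(stream, k):
    for x in stream:                      # pointwise stream transformer
        yield k * x

def running_sum(stream):
    total = 0
    for x in stream:                      # stateful stream transformer
        total += x
        yield total

def network(xs):
    """Linear array of processes: produce -> scale(2) -> running_sum."""
    return list(running_sum(scale(produce(xs), 2)))
```

Whatever order a scheduler interleaves the processes in, each channel carries the same stream, which is the determinism the first part of the Kahn principle asserts.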
http://repub.eur.nl/pub/1456/
Sat, 01 Jan 1994
A. de Bruin, W.H.L.M. Pijls, A. Plaat
A game tree algorithm is an algorithm computing the minimax value of the root of a game tree. Many algorithms use the notion of establishing proofs that this value lies above or below some boundary value. We show that this amounts to the construction of a solution tree. We discuss the role of solution trees and critical trees in the following algorithms: Principal Variation Search, alpha-beta, and SSS-2. A general procedure for the
construction of a solution tree, based on alpha-beta and Null-Window-Search, is given.
Furthermore, two new examples of solution tree-based algorithms are presented that surpass
alpha-beta, i.e., never visit more nodes than alpha-beta, and often fewer.

A framework for game tree algorithms
http://repub.eur.nl/pub/1466/
Fri, 01 Jan 1993
W.H.L.M. Pijls, A. de Bruin
A unifying framework for game tree algorithms is GSEARCH, designed by Ibaraki. In general, a relatively large amount of memory is necessary for instances of this framework. Another framework from Ibaraki is RSEARCH, in which the use of memory can be controlled. In this paper, variants of the above frameworks are introduced, to be called Gsearch and Rsearch respectively. It is shown that, in these frameworks, the classical alpha-beta algorithm is the depth-first search instance and H* is a best-first search instance.
Furthermore, two new algorithms, Maxsearch and Minsearch, are presented, both as best-first
search instances. Maxsearch is close to SSS* and SSS-2, whereas Minsearch is close to dual SSS*.
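An illustrative look at the solution-tree idea that recurs in the game-tree papers above (hypothetical code, not from the papers). In a max solution tree, all children are kept at max nodes and a single child at each min node, so its value is an upper bound on the minimax value; a min solution tree is the dual construction and yields a lower bound:

```python
def minimax(node, maximizing=True):
    """Plain minimax value of an explicit tree (leaf = int, node = list)."""
    if isinstance(node, int):
        return node
    f = max if maximizing else min
    return f(minimax(c, not maximizing) for c in node)

def max_tree_value(node, pick, maximizing=True):
    """Value of the max solution tree keeping only child `pick(node)` at min nodes."""
    if isinstance(node, int):
        return node
    if maximizing:                        # keep all children of a max node
        return max(max_tree_value(c, pick, False) for c in node)
    return max_tree_value(node[pick(node)], pick, True)
```

For every choice function `pick` the resulting value is an upper bound on the minimax value, with equality for the choices along the critical tree; pruning a node is justified exactly when such bounds show it cannot influence the root value, which is the cut-off criterion the papers formulate.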