S-H. Nienhuys-Cheng (Shan-Hwei)
http://repub.eur.nl/ppl/3316/
List of Publications
http://repub.eur.nl/
RePub, Erasmus University Repository

Generalizing Refinement Operators to Learn Prenex Conjunctive Normal Forms
http://repub.eur.nl/pub/56/
Thu, 16 Nov 2000 00:00:01 GMT
S-H. Nienhuys-Cheng, W. van Laer, J. Ramon, L. de Raedt
Inductive Logic Programming (ILP) considers almost exclusively universally quantified theories. To add expressiveness, prenex conjunctive normal forms (PCNF) with existential variables should also be considered. Learning in ILP is mostly done with refinement operators. To extend refinement operators to PCNF, we should first extend substitutions. However, applying a classical substitution to a PCNF with existential variables often yields a generalization rather than a specialization. In this article we define substitutions that specialize a given PCNF, as well as a weakly complete downward refinement operator. Moreover, we analyze the complexity of this operator in different types of languages and search spaces. In this way we lay a foundation for learning systems on PCNF. Based on this operator, we have implemented PCL, a simple learning system for a certain type of PCNF.

Distance between Herbrand interpretations: a measure for approximations to a target concept
http://repub.eur.nl/pub/519/
Wed, 01 Jan 1997 00:00:01 GMT
S-H. Nienhuys-Cheng
We can use a metric to measure the differences between elements in a domain or between subsets of that domain (i.e. concepts). Which particular metric should be chosen depends on the kind of difference we want to measure. The well-known Euclidean metric and its generalizations are often used for this purpose, but such metrics are not always suitable for concepts whose elements have a structure different from real numbers. For example, in (Inductive) Logic Programming a concept is often expressed as an Herbrand interpretation of some first-order language. Every element of an Herbrand interpretation is a ground atom, which has a tree structure. We start by defining a metric d on the set of expressions (ground atoms and ground terms), motivated by the structure and complexity of the expressions and the symbols used therein. This metric induces the Hausdorff metric h on the set of all sets of ground atoms, which allows us to measure the difference between Herbrand interpretations. We then give some necessary and some sufficient conditions for an upper bound on h between two given Herbrand interpretations, by considering the elements in their symmetric difference.

Kahn's fixed-point characterization for linear dynamic networks
http://repub.eur.nl/pub/520/
Wed, 01 Jan 1997 00:00:01 GMT
S-H. Nienhuys-Cheng, A. de Bruin
We consider dynamic Kahn-like dataflow networks defined by a simple language L containing the fork statement. The first part of the Kahn principle states that such networks are deterministic on the I/O level: for each network, different executions provided with the same input deliver the same output. The second part of the principle states that the function from input streams to output streams (which is well defined because of the first part) can be obtained as a fixed point of a suitable operator derived from the network specification. The first part has been proven by us in an earlier publication. To prove the second part, we use the metric framework. We introduce a nondeterministic transition system NT from which we derive an operational semantics On. We also define a deterministic transition system DT and prove that the operational semantics Od derived from DT is the same as On. Finally, we define a denotational semantics D and prove D = Od. This implies On = D.

The specialization problem and the completeness of unfolding
http://repub.eur.nl/pub/1434/
Mon, 01 Jan 1996 00:00:01 GMT
S-H. Nienhuys-Cheng, R. de Wolf
We discuss the problem of specializing a definite program with respect to sets of positive and negative examples, following Bostrom and Idestam-Almquist. This problem is very relevant in the field of inductive learning. First we show that there exist sets of examples that have no correct program, i.e., no program which implies all positive and no negative examples. Hence it only makes sense to talk about specialization problems for which a solution (a correct program) exists.
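A small propositional sketch of the notion of a correct program (the program and example sets below are hypothetical, chosen only for illustration): a definite program implies exactly the atoms in its least Herbrand model, so correctness with respect to positive and negative examples can be checked by computing that model with the immediate-consequence operator.

```python
# Least Herbrand model of a propositional definite program via iteration of
# the immediate-consequence operator T_P, then a check whether the program is
# "correct": it implies all positive examples and no negative ones.
# Clauses are pairs (head, [body atoms]); the examples are hypothetical.

def least_model(program):
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def correct(program, positives, negatives):
    m = least_model(program)
    return all(p in m for p in positives) and all(n not in m for n in negatives)

program = [("q", []), ("p", ["q"]), ("r", ["p", "q"])]
print(sorted(least_model(program)))      # ['p', 'q', 'r']
print(correct(program, {"p"}, {"s"}))    # True
```

If the negative example set were {"r"} instead, no subset of this program that still implies "p" and "q" could avoid implying "r" here, which mirrors the observation that some specialization problems have no correct program as a solution.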
To solve such problems, we first introduce UD1-specialization, based upon the transformation rule unfolding. We show that UD1-specialization is incomplete (some solvable specialization problems do not have a UD1-specialization as a solution) and generalize it to the stronger UD2-specialization. UD2 also turns out to be incomplete. An analysis of program specialization, using the subsumption theorem for SLD-resolution, shows the reason for this incompleteness. Based on that analysis, we then define UDS-specialization (a generalization of UD2-specialization) and prove that any specialization problem has a UDS-specialization as a solution. We also discuss the relationship between this specialization technique and the generalization technique based on inverse resolution. Finally, we go into several implementation matters, which outline an interesting topic for future research.

Least generalizations and greatest specializations of sets of clauses
http://repub.eur.nl/pub/513/
Mon, 01 Jan 1996 00:00:01 GMT
S-H. Nienhuys-Cheng, R. de Wolf
The main operations in Inductive Logic Programming (ILP) are generalization and specialization, which only make sense in a generality order. In ILP, the three most important generality orders are subsumption, implication and implication relative to background knowledge. The two languages used most often are languages of clauses and languages of only Horn clauses. This gives a total of six different ordered languages. In this paper, we give a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in each of these six ordered sets. We survey results already obtained by others and also contribute some answers of our own. Our main new results are, firstly, the existence of a computable least generalization under implication of every finite set of clauses containing at least one non-tautologous function-free clause (among other, not necessarily function-free, clauses). Secondly, we show that such a least generalization need not exist under relative implication, not even if both the set to be generalized and the background knowledge are function-free. Thirdly, we give a complete discussion of the existence and non-existence of greatest specializations in each of the six ordered languages.

The subsumption theorem for several forms of resolution
http://repub.eur.nl/pub/514/
Mon, 01 Jan 1996 00:00:01 GMT
S-H. Nienhuys-Cheng, R. de Wolf
The Subsumption Theorem is the following completeness result for resolution: if S is a set of clauses and C is a clause, then S logically implies C iff C is a tautology or there exists a clause D which subsumes C and which can be derived from S by some form of resolution. Different versions of this theorem exist, depending on the kind of resolution used. It provides a more 'direct' form of completeness than the better-known refutation-completeness, which often makes the Subsumption Theorem better suited for theoretical research. In this paper we investigate for which forms of resolution the theorem holds, and for which it does not. We collect results obtained earlier by others and contribute some results of our own. The main results of the paper are as follows. For 'unconstrained' resolution, the Subsumption Theorem holds and is equivalent to refutation-completeness: each can be proved from the other. The same is true for linear resolution. For input resolution, the theorem is false, even in the special case where S contains only one clause. In the case of SLD-resolution for Horn clauses, the Subsumption Theorem again holds and is equivalent to the refutation-completeness of SLD-resolution.

Towards a proof of the Kahn principle for linear dynamic networks
http://repub.eur.nl/pub/1455/
Sat, 01 Jan 1994 00:00:01 GMT
A. de Bruin, S-H. Nienhuys-Cheng
We consider dynamic Kahn-like dataflow networks, i.e. networks consisting of deterministic processes, each of which is able to expand into a subnetwork. The Kahn principle states that such networks are deterministic: for each network, every execution provided with the same input delivers the same output. Moreover, the principle states that the output streams of such networks can be obtained as the smallest fixed point of a suitable operator derived from the network specification.
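The fixed-point part of the principle can be illustrated on finite stream prefixes. The network below is a hypothetical one-node feedback loop, not an example from the paper: its operator maps a stream s to 0 followed by s with each element incremented, and Kleene iteration from the empty stream converges to its least fixed point 0, 1, 2, ...

```python
# Kleene iteration toward the least fixed point of a stream operator,
# approximated on finite prefixes. F describes a hypothetical single-node
# network whose output is fed back as its own input.

def F(prefix):
    # network node: emit 0, then pass on the input stream with 1 added
    # to each element; the feedback loop makes the output the input
    return [0] + [x + 1 for x in prefix]

def least_fixed_prefix(F, n):
    s = []                       # bottom element: the empty stream
    for _ in range(n):
        s = F(s)[:n]             # each step determines a longer prefix
    return s

print(least_fixed_prefix(F, 5))  # [0, 1, 2, 3, 4]
```

Each iteration pins down one more element of the output, which is the prefix-ordered analogue of the smallest-fixed-point construction the abstract refers to.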
This paper is meant as a first step towards a proof of this principle. For a specific subclass of dynamic networks, linear arrays of processes, we define a transition system yielding an operational semantics which defines the meaning of a net as the set of all possible interleaved executions. We then prove that, although there is much nondeterminism on the execution level, this nondeterminism disappears when the system is viewed as a transformation from an input stream to an output stream. This result is obtained from the graph of all computations: such a graph can be constructed for any configuration, and all computation sequences that start from this configuration and are generated by the operational semantics are embedded in it.

Constructing refinement operators by decomposing logical implication
http://repub.eur.nl/pub/1464/
Fri, 01 Jan 1993 00:00:01 GMT
S-H. Nienhuys-Cheng, P.R.J. van der Laag, L.W.N. van der Torre
Inductive learning models [Plotkin 1971; Shapiro 1981] often use a search space of clauses, ordered by a generalization hierarchy. To find solutions in the model, search algorithms use different generalization and specialization operators. In this article we decompose the quasi-ordering induced by logical implication into six increasingly weak orderings. The difference between two successive orderings is small and can therefore be understood easily. Using this decomposition, we describe upward and downward refinement operators for all orderings, including θ-subsumption and logical implication.

Subsumption and refinement in model inference
http://repub.eur.nl/pub/1482/
Wed, 01 Jan 1992 00:00:01 GMT
P.R.J. van der Laag, S-H. Nienhuys-Cheng
In his famous Model Inference System, Shapiro [1981] uses so-called refinement operators to replace overly general hypotheses by logically weaker ones. One of these refinement operators works in the search space of reduced first-order sentences. In this article we show that, contrary to his claim, this operator is not complete for reduced sentences. We investigate the relations between subsumption and refinement, as well as the role of a complexity measure. We present an inverse reduction algorithm which is used in a new refinement operator. This operator is complete for reduced sentences. Finally, we relate our new refinement operator to its dual, a generalization operator, and to its possible application in model inference using inverse resolution.

Complexity dimensions and learnability
http://repub.eur.nl/pub/1483/
Wed, 01 Jan 1992 00:00:01 GMT
S-H. Nienhuys-Cheng, M. Polman
A stochastic model of learning from examples has been introduced by Valiant [1984]. This PAC-learning model (PAC = probably approximately correct) reflects differences in the complexity of concept classes, i.e. very complex classes are not efficiently PAC-learnable. Blumer et al. [1989] found that efficient PAC-learnability depends on the size of the Vapnik-Chervonenkis dimension [Vapnik & Chervonenkis, 1971] of a class. We first discuss this dimension and give an algorithm to compute it, in order to provide the reader with the intuitive idea behind it. Natarajan [1987] defines a new, equivalent dimension for well-ordered classes. These well-ordered classes happen to satisfy a general condition that is sufficient for the construction of a number of equivalent dimensions. We give this condition, as well as a generalized notion of an equivalent dimension. Also, a relatively efficient algorithm for the calculation of one such dimension for well-ordered classes is given.

The V- and W-operators in inverse resolutions
http://repub.eur.nl/pub/1495/
Tue, 01 Jan 1991 00:00:01 GMT
S-H. Nienhuys-Cheng
This article gives algorithms for the V- and W-operators in inverse resolution. It also discusses the completeness of these algorithms.

Flattening, generalizations of clauses and absorption algorithms
http://repub.eur.nl/pub/1497/
Tue, 01 Jan 1991 00:00:01 GMT
S-H. Nienhuys-Cheng
In predicate logic, flattening can be used to replace terms built from function symbols by variables. It can also be used to express absorption in inverse resolution. This has been done by Rouveirol and Puget. In this article three kinds of absorption algorithms are compared.

Term partitions and minimal generalizations of clauses
http://repub.eur.nl/pub/1498/
Tue, 01 Jan 1991 00:00:01 GMT
S-H. Nienhuys-Cheng
Term occurrences in any clause C are determined by their positions. The set of all term partitions defined on subsets of the term occurrences of C forms a partially ordered set. This poset is isomorphic to the set of all generalizations of C, and its structure can be inferred from the term occurrences in C alone. These constructions in the poset can be applied in machine learning.
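As a small illustration of this correspondence (the atom and partitions below are hypothetical, not examples from the paper): choosing a partition of equal term occurrences and replacing each block by a fresh shared variable yields a generalization, and splitting a block into smaller ones yields a strictly more general atom.

```python
# Generalizing an atom by a partition of its (top-level) term occurrences:
# every block of occurrences of one term is replaced by a shared fresh
# variable. The argument tuple and the partitions are hypothetical.

def generalize(args, partition):
    out = list(args)
    for i, block in enumerate(partition):
        positions = list(block)
        # all occurrences grouped in one block must be the same term
        assert len({args[p] for p in positions}) == 1
        var = f"X{i}"
        for p in positions:
            out[p] = var
    return tuple(out)

atom = ("f(a)", "a", "f(a)")
# one block sharing both occurrences of f(a); 'a' is left untouched
print(generalize(atom, [{0, 2}]))         # ('X0', 'a', 'X0')
# finer partition: every occurrence gets its own variable, more general
print(generalize(atom, [{0}, {2}, {1}]))  # ('X0', 'X2', 'X1')
```

Ordering partitions by refinement and comparing the resulting atoms mirrors the isomorphism between the poset of term partitions and the generalizations of a clause.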