| title (string) | abstract (string) |
|---|---|
Transforming Business Rules Into Natural Language Text
|
The aim of the project presented in this paper is to design an NLG
architecture that supports the documentation process of eBusiness models. A
major task is to enrich the formal description of an eBusiness model with the
additional information needed for an NLG task.
|
Corpus based Enrichment of GermaNet Verb Frames
|
Lexical semantic resources such as WordNet are often used in real applications
of natural language document processing. For example, we integrated GermaNet
into our document suite XDOC for processing German forensic autopsy protocols.
In addition to the hypernymy and synonymy relations, we want to adapt
GermaNet's verb frames for our analysis. In this paper we outline an approach
for the domain-related enrichment of GermaNet verb frames through corpus-based
syntactic and co-occurrence analyses of real documents.
|
Context Related Derivation of Word Senses
|
Real applications of natural language document processing are very often
confronted with domain specific lexical gaps during the analysis of documents
of a new domain. This paper describes an approach for the derivation of domain
specific concepts for the extension of an existing ontology. As resources we
need an initial ontology and a partially processed corpus of a domain. We
exploit the specific characteristic of the sublanguage in the corpus. Our
approach is based on syntactic structures (noun phrases) and compound analysis
to extract the information required for the extension of GermaNet's lexical
resources.
|
Transforming and Enriching Documents for the Semantic Web
|
We suggest employing techniques from Natural Language Processing (NLP) and
Knowledge Representation (KR) to transform existing documents into documents
amenable to the Semantic Web. Semantic Web documents have at least part of
their semantics and pragmatics marked up explicitly in a manner that is both
machine-processable and human-readable. XML and its related standards (XSLT,
RDF, Topic Maps, etc.) form the unifying platform for the tools and
methodologies developed for the different application scenarios.
|
Perspectives for Strong Artificial Life
|
This text introduces the twin deadlocks of strong artificial life.
Conceptualization of life is a deadlock both because of the existence of a
continuum between the inert and the living, and because we know only one
instance of life. Computationalism is a second deadlock, since it remains a
matter of faith. Nevertheless, artificial life realizations are progressing
quickly, and recent constructions embed an ever-growing set of the intuitive
properties of life. This growing gap between theory and realizations should
sooner or later crystallize into some kind of paradigm shift and then give
clues for breaking the twin deadlocks.
|
Neural-Network Techniques for Visual Mining Clinical
Electroencephalograms
|
In this chapter we describe new neural-network techniques developed for
visual mining of clinical electroencephalograms (EEGs), the weak electrical
potentials evoked by brain activity. These techniques exploit fruitful ideas
from the Group Method of Data Handling (GMDH). Section 2 briefly describes the
standard neural-network techniques which are able to learn well-suited
classification models from data presented by relevant features. Section 3
introduces an evolving cascade neural network technique which adds new input
nodes as well as new neurons to the network while the training error decreases.
This algorithm is applied to recognize artifacts in clinical EEGs. Section
4 presents GMDH-type polynomial networks learnt from data. We applied this
technique to distinguish the EEGs recorded from an Alzheimer's patient and a
healthy patient, as well as to recognize EEG artifacts. Section 5 describes the
new neural-network technique developed to induce multi-class concepts from
data. We used this technique to induce a 16-class concept from large-scale
clinical EEG data. Finally, we discuss perspectives for applying these
neural-network techniques to clinical EEGs.
|
Estimating Classification Uncertainty of Bayesian Decision Tree
Technique on Financial Data
|
Bayesian averaging over classification models allows the uncertainty of
classification outcomes to be evaluated, which is of crucial importance for
making reliable decisions in applications such as finance, in which risks have
to be estimated. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the diversity of the
classifier ensemble and the required performance. The interpretability of
classification models can also give useful information to experts responsible
for making reliable classifications. For this reason Decision Trees (DTs) seem
to be attractive classification models. The required diversity of the DT
ensemble can be achieved by Bayesian model averaging over all possible DTs. In
practice, the Bayesian approach can be implemented on the basis of a Markov
Chain Monte Carlo (MCMC) technique of random sampling from the posterior
distribution. For sampling large DTs, the MCMC method is extended by the
Reversible Jump technique, which allows DTs to be induced under given priors.
For the case when prior information on the DT size is unavailable, the sweeping
technique, which defines the prior implicitly, reveals better performance. In
this chapter we explore the classification uncertainty of the Bayesian MCMC
techniques on some datasets from the StatLog Repository and on real financial
data. The classification uncertainty is compared within an Uncertainty Envelope
technique dealing with the class posterior distribution and a given confidence
probability. This technique provides realistic estimates of the classification
uncertainty which can be easily interpreted in statistical terms with the aim
of risk evaluation.
|
Comparison of the Bayesian and Randomised Decision Tree Ensembles within
an Uncertainty Envelope Technique
|
Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of
classification outcomes, which is of crucial importance for safety-critical
applications. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the classifier diversity and
the required performance. The interpretability of MCSs can also give useful
information for experts responsible for making reliable classifications. For
this reason Decision Trees (DTs) seem to be attractive classification models
for experts. The required diversity of MCSs exploiting such classification
models can be achieved by using two techniques, the Bayesian model averaging
and the randomised DT ensemble. Both techniques have revealed promising results
when applied to real-world problems. In this paper we experimentally compare
the classification uncertainty of the Bayesian model averaging with a
restarting strategy and the randomised DT ensemble on a synthetic dataset and
some domain problems commonly used in the machine learning community. To make
the Bayesian DT averaging feasible, we use a Markov Chain Monte Carlo
technique. The classification uncertainty is evaluated within an Uncertainty
Envelope technique dealing with the class posterior distribution and a given
confidence probability. Exploring a full posterior distribution, this technique
produces realistic estimates which can be easily interpreted in statistical
terms. In our experiments we found that the Bayesian DTs are superior to the
randomised DT ensembles within the Uncertainty Envelope technique.
|
Proceedings of the Pacific Knowledge Acquisition Workshop 2004
|
Artificial intelligence (AI) research has evolved over the last few decades,
and knowledge acquisition research is at the core of AI research. PKAW-04 is
one of three international knowledge acquisition workshops held in the
Pacific Rim, Canada and Europe over the last two decades. PKAW-04 has a strong
emphasis on incremental knowledge acquisition, machine learning, neural nets
and active mining.
The proceedings contain 19 papers that were selected by the program committee
from among 24 submissions. All papers were peer reviewed by at least two
reviewers. The papers in these proceedings cover methods and tools, as well as
applications, related to developing expert systems or knowledge-based systems.
|
Temporal and Spatial Data Mining with Second-Order Hidden Models
|
In the frame of designing a knowledge discovery system, we have developed
stochastic models based on high-order hidden Markov models. These models are
capable of mapping sequences of data into a Markov chain in which the
transitions between the states depend on the n previous states, according to
the order of the model. We study the process of achieving information
extraction from spatial and temporal data by means of an unsupervised
classification. We therefore use a French national database related to the
land use of a region, named Teruti, which describes the land use in both the
spatial and temporal domains. Land-use categories (wheat, corn, forest, ...)
are logged every year on each site regularly spaced in the region. They
constitute a temporal sequence of images in which we look for spatial and
temporal dependencies. The temporal segmentation of the data is done by means
of a second-order Hidden Markov Model (HMM2) that appears to have very good
capabilities to locate stationary segments, as shown in our previous work in
speech recognition. The spatial classification is performed by defining a
fractal scanning of the images with the help of a Hilbert-Peano curve that
introduces a total order on the sites, preserving the neighborhood relation
between the sites. We show that the HMM2 performs a classification that is
meaningful for the agronomists. Spatial and temporal classification may be
achieved simultaneously by means of a two-level HMM2 that measures the a
posteriori probability of mapping a temporal sequence of images onto a set of
hidden classes.
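As an illustration of the fractal scan described above, here is a minimal sketch (not the authors' code; the grid size and the land-use codes are hypothetical) that converts Hilbert-curve indices to grid coordinates and uses them to linearize a 2D map of land-use labels into the 1D sequence on which an HMM2 could be trained.

```python
import numpy as np

def d2xy(n, d):
    """Map index d along a Hilbert curve to (x, y) on an n x n grid (n a power of 2)."""
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        # rotate the quadrant so the curve stays continuous
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# Hypothetical 8x8 map of land-use category codes (e.g. 0=wheat, 1=corn, 2=forest)
rng = np.random.default_rng(0)
land_use = rng.integers(0, 3, size=(8, 8))

# Linearize the map along the Hilbert-Peano curve.
sequence = [land_use[d2xy(8, d)] for d in range(8 * 8)]
```

Because consecutive indices along the curve are spatial neighbours, the resulting sequence preserves much of the spatial structure that the second-order model can then capture.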
|
An ontological approach to the construction of problem-solving models
|
Our ongoing work aims at defining an ontology-centered approach for building
expertise models for the CommonKADS methodology. This approach (which we have
named "OntoKADS") is founded on a core problem-solving ontology which
distinguishes between two conceptualization levels: at the object level, a set
of concepts enables us to define classes of problem-solving situations, and at
the meta level, a set of meta-concepts represents modeling primitives. In this
article, our presentation of OntoKADS will focus on the core ontology and, in
particular, on roles - the primitive situated at the interface between domain
knowledge and reasoning, and whose ontological status is still much debated. We
first propose a coherent, global ontological framework which enables us to
account for this primitive. We then show how this novel characterization of the
primitive allows the definition of new rules for the construction of expertise
models.
|
A Constrained Object Model for Configuration Based Workflow Composition
|
Automatic or assisted workflow composition is a field of intense research for
applications to the world wide web or to business process modeling. Workflow
composition is traditionally addressed in various ways, generally via theorem
proving techniques. Recent research observed that building a composite workflow
bears strong relationships with finite model search, and that some workflow
languages can be defined as constrained object metamodels. This led us to
consider the viability of applying configuration techniques to this problem,
which was proven feasible. Constraint-based configuration expects a
constrained object model as input. The purpose of this document is to formally
specify the constrained object model involved in ongoing experiments and
research, using the Z specification language.
|
A Study for the Feature Core of Dynamic Reduct
|
For the reduct problem of decision systems, this paper proposes the notion of
the dynamic core according to the dynamic reduct model. It gives several formal
definitions of the dynamic core and discusses some of its properties. All of
these show that the dynamic core possesses the essential characteristics of the
feature core.
|
Two-dimensional cellular automata and the analysis of correlated time
series
|
Correlated time series are time series that, by virtue of the underlying
process to which they refer, are expected to influence each other strongly. We
introduce a novel approach to handle such time series, one that models their
interaction as a two-dimensional cellular automaton and therefore allows them
to be treated as a single entity. We apply our approach to the problems of
filling gaps and predicting values in rainfall time series. Computational
results show that the new approach compares favorably to Kalman smoothing and
filtering.
|
ATNoSFERES revisited
|
ATNoSFERES is a Pittsburgh-style Learning Classifier System (LCS) in which
the rules are represented as edges of an Augmented Transition Network.
Genotypes are strings of tokens of a stack-based language, whose execution
builds the labeled graph. The original ATNoSFERES, using a bitstring to
represent the language tokens, has been favorably compared in previous work to
several Michigan-style LCS architectures in the context of non-Markov
problems. Several modifications of ATNoSFERES are proposed here, the most
important one conceptually being a representational change: each token is now
represented by an integer, hence the genotype is a string of integers; several
other modifications of the underlying grammar language are also proposed. The
resulting ATNoSFERES-II is validated on several standard animat non-Markov
problems, on which it outperforms all previously published results in the LCS
literature. The reasons for these improvements are carefully analyzed, and some
hypotheses about the underlying mechanisms are proposed to explain these good
results.
|
Planning with Preferences using Logic Programming
|
We present a declarative language, PP, for the high-level specification of
preferences between possible solutions (or trajectories) of a planning problem.
This novel language allows users to elegantly express non-trivial,
multi-dimensional preferences and priorities over such preferences. The
semantics of PP allows the identification of most preferred trajectories for a
given goal. We also provide an answer set programming implementation of
planning problems with PP preferences.
|
Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble
Approach
|
Clustering is a widely used technique in data mining applications for
discovering patterns in underlying data. Most traditional clustering algorithms
are limited to handling datasets that contain either numeric or categorical
attributes. However, datasets with mixed types of attributes are common in real
life data mining applications. In this paper, we propose a novel
divide-and-conquer technique to solve this problem. First, the original mixed
dataset is divided into two sub-datasets: the pure categorical dataset and the
pure numeric dataset. Next, existing well-established clustering algorithms
designed for different types of datasets are employed to produce the
corresponding clusters. Last, the clustering results on the categorical and
numeric datasets are combined as a new categorical dataset, on which a
categorical data clustering algorithm is used to get the final clusters. Our
contribution in this paper is to provide an algorithmic framework for the
mixed-attribute clustering problem in which existing clustering algorithms can
be easily integrated, so that the capabilities of different kinds of clustering
algorithms and the characteristics of different types of datasets can be fully
exploited. Comparisons with other
clustering algorithms on real life datasets illustrate the superiority of our
approach.
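The divide-and-conquer scheme described above can be sketched as follows (an illustrative reading, not the authors' implementation; the helper `simple_kmodes` and all parameter values are assumptions): numeric columns are clustered with k-means, categorical columns with a basic k-modes, and the two resulting label vectors are treated as a new categorical dataset that is clustered once more to obtain the final partition.

```python
import numpy as np
from sklearn.cluster import KMeans

def simple_kmodes(X, k, n_iter=10, seed=0):
    """Very small k-modes: X is an integer-coded categorical array (n x m)."""
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each row to the closest mode under Hamming (mismatch) distance
        dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # update mode: most frequent category per attribute
                modes[j] = [np.bincount(col).argmax() for col in members.T]
    return labels

def cluster_mixed(X_num, X_cat, k):
    """Divide-and-conquer clustering of a mixed numeric/categorical dataset."""
    num_labels = KMeans(n_clusters=k, n_init=10).fit_predict(X_num)   # numeric part
    cat_labels = simple_kmodes(X_cat, k)                              # categorical part
    combined = np.column_stack([num_labels, cat_labels])              # new categorical data
    return simple_kmodes(combined, k)                                 # final clustering
```

In this reading, any numeric or categorical clustering algorithm could be substituted for the two base learners, which is the point of the framework.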
|
K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset
|
Clustering categorical data is an integral part of data mining and has
attracted much attention recently. In this paper, we present k-histogram, a new
efficient algorithm for clustering categorical data. The k-histogram algorithm
extends the k-means algorithm to categorical domain by replacing the means of
clusters with histograms, and dynamically updates histograms in the clustering
process. Experimental results on real datasets show that the k-histogram
algorithm can produce better clustering results than the k-modes algorithm,
which is the work most closely related to ours.
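A minimal sketch of the idea as stated above (an illustrative reading, not the paper's algorithm; the cost function and all parameter values are assumptions): each cluster keeps, per attribute, a histogram of category frequencies; records are assigned to the cluster whose histograms give their categories the highest relative frequencies, and the histograms are recomputed as records move.

```python
import numpy as np

def k_histograms(X, k, n_iter=10, seed=0):
    """Illustrative k-histograms clustering of integer-coded categorical data X (n x m)."""
    n, m = X.shape
    n_cat = X.max() + 1
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)          # random initial assignment
    for _ in range(n_iter):
        # build per-cluster, per-attribute histograms of category counts
        hist = np.full((k, m, n_cat), 1e-9)
        for j in range(k):
            members = X[labels == j]
            for a in range(m):
                if len(members):
                    hist[j, a, :] += np.bincount(members[:, a], minlength=n_cat)
        hist /= hist.sum(axis=2, keepdims=True)  # relative frequencies
        # dissimilarity of record x to cluster j: sum over attributes of (1 - frequency)
        cost = np.stack(
            [(1.0 - hist[j, np.arange(m), X]).sum(axis=1) for j in range(k)], axis=1)
        labels = cost.argmin(axis=1)
    return labels
```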
|
Integration of the DOLCE top-level ontology into the OntoSpec
methodology
|
This report describes a new version of the OntoSpec methodology for ontology
building. Defined by the LaRIA Knowledge Engineering Team (University of
Picardie Jules Verne, Amiens, France), OntoSpec aims at helping builders to
model ontological knowledge (upstream of formal representation). The
methodology relies on a set of rigorously-defined modelling primitives and
principles. Its application leads to the elaboration of a semi-informal
ontology, which is independent of knowledge representation languages. We
recently enriched the OntoSpec methodology by endowing it with a new resource,
the DOLCE top-level ontology defined at the LOA (IST-CNR, Trento, Italy). The
goal of this integration is to provide modellers with additional help in
structuring application ontologies, while maintaining independence vis-à-vis
formal representation languages. In this report, we first provide
an overview of the OntoSpec methodology's general principles and then describe
the DOLCE re-engineering process. A complete version of DOLCE-OS (i.e. a
specification of DOLCE in the semi-informal OntoSpec language) is presented in
an appendix.
|
Using Interval Particle Filtering for Markerless 3D Human Motion
Capture
|
In this paper we present a new approach to markerless human motion capture
from conventional camera feeds. The aim of our study is to recover the 3D
positions of key points of the body that can serve for gait analysis. Our
approach is based on foreground segmentation, an articulated body model and
particle filters. In order to remain generic and simple, no restrictive dynamic
modelling is used. A new modified particle filtering algorithm, which we call
Interval Particle Filtering, is introduced and used to search the model
configuration space efficiently. It reorganizes the configuration search space
in an optimal deterministic way and proved to be efficient at tracking natural
human movement. Results for human motion capture from a single camera are
presented and compared to results obtained from a marker-based system. The
system proved able to track motion successfully even under partial occlusions.
|
Markerless Human Motion Capture for Gait Analysis
|
The aim of our study is to detect balance disorders and a tendency toward
falls in the elderly from gait parameters. In this paper we present a new tool
for gait analysis based on markerless human motion capture from camera feeds.
The system introduced here recovers the 3D positions of several key points of
the human body while walking. Foreground segmentation, an articulated body
model and particle filtering are the basic elements of our approach. No dynamic
model is used, so the system can be described as generic and simple to
implement. A modified particle filtering algorithm, which we call Interval
Particle Filtering, is used to reorganise and search through the model's
configuration search space in a deterministic, optimal way. This algorithm was
able to perform human movement tracking successfully. Results from processing a
single camera feed are shown and compared to results obtained using a
marker-based human motion capture system.
|
Evidence with Uncertain Likelihoods
|
An agent often has a number of hypotheses, and must choose among them based
on observations, or outcomes of experiments. Each of these observations can be
viewed as providing evidence for or against various hypotheses. All the
attempts to formalize this intuition up to now have assumed that associated
with each hypothesis h there is a likelihood function \mu_h, which is a
probability measure that intuitively describes how likely each observation is,
conditional on h being the correct hypothesis. We consider an extension of this
framework where there is uncertainty as to which of a number of likelihood
functions is appropriate, and discuss how one formal approach to defining
evidence, which views evidence as a function from priors to posteriors, can be
generalized to accommodate this uncertainty.
|
Neuronal Spectral Analysis of EEG and Expert Knowledge Integration for
Automatic Classification of Sleep Stages
|
Being able to analyze and interpret the signals from electroencephalogram
(EEG) recordings is of high interest for many applications, including medical
diagnosis and Brain-Computer Interfaces. Indeed, human experts are today able
to extract from these signals many hints related to the physiological as well
as cognitive states of the recorded subject, and it would be very interesting
to perform this task automatically, but no completely automatic system exists
today. In previous studies, we compared human expertise and automatic
processing tools, including artificial neural networks (ANNs), to better
understand the competences of each and to determine which aspects are difficult
to integrate into a fully automatic system. In this paper, we bring more
elements to that study by reporting the main results of a practical experiment
carried out in a hospital for the study of sleep pathology. An EEG recording
was studied and labeled by a human expert and by an ANN. We describe the
characteristics of the experiment, both the human and the neural analysis
procedures, compare their performances and point out the main limitations that
arise from this study.
|
An efficient memetic, permutation-based evolutionary algorithm for
real-world train timetabling
|
Train timetabling is a difficult and very tightly constrained combinatorial
problem that deals with the construction of train schedules. We focus on the
particular problem of local reconstruction of the schedule following a small
perturbation, seeking minimisation of the total accumulated delay by adapting
times of departure and arrival for each train and allocation of resources
(tracks, routing nodes, etc.). We describe a permutation-based evolutionary
algorithm that relies on a semi-greedy heuristic to gradually reconstruct the
schedule by inserting trains one after the other following the permutation.
This algorithm can be hybridised with ILOG's commercial MIP tool CPLEX in a
coarse-grained manner: the evolutionary part is used to quickly obtain a good
but suboptimal solution, and this intermediate solution is then refined using
CPLEX. Experimental results are presented on a large real-world case involving
more than one million variables and two million constraints. Results
are surprisingly good as the evolutionary algorithm, alone or hybridised,
produces excellent solutions much faster than CPLEX alone.
|
Evolutionary Computing
|
Evolutionary computing (EC) is an exciting development in Computer Science.
It amounts to building, applying and studying algorithms based on the Darwinian
principles of natural selection. In this paper we briefly introduce the main
concepts behind evolutionary computing. We present the main components of all
evolutionary algorithms (EAs), sketch the differences between different types
of EAs and survey application areas ranging from optimization, modeling and
simulation to entertainment.
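To make the shared components concrete, here is a minimal generic evolutionary algorithm (an illustrative sketch, not taken from the paper; the one-max fitness function and all parameter values are arbitrary) showing the usual loop of parent selection, crossover, mutation and replacement.

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=100,
           p_mut=0.05, tournament=3, seed=0):
    """Generic generational EA on bitstrings with tournament selection."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            # parent selection: tournament among a few random individuals
            p1 = max(rng.sample(pop, tournament), key=fitness)
            p2 = max(rng.sample(pop, tournament), key=fitness)
            cut = rng.randrange(1, genome_len)                              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]   # bit-flip mutation
            offspring.append(child)
        pop = offspring                                                     # generational replacement
    return max(pop, key=fitness)

# Example: maximise the number of ones in the bitstring ("one-max")
best = evolve(fitness=sum)
print(sum(best), best)
```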
|
Towards a Hierarchical Model of Consciousness, Intelligence, Mind and
Body
|
This article has been withdrawn.
|
Evolution of Voronoi based Fuzzy Recurrent Controllers
|
A fuzzy controller is usually designed by formulating the knowledge of a
human expert as a set of linguistic variables and fuzzy rules. Evolutionary
algorithms are among the most successful methods for automating the development
process of fuzzy controllers. In this work, we propose the Recurrent Fuzzy
Voronoi (RFV) model, a representation for recurrent fuzzy systems. It is an
extension of the FV model proposed by Kavka and Schoenauer that widens the
application domain to include temporal problems. The FV model is a
representation for fuzzy controllers based on Voronoi diagrams that can
represent fuzzy systems with synergistic rules, fulfilling the
$\epsilon$-completeness property and providing a simple way to introduce a
priori knowledge. In the proposed representation, the temporal relations are
embedded by including internal units that provide feedback by connecting
outputs to inputs; these internal units act as memory elements. In the RFV
model, the semantics of the internal units can be specified together with the a
priori rules. The geometric interpretation of the rules allows the use of
geometric variational operators during the evolution. The representation and
the algorithms are validated on two problems in the areas of system
identification and evolutionary robotics.
|
Branch-and-Prune Search Strategies for Numerical Constraint Solving
|
When solving numerical constraints such as nonlinear equations and
inequalities, solvers often exploit pruning techniques, which remove redundant
value combinations from the domains of variables, at pruning steps. To find the
complete solution set, most of these solvers alternate the pruning steps with
branching steps, which split each problem into subproblems. This forms the
so-called branch-and-prune framework, well known among the approaches for
solving numerical constraints. The basic branch-and-prune search strategy that
uses domain bisection in place of the branching steps is called the bisection
search. In general, the bisection search works well in case (i), where the
solutions are isolated, but it can be improved further in case (ii), where
there are continua of solutions (this often occurs when inequalities are
involved). In this paper, we propose a new branch-and-prune search strategy,
along with several variants, which not only yields better branching decisions
in the latter case, but also performs as well as the bisection search in the
former case. These new search algorithms enable us to employ various pruning
techniques in the construction of inner and outer approximations of the
solution set. Our experiments show that these algorithms often speed up the
solving process by one order of magnitude or more when solving problems with
continua of solutions, while keeping the same performance as the bisection
search when the solutions are isolated.
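For reference, the baseline bisection search that the proposed strategies are compared against can be sketched as follows (an illustrative implementation on interval boxes, not the paper's solver; the example constraint x^2 + y^2 <= 1, the interval test and the tolerance are assumptions).

```python
def branch_and_prune(box, feasible, eps=1e-2):
    """Bisection-style branch and prune.

    box:      list of (lo, hi) intervals, one per variable.
    feasible: returns 'yes' if the whole box satisfies the constraints,
              'no' if no point of the box can, 'maybe' otherwise.
    Returns (inner, outer) lists of boxes approximating the solution set.
    """
    inner, outer = [], []
    stack = [box]
    while stack:
        b = stack.pop()
        status = feasible(b)                        # pruning step (interval test)
        if status == 'no':
            continue                                # discard the whole box
        widths = [hi - lo for lo, hi in b]
        if status == 'yes':
            inner.append(b); outer.append(b)        # entirely inside the solution set
        elif max(widths) < eps:
            outer.append(b)                         # small undecided box: outer approximation
        else:
            i = widths.index(max(widths))           # branching step: bisect the widest side
            lo, hi = b[i]
            mid = (lo + hi) / 2
            stack.append(b[:i] + [(lo, mid)] + b[i + 1:])
            stack.append(b[:i] + [(mid, hi)] + b[i + 1:])
    return inner, outer

# Example constraint x^2 + y^2 <= 1, tested with simple interval arithmetic
def disc(b):
    (xl, xh), (yl, yh) = b
    lo = min(abs(xl), abs(xh))**2 * (xl * xh > 0) + min(abs(yl), abs(yh))**2 * (yl * yh > 0)
    hi = max(abs(xl), abs(xh))**2 + max(abs(yl), abs(yh))**2
    return 'yes' if hi <= 1 else ('no' if lo > 1 else 'maybe')

inner, outer = branch_and_prune([(-2.0, 2.0), (-2.0, 2.0)], disc)
```

The problem the paper addresses is visible here: for constraints with continua of solutions, plain bisection produces very many small 'maybe' boxes along the boundary, which smarter branching decisions can avoid.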
|
Processing Uncertainty and Indeterminacy in Information Systems success
mapping
|
IS success is a complex concept, and its evaluation is complicated,
unstructured and not readily quantifiable. Numerous scientific publications
address the issue of success in the IS field as well as in other fields, but
little effort has been devoted to processing indeterminacy and uncertainty in
success research. This paper presents a formal method for mapping success using
the Neutrosophic Success Map, an emerging tool for processing indeterminacy
and uncertainty in success research. EIS success has been analyzed using this
tool.
|
Mathematical Models in Schema Theory
|
In this paper, a mathematical schema theory is developed. This theory has
three roots: brain theory schemas, grid automata, and block-schemas. In Section
2 of this paper, elements of the theory of grid automata necessary for the
mathematical schema theory are presented. In Section 3, elements of brain
theory necessary for the mathematical schema theory are presented. In Section
4, other types of schemas are considered. In Section 5, the mathematical schema
theory is developed. The achieved level of schema representation allows one to
model with mathematical tools virtually any type of schema considered before,
including schemas in neurophysiology, psychology, computer science, Internet
technology, databases, logic, and mathematics.
|
Truecluster: robust scalable clustering with model selection
|
Data-based classification is fundamental to most branches of science. While
recent years have brought enormous progress in various areas of statistical
computing and clustering, some general challenges in clustering remain: model
selection, robustness, and scalability to large datasets. We consider the
important problem of deciding on the optimal number of clusters, given an
arbitrary definition of space and clusteriness. We show how to construct a
cluster information criterion that allows objective model selection. Differing
from other approaches, our truecluster method does not require specific
assumptions about underlying distributions, dissimilarity definitions or
cluster models. Truecluster puts arbitrary clustering algorithms into a generic
unified (sampling-based) statistical framework. It is scalable to big datasets
and provides robust cluster assignments and case-wise diagnostics. Truecluster
will make clustering more objective, allow for automation, and save time and
costs. Free R software is available.
|
Divide-and-Evolve: a New Memetic Scheme for Domain-Independent Temporal
Planning
|
An original approach, termed Divide-and-Evolve, is proposed to hybridize
Evolutionary Algorithms (EAs) with Operational Research (OR) methods in the
domain of Temporal Planning Problems (TPPs). Whereas standard Memetic
Algorithms use local search methods to improve the evolutionary solutions, and
thus fail when the local method does not work on the complete problem, the
Divide-and-Evolve approach splits the problem at hand into several, hopefully
easier, sub-problems, and can thus globally solve problems that are intractable
when directly fed into deterministic OR algorithms. But the most prominent
advantage of the Divide-and-Evolve approach is that it immediately opens up an
avenue for multi-objective optimization, even though the OR method that is used
is single-objective. A proof of concept on the standard (single-objective) Zeno
transportation benchmark is given, and a small original multi-objective
benchmark is proposed in the same Zeno framework to assess the multi-objective
capabilities of the proposed methodology, a breakthrough in Temporal Planning.
|
Artificial and Biological Intelligence
|
This article considers evidence from the physical and biological sciences to
show that machines are deficient compared to biological systems at
incorporating intelligence. Machines fall short on two counts: firstly, unlike
brains, machines do not self-organize in a recursive manner; secondly, machines
are based on classical logic, whereas Nature's intelligence may depend on
quantum mechanics.
|
Certainty Closure: Reliable Constraint Reasoning with Incomplete or
Erroneous Data
|
Constraint Programming (CP) has proved an effective paradigm to model and
solve difficult combinatorial satisfaction and optimisation problems from
disparate domains. Many such problems arising from the commercial world are
permeated by data uncertainty. Existing CP approaches that accommodate
uncertainty are less suited to uncertainty arising due to incomplete and
erroneous data, because they do not build reliable models and solutions
guaranteed to address the user's genuine problem as she perceives it. Other
fields such as reliable computation offer combinations of models and associated
methods to handle these types of uncertain data, but lack an expressive
framework characterising the resolution methodology independently of the model.
We present a unifying framework that extends the CP formalism in both model
and solutions, to tackle ill-defined combinatorial problems with incomplete or
erroneous data. The certainty closure framework brings together modelling and
solving methodologies from different fields into the CP paradigm to provide
reliable and efficient approaches for uncertain constraint problems. We
demonstrate the applicability of the framework on a case study in network
diagnosis. We define resolution forms that give generic templates, and their
associated operational semantics, to derive practical solution methods for
reliable solutions.
|
Avoiding the Bloat with Stochastic Grammar-based Genetic Programming
|
The application of Genetic Programming to the discovery of empirical laws is
often impaired by the huge size of the search space, and consequently by the
computer resources needed. In many cases, the extreme demand for memory and CPU
is due to the massive growth of non-coding segments, the introns. The paper
presents a new program evolution framework which combines distribution-based
evolution, in the PBIL spirit, with grammar-based genetic programming; the
information is stored as a probability distribution on the grammar rules,
rather than in a population. Experiments on a real-world-like problem show that
this approach gives a practical solution to the problem of intron growth.
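The distribution-based scheme described above can be sketched as follows (an illustrative toy version, not the paper's system; the grammar, the fitness function and all parameter values are assumptions): instead of a population, the algorithm maintains a probability distribution over grammar rules, samples programs from it, and shifts the distribution toward the rules used by the best sample.

```python
import random

# Toy grammar for arithmetic expressions over x (grammar and parameters are illustrative)
GRAMMAR = {"E": [["E", "+", "E"], ["E", "*", "E"], ["x"], ["1"]]}

def sample(probs, rng, symbol="E", depth=0, max_depth=4, choices=None):
    """Sample a sentence; record which rule was chosen for each nonterminal expansion."""
    if choices is None:
        choices = []
    if symbol not in GRAMMAR:
        return [symbol], choices
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        idx = len(rules) - 1                    # force the last (terminal) rule when too deep
    else:
        idx = rng.choices(range(len(rules)), weights=probs[symbol])[0]
    choices.append((symbol, idx))
    tokens = []
    for s in rules[idx]:
        toks, _ = sample(probs, rng, s, depth + 1, max_depth, choices)
        tokens.extend(toks)
    return tokens, choices

def pbil_grammar_gp(fitness, generations=50, pop=30, lr=0.1, seed=0):
    """Keep a probability distribution over grammar rules instead of a population."""
    rng = random.Random(seed)
    probs = {nt: [1.0 / len(rs)] * len(rs) for nt, rs in GRAMMAR.items()}
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        batch = [sample(probs, rng) for _ in range(pop)]
        fits = [fitness(tokens) for tokens, _ in batch]
        i = max(range(pop), key=fits.__getitem__)
        if fits[i] > best_fit:
            best, best_fit = batch[i][0], fits[i]
        # PBIL-style update: shift rule probabilities toward the best sample's choices
        for nt, rules in GRAMMAR.items():
            used = [idx for (s, idx) in batch[i][1] if s == nt]
            for r in range(len(rules)):
                target = used.count(r) / len(used) if used else probs[nt][r]
                probs[nt][r] = (1 - lr) * probs[nt][r] + lr * target
    return best

# Example: fit an expression to f(x) = x*x + 1 on a few sample points
def fit(tokens, xs=(0.0, 1.0, 2.0, 3.0)):
    expr = "".join(tokens)
    return -sum((eval(expr, {"x": x}) - (x * x + 1)) ** 2 for x in xs)

print("".join(pbil_grammar_gp(fit)))
```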
|
Classifying Signals with Local Classifiers
|
This paper deals with the problem of classifying signals. A new method for
building so-called local classifiers and local features is presented. The
method is a combination of the lifting scheme and support vector machines.
Its main aim is to produce effective and yet comprehensible classifiers that
would help in understanding the processes hidden behind the classified signals.
To illustrate the method we present the results obtained on an artificial and a
real dataset.
|
Open Answer Set Programming with Guarded Programs
|
Open answer set programming (OASP) is an extension of answer set programming
where one may ground a program with an arbitrary superset of the program's
constants. We define a fixed point logic (FPL) extension of Clark's completion
such that open answer sets correspond to models of FPL formulas and identify a
syntactic subclass of programs, called (loosely) guarded programs. Whereas
reasoning with general programs in OASP is undecidable, the FPL translation of
(loosely) guarded programs falls in the decidable (loosely) guarded fixed point
logic (mu(L)GF). Moreover, we reduce normal closed ASP to loosely guarded OASP,
enabling, for the first time, a characterization of an answer set semantics by
muLGF formulas. We further extend the open answer set semantics for programs
with generalized literals. Such generalized programs (gPs) have interesting
properties, e.g., the ability to express infinity axioms. We restrict the
syntax of gPs such that both rules and generalized literals are guarded. Via a
translation to guarded fixed point logic, we deduce 2-exptime-completeness of
satisfiability checking in such guarded gPs (GgPs). Bound GgPs are restricted
GgPs with exptime-complete satisfiability checking, but still sufficiently
expressive to optimally simulate computation tree logic (CTL). We translate
Datalog lite programs to GgPs, establishing equivalence of GgPs under an open
answer set semantics, alternation-free muGF, and Datalog lite.
|
Metatheory of actions: beyond consistency
|
Consistency checking has been the only criterion for theory evaluation in
logic-based approaches to reasoning about actions. This work goes beyond that
and contributes to the metatheory of actions by investigating what other
properties a good domain description in reasoning about actions should have. We
state some metatheoretical postulates concerning this sore spot. When all
postulates are satisfied together, we have a modular action theory. Besides
being easier to understand and more elaboration tolerant in McCarthy's sense,
modular theories have interesting properties. We point out the problems that
arise when the postulates about modularity are violated, and propose
algorithmic checks that can help the designer of an action theory to overcome
them.
|
Estimation of linear, non-gaussian causal models in the presence of
confounding latent variables
|
The estimation of linear causal models (also known as structural equation
models) from data is a well-known problem which has received much attention in
the past. Most previous work has, however, made an explicit or implicit
assumption of gaussianity, limiting the identifiability of the models. We have
recently shown (Shimizu et al, 2005; Hoyer et al, 2006) that for non-gaussian
distributions the full causal model can be estimated in the no hidden variables
case. In this contribution, we discuss the estimation of the model when
confounding latent variables are present. Although in this case uniqueness is
no longer guaranteed, there is at most a finite set of models which can fit the
data. We develop an algorithm for estimating this set, and describe numerical
simulations which confirm the theoretical arguments and demonstrate the
practical viability of the approach. Full Matlab code is provided for all
simulations.
|
Application of Support Vector Regression to Interpolation of Sparse
Shock Physics Data Sets
|
Shock physics experiments are often complicated and expensive. As a result,
researchers are unable to conduct as many experiments as they would like,
leading to sparse data sets. In this paper, Support Vector Machines for
regression are applied to velocimetry data sets for shock damaged and melted
tin metal. Some success at interpolating between data sets is achieved.
Implications for future work are discussed.
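A minimal sketch of this kind of SVR interpolation using scikit-learn (illustrative only; the synthetic velocity curve, the RBF kernel choice and the hyperparameter values are assumptions, not the paper's setup).

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical sparse velocimetry-like data: velocity sampled at a few times
t = np.array([0.0, 0.4, 1.1, 2.0, 3.5, 5.0]).reshape(-1, 1)
v = np.array([0.0, 1.8, 2.6, 2.9, 3.0, 3.0])

# Epsilon-SVR with an RBF kernel; C and epsilon trade off smoothness against fit
model = SVR(kernel="rbf", C=100.0, epsilon=0.01, gamma=1.0).fit(t, v)

# Interpolate on a dense time grid between the sparse measurements
t_dense = np.linspace(0.0, 5.0, 200).reshape(-1, 1)
v_dense = model.predict(t_dense)
```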
|
Approximation Algorithms for K-Modes Clustering
|
In this paper, we study clustering with respect to the k-modes objective
function, a natural formulation of clustering for categorical data. One of the
main contributions of this paper is to establish the connection between k-modes
and k-median, i.e., the optimum of k-median is at most twice the optimum of
k-modes for the same categorical data clustering problem. Based on this
observation, we derive a deterministic algorithm that achieves an approximation
factor of 2. Furthermore, we prove that the distance measure in k-modes defines
a metric. Hence, we are able to extend existing approximation algorithms for
metric k-median to k-modes. Empirical results verify the superiority of our
method.
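For reference, the k-modes objective discussed above can be written as follows (standard formulation, not quoted from the paper): given categorical points $x_1, \dots, x_n$ with $A$ attributes, one seeks modes $m_1, \dots, m_k$ minimizing
\[
\min_{m_1,\dots,m_k} \sum_{i=1}^{n} \min_{1 \le j \le k} d(x_i, m_j),
\qquad
d(x, y) = \sum_{a=1}^{A} \mathbf{1}[x_a \neq y_a].
\]
Since $d$ is the Hamming (simple matching) distance, it is symmetric and satisfies the triangle inequality; this is the metric property that allows approximation algorithms for metric k-median to be carried over to k-modes.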
|
Can an Organism Adapt Itself to Unforeseen Circumstances?
|
A model of an organism as an autonomous intelligent system has been proposed.
This model was used to analyze the learning of an organism under various
environmental conditions. Learning processes were divided into two types:
strong and weak processes, taking place in the absence and in the presence of a
priori information about an object, respectively. Weak learning is synonymous
with adaptation, in which a priori programs already available in a system (an
organism) are started. It was shown that strong learning is impossible for both
an organism and any autonomous intelligent system. It was also shown that the
knowledge base of an organism cannot be updated. Therefore, all behavior
programs of an organism are congenital. A model of a conditioned reflex as a
series of consecutive measurements of environmental parameters has been
advanced. Repeated measurements are necessary in this case to reduce the error
during decision making.
|
Adaptative combination rule and proportional conflict redistribution
rule for information fusion
|
This paper presents two new promising rules of combination for the fusion of
uncertain and potentially highly conflicting sources of evidence in the
framework of the theory of belief functions, in order to mitigate the
well-known limitations of Dempster's rule and to work beyond the limits of
applicability of Dempster-Shafer theory. We present both a new class of
Adaptive Combination Rules (ACR) and a new efficient Proportional Conflict
Redistribution (PCR) rule that allow dealing with highly conflicting sources
for static and dynamic fusion applications.
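For background, Dempster's rule, whose limitations motivate the new rules above, combines two mass functions $m_1$ and $m_2$ as (standard Dempster-Shafer formula, not one of the rules introduced in this paper)
\[
m(A) = \frac{1}{1-K}\sum_{B \cap C = A} m_1(B)\, m_2(C) \quad (A \neq \emptyset),
\qquad
K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),
\]
where $K$ is the total conflicting mass; the adaptive and proportional conflict redistribution rules differ precisely in how this conflicting mass is redistributed instead of being normalized away.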
|
Retraction and Generalized Extension of Computing with Words
|
Fuzzy automata, whose input alphabet is a set of numbers or symbols, are a
formal model of computing with values. Motivated by Zadeh's paradigm of
computing with words rather than numbers, Ying proposed a kind of fuzzy
automata, whose input alphabet consists of all fuzzy subsets of a set of
symbols, as a formal model of computing with all words. In this paper, we
introduce a somewhat general formal model of computing with (some special)
words. The new features of the model are that the input alphabet only comprises
some (not necessarily all) fuzzy subsets of a set of symbols and the fuzzy
transition function can be specified arbitrarily. By employing the methodology
of fuzzy control, we establish a retraction principle from computing with words
to computing with values for handling crisp inputs and a generalized extension
principle from computing with words to computing with all words for handling
fuzzy inputs. These principles show that computing with values and computing
with all words can be respectively implemented by computing with words. Some
algebraic properties of retractions and generalized extensions are addressed as
well.
|
A Knowledge-Based Approach for Selecting Information Sources
|
Through the Internet and the World-Wide Web, a vast number of information
sources has become available, which offer information on various subjects by
different providers, often in heterogeneous formats. This calls for tools and
methods for building an advanced information-processing infrastructure. One
issue in this area is the selection of suitable information sources in query
answering. In this paper, we present a knowledge-based approach to this
problem, in the setting where one among a set of information sources
(prototypically, data repositories) should be selected for evaluating a user
query. We use extended logic programs (ELPs) to represent rich descriptions of
the information sources, an underlying domain theory, and user queries in a
formal query language (here, XML-QL, but other languages can be handled as
well). Moreover, we use ELPs for declarative query analysis and generation of a
query description. Central to our approach are declarative source-selection
programs, for which we define syntax and semantics. Due to the structured
nature of the considered data items, the semantics of such programs must
carefully respect implicit context information in source-selection rules, and
furthermore combine it with possible user preferences. A prototype
implementation of our approach has been realized exploiting the DLV KR system
and its plp front-end for prioritized ELPs. We describe a representative
example involving specific movie databases, and report about experimental
results.
|
Perspective alignment in spatial language
|
It is well known that perspective alignment plays a major role in the
planning and interpretation of spatial language. In order to understand the
role of perspective alignment and the cognitive processes involved, we have
built precise, complete cognitive models of situated embodied agents that
self-organise a communication system for dialoguing about the position and
movement of real-world objects in their immediate surroundings. We show in a
series of robotic experiments which cognitive mechanisms are necessary and
sufficient to achieve successful spatial language and why and how perspective
alignment can take place, either implicitly or based on explicit marking.
|
Reasoning and Planning with Sensing Actions, Incomplete Information, and
Static Causal Laws using Answer Set Programming
|
We extend the 0-approximation of sensing actions and incomplete information
in [Son and Baral 2000] to action theories with static causal laws and prove
its soundness with respect to the possible world semantics. We also show that
the conditional planning problem with respect to this approximation is
NP-complete. We then present an answer set programming based conditional
planner, called ASCP, that is capable of generating both conformant plans and
conditional plans in the presence of sensing actions, incomplete information
about the initial state, and static causal laws. We prove the correctness of
our implementation and argue that our planner is sound and complete with
respect to the proposed approximation. Finally, we present experimental results
comparing ASCP to other planners.
|
Approximate Discrete Probability Distribution Representation using a
Multi-Resolution Binary Tree
|
Computing and storing probabilities is a hard problem as soon as one has to
deal with complex distributions over multiple random variables. The problem of
efficiently representing probability distributions is central for computational
efficiency in the field of probabilistic reasoning. The main problem arises
when dealing with joint probability distributions over a set of random
variables: they are always represented using huge probability arrays. In this
paper, a new method based on a binary-tree representation is introduced in
order to store very large joint distributions efficiently. Our approach
approximates any multidimensional joint distribution using an adaptive
discretization of the space: the lower the probability mass of a particular
region of the feature space, the larger the discretization step. This
assumption leads to a representation that is very economical in terms of time
and memory. Other advantages of our approach are the ability to refine the
distribution dynamically whenever needed, leading to a more accurate
representation of the probability distribution, and an anytime representation
of the distribution.
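A minimal sketch of such an adaptive binary-tree discretization (an illustrative reading of the idea, not the paper's data structure; the splitting threshold, the maximum depth and the example mass function are assumptions): a region is split recursively, and regions carrying little probability mass are kept coarse.

```python
def build_tree(mass, box, dim=0, threshold=0.05, max_depth=8, depth=0):
    """Recursively discretize a box; refine only where the probability mass is large.

    mass(box) -> approximate probability mass of the (hyper)box.
    box: list of (lo, hi) intervals. Returns a nested dict (leaves store their mass).
    """
    m = mass(box)
    if m < threshold or depth >= max_depth:
        return {"box": box, "mass": m}              # coarse leaf: low mass or max depth
    lo, hi = box[dim]
    mid = (lo + hi) / 2.0
    next_dim = (dim + 1) % len(box)                 # alternate the split dimension
    left = box[:dim] + [(lo, mid)] + box[dim + 1:]
    right = box[:dim] + [(mid, hi)] + box[dim + 1:]
    return {"box": box,
            "left": build_tree(mass, left, next_dim, threshold, max_depth, depth + 1),
            "right": build_tree(mass, right, next_dim, threshold, max_depth, depth + 1)}

def query(node, point):
    """Return the stored mass of the leaf region containing the point."""
    while "mass" not in node:
        left = node["left"]
        # the left child differs from its parent in exactly one dimension's bounds
        d = next(i for i, (a, b) in enumerate(zip(node["box"], left["box"])) if a != b)
        node = left if point[d] <= left["box"][d][1] else node["right"]
    return node["mass"]

# Example: mass under a uniform distribution on [0,1]^2 (hypothetical density)
area = lambda b: (max(0.0, min(b[0][1], 1) - max(b[0][0], 0)) *
                  max(0.0, min(b[1][1], 1) - max(b[1][0], 0)))
tree = build_tree(area, [(0.0, 2.0), (0.0, 2.0)])
```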
|
Diagnosability of Fuzzy Discrete Event Systems
|
In order to cope more effectively with real-world problems of vagueness, {\it
fuzzy discrete event systems} (FDESs) were proposed recently, and the
supervisory control theory of FDESs was developed. In view of the importance of
failure diagnosis, in this paper we present an approach to failure diagnosis in
the framework of FDESs. More specifically: (1) We formalize the definition of
diagnosability for FDESs, in which the observable set and the failure set of
events are {\it fuzzy}; that is, each event has a certain degree of being
observable or unobservable, and each event may also have a different
possibility of failure. (2) Through the construction of observability-based
diagnosers for FDESs, we investigate some of their basic properties. In
particular, we present a necessary and sufficient condition for the
diagnosability of FDESs. (3) Some examples serving to illustrate the
application of the diagnosability of FDESs are described. To conclude, some
related issues are raised for further consideration.
|
Classification of Ordinal Data
|
Classification of ordinal data is one of the most important tasks of relation
learning. In this thesis a novel framework for ordered classes is proposed. The
technique reduces the problem of classifying ordered classes to the standard
two-class problem. The introduced method is then mapped into support vector
machines and neural networks. Compared with a well-known approach using
pairwise objects as training samples, the new algorithm has a reduced
complexity and training time. A second novel model, the unimodal model, is also
introduced and a parametric version is mapped into neural networks. Several
case studies are presented to assess the validity of the proposed models.
|
Imagination as Holographic Processor for Text Animation
|
Imagination is a critical point in the development of realistic artificial
intelligence (AI) systems. One way to approach imagination is to simulate its
properties and operations. We developed two models, the AI-Brain Network
Hierarchy of Languages and Semantical Holographic Calculus, as well as the
simulation system ScriptWriter, which emulates the process of imagination
through the automatic animation of English texts. The purpose of this paper is
to demonstrate the model and to present the ScriptWriter system
http://nvo.sdsc.edu/NVO/JCSG/get_SRB_mime_file2.cgi//home/tamara.sdsc/test/demo.zip?F=/home/tamara.sdsc/test/demo.zip&M=application/x-gtar
for simulating imagination.
|
Belief Calculus
|
In Dempster-Shafer belief theory, general beliefs are expressed as belief
mass distribution functions over frames of discernment. In Subjective Logic
beliefs are expressed as belief mass distribution functions over binary frames
of discernment. Belief representations in Subjective Logic, which are called
opinions, also contain a base rate parameter which expresses the a priori belief
in the absence of evidence. Philosophically, beliefs are quantitative
representations of evidence as perceived by humans or by other intelligent
agents. The basic operators of classical probability calculus, such as addition
and multiplication, can be applied to opinions, thereby making belief calculus
practical. Through the equivalence between opinions and Beta probability
density functions, this also provides a calculus for Beta probability density
functions. This article explains the basic elements of belief calculus.
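As a concrete illustration (standard Subjective Logic relations as commonly stated, not quoted from this article): a binomial opinion $\omega = (b, d, u, a)$ with $b + d + u = 1$ has expected probability and equivalent Beta parameters
\[
\mathrm{E}(\omega) = b + a\,u, \qquad
\alpha = \frac{2b}{u} + 2a, \qquad \beta = \frac{2d}{u} + 2(1-a),
\]
which is the opinion-Beta equivalence through which a calculus for Beta probability density functions is obtained.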
|
The Cumulative Rule for Belief Fusion
|
The problem of combining beliefs in the Dempster-Shafer belief theory has
attracted considerable attention over the last two decades. The classical
Dempster's Rule has often been criticised, and many alternative rules for
belief combination have been proposed in the literature. The consensus operator
for combining beliefs has nice properties and produces more intuitive results
than Dempster's rule, but has the limitation that it can only be applied to
belief distribution functions on binary state spaces. In this paper we present
a generalisation of the consensus operator that can be applied to Dirichlet
belief functions on state spaces of arbitrary size. This rule, called the
cumulative rule of belief combination, can be derived from classical
statistical theory, and corresponds well with human intuition.
|
New Millennium AI and the Convergence of History
|
Artificial Intelligence (AI) has recently become a real formal science: the
new millennium brought the first mathematically sound, asymptotically optimal,
universal problem solvers, providing a new, rigorous foundation for the
previously largely heuristic field of General AI and embedded agents. At the
same time there has been rapid progress in practical methods for learning true
sequence-processing programs, as opposed to traditional methods limited to
stationary pattern association. Here we will briefly review some of the new
results, and speculate about future developments, pointing out that the time
intervals between the most notable events in over 40,000 years or 2^9 lifetimes
of human history have sped up exponentially, apparently converging to zero
within the next few decades. Or is this impression just a by-product of the way
humans allocate memory space to past events?
|
Belief Conditioning Rules (BCRs)
|
In this paper we propose a new family of Belief Conditioning Rules (BCRs) for
belief revision. These rules are not directly related to the fusion of
several sources of evidence, but to the revision of a belief assignment
available at a given time according to the new truth (i.e. conditioning
constraint) one has about the space of solutions of the problem.
|
Islands for SAT
|
In this note we introduce the notion of islands for restricting local search.
We show how we can construct islands for CNF SAT problems, and how much search
space can be eliminated by restricting search to the island.
|
About Norms and Causes
|
Knowing the norms of a domain is crucial, but there exists no repository of
norms. We propose a method to extract them from texts: texts generally do not
describe a norm, but rather how a state-of-affairs differs from it. Answers
concerning the cause of the state-of-affairs described often reveal the
implicit norm. We apply this idea to the domain of driving, and validate it by
designing algorithms that identify, in a text, the "basic" norms to which it
refers implicitly.
|
Representing Knowledge about Norms
|
Norms are essential to extend inference: inferences based on norms are far
richer than those based on logical implications. In recent decades, much
effort has been devoted to reasoning on a domain once its norms are
represented. How to extract and express those norms has received far less
attention. Extraction is difficult: since readers are supposed to know them,
the norms of a domain are seldom made explicit. For one thing, extracting norms
requires a
language to represent them, and this is the topic of this paper. We apply this
language to represent norms in the domain of driving, and show that it is
adequate to reason on the causes of accidents, as described by car-crash
reports.
|
Target Type Tracking with PCR5 and Dempster's rules: A Comparative
Analysis
|
In this paper we consider and analyze the behavior of two combination rules
for temporal (sequential) attribute data fusion for target type estimation. Our
comparative analysis is based on Dempster's fusion rule proposed in
Dempster-Shafer Theory (DST) and on the Proportional Conflict Redistribution
rule no. 5 (PCR5) recently proposed in Dezert-Smarandache Theory (DSmT). We
show, through a very simple scenario and Monte Carlo simulation, how PCR5
allows very efficient Target Type Tracking and drastically reduces the latency
delay for a correct Target Type decision with respect to Dempster's rule. For
cases presenting short Target Type switches, Dempster's rule proves unable to
detect the switches and thus to correctly track the Target Type changes. The
approach proposed here is new, efficient and promising for incorporation in
real-time Generalized Data Association - Multi Target Tracking (GDA-MTT)
systems, and provides an important result on the behavior of PCR5 with respect
to Dempster's rule. The MatLab source code is provided in
|
Fusion of qualitative beliefs using DSmT
|
This paper introduces the notion of qualitative belief assignment to model
beliefs of human experts expressed in natural language (with linguistic
labels). We show how qualitative beliefs can be efficiently combined using an
extension of Dezert-Smarandache Theory (DSmT) of plausible and paradoxical
quantitative reasoning to qualitative reasoning. We propose a new arithmetic on
linguistic labels which allows a direct extension of classical DSm fusion rule
or DSm Hybrid rules. An approximate qualitative PCR5 rule is also proposed
jointly with a Qualitative Average Operator. We also show how crisp or interval
mappings can be used to deal indirectly with linguistic labels. A very simple
example is provided to illustrate our qualitative fusion rules.
|
An Introduction to the DSm Theory for the Combination of Paradoxical,
Uncertain, and Imprecise Sources of Information
|
The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or highly conflicting sources of information has always been, and
still remains today, of primary importance for the development of reliable
modern information systems involving artificial reasoning. In this
introduction, we present a survey of our recent theory of plausible and
paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the
literature, developed for dealing with imprecise, uncertain and paradoxical
sources of information. We focus our presentation here rather on the
foundations of DSmT, and on the two important new rules of combination, than on
browsing specific applications of DSmT available in literature. Several simple
examples are given throughout the presentation to show the efficiency and the
generality of this new approach.
|
Relation Variables in Qualitative Spatial Reasoning
|
We study an alternative to the prevailing approach to modelling qualitative
spatial reasoning (QSR) problems as constraint satisfaction problems. In the
standard approach, a relation between objects is a constraint whereas in the
alternative approach it is a variable. The relation-variable approach greatly
simplifies integration and implementation of QSR. To substantiate this point,
we discuss several QSR algorithms from the literature which in the
relation-variable approach reduce to the customary constraint propagation
algorithm enforcing generalised arc-consistency.
|
Using Sets of Probability Measures to Represent Uncertainty
|
I explore the use of sets of probability measures as a representation of
uncertainty.
|
A State-Based Regression Formulation for Domains with Sensing Actions and
Incomplete Information
|
We present a state-based regression function for planning domains where an
agent does not have complete information and may have sensing actions. We
consider binary domains and employ a three-valued characterization of domains
with sensing actions to define the regression function. We prove the soundness
and completeness of our regression formulation with respect to the definition
of progression. More specifically, we show that (i) a plan obtained through
regression for a planning problem is indeed a progression solution of that
planning problem, and that (ii) for each plan found through progression, using
regression one obtains that plan or an equivalent one.
|
Semantic Description of Parameters in Web Service Annotations
|
A modification of OWL-S regarding parameter description is proposed. It is
strictly based on Description Logic. In addition to class description of
parameters it also allows the modelling of relations between parameters and the
precise description of the size of data to be supplied to a service. In
particular, it solves two major issues identified within current proposals for
a Semantic Web Service annotation standard.
|
The ALVIS Format for Linguistically Annotated Documents
|
The paper describes the ALVIS annotation format designed for the indexing of
large collections of documents in topic-specific search engines. The format is
exemplified on the biological domain and on MedLine abstracts, as developing a
specialized search engine for biologists is one of the ALVIS case studies. The
ALVIS principle for linguistic annotations is based on existing works and
standard propositions. We made the choice of stand-off annotations rather than
inserted mark-up. Annotations are encoded as XML elements which form the
linguistic subsection of the document record.
|
Modular self-organization
|
The aim of this paper is to provide a sound framework for addressing a
difficult problem: the automatic construction of an autonomous agent's modular
architecture. We combine results from two apparently uncorrelated domains:
Autonomous planning through Markov Decision Processes and a General Data
Clustering Approach using a kernel-like method. Our fundamental idea is that
the former is a good framework for addressing autonomy whereas the latter
allows us to tackle self-organizing problems.
|
A Typed Hybrid Description Logic Programming Language with Polymorphic
Order-Sorted DL-Typed Unification for Semantic Web Type Systems
|
In this paper we elaborate on a specific application in the context of hybrid
description logic programs (hybrid DLPs), namely description logic Semantic Web
type systems (DL-types) which are used for term typing of LP rules based on a
polymorphic, order-sorted, hybrid DL-typed unification as procedural semantics
of hybrid DLPs. Using Semantic Web ontologies as type systems facilitates
interchange of domain-independent rules over domain boundaries via dynamic
typing and mapping of explicitly defined type ontologies.
|
Why did the accident happen? A norm-based reasoning approach
|
In this paper we describe the architecture of a system that answers the
question "Why did the accident happen?" from the textual description of an
accident. We briefly present the different parts of the architecture and then
describe in more detail the semantic part of the system, i.e. the part in
which the norm-based reasoning is performed on the explicit knowledge extracted
from the text.
|
An Experiment in Inferential Semantics
|
We develop a system which must be able to perform the same inferences that a
human reader of an accident report can make, and more particularly to determine
the apparent causes of the accident. We describe the general framework in which
our work is situated, the linguistic and semantic levels of the analysis, and
the inference rules used by the system.
|
Farthest-Point Heuristic based Initialization Methods for K-Modes
Clustering
|
The k-modes algorithm has become a popular technique in solving categorical
data clustering problems in different application domains. However, the
algorithm requires random selection of initial points for the clusters.
Different initial points often lead to considerable distinct clustering
results. In this paper we present an experimental study on applying a
farthest-point heuristic based initialization method to k-modes clustering to
improve its performance. Experiments show that the new initialization method
leads to better clustering accuracy than the random selection initialization
method for k-modes clustering.
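A minimal sketch of the general idea, farthest-point seeding with a simple matching dissimilarity on categorical records, is given below; the choice of the first seed and the tie-breaking are illustrative assumptions, not the paper's specification.

```python
# Sketch of farthest-point initialization for k-modes on categorical data.
# The matching dissimilarity counts attribute mismatches; the first seed and
# tie-breaking choices here are illustrative assumptions.

def matching_dissimilarity(x, y):
    return sum(1 for a, b in zip(x, y) if a != b)

def farthest_point_init(data, k):
    # Start from the first record (an arbitrary but deterministic choice).
    centers = [data[0]]
    while len(centers) < k:
        # Pick the record whose distance to its nearest chosen center is largest.
        next_center = max(
            data,
            key=lambda x: min(matching_dissimilarity(x, c) for c in centers),
        )
        centers.append(next_center)
    return centers

if __name__ == "__main__":
    records = [
        ("red",  "small", "metal"),
        ("red",  "small", "wood"),
        ("blue", "large", "wood"),
        ("blue", "large", "metal"),
    ]
    print(farthest_point_init(records, 2))
```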
|
Comparing Typical Opening Move Choices Made by Humans and Chess Engines
|
The opening book is an important component of a chess engine, and thus
computer chess programmers have been developing automated methods to improve
the quality of their books. For chess, which has a very rich opening theory,
large databases of high-quality games can be used as the basis of an opening
book, from which statistics relating to move choices from given positions can
be collected. In order to find out whether the opening books used by modern
chess engines in machine versus machine competitions are ``comparable'' to
those used by chess players in human versus human competitions, we carried out
an analysis of 26 test positions using statistics from two opening books, one
compiled from humans' games and the other from machines' games. Our analysis,
using several nonparametric measures, shows that, overall, there is a strong
association between humans' and machines' choices of opening moves when using a
book to guide their choices.
|
Local approximate inference algorithms
|
We present a new local approximation algorithm for computing Maximum a
Posteriori (MAP) and log-partition function for arbitrary exponential family
distribution represented by a finite-valued pair-wise Markov random field
(MRF), say $G$. Our algorithm is based on decomposing $G$ into {\em
appropriately} chosen small components, computing estimates locally in each of
these components, and then producing a {\em good} global solution. We show that
if the underlying graph $G$ either excludes some finite-sized graph as its
minor (e.g. planar graphs) or has low doubling dimension (e.g. any graph with
{\em geometry}), then our algorithm will produce a solution to both questions
within {\em arbitrary accuracy}. We present a message-passing implementation of
our algorithm for MAP computation using the self-avoiding walk tree of the
graph. In order to evaluate the computational cost of this implementation, we
derive novel tight bounds on the size of self-avoiding walk tree for arbitrary
graph.
As a consequence of our algorithmic result, we show that the normalized
log-partition function (also known as free-energy) for a class of {\em regular}
MRFs will converge to a limit that is computable to arbitrary accuracy.
|
Constant for associative patterns ensemble
|
A procedure for creating an ensemble of associative patterns, formulated in
terms of formal logic using a neural network (NN) model, is presented. It is
shown that the set of associative patterns is created by means of a unique NN
procedure that has individual parameters for the transformation of the input
stimulus. It is ascertained that the number of selected associative patterns is
a constant.
|
Adaptation Knowledge Discovery from a Case Base
|
In case-based reasoning, the adaptation step depends in general on
domain-dependent knowledge, which motivates studies on adaptation knowledge
acquisition (AKA). CABAMAKA is an AKA system based on principles of knowledge
discovery from databases. This system explores the variations within the case
base to elicit adaptation knowledge. It has been successfully tested in an
application of case-based decision support to breast cancer treatment.
|
Decentralized Failure Diagnosis of Stochastic Discrete Event Systems
|
Recently, the diagnosability of {\it stochastic discrete event systems}
(SDESs) was investigated in the literature, and the failure diagnosis
considered was {\it centralized}. In this paper, we propose an approach to {\it
decentralized} failure diagnosis of SDESs, where the stochastic system uses
multiple local diagnosers to detect failures and each local diagnoser possesses
its own information. In a way, the centralized failure diagnosis of SDESs can
be viewed as a special case of the decentralized failure diagnosis presented in
this paper with only one projection. The main contributions are as follows: (1)
We formalize the notion of codiagnosability for stochastic automata, which
means that a failure can be detected by at least one local stochastic diagnoser
within a finite delay. (2) We construct a codiagnoser from a given stochastic
automaton with multiple projections, and the codiagnoser associated with the
local diagnosers is used to test the codiagnosability condition of SDESs. (3) We
deal with a number of basic properties of the codiagnoser. In particular, a
necessary and sufficient condition for the codiagnosability of SDESs is
presented. (4) We give a detailed computing method to check whether
codiagnosability is violated. And (5) some examples are described to illustrate
the applications of codiagnosability and its computing method.
|
DSmT: A new paradigm shift for information fusion
|
The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or highly conflicting sources of information has always been and
still remains of primal importance for the development of reliable information
fusion systems. In this short survey paper, we present the theory of plausible
and paradoxical reasoning, known as DSmT (Dezert-Smarandache Theory) in
literature, developed for dealing with imprecise, uncertain and potentially
highly conflicting sources of information. DSmT is a new paradigm shift for
information fusion and recent publications have shown the interest and the
potential ability of DSmT to solve fusion problems where Dempster's rule used
in Dempster-Shafer Theory (DST) provides counter-intuitive results or fails to
provide a useful result at all. This paper is focused on the foundations of DSmT
and on its main rules of combination (classic, hybrid and Proportional Conflict
Redistribution rules). Shafer's model, on which DST is based, appears as a
particular and specific case of the DSm hybrid model, which can easily be
handled by DSmT as well. Several simple but illustrative examples are given throughout
this paper to show the interest and the generality of this new theory.
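For concreteness, the sketch below re-derives the usual two-source PCR5 combination on a toy frame {A, B} under Shafer's model (conflicting products redistributed back to the two focal elements involved, proportionally to their masses); it is an illustration, not code from the paper.

```python
# Sketch of two-source PCR5 combination on the frame {A, B} under Shafer's
# model (A and B disjoint), with focal elements A, B and A∪B ("AUB").

def conjunction(x, y):
    # Intersection of focal elements on the frame {A, B}.
    if x == y or y == "AUB":
        return x
    if x == "AUB":
        return y
    return None  # A ∩ B = ∅ under Shafer's model

def pcr5(m1, m2):
    elements = ["A", "B", "AUB"]
    m12 = {e: 0.0 for e in elements}
    for x in elements:
        for y in elements:
            product = m1[x] * m2[y]
            inter = conjunction(x, y)
            if inter is not None:
                # Conjunctive consensus part.
                m12[inter] += product
            else:
                # Conflicting mass m1(x)m2(y) is redistributed back to x and y
                # proportionally to m1(x) and m2(y).
                denom = m1[x] + m2[y]
                if denom > 0:
                    m12[x] += m1[x] * product / denom
                    m12[y] += m2[y] * product / denom
    return m12

if __name__ == "__main__":
    m1 = {"A": 0.6, "B": 0.3, "AUB": 0.1}
    m2 = {"A": 0.2, "B": 0.7, "AUB": 0.1}
    result = pcr5(m1, m2)
    print(result, "sum =", round(sum(result.values()), 10))  # masses still sum to 1
```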
|
The Reaction RuleML Classification of the Event / Action / State
Processing and Reasoning Space
|
Reaction RuleML is a general, practical, compact and user-friendly
XML-serialized language for the family of reaction rules. In this white paper
we give a review of the history of event / action / state processing and
reaction rule approaches and systems in different domains, define basic
concepts and give a classification of the event, action, state processing and
reasoning space, as well as a discussion of relevant / related work.
|
Fuzzy Logic Classification of Imaging Laser Desorption Fourier Transform
Mass Spectrometry Data
|
A fuzzy logic based classification engine has been developed for classifying
mass spectra obtained with an imaging internal source Fourier transform mass
spectrometer (I^2LD-FTMS). Traditionally, an operator uses the relative
abundance of ions with specific mass-to-charge (m/z) ratios to categorize
spectra. An operator does this by comparing the spectrum of m/z versus
abundance of an unknown sample against a library of spectra from known samples.
Automated positioning and acquisition allow I^2LD-FTMS to acquire data from
very large grids, which would require classification of up to 3600 spectra per
hour to keep pace with the acquisition. The tedious job of classifying numerous
spectra generated in an I^2LD-FTMS imaging application can be replaced by a
fuzzy rule base if the cues an operator uses can be encapsulated. We present
the translation of linguistic rules to a fuzzy classifier for mineral phases in
basalt. This paper also describes a method for gathering statistics on ions,
which are not currently used in the rule base, but which may be candidates for
making the rule base more accurate and complete or to form new rule bases based
on data obtained from known samples. A spatial method for classifying spectra
with low membership values, based on neighboring sample classifications, is
also presented.
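To make the rule-based idea concrete, here is a generic sketch of a single fuzzy rule over peak abundances; the linguistic terms, m/z values and membership functions are made up for illustration and are not the paper's rule base for basalt mineral phases.

```python
# Generic sketch of a fuzzy rule for spectrum classification: membership of a
# peak's relative abundance in a linguistic term, combined with min (AND).
# All terms, m/z values and thresholds below are illustrative assumptions.

def triangular(x, a, b, c):
    # Triangular membership function with support [a, c] and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(spectrum):
    # spectrum: dict mapping m/z -> relative abundance in [0, 1]
    high_56 = triangular(spectrum.get(56.0, 0.0), 0.4, 0.8, 1.0)
    low_24 = 1.0 - triangular(spectrum.get(24.0, 0.0), 0.2, 0.6, 1.0)
    # Rule: IF abundance(56) is HIGH AND abundance(24) is LOW THEN "phase X"
    membership = min(high_56, low_24)
    return {"phase X": membership, "unclassified": 1.0 - membership}

if __name__ == "__main__":
    print(classify({56.0: 0.75, 24.0: 0.05}))
```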
|
A Neutrosophic Description Logic
|
Description Logics (DLs) are appropriate, widely used, logics for managing
structured knowledge. They allow reasoning about individuals and concepts, i.e.
sets of individuals with common properties. Typically, DLs are limited to
dealing with crisp, well-defined concepts, that is, concepts for which the
question whether an individual is an instance of them is a yes/no question.
More often than not, the concepts encountered in the real world do not have
precisely defined criteria of membership: we may say that an individual is an
instance of a concept only to a certain degree, depending on the individual's
properties. The DLs that deal with such fuzzy concepts are called fuzzy DLs. In
order to deal with fuzzy, incomplete, indeterminate and inconsistent concepts,
we need to extend the fuzzy DLs, combining the neutrosophic logic with a
classical DL. In particular, concepts become neutrosophic (here neutrosophic
means fuzzy, incomplete, indeterminate, and inconsistent), thus reasoning about
neutrosophic concepts is supported. We define the syntax and semantics of this
neutrosophic DL and describe its properties.
|
Genetic Programming for Kernel-based Learning with Co-evolving Subsets
Selection
|
Support Vector Machines (SVMs) are well-established Machine Learning (ML)
algorithms. They rely on the fact that i) linear learning can be formalized as
a well-posed optimization problem; ii) non-linear learning can be brought into
linear learning thanks to the kernel trick and the mapping of the initial
search space onto a high dimensional feature space. The kernel is designed by
the ML expert and it governs the efficiency of the SVM approach. In this paper,
a new approach for the automatic design of kernels by Genetic Programming,
called the Evolutionary Kernel Machine (EKM), is presented. EKM combines a
well-founded fitness function inspired from the margin criterion, and a
co-evolution framework ensuring the computational scalability of the approach.
Empirical validation on standard ML benchmarks demonstrates that EKM is
competitive with state-of-the-art SVMs with tuned hyper-parameters.
|
Functional Brain Imaging with Multi-Objective Multi-Modal Evolutionary
Optimization
|
Functional brain imaging is a source of spatio-temporal data mining problems.
A new framework hybridizing multi-objective and multi-modal optimization is
proposed to formalize these data mining problems, and addressed through
Evolutionary Computation (EC). The merits of EC for spatio-temporal data mining
are demonstrated as the approach facilitates the modelling of the experts'
requirements, and flexibly accommodates their changing goals.
|
A Generic Global Constraint based on MDDs
|
The paper suggests the use of Multi-Valued Decision Diagrams (MDDs) as the
supporting data structure for a generic global constraint. We give an algorithm
for maintaining generalized arc consistency (GAC) on this constraint that
amortizes the cost of the GAC computation over a root-to-terminal path in the
search tree. The technique used is an extension of the GAC algorithm for the
regular language constraint on finite length input. Our approach adds support
for skipped variables, maintains the reduced property of the MDD dynamically
and provides domain entailment detection. Finally we also show how to adapt the
approach to constraint types that are closely related to MDDs, such as AOMDDs
and Case DAGs.
|
Conscious Intelligent Systems - Part 1 : I X I
|
Did natural consciousness and intelligent systems arise out of a path that
was co-evolutionary to evolution? Can we explain human self-consciousness as
having risen out of such an evolutionary path? If so how could it have been?
In this first part of a two-part paper (titled IXI), we take a learning
system perspective to the problem of consciousness and intelligent systems, an
approach that may look unseasonable in this age of fMRIs and high-tech
neuroscience.
We posit conscious intelligent systems in natural environments and wonder how
natural factors influence their design paths. Such a perspective allows us to
explain seamlessly a variety of natural factors, factors ranging from the rise
and presence of the human mind, man's sense of I, his self-consciousness and
his looping thought processes to factors like reproduction, incubation,
extinction, sleep, the richness of natural behavior, etc. It even allows us to
speculate on a possible human evolution scenario and other natural phenomena.
|
Conscious Intelligent Systems - Part II - Mind, Thought, Language and
Understanding
|
This is the second part of a paper on Conscious Intelligent Systems. We use
the understanding gained in the first part (Conscious Intelligent Systems Part
1: IXI (arxiv id cs.AI/0612056)) to look at understanding. We see how the
presence of mind affects understanding and intelligent systems; we see that the
presence of mind necessitates language. The rise of language in turn has
important effects on understanding. We discuss the humanoid question and how
the question of self-consciousness (and by association mind/thought/language)
would affect humanoids too.
|
Interactive Configuration by Regular String Constraints
|
A product configurator which is complete, backtrack free and able to compute
the valid domains at any state of the configuration can be constructed by
building a Binary Decision Diagram (BDD). Despite the fact that the size of the
BDD is exponential in the number of variables in the worst case, BDDs have
proved to work very well in practice. Current BDD-based techniques can only
handle interactive configuration with small finite domains. In this paper we
extend the approach to handle string variables constrained by regular
expressions. The user is allowed to change the strings by adding letters at the
end of the string. We show how to make a data structure that can perform fast
valid domain computations given some assignment on the set of string variables.
We first show how to do this by using one large DFA. Since this approach is
too space consuming to be of practical use, we construct a data structure that
simulates the large DFA and is, in most practical cases, much more space
efficient. As an example, for a configuration problem on $n$ string variables
with only one solution, in which each string variable is assigned a value of
length $k$, the former structure will use $\Omega(k^n)$ space whereas the
latter only needs $O(kn)$. We also show how this framework can easily be
combined with the recent BDD techniques to allow boolean, integer and string
variables in the configuration problem.
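To make the quoted space gap concrete, the small script below just evaluates $k^n$ versus $k \cdot n$ for a few illustrative sizes (the numbers are arbitrary, not taken from the paper).

```python
# Illustration of the Omega(k^n) vs O(kn) gap quoted above, for small sizes.
for n, k in [(3, 10), (5, 10), (8, 10)]:
    print(f"n={n}, k={k}: monolithic DFA ~ k^n = {k**n:,} states "
          f"vs simulated structure ~ k*n = {k*n}")
```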
|
Truncating the loop series expansion for Belief Propagation
|
Recently, M. Chertkov and V.Y. Chernyak derived an exact expression for the
partition sum (normalization constant) corresponding to a graphical model,
which is an expansion around the Belief Propagation solution. By adding
correction terms to the BP free energy, one for each "generalized loop" in the
factor graph, the exact partition sum is obtained. However, the usually
enormous number of generalized loops generally prohibits summation over all
correction terms. In this article we introduce Truncated Loop Series BP
(TLSBP), a particular way of truncating the loop series of M. Chertkov and V.Y.
Chernyak by considering generalized loops as compositions of simple loops. We
analyze the performance of TLSBP in different scenarios, including the Ising
model, regular random graphs and on Promedas, a large probabilistic medical
diagnostic system. We show that TLSBP often improves upon the accuracy of the
BP solution, at the expense of increased computation time. We also show that
the performance of TLSBP strongly depends on the degree of interaction between
the variables. For weak interactions, truncating the series leads to
significant improvements, whereas for strong interactions it can be
ineffective, even if a high number of terms is considered.
|
Attribute Value Weighting in K-Modes Clustering
|
In this paper, the traditional k-modes clustering algorithm is extended by
weighting attribute value matches in dissimilarity computation. The use of
attribute value weighting technique makes it possible to generate clusters with
stronger intra-similarities, and therefore achieve better clustering
performance. Experimental results on real life datasets show that these value
weighting based k-modes algorithms are superior to the standard k-modes
algorithm with respect to clustering accuracy.
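One way such value weighting can enter the dissimilarity is sketched below; the frequency-based weight is an illustrative assumption and not necessarily the scheme used in the paper.

```python
# Sketch of a value-weighted matching dissimilarity for k-modes: matches on
# rare attribute values count as "stronger" agreement than matches on frequent
# ones. The frequency-based weight below is an illustrative choice.

from collections import Counter

def value_frequencies(data):
    # Per-attribute frequency of each categorical value.
    return [Counter(record[j] for record in data) for j in range(len(data[0]))]

def weighted_dissimilarity(x, mode, freqs, n_records):
    d = 0.0
    for j, (a, b) in enumerate(zip(x, mode)):
        if a != b:
            d += 1.0
        else:
            # Matching on a frequent value contributes a small residual
            # dissimilarity; matching on a rare value contributes almost none.
            d += freqs[j][a] / n_records * 0.5
    return d

if __name__ == "__main__":
    data = [("red", "small"), ("red", "large"), ("blue", "small"), ("red", "small")]
    freqs = value_frequencies(data)
    print(weighted_dissimilarity(("red", "small"), ("red", "large"), freqs, len(data)))
```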
|
Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in
SAT-Based Planning
|
In Verification and in (optimal) AI Planning, a successful method is to
formulate the application as boolean satisfiability (SAT), and solve it with
state-of-the-art DPLL-based procedures. There is a lack of understanding of why
this works so well. Focussing on the Planning context, we identify a form of
problem structure concerned with the symmetrical or asymmetrical nature of the
cost of achieving the individual planning goals. We quantify this sort of
structure with a simple numeric parameter called AsymRatio, ranging between 0
and 1. We run experiments in 10 benchmark domains from the International
Planning Competitions since 2000; we show that AsymRatio is a good indicator of
SAT solver performance in 8 of these domains. We then examine carefully crafted
synthetic planning domains that allow control of the amount of structure, and
that are clean enough for a rigorous analysis of the combinatorial search
space. The domains are parameterized by size, and by the amount of structure.
The CNFs we examine are unsatisfiable, encoding one planning step less than the
length of the optimal plan. We prove upper and lower bounds on the size of the
best possible DPLL refutations, under different settings of the amount of
structure, as a function of size. We also identify the best possible sets of
branching variables (backdoors). With minimum AsymRatio, we prove exponential
lower bounds, and identify minimal backdoors of size linear in the number of
variables. With maximum AsymRatio, we identify logarithmic DPLL refutations
(and backdoors), showing a doubly exponential gap between the two structural
extreme cases. The reasons for this behavior -- the proof arguments --
illuminate the prototypical patterns of structure causing the empirical
behavior observed in the competition benchmarks.
|
Uniform and Partially Uniform Redistribution Rules
|
This short paper introduces two new fusion rules for combining quantitative
basic belief assignments. These rules, although very simple, have not been
proposed in the literature so far and could serve as useful alternatives because of
their low computation cost with respect to the recent advanced Proportional
Conflict Redistribution rules developed in the DSmT framework.
|
Generic Global Constraints based on MDDs
|
Constraint Programming (CP) has been successfully applied to both constraint
satisfaction and constraint optimization problems. A wide variety of
specialized global constraints provide critical assistance in achieving a good
model that can take advantage of the structure of the problem in the search for
a solution. However, a key outstanding issue is the representation of 'ad-hoc'
constraints that do not have an inherent combinatorial nature, and hence are
not modeled well using narrowly specialized global constraints. We attempt to
address this issue by considering a hybrid of search and compilation.
Specifically we suggest the use of Reduced Ordered Multi-Valued Decision
Diagrams (ROMDDs) as the supporting data structure for a generic global
constraint. We give an algorithm for maintaining generalized arc consistency
(GAC) on this constraint that amortizes the cost of the GAC computation over a
root-to-leaf path in the search tree without requiring asymptotically more
space than used for the MDD. Furthermore we present an approach for
incrementally maintaining the reduced property of the MDD during the search,
and show how this can be used for providing domain entailment detection.
Finally we discuss how to apply our approach to other similar data structures
such as AOMDDs and Case DAGs. The technique used can be seen as an extension of
the GAC algorithm for the regular language constraint on finite length input.
|
Redesigning Decision Matrix Method with an indeterminacy-based inference
process
|
For academics and practitioners concerned with computers, business and
mathematics, one central issue is supporting decision makers. In this paper, we
propose a generalization of Decision Matrix Method (DMM), using Neutrosophic
logic. It emerges as an alternative to the existing logics and it represents a
mathematical model of uncertainty and indeterminacy. This paper proposes the
Neutrosophic Decision Matrix Method as a more realistic tool for decision
making. In addition, a de-neutrosophication process is included.
|
Modelling Complexity in Musical Rhythm
|
This paper constructs a tree structure for musical rhythm using an L-system.
It models the structure as an automaton and derives its complexity. It also
solves the complexity for the L-system. This complexity can resolve the
similarity between trees and serves as a measure of psychological complexity
for rhythms. It resolves the musical complexity of various compositions,
including the Mozart effect K488.
Keywords: music perception, psychological complexity, rhythm, L-system,
automata, temporal associative memory, inverse problem, rewriting rule,
bracketed string, tree similarity
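A tiny sketch of the kind of L-system rewriting that yields a bracketed rhythm string (and hence a tree) follows; the alphabet and rules are illustrative assumptions, not the paper's grammar.

```python
# Tiny sketch of an L-system rewriting that generates a rhythm-like bracketed
# string; the alphabet and rules below are illustrative assumptions.

RULES = {
    "Q": "[E E]",   # a quarter note splits into two eighths
    "E": "[S S]",   # an eighth splits into two sixteenths
    "S": "S",       # sixteenths are terminal
}

def rewrite(s, rules):
    return "".join(rules.get(ch, ch) for ch in s)

def derive(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = rewrite(s, rules)
    return s

if __name__ == "__main__":
    # Start from one quarter-note symbol and expand twice.
    print(derive("Q", RULES, 2))   # [[S S] [S S]]
```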
|
Space-contained conflict revision, for geographic information
|
Using qualitative reasoning with geographic information, contrary to, for
instance, robotics, is not only tedious (i.e., encoding knowledge in
Propositional Logic, PL), but also appears to be computationally complex and,
most of the time, not tractable at all. However, knowledge fusion or revision
is a common operation performed when users merge several different data sets in
a unique decision making process, without much support. Introducing logics
would be a great improvement, and we propose in this paper means for deciding,
a priori, whether an application can benefit from a complete revision, under
the sole assumption of a conjecture that we name the "containment conjecture",
which limits the size of the minimal conflicts to revise. We demonstrate that
this conjecture brings us the interesting computational property of performing
a non-provable but global revision, made of many local revisions, at a
tractable size. We illustrate this approach on an application.
|
Case Base Mining for Adaptation Knowledge Acquisition
|
In case-based reasoning, the adaptation of a source case in order to solve
the target problem is at the same time crucial and difficult to implement. The
reason for this difficulty is that, in general, adaptation strongly depends on
domain-dependent knowledge. This fact motivates research on adaptation
knowledge acquisition (AKA). This paper presents an approach to AKA based on
the principles and techniques of knowledge discovery from databases and
data-mining. It is implemented in CABAMAKA, a system that explores the
variations within the case base to elicit adaptation knowledge. This system has
been successfully tested in an application of case-based reasoning to decision
support in the domain of breast cancer treatment.
|
Calculating Valid Domains for BDD-Based Interactive Configuration
|
In these notes we formally describe the functionality of Calculating Valid
Domains from the BDD representing the solution space of valid configurations.
The formalization is largely based on the CLab configuration framework.
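As a rough illustration of what Calculating Valid Domains computes, the sketch below does it naively on a hand-built two-variable BDD; the node encoding and the per-value satisfiability loop are simplifications, not the CLab or BDD-package implementation (which traverses the BDD once).

```python
# Sketch of Calculating Valid Domains (CVD) on a tiny BDD over Boolean
# variables: value v is valid for variable x iff the solution space restricted
# by the current partial assignment still contains a solution with x = v.

# A BDD node is (var, low, high); terminals are the booleans False and True.
# Example solution space over x0, x1:  x0 OR x1  (i.e. "not both 0").
N_X1 = ("x1", False, True)
ROOT = ("x0", N_X1, True)
VARS = ["x0", "x1"]

def satisfiable(node, assignment):
    # Is there a path to the 1-terminal consistent with the partial assignment?
    if isinstance(node, bool):
        return node
    var, low, high = node
    if var in assignment:
        return satisfiable(high if assignment[var] else low, assignment)
    return satisfiable(low, assignment) or satisfiable(high, assignment)

def valid_domains(root, assignment):
    domains = {}
    for var in VARS:
        if var in assignment:
            domains[var] = {assignment[var]}
            continue
        domains[var] = {v for v in (0, 1)
                        if satisfiable(root, {**assignment, var: v})}
    return domains

if __name__ == "__main__":
    print(valid_domains(ROOT, {}))           # {'x0': {0, 1}, 'x1': {0, 1}}
    print(valid_domains(ROOT, {"x0": 0}))    # x1 restricted to {1}
```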
|
A study of structural properties on profiles HMMs
|
Motivation: Profile hidden Markov Models (pHMMs) are a popular and very
useful tool in the detection of remote homologue protein families.
Unfortunately, their performance is not always satisfactory when proteins are
in the 'twilight zone'. We present HMMER-STRUCT, a model construction algorithm
and tool that tries to improve pHMM performance by using structural information
while training pHMMs. As a first step, HMMER-STRUCT constructs a set of pHMMs.
Each pHMM is constructed by weighting each residue in an aligned protein
according to a specific structural property of the residue. Properties used
were primary, secondary and tertiary structures, accessibility and packing.
HMMER-STRUCT then prioritizes the results by voting. Results: We used the SCOP
database to perform our experiments. Throughout, we apply leave-one-family-out
cross-validation over protein superfamilies. First, we used the MAMMOTH-mult
structural aligner to align the training set proteins. Then, we performed two
sets of experiments. In a first experiment, we compared structure weighted
models against standard pHMMs and against each other. In a second experiment,
we compared the voting model against individual pHMMs. We compare method
performance through ROC curves and Precision/Recall curves, and assess
significance with the paired two-tailed t-test. Our results show significant
performance improvements of all structurally weighted models over default
HMMER, and a significant improvement in sensitivity of the combined models over
both the original model and the structurally weighted models.
|
Bayesian approach to rough set
|
This paper proposes an approach to training rough set models using a Bayesian
framework and the Markov Chain Monte Carlo (MCMC) method. The prior
probabilities are constructed from the prior knowledge that good rough set
models have fewer rules. Markov Chain Monte Carlo sampling is conducted in the
rough set granule space, and the Metropolis algorithm is used as the acceptance
criterion. The proposed method is tested on estimating the risk of HIV given
demographic data. The results obtained show that the proposed approach is able
to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In
addition, the Bayesian rough set gives the probabilities of the estimated HIV
status as well as the linguistic rules describing how the demographic
parameters drive the risk of HIV.
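A generic sketch of the Metropolis step over candidate rule sets, with a prior favouring fewer rules, in the spirit described above; the model representation, proposal move and likelihood are illustrative assumptions, not the paper's formulation.

```python
# Generic sketch of Metropolis sampling over candidate rule sets; everything
# concrete below (rule encoding, proposal, toy accuracy) is an assumption.

import math
import random

def log_posterior(rule_set, accuracy_of):
    # Prior: penalty on the number of rules; likelihood: training accuracy.
    return math.log(accuracy_of(rule_set) + 1e-9) - 0.1 * len(rule_set)

def metropolis(initial, propose, accuracy_of, steps=1000, seed=0):
    random.seed(seed)
    current = initial
    current_lp = log_posterior(current, accuracy_of)
    samples = []
    for _ in range(steps):
        candidate = propose(current)
        candidate_lp = log_posterior(candidate, accuracy_of)
        # Metropolis acceptance: always accept improvements, otherwise accept
        # with probability exp(candidate_lp - current_lp).
        accept_prob = math.exp(min(0.0, candidate_lp - current_lp))
        if random.random() < accept_prob:
            current, current_lp = candidate, candidate_lp
        samples.append(current)
    return samples

if __name__ == "__main__":
    all_rules = list(range(20))                                  # stand-in rule ids
    accuracy = lambda rs: 0.5 + 0.02 * len(set(rs) & {1, 3, 5})  # toy accuracy

    def propose(rs):
        # Flip one randomly chosen rule in or out of the set.
        return rs ^ {random.choice(all_rules)}

    samples = metropolis(frozenset({0, 1, 2}), propose, accuracy)
    print("last sample:", sorted(samples[-1]))
```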
|
Comparing Robustness of Pairwise and Multiclass Neural-Network Systems
for Face Recognition
|
Noise, corruptions and variations in face images can seriously hurt the
performance of face recognition systems. To make such systems robust,
multiclass neural-network classifiers capable of learning from noisy data have
been suggested. However, on large face data sets such systems cannot provide a
high level of robustness. In this paper we explore a pairwise neural-network
system as an alternative approach to improving the robustness of face
recognition. In our experiments this approach is shown to outperform the
multiclass neural-network system in terms of predictive accuracy on face
images corrupted by noise.
|
Ensemble Learning for Free with Evolutionary Algorithms ?
|
Evolutionary Learning proceeds by evolving a population of classifiers, from
which it generally returns (with some notable exceptions) the single
best-of-run classifier as the final result. Meanwhile, Ensemble Learning,
one of the most efficient approaches in supervised Machine Learning for the
last decade, proceeds by building a population of diverse classifiers. Ensemble
Learning with Evolutionary Computation thus receives increasing attention. The
Evolutionary Ensemble Learning (EEL) approach presented in this paper features
two contributions. First, a new fitness function, inspired by co-evolution and
enforcing the classifier diversity, is presented. Further, a new selection
criterion based on the classification margin is proposed. This criterion is
used to extract the classifier ensemble from the final population only
(Off-line) or incrementally along evolution (On-line). Experiments on a set of
benchmark problems show that Off-line outperforms single-hypothesis
evolutionary learning and state-of-the-art Boosting, and generates smaller
classifier ensembles.
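A rough sketch of what an "Off-line" extraction step could look like, ranking the final population by a margin-style validation score; the scoring and cut-off are illustrative assumptions, not the paper's exact selection criterion.

```python
# Illustrative sketch: extract an ensemble from a final population by ranking
# classifiers on a margin-style validation score. All details are assumptions.

def margin_score(classifier, validation):
    # Toy per-classifier margin: +1 for a correct prediction, -1 otherwise,
    # averaged over the validation set.
    return sum(1 if classifier(x) == y else -1 for x, y in validation) / len(validation)

def extract_ensemble(population, validation, size):
    ranked = sorted(population, key=lambda c: margin_score(c, validation), reverse=True)
    return ranked[:size]

if __name__ == "__main__":
    # Hypothetical threshold classifiers on 1-D inputs.
    population = [lambda x, t=t: int(x > t) for t in (0.2, 0.5, 0.8)]
    validation = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
    ensemble = extract_ensemble(population, validation, size=2)
    print([clf(0.7) for clf in ensemble])
```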
|