| title | abstract |
|---|---|
Multiset Ordering Constraints
|
We identify a new and important global (or non-binary) constraint. This
constraint ensures that the values taken by two vectors of variables, when
viewed as multisets, are ordered. This constraint is useful for a number of
different applications including breaking symmetry and fuzzy constraint
satisfaction. We propose and implement an efficient linear time algorithm for
enforcing generalised arc consistency on such a multiset ordering constraint.
Experimental results on several problem domains show considerable promise.
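
As a concrete reference point, the ordering itself can be checked on ground vectors by sorting each in non-increasing order and comparing lexicographically; a minimal Python sketch (the paper's contribution, the GAC propagator over variable domains, is not shown):

```python
def multiset_leq(xs, ys):
    # Multiset ordering: sort both vectors in non-increasing order
    # and compare the results lexicographically.
    return sorted(xs, reverse=True) <= sorted(ys, reverse=True)

assert multiset_leq([1, 1, 2], [2, 1, 1])      # equal as multisets
assert multiset_leq([3, 1, 1], [3, 2, 0])      # {3,1,1} <= {3,2,0}
assert not multiset_leq([4, 0, 0], [3, 2, 2])  # 4 dominates everything in ys
```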
|
Tag Clouds for Displaying Semantics: The Case of Filmscripts
|
We relate tag clouds to other forms of visualization, including planar or
reduced dimensionality mapping, and Kohonen self-organizing maps. Using a
modified tag cloud visualization, we incorporate other information into it,
including text sequence and most pertinent words. Our notion of word pertinence
goes beyond just word frequency and instead takes a word in a mathematical
sense as located at the average of all of its pairwise relationships. We
capture semantics through context, taken as all pairwise relationships. Our
domain of application is that of filmscript analysis. The analysis of
filmscripts, always important for cinema, is experiencing a major gain in
importance in the context of television. Our objective in this work is to
visualize the semantics of filmscript, and beyond filmscript any other
partially structured, time-ordered, sequence of text segments. In particular we
develop an innovative approach to plot characterization.
|
Considerations on Construction Ontologies
|
The paper proposes an analysis of some existing ontologies, in order to point
out ways to resolve semantic heterogeneity in information systems. The authors
highlight the tasks in a Knowledge Acquisition System and identify aspects
related to the addition of new information to an intelligent system. A solution
is proposed, as a combination of ontology reasoning services and natural
language generation. A multi-agent system will be conceived, with an extractor
agent, a reasoner agent and a competence management agent.
|
A Logic Programming Approach to Activity Recognition
|
We have been developing a system for recognising human activity given a
symbolic representation of video content. The input of our system is a set of
time-stamped short-term activities detected on video frames. The output of our
system is a set of recognised long-term activities, which are pre-defined
temporal combinations of short-term activities. The constraints on the
short-term activities that, if satisfied, lead to the recognition of a
long-term activity, are expressed using a dialect of the Event Calculus. We
illustrate the expressiveness of the dialect by showing the representation of
several typical complex activities. Furthermore, we present a detailed
evaluation of the system through experimentation on a benchmark dataset of
surveillance videos.
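
For illustration only, here is a toy recogniser in this spirit; the "abrupt"/"fighting" definitions below are hypothetical stand-ins, and the paper itself expresses such rules in an Event Calculus dialect rather than ad hoc code:

```python
def recognise_long_term(events, window=10):
    # Toy rule: the hypothetical long-term activity "fighting" is recognised
    # when two "abrupt" short-term activities by different persons occur
    # within `window` time units of each other.
    abrupt = [(t, who) for (t, kind, who) in events if kind == "abrupt"]
    return [
        (t1, t2) for i, (t1, p1) in enumerate(abrupt)
        for (t2, p2) in abrupt[i + 1:]
        if p1 != p2 and abs(t2 - t1) <= window
    ]

stream = [(3, "walking", "id1"), (5, "abrupt", "id1"), (9, "abrupt", "id2")]
print(recognise_long_term(stream))  # [(5, 9)]
```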
|
Knowledge Management in Economic Intelligence with Reasoning on Temporal
Attributes
|
People have to make important decisions within a time frame. Hence, it is
imperative to employ strategies that aid effective decision making.
Consequently, Economic Intelligence (EI) has emerged as a field to aid
strategic and timely decision making in an organization. In attaining this
goal, it is indispensable to provide for the conservation of the intellectual
resources invested in the process of decision making. These intellectual
resources are the knowledge of the actors as well as that of the various
processes for effecting decision making. Knowledge has been recognized as a
strategic economic resource for enhancing productivity and a key for innovation
in any organization or community. Thus, its adequate management, with
cognizance of its temporal properties, is indispensable. The temporal
properties of knowledge refer to the date and time (the timestamp) at which the
knowledge is created, as well as the duration or interval between related
pieces of knowledge. This paper focuses on the need for a user-centered
knowledge management approach as well as the exploitation of the associated
temporal properties. Our perspective on knowledge is with respect to
decision-problem projects in EI. Our hypothesis is that the possibility of
reasoning about temporal properties when exploiting knowledge in EI projects
should foster timely decision making through the generation of useful
inferences from available and reusable knowledge for a new project.
|
Toward a Category Theory Design of Ontological Knowledge Bases
|
I discuss the (ontologies_and_ontological_knowledge_bases /
formal_methods_and_theories) duality and its category theory extensions as a
step toward a solution to Knowledge-Based Systems Theory. In particular, I
focus on the example of the design of elements of ontologies and ontological
knowledge bases for the following three electronic courses: Foundations of
Research Activities, Virtual Modeling of Complex Systems, and Introduction to
String Theory.
|
Mnesors for automatic control
|
Mnesors are defined as elements of a semimodule over the min-plus integers.
This two-sorted structure is able to merge the graduation properties of vectors
and the idempotent properties of Boolean numbers, which makes it appropriate
for hybrid systems. We apply it to the control of an inverted pendulum and
design a fully logical controller, that is, one without the usual algebra of
real numbers.
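
For concreteness, here is the min-plus (tropical) arithmetic underlying the scalars; a minimal sketch only, which does not reproduce the full mnesor semimodule structure of the paper:

```python
INF = float("inf")  # neutral element for min-plus "addition"

def mp_add(a, b):
    # Semiring addition is min; it is idempotent (mp_add(a, a) == a),
    # which gives the Boolean-like behavior mentioned above.
    return min(a, b)

def mp_mul(a, b):
    # Semiring multiplication is ordinary integer addition.
    return a + b

def mp_scale(k, v):
    # Scalar action on a vector: graduation is kept by shifting entries.
    return [mp_mul(k, x) for x in v]

print(mp_add(3, 3))         # 3 (idempotent)
print(mp_scale(2, [0, 1]))  # [2, 3]
```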
|
Semi-Myopic Sensing Plans for Value Optimization
|
We consider the following sequential decision problem. Given a set of items
of unknown utility, we need to select one of as high a utility as possible
(``the selection problem''). Measurements (possibly noisy) of item values prior
to selection are allowed, at a known cost. The goal is to optimize the overall
sequential decision process of measurements and selection.
Value of information (VOI) is a well-known scheme for selecting measurements,
but the intractability of the problem typically leads to using myopic VOI
estimates. In the selection problem, myopic VOI frequently badly underestimates
the value of information, leading to inferior sensing plans. We relax the
strict myopic assumption into a scheme we term semi-myopic, providing a
spectrum of methods that can improve the performance of sensing plans. In
particular, we propose the efficiently computable method of ``blinkered'' VOI,
and examine theoretical bounds for special cases. Empirical evaluation of
``blinkered'' VOI in the selection problem with normally distributed item
values shows that it performs much better than pure myopic VOI.
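
To make the myopic baseline concrete, here is a sketch of myopic VOI for a single noiseless measurement under normal priors; the noiseless setting is our simplification, while the paper treats noisy measurements and the blinkered generalization:

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def myopic_voi(mu, sigma, best_other, cost):
    # Myopic VOI of a noiseless measurement of an item with prior
    # N(mu, sigma^2), when the best competing item has mean best_other:
    # expected post-measurement maximum minus current maximum minus cost.
    z = (best_other - mu) / sigma
    exp_max = (best_other * norm_cdf(z)
               + mu * (1 - norm_cdf(z))
               + sigma * norm_pdf(z))
    return exp_max - max(mu, best_other) - cost

print(myopic_voi(mu=0.0, sigma=1.0, best_other=0.5, cost=0.1))  # ~0.098
```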
|
Updating Sets of Probabilities
|
There are several well-known justifications for conditioning as the
appropriate method for updating a single probability measure, given an
observation. However, there is a significant body of work arguing for sets of
probability measures, rather than single measures, as a more realistic model of
uncertainty. Conditioning still makes sense in this context--we can simply
condition each measure in the set individually, then combine the results--and,
indeed, it seems to be the preferred updating procedure in the literature. But
how justified is conditioning in this richer setting? Here we show, by
considering an axiomatic account of conditioning given by van Fraassen, that
the single-measure and sets-of-measures cases are very different. We show that
van Fraassen's axiomatization for the former case is nowhere near sufficient
for updating sets of measures. We give a considerably longer (and not as
compelling) list of axioms that together force conditioning in this setting,
and describe other update methods that are allowed once any of these axioms is
dropped.
|
A Novel Two-Stage Dynamic Decision Support based Optimal Threat
Evaluation and Defensive Resource Scheduling Algorithm for Multi Air-borne
threats
|
This paper presents a novel two-stage flexible dynamic decision support based
optimal threat evaluation and defensive resource scheduling algorithm for
multi-target air-borne threats. The algorithm provides flexibility and
optimality by swapping between two objective functions, i.e. the preferential
and subtractive defense strategies as and when required. To further enhance the
solution quality, it outlines and divides the critical parameters used in
Threat Evaluation and Weapon Assignment (TEWA) into three broad categories
(Triggering, Scheduling and Ranking parameters). The proposed algorithm uses a
variant of the many-to-many Stable Marriage Algorithm (SMA) to solve the Threat
Evaluation (TE) and Weapon Assignment (WA) problem. In the TE stage, threat
ranking and threat-asset pairing are done. Stage two is based on a new flexible
dynamic weapon scheduling algorithm, allowing multiple engagements using a
shoot-look-shoot strategy, to compute near-optimal solutions for a range of
scenarios. The analysis part of this paper presents the strengths and
weaknesses of the proposed algorithm over an alternative greedy algorithm as
applied to different offline scenarios.
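
For intuition about the assignment core, here is a one-to-one deferred-acceptance (Gale-Shapley) sketch; the paper's variant is many-to-many with shoot-look-shoot scheduling, which this does not capture. Complete preference lists and at least as many weapons as threats are assumed:

```python
def deferred_acceptance(threat_prefs, weapon_ranks):
    # Threats "propose" to weapons in preference order; each weapon keeps
    # the threat it ranks best (lower rank value = more preferred).
    free = list(threat_prefs)                   # threats still unassigned
    next_choice = {t: 0 for t in threat_prefs}  # next weapon to propose to
    engaged = {}                                # weapon -> threat
    while free:
        t = free.pop()
        w = threat_prefs[t][next_choice[t]]
        next_choice[t] += 1
        if w not in engaged:
            engaged[w] = t
        elif weapon_ranks[w][t] < weapon_ranks[w][engaged[w]]:
            free.append(engaged[w])
            engaged[w] = t
        else:
            free.append(t)
    return engaged

prefs = {"t1": ["w1", "w2"], "t2": ["w1", "w2"]}
ranks = {"w1": {"t1": 0, "t2": 1}, "w2": {"t1": 1, "t2": 0}}
print(deferred_acceptance(prefs, ranks))  # {'w1': 't1', 'w2': 't2'}
```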
|
General combination rules for qualitative and quantitative beliefs
|
Martin and Osswald \cite{Martin07} have recently proposed many
generalizations of combination rules on quantitative beliefs in order to manage
the conflict and to consider the specificity of the responses of the experts.
Since the experts express themselves usually in natural language with
linguistic labels, Smarandache and Dezert \cite{Li07} have introduced a
mathematical framework for dealing directly also with qualitative beliefs. In
this paper we recall some elements of our previous works and propose new
combination rules, developed for the fusion of both qualitative and
quantitative beliefs.
|
A Novel Two-Staged Decision Support based Threat Evaluation and Weapon
Assignment Algorithm, Asset-based Dynamic Weapon Scheduling using Artificial
Intelligence Techniques
|
Surveillance control and reporting (SCR) systems for air threats play an
important role in the defense of a country. An SCR system corresponds to air
and ground situation management/processing along with information fusion,
communication, coordination, simulation and other critical defense oriented
tasks. Threat Evaluation and Weapon Assignment (TEWA) sits at the core of an
SCR system. In such a system, maximal or near-maximal utilization of
constrained resources is of extreme importance. Manual TEWA systems cannot
provide optimality because of different limitations, e.g. a surface-to-air
missile (SAM) can fire from a distance of 5 km, but manual TEWA systems are
constrained by human vision range and other factors. Current TEWA systems
usually work on a target-by-target basis using some type of greedy algorithm,
thus affecting the optimality of the solution and failing in multi-target
scenarios. This paper relates to a novel two-staged flexible dynamic decision
support based optimal threat evaluation and weapon assignment algorithm for
multi-target air-borne threats.
|
Generalized Collective Inference with Symmetric Clique Potentials
|
Collective graphical models exploit inter-instance associative dependence to
output more accurate labelings. However, existing models support only a limited
kind of associativity, which restricts accuracy gains. This paper makes two
major contributions. First, we propose a general collective inference framework
that biases data instances to agree on a set of {\em properties} of their
labelings. Agreement is encouraged through symmetric clique potentials. We show
that rich properties lead to bigger gains, and present a systematic inference
procedure for a large class of such properties. The procedure performs message
passing on the cluster graph, where property-aware messages are computed with
cluster specific algorithms. This provides an inference-only solution for
domain adaptation. Our experiments on bibliographic information extraction
illustrate significant test error reduction over unseen domains. Our second
major contribution consists of algorithms for computing outgoing messages from
clique clusters with symmetric clique potentials. Our algorithms are exact for
arbitrary symmetric potentials on binary labels and for max-like and
majority-like potentials on multiple labels. For majority potentials, we also
provide an efficient Lagrangian Relaxation based algorithm that compares
favorably with the exact algorithm. We present a 13/15-approximation algorithm
for the NP-hard Potts potential, with runtime sub-quadratic in the clique size.
In contrast, the best known previous guarantee for graphs with Potts potentials
is only 1/2. We empirically show that our method for Potts potentials is an
order of magnitude faster than the best alternatives, and our Lagrangian
Relaxation based algorithm for majority potentials beats the best applicable
heuristic -- ICM.
|
The Soft Cumulative Constraint
|
This research report presents an extension of the Cumulative constraint of the
Choco constraint solver, which is useful for encoding over-constrained
cumulative problems. This new global constraint uses sweep and task-interval
violation-based algorithms.
|
Modelling Concurrent Behaviors in the Process Specification Language
|
In this paper, we propose a first-order ontology for generalized stratified
order structure. We then classify the models of the theory using
model-theoretic techniques. An ontology mapping from this ontology to the core
theory of Process Specification Language is also discussed.
|
The Single Machine Total Weighted Tardiness Problem - Is it (for
Metaheuristics) a Solved Problem?
|
The article presents a study of rather simple local search heuristics for the
single machine total weighted tardiness problem (SMTWTP), namely hillclimbing
and Variable Neighborhood Search. In particular, we revisit these approaches
for the SMTWTP as there appears to be a lack of appropriate/challenging
benchmark instances in this case. The obtained results are impressive indeed.
Only a few instances remain unsolved, and even those are approximated to within
1% of the optimal/best known solutions. Our experiments support the claim that
metaheuristics for the SMTWTP are very likely to lead to good results, and
that, before refining search strategies, more work must be done with regard to
the proposition of benchmark data. Some recommendations for the construction of
such data sets are derived from our investigations.
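
A minimal sketch of the objective and a first-improvement hillclimber over adjacent swaps (illustrative only; the paper's neighborhoods and the Variable Neighborhood Search are richer):

```python
def twt(seq, p, w, d):
    # Total weighted tardiness of job sequence seq, with processing times p,
    # weights w and due dates d (all dicts keyed by job).
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def hillclimb(seq, p, w, d):
    # First-improvement hillclimbing over adjacent swaps.
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if twt(cand, p, w, d) < twt(seq, p, w, d):
                seq, improved = cand, True
    return seq

p = {"a": 3, "b": 2, "c": 4}; w = {"a": 2, "b": 1, "c": 3}
d = {"a": 4, "b": 9, "c": 6}
print(twt(["a", "b", "c"], p, w, d))                      # 9
print(twt(hillclimb(["a", "b", "c"], p, w, d), p, w, d))  # 3
```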
|
Improvements for multi-objective flow shop scheduling by Pareto Iterated
Local Search
|
The article describes the proposition and application of a local search
metaheuristic for multi-objective optimization problems. It is based on two
main principles of heuristic search, intensification through variable
neighborhoods, and diversification through perturbations and successive
iterations in favorable regions of the search space. The concept is
successfully tested on permutation flow shop scheduling problems under multiple
objectives and compared to other local search approaches. While the obtained
results are encouraging in terms of their quality, another positive attribute
of the approach is its simplicity, as it requires the setting of only very few
parameters.
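
The core test behind any such multi-objective archive is Pareto dominance; a minimal sketch, assuming minimization:

```python
def dominates(a, b):
    # a Pareto-dominates b: no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated points of an archive.
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(pareto_front([(3, 5), (4, 4), (5, 5)]))  # [(3, 5), (4, 4)]
```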
|
Beyond Turing Machines
|
This paper discusses "computational" systems capable of "computing" functions
not computable by predefined Turing machines if the systems are not isolated
from their environment. Roughly speaking, these systems can change their finite
descriptions by interacting with their environment.
|
Pattern Recognition Theory of Mind
|
I propose that pattern recognition, memorization and processing are key
concepts that can be a principle set for the theoretical modeling of the mind
function. Most of the questions about the mind functioning can be answered by a
descriptive modeling and definitions from these principles. An understandable
consciousness definition can be drawn based on the assumption that a pattern
recognition system can recognize its own patterns of activity. The principles,
descriptive modeling and definitions can be a basis for theoretical and applied
research on cognitive sciences, particularly at artificial intelligence
studies.
|
Fact Sheet on Semantic Web
|
The report gives an overview about activities on the topic Semantic Web. It
has been released as technical report for the project "KTweb -- Connecting
Knowledge Technologies Communities" in 2003.
|
Restart Strategy Selection using Machine Learning Techniques
|
Restart strategies are an important factor in the performance of
conflict-driven, Davis-Putnam-style SAT solvers. Selecting a good restart
strategy for a problem instance can enhance the performance of a solver.
Inspired by recent success applying machine learning techniques to predict the
runtime of SAT solvers, we present a method which uses machine learning to
boost solver performance through a smart selection of the restart strategy.
Based on easy to compute features, we train both a satisfiability classifier
and runtime models. We use these models to choose between restart strategies.
We present experimental results comparing this technique with the most commonly
used restart strategies. Our results demonstrate that machine learning is
effective in improving solver performance.
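
A toy sketch of the selection idea, assuming hypothetical features and offline labels; scikit-learn's CART decision tree stands in here for whatever models the authors actually train:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy feature vectors for SAT instances (e.g. #vars, #clauses, clause/var
# ratio) and labels naming the restart strategy that performed best offline.
X_train = [[100, 430, 4.3], [200, 600, 3.0], [150, 700, 4.7], [300, 900, 3.0]]
y_train = ["luby", "geometric", "luby", "geometric"]

clf = DecisionTreeClassifier().fit(X_train, y_train)

def pick_restart_strategy(features):
    # Choose a restart strategy from cheap, easy-to-compute instance features.
    return clf.predict([features])[0]

print(pick_restart_strategy([120, 500, 4.2]))  # e.g. "luby"
```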
|
Online Search Cost Estimation for SAT Solvers
|
We present two different methods for estimating the cost of solving SAT
problems. The methods focus on the online behaviour of the backtracking solver,
as well as the structure of the problem. Modern SAT solvers present several
challenges to estimate search cost including coping with nonchronological
backtracking, learning and restarts. Our first method adapts an existing
algorithm for estimating the size of a search tree to deal with these
challenges. We then suggest a second method that uses a linear model trained on
data gathered online at the start of search. We compare the effectiveness of
these two methods using random and structured problems. We also demonstrate
that predictions made in early restarts can be used to improve later
predictions. We conclude by showing that the cost of solving a set of problems
can be reduced by selecting a solver from a portfolio based on such cost
estimations.
|
On Classification from Outlier View
|
Classification is the basis of cognition. Unlike other solutions, this study
approaches it from the view of outliers. We present an expanding algorithm to
detect outliers in univariate datasets, together with the underlying
foundation. The expanding algorithm runs in a holistic way, making it a rather
robust solution. Synthetic and real data experiments show its power.
Furthermore, an application for multi-class problems leads to the introduction
of the oscillator algorithm. The corresponding result implies the potential
wide use of the expanding algorithm.
|
Convergence of Expected Utility for Universal AI
|
We consider a sequence of repeated interactions between an agent and an
environment. Uncertainty about the environment is captured by a probability
distribution over a space of hypotheses, which includes all computable
functions. Given a utility function, we can evaluate the expected utility of
any computational policy for interaction with the environment. After making
some plausible assumptions (and maybe one not-so-plausible assumption), we show
that if the utility function is unbounded, then the expected utility of any
policy is undefined.
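
A standard St. Petersburg-style illustration (not the paper's proof) of why unboundedness is the culprit: if hypothesis weights shrink geometrically but achievable utilities grow faster, the expectation diverges.

```latex
% Weights 2^{-n} over hypotheses, utilities U_n = 3^n under hypothesis n:
\sum_{n=1}^{\infty} 2^{-n} \, U_n
\;=\;
\sum_{n=1}^{\infty} \left(\tfrac{3}{2}\right)^{n} = \infty .
```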
|
Knowledge Discovery of Hydrocyclone's Circuit Based on SONFIS and SORST
|
This study describes the application of some approximate reasoning methods to
the analysis of hydrocyclone performance. In this manner, using combinations of
the Self-Organizing Map (SOM) with a Neuro-Fuzzy Inference System (NFIS),
i.e. SONFIS, and with Rough Set Theory (RST), i.e. SORST, crisp and fuzzy
granules are obtained. The balancing of crisp granules and non-crisp granules
can be implemented in a close-open iteration. Using different criteria, and
based on the granulation level, a balance point (interval) or a pseudo-balance
point is estimated. A validation of the proposed methods on the hydrocyclone
data set is rendered.
|
A Class of DSm Conditional Rules
|
In this paper we introduce two new DSm fusion conditioning rules with examples
and, as a generalization of them, a class of DSm fusion conditioning rules; we
then extend them to a class of DSm conditioning rules.
|
View-based Propagator Derivation
|
When implementing a propagator for a constraint, one must decide about
variants: When implementing min, should one also implement max? Should one
implement linear constraints both with unit and non-unit coefficients?
Constraint variants are ubiquitous: implementing them requires considerable (if
not prohibitive) effort and decreases maintainability, but will deliver better
performance than resorting to constraint decomposition.
This paper shows how to use views to derive perfect propagator variants. A
model for views and derived propagators is introduced. Derived propagators are
proved to be indeed perfect in that they inherit essential properties such as
correctness and domain and bounds consistency. Techniques for systematically
deriving propagators such as transformation, generalization, specialization,
and type conversion are developed. The paper introduces an implementation
architecture for views that is independent of the underlying constraint
programming system. A detailed evaluation of views implemented in Gecode shows
that derived propagators are efficient and that views often incur no overhead.
Without views, Gecode would either require 180 000 rather than 40 000 lines of
propagator code, or would lack many efficient propagator variants. Compared to
8 000 lines of code for views, the reduction in code for propagators yields a
1750% return on investment.
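
A minimal sketch of the derivation idea over toy bounds domains: a propagator written once for min is reused for max through a minus view. The classes and names here are illustrative, not Gecode's API:

```python
class Var:
    # Toy integer variable with a bounds domain.
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def set_lo(self, v): self.lo = max(self.lo, v)
    def set_hi(self, v): self.hi = min(self.hi, v)

class MinusView:
    # View presenting a variable x as -x; domain operations are transformed
    # on the fly, so a propagator written once can be reused.
    def __init__(self, var): self.var = var
    @property
    def lo(self): return -self.var.hi
    @property
    def hi(self): return -self.var.lo
    def set_lo(self, v): self.var.set_hi(-v)
    def set_hi(self, v): self.var.set_lo(-v)

def prop_min(z, x, y):
    # Bounds propagator for z = min(x, y).
    z.set_hi(min(x.hi, y.hi))
    z.set_lo(min(x.lo, y.lo))
    x.set_lo(z.lo)
    y.set_lo(z.lo)

def prop_max(z, x, y):
    # Derived propagator: max(x, y) = -min(-x, -y).
    prop_min(MinusView(z), MinusView(x), MinusView(y))

z, x, y = Var(0, 10), Var(3, 7), Var(5, 9)
prop_max(z, x, y)
print((z.lo, z.hi))  # (5, 9): max bounds tightened via the min propagator
```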
|
A Cognitive Mind-map Framework to Foster Trust
|
The explorative mind-map is a dynamic framework that emerges automatically
from the input it gets. It is unlike a verificative modeling system, where
existing (human) thoughts are placed and connected together. In this regard,
explorative mind-maps change their size continuously, being adaptive with
connectionist cells inside; mind-maps process data input incrementally and
offer many possibilities to interact with the user through an appropriate
communication interface. With respect to a cognitively motivated situation like
a conversation between partners, mind-maps become interesting as they are able
to process stimulating signals whenever they occur. If these signals are close
to one's own understanding of the world, then the conversational partner
automatically becomes more trustful than if the signals do not, or only
partially, match one's own knowledge scheme. In this (position) paper, we
therefore motivate explorative mind-maps as a cognitive engine and propose them
as a decision support engine to foster trust.
|
An improved axiomatic definition of information granulation
|
To capture the uncertainty of information or knowledge in information
systems, various information granulations, also known as knowledge
granulations, have been proposed. Recently, several axiomatic definitions of
information granulation have been introduced. In this paper, we try to improve
these axiomatic definitions and give a universal construction of information
granulation by relating information granulations with a class of functions of
multiple variables. We show that the improved axiomatic definition has some
concrete information granulations in the literature as instances.
|
Reasoning with Topological and Directional Spatial Information
|
Current research on qualitative spatial representation and reasoning mainly
focuses on one single aspect of space. In real world applications, however,
multiple spatial aspects are often involved simultaneously.
This paper investigates problems arising in reasoning with combined
topological and directional information. We use the RCC8 algebra and the
Rectangle Algebra (RA) for expressing topological and directional information
respectively. We give examples to show that the bipath-consistency algorithm
BIPATH is incomplete for solving even basic RCC8 and RA constraints. If
topological constraints are taken from some maximal tractable subclasses of
RCC8, and directional constraints are taken from a subalgebra, termed DIR49, of
RA, then we show that BIPATH is able to separate topological constraints from
directional ones. This means, given a set of hybrid topological and directional
constraints from the above subclasses of RCC8 and RA, we can transform the
joint satisfaction problem in polynomial time into two independent satisfaction
problems in RCC8 and RA. For general RA constraints, we give a method to
compute solutions that satisfy all topological constraints and approximately
satisfy each RA constraint to any prescribed precision.
|
Reasoning about Cardinal Directions between Extended Objects
|
Direction relations between extended spatial objects are important
commonsense knowledge. Recently, Goyal and Egenhofer proposed a formal model,
known as Cardinal Direction Calculus (CDC), for representing direction
relations between connected plane regions. CDC is perhaps the most expressive
qualitative calculus for directional information, and has attracted increasing
interest from areas such as artificial intelligence, geographical information
science, and image retrieval. Given a network of CDC constraints, the
consistency problem is deciding if the network is realizable by connected
regions in the real plane. This paper provides a cubic algorithm for checking
consistency of basic CDC constraint networks, and proves that reasoning with
CDC is in general an NP-Complete problem. For a consistent network of basic CDC
constraints, our algorithm also returns a 'canonical' solution in cubic time.
This cubic algorithm is also adapted to cope with cardinal directions between
possibly disconnected regions, in which case the best algorithm currently known
has time complexity O(n^5).
|
On Planning with Preferences in HTN
|
In this paper, we address the problem of generating preferred plans by
combining the procedural control knowledge specified by Hierarchical Task
Networks (HTNs) with rich qualitative user preferences. The outcome of our work
is a language for specifying user preferences, tailored to HTN planning,
together with a provably optimal preference-based planner, HTNPLAN, that is
implemented as an extension of SHOP2. To compute preferred plans, we propose an
approach based on forward-chaining heuristic search. Our heuristic uses an
admissible evaluation function measuring the satisfaction of preferences over
partial plans. Our empirical evaluation demonstrates the effectiveness of our
HTNPLAN heuristics. We prove our approach sound and optimal with respect to the
plans it generates by appealing to a situation calculus semantics of our
preference language and of HTN planning. While our implementation builds on
SHOP2, the language and techniques proposed here are relevant to a broad range
of HTN planners.
|
Assessing the Impact of Informedness on a Consultant's Profit
|
We study the notion of informedness in a client-consultant setting. Using a
software simulator, we examine the extent to which it pays off for consultants
to provide their clients with advice that is well-informed, or with advice that
is merely meant to appear to be well-informed. The latter strategy is
beneficial in that it costs less resources to keep up-to-date, but carries the
risk of a decreased reputation if the clients discover the low level of
informedness of the consultant. Our experimental results indicate that under
different circumstances, different strategies yield the optimal results (net
profit) for the consultants.
|
A multiagent urban traffic simulation Part I: dealing with the ordinary
|
We describe in this article a multiagent urban traffic simulation, as we
believe individual-based modeling is necessary to encompass the complex
influence the actions of an individual vehicle can have on the overall flow of
vehicles. We first describe how we build a graph description of the network
from purely geometric data (ESRI shapefiles). We then explain how we add
traffic-related data to this graph. We continue with the model of the
vehicle agents: origin and destination, driving behavior, multiple lanes,
crossroads, and interactions with the other vehicles in day-to-day, "ordinary"
traffic. We conclude with the presentation of the resulting simulation of this
model on the Rouen agglomeration.
|
n-Opposition theory to structure debates
|
2007 saw the first international congress on the "square of oppositions". A
first attempt to structure debate using n-opposition theory was presented along
with the results of a first experiment on the web. Our proposal for this paper
is to define relations between arguments through a structure of opposition
(square of oppositions is one structure of opposition). We will be trying to
answer the following questions: How to organize debates on the web 2.0? How to
structure them in a logical way? What is the role of n-opposition theory, in
this context? We present in this paper results of three experiments
(Betapolitique 2007, ECAP 2008, Intermed 2008).
|
Paired Comparisons-based Interactive Differential Evolution
|
We propose Interactive Differential Evolution (IDE) based on paired
comparisons for reducing user fatigue and evaluate its convergence speed in
comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. User
interface and convergence performance are two big keys for reducing Interactive
Evolutionary Computation (IEC) user fatigue. Unlike IGA and conventional IDE,
users of the proposed IDE and tournament IGA do not need to compare all
individuals with each other but only pairs of individuals, which largely
decreases user fatigue. In this paper, we design a pseudo-IEC user and evaluate
another factor, IEC convergence performance, using IEC simulators and show that
our proposed IDE converges significantly faster than IGA and tournament IGA,
i.e. our proposed one is superior to others from both user interface and
convergence performance points of view.
|
Back analysis based on SOM-RST system
|
This paper describes the application of information granulation theory to the
back analysis of the southeast wall of the Jeffrey mine, Quebec. In this
manner, using a combination of the Self-Organizing Map (SOM) and rough set
theory (RST), crisp and rough granules are obtained. The balancing of crisp
granules and sub-rough granules is rendered in a close-open iteration. The
combining of hard and soft computing, namely the finite difference method (FDM)
and computational intelligence, and the taking into account of missing
information are the two main benefits of the proposed method. As a practical
example, a back analysis of the failure of the southeast wall of the Jeffrey
mine is accomplished.
|
Similarity Matching Techniques for Fault Diagnosis in Automotive
Infotainment Electronics
|
Fault diagnosis has become a very important area of research during the last
decade due to the advancement of mechanical and electrical systems in
industries. The automobile is a crucial field where fault diagnosis is given a
special attention. Due to the increasing complexity and newly added features in
vehicles, a comprehensive study has to be performed in order to achieve an
appropriate diagnosis model. A diagnosis system is capable of identifying the
faults of a system by investigating the observable effects (or symptoms). The
system categorizes the fault into a diagnosis class and identifies a probable
cause based on the supplied fault symptoms. Fault categorization and
identification are done using similarity matching techniques. The development
of diagnosis classes is done by making use of previous experience, knowledge or
information within an application area. The necessary information used may come
from several sources of knowledge, such as from system analysis. In this paper
similarity matching techniques for fault diagnosis in automotive infotainment
applications are discussed.
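
A minimal sketch of similarity-based fault categorization, using cosine similarity as the matching measure (the paper discusses several such measures); the symptom vectors and class names below are hypothetical:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical diagnosis classes as binary symptom vectors
# (symptoms: no_audio, no_display, no_gps, reboot_loop).
classes = {
    "amplifier_fault": [1, 0, 0, 0],
    "head_unit_fault": [1, 1, 0, 1],
    "antenna_fault":   [0, 0, 1, 0],
}

def diagnose(observed):
    # Return the diagnosis class whose symptom vector is most similar.
    return max(classes, key=lambda c: cosine(classes[c], observed))

print(diagnose([1, 1, 0, 0]))  # -> "head_unit_fault"
```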
|
Performing Hybrid Recommendation in Intermodal Transportation-the
FTMarket System's Recommendation Module
|
Diverse recommendation techniques have been already proposed and encapsulated
into several e-business applications, aiming to perform a more accurate
evaluation of the existing information and accordingly augment the assistance
provided to the users involved. This paper reports on the development and
integration of a recommendation module in an agent-based transportation
transactions management system. The module is built according to a novel hybrid
recommendation technique, which combines the advantages of collaborative
filtering and knowledge-based approaches. The proposed technique and supporting
module assist customers in considering in detail alternative transportation
transactions that satisfy their requests, as well as in evaluating completed
transactions. The related services are invoked through a software agent that
constructs the appropriate knowledge rules and performs a synthesis of the
recommendation policy.
|
Decomposition of the NVALUE constraint
|
We study decompositions of NVALUE, a global constraint that can be used to
model a wide range of problems where values need to be counted. Whilst
decomposition typically hinders propagation, we identify one decomposition that
maintains a global view as enforcing bound consistency on the decomposition
achieves bound consistency on the original global NVALUE constraint. Such
decompositions offer the prospect for advanced solving techniques like nogood
learning and impact based branching heuristics. They may also help SAT and IP
solvers take advantage of the propagation of global constraints.
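
The decomposition in question introduces one 0/1 "value used" flag per value; a checker-level sketch on ground assignments (the paper's subject, bound consistency on the corresponding constraint decomposition, is not shown):

```python
def nvalue_decomposition(xs, values):
    # b[v] = 1 iff some x_i takes value v; NVALUE is then sum(b).
    b = {v: int(any(x == v for x in xs)) for v in values}
    return sum(b.values())

assert nvalue_decomposition([1, 1, 3], values=[1, 2, 3]) == 2
```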
|
Symmetries of Symmetry Breaking Constraints
|
Symmetry is an important feature of many constraint programs. We show that
any symmetry acting on a set of symmetry breaking constraints can be used to
break symmetry. Different symmetries pick out different solutions in each
symmetry class. We use these observations in two methods for eliminating
symmetry from a problem. These methods are designed to have many of the
advantages of symmetry breaking methods that post static symmetry breaking
constraint without some of the disadvantages. In particular, the two methods
prune the search space using fast and efficient propagation of posted
constraints, whilst reducing the conflict between symmetry breaking and
branching heuristics. Experimental results show that the two methods perform
well on some standard benchmarks.
|
Elicitation strategies for fuzzy constraint problems with missing
preferences: algorithms and experimental studies
|
Fuzzy constraints are a popular approach to handle preferences and
over-constrained problems in scenarios where one needs to be cautious, such as
in medical or space applications. We consider here fuzzy constraint problems
where some of the preferences may be missing. This models, for example,
settings where agents are distributed and have privacy issues, or where there
is an ongoing preference elicitation process. In this setting, we study how to
find a solution which is optimal irrespective of the missing preferences. In
the process of finding such a solution, we may elicit preferences from the user
if necessary. However, our goal is to ask the user as little as possible. We
define a combined solving and preference elicitation scheme with a large number
of different instantiations, each corresponding to a concrete algorithm which
we compare experimentally. We compute both the number of elicited preferences
and the "user effort", which may be larger, as it contains all the preference
values the user has to compute to be able to respond to the elicitation
requests. While the number of elicited preferences is important when the
concern is to communicate as little information as possible, the user effort
measures also the hidden work the user has to do to be able to communicate the
elicited preferences. Our experimental results show that some of our algorithms
are very good at finding a necessarily optimal solution while asking the user
for only a very small fraction of the missing preferences. The user effort is
also very small for the best algorithms. Finally, we test these algorithms on
hard constraint problems with possibly missing constraints, where the aim is to
find feasible solutions irrespective of the missing constraints.
|
Flow-Based Propagators for the SEQUENCE and Related Global Constraints
|
We propose new filtering algorithms for the SEQUENCE constraint and some
extensions of the SEQUENCE constraint based on network flows. We enforce domain
consistency on the SEQUENCE constraint in $O(n^2)$ time down a branch of the
search tree. This improves upon the best existing domain consistency algorithm
by a factor of $O(\log n)$. The flows used in these algorithms are derived from
a linear program. Some of them differ from the flows used to propagate global
constraints like GCC since the domains of the variables are encoded as costs on
the edges rather than capacities. Such flows are efficient for maintaining
bounds consistency over large domains and may be useful for other global
constraints.
|
The Weighted CFG Constraint
|
We introduce the weighted CFG constraint and propose a propagation algorithm
that enforces domain consistency in $O(n^3|G|)$ time. We show that this
algorithm can be decomposed into a set of primitive arithmetic constraints
without hindering propagation.
|
Building upon Fast Multipole Methods to Detect and Model Organizations
|
Many models in the natural and social sciences are composed of sets of
interacting entities whose intensity of interaction decreases with distance.
This often leads to structures of interest in these models composed of dense
packs of entities. Fast Multipole Methods (FMM) are a family of methods
developed to help with the calculation of a number of computable models such as
those described above. We propose a method that builds upon FMM to detect and
model the dense structures of these systems.
|
A multiagent urban traffic simulation. Part II: dealing with the
extraordinary
|
In Probabilistic Risk Management, risk is characterized by two quantities:
the magnitude (or severity) of the adverse consequences that can potentially
result from the given activity or action, and by the likelihood of occurrence
of the given adverse consequences. But a risk seldom exists in isolation:
chains of consequences must be examined, as the outcome of one risk can
increase the
likelihood of other risks. Systemic theory must complement classic PRM. Indeed
these chains are composed of many different elements, all of which may have a
critical importance at many different levels. Furthermore, when urban
catastrophes are envisioned, space and time constraints are key determinants of
the workings and dynamics of these chains of catastrophes: models must include
a correct spatial topology of the studied risk. Finally, the literature insists
on the importance that small events can have on risk at a greater scale: urban
risk management models belong to self-organized criticality theory. We chose
multiagent systems to incorporate this property in our model: the behavior of
an agent can transform the dynamics of important groups of them.
|
A Local Search Modeling for Constrained Optimum Paths Problems (Extended
Abstract)
|
Constrained Optimum Path (COP) problems appear in many real-life
applications, especially on communication networks. Some of these problems have
been considered and solved by specific techniques which are usually difficult
to extend. In this paper, we introduce a novel local search modeling for
solving some COPs by local search. The modeling features compositionality,
modularity and reuse, and strengthens the benefits of Constraint-Based Local
Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We
show that side constraints can easily be added in the model. Computational
results show the significance of the approach.
|
Dynamic Demand-Capacity Balancing for Air Traffic Management Using
Constraint-Based Local Search: First Results
|
Using constraint-based local search, we effectively model and efficiently
solve the problem of balancing the traffic demands on portions of the European
airspace while ensuring that their capacity constraints are satisfied. The
traffic demand of a portion of airspace is the hourly number of flights planned
to enter it, and its capacity is the upper bound on this number under which
air-traffic controllers can work. Currently, the only form of demand-capacity
balancing we allow is ground holding, that is, the changing of the take-off
times of not yet airborne flights. Experiments with projected European flight
plans of the year 2030 show that already this first form of demand-capacity
balancing is feasible without incurring too much total delay and that it can
lead to a significantly better demand-capacity balance.
|
On Improving Local Search for Unsatisfiability
|
Stochastic local search (SLS) has been an active field of research in the
last few years, with new techniques and procedures being developed at an
astonishing rate. SLS has been traditionally associated with satisfiability
solving, that is, finding a solution for a given problem instance, as its
intrinsic nature does not address unsatisfiable problems. Unsatisfiable
instances were therefore commonly solved using backtrack search solvers. For
this reason, in the late 90s Selman, Kautz and McAllester proposed a challenge
to use local search instead to prove unsatisfiability. More recently, two SLS
solvers - Ranger and Gunsat - have been developed, which are able to prove
unsatisfiability albeit being SLS solvers. In this paper, we first compare
Ranger with Gunsat and then propose to improve Ranger performance using some of
Gunsat's techniques, namely unit propagation look-ahead and extended
resolution.
|
Integrating Conflict Driven Clause Learning to Local Search
|
This article introduces SatHyS (SAT HYbrid Solver), a novel hybrid approach
to propositional satisfiability. It combines local search with a conflict
driven clause learning (CDCL) scheme. Each time the local search part reaches a
local minimum, the CDCL part is launched. For SAT problems it behaves like a
tabu list, whereas for UNSAT ones the CDCL part tries to focus on a minimal
unsatisfiable sub-formula (MUS). Experimental results show good performance on
many classes of SAT instances from the last SAT competitions.
|
Imitation learning of motor primitives and language bootstrapping in
robots
|
Imitation learning in robots, also called programming by demonstration, has
made important advances in recent years, allowing humans to teach
context-dependent motor skills/tasks to robots. We propose to extend the usual
contexts investigated to also include acoustic linguistic expressions that
might denote a given motor skill, and thus we target joint learning of the
motor skills and their potential acoustic linguistic names. In addition, we
modify a class of existing algorithms within the imitation learning framework
so that they can handle the unlabeled demonstration of several tasks/motor
primitives without the imitator having to be told which task is being
demonstrated or how many tasks there are, which is a necessity for language
learning, i.e., if one wants to naturally teach an open number of new motor
skills together with their acoustic names. Finally, a mechanism for detecting
whether or not linguistic input is relevant to the task is also proposed, and
our architecture also allows the robot to find the right framing for a given
identified motor primitive. With these additions it becomes possible to build
an imitator that bridges the gap between imitation learning and language
learning by being able to learn linguistic expressions using methods from the
imitation learning community. In this sense the imitator can learn a word by
guessing whether a certain speech pattern present in the context means that a
specific task is to be executed. The imitator is however not assumed to know
that speech is relevant and has to figure this out on its own by looking at the
demonstrations: indeed, the architecture allows the robot to transparently also
learn tasks which should not be triggered by an acoustic word, but for example
by the color or position of an object or a gesture made by someone in the
environment. To demonstrate this ability to find the ...
|
Significance of Classification Techniques in Prediction of Learning
Disabilities
|
The aim of this study is to show the importance of two classification
techniques, viz. decision tree and clustering, in prediction of learning
disabilities (LD) of school-age children. LDs affect about 10 percent of all
children enrolled in schools. The problems of children with specific learning
disabilities have been a cause of concern to parents and teachers for some
time. Decision trees and clustering are powerful and popular tools used for
classification and prediction in data mining. Different rules extracted from
the decision tree are used for prediction of learning disabilities. Clustering
is the assignment of a set of observations into subsets, called clusters, which
are useful in finding the different signs and symptoms (attributes) present in
the LD affected child. In this paper, J48 algorithm is used for constructing
the decision tree and K-means algorithm is used for creating the clusters. By
applying these classification techniques, LD in any child can be identified.
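
An illustrative sketch with synthetic records; scikit-learn's CART decision tree stands in for J48 (C4.5), and the feature names, scores and labels below are hypothetical:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Toy records: [reading_score, spelling_score, attention_score], 0-10 scales.
X = [[2, 3, 4], [8, 9, 7], [3, 2, 5], [9, 8, 8], [2, 2, 3], [7, 9, 9]]
y = [1, 0, 1, 0, 1, 0]  # 1 = learning disability indicated

tree = DecisionTreeClassifier().fit(X, y)          # stands in for J48 (C4.5)
clusters = KMeans(n_clusters=2, n_init=10).fit(X)  # groups symptom profiles

print(tree.predict([[3, 3, 4]]))  # predicted LD label for a new child
print(clusters.labels_)           # symptom-based grouping of the records
```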
|
Detecting Ontological Conflicts in Protocols between Semantic Web
Services
|
The task of verifying the compatibility between interacting web services has
traditionally been limited to checking the compatibility of the interaction
protocol in terms of message sequences and the type of data being exchanged.
Since web services are developed largely in an uncoordinated way, different
services often use independently developed ontologies for the same domain
instead of adhering to a single ontology as standard. In this work we
investigate the approaches that can be taken by the server to verify the
possibility to reach a state with semantically inconsistent results during the
execution of a protocol with a client, if the client ontology is published.
Often database is used to store the actual data along with the ontologies
instead of storing the actual data as a part of the ontology description. It is
important to observe that at the current state of the database the semantic
conflict state may not be reached even if the verification done by the server
indicates the possibility of reaching a conflict state. A relational algebra
based decision procedure is also developed to incorporate the current state of
the client and the server databases in the overall verification procedure.
|
Gradient Computation In Linear-Chain Conditional Random Fields Using The
Entropy Message Passing Algorithm
|
The paper proposes a numerically stable recursive algorithm for the exact
computation of the linear-chain conditional random field gradient. It operates
as a forward algorithm over the log-domain expectation semiring and has the
purpose of enhancing memory efficiency when applied to long observation
sequences. Unlike the traditional algorithm based on the forward-backward
recursions, the memory complexity of our algorithm does not depend on the
sequence length. The experiments on real data show that it can be useful for
the problems which deal with long sequences.
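
The algebraic core of such a forward pass is the expectation semiring; a sketch of its two operations in the probability domain (the paper works in the log domain for numerical stability):

```python
def oplus(a, b):
    # Expectation-semiring addition: (p1, r1) + (p2, r2) = (p1 + p2, r1 + r2).
    return (a[0] + b[0], a[1] + b[1])

def otimes(a, b):
    # Expectation-semiring multiplication:
    # (p1, r1) * (p2, r2) = (p1 * p2, p1 * r2 + p2 * r1).
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def edge(weight, feature):
    # An arc carries (weight, weight * feature_value).
    return (weight, weight * feature)

# Summing over two alternative arcs accumulates the partition value and the
# unnormalized feature expectation in a single forward sweep.
total = (0.0, 0.0)
for w, f in [(0.4, 1.0), (0.6, 0.0)]:
    total = oplus(total, edge(w, f))
print(total)                # (1.0, 0.4)
print(total[1] / total[0])  # expected feature value: 0.4

# Multiplying along a two-arc path combines weights and feature mass.
print(otimes(edge(0.5, 1.0), edge(0.8, 0.0)))  # (0.4, 0.4)
```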
|
Reinforcement Learning Based on Active Learning Method
|
In this paper, a new reinforcement learning approach is proposed which is
based on a powerful concept named Active Learning Method (ALM) in modeling. ALM
expresses any multi-input-single-output system as a fuzzy combination of some
single-input-singleoutput systems. The proposed method is an actor-critic
system similar to Generalized Approximate Reasoning based Intelligent Control
(GARIC) structure to adapt the ALM by delayed reinforcement signals. Our system
uses Temporal Difference (TD) learning to model the behavior of useful actions
of a control system. The goodness of an action is modeled on Reward-
Penalty-Plane. IDS planes will be updated according to this plane. It is shown
that the system can learn with a predefined fuzzy system or without it (through
random actions).
|
A New Sufficient Condition for 1-Coverage to Imply Connectivity
|
An effective approach for energy conservation in wireless sensor networks is
scheduling sleep intervals for extraneous nodes while the remaining nodes stay
active to provide continuous service. For the sensor network to operate
successfully the active nodes must maintain both sensing coverage and network
connectivity. It was proved before that if the communication range of nodes is
at least twice the sensing range, complete coverage of a convex area implies
connectivity among the working set of nodes. In this paper we consider a
rectangular region $A = a \times b$ with $a \le R_s$ and $b \le R_s$, where
$R_s$ is the sensing range of the nodes, and put a constraint on the minimum
allowed distance $s$ between nodes. Under this constraint we present a new
lower bound on the communication range relative to the sensing range of the
sensors, $(\sqrt{2+\sqrt{3}})R_s$, such that complete coverage of the
considered area implies connectivity among the working set of nodes; we also
present a new distribution method that satisfies our constraint.
|
Target tracking in the recommender space: Toward a new recommender
system based on Kalman filtering
|
In this paper, we propose a new approach for recommender systems based on
target tracking by Kalman filtering. We assume that users and their seen
resources are vectors in the multidimensional space of the categories of the
resources. Knowing this space, we propose an algorithm based on a Kalman filter
to track users and to predict their future position in the recommendation
space.
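
A generic predict/update Kalman step over the category space, as a sketch; the matrices F, H, Q, R and the two-category setup below are assumptions, not the paper's tuning:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # One predict/update cycle: state x (user position in category space),
    # covariance P, observation z (categories of the resource just consumed).
    x = F @ x                      # predict state
    P = F @ P @ F.T + Q            # predict covariance
    S = H @ P @ H.T + R            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S) # Kalman gain
    x = x + K @ (z - H @ x)        # correct with the observation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dim = 2  # two resource categories
F = np.eye(dim); H = np.eye(dim)
Q = 0.01 * np.eye(dim); R = 0.1 * np.eye(dim)
x, P = np.zeros(dim), np.eye(dim)
x, P = kalman_step(x, P, np.array([0.8, 0.2]), F, H, Q, R)
print(x)  # predicted user position -> recommend resources near it
```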
|
Should one compute the Temporal Difference fix point or minimize the
Bellman Residual? The unified oblique projection view
|
We investigate projection methods, for evaluating a linear approximation of
the value function of a policy in a Markov Decision Process context. We
consider two popular approaches, the one-step Temporal Difference fix-point
computation (TD(0)) and the Bellman Residual (BR) minimization. We describe
examples, where each method outperforms the other. We highlight a simple
relation between the objective function they minimize, and show that while BR
enjoys a performance guarantee, TD(0) does not in general. We then propose a
unified view in terms of oblique projections of the Bellman equation, which
substantially simplifies and extends the characterization of (Schoknecht, 2002)
and the recent analysis of (Yu & Bertsekas, 2008). Finally, we describe some
simulations suggesting that while the TD(0) solution is usually slightly better
than the BR solution, its inherent numerical instability can make it very bad
in some cases, and thus worse on average.
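
In standard notation (feature matrix $\Phi$, weighted projection $\Pi$, Bellman operator $T^{\pi}$), the two objects compared are:

```latex
% TD(0) computes the fixed point of the projected Bellman equation:
\Phi \theta_{\mathrm{TD}} = \Pi \, T^{\pi} (\Phi \theta_{\mathrm{TD}})
% BR minimization instead attacks the Bellman residual directly:
\theta_{\mathrm{BR}} \in \arg\min_{\theta}
  \left\| T^{\pi}(\Phi\theta) - \Phi\theta \right\|^2
```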
|
Distributed Graph Coloring: An Approach Based on the Calling Behavior of
Japanese Tree Frogs
|
Graph coloring, also known as vertex coloring, considers the problem of
assigning colors to the nodes of a graph such that adjacent nodes do not share
the same color. The optimization version of the problem concerns the
minimization of the number of used colors. In this paper we deal with the
problem of finding valid colorings of graphs in a distributed way, that is, by
means of an algorithm that only uses local information for deciding the color
of the nodes. Such algorithms work without any central control. Because quite a
few practical applications require finding colorings in a distributed way, the
interest in distributed algorithms for graph coloring has
been growing during the last decade. As an example consider wireless ad-hoc and
sensor networks, where tasks such as the assignment of frequencies or the
assignment of TDMA slots are strongly related to graph coloring.
The algorithm proposed in this paper is inspired by the calling behavior of
Japanese tree frogs. Male frogs use their calls to attract females.
Interestingly, groups of males that are located nearby each other desynchronize
their calls. This is because female frogs are only able to correctly localize
the male frogs when their calls are not too close in time. We experimentally
show that our algorithm is very competitive with the current state of the art,
using different sets of problem instances and comparing to one of the most
competitive algorithms from the literature.
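
For contrast with the frog-inspired dynamics, here is a deliberately simple local-information baseline (each node sees only its neighbours' colors); this is not the authors' algorithm:

```python
import random

def distributed_coloring(adj, colors, rounds=100, seed=0):
    # Each round, every node in conflict looks only at its neighbours'
    # current colors and moves to the smallest locally free color.
    rng = random.Random(seed)
    color = {v: rng.randrange(colors) for v in adj}
    for _ in range(rounds):
        conflict = [v for v in adj if any(color[u] == color[v] for u in adj[v])]
        if not conflict:
            break
        for v in conflict:
            used = {color[u] for u in adj[v]}
            free = [c for c in range(colors) if c not in used]
            if free:
                color[v] = free[0]
    return color

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-cycle, 2-colorable
print(distributed_coloring(ring, colors=2))
```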
|
Bayesian Modeling of a Human MMORPG Player
|
This paper describes an application of Bayesian programming to the control of
an autonomous avatar in a multiplayer role-playing game (the example is based
on World of Warcraft). We model a particular task, which consists of choosing
what to do and to select which target in a situation where allies and foes are
present. We explain the model in Bayesian programming and show how we could
learn the conditional probabilities from data gathered during human-played
sessions.
|
Reinforcement Learning in Partially Observable Markov Decision Processes
using Hybrid Probabilistic Logic Programs
|
We present a probabilistic logic programming framework for reinforcement
learning, by integrating reinforcement learning, in POMDP environments, with
normal hybrid probabilistic logic programs with probabilistic answer set
semantics, which is capable of representing domain-specific knowledge. We
formally prove the correctness of our approach. We show that the complexity of
finding a policy for a reinforcement learning problem in our approach is
NP-complete. In addition, we show that any reinforcement learning problem can
be encoded as a classical logic program with answer set semantics. We also show
that a reinforcement learning problem can be encoded as a SAT problem. We
present a new high level action description language that allows the factored
representation of POMDP. Moreover, we modify the original POMDP model so that
it is able to distinguish between knowledge-producing actions and actions that
change the environment.
|
Multimodal Biometric Systems - Study to Improve Accuracy and Performance
|
Biometrics is the science and technology of measuring and analyzing
biological data of the human body, extracting a feature set from the acquired
data, and comparing this set against the template set in the database.
Experimental studies show that unimodal biometric systems have many
disadvantages regarding performance and accuracy. Multimodal biometric systems
perform better than unimodal biometric systems and are popular even though they
are more complex. We examine the accuracy and performance of multimodal
biometric authentication systems using state-of-the-art Commercial
Off-The-Shelf (COTS) products. Here we discuss fingerprint and face biometric
systems, and the decision and fusion techniques used in these systems. We also
discuss their advantage over unimodal biometric systems.
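
A minimal sketch of one common multimodal design, score-level fusion: min-max normalization followed by a weighted sum; the weights, threshold and raw score ranges below are assumptions:

```python
def min_max_normalize(score, lo, hi):
    # Map a matcher's raw score into [0, 1].
    return (score - lo) / (hi - lo)

def fused_decision(finger_score, face_score,
                   w_finger=0.6, w_face=0.4, thresh=0.5):
    # Weighted-sum score-level fusion of two matchers.
    s = w_finger * finger_score + w_face * face_score
    return "accept" if s >= thresh else "reject"

# Raw scores from hypothetical matchers, normalized to [0, 1] first.
finger = min_max_normalize(78, lo=0, hi=100)
face = min_max_normalize(0.42, lo=0.0, hi=1.0)
print(fused_decision(finger, face))  # "accept"
```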
|
A Bayesian Methodology for Estimating Uncertainty of Decisions in
Safety-Critical Systems
|
Uncertainty of decisions in safety-critical engineering applications can be
estimated on the basis of the Bayesian Markov Chain Monte Carlo (MCMC)
technique of averaging over decision models. The use of decision tree (DT)
models assists experts to interpret causal relations and find factors of the
uncertainty. Bayesian averaging also allows experts to estimate the uncertainty
accurately when a priori information on the favored structure of DTs is
available. Then an expert can select a single DT model, typically the Maximum a
Posteriori model, for interpretation purposes. Unfortunately, a priori
information on favored structure of DTs is not always available. For this
reason, we suggest a new prior on DTs for the Bayesian MCMC technique. We also
suggest a new procedure of selecting a single DT and describe an application
scenario. In our experiments on the Short-Term Conflict Alert data our
technique outperforms the existing Bayesian techniques in predictive accuracy
of the selected single DTs.
|
Using ASP with recent extensions for causal explanations
|
We examine the practicality for a user of using Answer Set Programming (ASP)
for representing logical formalisms. We choose as an example a formalism aiming
at capturing causal explanations from causal information. We provide an
implementation, showing the naturalness and relative efficiency of this
translation job. We are interested in the ease for writing an ASP program, in
accordance with the claimed ``declarative'' aspect of ASP. Limitations of the
earlier systems (poor data structures and difficulty in reusing pieces of
programs) meant that, in practice, the ``declarative aspect'' was more
theoretical than practical. We show how recent improvements in working ASP
systems greatly facilitate the translation, even if a few further improvements
could still be useful.
|
URSA: A System for Uniform Reduction to SAT
|
A huge number of problems, from various areas, are solved by reducing them to
SAT. However, for many applications, translation into SAT is performed by
specialized, problem-specific tools. In this paper we describe a new system
for uniformly solving a wide class of problems by reducing them to SAT. The
system uses a new specification language, URSA, that combines the imperative
and declarative programming paradigms. The reduction to SAT is defined
precisely by the semantics of the specification language. The domain of the
approach is wide (e.g., many NP-complete problems can be simply specified and
then solved by the system), and there are problems easily solvable by the
proposed system that can hardly be solved using other programming languages
or constraint programming systems. So, the system can be seen not only as a
tool for solving problems by reducing them to SAT, but also as a
general-purpose constraint solving system (for finite domains). In this
paper, we also describe an open-source implementation of the described
approach. The performed experiments suggest that the system is competitive
with state-of-the-art related modelling systems.
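The URSA specification language itself is not reproduced here, but the flavor
of a reduction to SAT can be sketched with a standard pairwise encoding of an
"exactly one" constraint into CNF clauses; the DIMACS-style integer notation
below is a common convention, not URSA syntax.

```python
# Illustrative sketch of reduction to SAT (not URSA's specification
# language): encode "exactly one of x1..xn is true" as CNF clauses in
# DIMACS-style integer notation (positive = variable, negative = negation).

def exactly_one(variables):
    """Return CNF clauses forcing exactly one of the variables to be true."""
    clauses = [list(variables)]                 # at least one is true
    for i in range(len(variables)):             # at most one: pairwise
        for j in range(i + 1, len(variables)):
            clauses.append([-variables[i], -variables[j]])
    return clauses

# Example: a puzzle cell taking exactly one of the values 1..3,
# modelled by Boolean variables 1, 2, 3.
for clause in exactly_one([1, 2, 3]):
    print(clause)   # [1, 2, 3], [-1, -2], [-1, -3], [-2, -3]
```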
|
Are SNOMED CT Browsers Ready for Institutions? Introducing MySNOM
|
SNOMED Clinical Terms (SNOMED CT) is one of the most widespread ontologies in
the life sciences, with more than 300,000 concepts and relationships, but is
distributed with no associated software tools. In this paper we present MySNOM,
a web-based SNOMED CT browser. MySNOM allows organizations to browse their own
distribution of SNOMED CT under a controlled environment, focuses on navigating
using the structure of SNOMED CT, and has diagramming capabilities.
|
A study on the relation between linguistics-oriented and domain-specific
semantics
|
In this paper we deal with the comparison and linking of lexical resources
with the domain knowledge provided by ontologies, one of the key issues in
combining Semantic Web ontologies and text mining. We investigated the
relations between linguistics-oriented and domain-specific semantics by
associating GO biological process concepts with FrameNet semantic frames. The
results show gaps between linguistics-oriented and domain-specific semantics
in the classification of events and the grouping of target words. They
provide valuable information for improving domain ontologies that support
text mining systems, and will also benefit language understanding technology.
|
Process Makna - A Semantic Wiki for Scientific Workflows
|
Virtual e-Science infrastructures supporting Web-based scientific workflows
are an example of knowledge-intensive, collaborative and weakly-structured
processes where interaction with the human scientists during process
execution plays a central role. In this paper we propose lightweight, dynamic
and user-friendly interaction with humans during the execution of scientific
workflows via the low-barrier approach of Semantic Wikis as an intuitive
interface for non-technical scientists. Our Process Makna Semantic Wiki
system is a novel combination of a business process management system adapted
for scientific workflows with a Corporate Semantic Web Wiki user interface
supporting knowledge-intensive human interaction tasks during scientific
workflow execution.
|
Use of semantic technologies for the development of a dynamic
trajectories generator in a Semantic Chemistry eLearning platform
|
ChemgaPedia is a multimedia, web-based eLearning service platform that
currently contains about 18,000 pages organized in 1,700 chapters covering
the complete bachelor studies in chemistry and related topics of chemistry,
pharmacy, and the life sciences. The eLearning encyclopedia contains some
25,000 media objects, and the eLearning platform provides services such as
virtual and remote labs for experiments. With up to 350,000 users per month,
the platform is the most frequently used scientific educational service on
the German-speaking Internet. In this demo we show the benefit of mapping the
static eLearning contents of ChemgaPedia to a Linked Data representation for
Semantic Chemistry, which allows for generating dynamic eLearning paths
tailored to the semantic profiles of the users.
|
Using Semantic Wikis for Structured Argument in Medical Domain
|
This research applies ideas from argumentation theory in the context of
semantic wikis, aiming to provide support for structured, large-scale
argumentation between human agents. The implemented prototype is exemplified by
modelling the MMR vaccine controversy.
|
Creating a new Ontology: a Modular Approach
|
Creating a new Ontology: a Modular Approach
|
A semantic approach for the requirement-driven discovery of web services
in the Life Sciences
|
Research in the Life Sciences depends on the integration of large,
distributed and heterogeneous data sources and web services. Discovering
which of these resources is the most appropriate for a given task is a
complex research question, since there is a large number of plausible
candidates and little, mostly unstructured, metadata with which to decide
among them. We contribute a semi-automatic approach, based on semantic
techniques, to assist researchers in the discovery of the web services most
appropriate to fulfil a set of given requirements.
|
Scientific Collaborations: principles of WikiBridge Design
|
Semantic wikis, wikis enhanced with Semantic Web technologies, are
appropriate systems for community-authored knowledge models. They are
particularly suitable for scientific collaboration. This paper details the
design principles of WikiBridge, a semantic wiki.
|
Populous: A tool for populating ontology templates
|
We present Populous, a tool for gathering content with which to populate an
ontology. Domain experts need to add content that is often repetitive in
form, without having to tackle the underlying ontological representation.
Populous presents users with a table-based form in which columns are
constrained to take values from particular ontologies; the user can select a
concept from an ontology via its meaningful label to give a value for a given
entity attribute. Populated tables are mapped to patterns that can then be
used to automatically generate the ontology's content. Populous's
contribution is in the knowledge gathering stage of ontology development. It
separates knowledge gathering from conceptualisation, and also separates the
user from the standard ontology authoring environments. As a result, Populous
allows knowledge to be gathered in a straightforward manner that can then be
used to mass-produce ontology content.
|
Querying Biomedical Ontologies in Natural Language using Answer Set
|
In this work, we develop an intelligent user interface that allows users to
enter biomedical queries in a natural language, and that presents the answers
(possibly with explanations if requested) in a natural language. We develop a
rule layer over biomedical ontologies and databases, and use automated
reasoners to answer queries considering relevant parts of the rule layer.
|
Bisimulations for fuzzy transition systems
|
There has been a long history of using fuzzy language equivalence to compare
the behavior of fuzzy systems, but the comparison at this level is too coarse.
Recently, a finer behavioral measure, bisimulation, has been introduced to
fuzzy finite automata. However, the results obtained are applicable only to
finite-state systems. In this paper, we consider bisimulation for general fuzzy
systems which may be infinite-state or infinite-event, by modeling them as
fuzzy transition systems. To help understand and check bisimulation, we
characterize it in three ways: by enumerating whole transitions, by comparing
individual transitions, and by using a monotonic function. In addition, we
address
composition operations, subsystems, quotients, and homomorphisms of fuzzy
transition systems and discuss their properties connected with bisimulation.
The results presented here are useful for comparing the behavior of general
fuzzy systems. In particular, it becomes possible to relate an infinite fuzzy
system to a finite one with the same behavior, which is easier to analyze.
|
Nondeterministic fuzzy automata
|
Fuzzy automata have long been accepted as a generalization of
nondeterministic finite automata. A closer examination, however, shows that the
fundamental property---nondeterminism---in nondeterministic finite automata has
not been well embodied in the generalization. In this paper, we introduce
nondeterministic fuzzy automata with or without $\epsilon$-moves and fuzzy
languages recognized by them. Furthermore, we prove that (deterministic)
fuzzy automata, nondeterministic fuzzy automata, and nondeterministic fuzzy
automata with $\epsilon$-moves are all equivalent in the sense that they
recognize the same class
of fuzzy languages.
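For reference, a common formalization over which such equivalence results are
stated uses max-min acceptance semantics, as sketched below; the paper's
exact definitions may differ in detail.

```latex
% A fuzzy automaton is commonly given as A = (Q, \Sigma, \delta, \sigma, \tau),
% with finite state set Q, alphabet \Sigma, fuzzy transition function
% \delta : Q \times \Sigma \times Q \to [0,1], and fuzzy initial/final state
% sets \sigma, \tau : Q \to [0,1]. Under max-min semantics, the degree to
% which a word w = a_1 \cdots a_n is recognized is
\mathcal{A}(w) \;=\; \bigvee_{q_0,\dots,q_n \in Q}
  \Big( \sigma(q_0) \wedge \delta(q_0,a_1,q_1) \wedge \cdots
        \wedge \delta(q_{n-1},a_n,q_n) \wedge \tau(q_n) \Big)
```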
|
Experimental Comparison of Representation Methods and Distance Measures
for Time Series Data
|
The previous decade has brought a remarkable increase in interest in
applications that deal with querying and mining of time series data. Many of
the research efforts in this context have focused on introducing new
representation methods for dimensionality reduction or novel similarity
measures for the underlying data. In the vast majority of cases, each
individual work introducing a particular method has made specific claims and,
aside from the occasional theoretical justifications, provided quantitative
experimental observations. However, for the most part, the comparative aspects
of these experiments were too narrowly focused on demonstrating the benefits of
the proposed methods over some of the previously introduced ones. In order to
provide a comprehensive validation, we conducted an extensive experimental
study re-implementing eight different time series representations and nine
similarity measures and their variants, and testing their effectiveness on
thirty-eight time series data sets from a wide variety of application domains.
In this paper, we give an overview of these different techniques and present
our comparative experimental findings regarding their effectiveness. In
addition to providing a unified validation of some of the existing
achievements, our experiments also indicate that, in some cases, certain claims
in the literature may be unduly optimistic.
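The paper's eight representations and nine measures are not listed in this
abstract; as one representative example of the elastic similarity measures
typically included in such comparisons, here is a minimal dynamic time
warping (DTW) sketch.

```python
# Minimal dynamic time warping (DTW) distance, a canonical elastic
# similarity measure for time series (one plausible example of the kind of
# measure compared in such studies; the paper's exact set is not listed).

def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw([0, 1, 2, 3], [0, 1, 1, 2, 3]))  # 0.0: warped alignment is exact
```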
|
A new Recommender system based on target tracking: a Kalman Filter
approach
|
In this paper, we propose a new approach for recommender systems based on
target tracking by Kalman filtering. We assume that users and their seen
resources are vectors in the multidimensional space of the categories of the
resources. Knowing this space, we propose an algorithm based on a Kalman
filter to track users and to predict their future position in the
recommendation space.
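To make the tracking machinery concrete, below is a minimal one-dimensional
constant-velocity Kalman filter with the standard predict/update steps; the
state model and noise covariances are illustrative assumptions, not the
paper's recommendation-space model.

```python
# Minimal 1-D constant-velocity Kalman filter (predict/update), sketching
# the tracking machinery the approach relies on. The state model and noise
# levels here are illustrative assumptions.
import numpy as np

F = np.array([[1.0, 1.0],       # state transition: position += velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])      # we only observe position
Q = 0.01 * np.eye(2)            # process noise covariance (assumed)
R = np.array([[0.5]])           # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])    # initial state: position, velocity
P = np.eye(2)                   # initial state covariance

for z in [1.0, 2.1, 2.9, 4.2]:  # noisy position observations
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x             # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())  # estimated position and velocity after four updates
```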
|
Dynamic Capitalization and Visualization Strategy in Collaborative
Knowledge Management System for EI Process
|
Knowledge is attributed to humans, whose problem-solving behavior is
subjective and complex. In today's knowledge economy, the need to manage
knowledge produced by a community of actors cannot be overemphasized. This is
due to the fact that actors possess some level of tacit knowledge which is
generally difficult to articulate. Problem-solving requires searching and
sharing of knowledge among a group of actors in a particular context.
Knowledge expressed within the context of a problem resolution must be
capitalized for future reuse. In this paper, an approach that permits dynamic
capitalization of relevant and reliable actors' knowledge in solving decision
problems following the Economic Intelligence process is proposed. A knowledge
annotation method and temporal attributes are used for handling the
complexity in the communication among actors and for contextualizing
expressed knowledge. A prototype is built to demonstrate the functionalities
of a collaborative Knowledge Management system based on this approach. It is
tested with sample cases and the results show that dynamic capitalization
leads to knowledge validation, hence increasing the reliability of captured
knowledge for reuse. The system can be adapted to various domains.
|
Dynamic Knowledge Capitalization through Annotation among Economic
Intelligence Actors in a Collaborative Environment
|
The shift from the industrial economy to the knowledge economy in today's
world has revolutionized strategic planning in organizations as well as their
problem solving approaches. The focus today is on knowledge and service
production, with more emphasis being laid on knowledge capital. Many
organizations are investing in tools that facilitate knowledge sharing among
their employees, and they are also promoting and encouraging collaboration
among their staff in order to build the organization's knowledge capital,
with the ultimate goal of creating a lasting competitive advantage for their
organizations. One of the current leading approaches used for solving an
organization's decision problems is the Economic Intelligence (EI) approach,
which involves interactions among various actors called EI actors. These
actors collaborate to ensure the overall success of the decision problem
solving process. In the course of the collaboration, the actors express
knowledge which could be capitalized for future reuse. In this paper, we
first propose an annotation model for knowledge elicitation among EI actors.
Because of the need to build a knowledge capital, we also propose a dynamic
knowledge capitalisation approach for managing knowledge produced by the
actors. Finally, the need to manage the interactions and the
interdependencies among collaborating EI actors leads to our third
proposition, an awareness mechanism for group work management.
|
Descriptive-complexity based distance for fuzzy sets
|
A new distance function dist(A,B) for fuzzy sets A and B is introduced. It is
based on the descriptive complexity, i.e., the number of bits (on average) that
are needed to describe an element in the symmetric difference of the two sets.
The distance gives the amount of additional information needed to describe any
one of the two sets given the other. We prove its mathematical properties and
perform pattern clustering on data based on this distance.
|
Artificial Intelligence in Reverse Supply Chain Management: The State of
the Art
|
Product take-back legislation forces manufacturers to bear the costs of
collection and disposal of products that have reached the end of their useful
lives. In order to reduce these costs, manufacturers can consider reuse,
remanufacturing and/or recycling of components as an alternative to disposal.
The implementation of such alternatives usually requires an appropriate reverse
supply chain management. As the concepts of reverse supply chain gain
popularity in practice, the use of artificial intelligence approaches in
these areas is also becoming popular. As a result, the purpose of this paper
is to
give an overview of the recent publications concerning the application of
artificial intelligence techniques to reverse supply chain with emphasis on
certain types of product returns.
|
Automatic Estimation of the Exposure to Lateral Collision in Signalized
Intersections using Video Sensors
|
Intersections constitute one of the most dangerous elements in road systems.
Traffic signals remain the most common way to control traffic at high-volume
intersections and offer many opportunities to apply intelligent transportation
systems to make traffic more efficient and safe. This paper describes an
automated method to estimate the temporal exposure of road users crossing the
conflict zone to lateral collision with road users originating from a different
approach. This component is part of a larger system relying on video sensors to
provide queue lengths and spatial occupancy that are used for real time traffic
control and monitoring. The method is evaluated on data collected during a real
world experiment.
|
Symmetry Breaking with Polynomial Delay
|
A conservative class of constraint satisfaction problems (CSPs) is a class
for which membership is preserved under arbitrary domain reductions. Many
well-known tractable classes of CSPs are conservative. It is well known that
lexleader constraints may significantly reduce the number of solutions by
excluding symmetric solutions of CSPs. We show that adding certain lexleader
constraints to any instance of any conservative class of CSPs still allows us
to find all solutions in time which is polynomial between successive
solutions. The time is polynomial in the total size of the instance and the
additional lexleader constraints. It is well known that complete symmetry
breaking may need an exponential number of lexleader constraints. However, in
practice, the number of additional lexleader constraints is typically
polynomial in the size of the instance. With polynomially many lexleader
constraints we may in general not have complete symmetry breaking, but they
may still provide practically useful symmetry breaking -- and they sometimes
exclude super-exponentially many solutions. We prove that for any instance
from a conservative class, the time between finding successive solutions of
the instance with polynomially many additional lexleader constraints is
polynomial even in the size of the instance without lexleader constraints.
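For reference, the standard form of a lexleader constraint for a variable
symmetry $\sigma$ is:

```latex
% Lexleader symmetry breaking: for each considered variable symmetry \sigma
% (acting on variable indices), post the constraint
\langle x_1, x_2, \dots, x_n \rangle \;\le_{\mathrm{lex}}\;
\langle x_{\sigma(1)}, x_{\sigma(2)}, \dots, x_{\sigma(n)} \rangle
% For the transposition \sigma = (1\;2) this reduces to x_1 \le x_2.
```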
|
Looking for plausibility
|
In the interpretation of experimental data, one is actually looking for
plausible explanations. We look for a measure of plausibility, with which we
can compare different possible explanations, and which can be combined when
there are different sets of data. This is contrasted to the conventional
measure for probabilities as well as to the proposed measure of possibilities.
We define what characteristics this measure of plausibility should have.
In getting to the conception of this measure, we explore the relation of
plausibility to abductive reasoning, and to Bayesian probabilities. We also
compare with the Dempster-Shafer theory of evidence, which also has its own
definition for plausibility. Abduction can be associated with biconditionality
in inference rules, and this provides a platform to relate to the
Collins-Michalski theory of plausibility. Finally, using a formalism for wiring
logic onto Hopfield neural networks, we ask if this is relevant in obtaining
this measure.
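For comparison, the standard Dempster-Shafer definitions of belief and
plausibility for a basic mass assignment m are:

```latex
% Standard Dempster-Shafer definitions, for a basic mass assignment
% m : 2^{\Omega} \to [0,1] with m(\emptyset) = 0 and \sum_{B} m(B) = 1:
\mathrm{Bel}(A) \;=\; \sum_{\emptyset \neq B \subseteq A} m(B), \qquad
\mathrm{Pl}(A) \;=\; \sum_{B \cap A \neq \emptyset} m(B)
  \;=\; 1 - \mathrm{Bel}(\overline{A})
```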
|
SAPFOCS: a metaheuristic based approach to part family formation
problems in group technology
|
This article deals with the part family formation problem, which is believed
to be hard to solve in polynomial time, in the context of Group Technology
(GT). The past literature shows that part family formation techniques are
principally based on production flow analysis (PFA), which usually considers
operational requirements, sequences and times. Part Coding Analysis (PCA) is
rarely considered in GT, although it is believed to be a proficient method
for identifying part families. PCA classifies parts by allotting them to
different families based on their resemblances in: (1) design
characteristics such as shape and size, and/or (2) manufacturing
characteristics (machining requirements). A novel approach based on simulated
annealing, namely SAPFOCS, is adopted in this study to develop effective part
families exploiting the PCA technique. Thereafter, Taguchi's orthogonal
design method is employed to resolve the critical issues regarding parameter
selection for the proposed metaheuristic algorithm. The adopted technique is
then tested on 5 different datasets of sizes $5 \times 9$ to $27 \times 9$,
and the obtained results are compared with the C-Linkage clustering
technique. The experimental results show that the proposed metaheuristic
algorithm is extremely effective in terms of solution quality and outperforms
the C-Linkage algorithm in most instances.
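The abstract does not specify SAPFOCS's cost function or neighborhood move,
but the simulated annealing skeleton it builds on is standard; the sketch
below uses placeholder cost and neighbor functions and illustrative cooling
parameters.

```python
# Generic simulated annealing skeleton of the kind SAPFOCS builds on.
# The cost function and neighborhood move for part-family formation are
# not specified in the abstract, so both are placeholders here.
import math
import random

def simulated_annealing(initial, cost, neighbor,
                        t0=100.0, cooling=0.95, steps=1000):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        # Accept improvements always; accept worsenings with Boltzmann prob.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize (x - 3)^2 over integers via +/-1 moves.
best, c = simulated_annealing(
    0, lambda x: (x - 3) ** 2, lambda x: x + random.choice([-1, 1]))
print(best, c)  # typically 3, 0
```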
|
On Elementary Loops of Logic Programs
|
Using the notion of an elementary loop, Gebser and Schaub refined the theorem
on loop formulas due to Lin and Zhao by considering loop formulas of elementary
loops only. In this article, we reformulate their definition of an elementary
loop, extend it to disjunctive programs, and study several properties of
elementary loops, including how maximal elementary loops are related to minimal
unfounded sets. The results provide useful insights into the stable model
semantics in terms of elementary loops. For a nondisjunctive program, using a
graph-theoretic characterization of an elementary loop, we show that the
problem of recognizing an elementary loop is tractable. On the other hand, we
show that the corresponding problem is {\sf coNP}-complete for a disjunctive
program. Based on the notion of an elementary loop, we present the class of
Head-Elementary-loop-Free (HEF) programs, which strictly generalizes the class
of Head-Cycle-Free (HCF) programs due to Ben-Eliyahu and Dechter. Like an HCF
program, an HEF program can be turned into an equivalent nondisjunctive program
in polynomial time by shifting head atoms into the body.
|
Extending Binary Qualitative Direction Calculi with a Granular Distance
Concept: Hidden Feature Attachment
|
In this paper we introduce a method for extending binary qualitative
direction calculi with adjustable granularity, like OPRAm or the star
calculus, with a granular distance concept. This method is similar to the
concept of extending points with an internal reference direction to obtain
oriented points, which are the basic entities in the OPRAm calculus. Even
though the spatial objects are, from a geometrical point of view,
infinitesimally small points, locally available reference measures are
attached to them. In the case of OPRAm, a reference direction is attached.
The same principle also works with local reference distances, which are
called elevations. The principle of attaching reference features to a point
is called hidden feature attachment.
|
Learning a Representation of a Believable Virtual Character's
Environment with an Imitation Algorithm
|
In video games, virtual characters' decision systems often use a simplified
representation of the world. To increase both their autonomy and believability
we want those characters to be able to learn this representation from human
players. We propose to use a model called growing neural gas to learn by
imitation the topology of the environment. The implementation of the model, the
modifications and the parameters we used are detailed. Then, the quality of the
learned representations and their evolution during the learning are studied
using different measures. Improvements for the growing neural gas to give more
information to the character's model are given in the conclusion.
|
Planning with Partial Preference Models
|
Current work in planning with preferences assumes that the user's preference
model is completely specified, and aims to search for a single solution plan.
In many real-world planning scenarios, however, the user may be unable to
provide any information about her desired plans, or in some cases can only
express partial preferences. In such situations, the planner has to present
not just one but a set of plans to the user, with the hope that some of them
are similar to the plan she prefers. We first propose using different
measures to capture the quality of plan sets suitable for such scenarios:
domain-independent distance measures defined on plan elements (actions,
states, causal links) if no knowledge of the user's preferences is given, and
the Integrated Convex Preference measure when the user's partial preferences
are provided. We then investigate various heuristic approaches to find sets
of plans according to these measures, and present empirical results
demonstrating the promise of our approach.
|
Extracting Features from Ratings: The Role of Factor Models
|
Performing effective preference-based data retrieval requires detailed and
preferentially meaningful structured information about the current user as
well as the items under consideration. A common problem is that
representations of items often consist of mere technical attributes, which do
not resemble human perception. This is particularly true for integral items
such as movies or songs. It is often claimed that meaningful item features
could be extracted from collaborative rating data, which is becoming
available through social networking services. However, there is only
anecdotal evidence supporting this claim; if it is true, the extracted
information could be very valuable for preference-based data retrieval. In
this paper, we propose a methodology to systematically check this common
claim. We performed a preliminary investigation on a large collection of
movie ratings and present initial evidence.
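One standard way to extract such features is a latent-factor model fitted to
the ratings by stochastic gradient descent; the minimal sketch below is a
generic illustration of this technique (the paper's factor models and
hyperparameters are not specified in the abstract).

```python
# Minimal latent-factor model fit by SGD, the standard way to extract
# features from collaborative rating data. Hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]  # (user, item, r)
n_users, n_items, k = 3, 2, 2

P = 0.1 * rng.standard_normal((n_users, k))   # user factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item factors ("features")

lr, reg = 0.05, 0.02
for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(Q)  # learned item feature vectors, one k-dimensional row per item
```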
|
The "psychological map of the brain", as a personal information card
(file), - a project for the student of the 21st century
|
We suggest a procedure that is relevant both to electronic performance and
human psychology, so that creative logic and respect for human nature appear
in good agreement. The idea is to create an electronic card containing
basic information about a person's psychological behavior in order to make it
possible to quickly decide about the suitability of one for another. This
"psychological electronics" approach could be tested via student projects.
|
Meaning Negotiation as Inference
|
Meaning negotiation (MN) is the general process by which agents reach an
agreement about the meaning of a set of terms. Artificial Intelligence
scholars have dealt with the problem of MN by means of argumentation schemes,
belief merging and information fusion operators, and ontology alignment, but
the proposed approaches depend upon the number of participants. In this
paper, we give a general model of MN for an arbitrary number of agents, in
which each participant discusses her viewpoint with the others by exhibiting
it as an actual set of constraints on the meaning of the negotiated terms. We
call this presentation of an individual viewpoint an angle. The agents do not
aim at forming a common viewpoint but, instead, at agreeing on an acceptable
common angle. We analyze separately the process of MN by two agents
(\emph{bilateral} or \emph{pairwise} MN) and by more than two agents
(\emph{multiparty} MN), and we use game-theoretic models to understand how
the process develops in both cases: the models are the Bargaining Game for
bilateral MN and the English Auction for multiparty MN. We formalize the
process of reaching such an agreement by giving a deduction system that
comprises rules that are consistent and adequate for representing MN.
|
Information-theoretic measures associated with rough set approximations
|
Although some information-theoretic measures of uncertainty or granularity
have been proposed in rough set theory, these measures are only dependent on
the underlying partition and the cardinality of the universe, independent of
the lower and upper approximations. It seems somewhat unreasonable since the
basic idea of rough set theory aims at describing vague concepts by the lower
and upper approximations. In this paper, we thus define new
information-theoretic entropy and co-entropy functions associated with the
partition and the approximations to measure the uncertainty and granularity of
an approximation space. After introducing the novel notions of entropy and
co-entropy, we then examine their properties. In particular, we discuss the
relationship of co-entropies between different universes. The theoretical
development is accompanied by illustrative numerical examples.
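For reference, the lower and upper approximations on which the proposed
entropy and co-entropy depend are the standard rough-set constructions:

```latex
% Standard rough-set approximations of a concept X \subseteq U with respect
% to the equivalence classes [x]_R induced by the partition:
\underline{R}X \;=\; \{\, x \in U \;:\; [x]_R \subseteq X \,\}, \qquad
\overline{R}X \;=\; \{\, x \in U \;:\; [x]_R \cap X \neq \emptyset \,\}
```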
|
An architecture for the evaluation of intelligent systems
|
One of the main research areas in Artificial Intelligence is the coding of
agents (programs) which are able to learn by themselves in any situation. This
means that agents must be useful for purposes other than those they were
created for, as, for example, playing chess. In this way we try to get closer
to the pristine goal of Artificial Intelligence. One of the problems in
deciding whether an agent is really intelligent is measuring its
intelligence, since there is currently no reliable way to do so.
The purpose of this project is to create an interpreter that allows for the
execution of several environments, including those which are generated
randomly, so that an agent (a person or a program) can interact with them. Once
the interaction between the agent and the environment is over, the interpreter
will measure the intelligence of the agent according to the actions, states and
rewards the agent has undergone inside the environment during the test. As a
result we will be able to measure agents' intelligence in any possible
environment, and to make comparisons between several agents, in order to
determine which of them is the most intelligent. In order to perform the tests,
the interpreter must be able to randomly generate environments that are really
useful to measure agents' intelligence, since not any randomly generated
environment will serve that purpose.
|
Intelligent Semantic Web Search Engines: A Brief Survey
|
The World Wide Web (WWW) allows people to share information (data) from large
database repositories globally. The amount of information is growing across
billions of databases. To search this information, we need specialized tools
known generically as search engines. Although many search engines are
available today, retrieving meaningful information remains difficult.
Semantic web technologies are playing a major role in enabling search engines
to retrieve meaningful information intelligently. In this paper we present a
survey of the search engine generations and the role of search engines in the
intelligent web and semantic search technologies.
|
Online Least Squares Estimation with Self-Normalized Processes: An
Application to Bandit Problems
|
The analysis of online least squares estimation is at the heart of many
stochastic sequential decision making problems. We employ tools from the
theory of self-normalized processes to provide a simple and self-contained
proof of a tail bound for a vector-valued martingale. We use the bound to
construct new, tighter confidence sets for the least squares estimate.
We apply the confidence sets to several online decision problems, such as the
multi-armed and the linearly parametrized bandit problems. The confidence
sets are potentially applicable to other problems such as sleeping bandits,
generalized linear bandits, and other linear control problems.
We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of
Auer et al. (2002) and show that its regret is, with high probability, a
problem-dependent constant. In the case of linear bandits (Dani et al.,
2008), we improve the problem-dependent bound in the dimension and the number
of time steps. Furthermore, as opposed to previous results, we prove that our
bound holds for small sample sizes, while at the same time the worst-case
bound is improved by a logarithmic factor and the constant is improved.
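For context, the classic UCB index of Auer et al. (2002) that the paper
improves upon looks as follows; the paper's contribution is a tighter
confidence radius, which would replace the $\sqrt{2 \ln t / n_i}$ term in
this generic sketch.

```python
# Classic UCB1 index of Auer et al. (2002) for multi-armed bandits; the
# paper's tighter confidence sets would replace the sqrt(2 ln t / n) radius.
import math
import random

def ucb1(arms, horizon=10000):
    n = [0] * len(arms)        # pull counts
    mean = [0.0] * len(arms)   # empirical means
    for t in range(1, horizon + 1):
        if t <= len(arms):
            a = t - 1          # pull each arm once to initialize
        else:
            a = max(range(len(arms)),
                    key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / n[i]))
        reward = arms[a]()     # sample a stochastic reward in [0, 1]
        n[a] += 1
        mean[a] += (reward - mean[a]) / n[a]
    return n

# Two Bernoulli arms with success probabilities 0.4 and 0.6.
pulls = ucb1([lambda: float(random.random() < 0.4),
              lambda: float(random.random() < 0.6)])
print(pulls)  # the second arm should receive most pulls
```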
|
Hybrid Model for Solving Multi-Objective Problems Using Evolutionary
Algorithm and Tabu Search
|
This paper presents a new multi-objective hybrid model that combines the
neighborhood-search strength of tabu search (TS) with the strong exploration
capacity of evolutionary algorithms. This model was implemented and tested on
benchmark functions (ZDT1, ZDT2, and ZDT3), using a network of computers.
|
New Worst-Case Upper Bound for #XSAT
|
An algorithm running in $O(1.1995^n)$ time is presented for counting models
of exact satisfiability formulae (#XSAT). This is faster than the previously
best algorithm, which runs in $O(1.2190^n)$. In order to improve the
efficiency of the algorithm, a new principle, the common literals principle,
is introduced to simplify formulae; it allows us to eliminate more common
literals. In addition, we are the first to inject resolution principles into
solving the #XSAT problem, which further improves the efficiency of the
algorithm.
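The $O(1.1995^n)$ algorithm cannot be reconstructed from the abstract, but
the problem it solves can be pinned down with a brute-force reference
counter: #XSAT counts assignments under which every clause contains exactly
one true literal.

```python
# Reference #XSAT counter by brute force (O(2^n), only to pin down the
# problem: count assignments under which every clause has *exactly one*
# true literal). The paper's O(1.1995^n) algorithm is not reproduced here.
from itertools import product

def count_xsat(n_vars, clauses):
    """clauses use DIMACS-style ints: k means var k true, -k means false."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        def lit_true(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(sum(lit_true(l) for l in c) == 1 for c in clauses):
            count += 1
    return count

# Example: exactly one of {x1, x2} and exactly one of {x2, x3}.
print(count_xsat(3, [[1, 2], [2, 3]]))  # 2 models: {x1, x3} and {x2}
```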
|