number of conflicts by 45% and 31%, respectively. This last result agrees with the realist theory that far-apart countries have fewer reasons for conflict (Oneal and Russett, 1999).
| Variable | NN Peace | NN War | SVM Peace | SVM War |
|------------------|-------|-------|-------|-----|
| Test set results | 19464 | 297 | 20914 | 295 |
| Dem-min | 16263 | 325 | 22327 | 205 |
| Dem-max | 2345 | 23761 | 35 | |
| Allies-min | 18555 | 313 | 20469 | 274 |
| Allies-max | 21034 | 237 | 21999 | 153 |
| Contig-min | 23682 | 164 | 24745 | 60 |
| Contig-max | 12463 | 342 | 18939 | 21 |
| Dist-min | 5351 | 370 | 25067 | 34 |
| Dist-max | 2525 | 206 | 2224 | 3 |
| Capab-min | 6929 | 373 | 19840 | 180 |
| Capab-max | 26322 | 3 | 26345 | |
| Depnd-min | 19455 | 297 | 20498 | 305 |
| Depnd-max | 20411 | 277 | 26345 | - |
| Majpow-min | 19686 | 289 | 2345 | |
| Majpow-max | 19428 | 29 | 23583 | 136 |
SVM result: The results of the experiment show inconsistency in how the MID outcome is affected when the variables are maximised and minimised. Further investigation is required to understand more clearly the influence of each variable (e.g. by exploring other sensitivity analysis techniques). Therefore, an alternative sensitivity analysis is used, in which only one explanatory variable at a time is used to predict the MID and the resulting accuracy is assessed. ROC curves were drawn and the area under the curve (AUC) calculated for the purpose of ranking, as is
suggested in (Guyon and Elisseeff, 2003). The rankings of the effects of the variables on the MID for NN and SVM vary, as shown in Table III. The reason for the variance may be that in the case of
| Rank | NN | SVM |
|--------|-------------|-------------|
| 1 | Democracy | Contiguity |
| 2 | Capability | Distance |
| 3 | Contiguity | Major power |
| 4 | Distance | Capability |
| 5 | Alliance | Democracy |
| 6 | Dependency | Dependency |
| 7 | Major power | Alliance |
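The single-variable AUC ranking described above can be sketched as follows. The data, effect sizes and the rank-sum AUC helper are illustrative stand-ins, not the paper's actual dataset or results:

```python
import numpy as np

def auc(scores, labels):
    # AUC via the rank-sum (Mann-Whitney) identity
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
n = 1000
y = (rng.random(n) < 0.1).astype(int)             # rare conflict events
X = {                                             # made-up single-variable scores
    "Democracy":  -2.0 * y + rng.normal(0, 1.5, n),
    "Distance":   -1.0 * y + rng.normal(0, 1.5, n),
    "Dependency":            rng.normal(0, 1.5, n),
}
# rank by how far each variable's AUC is from the uninformative 0.5
ranking = sorted(X, key=lambda v: abs(auc(X[v], y) - 0.5), reverse=True)
print(ranking)
```

A variable with no predictive value scores near AUC 0.5, so the distance from 0.5 is used as the ranking criterion regardless of the direction of the effect.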
## V. Conclusion
In this paper two artificial intelligence techniques, neural networks (NN) and support vector machines (SVM), are used to predict militarised interstate disputes. The independent/input variables are Democracy, Allies, Contiguity, Distance, Capability, Dependency and Major Power, while the dependent/output variable is the MID outcome, which is either peace or conflict. A neural network trained with the scaled conjugate gradient algorithm and an SVM with a radial basis kernel function, together with grid-search and cross-validation techniques, were employed to find the optimal model.
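The grid-search and cross-validation procedure mentioned above can be sketched as follows. To keep the sketch self-contained, a simple RBF Parzen-window rule stands in for the trained SVM, and the data are synthetic; only the selection loop mirrors the described procedure:

```python
import numpy as np

def parzen_predict(X_tr, y_tr, X_te, gamma):
    # kernel-weighted vote of training labels (+1/-1) for each test point
    d2 = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return (np.exp(-gamma * d2) @ (2 * y_tr - 1) > 0).astype(int)

def cv_accuracy(X, y, gamma, k=5):
    # k-fold cross-validated accuracy for one hyperparameter setting
    accs = []
    for fold in np.array_split(np.arange(len(X)), k):
        mask = np.ones(len(X), dtype=bool)
        mask[fold] = False
        pred = parzen_predict(X[mask], y[mask], X[~mask], gamma)
        accs.append((pred == y[~mask]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 7))              # 7 dyadic variables (stand-in data)
y = (X[:, 0] + X[:, 2] > 0).astype(int)    # made-up peace/conflict labels

# grid search over the kernel width, keeping the best cross-validated score
grid = [0.01, 0.1, 1.0]
best_gamma = max(grid, key=lambda g: cv_accuracy(X, y, g))
print(best_gamma, round(cv_accuracy(X, y, best_gamma), 3))
```

In practice the grid would also cover the regularisation constant C of (11), but the loop structure is the same.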
The results show that SVM has a better capacity than NN for forecasting conflicts without significantly degrading correct peace predictions. Two separate experiments were conducted to assess the influence of each variable on the MID outcome. The first assigns each variable its highest possible value while keeping the rest at their lowest possible values. The NN results show that both democracy *level* and *capability ratio* are able to influence the outcome to be peace. On the other hand, none of the variables was able to influence the MID outcome to be conflict when all the other variables were at their maximum. SVM was not able to pick up the effects of the variables in this experiment.
The second experiment assigns each variable its highest or lowest possible value while keeping the other variables fixed at their original values. The results agree with the previous experiment. Grouping the variables by effect and ranking them under NN, democracy level and capability ratio come first; contiguity, distance and alliance second; and dependency and major power third. Although SVM performs better than NN, the NN results are easier to interpret in terms of variable influence.
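The clamping experiment described above can be sketched as follows; the three variables, the linear stand-in model and its weights are invented for illustration and do not reproduce the trained NN or SVM:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
names = ["Democracy", "Contiguity", "Distance"]
w = np.array([-1.5, 1.0, -0.8])        # made-up "trained" weights

def predict(X):
    # 1 = conflict, 0 = peace
    return (X @ w > 0).astype(int)

baseline = predict(X)
for j, name in enumerate(names):
    for extreme, value in (("max", X[:, j].max()), ("min", X[:, j].min())):
        Xc = X.copy()
        Xc[:, j] = value               # clamp variable j, others unchanged
        pred = predict(Xc)
        print(f"{name}-{extreme}: peace={(pred == 0).sum()}, war={pred.sum()}")
```

Comparing each clamped prediction count against the baseline row gives the kind of peace/war tables shown earlier.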
## References
Beck, N., King, G. and Zeng, L. (2000), Improving Quantitative Studies of International Conflict: A Conjecture, American Political Science Review, vol. 94, no. 1, pp. 21–33.

Bishop, C. (1995), Neural Networks for Pattern Recognition, Oxford, UK: Oxford University Press.

Burges, C. (1998), A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167.

Chen, D. and Odobez, J. (2002), Comparison of Support Vector Machine and Neural Network for Text Texture Verification, IDIAP-RR-02 19, IDIAP, Martigny, April 2002, ftp://ftp.idiap.ch/pub/reports/2002/rr-02-19.pdf.
COW (2004), Correlates of War Project, http://www.umich.edu/ cowproj/index.html, last accessed: Sept. 2004.

Gochman, C. and Maoz, Z. (1984), Militarized Interstate Disputes 1816–1976, Journal of Conflict Resolution, vol. 28, no. 4, pp. 585–615.

Guyon, I. and Elisseeff, A. (2003), An Introduction to Variable and Feature Selection, Journal of Machine Learning Research, vol. 3, pp. 1157–1182.

Hanley, J. and McNeil, B. (1983), A Method of Comparing the Areas under the Receiver Operating Characteristic Curves Derived from the Same Cases, Radiology, vol. 148, no. 3.

Haykin, S. (1999), Neural Networks: A Comprehensive Foundation, 2nd edition, Upper Saddle River, NJ: Prentice Hall.

Lagazio, M. and Russett, B. (2003), A Neural Network Analysis of Militarised Disputes, 1885–1992: Temporal Stability and Causal Complexity, University of Michigan Press.

Marwala, T. and Lagazio, M. (2004), Modelling and Controlling Interstate Conflict, IEEE International Joint Conference on Neural Networks, Budapest, Hungary.

Moller, M. (1993), A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning, Neural Networks, vol. 6, no. 4, pp. 525–533.

Muller, K.R., Mika, S., Ratsch, G., Tsuda, K. and Scholkopf, B. (2001), An Introduction to Kernel-Based Learning Algorithms, IEEE Transactions on Neural Networks, vol. 12, no. 2.

Oneal, J. and Russett, B. (2001), Clear and Clean: The Fixed Effects of Liberal Peace, International Organization, vol. 52, no. 2, pp. 469–485.
Oneal, J. and Russett, B. (1999), The Kantian Peace: The Pacific Benefits of Democracy, Interdependence, and International Organization, World Politics, vol. 52, no. 1, pp. 1–37.

Pires, M. and Marwala, T. (2004), Option Pricing Using Neural Networks and Support Vector Machines, in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, The Hague, Holland: IEEE Computer Society TCC, pp. 161–166.

Russett, B. and Oneal, J. (2001), Triangulating Peace: Democracy, Interdependence, and International Organizations, New York: W.W. Norton.

Schölkopf, B. and Smola, A.J. (2003), A Short Introduction to Learning with Kernels, pp. 41–64.

Vapnik, V. and Lerner, A. (1963), Pattern Recognition Using Generalized Portrait Method, Automation and Remote Control, vol. 24, pp. 774–780.
military conflict (Beck, King and Zeng, 2000). In this study, seven dyadic variables are employed to predict the MID outcome.
Interstate conflict is a complex phenomenon that encompasses non-linear patterns of interaction (Beck, King and Zeng, 2000; Lagazio and Russett, 2003; Marwala and Lagazio, 2004). Various efforts have been, and still are, underway to improve the MID data, the underlying theory and the statistical modelling techniques of interstate conflict (Beck, King and Zeng, 2000). Previously, linear statistical methods were used for quantitative analysis of conflicts, with results far from satisfactory: they showed high variance, which makes them unreliable (Beck, King and Zeng, 2000). The results have to be taken cautiously, and their interpretation requires prior knowledge of the problem domain. This makes it inevitable to look beyond traditional statistical methods for quantitative analysis of international conflicts. Artificial intelligence techniques have proved very good at modelling complex and nonlinear problems without any *a priori* constraints on the underlying
functions assumed to govern the distribution of MID data (Beck, King and Zeng, 2000). It then makes sense to model interstate conflicts using artificial intelligence techniques.
Neural networks have previously been used to model MIDs (Marwala and Lagazio, 2004; Beck, King and Zeng, 2000; Lagazio and Russett, 2003). In this paper, two artificial intelligence techniques, neural networks and support vector machines, are used for the same purpose and their results compared. These two techniques have been compared in other applications, e.g. text texture verification (Chen and Odobez, 2002) and option pricing (Pires and
Vapnik, V. (1995), The Nature of Statistical Learning Theory, New York: Springer Verlag.

Westin, L. (2001), Receiver Operating Characteristic (ROC) Analysis: Evaluating Discriminance Effects Among Decision Support Systems, Tech. Rep. UMINF 01.18, ISSN-0348-0542, Department of Computer Science, Umea University, SE-90187, Umea, Sweden, http://www.cs.umu.se/research/reports/2001/018/part1.pdf.

Zeng, L. (1999), Prediction and Classification With Neural Network Models, Sociological Methods & Research, vol. 27, no. 4, pp. 499–524.

Zweig, M.H. and Campbell, G. (1993), Receiver-Operating Characteristic (ROC) Plots: A Fundamental Evaluation Tool in Clinical Medicine, Clinical Chemistry, vol. 39, no. 4, pp. 561–577.

## Biographies

Eyasu Hayemariam is an MSc student in Electrical Engineering at the University of the Witwatersrand in South Africa. He graduated with a BSc degree in Statistics from the University of Asmara in Eritrea.

Tshilidzi Marwala received a BS in Mechanical Engineering from Case Western Reserve University in Ohio, an MSc in Mechanical Engineering from the University of Pretoria and a PhD in Artificial Intelligence from the University of Cambridge. He was previously a post-doctoral fellow at Imperial College (London). He is currently a professor at the University of the Witwatersrand in South Africa.
Monica Lagazio holds a PhD in Politics and Artificial Intelligence from Nottingham University and an MA in Politics from the University of London. Before joining the University of Kent at Canterbury in 2004, she was Lecturer in Politics at the University of the Witwatersrand and Research Fellow at Yale University. She also held a position as senior consultant in the economic and financial service of one of the leading global consulting companies in London.
The goal of the learning algorithm, as proposed by (Vapnik and Lerner, 1963), is to find the hyperplane with maximum margin of separation from the class of separating hyperplanes. But since real-world data often exhibit complex properties which cannot be separated linearly, more complex classifiers are required. To avoid the complexity of nonlinear classifiers, the idea of linear classifiers in a feature space comes into play. Support vector machines try to find a linear separating hyperplane by first mapping the input space into a higher-dimensional feature space $F$. This implies each training example $x_i$ is substituted with $\Phi(x_i)$, giving the constraints
$$y_{i}((w\cdot\Phi(x_{i}))+b)\geq 1,\quad i=1,2,\ldots,n\tag{6}$$
The VC dimension $h$ in the feature space $F$ is bounded according to $h \leq \|w\|^{2}R^{2} + 1$, where $R$ is the radius of the smallest sphere around the training data (Müller et al., 2001). Hence minimising the expected risk is stated as the optimisation problem

$$\min_{w,b}\quad\frac{1}{2}\|w\|^{2}\tag{7}$$
However, assuming that we can access the feature space only through dot products, (7) is transformed into a dual optimisation problem by introducing Lagrange multipliers $\alpha_i$, $i = 1, 2, \ldots, n$; after some minimisation, maximisation and use of the saddle-point property of the optimal point (Burges, 1998; Müller et al., 2001; Schölkopf and Smola, 2003), the problem becomes

$$\max_{\alpha}\quad\sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}k(x_{i},x_{j})\tag{8}$$

$$\text{subject to }\alpha_{i}\geq 0,\ i=1,\ldots,n,\qquad\sum_{i=1}^{n}\alpha_{i}y_{i}=0$$
The Lagrange multipliers $\alpha_i$ are obtained by solving (8), which in turn is used to solve for $w$, giving the non-linear decision function (Müller et al., 2001):

$$f(x)=\operatorname{sgn}\left(\sum_{i=1}^{n}y_{i}\alpha_{i}(\Phi(x)\cdot\Phi(x_{i}))+b\right)=\operatorname{sgn}\left(\sum_{i=1}^{n}y_{i}\alpha_{i}k(x,x_{i})+b\right)\tag{9}$$
In the case when the data are not linearly separable, slack variables $\xi_i$, $i = 1, \ldots, n$ are introduced to relax the margin constraints as

$$y_{i}((w\cdot\Phi(x_{i}))+b)\geq 1-\xi_{i},\quad \xi_{i}\geq 0,\ i=1,\ldots,n\tag{10}$$
A trade-off is made between the VC dimension and the complexity term of (3), which gives the optimisation problem

$$\min_{w,b,\xi}\quad\frac{1}{2}\|w\|^{2}+C\sum_{i=1}^{n}\xi_{i}\tag{11}$$
where $C > 0$ is a regularisation constant that determines the above-mentioned trade-off. The dual optimisation problem is then given by (Müller et al., 2001):
$$\max_{\alpha}\sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}k(x_{i},x_{j})\tag{12}$$

$$\text{subject to }0\leq\alpha_{i}\leq C,\ i=1,\ldots,n,\qquad\sum_{i=1}^{n}\alpha_{i}y_{i}=0$$
The Karush-Kuhn-Tucker (KKT) conditions, which state that only the $\alpha_i$ associated with training points $x_i$ on or inside the margin area take non-zero values, are applied to the above optimisation problem to find the $\alpha_i$, the threshold $b$ and the decision function $f$ in a reasonable way (Müller et al., 2001).
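Equation (9) can be rendered directly in code. The support coefficients $\alpha_i$ and threshold $b$ below are illustrative values, not the output of an actual solve of the dual problem; note that only the points with non-zero $\alpha_i$ (the support vectors, per the KKT condition) contribute:

```python
import numpy as np

def rbf(x, xi, gamma=0.5):
    # radial basis kernel k(x, xi) = exp(-gamma * ||x - xi||^2)
    return np.exp(-gamma * np.sum((x - xi) ** 2))

X_train = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y_train = np.array([-1, -1, 1, 1])
alpha   = np.array([0.7, 0.0, 0.7, 0.0])   # zero alphas: non-support vectors
b = 0.0                                    # illustrative threshold

def f(x):
    # decision function (9): sgn(sum_i y_i * alpha_i * k(x, x_i) + b)
    s = sum(a * yi * rbf(x, xi)
            for a, yi, xi in zip(alpha, y_train, X_train) if a > 0)
    return int(np.sign(s + b))

print(f(np.array([0.2, 0.1])), f(np.array([3.1, 3.2])))
```

Skipping the zero-alpha terms is exactly the sparsity the KKT conditions guarantee: the sum runs over support vectors only.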
## D. Conflict Modelling
Modelling international conflicts involves quantitative and empirical analysis based on existing dyadic information about countries. A dyad-year in our context refers to a pair of countries in a particular year. Political scientists use dyadic parameters as a measure of the possibility that two countries will have a militarised conflict. Although extensive data collection efforts have been made, much research is still underway to produce satisfactory and reliable conflict models. One of the major reasons why conflict modelling is complex, according to (Beck, King, and Zeng,
2000), is that international conflict is a rare event and the processes that drive it vary for each incident. A small change in the explanatory variables can greatly affect the MID outcome. This makes MID modelling highly nonlinear, very interactive and context dependent. In modelling interstate conflict, (Marwala and Lagazio, 2004) used seven dyadic variables. They used MID data from the Correlates of War (COW) project as compiled by (Russett and Oneal, 2001). Since the same variables, discussed in (Marwala and Lagazio, 2004; Lagazio and Russett, 2003), are used for this study, a brief description of them follows.
The variables are classified into *realist* and *kantian*, as described by (Lagazio and Russett, 2003).
The realist variables are *Allies*, *Contiguity*, *Distance*, *Major Power* and *Capability*. *Allies* measures military alliance between the dyad countries: a value of 1 implies an alliance of some kind between the two countries, while 0 indicates no alliance.
*Contiguity* measures whether the two countries have a common boundary: 1 if they share a boundary, 0 otherwise. *Distance* is the base-10 logarithm of the distance in kilometres between the two states' capitals. *Major Power* is assigned 1 if one or both states in the dyad are major powers, 0 otherwise. *Capability* is the base-10 logarithm of the capability ratio of the stronger to the weaker country, where capability is the sum of total population, urban population, industrial energy consumption, iron and steel production, number of military personnel on active duty, and military expenditure in dollars over the last 5 years.
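As a sketch, the two logarithmic variables just described could be computed as follows; the capability components and the dyad values are invented for illustration:

```python
import math

def distance_var(km_between_capitals):
    # Distance: log10 of the distance in km between the two capitals
    return math.log10(km_between_capitals)

def capability_var(components_a, components_b):
    # Capability: log10 of the ratio of the stronger state's composite
    # capability (sum of the six components) to the weaker state's
    total_a, total_b = sum(components_a), sum(components_b)
    stronger, weaker = max(total_a, total_b), min(total_a, total_b)
    return math.log10(stronger / weaker)

# Hypothetical dyad: (population, urban population, energy consumption,
# iron/steel production, military personnel, military expenditure) per state.
a = (60e6, 20e6, 5e5, 2e5, 3e5, 4e9)
b = (10e6, 3e6, 1e5, 4e4, 8e4, 5e8)
print(round(distance_var(1200), 3), round(capability_var(a, b), 3))
```

Taking the stronger-to-weaker ratio makes the variable symmetric in the dyad and always non-negative.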
The other variables used in this study, referred to as Kantian, are *Democracy* and *Dependency*. *Democracy* is a scale in the range [-10, 10], where -10 denotes extreme autocracy and 10 extreme democracy. The lower value of the two countries is taken because it is assumed the
arXiv:0705.1244v1 [cs.AI] 9 May 2007
# Evolving Symbolic Controllers
Nicolas Godzik¹, Marc Schoenauer¹, and Michèle Sebag²

¹ Projet Fractales, INRIA Rocquencourt, France
² LRI, Université Paris-Sud, France
Published in G. Raidl et al., eds, *Applications of Evolutionary Computing*,
pp 638-650, LNCS 2611, Springer Verlag, 2003.
Abstract. The idea of *symbolic controllers* tries to bridge the gap between the top-down manual design of the controller architecture, as advocated in Brooks' subsumption architecture, and the bottom-up designer-free approach that is now standard within the Evolutionary Robotics community. The designer provides a set of elementary behaviors, and evolution is given the goal of assembling them to solve complex tasks. Two experiments are presented, demonstrating the efficiency and showing the recursiveness of this approach. In particular, the sensitivity with respect to the proposed elementary behaviors, and the robustness w.r.t. generalization of the resulting controllers, are studied in detail.
## 1 Introduction
There are two main trends in autonomous robotics. The first one, advocated by R. Brooks [2], is a human-specified deterministic approach: the tasks of the robot are manually decomposed into a hierarchy of independent sub-tasks, resulting in the so-called
subsumption architecture.
On the other hand, evolutionary robotics (see e.g. [13]) is generally viewed as
a pure black-box approach: some controllers, mapping the sensors to the actuators, are optimized using the Darwinian paradigm of Evolutionary Computation; the programmer only designs the fitness function.
However, the scaling issue remains critical for both approaches, though for different reasons. The efficiency of the human-designed approach is limited by the human factor: it is very difficult to decompose complex tasks into the subsumption architecture. On the other hand, the answer of evolutionary robotics to the complexity challenge is very often to come up with an ad hoc (sequence of) specific fitness function(s). The difficulty is transferred from the internal architecture design to some external action on the environment. Moreover, the black-box approach makes it extremely difficult to understand the results, be they successes or failures, hence forbidding any capitalization of past expertise for further re-use. This issue is discussed in more detail in section 2.
The approach proposed in this work tries to find some compromise between the two extremes mentioned above. It is based on the following remarks: First, scaling is one of the most critical issues in autonomous robotics. Hence, the same
## 6.5 Generalization
Several other experiments were performed in order to test the generalization abilities of the resulting controllers. The 10 × 4 best controllers obtained in the experiments above were tested in new experimental environments.
First, some obstacles were added to the environment (the arena was free of obstacles in the experiments described above). But no controller was able to reach the recharge area when an obstacle was in the way. However, when the evolution was performed with the obstacles, the overall results are about the same (with slightly worse overall performance, as predicted).
More interesting are the results obtained when the robots are put in an arena that is three times larger than the one used during evolution. The best generalization results are obtained by the Classical Controller architecture, with only a slight decrease of fitness (a few percent). Moreover, 100 additional generations of evolution in the new environment give back the same level of fitness.
The Symbolic Controllers come next (in terms of generalization capability!): they initially lose around 10% of their fitness, then reach the same level again in 150 generations, and even surpass that level (see the discussion in section 6.3).
Surprisingly, both supervisor architectures fail to reach the same level of performance in the new environment even after a new cycle of evolution. The Symbolic Supervisors lose about 12.5% of their fitness, and only recover half of it, while the Classical Supervisors lose more than 20% and never recover.
These results can be at least partly explained by the behaviors obtained in the first experiments: whereas all direct controller architectures mainly stay around the recharge area, and thus are not heavily disturbed by the change in the size of the arena, the Supervisor architectures use their exploration behavior and fail to turn back in time. The only surprising result is that they also fail to reach the same level of fitness even after some more generations.⁶
This difference in the resulting behaviors also explains the results obtained in the last generalization experiment presented here: the recharge of energy was made 2 times slower (or the energy consumption was made twice as fast; both experiments give exactly the same results). Here, the results of the Symbolic Supervisors are clearly much better than those of the other architectures: in all cases, the robot simply stays in the recharge area until the energy level is back to maximum, using the *stop* behavior.
Surprisingly, most Classical Supervisors, though they can also use their *stop* behavior, fail to actually reach the recharge area. On the other hand, both direct controller architectures never stop on the recharge area. However, while the Symbolic Controllers manage to survive more than one epoch in half of the trials, all Classical Controllers fail to do so.
This last generalization experiment shows a clear advantage for the Symbolic Controller architecture: it is the only one that actually learned to recharge the accumulator to its maximum before leaving the recharge area. But the ultimate

⁶ However, when restarting the evolution from scratch in the large arena, the SSs easily reach the 6400 fitness level, outperforming again all other architectures.
test for controllers evolved using a simulator is of course to be applied to the real robot. This is on-going work, and the first experiments, applied to the obstacle avoidance behaviors, have confirmed the good performance of the symbolic controllers in any environment.
## 7 Discussion and Perspectives
The main contribution of this work is to propose some compromise between the pure black-box approach, where evolution is supposed to evolve everything from scratch, and the "transparent box" approach, where the programmer must decompose the task manually.

The proposed approach is based on a toolbox, or library, of behaviors ranging from elementary hand-coded behaviors to evolved behaviors of low to medium complexity. The combination and proper use of those tools is left to evolution. The new space of controllers that is explored is more powerful than the one classically explored in Evolutionary Robotics. For instance, it was able to easily find some loophole in the (very simple) obstacle behavior fitness; moreover, it actually discovered the right way to recharge its accumulator in the more complex homing experiment.
Adding new basic behaviors to that library allows one to gradually increase the complexity of the available controllers without having to cleverly insert those new possibilities into the existing controllers: evolution will take care of that, and the sensitivity analysis demonstrated that useless behaviors will be filtered out at almost no cost (section 6.4). For instance, there might be some cases where a random behavior can be beneficial - and it didn't harm the energy experiment. More generally, this idea of a library allows one to store experience from past experiments: any controller (evolved or hand-coded) can be added to the toolbox and eventually used later on - knowing that useless tools will simply not be used.
Finally, using such a library increases the intelligibility of the resulting controllers, and should impact the way we evolutionarily design controllers, i.e. fitness functions. One can add constraints on the distribution over the use of the different available controllers (e.g. use the light-following action ε% of the time); by contrast, the traditional evolutionary approach had to set up a sophisticated ad hoc experimental protocol to reach the same result (as in [16]). Further work will have to investigate in that direction.
But first, more experiments are needed to validate the proposed approach
(e.g. experiments requiring some sort of memory, as in [18, 16]). The impact of redundancy will also be investigated: in many Machine Learning tasks, adding redundancy improves the quality and/or the robustness of the result. Several controllers that have been evolved for the same task, but exhibit different behaviors, can be put in the toolbox. It can also be useful to allow the overall controller to use all levels of behaviors simultaneously instead of the layered architecture proposed so far. This should make it possible to discover on the fly specific behaviors whenever the designer fails to include them in the library.
Alternatives for the overall architecture will also be looked for. One crucial issue in autonomous robotics is the adaptivity of the controller. Several architectures have been proposed in that direction (see [13] and references therein) and will be tried, for instance the idea of auto-teaching networks.
Finally, in the longer run, the library approach helps keep track of the behavior of the robot at a level of generality that can later be exploited by data mining techniques. Gathering the frequent item sets in the best evolved controllers can help derive brand new macro-actions. The issue will then be to check how useful such macro-actions can be if added to the library.
## References
1. P. J. Bentley, editor. *Evolutionary Design by Computers*. Morgan Kaufmann Publishers Inc., 1999.
2. R. A. Brooks. A robust layered control system for a mobile robot. *Journal of Robotics and Automation*, 2(1):14–23, 1986.
3. R. A. Brooks. How to build complete creatures rather than isolated cognitive simulators. In Kurt VanLehn, editor, *Architectures for Intelligence: The 22nd Carnegie Mellon Symp. on Cognition*, pages 225–239. Lawrence Erlbaum Associates, 1991.
4. R. Dawkins. *The Blind Watchmaker*. W. W. Norton and Company, 1988.
5. D. Floreano and F. Mondada. Evolution of homing navigation in a real mobile robot. *IEEE Transactions on Systems, Man, and Cybernetics*, 26:396–407, 1994.
6. I. Harvey. *The Artificial Evolution of Adaptive Behaviour*. PhD thesis, University of Sussex, 1993.
7. M. Humphrys. *Action Selection Methods Using Reinforcement Learning*. PhD thesis, University of Cambridge, 1997.
8. L. P. Kaelbling, M. L. Littman, and A. P. Moore. Reinforcement learning: A survey. *Journal of Artificial Intelligence Research*, 4:237–285, 1996.
9. M. Keijzer, J. J. Merelo, G. Romero, and M. Schoenauer. Evolving objects: a general purpose evolutionary computation library. In P. Collet et al., editors, *Artificial Evolution'01*, pages 229–241. Springer Verlag, LNCS 2310, 2002.
10. P. Maes. The dynamics of action selection. In *Proceedings of the 11th International Joint Conference on Artificial Intelligence*, 1989.
11. J. R. Millan, D. Posenato, and E. Dedieu. Continuous-action Q-learning. *Machine Learning Journal*, 49(2-3):247–265, 2002.
12. S. Nolfi and D. Floreano. How co-evolution can enhance the adaptive power of artificial evolution: implications for evolutionary robotics. In P. Husbands and J. A. Meyer, editors, *Proceedings of EvoRobot98*, pages 22–38. Springer Verlag, 1998.
13. S. Nolfi and D. Floreano. *Evolutionary Robotics*. MIT Press, 2000.
14. H.-P. Schwefel. *Numerical Optimization of Computer Models*. John Wiley & Sons, New York, 1981; 2nd edition, 1995.
15. R. S. Sutton and A. G. Barto. *Reinforcement Learning*. MIT Press, 1998.
16. E. Tuci, I. Harvey, and M. Quinn. Evolving integrated controllers for autonomous learning robots using dynamic neural networks. In B. Hallam et al., editors, *Proc. of SAB'02*. MIT Press, 2002.
17. V. N. Vapnik. *Statistical Learning Theory*. Wiley, 1998.
18. B. M. Yamauchi and R. D. Beer. Integrating reactive, sequential, and learning behavior using dynamical neural network. In D. Cliff et al., editors, *Proc. SAB'94*. MIT Press, 1994.
mechanism should be used all along the complexity path, leading from primitive
tasks to simple tasks, and from simple tasks to more complex behaviors.
One reason for the lack of intelligibility is that the language of the controllers consists of low-level orders to the robot actuators (e.g. speeds of the right and left motors). Using instead hand-coded basic behaviors (e.g. forward, turn left or turn right), as proposed in section 3, should allow one to better understand the relationship between the controller outputs and the resulting behavior. Moreover, the same approach makes it possible to recursively build higher-level behaviors from those evolved simple behaviors, thus solving more and more complex problems.
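A minimal sketch of this idea, with invented behavior names and motor commands: the library holds primitive behaviors, and a selector (here hand-written, where the paper would evolve it) composes them into a higher-level controller.

```python
# Primitive behaviors map sensor readings to (left, right) motor speeds.
# Names and values are illustrative, not taken from the paper's setup.
def forward(sensors):
    return (1.0, 1.0)

def turn_left(sensors):
    return (0.2, 1.0)

def turn_right(sensors):
    return (1.0, 0.2)

LIBRARY = [forward, turn_left, turn_right]

def symbolic_controller(sensors):
    # A hand-written stand-in for an evolved selector over the library:
    # steer away from whichever side reports the stronger obstacle signal.
    left_obstacle, right_obstacle = sensors
    if left_obstacle > 0.5:
        return turn_right(sensors)
    if right_obstacle > 0.5:
        return turn_left(sensors)
    return forward(sensors)

print(symbolic_controller((0.9, 0.1)))  # obstacle on the left
```

The key point is that the controller's output vocabulary is the library itself, so its decisions stay readable, and an evolved controller can later be added to `LIBRARY` as a primitive for the next level.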
Reported experiments tackle both aspects of the above approach. After the experimental setup is described in section 4, simple behaviors are evolved from basic primitives (section 5). Then a more complex task is solved using the results of the first step (section 6). Those results are validated by comparison to pure black-box evolution, and important issues such as the sensitivity w.r.t. the available behaviors and the robustness w.r.t. generalization are discussed. The paper ends by revisiting the discussion in the light of those results.
## 2 State of the Art
The trends in autonomous robotics can be discussed in the light of the "innate vs acquired" cognitive debate - though at the time scale of evolution. From that point of view, Brooks' subsumption architecture is extreme on the "innate" side: the robots are given all necessary skills by their designer, from basic behaviors to the way to combine them. Complex behaviors thus build on some "instinctive" predefined simple behaviors. Possible choices lie in a very constrained space, guaranteeing good performance for very specific tasks, but this does not scale up very well: Brooks' initial goal was to reach the intelligence of insects [3].
Along similar lines are several computational models of action selection (e.g. Spreading Activation Networks [10], reinforcement learning [8, 7], ...). Such approaches have two main weaknesses. The first is that such an architecture is biologically questionable - but do we really care here? The second weakness concerns the autonomy issue. Indeed, replacing low-level reflexes by decisions about high-level behaviors (use this or that behavior now) might be beneficial. However, programming the interactions of such reflexes in an open world amounts to solving the exploration vs exploitation dilemma - and both Game Theory and Evolutionary Computation have underlined the difficulty of answering this question.
At the other extreme of the innate/acquired spectrum is the evolutionary robotics credo: any a priori bias from the designer can be harmful. Such a position is also defended by Bentley [1] in the domain of optimal design, where it has been reinforced by some very unexpected excellent solutions that arose from evolutionary design processes. In the Evolutionary Robotics area, this idea has been sustained by the recent revisit by Tuci, Harvey and Quinn [16] of an experiment initially proposed by Yamauchi and Beer [18]: depending on some random variable, the robot should behave differently (i.e. go toward the light,
|
or away from it). The robot must hence learn from the first epoch the state of that random variable, and act accordingly in the following epochs.
The controller architecture is designed manually in the original experiment, whereas evolution has complete freedom in its recent remake [16]. Moreover, Tuci et al. use no explicit reinforcement. Nevertheless, the results obtained by this recent approach are much better than the original ones - and the authors claim that the reason lies in their complete black-box approach.
However, whereas the designers decided to use a specifically designed modular architecture in the first experiment, the second experiment required a careful design of the fitness function (for instance, though the reward lies under the light only half of the time, going toward the light has to be rewarded more than fleeing away, to break the symmetry). So, be it at the "innate" or the "acquired" level, human intervention is required, and must act at a very high level of subtlety.
Going beyond this virtual "innate/acquired" debate, an intermediate approach would be to evolve complex controllers that could benefit from human knowledge but would not require a high level of intervention with respect to the complexity of the target task.
Such an approach is what is proposed here: the designer is supposed to help the evolution of complex controllers by simply seeding the process with some simple behaviors - hand-coded or evolved - letting evolution arrange those building blocks together. An important side-effect is that the designer will hopefully be able to better understand the results of an experiment, because of the greater intelligibility of the controllers. It then becomes easier to manually optimize the experimental protocol, e.g. to gradually refine the fitness in order to solve some very complex problems.
## 3 Symbolic Controllers

## 3.1 Rationale
The proposed approach pertains to Evolutionary Robotics [13]. Its originality lies in the representation space of the controllers, i.e. the search space of the evolutionary process. One of the main goals is that the results be intelligible enough to allow an easy interpretation, thus easing the whole design process.
A frequent approach in Evolutionary Robotics is to use Neural Networks as controllers (feedforward or recurrent, discrete or continuous). The inputs of the controllers are the sensors (Infra-red, camera, . . . ), plus possibly some reward "sensor", either direct [18] or indirect [16].
The outputs of the controllers are the actuators of the robot (e.g. speeds of the left and right motors for a Khepera robot). The resulting controller is hence comparable to a program in machine language, and thus difficult to interpret. To overcome this difficulty, we propose to use higher-level outputs, namely involving four possible actions: **Forward**, **Right**, **Left** and **Backward**. In order to allow some flexibility, each one of these symbolic actions should be tunable by some continuous parameter (e.g. speed of forward displacement, or turning angle for the left and right actions).
The proposed symbolic controller has eight outputs with values in [0, 1]: the first four outputs are used to specify which action will be executed, namely action i, with i = Argmax(output(j), j = 1..4). Output i + 4 then gives the associated parameter. From the given action and the associated parameter, the values of the commands for the actuators are computed by some simple hard-coded program.
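As a concrete illustration, the decoding of the eight outputs into motor commands could look like the following Python sketch (the function name, the `max_speed` parameter and the turn-in-place convention are assumptions for illustration, not taken from the paper):

```python
# Hypothetical decoding of the 8 symbolic-controller outputs into
# (left, right) motor speeds. Outputs 0-3 select the action via
# argmax; output i+4 gives the continuous parameter of action i.

FORWARD, RIGHT, LEFT, BACKWARD = range(4)

def decode(outputs, max_speed=10.0):
    """Map 8 outputs in [0, 1] to (left, right) motor speeds."""
    assert len(outputs) == 8
    action = max(range(4), key=lambda i: outputs[i])
    p = outputs[action + 4]          # associated parameter in [0, 1]
    v = p * max_speed
    if action == FORWARD:
        return (v, v)
    if action == BACKWARD:
        return (-v, -v)
    if action == LEFT:               # turn in place, rate set by p
        return (-v, v)
    return (v, -v)                   # RIGHT
```

Note how the argmax selection creates the neutrality discussed in section 3.2: perturbing the losing outputs does not change the behavior.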
## 3.2 Discussion
Using some high level representation language for the controller impacts on both the size of the search space, and the possible modularity of the controller.
At first sight, it seems that the size of the search space is increased, as a symbolic controller has more outputs than a classical controller. However, at the end of the day, the high-level actions are folded into the two motor commands. On the other hand, using a symbolic controller can be viewed as adding some constraints on the search space, hence reducing the size of the part of the search space actually explored. The argument here is similar to the one used in statistical learning [17], where rewriting the learning problem into a very high-dimensional space actually makes it simpler. Moreover, the fitness landscape of the space searched by a symbolic controller has many neutral plateaus, as only the highest value of the first four outputs is used - and neutrality can be beneficial to escape local optima [6].
On the other hand, the high-level primitives of symbolic controllers make room for modularity. And according to Dawkins [4], the probability of building a working complex system by a randomized process increases with the degree of modularity. It should be noted that this principle is already used in Evolutionary Robotics, for instance to control the robot gripper: the outputs of the controllers used in [13] are high-level actions (e.g. GRAB, RAISE GRIPPER, . . . ), and not the commands of the gripper motors.
Finally, there are some similarities between the symbolic controller approach and reinforcement learning. Standard reinforcement learning [15, 8] aims at finding an optimal policy, which requires an intensive exploration of the search space. In contrast, evolutionary approaches sacrifice optimality for satisfactory solutions obtained in a reasonable time. More recent developments [11], closer to our approach, handle continuous state/action spaces, but rely on the specification of some relevant initial policy involving manually designed "reflexes".
## 4 Experimental Setup
Initial experiments have been performed using the Khepera simulator EOBot, developed by the first author from the EvoRobot software provided by S. Nolfi and D. Floreano [13]. EvoRobot was ported to the Linux platform using the OpenGL graphical library, and interfaced with the EO library [9]. It is hence now possible to use all features of EO in the context of Evolutionary Robotics, e.g. other selection and replacement procedures, multi-objective optimization, and even other paradigms like Evolution Strategies and Genetic Programming. However, all experiments presented here use Neural Networks (NNs) with fixed topology as controllers. The genotype is hence the vector of the (real-valued) weights of the NN. Those weights evolve in [−1, 1] (unless otherwise mentioned), using a (30, 150)-Evolution Strategy with intermediate crossover and self-adaptive Gaussian mutation [14]: each one of 30 parents gives birth to 5 offspring, and the best 30 of the 150 offspring become the parents for the next generation. All experiments run for 250 generations, requiring about 1h to 3h depending on the experiment. All results shown in the following are statistics based on at least 10 independent runs. One fitness evaluation is made of 10 epochs, and each epoch lasts from 150 to 1000 time steps (depending on the experiment), starting from a randomly chosen initial position.
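One generation of such a comma-strategy can be sketched as follows (a minimal Python illustration of self-adaptive Gaussian mutation with comma selection; intermediate crossover is omitted for brevity, and all names, the learning rate `tau` and the step-size representation are assumptions, not the authors' actual code):

```python
import math
import random

def es_step(parents, fitness, lam_per_parent=5, tau=0.2, lo=-1.0, hi=1.0):
    """One generation of a (mu, mu*lam_per_parent)-ES sketch: each
    parent is a pair (weight vector, step size). The step size is
    mutated first (log-normal self-adaptation), then used to perturb
    the weights; the best mu offspring replace the parents."""
    mu = len(parents)
    offspring = []
    for x, sigma in parents:
        for _ in range(lam_per_parent):
            s = sigma * math.exp(tau * random.gauss(0, 1))  # mutate step size
            y = [min(hi, max(lo, xi + s * random.gauss(0, 1))) for xi in x]
            offspring.append((y, s))
    offspring.sort(key=lambda ind: fitness(ind[0]), reverse=True)
    return offspring[:mu]                                   # comma selection
```

With `mu = 30` and `lam_per_parent = 5` this reproduces the (30, 150) scheme described above on a toy fitness.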
## 5 Learning A Simple Behavior: Obstacle Avoidance

## 5.1 Description
The symbolic controller (SC) with 8 outputs, described in section 3.1, is compared to the classical controller (CC), whose outputs are the speeds of both motors. Both controllers have 8 inputs, namely the IR sensors of the Khepera robot in active mode (i.e. detecting the obstacles).
Only Multi-Layer Perceptrons (MLP) were considered for this simple task. Some preliminary experiments with only two layers (i.e. without hidden neurons) demonstrated better results for the Symbolic Controller than for the Classical Controller. Increasing the number of hidden neurons increased the performance of both types of controllers. Finally, to make the comparison "fair", the following architectures were used: 14 hidden neurons for the SC, and 20 hidden neurons for the CC, resulting in roughly the same number of weights (216 vs 221).
The fitness function [12] is defined as $\sum_{\text{epochs}} \sum_t |V(t)|\,(1 - \sqrt{\delta V(t)})$, where $V(t)$ is the average speed of the robot at time $t$, and $\delta V(t)$ the absolute value of the difference between the speeds of the left and right motors. The difference with the original fitness function is the lack of IR sensor values in the fitness: the obstacle avoidance behavior is here implicit, as an epoch immediately ends whenever the robot hits an obstacle. The arena is similar to that in [12].
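Assuming wheel speeds normalized by a maximum speed (the normalization convention is an assumption; the paper does not spell it out), the per-epoch fitness could be computed as in this Python sketch:

```python
import math

def epoch_fitness(wheel_speeds, v_max=1.0):
    """Sketch of the straight-and-fast fitness (after [12]) for one
    epoch: sum_t |V(t)| * (1 - sqrt(dV(t))). The trace is assumed to
    end at the first collision, making obstacle avoidance implicit."""
    f = 0.0
    for vl, vr in wheel_speeds:            # one (left, right) pair per step
        v = (vl + vr) / (2.0 * v_max)      # normalized average speed V(t)
        dv = abs(vl - vr) / (2.0 * v_max)  # normalized difference in [0, 1]
        f += abs(v) * (1.0 - math.sqrt(dv))
    return f
```

Driving straight at full speed scores 1 per step, while spinning in place scores 0, which is exactly the incentive the text describes.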
## 5.2 Results
The first results for the SC were surprising: half of the runs, even without hidden neurons, find a loophole in the fitness function: due to the absence of inertia in the simulator, an optimal behavior is obtained by a rapid succession of FORWARD - BACKWARD movements at maximum speed - obviously avoiding all obstacles! A degraded SC that has no BACKWARD action cannot take advantage of this bug. Interestingly, classical controllers only discover this trick when provided with more than 20 hidden neurons and if the weights are searched in a larger interval (e.g. [−10, 10]).
Nevertheless, in order to definitely avoid this loophole, the fitness is modified in such a way that it increases only when the robot moves forward (sum of both motor speeds positive)³. This modification does not alter the ranking of the controllers: the Symbolic Controller still outperforms the Classical Controller. This advantage somewhat vanishes when more hidden neurons are added (see Table 1), but the results of the SC exhibit a much smaller variance.
| Architecture | CC | SC |
|---|---|---|
| 8-2 / 8-6 | 861 ± 105 | 1030 ± 43 |
| 8-8-2 / 8-8-6 | 1042 ± 100 | 1094 ± 55 |
| 8-20-2 / 8-14-6 | 1132 ± 55 | 1197 ± 16 |
| 8-20-2 / 8-14-6∗ | 1220 ± 41 | 1315 ± 6 |
Table 1. Averages and standard deviations for 10 independent runs for the obstacle avoidance experiment. ∗this experiment was performed in a more constrained environment.
## 6 Evolution Of A Complex Behavior

## 6.1 Description
The target behavior is derived from the homing experiment first proposed in [5], combining exploration of the environment with energy management. The robot is equipped with an accumulator, whose energy it completely consumes in 285 time steps. A specific recharge area is signaled by a light in the arena. There are no obstacles in the arena, and the position of the recharge area is randomly assigned at each epoch.
The fitness is increased proportionally to the forward speed of the robot (as described in section 5.2), but only when the robot is not in the recharge area.
In the original experiment [5], the accumulator was instantly recharged when the robot entered the recharge area. We suppose here that the recharge is proportional to the time spent in the recharge area (a full recharge takes 100 time steps). Moreover, the recharge area is not directly "visible" to the robot, whereas it was signaled by a black ground that the robot could detect with a sensor in [5]. These modifications increase the complexity of the task.
## 6.2 Supervisor Architectures
The supervisor architecture is a hierarchical controller that decides at each time step which one of the basic behaviors it supervises will be executed. Its number of outputs is the number of available basic behaviors, namely:

- **Obstacle avoidance**. This behavior is evolved as described in section 5.2;
- **Light following**. The fitness used to evolve this behavior is the number of times it reaches the light during 10 epochs (no energy involved);
³ Further work will introduce inertia in the simulator, thus avoiding this trap - and possibly many others.
## 6.3 Results
The statistics over 10 independent runs can be seen in Figure 1. Three criteria can be used to compare the performance of the 4 architectures: the best overall performance, the variance of the results, and how rapidly good performances are obtained. The sensitivity and generalization abilities of the resulting controllers are important criteria that require additional experiments (sections 6.4, 6.5).
The best overall performance is obtained by the SS (Symbolic Supervisor) architecture. Moreover, it exhibits a very low variance (average best fitness is 6442 ± 28). Note that surpassing a fitness of 6400 means that the resulting behavior could go on forever, almost optimally accumulating fitness between the recharge phases.
The next best architecture is the CC (Classical Controller). But whereas its best overall fitness is only slightly less than that of the SS, the variance is 10 times larger (average best fitness is 6044 ± 316, with the best at 6354). The difference is statistically significant with 95% confidence using Student's t-test.
The SC (Symbolic Controller) and CS (Classical Supervisor) come last, with respective average best fitnesses of 5902 ± 122 and 5845 ± 27.
Some additional comments can be made about those results. First, both supervisor architectures exhibit a very good best fitness (≈ 3200) in the initial population: such fitness is in fact obtained when the supervisors only use the obstacle avoidance behavior - they score maximum fitness until their accumulator level goes to 0. Of course, the direct controller architectures require some time to reach the same state (more than 2000 evaluations).
Second, the variance for both supervisor architectures is very low. Moreover, it seems that this low variance holds not only at the performance level, but also at the behavior level: whereas all symbolic supervisors explore the environment until their energy level becomes dangerously low, and then head toward the light and stay in the recharge area until their energy level is maximal again, most (but not all) of the direct controller architectures seem to simply stay close to the recharge area, entering it randomly.
One last critical issue is the low performance of the Symbolic Controller. A possible explanation is the existence of the neutrality plateaus discussed in section 3.2: though those plateaus help escape local minima, they also slow down the learning process. It also appears clearly in Figure 1-left that the SC architecture is the only one that seems to steadily increase its best fitness until the very end of the runs. Hence, the experiment was carried on for another 250 generations, and indeed the SC architecture did continue to improve (over a fitness level of 6200) - while all other architectures simply stagnated.
The evolved behaviors have been further examined. Figure 2-left shows a typical plot of the number of calls to each basic behavior by the best evolved Symbolic Supervisor during one fitness evaluation. First, it appears that both supervisor architectures mainly use the **obstacle avoidance** behavior, never use the **area sweeping**, and, more surprisingly, almost never use the **light following**: when they see the light, they turn using the **stop** behavior (which consists in fast rotation), and then go to the light using the **obstacle avoidance**. However, once in the recharge area, they use the **stop** behavior until the energy level is back over 90%.
Investigating deeper, it appears that the speeds of the **light following** and **area sweeping** behaviors are lower than that of the **obstacle avoidance** - and speed is crucial in this experiment. Further experiments will have to modify the speeds of all behaviors to see if it makes any difference. However, this also demonstrates that the supervisor can discard a behavior that proves sub-optimal or useless.
## 6.4 Sensitivity Analysis
One critical issue of the proposed approach is how to ensure that the "right" behaviors will be available to the supervisor. A possible solution is to propose a large choice - but will the supervisor be able to retain only the useful ones? In order to assess this, the same energy experiment was repeated, but many useless, or even harmful, behaviors were added to the 4 basic behaviors.
First, 4 behaviors were added to the existing ones: **random**, **light avoiding**, **crash** (goes straight into the nearest wall!) and **stick to the walls** (tries to stay close to a wall). The first generations demonstrated lower performance and higher variance than in the initial experiment, as all behaviors were used with equal probability. However, the plot of the best fitness (not shown) soon catches up with the plots of Figure 1, and after 150 generations the results are hardly distinguishable. Looking at the frequency of use of each behavior, it clearly appears that the same useful behaviors are used (see Figure 2-left, and section 6.3 for a discussion). Moreover, the useless behaviors are used ever more scarcely as evolution goes along, as can be seen in Figure 2-right (beware of the different scale).
These good stability results have been confirmed by adding 20 useless behaviors (5 copies of each of the same useless 4). The results are very similar - though a little worse, of course, as the useless behaviors are altogether called a little more often.
<image>
# Robust Multi-Cellular Developmental Design
to appear in D. Thierens et al., Eds., Proceedings of GECCO'07, ACM Press, July 2007
Alexandre Devert

Nicolas Bredeche

Marc Schoenauer
## Abstract
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
## 1. Introduction
Evolutionary Design uses Evolutionary Algorithms to design various structures (e.g. solid objects, mechanical structures, robots, . . . ). It has been known for long [11] that the choice of a representation, i.e. of the space to search in, is crucial for the success of any Evolutionary Algorithm. But this issue is even more critical in Evolutionary Design. On the one hand, the success of a design procedure is not only measured by the optimality, for some physical criteria, of the proposed solutions, but also by the creative side of the process: a rich (i.e. large) search space is hence mandatory. But on the other hand, because scalability, and thus re-usability and modularity, are important characteristics of good design methodologies, the search space should have some structure allowing those properties to emerge.
The importance of the type of embryogeny (the mapping from genotype to phenotype) of the chosen representation in Evolutionary Design has been highlighted for instance in [3], and more systematically surveyed in [18]. Direct representations, with no embryogeny (the relation between the phenotype and the genotype is a one-to-one mapping), were very rapidly replaced in the history of Evolutionary Design by indirect representations, where the embryogeny is an explicit program, generally based on a grammar - and evolution acts on this program. The phenotype is then the result of the execution of the genotype. Many works have used this type of representation in Evolutionary Design, from the seminal works of Gruau [8] and Sims [16] and their many successful followers (cited e.g. in [18]). However, even though those representations did to some extent address the issues of modularity, re-usability and scalability, there was still room for improvement. First, scalability is still an issue, possibly because the bigger the structure, the more difficult it is to fine-tune it through the variation operators, due to uncontrolled causality (the effect of small mutations is not always small). Second, the embryogeny itself, and hence the resulting structures, are not robust to perturbations [2], an important characteristic when it comes to designing autonomous systems such as robots.
In order to address those issues, several recent works have chosen to use multicellular developmental models: the embryogeny is implicit, based on exchanges of some 'chemicals' between 'cells', and more or less faithfully connected to Turing's early 'reaction-diffusion' model [19] (see again [18], and the more recent works cited in Section 4). But several instances of this model have been proposed, and a number of issues remain open, if not unsolved: Is the number of cells fixed, the structure then being the result of their differentiation, or does the whole organism grow from a single cell? Do the chemicals diffuse on a given 'substrate' or only through the interactions and exchanges among neighboring cells - and is the topology of cell interactions fixed, evolved, or does it emerge during the development process? What is the granularity of the possible values of chemical concentrations or quantities? When and how does development stop (the 'halting problem' of developmental approaches)? Finally, maybe the most important issue when it comes to evolving such embryogenies: what kind of 'reaction' takes place in each cell - or, from an operational point of view, what type of controller is used within each cell, and subject to evolution?
All those questions are of course intertwined (e.g. the type of controller depends on the type of values you intend to evolve). However, and whatever the choices when answering the above questions, most works evolving multi-cellular developmental models report convincing results as far as scalability is concerned [7, 5], as well as unexpected robustness properties [14, 2, 6]. Indeed, even though the self-repairing capacities of the biological systems that inspired those models were one motivation for choosing the developmental approach, self-healing properties were not explicitly included in the fitnesses, and initially appeared as a side-effect rather than a target feature (see Section 4 for a more detailed discussion).
This paper proposes yet another model for Multicellular Developmental Evolutionary Design. A fixed number of cells placed on a two-dimensional grid are controlled by a Neural Network. Cells only communicate with their 4 neighbors, and exchange (real-valued) quantities of chemicals. In contrast with previous works (this will be discussed in more detail in Section 4), the phenotypic function of a cell (its type) is one of the outputs of the controller, i.e. it is evolved together with the 'chemical reactions'. Moreover, the halting problem is implicitly left open and solved by evolution itself: development continues until the dynamical system (the set of cells) comes to a fixed point (or is stopped after a - large - fixed number of iterations). We believe that this is the reason for the excellent self-healing properties of the organisms that have been evolved using the proposed model: they all recover almost perfectly from very strong perturbations - a feature that is worth the additional computational cost in the early generations of evolution.
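The emergent halting criterion can be sketched as a fixed-point iteration (a minimal Python illustration; the `step` function, the tolerance and the iteration cap are assumptions for illustration, not the paper's actual values):

```python
def develop(step, state, max_iters=1000, eps=1e-6):
    """Iterate the growth process until it reaches a fixed point.
    `step` maps a state (tuple of cell values) to the next state.
    Returns the stable state (the phenotype), or None (nonviable)
    if no fixed point is reached within max_iters iterations."""
    for _ in range(max_iters):
        nxt = step(state)
        if max(abs(a - b) for a, b in zip(nxt, state)) < eps:
            return nxt                # stable: this is the phenotype
        state = nxt
    return None                       # declared nonviable
```

A contracting dynamics settles and yields a phenotype, while a diverging one is declared nonviable, mirroring the selection pressure toward stabilizing organisms.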
The paper is organized as follows: Section 2 introduces the details of the proposed model and of its optimization using the NEAT general-purpose Neural Network evolution algorithm [17], that optimizes both the topology and the weights of the network. The approach is then tested in Section 3 on the well-known 'flag' benchmark problems, where the target "structure" is a 2D image. A meaningful validation is obtained by comparing the results of the developmental approach to those of the data-fitting approach: the same neural optimization method is used but the inputs are the coordinates (x,y) of the cell: indeed, it should not be expected to obtain better results with the developmental approach than with this direct data-fitting approach. Furthermore, the excellent self-healing properties of the resulting structures are demonstrated. Those results are discussed in Section 4 and the proposed approach is compared to other existing approaches for Multicellular Developmental Design. Finally, further directions of research are sketched in concluding Section 5.
## 2. Developmental Model
The context of the proposed approach is what is now called Multi-Cellular Developmental Artificial Embryogeny [15]: an organism is composed of identical cells; each cell encapsulates a controller (loosely inspired from a biological cell's regulatory network); all cells, and thus the organism, are placed in a substrata with a given topology; cells may possibly divide (i.e. create new cells), differentiate (i.e. assume a predefined function in the phenotype), migrate and/or communicate with one another within the range of their neighborhood.
In the literature there is a clear distinction between approaches that do not rely on cell division, and thus require that the environment be filled with cells at startup [2], and approaches where cells divide and migrate [4, 12]. In both cases, however, communication may be performed from one cell to another [2, 4] (direct cell-cell mechanism) or diffused through the environment [12] (substrata diffusion mechanism of chemicals). A cell or group of cells "grows", or "develops", by interacting with the environment, usually at discrete time steps. This process stops at some point and the organism is evaluated w.r.t. the target objective. In all the works cited above, the growth stop is forced (development is stopped after a predefined number of steps). Defining an efficient endogenous stopping criterion amounts to addressing the halting problem for Developmental Embryogeny.
In this context, the model proposed in this paper has the following characteristics: a fixed number of cells are positioned on a two-dimensional non-toroidal array (no cell division or migration). The state of each cell is a vector of real values, and the controller is a Neural Network. Cells produce a predefined number of 'chemicals' that diffuse by a pure cell-cell communication mechanism. Time is discretized, and at each time step, the controller of each cell receives as inputs the quantities of chemicals produced by its neighboring cells (the 4-neighbor von Neumann neighborhood is used - boundary cells receive nothing from outside the grid). The neural controller takes as external input the chemicals of the neighboring cells and computes a new state for the cell, as well as the concentrations of the chemicals to be sent to neighboring cells at the next time step. No global information is available or transmitted from one cell to another - the challenge is to reach a global target behavior from those local interactions.
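Gathering a cell's inputs from its von Neumann neighborhood, with zero input at the grid boundary, can be sketched as follows (Python, illustrative names; the paper does not prescribe an ordering of the four neighbors):

```python
def neighbors(grid, x, y):
    """Chemical inputs for cell (x, y): the values of its 4 von
    Neumann neighbors, in (up, down, left, right) order; boundary
    cells receive 0.0 from outside the grid (sketch)."""
    h, w = len(grid), len(grid[0])
    out = []
    for dx, dy in ((0, -1), (0, 1), (-1, 0), (1, 0)):
        nx, ny = x + dx, y + dy
        out.append(grid[ny][nx] if 0 <= nx < w and 0 <= ny < h else 0.0)
    return out
```

With several chemicals, one such lookup per chemical yields the controller's full input vector.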
As noted in the introduction, this model can be thought of as a simplified instance of Turing's reaction-diffusion model [19], with discretized time and space. But it can also be considered as a very simple model of a Genetic Regulatory Network [1]: the topology of the network is fixed, all 'genes' produce the same 'proteins', but the activation/inhibition of protein production is given by the (non-linear) neural network function. Finally, looking beyond biological analogies, the proposed model can also be seen as a Continuous Cellular Automaton [20], i.e. a cellular automaton with continuous states and discrete time, and more precisely as a Cellular Neural Network [10], a cellular automaton where the update rule for each cell is given by a neural network, typically used in VLSI design.
## 2.1 The Neural Network Controller
In this work, the state of a cell, which is responsible for both its differentiation (i.e. its phenotypic expression) and the communication with other cells through the diffusion of chemicals, is a vector of real values: a single real value (gray level) in the 'flag' applications described in Section 3 - though more complex environments could require more complex differentiation states. Hence the widely used and studied model of Discrete-Time, continuous-state Recurrent Neural Network (DTRNN) with sigmoidal transfer functions was chosen for the cell controllers. This choice of a Neural Network as the controller of the cells was inspired by the long-known property that Neural Networks are Universal Approximators [9]. The inputs of the Neural Network are the values of the chemical quantities coming from the 4 neighbors of the cell. Its outputs are the state of the cell
| Parameter | Value |
|-----------------------------------------|-----------------|
| Population size | 500 |
| Max. number of evaluations | 250000 |
| Reproduction ratio per species | 0.2 |
| Elite size per species | 1 |
| Crossover prob. | 0.15 |
| Add-node mutation prob. | 0.01 |
| Add-link mutation prob. | 0.01 |
| Enable-link mutation prob. | 0.045 |
| Disable-link mutation prob. | 0.045 |
| Gaussian weights mutation prob. | 0.8 |
| Std. dev. for Gaussian weight mutation | 0.1 |
| Uniform weights mutation prob. | 0.01 |
| Distance parameters for fitness sharing | 1.0 - 1.0 - 0.2 |
<image>
<image>
plus one output per chemical. If there are N neurons and M external inputs, the general form of the update rule at time step t for neuron i of a DTRNN is

$$a_i(t+1)=\sigma\Big(\sum_{j=1}^N w_{i,j}a_j(t)+\sum_{j=1}^M z_{i,j}I_j(t)\Big)$$

where $a_i(t)$ is the activation of neuron i at time t, $I_j(t)$ is the j-th external input at time t, $w_{i,j}$ is the weight of the connection from neuron j to neuron i (0 if no connection exists), $z_{i,j}$ is the weight of the connection from input j to neuron i, and $\sigma(x) = \frac{1}{1+e^{-x}}$ is the standard sigmoid function.
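A direct transcription of this update rule into Python (a sketch using dense weight matrices; absent connections are simply zero weights, as in the text):

```python
import math

def sigmoid(x):
    """Standard sigmoid: sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def dtrnn_step(a, I, W, Z):
    """One DTRNN update: a_i(t+1) = sigma(sum_j W[i][j] * a_j(t)
    + sum_j Z[i][j] * I_j(t)). W is the NxN recurrent weight matrix,
    Z the NxM input weight matrix."""
    return [sigmoid(sum(W[i][j] * a[j] for j in range(len(a)))
                    + sum(Z[i][j] * I[j] for j in range(len(I))))
            for i in range(len(a))]
```

Iterating `dtrnn_step` once per development time step, for every cell, yields the growth dynamics described above.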
It is important to note that, even if the neural controller is a feedforward neural network (i.e. there are no loops in the connection graph), the complete system is nevertheless a large recurrent neural network, because the exchanges of chemicals between the cells do create loops. In this respect, the chemicals can be viewed as an internal memory of the whole system. Figure 1 shows a schematic view of a cell with its 4 neighbors, using two chemical concentrations to communicate. The cell transmits the same concentration of each chemical to its neighboring cells, so there are only 2 outputs but 8 inputs. An additional output (not shown) is used for the differentiation value.
Obviously, this model can easily be extended to any number of chemicals, as well as to any dimension for the state of the cells, allowing differentiation into more sophisticated mechanical parts (e.g. robot parts, joints with embedded controllers, etc.).
## 2.2 Controller Optimization
Even though the smaller class of simple sigmoidal 3-Layer Perceptrons has the Universal Approximator property, determining the number of hidden units for an MLP remains an open issue, and practical studies have demonstrated that exploring the space of more complex topologies (including recurrent topologies) could be more efficient than just experimenting with a one-hidden-layer perceptron. Moreover, many algorithms have been proposed for the evolution of Neural Networks, and a good choice for the evolution of cell controllers was the NEAT algorithm [17], a state-of-the-art evolutionary NN optimization algorithm that makes it possible to explore both feedforward and recurrent topologies.
This algorithm relies on a direct encoding of neural network topologies that are evolved using a classical evolutionary stochastic optimization scheme. The main feature of NEAT is that it explores the topologies from the bottom-up:
starting from the simplest possible topology for the problem at hand, it performs variations over individuals by adding neurons and connections to networks in such a way that the behavior of the network is preserved at first - this makes it possible to explore topologies in a non-destructive fashion.
Our NEAT implementation has been validated against published results. For all the experiments in this paper, the NEAT parameters have been set to the values given in [17] for solving the sample XOR regression and double-pole balancing tasks. Those values seemed robust for the problem at hand, according to a limited parametric study. They are summarized in Table 1.
As already noted, an interesting feature of the NEAT algorithm is that it can handle the evolution of both feedforward and recurrent neural networks - hence allowing an easy comparison of both models. Another interesting feature of NEAT is that it allows the user to declare some constraints on the topology - in this case, all input and output neurons are forced to be connected to at least one neuron in the controller.
## 2.3 Halting The Growth Process
In Multi-cellular developmental systems, the phenotype
(the target structure to be designed, on which the fitness can be computed) is built from the genotype (the cell controller, here a Neural Network) through an iterative process: starting from a uniform initial condition (here, the activity of all neurons is set to 0), all cells are synchronously updated - or, more precisely, all neurons of all cells are synchronously updated, in case the neural network is recurrent. But one major issue of such an iterative process is to determine when to stop.
In most previous approaches (see Section 4), the number of iterations is fixed once and for all by the programmer.
However, this amounts to adding one additional constraint to the optimization process: Indeed, it is clear that the num-
In order to try to discriminate between the modeling error and the method error, a fifth model is also run, on the same test cases and under experimental conditions similar to those of the four developmental approaches described above:
the layout is exactly the same (a 2D grid of cells), the same NEAT parameters are used (to evolve a feedforward neural network), and selection proceeds using the same fitness.
However, there is no chemical nor any exchange of information between neighboring cells, and on the other hand, all cells receive as inputs their (x,y) coordinates on the grid.
Hence the flag approximation problem is reduced to a simple regression problem. In the following, the results of this model will be considered as reference results, as it is not expected that any developmental approach can ever beat a totally informed model using the same NEAT optimization tool. This experiment is termed "f(x, y) = z" from now on.
## 3.3 Experimental Setup
All 5 models described in the previous section have been run on the 4 flags shown in Figure 2. All results presented in the following are statistics over 16 independent runs.
As already said, the evolutionary neural network optimizer is NEAT, with the settings that are described in Table 1. It is worth noticing that during all runs, no bloat was ever observed for the NEAT genotypes. The mean size of the networks (measured by the total number of edges between neurons) gently grew from its starting value (between 5 and 10 depending on the model) to some final value below 40
- the largest experiment reaching 45. This first result confirms the robustness of this optimization tool, but also, to some extent, demonstrates the well-posedness of the problems NEAT was solving (bloating for Neural Networks can be a sign of overfitting ill-conditioned data).
As argued in Section 2.3, the halting of the growth process is based on the stabilization of the energy of the organism, checked over some time window. The width of this time window has been set to 8 time steps in all experiments. However, because not all networks will stabilize, a maximum number of iterations has to be imposed. This maximum number was set to 1024, and if no stabilization has occurred by that time, the fitness is set to 0: as 0 is the worst possible value for the fitness, this amounts to using a death penalty for the stabilization constraint.
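This halting rule can be sketched as follows (the scalar energy measure and the tolerance `EPS` are assumptions; the text specifies only the 8-step window, the 1024-iteration cap, and the fitness-0 death penalty):

```python
MAX_ITERS = 1024   # hard cap; exceeding it triggers the death penalty
WINDOW = 8         # stabilization window width (time steps)
EPS = 1e-6         # assumed tolerance on the energy variation

def grow(initial_state, step, energy):
    """Iterate `step` until `energy` is stable over WINDOW steps.

    Returns (final_state, n_iters) on stabilization, or
    (None, MAX_ITERS) to signal the fitness-0 death penalty.
    """
    state = initial_state
    history = []
    for t in range(1, MAX_ITERS + 1):
        state = step(state)
        history.append(energy(state))
        if len(history) >= WINDOW:
            window = history[-WINDOW:]
            if max(window) - min(window) < EPS:
                return state, t
            history = window    # keep only what the check needs
    return None, MAX_ITERS
```

An organism already at a fixed point stabilizes after exactly one window; a diverging one exhausts the cap and receives fitness 0.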
With such settings, a typical run lasts about one day on a 3.4GHz Pentium IV. Though this might seem a huge computational cost, we believe that it is not a critical issue when designing real-world structures: On the one hand, designing mechanical parts is already a time-consuming process, involving highly trained engineers - and human time nowadays costs much more than CPU time. On the other hand, when the structure being designed is bound to be built in thousands or millions, a few days represent a very small overhead indeed.
## 3.4 Results
## 3.4.1 Comparing Fitnesses

The statistics for the off-line results are displayed as the usual box-plots¹ on Figures 3, 4 and 5, respectively, for the

¹ As generated by the R statistical package; see http://en.wikipedia.org/wiki/Box_plot for a precise description.
<image> <image>
3-bands, disc and half-discs problems of Figure 2, and online results (the average over the 16 runs of the fitness of the best-of-generation individuals as evolution progresses)
are shown in Figure 6 for the 3-bands problem.
The results for the 2-bands problem are almost identical for the 5 models, and are not presented here: same average fitness of 0.999, with a slightly larger variance for the developmental approaches (and variance 0 for the regression model). For each setting of the embryogenic approach, though, some runs were able to find a marginally better solution than that of the regression model - but without any statistical significance. For the slightly more difficult three-bands target, the reference model is still able to find an exact solution, as shown in Figure 3, while the 3 embryogenic models give nearly optimal individuals.

<image>

<image>

Figure 6: Evolution of average of best fitness for the 3-bands problem. The lowest curve is that of the 1-recurr model, and the 3 indistinguishable curves above the other 2 are those of the other 3 artificial embryogeny models.

<image>

Figure 7: Development stages on the three-bands problem for the recurrent NN with 2 chemicals at iterations 16, 32 and 44 (columns) for the phenotype (top row), and both chemicals.
As expected, the disc target is difficult for the embryogenic approaches: as can be seen on the box-plots (Figure 4), all 4 are clearly outperformed by the reference model, which was not trapped in the same local optimum. The on-line results did not reveal any other conclusion, and are not shown here. It is worth noting that here, experiments using 2 chemicals outperform the same model with a single chemical (with statistically significant differences according to a 95% confidence T-test).
Finally, the situation is slightly different for the half-discs, the most difficult target (Figure 5): all embryogenic models are, again, clearly outperformed by the reference model, even though this model does not reach as good a fitness as for the disc problem. However, the best results among the embryogenic approaches are obtained by the recurrent networks, which exhibit a much larger variance and thus sometimes reach much better fitnesses - with a slight advantage for the 2-chemicals recurrent model in this respect.
## 3.4.2 Halting Criterion And Robustness
The evolved halting criterion is one of the main original features of the proposed approach. It thus needs to be studied in detail, especially as it is closely related to the self-healing properties, i.e. the robustness with respect to noise during
<image>
Figure 8: Development stages on the half-discs problem for the recurrent NN with 2 chemicals at iterations 28, 64 and 122 (columns) for the phenotype (top row), and both chemicals.

Figure 9: Self-healing on the three-bands problem for the recurrent NN and 2 chemicals: Snapshots of the phenotype at iterations 0 (beginning of the perturbation), 4, 11, 17 and 22.
the growth iterations.
Because all organisms are allowed 1024 iterations in their growth process, it could be feared that several hundred iterations would be needed before stabilization, even for the best solutions found by the algorithm. The total computational cost would then have been tremendously higher than it already is. The good news is that in all cases, and for all embryogenic models, the whole population rapidly contains a large majority of organisms that stabilize within a few dozen iterations. Illustrations of the growth process are given in Figures 7 and 8. For the easy 3-bands problem, only 44 iterations are needed (and chemical 1 does not change after the 16th iteration). For the more difficult half-discs problem, 122 iterations are needed.
But another important issue is that of robustness: earlier works [12, 6] have demonstrated that developmental approaches lead to robust solutions as far as development is concerned. Here, the robustness of the fixed points was checked by applying a centered Gaussian perturbation with unit standard deviation to the states of all neurons. The good news is that for any perturbation, 100% of the feedforward controllers and 75% of the recurrent controllers return to the very same state they had before the perturbation. The other 25% of the recurrent controllers return to a state very close to the one they had before the perturbation. An example of perfect and fast self-healing for the three-bands problem is shown in Figure 9.
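This perturbation test can be sketched as follows (a hypothetical helper, not the authors' code; the unit-standard-deviation Gaussian noise follows the text, while the recovery tolerance is an assumption):

```python
import numpy as np

def self_heals(step, fixed_point, rng, tol=1e-3, max_iters=1024):
    """Apply N(0, 1) noise to all neuron states, iterate the organism,
    and test whether it returns to its pre-perturbation fixed point."""
    state = fixed_point + rng.normal(0.0, 1.0, size=fixed_point.shape)
    for _ in range(max_iters):
        state = step(state)
    return bool(np.max(np.abs(state - fixed_point)) < tol)
```

A controller whose fixed point is a strong attractor passes this test from any perturbed state; one with no attracting fixed point does not.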
To sum up, the embryogenic approach often performs nearly as well as a simple regression (f(x, y) = z), if using the same optimizer. The feedforward and the recurrent networks seem hardly distinguishable across the 4 experiments, and a slight advantage of the 2-chemical over the 1-chemical model could be hypothesized. The most interesting result concerns the almost perfect self-healing property of the resulting organisms.
## 4. Related Works And Discussion
This section discusses the proposed approach in the light of other works on multi-cellular embryogenies from the literature.
The pioneering work by Julian Miller [12] belongs to the
'duplicating cells' category: cells are allowed to duplicate, and growth starts with a single cell. Cells achieve communication by placing chemicals at their location and reading chemicals from their 8 neighbors. Moreover, a hand-written mechanism ensures their diffusion on the grid. Each cell can also differentiate into one of four cell types (one of the three colors, or the 'dead cell' tag) and each cell communicates its type to neighboring cells. The cell controller is designed as a boolean logic circuit optimized with Cartesian Genetic Programming [14], and the task is to find an organism that fits a 12x9 French flag. Experiments are conducted with a varying number of chemicals (from 0 to 4), and results showed that after 10 iterations, the French flag could be reproduced with nearly 95% similarity. Even more interesting results concerning self-repair showed that under varying perturbations, the system could still recover and converge toward patterns that are somewhat similar (though not identical) to the ones it would have achieved without perturbations.
In [4], Diego Federici extends Miller's work: again, only one single cell exists at iteration 0, and duplication is allowed. Each cell gets as input the 4 neighboring cell types and one single chemical concentration, resulting here also from a hand-written diffusion rule. The controller is a multilayer perceptron with fixed topology (only the weights are optimized) and the task is to fit a set of 9x6 flags (including the Norwegian flag). One interesting feature is that the optimization process is twisted to favor diversity, and implements a clever problem decomposition scheme named "multiple embryogenic stages" with convincing results.
The work by Gordon and Bentley [2] differs from the previous approaches by considering only communication and differentiation in the substrata. The grid starts with a cell at all available grid points, and cells communicate by diffusing chemicals to neighboring cells only. Each cell then receives as input one chemical concentration, computed as the average of the concentrations of all neighboring cells: hence, no orientation information is available. In the Cellular Automata context, such a system is called a totalistic automaton. One drawback of this approach is that it requires some cells to have different chemical concentrations at startup. Furthermore, it makes the whole model biased toward symmetrical patterns ("four-fold dihedral symmetry"). The controller is a set of 20 rules that produce one of the four chemicals and send it toward neighboring cells. The set of rules is represented by a bit vector and is evolved using a classical bitstring GA. The paper ends with some comparisons with previous works, namely [4, 12], demonstrating comparable and sometimes better results. But a possible explanation for that success could be the above-mentioned bias of the method toward symmetrical patterns.
The approach proposed here shares some similarities with the approaches described above. The controller is defined as a neural network, as in [4]; but in contrast to [4], both the topology and the weights are optimized, thanks to NEAT.
Further work should determine whether this difference is essential or not by running the same algorithm (i.e. with the stabilization incentive in the fitness) and multi-layer perceptron controllers.
However, there are even greater similarities between the present work and that in [2]. In both works, the grid is filled with cells at iteration 0 of the growth process (i.e. no replication is allowed) and chemicals are propagated only in a cell-to-cell fashion, without the diffusion mechanisms used in [4, 12].

Indeed, pure cell-to-cell communication is theoretically sufficient for modelling any kind of temporal diffusion function, since diffusion in the substrata is the result of successive transformations with non-linear functions (such as the ones implemented by sigmoidal neural networks with hidden neurons). However, this means that the optimization algorithm must tune both the diffusion reaction and the differentiation of the cells. On the other hand, whereas [2] only considers the average of the chemical concentrations of the neighboring cells (i.e. is totalistic in the Cellular Automata terminology), our approach does take into account the topology of the organism at the controller level, de facto benefiting from orientation information. This results in a more general approach, though probably a less efficient one for reaching symmetrical targets. Here again, further experiments must be run to give a solid answer.
But two main issues contribute to the originality of the approach proposed here: (1) the output for cell differentiation is a continuous value, and (2) the halting problem is indirectly addressed through the fitness function, that favors convergence towards a stable state (i.e. a fixed point).
Indeed, all other works consider that a cell may differentiate into one of a given set of discrete states (e.g. blue, red, and white), while the output is considered here as a continuous value (discretized into a 256-gray-level value). At first sight, this can be thought of as making the problem harder by increasing the size of the search space. However, it turns out that a continuous output results in a rather smooth fitness landscape, something that is known to be critical for Evolutionary Algorithms. Additional experiments (not reported here) did demonstrate that it was much harder to solve the same flag problems when discretizing the controller outputs before computing the fitness (Section 3.1). Indeed, discretized outputs lead to a piecewise constant fitness landscape, and the algorithm has no clue about where to go on such flat plateaus. However, here again, more experiments are needed before drawing strong conclusions.
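The flat-plateau effect of discretizing before computing the fitness can be illustrated with a toy example (a constructed illustration, not the paper's actual fitness function): a tiny move of the controller outputs changes the continuous error, but leaves the discretized error unchanged.

```python
import numpy as np

LEVELS = 256  # gray levels, as in the flag experiments

def continuous_err(out, target):
    return float(np.mean((out - target) ** 2))

def discretized_err(out, target):
    # snap both output and target to the 256-level gray scale
    q = np.round(out * (LEVELS - 1)) / (LEVELS - 1)
    tq = np.round(target * (LEVELS - 1)) / (LEVELS - 1)
    return float(np.mean((q - tq) ** 2))

target = np.array([0.30, 0.70])
a = np.array([0.40, 0.60])
b = a + 1e-4   # a tiny move of the controller outputs

# The continuous landscape reacts to the move, giving the optimizer
# a direction to follow ...
assert continuous_err(a, target) != continuous_err(b, target)
# ... while the discretized landscape is locally flat: no signal.
assert discretized_err(a, target) == discretized_err(b, target)
```

The move `1e-4` is smaller than one gray-level step (1/255), so both outputs quantize to the same levels: a whole neighborhood of genotypes gets exactly the same fitness.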
Secondly, from a dynamical system viewpoint, the objective function can be seen as selecting only the organisms that do reach a fixed point, starting from given initial conditions defined as uniformly initialized cells. First, all previous works needed to decide a priori the number of iterations that growth would use, and it is clear that such a parameter is highly problem dependent, and hence should be made adaptive if possible. But more than that, the good news is the strength of the fixed point reached by the organism, its attracting power when starting from other initial conditions - that is, an extreme case of self-healing capabilities against perturbations for the organism. Of course, previous works [12] already noted that the growth process is remarkably stable under perturbations, and is able to reach a pattern quite similar (though not identical) to the original target pattern.
However, it should be noted that the organisms evolved in
[12] keep on growing if growth is continued after the fixed number of iterations, and eventually turn out to completely diverge from the target pattern. Similarly, [4] observes that
a perturbation in the earlier stages of development leads to an increase in the disruption of the final pattern that is linear with respect to the number of development steps. Robustness towards perturbation was later confirmed and more thoroughly studied in [6].
But, as demonstrated by the experiments shown in Section 3.4.2, our model achieves astounding results regarding the self-healing property. Starting from completely random conditions (i.e. inputs and outputs set to random values),
the system is able to perform a 100% recovery and to converge to the exact pattern that was reached during evolution
(i.e. when starting with value 0 for all neuron activations).
Some runs were performed without the stabilization criterion; the final individuals never showed such properties. This seems to be a clear consequence of the way stabilization is favored in the fitness function - though the precise reason for the extraordinary absorbing property of all the fixed points reached in the experiments so far remains to be understood.
## 5. Conclusion
This paper has introduced a continuous Neural Network model for Multi-Cellular Developmental Design. The Neural Network is evolved using the state-of-the-art NEAT algorithm, which optimizes both the topology and the weights of the network, and can evolve both feedforward and recurrent neural networks. The model was validated on four instances of the 'flag' problem, and on 3 out of 4 instances it performed as well as NEAT applied to the equivalent regression problem: this is a hint that the modeling error of the developmental approach is not much bigger than that of the Neural Network approach for regression (which is proved to be small, thanks to the Universal Approximator property), and is in any case small compared to the computational error (i.e. the error made by NEAT when searching for the globally optimal network).
But the most salient feature of this model lies in the stopping criterion for the growth process: whereas most previous work required the number of iterations to be decided a priori, the proposed algorithm selects organisms that reach a fixed point, making the stopping criterion implicitly adaptive. The major (and somewhat unexpected) consequence of this adaptivity is the tremendous robustness toward perturbations during the growth process: in almost all experiments, the fixed point that is reached from the initial state used during evolution (all neural activations set to 0) seems to be a global attractor, in the sense that the organism will end up there from any starting point.
## 6. References
[1] W. Banzhaf. On the dynamics of an artificial regulatory network. In ECAL'03, pages 217–227, 2003.

[2] P. Bentley. Investigations into graceful degradation of evolutionary developmental software. Natural Computing, 4(4):417–437, 2005.

[3] P. Bentley and S. Kumar. Three ways to grow designs: A comparison of embryogenies for an evolutionary design problem. In W. B. et al., editor, GECCO'99, pages 35–43. Morgan Kaufmann, 1999.

[4] D. Federici. Increasing evolvability for developmental programs. In J. Miller, editor, Workshop on Regeneration and Learning in Developmental Systems, WORLDS 2004, 2004.

[5] D. Federici and K. Downing. Evolution and development of a multicellular organism: scalability, resilience, and neutral complexification. Artificial Life, 12(3):381–409, 2006.

[6] D. Federici and T. Ziemke. Why are evolved developing organisms also fault-tolerant? In SAB'06, pages 449–460, 2006.

[7] T. G. W. Gordon and P. J. Bentley. Bias and scalability in evolutionary development. In GECCO'05, pages 83–90. ACM Press, 2005.

[8] F. Gruau. Genetic micro programming of neural networks. In K. E. Kinnear Jr., editor, Advances in GP, pages 495–518. MIT Press, 1994.

[9] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.

[10] L. O. Chua and L. Yang. Cellular neural networks: Applications. IEEE Trans. on Circuits and Systems, 35(10):1273–1290, 1988.

[11] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer Verlag, New-York, 1992–1996. 1st–3rd edition.

[12] J. F. Miller. Evolving a self-repairing, self-regulating, French flag organism. In GECCO, pages 129–139, 2004.

[13] J. F. Miller and W. Banzhaf. Evolving the program for a cell: from French flags to boolean circuits. In S. Kumar and P. J. Bentley, editors, On Growth, Form and Computers. Academic Press, 2003.

[14] J. F. Miller and P. Thomson. Cartesian genetic programming. In P. et al., editor, EuroGP'00, pages 121–132. LNCS 1802, Springer-Verlag, 2000.

[15] D. Roggen and D. Federici. Multi-cellular development: Is there scalability and robustness to gain? In X. Y. et al., editor, PPSN'04, pages 391–400. LNCS 3242, Springer Verlag, 2004.

[16] K. Sims. Evolving virtual creatures. In SIGGRAPH'94, pages 15–22. ACM Press, July 1994.

[17] K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002.

[18] K. O. Stanley and R. Miikkulainen. A taxonomy for artificial embryogeny. Artificial Life, 9(2):93–130, 2003.

[19] A. M. Turing. The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 237(641):37–72, Aug. 1952.

[20] S. Wolfram. A New Kind of Science. Wolfram Media, 2002.
# Response Prediction Of Structural System Subject To Earthquake Motions Using Artificial Neural Network
S. Chakraverty*, T. Marwala** , Pallavi Gupta* and Thando Tettey**
*B.P.P.P. Division, Central Building Research Institute, Roorkee-247 667, Uttaranchal, India; e-mail: [email protected]

**School of Electrical and Information Engineering, University of the Witwatersrand, Private Bag 3, Wits 2050, Republic of South Africa
## Abstract
This paper uses Artificial Neural Network (ANN) models to compute the response of a structural system subjected to Indian earthquake ground motion data from Chamoli and Uttarkashi. The system is first trained on a single real earthquake record. The trained ANN architecture is then used to simulate earthquakes of various intensities, and it was found that the responses predicted by the ANN model are accurate for practical purposes. When the ANN is trained on part of the ground motion data, it can also identify the responses of the structural system well. In this way the safety of structural systems may be predicted for future earthquakes without waiting for an earthquake to occur. The time period and the corresponding maximum response of the building for an earthquake have been evaluated, and the network is again trained to predict the maximum response of the building at different time periods. The trained time-period-versus-maximum-response ANN model is also tested on real earthquake data from another site, which was not used in training, and was found to be in good agreement.
Keywords: Earthquake, Neural Network, Frequency, Structure, Building.
<image>
Fig. 2(a). Chamoli Earthquake at Barkot in NE direction. Peak Acceleration = 0.16885 m/sec/sec
<image>
Fig. 2(b). Uttarkashi Earthquake at Barkot in NE direction. Peak Acceleration = 0.9317 m/sec/sec
<image>
<image>
<image>
<image>
<image>
<image>
<image>
Fig. 5(a). 120% Response Comparison Between Neural and Desired of Chamoli Earthquake at Barkot (NE) (748 points), ω = 0.01 (After training from 550 points)

<image>

Fig. 5(b). 80% Response Comparison Between Neural and Desired of Chamoli Earthquake at Barkot (NE) (748 points), ω = 0.5 (After training from 550 points)
|
<image>
<image>
<image>
## 1 Introduction
Real earthquake ground motion at a particular building site is very complicated. The response of a building to an earthquake is dynamic: the building is subjected to a vibratory shaking of the base. Exactly how a building responds is complex and depends on the amplitude and frequency of vibration, along with the material and design of the building. All buildings have a "natural frequency" associated with them. If the structure is strained and then allowed to snap back into equilibrium, it will sway back and forth with an amplitude that decays with time. If the ground shakes at the same frequency as a building's natural frequency, the amplitude of sway will grow larger and larger: the ground shaking is in resonance with the building's natural frequency. This produces the most strain on the components of the building and can quickly cause the building to collapse. The powerful technique of the Artificial Neural Network (ANN) has been used to model the problem for a one-storey structure. Among the different types of ANN, the feedforward, multilayer, supervised neural network with the error back-propagation algorithm (BPN) [1] is the most frequently applied NN model. The dynamic response of a structure to strong earthquake ground motion may be investigated by different methods. The method used here is to create a trained black box containing the characteristics of the structure and of the earthquake motion, which can predict the dynamic response of a particular structure to any other earthquake. Artificial Neural Networks (ANNs) have gradually been established as a powerful soft-computing tool in various fields because of their excellent learning capacity and their high tolerance to partially inaccurate data. ANNs have recently been applied to assess damage in structures. Stefano et al. [3] used probabilistic Neural Networks for seismic damage prediction. Many methods viz. 
[4]-[9] were introduced for response estimation and for structural control. Zhao et al. [10]
applied a counter-propagation NN to locate damage in beams and frames. KuZniar and Waszczyszyn [11] simulated the dynamic response for prefabricated building using ANN. Elkordy et al.[12] used a back-propagation neural network with modal shapes in the input layer to detect the simulated damage of structures. Muhammad [13] gives certain ANN applications in concrete structures. Pandey and Barai [14] detected damage in a bridge truss by applying ANN of multilayer perceptron architectures to numerically simulated data. Some studies such as [15]-[17] used artificial neural network for structural damage detection and system identification. In the present paper, the Chamoli earthquake ground acceleration at Barkot (NE) and Uttarkashi earthquake ground acceleration recorded at Barkot (NE and NW) have been considered based on the authors' previous study [18]. From their ground acceleration the responses are computed using the usual procedure. Then the ground acceleration and the corresponding response are trained using Artificial Neural Network (ANN) with and without damping. After training the network with one earthquake, the converged weight matrices are stored. In order to show the power of these converged (trained) networks other earthquakes are used as input to predict the direct response of the structure without using any mathematical analysis of the response prediction. Similarly, the various time periods of one earthquake and its corresponding maximum responses are trained. Then the converged weights are used to predict the maximum response directly to the corresponding time period. Various other results related to use of these trained networks are discussed for future / other earthquakes.
## 2 Artificial Neural Network
Artificial neural systems are present-day machines that have great potential to improve the quality of our lives. Advances have been made in applying such systems to problems that are difficult for traditional computation. A neural network is a parallel, distributed information-processing structure consisting of processing elements called neurons, which are interconnected by unidirectional signal
channels called connections. The general structure of the network used here is given in Fig. 1. The structure consists of three layers: the input layer, the hidden layer and the output layer. The input layer is made up of one or more neurons or processing elements that collectively represent the information in a particular pattern of a training set. The hidden layer also consists of one or more neurons. Its purpose is simply to transform the information from the input layer to prepare it for the output layer. The output layer, which has one or more neurons, uses input from the hidden layer (which is a transformation of the input layer) to produce an output value for the entire network. The output is used to interpret the training and classification results of the network. The processing elements or neurons are connected to each other by adjustable weights. The input/output behaviour of the network changes if the weights are changed. So, the weights of the net may be chosen in such a way as to achieve a desired output. To satisfy this goal, systematic ways of adjusting the weights have to be developed, which are known as training or learning algorithms.
<image>
Fig. 1. Layered Feedforward Neural Network

## 3 Error Back Propagation Training Algorithm (EBPTA)
Here, the Error Back Propagation Training algorithm with feedforward recall and one hidden layer has been used. In Fig. 1, $Z_i$, $P_j$ and $O_k$ denote the input, hidden and output layers respectively. The weights between the input and hidden layers are denoted by $\nu_{ji}$ and the weights between the hidden and output layers by $W_{kj}$. The procedure of this algorithm may easily be written down. Given $R$ training pairs

$$\{Z_1, d_1;\; Z_2, d_2;\; \ldots;\; Z_R, d_R\}$$

where $Z_i$ ($I\times 1$) are the inputs and $d_i$ ($K\times 1$) are the desired values for the given inputs, the error value is computed as

$$E=\frac{1}{2}\sum_{k=1}^{K}(d_k-O_k)^2$$

for the present neural network as shown in Fig. 1.

The error signal terms of the output layer ($\delta_{O_k}$) and hidden layer ($\delta_{P_j}$) are written respectively as

$$\delta_{O_k}=0.5\,(d_k-O_k)(1-O_k^2),\qquad k=1,2,\ldots,K$$

$$\delta_{P_j}=0.5\,(1-P_j^2)\sum_{k=1}^{K}\delta_{O_k}W_{kj},\qquad j=1,2,\ldots,J$$

Consequently, the hidden-layer weights ($\nu_{ji}$) and output-layer weights ($W_{kj}$) are adjusted as

$$\nu_{ji}^{(\mathrm{new})}=\nu_{ji}^{(\mathrm{old})}+\beta\,\delta_{P_j}Z_i,\qquad j=1,2,\ldots,J,\; i=1,2,\ldots,I$$

$$W_{kj}^{(\mathrm{new})}=W_{kj}^{(\mathrm{old})}+\beta\,\delta_{O_k}P_j,\qquad k=1,2,\ldots,K,\; j=1,2,\ldots,J$$

where $\beta$ is the learning constant.
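As a concrete sketch, the update rules above can be written in a few lines of NumPy. The bipolar-sigmoid activation (whose derivative yields the $0.5(1-f^2)$ factor) and the variable names follow the notation of Fig. 1; this is an illustrative assumption, not the authors' code:

```python
import numpy as np

def bipolar_sigmoid(x):
    # f(x) = (1 - e^-x)/(1 + e^-x) = tanh(x/2);  f'(x) = 0.5*(1 - f(x)**2)
    return np.tanh(0.5 * x)

def ebpta_step(Z, d, V, W, beta=0.1):
    """One EBPTA update for a single training pair.
    Z: input (I,), d: desired output (K,),
    V: input-to-hidden weights (J, I), W: hidden-to-output weights (K, J),
    beta: learning constant."""
    P = bipolar_sigmoid(V @ Z)                    # hidden-layer response (J,)
    O = bipolar_sigmoid(W @ P)                    # output-layer response (K,)
    delta_O = 0.5 * (d - O) * (1 - O**2)          # output error signals
    delta_P = 0.5 * (1 - P**2) * (W.T @ delta_O)  # hidden error signals
    W = W + beta * np.outer(delta_O, P)           # W_kj += beta * delta_Ok * P_j
    V = V + beta * np.outer(delta_P, Z)           # v_ji += beta * delta_Pj * Z_i
    E = 0.5 * np.sum((d - O) ** 2)                # error before the update
    return V, W, E
```

Repeating `ebpta_step` over the training pairs until the error falls below a tolerance reproduces the training loop described in the text.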
|
## Case (ii): With Damping

Let M be the mass of the generalized one-storey structure, K the stiffness of the structure, C the damping and x the displacement relative to the ground; then the equation of motion may be written as

$$M\ddot{x}+C\dot{x}+Kx=-M\ddot{a}\tag{4}$$

where $\ddot{a}$ = ground acceleration, $x$ = displacement, $\dot{x}$ = response velocity and $\ddot{x}$ = response acceleration.

Equation (4) may be written as

$$\ddot{x}+2\xi\omega\dot{x}+\omega^{2}x=-\ddot{a}\tag{5}$$

where $\xi\omega=C/2M$ and $\omega^{2}=K/M$, $\omega$ being the natural frequency parameter of the undamped structure. The solution of equation (5) [Ref. 2] is given by

$$x(t)=-\frac{1}{\omega}\int_{0}^{t}\ddot{a}(\tau)\exp[-\xi\omega(t-\tau)]\sin[\omega(t-\tau)]\,\mathrm{d}\tau\tag{6}$$
From this solution the response of the structure viz. acceleration is obtained for damping. Now, the neural network architecture is constructed, taking ground acceleration as input and the response obtained from the above solution is taken as output for each time step. Therefore, the whole network consists of one input layer, one hidden layer with varying nodes and one output layer as shown in Fig.1. Similarly |
|
for the other problem of time period vs. maximum response the input and output layer contain the time period and the corresponding maximum response respectively at each interval for the particular structure.
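For reference, the convolution integral of Eq. (6) can be evaluated numerically from a sampled accelerogram. The sketch below uses direct trapezoidal quadrature; it is an illustrative implementation, not the "usual procedure" the authors refer to:

```python
import numpy as np

def duhamel_response(a_g, dt, omega, xi):
    """Displacement response x(t) of the damped SDOF system, Eq. (6),
    by direct quadrature of the convolution integral.
    a_g: sampled ground acceleration, dt: time step,
    omega: frequency parameter, xi: damping ratio."""
    n = len(a_g)
    t = np.arange(n) * dt
    x = np.zeros(n)
    for i in range(n):
        tau = t[: i + 1]
        kern = np.exp(-xi * omega * (t[i] - tau)) * np.sin(omega * (t[i] - tau))
        x[i] = -np.trapz(a_g[: i + 1] * kern, dx=dt) / omega
    return x
```

Feeding a recorded accelerogram through this routine produces the desired response time history used as the training target for the network.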
## 5 Numerical Results And Discussions
For the present study two Indian earthquakes, viz. the Chamoli earthquake (maximum ground acceleration = 0.16885 m/sec²) at Barkot in the NE (north-east) direction shown in Fig. 2(a), and the Uttarkashi earthquake (maximum ground acceleration = 0.931 m/sec²) at Barkot in the NE (north-east) and NW (north-west) directions as given in Figs. 2(b) and 2(c), have been considered for training and testing. Initially, the system without damping is studied; for that, the ground acceleration of the Chamoli earthquake at Barkot (NE) was used to compute the response of a single-storey structure using the usual procedure from Eq. (3). The obtained response and the ground acceleration are trained first for the assumed frequency parameters ω = 0.5 and ω = 0.01 over the time range 0 to 14.92 sec (748 data points) for the mentioned earthquake. Simulations have been done for different numbers of hidden-layer nodes and it was seen that the response result is almost the same and good for 5 to 20 nodes in the hidden layer. However, 10 hidden-layer nodes are used here to generate further results.

After training the ground-acceleration and response data for the Chamoli earthquake at Barkot (NE) with 10 nodes in the hidden layer, the weights are stored and used to predict responses for earthquakes of various intensities. The plot in Fig. 3(a) shows the comparison between the neural and desired responses for 80% of the Chamoli earthquake at Barkot (NE) for ω = 0.01 (maximum response = 0.135079 m/sec²). Similarly, the response comparison for 120% of the Chamoli earthquake at Barkot (NE) for ω = 0.5 (maximum response = 0.20260 m/sec²) is shown in Fig. 3(b).

Next, a part of the ground acceleration is used for the training and it will be shown that the trained ANN can predict the response over the whole period from this partial data. So, the ground acceleration and
|
# Fault Classification Using Pseudomodal Energies And Neuro-Fuzzy Modelling
TSHILIDZI MARWALA, THANDO TETTEY AND SNEHASHISH CHAKRAVERTY
## Abstract
This paper presents a fault classification method which makes use of a Takagi-Sugeno neuro-fuzzy model and Pseudomodal energies calculated from the vibration signals of cylindrical shells. The calculation of Pseudomodal Energies, for the purposes of condition monitoring, has previously been found to be an accurate method of extracting features from vibration signals. This calculation is therefore used to extract features from vibration signals obtained from a diverse population of cylindrical shells. Some of the cylinders in the population have faults in different substructures. The pseudomodal energies calculated from the vibration signals are then used as inputs to a neuro-fuzzy model. A leave-one-out cross-validation process is used to test the performance of the model. It is found that the neuro-fuzzy model is able to classify faults with an accuracy of 91.62%, which is higher than the previously used multilayer perceptron.
## Introduction
The process of monitoring and identifying faults in structures is of great importance in aerospace, civil and mechanical engineering. Aircraft operators must be sure that aircraft are free from cracks. Bridges and buildings nearing the end of their useful life must be assessed for load-bearing capacities. Cracks in turbine blades lead to catastrophic failure of aero-engines and must be detected early. Many techniques have been employed in the past to locate and identify faults. Some of these are visual (e.g. dye penetrant methods) and others use sensors to detect local faults (e.g. acoustics, magnetic field, eddy current, radiographs and thermal fields). These methods are time consuming and cannot indicate that a structure is fault-free without testing the entire structure in minute detail. Furthermore, if a fault is buried deep within the structure it may not be visible or detectable by these localised techniques. The need to detect faults in complicated structures has led to the development of global methods which are able to utilise changes in the vibration characteristics of the structure as a basis of fault detection [1].
There are four main methods by which vibration data may be represented: time, modal, frequency and time-frequency domains. Raw data are obtained from measurements made in the time domain. From the time domain, Fourier transform techniques may then be used to transform the data into the frequency domain.

____________
University of the Witwatersrand, {t.tettey, t.marwala}@ee.wits.ac.za. Central Building Research Institute, [email protected]
|
From the frequency-domain data, and sometimes directly from the time domain, the modal properties may be extracted. All of these domains theoretically contain similar information, but in reality this is not necessarily the case. Because the time-domain data are relatively difficult to interpret, they have not been used extensively for fault identification, and for this reason the modal properties have been widely considered. In this paper we use the pseudomodal energies, defined from the Frequency Response Function (FRF), together with a Takagi-Sugeno (TS) neuro-fuzzy model to classify faults in a population of cylindrical shells.
## Background

## Pseudomodal Energies
In this work, pseudomodal energies are used for the classification of faults in cylinders. Pseudomodal energies have been found to allow for better classification of faults when compared to modal properties. A pseudomodal energy is defined as the integral of the frequency response function (FRF) over a given frequency bandwidth [2]. The FRF is defined as the ratio of the Fourier-transformed response to the Fourier-transformed force. The pseudomodal energies are therefore the integrals of the real and imaginary parts of the FRFs over various frequency ranges that bracket the natural frequencies.
On one hand, the receptance expression of the FRF is defined as the ratio of the frequency response of displacement to the frequency response of force. On the other hand, the inertance expression of the FRF is defined as the ratio of the frequency response of acceleration to the frequency response of force. The pseudomodal energies can likewise be expressed in terms of the receptance and inertance. The commonly used techniques of collecting vibration data involve measuring the acceleration response, and it is therefore more useful to calculate the inertance pseudomodal energies. The inertance pseudomodal energy is derived by integrating the inertance FRF written in terms of the modal properties using the modal summation equation, as follows:
$$\mathrm{IME}_{kl}^{q}=\int_{a_{q}}^{b_{q}}\sum_{i=1}^{N}\frac{-\omega^{2}\varphi_{k}^{i}\varphi_{l}^{i}}{-\omega^{2}+2j\zeta_{i}\omega_{i}\omega+\omega_{i}^{2}}\,\mathrm{d}\omega\tag{1}$$

where $a_q$ and $b_q$ represent, respectively, the lower and the upper frequency bounds for the qth pseudomodal energy calculated from the FRF due to excitation at $k$ and measurement at $l$, $N$ is the number of modes and $\zeta_i$ is the damping ratio of mode $i$. For a detailed derivation of this equation consult [3]. Assuming the damping is low, Eq. (1) becomes [2]:

$$\mathrm{IME}_{kl}^{q}\approx\sum_{i=1}^{N}\left\{\varphi_{k}^{i}\varphi_{l}^{i}(b_{q}-a_{q})-\omega_{i}\varphi_{k}^{i}\varphi_{l}^{i}\left[\arctan\left(\frac{-\zeta_{i}\omega_{i}-jb_{q}}{\omega_{i}}\right)-\arctan\left(\frac{-\zeta_{i}\omega_{i}-ja_{q}}{\omega_{i}}\right)\right]\right\}\tag{2}$$

The advantage in using IMEs over the use of the modal properties is that all the modes
in the structure are taken into account as opposed to using the modal properties, which are |
|
limited by the number of modes identified; and integrating the FRFs to obtain the pseudomodal energies smoothes out the zero-mean noise present in the FRFs.
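In practice the band integrals can be evaluated numerically on a measured FRF. A minimal sketch follows; the band edges and the trapezoidal rule are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def pseudomodal_energies(freq, frf, bands):
    """Pseudomodal energies: integrals of a measured FRF over frequency
    bands that bracket the natural frequencies.
    freq: frequency axis, frf: complex inertance FRF samples,
    bands: list of (a_q, b_q) band edges chosen by the analyst."""
    energies = []
    for a_q, b_q in bands:
        mask = (freq >= a_q) & (freq <= b_q)
        # integrate real and imaginary parts separately over the band
        re = np.trapz(frf.real[mask], freq[mask])
        im = np.trapz(frf.imag[mask], freq[mask])
        energies.append(complex(re, im))
    return np.array(energies)
```

The resulting vector of band energies is what would be fed as a feature vector to the neuro-fuzzy classifier.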
## Neuro-Fuzzy Modelling
A fuzzy inference system is a model that takes a fuzzy set as an input and performs a composition to arrive at the output based on the concepts of fuzzy set theory, fuzzy *if-then* rules and fuzzy reasoning [4]. Simply put, the fuzzy inference procedure involves the fuzzification of the input variables, evaluation of the rules, aggregation of the rule outputs and, finally, defuzzification of the result. There are two popular types of fuzzy models: the Mamdani model and the Takagi-Sugeno model. The Takagi-Sugeno model is popular for data-driven identification and is used in this study. In this model the antecedent part of the rule is a fuzzy proposition and the consequent is an affine linear function of the input variables, as shown in (3) [5].
$$R_{i}:\ \text{If }\mathbf{x}\text{ is }A_{i}(\mathbf{x})\text{ then }y_{i}=\mathbf{a}_{i}^{T}\mathbf{x}+b_{i},\ [w_{i}],\qquad i=1,2,\ldots,K\tag{3}$$

where $\mathbf{a}_i$ is the consequent parameter vector, $b_i$ is a scalar offset and $i=1,2,\ldots,K$. The symbol $K$ is the number of fuzzy rules in the model and $w_i\in[0,1]$ is the weight of the rule. The antecedent propositions in the model describe the fuzzy regions in the input space in which the consequent functions are valid and can be stated in the following conjunctive form:

$$R_{i}:\ \text{If }x_{1}\text{ is }A_{i,1}(x_{1})\text{ and }\ldots\text{ and }x_{n}\text{ is }A_{i,n}(x_{n})\text{ then }\hat{y}=\mathbf{a}_{i}^{T}\mathbf{x}+b_{i},\ [w_{i}]\tag{4}$$
The degree of fulfilment of the ith rule is calculated as the product of the individual membership degrees and the rule's weight:

$$\beta_{i}(\mathbf{x})=w_{i}A_{i}(\mathbf{x})=w_{i}\prod_{j=1}^{n}A_{i,j}(x_{j})\tag{5}$$

The output $y$ is then computed by taking a weighted average of the individual rules' contributions:

$$y=\frac{\sum_{i=1}^{K}\beta_{i}(\mathbf{x})\,y_{i}}{\sum_{i=1}^{K}\beta_{i}(\mathbf{x})}=\frac{\sum_{i=1}^{K}\beta_{i}(\mathbf{x})(\mathbf{a}_{i}^{T}\mathbf{x}+b_{i})}{\sum_{i=1}^{K}\beta_{i}(\mathbf{x})}\tag{6}$$

where $\beta_i(\mathbf{x})$ is the degree of fulfilment of the ith rule. The parameters $\mathbf{a}_i$ are then approximate models of the considered nonlinear system.
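Equations (5) and (6) amount to a weighted average of local linear models. A minimal sketch, assuming Gaussian membership functions (the paper does not fix a particular membership shape):

```python
import numpy as np

def ts_output(x, centers, widths, a, b, w):
    """Output of a Takagi-Sugeno fuzzy model, Eqs. (5)-(6).
    x: input (n,); centers, widths: (K, n) Gaussian membership parameters
    (an illustrative assumption); a: (K, n) consequent vectors;
    b: (K,) offsets; w: (K,) rule weights in [0, 1]."""
    A = np.exp(-0.5 * ((x - centers) / widths) ** 2)   # memberships A_ij(x_j)
    beta = w * A.prod(axis=1)                          # fulfilment, Eq. (5)
    y_rules = a @ x + b                                # local linear models
    return np.sum(beta * y_rules) / np.sum(beta)       # weighted average, Eq. (6)
```

With a single rule the output reduces exactly to that rule's affine consequent, which is a quick sanity check on the implementation.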
Fuzzy rule-based systems with learning ability, also known as neuro-fuzzy networks
[6], will be considered in this work. This system will be referred to as a neuro-fuzzy system (model) from here onwards. There are two approaches to training neuro-fuzzy models [7]: |
|
1. **Fuzzy rules may be extracted from expert knowledge and used to create an initial**
model. The parameters of the model can then be fine tuned using data collected from the operational system being modelled.
2. **The number of rules can be determined from collected numerical data using a**
model selection technique. The parameters of the model are also optimised using the existing data.
The second approach is used in this study as there is no expert knowledge which will allow us to create an adequate initial model.
## Experimental Setup

## Data Gathering And Pre-Processing
The data in this work are obtained by performing an experiment on a population of cylinders, which are supported by a sponge resting on bubble-wrap to simulate a 'free-free' environment. This setup is illustrated in Figure 1 below. The sponge is inserted inside the cylinders to control the boundary conditions; this is discussed further below. Conventionally, a 'free-free' environment is achieved by suspending a structure, usually with light elastic bands. A 'free-free' environment is implemented so that rigid-body modes, which do not exhibit bending or flexing, can be identified. These modes occur at a frequency of 0 Hz and can be used to calculate the mass and inertia properties. In the present study, we are not interested in the rigid-body modes. Testing the cylinders suspended is approximately the same as testing them while resting on bubble-wrap, because the natural frequency of a cylinder on the wrap is below 100 Hz. The first natural frequency of the cylinders being analysed is over 300 Hz, well above the natural frequency of a cylinder on bubble-wrap, so the cylinder on the wrap is effectively decoupled from the ground. It should be noted that the use of bubble-wrap adds some damping to the structure, but the damping added is found to be small enough for the modes to be easily identified. The impulse hammer test is then performed on each of the 20 steel seam-welded cylindrical shells. The impulse is applied at 19 different locations as indicated in Figure 1: 9 on the upper half of the cylinder and 10 on the lower half. The sponge inserted inside the cylinder controls the boundary conditions by being rotated every time a measurement is taken. The top impulse positions are located 25 mm from the top edge and the bottom impulse positions 25 mm from the bottom edge of the cylinder. The angle between two adjacent impulse positions is 36°.
For one cylinder the first type of fault is a zero-fault scenario. This type of fault is given the identity [0 0 0], indicating that there are no faults in any of the three substructures. The second type of fault is a one-fault-scenario, where a hole may be located in any of the three substructures. Three possible one-fault-scenarios are [1 0 0], [0 1 0], and [0 0 1] indicating one hole in substructures 1, 2 or 3 respectively. The third type of fault is a two-fault scenario, where one hole is located in two of the three substructures. Three possible two-fault-scenarios are [1 1 0], [1 0 1], and [0 1 1]. The final type of fault is a three-fault-scenario, where a hole is located in all three substructures, and the identity of this fault is [1 1 1]. There are 8 different types of fault-cases considered (including [0 0 0]). |
|
optimum number of rules in the model has been determined by evaluating models with between one and ten fuzzy rules. The optimum number of rules is found to be four as it gives a low prediction error together with a small standard deviation. The remaining parameters of the neuro-fuzzy model are optimised using a combination of the least squares and gradient descent methods.
## Threshold Determination
For a given input set, the neuro-fuzzy model gives an output of three decision values [x, y, z]. A correct classification must give the correct values of x, y and z, i.e. it must correctly predict all the faults in one cylinder. The neuro-fuzzy model gives output values in the range [0, 1], and it is expected that the decision point will be around the halfway mark, i.e. 0.5. In this experiment, two separate methods of determining the output have been tested. One simply assumes a decision point of 0.5 and the other finds a decision point that minimises the error on the training set of 167. Individual thresholds are evaluated for each of the three fault areas on the cylindrical shell. The selected threshold is the one that yields the maximum accuracy as defined by:

$$Acc=pos\cdot tpr+neg\cdot(1-fpr)=\frac{tpr+c\,(1-fpr)}{c+1}\tag{7}$$

where tpr is the true positive rate, also known as the sensitivity, given by:

$$tpr = TP/(TP + FN)\tag{8}$$

In Eq. (7), fpr is the false positive rate (the complement of the specificity), given by:

$$fpr = FP/(FP + TN)\tag{9}$$

TP, FN, FP and TN are all obtained from a confusion matrix and are defined as true positive, false negative, false positive and true negative, respectively. The parameter c is the relative importance of negatives to positives. In this study the fault cases and non-fault cases have been given equal importance in classification, meaning c has been assigned a value of 1. The results obtained from both threshold-selection techniques are given in the next section. It should be noted that a correct classification is one in which the classifier correctly predicts the condition of all three substructures of the cylinder.
## Generalisation Performance
One of the problems experienced in machine learning is the assessment of the generalisation capabilities of a model. The K-fold cross-validation method has been shown to be an improved measure of performance over the holdout method, which divides the dataset into a training and a testing set [8]. With K-fold cross-validation, the dataset is divided into K approximately equal sets. The holdout method is then performed K times, where each time one of the K sets is held back as a testing set and the model is optimised using the combined remaining K-1 sets. The generalisation estimate is then the average error of the model over all K sets. Leave-one-out cross-validation is K-fold cross-validation taken to its extreme: K is equal to N, the number of instances in the given dataset. The cross-validation technique, though computationally expensive, is useful especially in cases where modelling data are very limited. Moreover, it has been shown that not only is the leave-one-out generalisation estimate a better measure, but the worst-case error of this estimate is not much worse than that of the training-error estimate [8]. In our work, the number of data points is limited to 167. We therefore use the leave-one-out cross-validation method to measure the performance of the TS neuro-fuzzy model.
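The leave-one-out procedure can be sketched generically; `fit` and `predict` are placeholders for any model (the TS neuro-fuzzy model would be plugged in here):

```python
import numpy as np

def loocv_error(X, y, fit, predict):
    """Leave-one-out cross-validation: hold out each instance in turn,
    fit on the remaining N-1 instances, and average the squared test
    errors over all N held-out instances."""
    n = len(y)
    errors = []
    for i in range(n):
        keep = np.arange(n) != i               # everything except instance i
        model = fit(X[keep], y[keep])
        errors.append((predict(model, X[i:i + 1])[0] - y[i]) ** 2)
    return float(np.mean(errors))
```

For a classifier the squared error would simply be replaced by a 0/1 misclassification indicator, giving the accuracy figures reported in the next section.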
## Results And Discussions
The classification result is assessed using the leave-one-out cross-validation process described above. The performance of the TS neuro-fuzzy model is shown in Table 1 below.
Table 1: The table shows the classification results that are obtained when using the neuro-fuzzy model
| neuro-fuzzy model | | |
|-----------------------|---------------------|----------|
| Method | Misclassified cases | Accuracy |
| Varied threshold | 14 | 91.62% |
| Fixed threshold (0.5) | 16 | 90.42% |
From the table we can see that the method of optimising the threshold is slightly superior in that it allows us to classify two more fault cases. The different thresholds selected during the classification are shown in Figure 2 below.

<image>

Figure 2: An illustration of the different thresholds that were selected for fault positions x, y and z.
The method used in this paper therefore gives a slightly better accuracy than the Bayesian trained neural network used by Marwala [2].
## Conclusion
|
|
In this paper, pseudomodal energies obtained from the FRF have been calculated from vibration signals measured from a population of cylindrical shells. These have been used as inputs to a TS neuro-fuzzy model. A model selection process reveals that the optimum number of fuzzy rules for accurate classification is four, as it allows for high accuracy with low variance. The TS neuro-fuzzy model classifies the faults in the cylindrical shells with an accuracy of 91.62%, which is an improvement over what the probabilistic neural network has been able to do in the past.
## Acknowledgments
The authors gratefully acknowledge the financial contributions and support obtained from the National Research Fund (NRF), Department of Science and Technology (DST) and the Central Building Research Institute (CBRI). It is through the funding of these institutions that this project was possible.
## References
1. Doebling, S.W., Farrar, C.R., Prime, M.B., and Shevitz, D.W. "Damage identification and health monitoring of structural and mechanical systems from changes in their vibration characteristics: a literature review" Los Alamos Technical Report LA-13070-MS., Los Alamos National Laboratory, New Mexico, USA, 1996.
2. Marwala, T. "On fault identification using a committee of neural networks and vibration data."
American Institute of Aeronautics and Astronautics., Vol. 39, no. 8, 1608-1618, 2001.
3. Ewins, D. J., *Modal Testing: Theory and Practice*. Research Studies Press, Letchworth, U.K.
4. Jang, J., Sun, C., and Mizutani, E., *Neuro-fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence*. New Jersey: Prentice Hall, first ed., 1997.
5. Babuska, R., "Fuzzy Modeling and Identification," Ph.D. Dissertation, Technical University of Delft, Delft, Holland, 1997.
6. Leondes, C. T., "Fuzzy Logic and Expert Systems Applications," First Edition, Academic Press, San Diego, 1998.
7. Babuska, R., and Verbruggen, H., "Neuro-fuzzy methods for nonlinear system identification," Annual Reviews in Control, vol. 27, pp. 73-85, 2003.
8. Kearns, M. J., and Ron, D., "Algorithmic stability and sanity-check bounds for leave-one-out cross-validation," in Proceedings of the 10th Annual Conference on Computational Learning Theory, 1997.
|
# On-Line Condition Monitoring Using Computational Intelligence
C.B. VILAKAZI, T. MARWALA, P. MAUTLA and E. MOLOTO
School of Electrical and Information Engineering University of the Witwatersrand Private Bag 3, Wits, 2050 SOUTH AFRICA
[email protected]

Abstract: - This paper presents bushing condition monitoring frameworks that use multi-layer perceptron (MLP), radial basis function (RBF) and support vector machine (SVM) classifiers. The first level of the framework determines whether the bushing is faulty or not, while the second level determines the type of fault. The diagnostic gases in the bushings are analyzed using dissolved gas analysis. MLP gives superior performance in terms of accuracy and training time compared with SVM and RBF. In addition, an on-line bushing condition monitoring approach, which is able to adapt to newly acquired data, is introduced. This approach is able to accommodate new classes introduced by incoming data and is implemented using an incremental learning algorithm that uses MLP. The testing results improved from 67.5% to 95.8% as new data were introduced, and from 60% to 95.3% as new conditions were introduced. On average the confidence value of the framework on its decision was 0.92.

Key-Words: - Dissolved gas analysis (DGA), multi-layer perceptron (MLP), radial basis function (RBF), support vector machine (SVM), Learn++
## 6 Introduction

Bushings are important components in the transmission and distribution of electricity. The reliability of bushings affects the availability of electricity in an area as well as the economical operation of the area. Transformer failure studies show that bushings are among the top three most common causes of transformer failure [1][2]. Bushing failure is usually followed by a catastrophic event such as tank rupture, violent explosion of the bushing and fire [2]. In such an eventuality the major concern is the risk of collateral damage and personnel injury.
Various diagnostic tools exist, such as on-line partial discharge (PD) analysis, on-line power factor and infrared scanning, to detect an impending transformer failure [3]. However, few of these methods can in isolation provide all of the information that a transformer operator requires to decide upon a course of action. Computational intelligence methods can be used in conjunction with the above-mentioned methods for bushing condition monitoring. Condition monitoring has a number of important benefits: an unexpected failure can be avoided through the possession of quality information relating to the on-line condition of the plant and the consequent ability to identify faults while at incipient levels of development. In this paper, methods based on computational intelligence techniques are developed and then used for interpreting data from the dissolved gas-in-oil analysis (DGA) test. The methods use the machine-learning classifiers multi-layer perceptrons (MLP), radial basis functions (RBF) and support vector machines (SVM).

These methods are compared and the most effective method is implemented within the on-line framework. The justification for an on-line implementation is that training data become available in small batches and some new conditions only appear in a subsequent data-collection stage; there is therefore a need to update the classifier incrementally without compromising the classification performance on previous data.
## 7 Background

This section gives background on dissolved gas analysis, artificial neural networks and support vector machines.

## 7.1 Dissolved Gas Analysis (DGA)
DGA is the most commonly used diagnostic technique for transformers and bushings [4][5]. DGA is used to detect oil breakdown, moisture presence and PD activity. Fault gases are produced by degradation of transformer and bushing oil and of solid insulation such as paper and pressboard, which are all made of cellulose [6]. The gases produced from transformer and bushing operation are [5][7][8]: (1) hydrocarbon gases and hydrogen: methane (CH4), ethane (C2H6), ethylene (C2H4), acetylene (C2H2) and hydrogen (H2); (2) carbon oxides: carbon monoxide (CO) and carbon dioxide (CO2); and (3) non-fault gases: nitrogen (N2) and oxygen (O2).

The causes of faults are classified into two main groups: partial discharges and thermal heating. Partial discharge faults are divided into high-energy discharge and low-energy discharge. High-energy discharge is known as arcing and low-energy discharge is referred to as corona. The quantity and types of gases reflect the nature and extent of the stressed mechanism in the bushing [9]. Oil breakdown is shown by the presence of hydrogen, methane, ethane, ethylene and acetylene. High levels of hydrogen show that the degeneration is due to corona. High levels of acetylene occur in the presence of arcing at high temperature. Methane and ethane are produced from low-temperature thermal heating of oil, while high-temperature thermal heating produces ethylene and hydrogen as well as methane and ethane. Low-temperature thermal degradation of cellulose produces CO2 and high temperature produces CO.
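The gas/fault associations above suggest a simple screening rule. The sketch below is purely illustrative: the threshold values are hypothetical and are neither standard DGA limits nor the method used in this paper (which trains classifiers on the gas data instead):

```python
def dga_screen(gas_ppm):
    """Illustrative rule-of-thumb screen based on the gas/fault
    associations described above. Thresholds are hypothetical.
    gas_ppm: dict mapping gas name to concentration in ppm."""
    findings = []
    if gas_ppm.get("H2", 0) > 100:
        findings.append("possible corona (low-energy discharge)")
    if gas_ppm.get("C2H2", 0) > 1:
        findings.append("possible arcing (high-energy discharge)")
    if gas_ppm.get("C2H4", 0) > 50:
        findings.append("possible high-temperature thermal heating of oil")
    if gas_ppm.get("CO", 0) > 350:
        findings.append("possible high-temperature degradation of cellulose")
    return findings or ["no fault indication"]
```

The machine-learning classifiers described next replace such hand-set thresholds with decision boundaries learned from labelled DGA records.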
## 7.2 Artificial Neural Network
Artificial neural networks (ANN) are data-processing systems that learn complex input-output relationships from data [10]. A typical ANN consists of simple processing elements called neurons that are highly interconnected in an architecture inspired by the structure of biological neurons in the human brain [10]. There are different types of ANN models; the two that are commonly used and considered in this paper are multi-layer perceptron (MLP) and radial basis function (RBF) networks.
## 7.2.1 Multi-Layer Perceptrons
MLPs are feed-forward neural networks that provide a general framework for representing non-linear functional mappings between a set of input variables and a set of output variables. This is achieved by representing a non-linear function of many variables as a composition of non-linear functions of a single variable, called activation functions [10]. Fig. 1 shows the architecture of an MLP with four input-layer neurons, three hidden-layer neurons and two output-layer neurons. From Fig. 1, the relationship between input (x) and output (y) can be written as [10]:
$$y_{k}=f\Biggl(\sum_{j=0}^{N}w_{kj}^{(2)}\,f\Biggl(\sum_{i=0}^{d}w_{ji}^{(1)}x_{i}\Biggr)\Biggr)\tag{1}$$

In (1), $w_{kj}^{(2)}$ and $w_{ji}^{(1)}$ represent the weights of layer 2 and layer 1 respectively, while $d$ and $N$ are the number of input-layer and hidden-layer neurons, respectively.

<image>
The aim of training is to minimize the error function in order to find the most probable weight connections given the training data. MLP training teaches the network to match the input to a corresponding output. Two types of learning algorithms exist: supervised and unsupervised learning. Supervised learning is used in this paper to estimate the weight parameters. In supervised learning, the neural network is presented with both the input and output values. The actual output of the MLP together with its associated target is used to evaluate the error function, which quantifies the error of the mapping [10]. The goal of parameter estimation is to minimize the difference between the prediction in equation (1) and the target t. This is achieved by minimising the cross-entropy error E [10]:

$$E=-\beta\sum_{n=1}^{N}\sum_{k}\bigl\{t_{nk}\ln(y_{nk})+(1-t_{nk})\ln(1-y_{nk})\bigr\}+\frac{\alpha}{2}\sum_{j=1}^{W}w_{j}^{2}\tag{2}$$

In (2), the cross-entropy function is chosen because it has been found to be ideally suited to classification problems. In equation (2), n is the index over the training patterns, β is the data contribution to the error and k is the index over the output units. The second term in equation (2) is the regularisation term and it penalises weights of large magnitude. This form of regularisation is called weight decay and its coefficient, α, determines the relative contribution of the regularisation term to the training error. This regularisation term ensures that the mapping function is smooth; including it has been found to give significant improvements in network generalisation [10]. In this paper, equation (2) is minimised using the scaled conjugate gradient method [11].
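Equations (1) and (2) can be sketched as follows; the logistic sigmoid activation and the parameter values are illustrative assumptions:

```python
import numpy as np

def mlp_forward(x, W1, W2):
    """Forward pass of a one-hidden-layer MLP, Eq. (1).
    x: input (d,); W1: (N, d) layer-1 weights; W2: (K, N) layer-2 weights.
    A logistic sigmoid activation is assumed for both layers."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sig(W2 @ sig(W1 @ x))

def cross_entropy_wd(y, t, weights, alpha=0.01, beta=1.0):
    """Cross-entropy error with weight-decay regularisation, Eq. (2).
    y, t: predicted and target outputs in (0, 1); weights: flat weight
    vector; alpha: weight-decay coefficient; beta: data contribution."""
    ce = -beta * np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))
    return ce + 0.5 * alpha * np.sum(weights ** 2)
```

Training would adjust `W1` and `W2` to drive `cross_entropy_wd` down, e.g. with the scaled conjugate gradient method cited in the text.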
|
## 7.2.2 Radial Basis Function
RBFs are a type of feed-forward neural network employing a hidden layer of radial units and an output layer of linear units [10]. In an RBF network, the distance between the input vector and a centre vector determines the activation [10]. RBFs have their roots in techniques for performing exact interpolation of a set of data points in a multi-dimensional space. This interpolation requires that every input vector be mapped exactly onto the corresponding target vector. Fig. 2 shows the architecture of an RBF network with four input-layer neurons, five hidden-layer neurons and two output-layer neurons.
<image>
The RBF network can be mathematically described as follows [10]:
$$f_{k}(x)=\sum_{j=0}^{n}w_{kj}\phi_{j}(x)+b_{k}\tag{3}$$
<image>
where $f_k$ represents the k-th output layer transfer function, w and b represent the weights and biases, and $\phi_j$ represents the j-th hidden layer basis function, represented in this paper by:
$$\phi_{j}(x)=\exp\left(-\frac{\left\|x-\mu_{j}\right\|^{2}}{2\sigma_{j}^{2}}\right)\tag{4}$$
Here x represents the input, $\mu_j$ represents the fixed centre position and $\sigma_j$ represents the fixed variance. RBF training consists of a two-stage technique: in the first stage the input data are used to determine the parameters of the basis functions. The basis functions are then kept fixed while the second-layer weights are found in the second stage of training. The second stage of training is explained in the next section.

In the first stage, the input data are used to determine the centres and variances of the basis functions. A randomly selected subset of the input training data set is used as the basis centres. Clusters of training data are then identified and a basis function is centred at each cluster. The width parameter $\sigma$ is then set from the maximum distance between the basis function centres. In the second stage of training, the basis functions are kept fixed and the output layer weights are modified; this is equivalent to training a single-layer neural network.
## 7.3 Support Vector Machines
SVM is a learning approach that implements the principle of Structural Risk Minimization (SRM). The structural risk minimization principle has been observed to be superior to the empirical risk minimization principle used in conventional neural networks [12]. SVM was developed to solve classification problems [12] and is schematically represented in Fig. 3. The idea behind SVM is to map an input space, x, into a higher dimensional feature space, z. The goal is to find a function f(x) that maps the training inputs to the training outputs. Various feature spaces are used, such as polynomial, Gaussian, Fourier series and splines, as well as RBF and MLP nested within the activation function [13].
The classification decision function is given by [13]:
$$f(x)=\sum_{i=1}^{m}\left(\alpha_{i}-\alpha_{i}^{*}\right)k\left(x_{i}\right)\tag{5}$$
Here, $k(x_i)$ is the kernel function and $\alpha_i$, $\alpha_i^{*}$ are the Lagrange multipliers. The hyperplane that optimally separates the data is derived by minimizing the
Lagrangian, $\Phi$, with respect to the weights w, bias b and the multipliers $\alpha$ [13], given by:
$$\Phi=\frac{1}{2}\left\|w\right\|^{2}-\sum_{i}\alpha_{i}\left(y_{i}\left[\left(w\cdot x_{i}\right)+b\right]-1\right)\tag{6}$$
The multipliers are constrained by $0\leq\alpha_{i},\alpha_{i}^{*}\leq C$, where C is the misclassification tolerance or capacity. If the value of C is too large, the kernel function will overfit the training data and will not have good generalization properties.
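The decision function of equation 5 can be sketched as below. Two hedges: the paper writes the regression-style multipliers (α − α*), while this sketch uses the common two-class form with labels $y_i$; the multipliers are assumed to have already been found by a QP solver and clipped to [0, C], and the γ value of the Gaussian kernel is illustrative:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Gaussian kernel k(a, b); gamma is an illustrative value."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def svm_decision(x, support_vectors, alphas, targets, b=0.0):
    """Classification form of equation 5 (sketch): a weighted sum of
    kernel evaluations against the support vectors, then a sign."""
    s = sum(a * y * rbf_kernel(sv, x)
            for a, y, sv in zip(alphas, targets, support_vectors))
    return np.sign(s + b)
```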
## 8 Proposed Frameworks
<image>
The proposed frameworks for fault diagnosis are a two-level implementation. The first level of the diagnosis identifies whether the bushing is faulty or not. If the bushing is faulty, the second level determines the type of fault, which can be a thermal fault, a PD fault or a fault caused by an unknown source. Generally, the procedure of fault diagnosis includes three steps: extracting features and pre-processing the data, training the classifiers, and identifying the transformer fault with the trained classifiers. Fig.4 shows the block diagram of the proposed methodology.
## 8.1 Data Processing

DGA is used to determine the fault gases in the bushing oil. The content information reflects the states of the transformer and bushing. The ten diagnostic gases mentioned in Section 2 are extracted: CH4, C2H6, C2H4, C2H2, H2, CO, CO2, N2, O2 and total dissolved combustible gases. The total dissolved combustible gas is given by the sum of hydrogen, methane, acetylene, ethane, ethylene and carbon monoxide. The fault gases are analysed using the IEEE C57.104 standard [14]. Data pre-processing is an integral part of neural network architecture and makes it easier for the network to learn. The data are normalized to fall within 0 and 1 using linear normalization.
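The linear normalization step can be sketched as a per-column min-max scaling (our own sketch; it assumes no gas column is constant, otherwise the denominator would be zero):

```python
import numpy as np

def minmax_normalise(X):
    """Scale each gas-concentration column linearly into [0, 1]."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)   # assumes x_max > x_min per column
```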
## 8.2 MLP, RBF and SVM Classifiers

In this method MLP, RBF and SVM are trained. As the classifiers for the first level and second level classifications are trained in a similar way, the training procedure explained below applies to both the two-class and the three-class problems. The optimal number of hidden layer neurons was found by using exhaustive search. An MLP with the optimal number of hidden layer neurons was trained using the scaled conjugate gradient [11]. The centroids of the RBF are found by using a Gaussian mixture model with circular covariance using the Expectation Maximization (EM) algorithm [10], and the output layer was trained using the scaled conjugate gradient algorithm. Cross-validation is used to ensure that a network with good generalization properties is achieved. Cross-validation was also used to determine the best kernel function and capacity of the SVM. The SVM was then trained with the optimal hyperplane and the optimal C. The networks are tested with 1000 data points that are randomly selected from the data set. Table 1 shows the results of the networks trained to identify whether the transformer is faulty or not, which is the first level stage. The methods are evaluated using specificity and sensitivity. Sensitivity is defined as the probability of the classifier predicting faults correctly and specificity is the probability of the classifier predicting the non-faulty state correctly.

Table 1: Comparison of the performance of different frameworks for first level of fault diagnosis
| | MLP | RBF | SVM |
|------------------------|--------|--------|----------|
| Accuracy (%) | 98.9 | 97.4 | 98.5 |
| Specificity | 0.796 | 1.000 | 0.996 |
| Sensitivity | 0.999 | 0.885 | 0.885 |
| Training Time(s) | 41.236 | 0.625 | 1975.437 |
| Classification Time(s) | 0.0157 | 0.0314 | 104.314 |
The table compares the frameworks in terms of accuracy, training time and testing time. The MLP classifier shows a classification accuracy of 98.9%, RBF shows 97.4% and SVM gives 98.5%. This shows that there is no significant difference between the SVM and MLP classifiers. Although RBF performs worse than MLP and SVM in terms of classification accuracy, it trains faster, while SVM is computationally the most expensive.
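The sensitivity and specificity figures reported in Table 1 follow directly from the confusion counts; a minimal sketch (treating "fault" as the positive class):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """tp/fn/tn/fp are confusion-matrix counts with 'fault' positive.

    Sensitivity: P(fault predicted | fault present).
    Specificity: P(non-fault predicted | no fault present)."""
    return tp / (tp + fn), tn / (tn + fp)
```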
Table 2 compares the results of the networks designed, in terms of accuracy, training time and testing time, to classify bushing conditions into thermal faults, PD faults and faults caused by an unknown source; this is called the second level classification. The table shows that the MLP classifier gives 98.62% classification accuracy while the RBF and SVM classifiers give 81.73% and 96.90%, respectively. In the second level classification, the MLP classifier performs better than the RBF and SVM.
Table 2: Comparison of the performance of different frameworks for second level of fault diagnosis
| | MLP | RBF | SVM |
|-------------------------|--------|-------|--------|
| Accuracy (%) | 98.62 | 81.73 | 96.90 |
| Training Time (s) | 30.016 | 1.038 | 83.906 |
| Classification Time (s) | 0.780 | 0.05 | 1.094 |
Because the MLP gives the best results compared with RBF and SVM on condition monitoring of bushings, it is selected as the most appropriate learning engine and is thus chosen for on-line learning.
## 9 On-Line Learning
As indicated earlier in the paper, on-line learning is suitable for modelling dynamically time-varying systems, where the operating region changes with time. It is also suitable if the available data are not adequate to fully represent the system.
Another advantage of on-line learning is that it is able to accommodate new conditions that may be introduced by incoming data. An on-line bushing condition monitoring system must have incremental learning capability if it is to be used for automatic and continuous on-line monitoring. The on-line bushing monitoring system improves reliability, reduces maintenance cost and minimizes out-of-service time for a transformer. The basis of on-line learning is incremental learning, which has been studied by a number of researchers [15][16][17][18]. The difficulty in on-line learning is the tendency of an online learner to forget information gathered during the early stages of learning [19]. The method of on-line learning adopted in this paper is Learn++ [20].
## 9.1 Learn++
Learn++ is an incremental learning algorithm that uses an ensemble of classifiers combined through weighted majority voting [20]. Learn++ was developed by Polikar [20] and was inspired by a boosting algorithm called adaptive boosting (AdaBoost). Each classifier is trained using a training subset that is drawn according to a distribution. The classifiers are trained using a weakLearn algorithm. The requirement for the weakLearn algorithm is that it must initially give a classification error of less than 50% [20]. For each database $D_k$ that contains a training sequence S, where S contains learning examples and their corresponding classes, Learn++ starts by initialising the weights, w, according to the distribution $D_T$, where T is the number of hypotheses. Initially the weights are set to be uniform, which gives all instances equal probability of being selected for the first training subset, and the distribution is given by:
$$D_{1}=\frac{1}{m}\tag{7}$$
where m represents the number of training examples in $S_k$. The training data are then divided into a training subset TR and a testing subset TE to ensure weakLearn capability. The distribution is used to select the training subset TR and testing subset TE from $S_k$. After the training and testing subsets have been selected, the weakLearn algorithm is implemented. The weakLearner is trained using the subset TR. A hypothesis, $h_t$, obtained from the weakLearner is tested
using both the training and testing subsets to obtain an error, $\varepsilon_t$:
$$\varepsilon_{t}=\sum_{i:h_{t}(x_{i})\neq y_{i}}D_{t}(i)\tag{8}$$
The error is required to be less than 0.5; a normalized error $\beta_t$ is computed using:
$$\beta_{t}=\frac{\varepsilon_{t}}{1-\varepsilon_{t}}\tag{9}$$
If the error is greater than 0.5, the hypothesis is discarded and new training and testing subsets are selected according to $D_T$ and another hypothesis is computed. All classifiers generated are then combined using weighted majority voting to obtain the composite hypothesis, $H_t$:
$$H_{t}=\arg\max_{y\in Y}\sum_{t:h_{t}(x)=y}\log(1/\beta_{t})\tag{10}$$
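The vote of equation 10 can be sketched directly; this assumes each hypothesis has already produced a class prediction for the instance, and all names are our own:

```python
import numpy as np

def weighted_majority_vote(predictions, betas, n_classes):
    """Composite hypothesis (equation 10): hypothesis t votes for its
    predicted class with weight log(1/beta_t); the class with the
    largest total vote wins."""
    votes = np.zeros(n_classes)
    for pred, beta in zip(predictions, betas):
        votes[pred] += np.log(1.0 / beta)
    return int(np.argmax(votes))
```

Note how a single accurate hypothesis (small β) can outvote two weaker ones voting together.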
Weighted majority voting gives higher voting weights to a hypothesis that performs well on the training and testing subsets. The error of the composite hypothesis is computed by:
$$E_{t}=\sum_{i:H_{t}(x_{i})\neq y_{i}}D_{t}(i)\tag{11}$$
If the error is greater than 0.5, the current hypothesis is discarded and new training and testing data are selected according to the distribution $D_T$. Otherwise, if the error is less than 0.5, the normalized error of the composite hypothesis is computed as given in equation (12).
## 10 Experimental Results For On-Line Learning
The first experiment evaluates the incremental learning capability of the algorithm using the first level fault diagnosis. The data used were collected over a period of 2.5 years from bushings in service. The algorithm is implemented with 1500 training examples and 4000 validation examples. The training data were divided into five databases, each with 300 training instances. In each training session, Learn++ is provided with one database and 20 hypotheses are generated. The weakLearner uses an MLP with 10 input layer neurons, 5 hidden layer neurons and one output layer neuron. To ensure that the method retains previously learned data, the previous databases are tested at each training session. The first row of Table 3 shows the performance of Learn++ on the training data for the different databases. On average, the weakLearner gives a 60% classification rate on its training dataset, which improves to 98% when the hypotheses are combined. This demonstrates the performance improvement of Learn++, as inherited from AdaBoost, on a single database. Fig. 5 shows the performance of Learn++ on the training dataset against the number of classifiers for a single database.
Each column shows the performance on the current and previous databases, to show that Learn++ does not forget previously learned information when new data are introduced. The last row of Table 3 shows the classifiers' performance on the testing dataset, which gradually improves from 65.7% to 95.8% as new databases become available, demonstrating the incremental capability of Learn++. Fig. 6 shows the performance of Learn++ on one dataset against the number of datasets. Table 4 shows that the confidence of the framework increases as new data are introduced.
The second experiment was performed to evaluate whether the framework can accommodate new classes. The fault data were divided into 1000 training examples and 2000 validation examples, which contained all three classes. The training data were divided into five databases, each with 200 training instances. The first and second databases contained training examples of PD and thermal faults.
$$B_{t}=\frac{E_{t}}{1-E_{t}}\tag{12}$$
The error is used in the distribution update rule, where the weights of the correctly classified instances are reduced, consequently increasing the relative weights of the misclassified instances. This ensures that instances that were misclassified by the current hypothesis have a higher probability of being selected for the subsequent training set. The distribution update rule is given by:
$$w_{t+1}(i)=w_{t}(i)\times B_{t}^{1-[|H_{t}(x_{i})\neq y_{i}|]}\tag{13}$$
Once the T hypotheses are created for each database, the final hypothesis is computed by combining the hypotheses using weighted majority voting, given by:
$$H_{final}=\arg\max_{y\in Y}\sum_{k=1}^{K}\sum_{t:H_{t}(x)=y}\log(1/\beta_{t})\tag{14}$$
## 9.2 Confidence Measurement
A simple procedure is used to determine the confidence of the algorithm in its own decision. A vast majority of hypotheses agreeing on a given instance can be interpreted as the algorithm having confidence in that decision. Let us assume that a total of T hypotheses are generated in k training sessions for a C-class problem. For any given example, the total vote that class c receives is given by [21][22]:
$$\xi_{c}=\sum_{t:h_{t}(x)=c}\psi_{t}\tag{15}$$
where $\psi_t$ denotes the voting weight of the t-th hypothesis, $h_t$. Normalizing the votes received by each class gives:
$$\gamma_{c}=\frac{\xi_{c}}{\sum_{c=1}^{C}\xi_{c}}\tag{16}$$
$\gamma_c$ can be interpreted as a measure of confidence on a scale of 0 to 1. A high value of $\gamma_c$ shows high confidence in the decision and, consequently, a low value of $\gamma_c$ shows low confidence in the decision. It should be noted that the $\gamma_c$ value does not represent the accuracy of the results but the confidence of the system in its own decision.
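Equations 15 and 16 amount to a weighted histogram of the votes followed by a normalisation; a minimal sketch with illustrative names:

```python
import numpy as np

def confidence(predictions, weights, n_classes):
    """xi_c (equation 15): total vote received by each class c;
    gamma_c (equation 16): its normalised share, on a 0-to-1 scale."""
    xi = np.zeros(n_classes)
    for pred, psi in zip(predictions, weights):
        xi[pred] += psi
    return xi / xi.sum()
```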
Furthermore, the results show that the framework is able to accommodate new conditions introduced by incoming data. The results further show that the algorithm has a high confidence in its own decision.
## References:
[1] B. Ward, "A survey of new techniques in insulation monitoring of power transformers", IEEE Electrical Insulation Magazine, vol. 17, no. 3, 2000, pp. 16-23.
[2] T. Lord, G. Hodge, "On-line Monitoring Technology Applied to HV Bushing", Proceedings of the AVO Conference, New Zealand, November 2003 (CD-ROM).
[3] B. Mojo, "Transformer condition monitoring (non-invasive Infrared Thermography Technique)", Masters Thesis, Department of Electrical Engineering, University of the Witwatersrand, 1997.
[4] S.M. Dhlamini, T. Marwala, "An Application of SVM, RBF and MLP with ARD on bushings", Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, 2004, pp. 1245-1258.
[5] X. Ding, Y. Liu, P.J. Griffin, "Neural nets and expert system diagnose transformer faults", IEEE Computer Applications in Power, vol. 13, no. 1, 2000, pp. 50-55.
[6] T.K. Saha, "Review of modern diagnostic techniques for assessing insulation condition in aged transformers", IEEE Transactions on Dielectrics and Electrical Insulation, vol. 10, no. 5, 2003, pp. 903-917.
[7] T. Yanming, Q. Zheng, "DGA based Insulation Diagnosis of Power Transformer via ANN", Proceedings of the 6th International Conference on Properties and Applications of Dielectric Materials, vol. 1, 2000, pp. 133-137.
[8] A.R.G. Castro, V. Miranda, "An interpretation of neural networks as inference engines with application to transformer failure diagnosis", International Journal of Electrical Power and Energy Systems, 2005 (in press).
[9] Y. Zhang, "An artificial neural network approach to transformer fault diagnosis", IEEE Transactions on Power Delivery, 1996, pp. 1836-1841.
[10] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Oxford, UK, 1995.
[11] M.F. Møller, "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning", Neural Networks, vol. 6, 1993, pp. 525-533.
[12] S.R. Gunn, Support Vector Machines for Classification and Regression, Technical Report, University of Southampton, May 1998.
[13] V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[14] "IEEE Guide for the Interpretation of Gases Generated in Oil-Immersed Transformers", IEEE Standard C57.104-1991, 1991, pp. 1-30.
[15] C.H. Higgins, R.M. Goodman, "Incremental learning for rule based neural network", Proceedings of the International Joint Conference on Neural Networks, vol. 1, 1991, pp. 875-880.
[16] L. Fu, H.H. Hsu, J.C. Principe, "Incremental backpropagation learning networks", IEEE Transactions on Neural Networks, vol. 7, no. 3, 1996, pp. 757-761.
[17] K. Yamauchi, N. Yamaguchi, N. Ishii, "Incremental learning methods with retrieving of interfered patterns", IEEE Transactions on Neural Networks, vol. 10, no. 6, 1999, pp. 1351-1365.
[18] G.A. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, D.B. Rosen, "Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps", IEEE Transactions on Neural Networks, vol. 3, no. 5, 1992, pp. 698-713.
[19] M. McCloskey, N. Cohen, "Catastrophic interference in connectionist networks: The sequential learning problem", The Psychology of Learning and Motivation, vol. 24, 1989, pp. 109-164.
[20] R. Polikar, L. Udpa, S. Udpa, V. Honavar, "Learn++: An incremental learning algorithm for supervised neural networks", IEEE Transactions on Systems, Man and Cybernetics, Part C, Special Issue on Knowledge Management, vol. 31, no. 4, 2001, pp. 497-508.
[21] R. Polikar, L. Udpa, S. Udpa, V. Honavar, "An incremental learning algorithm with confidence estimation for automated identification of NDE signals", IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 51, no. 8, 2004, pp. 990-1001.
[22] M. Muhlbaier, A. Topalis, R. Polikar, "Ensemble confidence estimates posterior probability", Proceedings of the 6th International Workshop on Multiple Classifier Systems, Springer Lecture Notes in Computer Science (LNCS), vol. 3541, 2005, pp. 3666-3700.
# The Road To Quantum Artificial Intelligence

Kyriakos N. Sgarbas
Wire Communications Lab., Dept. of Electrical and Computer Engineering, University of Patras, GR-26500, Patras, Greece E-mail: [email protected]
## Abstract
This paper overviews the basic principles and recent advances in the emerging field of Quantum Computation (QC), highlighting its potential application to Artificial Intelligence (AI). The paper provides a very brief introduction to basic QC issues like quantum registers, quantum gates and quantum algorithms, and then presents references, ideas and research guidelines on how QC can be used to deal with some basic AI problems, such as search and pattern matching, as soon as quantum computers become widely available.

Keywords: Quantum Computation, Artificial Intelligence
## 1. Introduction
Quantum Computation (QC) is the scientific field that studies how the quantum behavior of certain subatomic particles (i.e. photons, electrons, etc.) can be used to perform computation and eventually large scale information processing.
Superposition and *entanglement* are two key-phenomena in the quantum domain that provide a much more efficient way to perform certain kinds of computations than classical algorithmic methods. In QC information is stored in *quantum registers* composed of series of *quantum bits* (or *qubits*). QC defines a set of operators called quantum gates that operate on quantum registers performing simple qubit-range computations. *Quantum algorithms* are successive applications of several quantum gates on a quantum register and perform more elaborate computations.
QC's ability to perform parallel information processing and rapid search over unordered sets of data promises significant advances to the whole scientific field of information processing. This article focuses on the benefits QC has to offer in the area of Artificial Intelligence (AI). In fact, several research papers have already reported how QC relates to specific aspects of AI (e.g. quantum game theory [Miakisz et al. (2006)], quantum evolutionary programming [Rylander et al. (2001)], etc). The present article attempts a more global view on quantum methods for AI applications addressing not only work already done but also some broad ideas for future work. But first it presents a very brief (due to space limitation) introduction to QC basics and algorithms, just the essentials to understand the subject. For a full introduction and |
|
more details the reader is advised to read [Karafyllidis (2005a)], [Gruska (1999)] or [Nielsen & Chuang (2000)].
## 2. Quantum Computation Basics
The quantum analog of a bit is called a quantum bit or *qubit*. Its physical implementation can be the energy state of an electron in an atom, the polarization of a photon, or any other bi-state quantum system. When a qubit is *measured* (or *observed*), its state is always found in one of two clearly distinct states, usually transcribed as |0> and |1>. These are direct analogs of the 0 and 1 states of a classical bit but they are also orthogonal states of a 2-dimensional Hilbert space and they are called *basis states* for the qubit. Before the qubit is measured, its state can be in a composition of its basis states denoted as:
$$\left|\mathbf{q}\right\rangle=\mathbf{a}\left|0\right\rangle+\mathbf{b}\left|1\right\rangle=\mathbf{a}\left[\begin{array}{l}{1}\\ {0}\end{array}\right]+\mathbf{b}\left[\begin{array}{l}{0}\\ {1}\end{array}\right]=\left[\begin{array}{l}{\mathbf{a}}\\ {\mathbf{b}}\end{array}\right]\tag{1}$$
In Eq.1 a and b are complex numbers called *probability amplitudes*; |a|2 is the probability of the qubit to appear in state |0> when observed, and |b|2 is the probability to appear in state |1>. Equation 1 also presents the *matrix notation* of the qubit states.
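Eq.1 can be checked numerically; the amplitudes below are an arbitrary illustrative choice satisfying |a|² + |b|² = 1:

```python
import numpy as np

# The qubit |q> = a|0> + b|1> of Eq.1 in matrix notation.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
q = np.array([a, b])

p0, p1 = np.abs(q) ** 2            # measurement probabilities |a|^2, |b|^2
assert np.isclose(p0 + p1, 1.0)    # the state must be normalised
```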
A series of qubits is called a *quantum register*. An n-qubit quantum register is denoted as:
$$\left|Q_{n}\right\rangle=c_{0}\left|0\cdots000\right\rangle+c_{1}\left|0\cdots001\right\rangle+\cdots+c_{2^{n}-1}\left|1\cdots111\right\rangle=\sum_{i=0}^{2^{n}-1}c_{i}\left|i\right\rangle\tag{2}$$
It has $2^n$ observable states, corresponding to the basis states of Eq.2, each one having a probability of $|c_i|^2$ when measured. Again, this can be considered as a vector of a $2^n$-dimensional Hilbert space with $\sum_{i=0}^{2^{n}-1}\left|c_{i}\right|^{2}=1$.
A single qubit can be considered as a trivial quantum register with n=1. When n>1 the quantum register can be considered as a series of qubits:
$$\left|Q_{n}\right\rangle=\left|q_{n-1}\right\rangle\otimes\left|q_{n-2}\right\rangle\cdots\left|q_{i}\right\rangle\cdots\left|q_{1}\right\rangle\otimes\left|q_{0}\right\rangle=\left|q_{n-1}q_{n-2}\cdots q_{i}\cdots q_{1}q_{0}\right\rangle\tag{3}$$
where ⊗ denotes the *tensor product*.
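The tensor product of Eq.3 is a Kronecker product in matrix notation; for example, a sketch composing |1> and |0> into the 4-dimensional register state |10>:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])   # |0>
ket1 = np.array([0.0, 1.0])   # |1>

# Eq.3: the register state is the tensor product of its qubit states.
reg = np.kron(ket1, ket0)     # |10> = |1> (x) |0>
```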
Quantum systems are able to simultaneously occupy different quantum states. This is known as a *superposition* of states. In fact, the state of Eq.1 for the qubit and the state of Eq.2 for the quantum register represent superpositions of the basis states over the same set of qubits. A quantum register can be in a superposition of two or more basis states (with a maximum of $2^n$, where n is the number of its qubits). The qubits of the
quantum register remain in superposition until they are measured (intentionally or not). At the time of measurement the state of the register *collapses* (or *is resolved*) to one of its basis states randomly, according to the probability assigned to that state. It is not necessary to measure every single qubit of a quantum register in order to trigger its collapse to a basis state. For example, consider this case:
$$\left|Q_{5}\right\rangle=\frac{1}{\sqrt{3}}\left|00000\right\rangle+\frac{1}{\sqrt{3}}\left|10000\right\rangle+\frac{1}{\sqrt{3}}\left|11111\right\rangle\tag{4}$$
Equation 4 specifies a 5-qubit register in superposition of three (of the 32 possible) basis states, |00000>, |10000> and |11111> with equal probability amplitudes; each of the three states has a 33% chance to be observed. Now, suppose we measure the qubits one by one starting from the leftmost. The leftmost qubit has a 67% chance to be |1> and 33% to be |0>. Let's say we measure it and find a |0>. We say that the leftmost qubit has collapsed to |0>. But it is not the only qubit that has collapsed; the rest four qubits must be all |0> too, since these are the only states consistent with the leftmost |0>. We say that the four rightmost qubits are *entangled* with the leftmost one. In other words, they are linked together in a way that each of the qubits loses its individuality. Measurement of one affects the others as long as they remain entangled together. Note that if instead of measuring the leftmost qubit we had decided to measure the rightmost one and found it |0>, three other qubits would collapse to |0> as well, but the leftmost qubit would still remain in superposition. But this does not mean that it was not affected by the measurement; it now has a 50%-50% chance of being observed in |0> or |1> instead of the initial 33%-67%.
Superposition does not always imply entanglement. For example, consider the state of Eq.2: we have to measure each and every one of the n qubits in order to determine the exact state of the register. In this case there is no entanglement.
Quantum systems in superposition or entangled states are said to be *coherent*. This is a very fragile condition and can be easily disturbed by interaction with the environment (which is considered an act of measurement). Such an accidental disturbance is called *decoherence* and results to losing information to the environment. Keeping a quantum register coherent is very difficult, especially if its size is large.
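The partial-measurement discussion around Eq.4 can be verified numerically. The sketch below builds the 5-qubit state of Eq.4 as a 32-component vector and checks both the 67% probability for the leftmost qubit and the collapse that follows an observed |0>:

```python
import numpy as np

# Eq.4: equal amplitudes on |00000>, |10000> and |11111>.
state = np.zeros(32)
state[[0b00000, 0b10000, 0b11111]] = 1 / np.sqrt(3)

# P(leftmost qubit = 1): sum |c_i|^2 over indices whose top bit is set.
p1 = sum(abs(state[i]) ** 2 for i in range(32) if (i >> 4) & 1)

# If the leftmost qubit is observed as |0>, only basis states with a
# zero top bit and non-zero amplitude survive: just |00000> here, so
# the remaining four qubits have collapsed to |0> as well.
survivors = [i for i in range(32)
             if not ((i >> 4) & 1) and abs(state[i]) > 0]
```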
## 3. Quantum Computation Components And Algorithms
Higher order quantum computation machines can be devised based on quantum registers: for instance quantum finite state automata can be produced by extending probabilistic finite-state automata into the quantum domain. Analogous extensions can be performed for other similar state machines (e.g. quantum cellular automata, quantum Turing machines, etc) [Gruska (1999)]. Regardless of the machine, the
computation is eventually reduced to a series of basic operations to some qubits of a quantum register; this is what *quantum gates* do.
Quantum gates are the basic computation components for QC. They are very different from gates in classical computation systems. Quantum gates are not circuits with input and output; they are operators over a quantum register. These operators are always reversible; most of them originate from reversible computation theory. An infinite number of quantum gates can be defined (even for a single qubit) since it is possible to define an operator that rotates an arbitrary quantum register state anywhere in the Hilbert space. The most common quantum gates are:
- *The Identity Gate:* It is the quantum equivalent of a buffer.
- *The NOT Gate:* It is used to complement the input.
- *The Hadamard Gate:* It is used to set a qubit in a superposition of two states.
Acts on a single qubit.
- *The Phase Shift Gate:* In fact it is a class of gates with varying phases. It changes the phase of a qubit in the Hilbert space.
- *The Controlled NOT Gate (CNOT or XOR):* Like the NOT gate, but acts on two qubits. The first one is called *control qubit*, the second one *target*. The gate performs a complement of the target qubit only if the control qubit is |1>.
This effect is equivalent to a XOR operation between the two qubits, hence the alternative name.
- *The Controlled Phase Shift Gate:* Like the Phase Shift gate, but acts on two qubits: control and target. It performs a phase shift on the target qubit only if the control qubit is |1>.
- *The Exchange Gate:* Acts on two qubits and exchanges their values.
- *The Controlled-Controlled NOT Gate (CCNOT or Toffoli):* Like the CNOT,
but with two control qubits. Both of them should be |1> in order to complement the target qubit.
- *The Fredkin Gate:* Like the Exchange gate, with an additional control qubit.
The two target qubits exchange their values only if the control qubit is |1>.
Each gate is expressed as a matrix, so that the application of a quantum gate on the contents of a quantum register is expressed as a matrix multiplication. Quantum Algorithms are series of applications of quantum gates over the contents of a quantum register. The most popular quantum algorithms are:
- *Parallel Computation:* Though not exactly an algorithm, the intrinsic property of quantum registers to support massively parallel computation is mentioned due to its use in almost every quantum algorithm. When a transformation is performed on the contents of a quantum register it affects the whole set of its superimposed values. Reading the outcome is a nondeterministic process, but it is possible to maximize the probability that the intended result occurs. This is called *probability amplitude amplification* [Gruska (1999)].
- *Grover's Algorithm:* It searches $N=2^n$ items superimposed on a quantum register of n qubits using a certain state as a search key. It is able to search an unordered set of items in $O(\sqrt{N})$ time [Grover (1997)].
- *Quantum Fourier Transform (QFT):* A basic subroutine in many specialized algorithms concerning factoring prime numbers and simulating actual quantum systems. QFT is a unitary operation acting on vectors in the Hilbert space. By altering their phases and probability amplitudes it can reveal periodicity in functions just like its classical analog [Coppersmith (1994)].
- *Shor's Algorithm:* It finds the period of a periodic function in polynomial time, a problem directly related to factorization of large integers [Shor (2004)]. This algorithm is famous for making obsolete the current public-key encryption systems.
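Since each gate is a matrix, a small simulation is enough to see the gates at work; the sketch below applies a Hadamard and a CNOT (control on the left qubit) to |00>, producing the entangled state (|00> + |11>)/√2:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                 # control = left qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket0 = np.array([1.0, 0.0])
plus = H @ ket0                    # |0> into an equal superposition

reg = np.kron(plus, ket0)          # two-qubit register |+>|0>
bell = CNOT @ reg                  # amplitudes 1/sqrt(2) on |00>, |11>
```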
## 4. On Quantum Artificial Intelligence
One of the first contributions that QC offers to AI is the production of truly random numbers. True randomness has been reported to cause measurable performance improvements in genetic programming and other automatic program induction methods [Rylander et al. (2001)]. Monte-Carlo, simulated annealing, random walks and other analogous search methods are expected to benefit from that as well. A truly random number of N bits can be produced by applying the Hadamard transformation to an N-qubit quantum register, thus producing the superposition of all basis states

$$\frac{1}{\sqrt{2^{N}}}\sum_{k=0}^{2^{N}-1}|k\rangle$$

Then, just by measuring this state, we get a truly random number in the range [0, 2^N − 1]. Since the process can be repeated n times to produce an nN-bit random number, it is generally possible to produce N-bit random numbers using an M-qubit quantum register where M << N. Thus, in principle even just one qubit (M = 1) is adequate. However, random search methods in QC indicate a completely different approach than in classical computation. The quantum analog of a classical random walk on a graph, i.e. the quantum random walk, even in one dimension is a much more powerful computational model [Ben-Avraham et al. (2004)]. While the classical random walk is essentially a Markov process, in a quantum random walk propagation between node pairs is exponentially faster, thus enabling the solution of NP-complete problems as well [Childs et al. (2002)]. Moreover, as mentioned by [Shor (2004)], combinations of quantum random walks with Grover's algorithm have managed to confront efficiently some real-world problems like database element comparison and dense graph search [Childs et al. (2003)].
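The measurement step above can be mimicked classically: measuring the uniform superposition of an M-qubit register is equivalent to drawing a uniform integer from [0, 2^M − 1], and repeating the measurement builds longer random numbers. A sketch (Python; a classical simulation only, so the randomness here is of course only pseudo-random):

```python
import random

def measure_register(m_qubits, rng=random.random):
    """Simulate measuring the uniform superposition of an M-qubit register:
    the outcome is uniform on [0, 2**M - 1]."""
    return int(rng() * (2 ** m_qubits))

def random_number(n_bits, m_qubits=1):
    """Assemble an n-bit random number from repeated measurements of a small
    M-qubit register (M << n), as described in the text."""
    value = 0
    for _ in range(0, n_bits, m_qubits):
        value = (value << m_qubits) | measure_register(m_qubits)
    return value & ((1 << n_bits) - 1)

x = random_number(128, m_qubits=1)   # 128 random bits from a "1-qubit register"
print(0 <= x < 2 ** 128)             # True
```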
Grover's algorithm [Grover (1997)] and its variations are ideal for efficient content-addressable search and information retrieval from large collections of raw data. The principle of *probability amplitude amplification* that guides these processes can be relaxed for approximate pattern matching as well, thus facilitating applications like face, fingerprint, and voice recognition, corpus search, and data mining. A quantum register containing a set of data in superposition can be seen as the quantum analog of a Hopfield neural network used as an associative memory [Trugenberger (2002)], only with much greater capacity to store patterns: while the capacity of an n-neuron Hopfield network approximates 0.14n patterns, a quantum register of n qubits can store 2^n binary patterns.
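The capacity gap quoted above is easy to make concrete with the 0.14n rule of thumb (a back-of-the-envelope Python check):

```python
# Patterns storable: classical n-neuron Hopfield net (~0.14n) vs. n-qubit register (2^n)
for n in (10, 20, 30):
    print(n, round(0.14 * n, 1), 2 ** n)
```

Already at n = 30, the register side of the comparison exceeds a billion patterns while the Hopfield side stays in single digits.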
In principle, many of the problems that AI attempts to confront are too heavy for classical algorithmic approaches, i.e. NP-hard problems such as scheduling, search, etc. Many AI techniques have been developed to cope with the NP-complete nature of these problems. Since QC can reduce time complexity to the polynomial range, it eventually provides a more efficient way to address these problems. Using QC, all the states of the search space can first be superimposed on a quantum register and then a search can be performed using a variant of Grover's algorithm. It is evident that many problems in search, planning, scheduling, game playing, and other analogous fields can utilize the parallel processing of a quantum register's contents and reduce their processing times by several orders of magnitude. For more complex problems, even quantum constraint satisfaction heuristics can be applied, as described in [Aoun & Tarifi (2004)]. But the main challenge in these cases is to find a way to encode the problem space within the quantum register boundaries. Fortunately, for problems where a previous approach based on genetic algorithms is available, there is a significant basis for QC as well: the representation of the gene string can be transferred to the quantum implementation almost verbatim and the whole gene pool can be superimposed on a single quantum register.

Speech and language processing also have a great deal to gain from QC. Apart from the aforementioned approximate pattern matching on the input signal and the obvious rapid quantum search in huge lexical databases, the representation problem can be solved quite elegantly in a quantum register and more efficiently than ever. For instance, a common drawback of a typical syntactic parser is the fact that it produces too many parse trees, some slightly different and some quite different ones.
Their representation as superimposed states in a quantum register not only solves the issue of their storage, but simplifies their further processing as well. An interesting model for the mapping of language expressions onto microscopic physical states has been proposed by [Benioff (2002)].

Game theory and decision-making have also been addressed by QC. A new field of quantum game theory has emerged [Piotrowski & Sladkowski (2004a)] with promising applications at least to playing market games [Piotrowski & Sladkowski (2004b)]. The entanglement effect has been exploited to improve behavior in cooperation [Miakisz et al. (2006)] and coordination games [Huberman & Hogg (2003)], and to simulate economic systems [Chen et al. (2002)] and human behavior [Mendes (2005)]. It is interesting that in some cases the quantum solution can be derived as an extension of the classical one [Huberman & Hogg (2003)], thus enabling the optional use of quantum entanglement as an extra resource. Even more interesting is the fact that the quantum solution to these games models human behavior much more accurately than the classical one. In fact, it seems that human player strategies in these types of games deviate significantly from the theoretical Nash equilibrium that classical game theory expects. This discrepancy (i.e. the seemingly irrational human behavior) is attributed to an emotional response of the human player. Quantum solutions such as the one for the *Quantum Ultimatum Game* [Mendes (2005)] seem to model these discrepancies efficiently and propose a better model for human behavior in such situations.
The last comment on modeling human behavior seems to lead to the philosophical question of whether the human brain performs some kind of quantum computation [Miakisz et al. (2006)] or not [Penrose (1997)], a question that has been used to argue against hard AI in the past. Although there are not sufficient data to answer such a question, one could argue that with QC the idea of hard AI seems a little closer to implementation, since there is some evidence that QC is stronger than classical Turing computation. Indeed, [Calude & Pavlov (2002)] have proved that QC is theoretically capable of computing incomputable functions. Despite Feynman's argument that QC is not able to exceed the so-called Turing barrier and solve an undecidable problem, Calude & Pavlov have used QC to solve an equivalent of the famous Halting Problem, the most well-known undecidable problem in computer science. Although it remains to be seen whether this approach is practically feasible, this is a great theoretical breakthrough that promises to change the computational capabilities of our machines. Eventually, QC's effectiveness may make us reconsider what an AI problem is. For example, chess playing is traditionally considered an AI problem. That is so due to the high computational cost of the brute-force algorithmic approach in the classical computational 'world'. But in the quantum domain the same problem is not so hard. Given the proper hardware, a quantum algorithmic process would be able to solve it in acceptable time. So does chess playing remain an AI problem? Only time will tell whether QC will force us to redefine the domain of AI or whether it will eventually be considered yet another weapon in the AI arsenal, like neural networks or genetic algorithms.
## 5. Conclusion
This paper attempted to present the basics of QC to readers already familiar with AI, explaining QC's potential application to traditional AI problems and methods through a very narrow choice of recent papers and research directions. For a lengthier overview of QC applications to Computational Intelligence see [Perkowski (2005)].

The ideas presented here were inevitably vague and outlined, since quantum computers are still not available for implementation purposes. The hardware for quantum registers is still in its infancy due to the obstacle of decoherence, which is very difficult to overcome. Thus most of the aforementioned methods have been tested either on trivial problems (requiring 3 to 5 qubits) or on QC simulators [Karafyllidis (2005b)]. But maybe this situation is going to change sooner than expected. Very recently the Canadian company D-Wave Systems announced a prototype 16-qubit adiabatic quantum computer that (among other things) solves Sudoku puzzles [Minkel (2007)]. The company has also promised to provide a commercial product very soon. Meanwhile, a programming language for quantum programming has already been proposed [Betteli et al. (2005)], so by the time quantum computers become available in the market, probably a great deal of software tools will be ready as well and the road to Quantum Artificial Intelligence will be open to explore.
## References
Aoun, B., Tarifi, M. (2004), *Quantum Artificial Intelligence*, Quantum Information Processing, ArXiv:quant-ph/0401124.
Ben-Avraham, D., Bollt, E.M., Tamon, C. (2004), *One-Dimensional ContinuousTime Random Walks*, Quantum Information Processing, vol.3, pp.295-308.
Benioff, P. (2002), *Language is Physical*, Quantum Information Processing, vol.1, pp.495-509.
Betteli, S., Calarco, T., Serafini, L. (2005), *Toward an Architecture for Quantum Programming*, ArXiv:cs.PL/0103009.
Calude, C.S., Pavlov, B. (2002), *Coins, Quantum Measurements, and Turing's* Barrier, Quantum Information Processing, vol.1, pp.107-127.
Chen, K.-Y., Hogg, T., Beausoleil, R. (2002), A Quantum Treatment of Public Goods Economics, Quantum Information Processing, vol.1, pp.449-469.
Childs, A.M., Cleve, R.E., Deotto, E., Farhi, E., Gutmann, S., Spielman, D.A. (2003), *Exponential Algorithmic Speedup by Quantum Walk*, in Proc. 35th ACM Symposium on Theory of Computing, pp.59-68.
Childs, A.M., Farhi, E., Gutmann, S. (2002), *An Example of the Difference Between* Quantum and Classical Random Walks, Quantum Inform. Proc., vol.1, pp.35-43.
Coppersmith, D. (1994), *An Approximate Fourier Transform Useful in Quantum* Factoring, IBM Research Report RC 19642.
Grover, L.K. (1997), *Quantum Mechanics Helps in Searching for a Needle in a Haystack*, Phys. Rev. Lett., vol.78, pp.325-378.
Gruska, J. (1999), *Quantum Computing*, McGraw-Hill, London.
Huberman, B.A., Hogg, T. (2003), *Quantum Solution of Coordination Problems*, Quantum Information Processing, vol.2, pp.421-432.
Karafyllidis, I.G. (2005a), *Quantum Computers - Basic Principles*, Klidarithmos, Athens (in Greek).
Karafyllidis, I.G. (2005b), Quantum Computer Simulator Based on the Circuit Model of Quantum Computation, IEEE Trans.Circ.& Syst.-I, vol.52, no.8, pp.1590-1596.
Mendes, R.V. (2005), *The Quantum Ultimatum Game*, Quantum Information Processing, vol.4, pp.1-12.
Miakisz, K., Piotrowski, E.W., Sladkowski, J. (2006), *Quantization of Games:*
Towards Quantum Artificial Intelligence, Theor.Comp.Science, vol.358, pp.15-22.
Minkel, J. R. (2007), *First "Commercial" Quantum Computer Solves Sudoku Puzzles*, Scientific American News, February 13, 2007, http://www.sciam.com/article.cfm?articleID=BD4EFAA8-E7F2-99DF-372B272D3E271363
Nielsen, M.A., Chuang, I.L. (2000), *Quantum Computation and Quantum Information*, Cambridge University Press, Cambridge.
Penrose, R. (ed.) (1997), *The Large, the Small and the Human Mind*, Cambridge University Press, Cambridge.
Perkowski, M.A. (2005), Multiple-Valued Quantum Circuits and Research Challenges for Logic Design and Computational Intelligence Communities, IEEE Connections, vol.3, no.4, pp.6-12.
Piotrowski, E.W., Sladkowski, J. (2004a), *The Next Stage: Quantum Game Theory*, in Mathematical Physics Frontiers, Nova Science Publishers Inc.
Piotrowski, E.W., Sladkowski, J. (2004b), *Quantum Computer: An Appliance for* Playing Market Games, Int. J. Quant. Information, vol.2, pp.495.
Rylander, B., Soule, T., Foster, J., Alves-Foss, J. (2001), Quantum Evolutionary Programming, in Spector, L. et al. (eds.), Proc. of the Genetic and Evolutionary Computation Conference (GECCO-2001), San Francisco, USA, pp.1005-1011.
Shor, P.W. (2004), *Progress in Quantum Algorithms*, Quantum Information Processing, vol.3, pp.5-13.
Trugenberger, C.A. (2002), *Quantum Pattern Recognition*, Quantum Information Processing, vol.1, pp.471-493.
# Truecluster Matching
Jens Oehlschlägel, [email protected]

## Abstract

Cluster matching by permuting cluster labels is important in many clustering contexts such as cluster validation and cluster ensemble techniques. The classic approach is to minimize the Euclidean distance between two cluster solutions, which induces inappropriate stability in certain settings. Therefore, we present the *truematch* algorithm that introduces two improvements, best explained in the crisp case. First, instead of maximizing the trace of the cluster crosstable, we propose to maximize a χ²-transformation of this crosstable. Thus, the trace will not be dominated by the cells with the largest counts but by the cells with the most non-random observations, taking the marginals into account. Second, we suggest a probabilistic component in order to break ties and to make the matching algorithm truly random on random data. The truematch algorithm is designed as a building block of the truecluster framework and scales in polynomial time. First simulation results confirm that the truematch algorithm gives more consistent truecluster results for unequal cluster sizes. Free R software is available.

Keywords: Hungarian method, truematch, truecluster, MMCC, CIC, Hornik (2005)
## 1. Introduction
Applying a cluster algorithm to a dataset results in—fuzzy or crisp—assignments of cases to anonymous clusters. In order to interpret these clusters, we often wish to compare these clusters to other classifications, so some heuristic is needed to match one classification to another. With the advent of resampling and ensemble methods in clustering (Gordon and Vichi, 2001; Dimitriadou et al., 2002; Strehl and Ghosh, 2002), the task of matching cluster solutions has become even more important: we need reliable and scalable matching algorithms that do the task fully automated.
Consider, for example, the use of bootstrapping or cross-validation for cluster validation as suggested by many authors (Moreau and Jain, 1987; Jain and Moreau, 1988; Tibshirani et al., 2001; Roth et al., 2002; Ben-Hur et al., 2002; Dudoit and Fridlyand, 2002): many cluster solutions are created and agreement between them is evaluated. Some agreement indices do not need explicit cluster matching (Rand, 1971; Hubert and Arabie, 1985), but others can only be applied *after* cluster solutions have been matched, for example, Cohen's kappa (1960).
Recently, authors have suggested transferring the idea of bagging (Breiman, 1996) to clustering. Some approaches aggregate cluster centers (Leisch, 1999; Dolnicar and Leisch, 2000; Bakker and Heskes, 2001) or aggregate consensus between pairs of observations (Monti et al., 2003; Dudoit and Fridlyand, 2003, *BagClust2* algorithm). Other approaches aggregate cluster assignments and, therefore, require cluster matching, for example, the crisp

arXiv:0705.4302v1 [cs.AI] 29 May 2007
## 6. Discussion
We have shown that trace maximization matching fails to behave sufficiently neutrally when matching clusterings. The problem arises generally but is especially important in contexts where random correction is not applicable. As an alternative, we have presented the truematch algorithm and heuristic, both of which probabilistically generate neutral expected matching tables and scale in polynomial time. Our simulations have confirmed that truematch avoids unjustified (expected) matchings induced by unequal cluster sizes. For the simulations done here, the truematch algorithm and the truematch heuristic behave identically. Since the truematch heuristic does not guarantee maximizing the χ²-criterion, we expect the truematch algorithm to be superior. However, there is a subtle difference: while the matching of the truematch algorithm depends solely on sk,l, the truematch heuristic uses sk,l and nk,l to select the row/column matches. Therefore, a final decision about an optimal matching algorithm needs more investigation.
Truematch is central to the *MMCC* algorithm, which creates the basis for the CIC evaluation in the truecluster framework and, thus, contributes to solving the decade-old problem of choosing the optimal number of clusters. Beyond that, cluster bagging in general could benefit from using truematch: the resulting N × K matrix is rather fuzzified than degenerated for unjustified cluster splits. This allows for better automated processing of such results. It is an open question whether the truematch algorithm also has advantages for consensus clustering, or whether different usages of cluster ensembles require different matching algorithms.
## Acknowledgments
We would like to thank Dr. Stefan Pilz for reviewing this paper and giving valuable hints for improvement.
## Appendix A.
In this appendix, we give details concerning the simulations in Section 5: assume a vector x of length 100 with 'true' sample group memberships, where p denotes the fraction of 1 and (1 − p) the fraction of 0. Let p1 denote the matrix of joint probabilities for a case's true and clustered classification when the cluster algorithm perfectly separates 0 from 1 (at κ = 1).
$$\mathbf{p}_{1}={\left|\begin{array}{l l}{(1-p)}&{0}\\ {0}&{p}\end{array}\right|}$$
Let p0 denote the matrix of joint probabilities for a case's true and clustered classification when the cluster algorithm makes a random guess when separating 0 from 1 (at κ = 0).
$$\mathbf{p}_{0}={\left|\begin{array}{l l}{(1-p)^{2}}&{(1-p)\cdot p}\\ {(1-p)\cdot p}&{p^{2}}\end{array}\right|}$$
Then pκ denotes the matrix of joint probabilities for a case's true and clustered classification when the cluster algorithm has reliability κ.
$$\mathbf{p}_{\kappa}=\kappa\cdot\mathbf{p}_{1}+(1-\kappa)\cdot\mathbf{p}_{0}$$
The two conditional probabilities pid that the clustering algorithm identifies the true class, given the true class, are
$$\mathbf{p}_{i d}=\kappa+(1-\kappa)\cdot{\left|\begin{array}{l}{(1-p)}\\ {p}\end{array}\right|}$$
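The conditional probabilities pid follow from pκ by dividing its diagonal by the true-class marginals (1 − p) and p. A quick numerical check (Python; the values of p and κ are arbitrary illustrative choices):

```python
def p_id(p, kappa):
    """P(clustered class = true class | true class), for true classes 0 and 1,
    derived from the diagonal of p_kappa = kappa*p1 + (1-kappa)*p0."""
    joint_0 = kappa * (1 - p) + (1 - kappa) * (1 - p) ** 2  # P(true=0, clustered=0)
    joint_1 = kappa * p + (1 - kappa) * p ** 2              # P(true=1, clustered=1)
    return joint_0 / (1 - p), joint_1 / p                   # divide by marginals

p, kappa = 0.3, 0.4
pid0, pid1 = p_id(p, kappa)
# agrees with the closed form: kappa + (1-kappa)*(1-p) and kappa + (1-kappa)*p
assert abs(pid0 - (kappa + (1 - kappa) * (1 - p))) < 1e-12
assert abs(pid1 - (kappa + (1 - kappa) * p)) < 1e-12
print(round(pid0, 2), round(pid1, 2))
```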
For each value of p ∈ {1/100, 2/100, ..., 99/100} and each value of κ ∈ {0.00, 0.01, 0.02, ..., 1.00}, we simulate aggregation of 1000 bootstrap samples from x; for each bootstrap sample, our fictitious cluster algorithm assigns cases with probability pid to the true class and with probability 1 − pid to the other class. The resulting cluster memberships c∗ are matched versus the (current) estimated cluster memberships ĉ of the cases in the bootstrap sample. If c∗ or ĉ does not contain two classes, the bootstrap sample is dropped and replaced by another one. Differently from the *MMCC* algorithm in Section 4, we do not predict cluster memberships of the out-of-bag cases. We use c∗ directly instead of c0; consequently, the rows of C are not guaranteed to have aggregated an equal number of votes. For all combinations of p and κ (the resulting 99 × 101 truecluster models P̂), we calculate *information*, *uncertainty*, and CIC (Oehlschlägel, 2007b). These values are visualized using color coding, and contour lines are added based on a loess smooth. To create the *fixed* version, the complete procedure is repeated, additionally enforcing a fixed fraction p by moving randomly selected observations in c∗ from the too big group to the too small one, analogous to a cluster algorithm that forces certain cluster sizes. The R code doing the simulation is available in truematch.r in package truecluster (Oehlschlägel, 2007a).
## References
H. Akaike. Information theory and an extension of the maximum likelihood principle. In B.N. Petrov and F. Csáki, editors, *Second International Symposium on Information Theory*, pages 267–281, Budapest, 1973. Akadémiai Kiadó. Reprinted in *Breakthroughs in Statistics*, eds Kotz, S. & Johnson, N.L. (1992), volume I, pp. 599–624. New York: Springer.
H. Akaike. A new look at statistical model identification. IEEE Transactions on Automatic Control, 19:716–723, 1974.
Bart Bakker and Tom Heskes. Model clustering and resampling, 2001. URL citeseer.ist.psu.edu/bakker00model.html.
A. Ben-Hur, A. Elisseeff, and I. Guyon. A stability based method for discovering structure in clustered data. *Pac Symp Biocomputing*, 7:6–17, 2002.
François Bourgeois and Jean-Claude Lassalle. An extension of the Munkres algorithm for the assignment problem to rectangular matrices. *Communications of the ACM*, 14(12):802–804, 1971.
L. Breiman. Bagging predictors. *Machine Learning*, 24(2):123–140, 1996.
Jacob Cohen. A coefficient of agreement for nominal scales. *Educational and Psychological* Measurement, 20:37–46, 1960.
E. Dimitriadou, A. Weingessel, and K. Hornik. A combination scheme for fuzzy clustering.
Journal of Pattern Recognition and Artificial Intelligence, 16:901–912, 2002.
S. Dolnicar and F. Leisch. Behavioural market segmentation using the bagged clustering approach based on binary guest survey data: Exploring and visualizing unobserved heterogeneity. *Tourism Analysis*, 5(2-4):163–170, 2000.
S. Dudoit and J. Fridlyand. A prediction-based resampling method for estimating the number of clusters in a dataset. *Genome Biology*, 3(7):research0036.1–0036.21, 2002.
S. Dudoit and J. Fridlyand. Bagging to improve the accuracy of a clustering procedure.
Bioinformatics, 19(9):1090–1099, 2003.
A. D. Gordon and M. Vichi. Fuzzy partition models for fitting a set of partitions. *Psychometrika*, 66:229–248, 2001.
Kurt Hornik. A CLUE for CLUster Ensembles. *Journal of Statistical Software*, 14(12), September 2005. URL www.jstatsoft.org/v14/i12/.
Kurt Hornik and Walter Boehm. *clue: Cluster ensembles*, 2007. R package version 0.3-11.
Lawrence Hubert and Phipps Arabie. Comparing partitions. *Journal of Classification*, 2:193–218, 1985.
A. K. Jain and J. Moreau. Bootstrap techniques in cluster analysis. *Pattern Recognition*,
20:547–568, 1988.
H. W. Kuhn. The Hungarian method for the assignment problem. *Naval Research Logistics Quarterly*, 2:225–231, 1955.
Friedrich Leisch. Bagged clustering. Technical Report Working Paper 51, SFB Adaptive Information Systems and Modelling in Economics and Management Science, Vienna University of Economics and Business Administration in cooperation with the University of Vienna, Vienna University of Technology., 1999.
Stefano Monti, Pablo Tamayo, Jill Mesirov, and Todd Golub. Consensus clustering: A
resampling-based method for class discovery and visualization of gene expression microarray data. *Machine Learning*, 52:91–118, 2003.
J. V. Moreau and A. K. Jain. The bootstrap approach to clustering. In P.A. Devijver and J. Kittler, editors, *Pattern Recognition: Theory and Applications*, volume 30 of *NATO* ASI Series F, pages 63–71. Springer, 1987.
J. Munkres. Algorithms for the assignment and transportation problems. *J. Siam*, 5:32–38, 1957.
Jens Oehlschlägel. *Truecluster: an algorithmic framework for robust and scalable clustering*, 2007a. URL www.truecluster.com. R package version 0.3 (version 1.0 and higher will also be hosted at CRAN.R-project.org).
Jens Oehlschlägel. Truecluster: robust scalable clustering with model selection. Submitted to JMLR, 2007b.
W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66:846–850, 1971.
Volker Roth, Tilman Lange, Mikio Braun, and Joachim M. Buhmann. A resampling approach to cluster validation. In Wolfgang Härdle and Bernd Rönz, editors, *Proceedings in Computational Statistics: 15th Symposium Held in Berlin (COMPSTAT2002)*, pages 123–128, Heidelberg, 2002. Physica-Verlag.
G. Schwarz. Estimating the dimension of a model. *Annals of Statistics*, 6:461–464, 1978.
A. Strehl and J. Ghosh. Cluster ensembles - a knowledge reuse framework for combining multiple partitions. *Journal of Machine Learning Research*, 3:583–617, 2002.
Robert Tibshirani, Guenther Walther, David Botstein, and Patrick Brown. Cluster validation by prediction strength. Technical report, Stanford University, 2001.
*BagClust1* algorithm of Dudoit and Fridlyand (2003), the combination scheme for fuzzy clustering of Dimitriadou et al. (2002) or *truecluster* (Oehlschlägel, 2007b).

Truecluster is an algorithmic framework for robust scalable clustering with model selection that combines the idea of bagging with information-theoretical model selection along the lines of AIC (Akaike, 1973, 1974) and BIC (Schwarz, 1978). In order to calculate its cluster information criterion (CIC), truecluster requires a reliable cluster matching algorithm. The truematch algorithm presented here was designed to play that role. The organization of the paper is as follows: in Section 2, we show an undesirable feature of the standard approach to cluster matching. In Section 3, we present the truematch algorithm. In Section 4, we demonstrate the benefits of the truematch algorithm within the truecluster framework. In Section 5, we use simulation to compare truematch against standard trace maximization matching, and in Section 6, we discuss our results.
## 2. What's Wrong with Trace Maximization of the Matching Table
The standard approach to cluster matching is searching for the permutation of cluster labels that minimizes the Euclidean distance to a reference cluster solution. This criterion has been suggested for fuzzy consensus clustering (Gordon and Vichi, 2001; Dimitriadou et al., 2002), as well as for crisp consensus clustering (Strehl and Ghosh, 2002) or crisp cluster bagging (Dudoit and Fridlyand, 2003, *BagClust1*). In the crisp case, this criterion is simply trace maximization of matching table counts: cross-tabulating class memberships of two solutions and then permuting rows/columns of the matching table until the trace becomes maximal. To our knowledge, cluster publications and software differ in the algorithms used to obtain trace maximization, but do not question the Euclidean criterion per se.

For example, Dimitriadou et al. (2002) suggested a recursive heuristic to approximate trace maximization. It is known that trying all permutations has time complexity O(K!), where K denotes the number of clusters. The *Hungarian method* improves on this and achieves polynomial time complexity O(K³). Kuhn (1955) published a pencil-and-paper version, which was followed by J.R. Munkres' executable version (Munkres, 1957) and extended to non-square matrices by Bourgeois and Lassalle (1971). For a list of further algorithmic approaches to this so-called *linear sum assignment problem* or *weighted bipartite matching*, see Hornik (2005).
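For small K, the trace-maximizing label permutation can also be found by brute force, which makes the O(K!) cost concrete (an illustrative Python sketch; in practice one would use a Hungarian-method solver):

```python
from itertools import permutations

def tracemax_match(table):
    """Brute-force trace maximization over all K! column permutations of a
    K x K matching table (illustrative; feasible only for small K)."""
    K = len(table)
    best_perm, best_trace = None, float("-inf")
    for perm in permutations(range(K)):
        trace = sum(table[k][perm[k]] for k in range(K))
        if trace > best_trace:
            best_perm, best_trace = perm, trace
    return best_perm, best_trace

# toy matching table: cluster labels of the second solution are permuted
table = [[1, 40, 2],
         [35, 3, 1],
         [0, 2, 50]]
perm, trace = tracemax_match(table)
print(perm, trace)   # (1, 0, 2) 125: swap the first two labels to align clusters
```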
However, scalability is not the only quality aspect of a matching algorithm. An important statistical feature of a matching algorithm is the following: if we match two random partitions, the matching algorithm should not systematically align the two partitions. We now show that classic trace maximization does not generally possess this feature.

Assume a cluster algorithm that claims to identify an outlier in a sample of size N = 100 but which actually declares one case as 'outlying' at random. Now assume a procedure that draws two bootstrap samples and clusters them into 99% 'normal' cases and one 'outlier'. In 1% of such procedures, the outlier picked in the second sample will randomly match the outlier picked in the first sample. In such cases, trace maximization matching will lead to a matching table as shown in Table 1. In the other 99%, there will be no match, which, by trace maximization, gives a matching table like that shown in Table 2. The resulting expected matching table is shown in Table 3.
<image>
Table 1: Random matching (1%)
<image>
Table 2: Typical trace maximization matching (99%)
<image>

Table 3: Expected trace maximization matching
We can see that under random clustering, we expect 98.02% on the main diagonal, which at first glance looks like a strong (non-random) match. Only applying standard random correction (Cohen, 1960) confirms this to be a pure random match (Cohen's kappa = 0). However, in a clustering context we have two objections against relying on such random corrections: as far as evaluation of cluster agreement is concerned, random corrections, such as Cohen's kappa or Hubert and Arabie's corrected Rand index, do not work properly, because spatial neighbors have an above-random chance of being clustered together in the absence of any cluster structure in the data. Therefore, agreement indices are too optimistic even with random correction. More importantly, in other contexts such as bagging there is no random correction available at all. If cluster sizes are (very) different, bagging cluster results will suffer because in standard trace maximization big randomly matched cells win over small cells representing non-random matches. Therefore, we are looking for a matching algorithm that does not systematically generate a strong diagonal under random conditions.
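The outlier example can be replayed numerically: mixing the 1% random-match table with the 99% mismatch table yields a 98.02% diagonal whose Cohen's kappa is nonetheless exactly zero (a Python check of the figures quoted above; the helper name is ours):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table of counts."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n    # observed agreement
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    pe = sum(r * c for r, c in zip(row_sums, col_sums)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

t_match    = [[99, 0], [0, 1]]   # 1% of procedures: outliers randomly coincide
t_mismatch = [[98, 1], [1, 0]]   # 99%: no match, arranged by trace maximization
expected = [[0.01 * a + 0.99 * b for a, b in zip(ra, rb)]
            for ra, rb in zip(t_match, t_mismatch)]
diagonal = expected[0][0] + expected[1][1]
print(round(diagonal, 2))                    # 98.02 of 100 cases on the diagonal
print(abs(cohens_kappa(expected)) < 1e-9)    # True: kappa is zero
```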
## 3. Truematch Algorithm
The problems with standard trace maximization described in the previous section result from focusing on raw counts in a situation with unequal marginal (cluster) probabilities. From other contexts, we know that this is not a good idea. Take the χ²-test for statistical independence of two categorical variables. It is not based on raw counts. Instead, the matching table of raw counts is transformed to another unit taking the marginals into account. Let N denote the total number of observations, nk the number of observations in one row, nl the number of observations in one column and, finally, let nk,l denote the number of observations in one cell of the K × K cluster crosstable. The first step in calculating χ² is to calculate for each cell the number of expected counts n̂k,l under the assumption of independence:

$$\hat{n}_{k,l}=p_{k}\cdot p_{l}\cdot N=\frac{n_{k}\cdot n_{l}}{N}\qquad(1)$$

Then, we transform the matrix of raw counts into a matrix of normalized squared deviations dk,l from the null model:

$$d_{k,l}=\frac{(n_{k,l}-\hat{n}_{k,l})^{2}}{\hat{n}_{k,l}}\qquad(2)$$

The χ²-value is defined as the sum of Equation 2 over all cells. If we restore the sign in Equation 2, we get:

$$s_{k,l}=\operatorname{sign}(n_{k,l}-\hat{n}_{k,l})\cdot d_{k,l}\qquad(3)$$
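Equations 1–3 translate directly into code. A small helper computing the signed normalized squared deviations from a table of raw counts (a Python sketch; the function name is ours):

```python
def signed_residuals(table):
    """Signed normalized squared deviations s[k][l] (Equation 3) for a
    square table of raw counts n[k][l]."""
    N = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]            # n_k
    col_sums = [sum(col) for col in zip(*table)]      # n_l
    s = []
    for k, row in enumerate(table):
        s_row = []
        for l, n_kl in enumerate(row):
            expected = row_sums[k] * col_sums[l] / N  # Equation 1
            d = (n_kl - expected) ** 2 / expected     # Equation 2
            s_row.append(d if n_kl >= expected else -d)  # Equation 3
        s.append(s_row)
    return s

print(signed_residuals([[50, 0], [0, 50]]))   # [[25.0, -25.0], [-25.0, 25.0]]
```

For this balanced diagonal table, both on-diagonal cells exceed their expected count of 25 by the same margin, so their residuals are large and positive while the off-diagonal residuals are equally negative.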
In order to cope with unequal cluster sizes, we suggest basing cluster matching on maximizing the trace of sk,l rather than maximizing the trace of nk,l. And in order to avoid any systematic effect not based on the data, we add a probabilistic component to the matching algorithm. Consequently, we define the *truematch algorithm* as:

1. Randomly permute rows and columns of the matching table
2. Transform the matching table counts nk,l to signed normalized squared deviations sk,l using Equation 3
3. Apply a trace maximization algorithm like the Hungarian method to maximize the trace (in fact, the Hungarian method minimizes −sk,l)
4. Order the resulting row/column pairs descending by sk,l, breaking ties at random

If no trace maximization algorithm like the Hungarian method is available, the matching can easily be done using the *truematch heuristic*, similar to the heuristic suggested by Dimitriadou et al. (2002):

1. Calculate signed normalized squared deviations sk,l for all *remaining* cells of the matching table
2. Order all cells descending by sk,l and by nk,l (breaking ties at random) and denote the first cell as the *target cell*
3. Match the row of the target cell to the column of the target cell
4. Remove the row and the column of the target cell from the matching table
5. If both the number of remaining rows and columns is at least two, repeat from step 1
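The five steps of the heuristic above can be sketched compactly (Python; `truematch_heuristic` is our illustrative name, and the random tie-breaking is approximated by shuffling the candidate cells before taking the maximum):

```python
import random

def truematch_heuristic(table, seed=None):
    """Greedy truematch heuristic: repeatedly match the row/column pair of the
    cell with the largest signed residual s (ties broken by count, then at
    random), then remove that row and column from the table."""
    rng = random.Random(seed)
    rows, cols = list(range(len(table))), list(range(len(table)))
    matches = {}
    while len(rows) > 1 and len(cols) > 1:
        N = sum(table[k][l] for k in rows for l in cols)
        rsum = {k: sum(table[k][l] for l in cols) for k in rows}
        csum = {l: sum(table[k][l] for k in rows) for l in cols}

        def score(cell):                      # ordering key (s_kl, n_kl)
            k, l = cell
            e = rsum[k] * csum[l] / N
            d = (table[k][l] - e) ** 2 / e if e else 0.0
            return (d if table[k][l] >= e else -d, table[k][l])

        cells = [(k, l) for k in rows for l in cols]
        rng.shuffle(cells)                    # random tie-breaking
        k, l = max(cells, key=score)          # the target cell
        matches[k] = l
        rows.remove(k)
        cols.remove(l)
    matches[rows[0]] = cols[0]                # match the last remaining pair
    return matches

print(truematch_heuristic([[50, 0], [0, 50]], seed=1))
```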
It is obvious that the truematch algorithm has runtime complexity O(K³) like the Hungarian method. The truematch heuristic also nicely translates into polynomial runtime. The number of residuals calculated to reduce the matching table from k to k − 1 rows/columns is k², thus the total number of residuals calculated is

$$K^{2}+(K-1)^{2}+(K-2)^{2}+\ldots+2^{2}=\frac{K\cdot(K+1)\cdot(2K+1)}{6}-1$$

and, therefore, the truematch heuristic has runtime complexity O(K³) and memory complexity O(K²) if the recursive nature of the algorithm is realized using a while-loop.
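The closed form above is easy to check numerically (a quick Python verification):

```python
# Verify K^2 + (K-1)^2 + ... + 2^2 == K*(K+1)*(2K+1)/6 - 1 for a range of K
for K in range(2, 51):
    direct = sum(i * i for i in range(2, K + 1))
    closed = K * (K + 1) * (2 * K + 1) // 6 - 1
    assert direct == closed, K
print("closed form holds for K = 2..50")
```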
R package truecluster (Oehlschlägel, 2007a) implements the truematch algorithm in matchindex(method = "truematch") and the truematch heuristic in matchindex(method = "tracemax") efficiently through underlying C code.
Applying the truematch algorithm and the truematch heuristic to the above example gives identical results: as in standard trace maximization matching, we find 1% random matches in matching table 1, but for the 99% non-random matching cases, truematch generates two versions of matching tables, see Table 4. Both versions have shifted the majority of counts off-diagonal. Due to the probabilistic component in the 2nd step, this leads to the expected matching (Table 5) that has a weak trace. Under truematch, only systematic, non-random matches will result in a strong diagonal.
<image>
<image>
Table 4: Typical truematch (49.5% + 49.5%)
<image>
We can quantify the benefit of truematch in this case by comparing expected values of certain agreement indices, cf. Table 6. The *rand* index (Rand, 1971) and its random corrected version *crand* (Hubert and Arabie, 1985) are invariant against row/column permutations and, thus, do not differ. There is also no difference for *kappa* (Cohen, 1960). However, the big difference is on the simple non-random-corrected *diagonal* fraction of observations: while the trace maximization misleadingly results in an expected diagonal close to 1, truematch reduces the expectation of this non-random-corrected index close to zero.
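The indices compared in Table 6 can be computed directly from a matching (contingency) table. The following sketch uses the standard formulas for the diagonal fraction, Cohen's kappa, the Rand index, and the adjusted Rand index (crand); it mirrors the indices discussed above, not the paper's own code.

```python
from math import comb

def agreement(tab):
    """Diagonal fraction, Cohen's kappa, Rand index, and adjusted Rand
    (crand) computed from a K x K matching (contingency) table using the
    standard formulas."""
    N = sum(map(sum, tab))
    K = len(tab)
    rows = [sum(r) for r in tab]
    cols = [sum(tab[i][j] for i in range(K)) for j in range(K)]
    diagonal = sum(tab[i][i] for i in range(K)) / N
    # Cohen's kappa: observed vs. chance agreement on the diagonal
    pe = sum(r * c for r, c in zip(rows, cols)) / N ** 2
    kappa = (diagonal - pe) / (1 - pe)
    # Rand and adjusted Rand from pair counts
    sij = sum(comb(x, 2) for row in tab for x in row)
    sa = sum(comb(a, 2) for a in rows)
    sb = sum(comb(b, 2) for b in cols)
    nn = comb(N, 2)
    rand = (nn + 2 * sij - sa - sb) / nn
    expected = sa * sb / nn
    crand = (sij - expected) / ((sa + sb) / 2 - expected)
    return diagonal, kappa, rand, crand
```

A perfectly off-diagonal table such as [[0, 50], [50, 0]] has diagonal fraction 0 but rand = crand = 1, which is exactly the invariance the text relies on: rand and crand do not change under row/column permutations, while the raw diagonal fraction does.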
In the next two sections, we will explore the benefit of truematch in a bagging context, where the main diagonal defines the matching but no random correction is available.
|                           | fraction | diagonal | kappa | rand  | crand |
|---------------------------|----------|----------|-------|-------|-------|
| Tracemax RandomMatch      | 1.0%     | 1.00     | 1.00  | 1.000 | 1.00  |
| Tracemax NonRandomMatch   | 99.0%    | 0.98     | -0.01 | 0.960 | -0.01 |
| Tracemax Expected         | 100.0%   | 0.98     | 0.00  | 0.961 | 0.00  |
| Truematch Expected        | 100.0%   | 0.03     | 0.01  | 0.961 | 0.00  |
| Truematch NonRandomMatch1 | 49.5%    | 0.02     | 0.00  | 0.960 | -0.01 |
| Truematch NonRandomMatch2 | 49.5%    | 0.02     | 0.00  | 0.960 | -0.01 |
| Truematch RandomMatch     | 1.0%     | 1.00     | 1.00  | 1.000 | 1.00  |
Table 6: Agreement statistics
## 4. The Role Of Truematch In Truecluster
The truecluster concept (Oehlschlägel, 2007b) suggests a *cluster information criterion* (CIC) that evaluates for each cluster model (for each number of clusters) an N × K matrix P̂ that aggregates votes over many resamples. P̂ is created by the *multiple match cluster count* (*MMCC*) algorithm using the truematch algorithm as follows:
1. Create an N × K matrix C and initialize each cell C_{i,k} with zero
2. Take a resample (with replacement) of size N and use a *base cluster algorithm* to fit the K-cluster model c∗ to the resample. Then, use a suitable *prediction method* to determine cluster membership of the out-of-resample cases to get a complete cluster vector c′ with N elements c′_i
3. For each row in C add one vote (add 1) to the column corresponding to the cluster membership in c′
4. Repeat step 2
5. Estimate cluster memberships ĉ by row-wise majority count in C (breaking ties at random), use the truematch algorithm or *heuristic* to align c′ with ĉ, and rename the clusters in c′ like the corresponding clusters in ĉ
6. For each row in C add one vote (add 1) to the column corresponding to the cluster membership in c′
7. Repeat from step 4 until some reasonable *convergence criterion* is reached
8. Divide each cell in C by its row sum to get a matrix of estimated cluster membership probabilities P̂
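The voting loop above can be sketched as follows. The base cluster algorithm, prediction method, and truematch alignment are passed in as placeholder functions (all assumptions; truematch itself is abstracted behind the `match` argument):

```python
import random

def mmcc(data, K, base_cluster, predict, match, n_resamples=50):
    """Multiple match cluster count (MMCC) sketch: aggregate cluster votes
    over resamples, aligning each resample's labels with the current
    majority labels via a matching function (e.g. the truematch algorithm)."""
    N = len(data)
    C = [[0] * K for _ in range(N)]                        # step 1: vote matrix
    for r in range(n_resamples):
        idx = [random.randrange(N) for _ in range(N)]      # resample with replacement
        model = base_cluster([data[i] for i in idx], K)    # fit K-cluster model
        c_prime = [predict(model, x) for x in data]        # complete cluster vector c'
        if r > 0:
            # step 5: majority memberships so far (ties broken at random)
            c_hat = [max(range(K), key=lambda k: (row[k], random.random()))
                     for row in C]
            perm = match(c_prime, c_hat, K)                # align c' with c-hat
            c_prime = [perm[c] for c in c_prime]
        for i in range(N):                                 # add one vote per row
            C[i][c_prime[i]] += 1
    # step 8: row-wise normalization gives membership probabilities P-hat
    return [[v / n_resamples for v in row] for row in C]
```

With a trivially stable base clusterer (a fixed threshold model) and an identity matching, well-separated data yields a crisp P̂; instability in the base clusterer is what "fuzzifies" P̂ under truematch.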
Table 7 summarizes simulations with truecluster versus consensus clustering (100 cases, 10,000 replications; for details see MMCCconcensus.r in R package truecluster (Oehlschlägel, 2007a); the table is sorted and grouped by the magnitude of CIC values). For random data without cluster structure, we would expect a very 'fuzzy' P̂ without clear preferences for any
cluster. Furthermore, we would expect CIC to increase for models with more true clusters and to decrease if models try to distinguish more clusters than justified by the data.

Table 7 shows that the MMCC algorithm using truematch delivers on this expectation: CIC increases for justified clusters and declines for unjustified ones, even if the unjustified clusters in the model are small. This works because once cluster decisions are unjustified, the truematch algorithm starts distributing its votes randomly across indistinguishable columns of C and, thus, 'fuzzifies' P̂. Compare that to consensus clustering (Dimitriadou et al., 2002) based on trace maximization, obtained with R package clue (Hornik and Boehm, 2007; Hornik, 2005). Models with unjustified small clusters get CIC values as high as models without the unjustified cluster. This is a consequence of the trace maximization matching adding inappropriate stability to the voting. Take, for example, the "random 99:1" model, which is as unjustified as the "random 50:50" model but receives a much higher CIC value. The stability induced by the trace maximization matching results in quite a crisp P̂₂: for each row, we find high probability for one cluster and low probability for the other. If we assign cases to clusters based on the maximum probability per row in P̂, all cases are assigned to the same cluster. Such a degenerated P̂ is not wrong but unfortunate. If we manually analyze P̂₂, we might detect that P̂₂ actually represents a one-cluster (K = 1) model. But if we are after automatic selection of models (number of clusters), it is misleading that P̂₂ does not represent K = 2 but K = 1. Analyzing a consensus cluster solution P̂_K for degeneracies does not really help: the estimated probabilities can be biased even before the matrix formally degenerates.
| MMCC | true K | model K | H | RMC | I | CIC |
|--------------------------|--------------------------------------------------------|-----------|-------|-------|-------|--------|
| random 50:49:1 | 1 | 3 | 1.578 | 0.020 | 0.044 | -1.534 |
| random 99:1 | 1 | 2 | 1.000 | 0.010 | 0.014 | -0.985 |
| random 50:50 | 1 | 2 | 0.995 | 0.010 | 0.059 | -0.936 |
| single 100 | 1 | 1 | 0.000 | 0.000 | 0.000 | 0.000 |
| justified 50 random 49:1 | 2 | 3 | 0.499 | 0.018 | 0.695 | 0.196 |
| justified 50:50 | 2 | 2 | 0.000 | 0.010 | 0.990 | 0.990 |
| consensus | true K | model K | H | RMC | I | CIC |
| random 50:49:1 | 1 | 3 | 1.066 | 0.011 | 0.049 | -1.016 |
| random 50:50 | 1 | 2 | 0.995 | 0.010 | 0.048 | -0.947 |
| random 99:1 | 1 | 2 | 0.081 | 0.001 | 0.001 | -0.080 |
| single 100 | 1 | 1 | 0.000 | 0.000 | 0.000 | 0.000 |
| justified 50 random 49:1 | 2 | 3 | 0.071 | 0.011 | 0.965 | 0.895 |
| justified 50:50 | 2 | 2 | 0.000 | 0.010 | 0.990 | 0.990 |
Legend:

- true K: true number of clusters
- model K: model number of clusters
- H: model uncertainty
- RMC: relative model complexity
- I: model information
- CIC: cluster information criterion (I − H)
- single 100: theoretical values for single group (no cluster)
- random 50:50: random clustering with 2 equal sized clusters
- random 99:1: random clustering with 2 unequal sized clusters
- random 50:49:1: random clustering with 3 unequal sized clusters
- justified 50:50: justified clustering with 2 equal sized clusters
- justified 50 random 49:1: 2 justified clusters, one randomly split unequal sized

Table 7: Consensus clustering vs. truecluster
## 5. Simulation Results
In order to systematically investigate the consequences of the different features of truematch versus simple trace maximization matching, we have carried out extensive simulations within the truecluster framework: we assume two clusters, vary their relative size p and the reliability κ of a fictitious clustering algorithm, and compare the *truecluster* results gained via trace maximization versus truematch. We ran two versions of the simulations: in the *non-fixed* version, p just determines sampling probabilities; in the *fixed* version, the fictitious clustering algorithm enforces the exact relative size p of the two clusters. Details of the simulation are given in Appendix A.
Figure 1 shows information, *uncertainty*, and its difference CIC for the non-fixed simulations. White areas denote simulation trials where the truecluster algorithm degenerated from a 2-cluster solution to a 1-cluster solution. The most notable difference is the big share of non-converged truecluster solutions using trace maximization, compared to the truematch algorithm. The estimated information, given reliability and skewness, is very similar and reasonable: information is highest for p = 0.5 and κ = 1.0 and is lower for both reducing κ and/or skewing p.
By contrast, for uncertainty and for the CIC, trace maximization and truematch differ dramatically. Using trace maximization, the uncertainty estimate not only depends on κ but is also artificially lower for higher skewness. As a consequence, cluster models with unequal cluster sizes get better CIC values than cluster models with equal cluster sizes. Using the truematch algorithm almost avoids this undesirable pattern: the estimated uncertainty depends almost only on κ, not on p. The estimated CIC shows a very reasonable pattern: at high κ, the CIC is highest for equal sized clusters, conforming with the entropy principle; at low κ, the CIC is low, however skewed p is. Only at very extreme p is the CIC biased downwards: too small clusters cannot be detected with too small a sample size. Extreme models are non-identifiable and the uncertainty estimate has high variance. Keep in mind that 'extreme' p corresponds to very few cases at a sample size of N = 100. The fixed simulations gave similar results (Figure 2).
In summary, trace maximization fails to estimate uncertainty independently of skewness and tends to overestimate CIC for unequal cluster sizes, or fails to converge. This restricts its usefulness for cluster evaluation and bagging. By contrast, the truematch algorithm works at almost any combination of reliability and skewness (with the exception of non-identifiable models, given the sample size).
# Modeling Computations In A Semantic Network
Marko A. Rodriguez and Johan Bollen Digital Library Research and Prototyping Team Los Alamos National Laboratory Los Alamos, New Mexico 87545
(Dated: August 24, 2021)
Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure at the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain.
keywords: I.2.12 Intelligent Web Services and Semantic Web - I.2.4.k Semantic networks - I.2.4 Knowledge Representation Formalisms and Methods - I.2 Artificial Intelligence - I Computing Methodologies
## I. Introduction
A semantic network is generally defined by a directed labeled graph [18]. Formally, a directed labeled graph can be represented in set theoretic notation as G = (V, E ⊆ V × V, λ : E → Σ), where V is the set of vertices, E is the set of edges, and λ is a function that maps the edges in E to the set of labels in Σ. Another perspective would organize each label type according to its own edge group; in such cases, G = (V, E = {E₀, E₁, . . . , Eₙ}), where E is the set of all labeled edge sets, Eᵢ ∈ E is a particular labeled edge set, and Eᵢ ⊆ V × V [3].
For the Semantic Web, the semantic network substrate is defined by the constraints of the Resource Description Framework (RDF) [7, 12]. RDF represents a semantic network as a set of triples where both vertices and edge labels are called resources. In RDF, a subject resource (s) points to an object resource (o) according to a predicate resource (p). Subject and predicate resources are identified by Uniform Resource Identifiers (URI) [21] and the object is either a literal or a URI. If U is the set of all URIs and L is the set of all literals, then the Semantic Web can be formally defined as G ⊆ (U × U × (U ∪ L)).
This representation is called a triple list, where a triple τ = ⟨s, p, o⟩. RDF is a framework (or model) for denoting a semantic network in terms of URIs and literals.
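As a sketch, a triple list can be modeled as a set of 3-tuples over hypothetical URIs and literals, and the constraint G ⊆ U × U × (U ∪ L) checked directly (all names below are illustrative, not from the article):

```python
# Hypothetical URIs and a literal; the Semantic Web G is modeled as a set
# of (subject, predicate, object) triples.
U = {"ex:marko", "ex:human", "ex:name", "rdf:type"}   # URIs (illustrative)
L = {"Marko Rodriguez"}                               # literals (illustrative)

G = {
    ("ex:marko", "rdf:type", "ex:human"),        # object is a URI
    ("ex:marko", "ex:name", "Marko Rodriguez"),  # object is a literal
}

# The formal constraint G ⊆ U × U × (U ∪ L):
assert all(s in U and p in U and (o in U or o in L) for s, p, o in G)
```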
RDF is not tied to a particular syntax. Various RDF
syntaxes have been developed to support the encoding and distribution of RDF graphs [1].
Ontology languages have been developed to constrain the topological features of the Semantic Web. The Resource Description Framework Schema (RDFS) supports the representation of subclassing, instantiation, and domain/range restrictions on predicates [12]. The Web Ontology Language (OWL) was developed after RDFS and allows for the creation of more advanced ontologies [13].
In OWL, cardinality restrictions, unions, and ontology dependencies were introduced. Semantic Web ontology languages, interestingly, are represented in RDF. Thus, G is the set of all ontologies and their instances.
With RDF, RDFS, and OWL, a medium currently exists to model any physical or conceptual entity and their relationships to one another. The Semantic Web supports universal modeling and allows for the commingling of disparate heterogeneous models within a single substrate that can be used by humans and machines for any computational end. Any statement, logical or illogical, true or false, possible or impossible, can be made explicit in the Semantic Web. While the Semantic Web is primarily used to define descriptive models, there is nothing that prevents the representation of procedural models. In other words, models of computing can be explicitly represented in G. It is this modeling power that has prompted the growth of the semantic computing paradigm where the Semantic Web is no longer perceived solely as a universal data modeling medium, but also as a universal computing platform.
While the ideas presented in this article are amenable to any semantic network representation, this article will focus primarily on the Semantic Web due in large part to the technological infrastructure that currently supports this effort. This article's exploration will begin with a review of the various aspects of G. Next, a formal definition of computing will be presented in order to describe how the various components of computing can be represented by a semantic network. Current semantic network computing models will be placed within this semantic computing space. The definition of this space will expose areas that have yet to be developed and leave open the potential for future work in the area of semantic network computing.
## II. Descriptive And Procedural Models
Currently, the Semantic Web is perceived primarily as a data modeling environment where data is more "descriptive" than "procedural" in nature [17]. In other words, the triples in G define a model, not the rules by which that model should evolve. This article will explore the more procedural aspects of G. Figure 1 presents a taxonomy of the various types of triples contained in G, where edges have the semantic "composed of".
<image>
In its whole, G is composed of nothing but triples.
However, particular subsets of G are used to represent different aspects of the larger G model. Due to RDF,
RDFS, and OWL, G is composed of two main subnetworks: the ontological subnetwork and the instance subnetwork. While, in principle, anything can be modeled by a semantic network, most ontologies and instances are descriptive. However, there is nothing that prevents RDF from being used as a framework for denoting procedural models. That is, G can be used to model functions
(i.e. programs) and the machines that execute those functions.
This article will focus on the procedural aspects of G.
Ontological procedural models represent machine architectures (i.e. abstract machines) and the abstract functions for which they process. On the other hand, instantiated procedures are stored programs (i.e. functions, algorithms, etc.) that are explicitly encoded for virtual machines (i.e. instances of an abstract machine architecture) to execute. The next section will present a formal description of computing.
## III. Representing Computations In A One-Dimensional Tape
The classic notion of a computation is any process that can be explicitly represented by a formal algorithm. An algorithm is a sequence of executable, well-defined instructions [19]. This sequence of instructions is executed by some system, or machine. This machine may contain, internal to it, all the requirements necessary to render the results of the algorithm or, in other instances, may rely on some external storage medium to read in novel inputs and write novel outputs. If the former computing model is chosen, then the machine can only execute a single algorithm with no variation on its behavior because no new input is altering its deterministic path
(e.g. 1 + 2 = 3). However, if the latter model is chosen, the machine is general-purpose with respect to the particular "hard-wired" abstract algorithm. It is considered general-purpose because it can map any input to its respective output according to its abstract algorithm (e.g. x + y = z).

This concept can be taken to its logical conclusion, where a single machine can be engineered to perform any computing task. Paradoxically, that single machine executes one and only one algorithm. However, that particular algorithm is so generalized that it can execute any number of other algorithms represented in the machine's external storage medium. This generalized algorithm reaches the "lowest common denominator" of computing and, at that point, can even execute a representation of itself encoded in the storage medium. This machine is called a universal computing machine and is what is known today as the general-purpose computer.
This idea was demonstrated by Alan Turing in the 1930s and is the foundation of the computer sciences [8].
## A. Modeling Computations Using A Turing Machine
Perhaps the most common model used to represent computing is the Turing machine [20]. In the Turing machine model of computation, M is a machine with a single read/write head and D is a storage medium called a "tape" that can be read from and written to by M. A Turing machine can be formalized by the 5-tuple M = ⟨Q, Γ, δ, q₀, d₀⟩, where

- Q is a set of machine states,
- Γ is a set of information symbols (e.g. 0, 1),
- δ : Q × R → {W(γ), E} × {lf, rt} × Q is the transition/behavior function,
- q₀ ∈ Q is the start state of the machine,
- and d₀ ∈ D is the start location of the machine head on D.

D is a one-dimensional n-length vector of symbols from Γ such that D ∈ Γⁿ.
A Turing machine, M, will start at state q₀ ∈ Q and cell d₀ ∈ D. Depending on what γ ∈ Γ is read (R) at d₀, M will use its δ function to determine: 1.) what γ ∈ Γ to write (W) to d₀ or whether to erase (E) the current symbol, 2.) whether to move its read/write head left (lf) or right (rt) on D, and finally 3.) which state in Q to transition to at the next time step. This 5-tuple model is a simplified version of the 7-tuple representation in [9].
Let M denote a Turing machine that increments a unary number by one. While this is not the most exciting algorithm, it is simple enough to represent succinctly and provides an example of the previous abstract concept. The δ-function for M is
<image>
where M will write a 1 if a 0 exists at its current d ∈ D; otherwise it will move right and remain in state A, and state B is considered the halt state. Thus, if D = (1, 1, 0, 0), M will read the first 1, move right, read the second 1, move right, read the first 0, and write a 1. Upon entering state B, D = (1, 1, 1, 0). At the completion of this algorithm, the number 2 (11) is incremented to 3 (111). M and D are represented in Figure 2.
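The increment machine can be simulated directly; the code below is a transcription of the prose description of the δ-function, not of the figure:

```python
def increment_unary(tape):
    """The machine M described above: in state A, move right over 1s; on
    reading a 0, write a 1 and transition to the halt state B."""
    tape = list(tape)
    state, head = "A", 0
    while state != "B":
        if tape[head] == 1:
            head += 1       # read 1: move right, remain in state A
        else:
            tape[head] = 1  # read 0: write 1, halt in state B
            state = "B"
    return tuple(tape)

print(increment_unary((1, 1, 0, 0)))  # → (1, 1, 1, 0)
```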
<image>
FIG. 2: A Turing machine uses the tape for its input and output.
Imagine having a single physical machine for every computation one required to execute. For instance, one would have an M to add integers, an M to divide floating-point numbers, an M to compare a string of characters, etc. To meet modern computing requirements, an unimaginable number of machines would be required. However, in fact, a single machine does exist for each computing need! Fortunately, these machines need not be physically represented, but instead can be virtually represented in D. This is the concept of the stored program and was serendipitously discovered by Alan Turing when he developed the idea of the universal Turing machine [20].
## B. Modeling Computations Using A Universal Turing Machine
A universal Turing machine, M∗, is a Turing machine that can execute the behavior of another Turing machine, M. This idea is a central tenet of the engineering of modern day computers. With a universal Turing machine, the state behavior of M can be encoded on D such that some M∗ can simulate the behavior of the M encoded in D. In such cases, there exists another portion of D that serves as the input/output to M, denoted D_M ⊂ D. This idea is depicted in Figure 3.
<image>
The benefit of M∗ is that M∗ is a general-purpose machine that can be used to execute any algorithm. Thus, there need not exist separate physical machines for each algorithm. However, in order for M∗ to execute some M, M must be encoded such that it is congruent with the expectations of M∗'s δ-function. Thus, there exists an ontology, M̂, defining the requirements of the M encoding. If some M is represented according to M̂, then M∗ can execute it. In the lexicon of modern computing, if a program is written in native machine code, then the native machine can execute it.

Finally, to present the conclusion of this chain of reasoning, it is possible for M∗ to be encoded according to the M̂ ontology. Let M∗ denote the physical machine and M∗₁ ⊂ D denote the virtual D-encoded machine that is congruent with M̂. In such cases, M∗₁ can be used to execute some other M in D_{M∗₁} ⊂ D. This idea is diagrammed in Figure 4. This idea is congruent with the concept of the virtual machine of modern day computing [6].
<image>
## IV. Representing Computations In A Semantic Network
While the Turing model of computing is very simple, it is actually quite representative of the current state of computing in semantic networks. The Semantic Web's G is a data structure similar to D except that G is not a one-dimensional vector of Γ symbols. While it is possible to represent G as a one-dimensional string of Γ symbols, the more intuitive and useful representation is that of a network of URIs (U) and literals (L). G is a highly distributed universal "tape" that can be accessed by machines world-wide for various computational purposes.
However, how much of G is leveraged for computing is machine-instance dependent.
Currently, the actual application that explicitly encodes subsets of G is the triple-store (i.e. graph database, semantic repository, etc.). A triple-store is a database that contains a subset of the larger Semantic Web. The triple-store is the gatekeeper for determining how triples are read from and written to the Semantic Web. Unlike the read/write head of the Turing machine, the machines that access G are able to move about G in a more random-access fashion due to the development of the common variable-binding interface. While any other G interface may be developed in the future, the lowest-level requirements of such an interface are the ability to read, write, and delete triples from G. This section will discuss the nature of a primitive read/write interface into G and its relation to G-computing.
As demonstrated by Alan Turing, the most primitive components required for a computing machine are the ability to read and write to a medium and alter its states according to its perception of that medium. Similar to the relationship between M and D, it is possible to develop a semantic Turing machine that is able to read/write to G and evolve its state behavior accordingly.
A semantic Turing machine is denoted S and can be formalized by the 5-tuple

$$S=\langle Q,\Gamma,\delta,q_{0},X\rangle,$$

where

- Q is a set of machine states,
- Γ ⊆ U ∪ L is the set of URI and literal symbols,
- δ : Q × R(ϕ) × X → {W(τ), E(ϕ)} × Q × Q is the transition/behavior function,
- q₀ ∈ Q is the start state of the machine,
- and X is a set of random access machine heads.
These components will be discussed in full throughout the remainder of this section.
The most readily used low-level read model for the Semantic Web is the 3-element symbol binding model,

$$R:\phi\to\tau\in G,$$

where ϕ is called a query, ϕ = ⟨a, b, c⟩, and the elements a, b, and c can either be drawn from the set Γ = U ∪ L or from the set of machine heads defined by X. If those heads in X are declared bindings, then the machine head is random access. In a semantic Turing machine, there does not exist an explicit move behavior. If a state q ∈ Q is to move a random access head, it places a bind-symbol before the head name (e.g. ?x1); otherwise the machine will hold its head at its currently pointed-to location with a static-symbol (e.g. !x1). For instance, R(⟨marko, isA, ?x1⟩) would place the head ?x1 on some object of a triple with the subject marko and predicate isA. If ?x1 bound to human, then τ = ⟨marko, isA, human⟩ ∈ G. However, if the machine head is already at a particular resource in G, then it can be used as a static variable. If ?x1 bound to human on a previous read, then R(⟨!x1, subClassOf, ?x2⟩) will move ?x2 to the resource mammal. With the random-access X machine heads, no variable states are represented internal to S; they are simply pointed to by some x ∈ X in G.
The most readily used write model for the Semantic Web is to union the semantic network triple list G with a new triple τ ,
$$W:\tau\to G\cup\tau,$$
where τ = ⟨s, p, o⟩, s ∈ U, p ∈ U, and o ∈ (U ∪ L).
Finally, in order to erase (i.e. delete) a triple, the 3-element symbol binding model can be used,

$$E:\phi\to G\setminus R(\phi),$$

where the triple R(ϕ) ∈ G is removed from G.
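The R, W, and E primitives above can be sketched over a set-based triple store, with ?-prefixed binding heads and !-prefixed static heads as described; the class and method names are illustrative, not from the article:

```python
class SemanticTape:
    """Minimal sketch of the R / W / E primitives over a set-based triple
    store. Query elements starting with '?' bind a machine head to the
    matched resource; '!' holds a head static at its current binding.
    (Class and method names are illustrative.)"""

    def __init__(self, triples):
        self.G = set(triples)
        self.heads = {}  # the random-access machine heads X

    def _resolve(self, e):
        # '!x1' resolves to the resource the head x1 currently points to
        return self.heads[e[1:]] if e.startswith("!") else e

    def R(self, phi):
        """R : phi -> tau in G, binding '?'-heads to matched resources."""
        a, b, c = (self._resolve(e) for e in phi)
        for t in self.G:
            if all(q.startswith("?") or q == v for q, v in zip((a, b, c), t)):
                for q, v in zip((a, b, c), t):
                    if q.startswith("?"):
                        self.heads[q[1:]] = v  # move the head
                return t
        return None  # phi failed

    def W(self, tau):
        """W : tau -> G union {tau} (static heads allowed in tau)."""
        self.G |= {tuple(self._resolve(e) for e in tau)}

    def E(self, phi):
        """E : phi -> G minus R(phi)."""
        t = self.R(phi)
        if t is not None:
            self.G -= {t}
```

Replaying the text's example: R(("marko", "isA", "?x1")) binds x1 to human, after which R(("!x1", "subClassOf", "?x2")) moves ?x2 to the resource mammal.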
An S can be built to do any type of computation on G. The popular Horn-clause query/assertion can be represented by an S [10]. For instance, the rule

hasParent(marko, ?x1) ∧ hasBrother(?x1, ?x2) → hasUncle(marko, ?x2)

states that if marko has a parent that binds to ?x1 and ?x1 has a brother that binds to ?x2, then assert (i.e. write) the fact that ?x2 is marko's uncle. The δ-function for S that executes this query is
| qᵢ | R | X | W | E | qᵢ₊₁ | ¬ϕ |
|----|---|---|---|---|------|----|
| A | ⟨marko, hasParent, ?x1⟩ | ∅ | ∅ | ∅ | B | C |
| B | ⟨!x1, hasBrother, ?x2⟩ | ∅ | ⟨marko, hasUncle, !x2⟩ | ∅ | C | C |
| C | ∅ | ∅ | ∅ | ∅ | C | ∅ |

where q₀ = A, C is the halt state, x1, x2 ∈ X, and ¬ϕ denotes the state transition taken when a ϕ fails. If
G = {⟨marko, hasParent, carole⟩, ⟨carole, hasBrother, george⟩},

then at q₀ = A, ?x1 will point to carole; at q₁ = B, ?x2 will point to george; and at q₂ = C,

G = {⟨marko, hasParent, carole⟩, ⟨carole, hasBrother, george⟩, ⟨marko, hasUncle, george⟩}.
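The same derivation can be reproduced in plain Python instead of the δ-table, applying the rule to the example G (the function name is illustrative):

```python
def has_uncle_rule(G):
    """Forward-chain the single rule
    hasParent(marko, ?x1) AND hasBrother(?x1, ?x2) -> hasUncle(marko, ?x2)."""
    derived = set(G)
    for s, p, x1 in G:
        if s == "marko" and p == "hasParent":
            for s2, p2, x2 in G:
                if s2 == x1 and p2 == "hasBrother":
                    derived.add(("marko", "hasUncle", x2))
    return derived

G = {("marko", "hasParent", "carole"), ("carole", "hasBrother", "george")}
print(("marko", "hasUncle", "george") in has_uncle_rule(G))  # → True
```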
For more arithmetic operations and for the construction of novel URIs and literals, the classic Turing machine
two S∗ machines encoded in G: S∗₁ and S∗₂, where S∗₁ ∩ S∗₂ = ∅. The external S∗ can be reading in S∗₁ as a program, which is reading in S∗₂ as a program, which is reading in some other machine S as a program. In this model, there is no limit to the amount of computing redirection that is possible. Ultimately, it is up to the external S∗ to perform all the read/write operations that update the respective states of all the chained-together S∗ₙ machines.
In the virtual machine model, not only is procedural data encoded in G, but also machine data. There must exist both an ontology for procedural data Ŝ and an ontology for machine data Ŝ∗. In principle, any subset of G that obeys Ŝ∗ is a virtualized computing machine.
## D. True Universality
While a universal semantic Turing machine can be created, it is impractical to do so because of the speed constraints currently realized by the read/write interface to the Semantic Web and because any external S∗ already exists in a substrate engineered for general-purpose computing. Therefore, for virtualized semantic machines, D-mediums are currently used for low-level arithmetic operations only. There will always be a tradeoff between the desire to represent low-level computations in G and the desire to ensure the fast execution of those G-based machine representations.
The Fhat processor was designed with this constraint in mind. Many aspects of the machine's state are represented in G including its operand stack, symbol table, program counter, etc. However, when a low-level operation such as add 2 3 is called, those values are calculated on the physical machine, not in G. While this may not be completely theoretically satisfying, it does support a practical implementation of the virtual machine model of computing in G.
## VI. The Future Of Semantic Network Computing
The future of semantic network computing may be one in which virtual machines and their programs exist in G. Any universal machine external to G can gain access to the URI denoting a virtual machine and begin to execute its "physics". In other words, evolve its state and compute. In this idealized world, the underlying physical hardware supporting the execution of these virtual machines is more or less inconsequential. These underlying hardware processors are analogous to the underlying physics supporting the execution of our hardware machines. Once the protocols are in place to ensure that G has a farm of processors continuously evolving it, then the Semantic Web will have reached a transition to where abstract virtualized computing becomes ubiquitous and G can be seen as a single distributed computer with the massive address space of U ∪ L. However, there are still many obstacles that prevent this model from becoming a common reality.
First, the read/write speeds for G are orders of magnitude slower than the read/write speeds for local memory and thus, computing in G is orders of magnitude slower.
Second, there is still much more room for growth in the area of triple-store index algorithms. Unlike the relational database model, where data is broken into different linked tables, the triple-store is a single massive table with various indexes supporting fast searching. As the read/write speeds continue to increase, the ability to use G as a computing "tape" will become more viable.
Third, current triple-stores have limits on the number of triples they can feasibly represent in a single store. While some stores can easily support up to 10⁹ triples, the explicit representation of procedural data reduces the amount of space available for descriptive data. Fortunately, with an increase in the use of standards like Linked Data [4], the growth of G will have limited effect on the ability to compute in G.
Fourth, the current state of affairs in the Semantic Web is such that writing to G is cumbersome due to the absence of a generally accepted protocol to do so.
While the proposed SPARQL/Update protocol [16] is one such write interface, it is not widely supported by all triple-store providers. Thus, each triple-store provider maintains their own mechanism for writing and deleting triples.
Finally, there does not exist a universal trust and security mechanism to deter malicious machines in G. If G is conceived as a universal computing "tape", then the read, and more importantly, write/delete accesses to G will need to be established. Of course, G is only contained in an abstract universal store. Each triple-store supports only a subset of the larger whole. Therefore, for those running a triple-store, read/write privileges are not an issue. However, as more procedural information is encoded in G and machines can share procedural fragments, understanding where particular bits of information were derived from becomes very important. Work in the area of named graphs for trust and provenance should prove promising in this area [5]. The named graph extends the triple concept by adding an extra resource called g, or graph. A triple is thus a quad and τ = ⟨s, p, o, g⟩. The g component of τ is a URI, and this information can be used to attach read/write privileges to particular subnetworks of G.
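One way the g component could carry write privileges is sketched below, under an assumed (hypothetical) access-control mapping from graph URIs to agents; none of these names come from the article:

```python
# Hypothetical sketch: named graphs extend triples (s, p, o) to quads
# (s, p, o, g); a simple policy maps each graph URI g to the agents
# allowed to write to that subnetwork of G. All names are illustrative.
write_acl = {"ex:graph1": {"agent:alice"}, "ex:graph2": {"agent:bob"}}

def write_quad(G, quad, agent):
    """Add a quad to G only if the agent holds write privileges on its graph."""
    s, p, o, g = quad
    if agent not in write_acl.get(g, set()):
        raise PermissionError(f"{agent} may not write to {g}")
    G.add(quad)
    return G
```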
While this list is not exhaustive, it provides an overview of some of the more prominent issues concerning the future of semantic network computing.
## VII. Conclusion
Given that the Semantic Web is an abstract data structure, it does not have the capacity to perform a computation in and of itself. The Semantic Web is simply a
# Automatically Restructuring Practice Guidelines Using The Gem Dtd
Amanda Bouffier Thierry Poibeau Laboratoire d'Informatique de Paris-Nord Université Paris 13 and CNRS UMR 7030 99, av. J.-B. Clément - F-93430 Villetaneuse [email protected]
## Abstract
This paper describes a system capable of semi-automatically filling an XML template from free texts in the clinical domain (practice guidelines). The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over text sequences expressing the required actions. We present a system developed for this task. We show that it yields good performance when applied to the analysis of French practice guidelines.
## 1 **Introduction**
During the past years, clinical practices have considerably evolved towards standardization and effectiveness. A major improvement is the development of practice guidelines (Brownson et al., 2003).

However, even if widely distributed to hospitals, doctors and other medical staff, clinical practice guidelines are not routinely fully exploited¹. There is now a general tendency to transfer these guidelines to electronic devices (via an appropriate XML format). This transfer is justified by the assumption that electronic documents are easier to browse than paper documents.

¹ See (Kolata, 2004). This newspaper article is a good example of the huge social impact of this research area.

However, migrating a collection of texts to XML requires a lot of re-engineering. More precisely, it means analyzing the full set of textual documents so that they can fit with strict templates, as required either by XML schemas or DTDs (document type definitions). Unfortunately, most of the time, the
semantic blocks of information required by the XML model are not explicitly marked in the original text. These blocks of information correspond to discourse structures.
This problem has thus renewed the interest for the recognition and management of discourse structures, especially for technical domains. In this study, we show how technical documents belonging to a certain domain (namely, clinical practice guidelines) can be semi-automatically structured using NLP techniques. Practice guidelines describe best practices with the aim of guiding decisions and criteria in specific areas of healthcare, as defined by an authoritative examination of current evidence
(evidence-based medicine; see Wikipedia or Brownson et al., 2003).
The Guideline Elements Model (GEM) is an XML-based guideline document model that can store and organize the heterogeneous information contained in practice guidelines (Shiffman et al., 2000).
It is intended to facilitate translation of natural language guideline documents into a format that can be processed by computers. The main element of GEM, *knowledge component*, contains the most useful information, especially sequences of conditions and recommendations. Our aim is thus to format these documents, which have been written manually without any precise model, according to the GEM DTD (see annex A).
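As an illustration of the target structure, the fragment below builds a simplified GEM-like knowledge component pairing each condition with the recommendations in its scope. The element names are a small illustrative subset inspired by GEM, not the actual DTD:

```python
import xml.etree.ElementTree as ET

def build_knowledge_component(pairs):
    """Build a simplified GEM-like XML fragment from a list of
    (condition, [recommendations]) pairs. Element names are an
    illustrative subset, not the actual GEM DTD."""
    kc = ET.Element("knowledge.component")
    for condition, recommendations in pairs:
        cond = ET.SubElement(kc, "conditional")
        ET.SubElement(cond, "decision.variable").text = condition
        for r in recommendations:
            ET.SubElement(cond, "action").text = r
    return kc

fragment = build_knowledge_component(
    [("immuno-depressed patient", ["perform biopsy"])])
print(ET.tostring(fragment, encoding="unicode"))
```

Filling such a template automatically is exactly the task addressed here: the pairs of conditions and actions are not marked in the source text and must be recovered by scope analysis.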
The organization of the paper is as follows: first, we present the task and some previous approaches
(section 2). We then describe the different processing steps (section 3) and the implementation (section 4). We finish with the presentation of some results (section 5), before the conclusion (section 6). |
## 2 Document Restructuring: The Case of Practice Guidelines
As we have previously seen, practice guidelines are not routinely fully exploited. One reason is that they are not easily accessible to doctors during consultation. Moreover, it can be difficult for the doctor to find relevant pieces of information from these guides, even if they are not very long. To overcome these problems, national health agencies try to promote the electronic distribution of these guidelines (so that a doctor could check recommendations directly from his computer).
## 2.1 **Previous Work**
Several attempts have already been made to improve the use of practice guidelines: for example, knowledge-based diagnostic aids can be derived from them (e.g. Séroussi et al., 2001).
GEM is an intermediate document model, between pure text (paper practice guidelines) and knowledge-based models like GLIF (Peleg et al., 2000) or EON (Tu and Musen, 2001). GEM is thus an elegant solution, independent from any theory or formalism, but compliant with other frameworks.
GEM Cutter (http://gem.med.yale.edu/) is a tool aimed at helping experts fill the GEM DTD from texts. However, this software is only an interface allowing the end-user to perform the task through a time-consuming cut-and-paste process. The overall process described in Shiffman et al. (2004) is also largely manual, even if it is an attempt to automate and regularize the translation process.
The main problem in the automation of the translation process is to identify that a list of recommendations expressed over several sentences is under the scope of a specific condition (conditions may refer to a specific pathology, a specific kind of patients, temporal restrictions, etc.). However, previous approaches have been based on the analysis of isolated sentences. They do not compute the exact scope of conditional sequences (Georg and Jaulent, 2005): this part of the work still has to be done by hand.
Our automatic approach relies on work done in the field of discourse processing. As we have seen in the introduction, the most important sequences of text to be tagged correspond to discourse structures (conditions, actions…). Although most researchers agree that a better understanding of text structure and text coherence could help extract knowledge, descriptive frameworks like the one developed by Halliday and Hasan² are poorly formalized and difficult to apply in practice.
Some recent works have proposed more operational descriptions of discourse structures (Péry-Woodley, 1998). Several authors (Halliday and Matthiessen, 2004; Charolles, 2005) have investigated the use of non-lexical cues for discourse processing (e.g. temporal adverbials like "in 1999"). These adverbials introduce situation frames in a narrative discourse, that is to say a 'period' in the text which is dependent on the adverbial.
We show in this study that condition sequences play the same role in practice guidelines: their scope may run over several dependent clauses (more precisely, over a set of several recommendations). Our plan is to automatically recognize these scopes using surface cues and processing rules.
## 2.2 **Our Approach**
Our aim is to semi-automatically fill a GEM template from existing guidelines: the algorithm is fully automatic but the result needs to be validated by experts to yield adequate accuracy. Our system tries to compute the exact scope of conditional sequences. In this paper we apply it to the analysis of several French practice guidelines.
The main aim of the approach is to go from a textual document to a GEM-based document, as shown in Figure 1 (see also annex A). We focus on conditions (including temporal restrictions) and recommendations since these elements are of paramount importance for the task. They are especially difficult to deal with since they require accurately computing the scope of conditions.
The example in figure 1 is complex since it contains several levels of overlapping conditions. We observe a first opposition (Chez le sujet non immunodéprimé / chez le sujet immunodéprimé… Concerning the non-immuno-depressed patient / Concerning the immuno-depressed patient…), but a second condition interferes with the scope of this first level (En cas d'aspect normal de la muqueuse iléale… If the ileal mucosa seems normal…). The task involves recognizing these various levels of conditions in the text and explicitly representing them through the GEM DTD.
(especially connectors and lexical cues) can be automatically captured by machine learning methods.
**Material structure cues.** These features include the recognition of titles, sections, enumerations and paragraphs.

**Morpho-syntactic cues.** Recommendations are not expressed in the same way as conditions from a morpho-syntactic point of view. We take the following features into account:

− *Part-of-speech tags.* For example, *recommandé* should be a verb and not a noun, even if the form is ambiguous in French;

− *Tense and mood of the verb.* Present and future tenses are relevant, as well as imperative and conditional moods. Imperative and future always have an injunctive value in the texts. Injunctive verbs (see *lexical cues*) lose their injunctive property when used in a past tense.

**Anaphoric cues.** A basic and local analysis of anaphoric elements is performed. We especially focused on expressions such as *dans ce cas*, *dans les n cas précédents* (*in this case, in the n preceding cases…*), which are very frequent in clinical documents. The recognition of such expressions is based on a limited set of possible nouns occurring in context, together with specific constraints (use of demonstrative pronouns, etc.).

**Conjunctive cues (discourse connectors).** Conditions are mainly expressed through conjunctive cues. The following forms are especially interesting: forms prototypically expressing conditions (*si, en cas de, dans le cas où… if, in case of…*); forms expressing the location of some elements (*chez, en présence de… in presence of…*); forms expressing a temporal frame (*lorsque, au moment où, avant de… when, before…*).

**Lexical cues.** Recommendations are mainly expressed through lexical cues. We have observed forms prototypically expressing recommendations (*recommander, prescrire… recommend, prescribe*), obligations (*devoir… shall*) or options (*pouvoir… can*). Most of these forms are highly ambiguous but can be automatically acquired from an annotated corpus. Some expressions from the medical domain can be automatically extracted using a terminology extractor (we use Yatea, see section 4, "Implementation").
## 3.3 **Basic Segmentation**
A *basic segment* corresponds to a text sequence expressing either a condition or a recommendation. It is most of the time a sentence, or a proposition inside a sentence.

Some of the features described in the previous section may be highly ambiguous. For this reason, basic segmentation is rarely done according to a single feature, but most of the time according to a bundle of features acquired from a representative corpus. For example, if a text sequence contains an *injunctive* verb with an infinitive form at the beginning of a sentence, the whole sequence is typed as *action*. The relevant sets of co-occurring features
After this step, the text is segmented into typed basic sequences expressing either a recommendation or a condition (the rest of the text is left untagged).
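A toy version of cue-based typing is sketched below. The cue lists are a small subset of the bundles described above, and a real system combines co-occurring features learned from the annotated corpus rather than relying on single cues as done here:

```python
import re

# Illustrative subsets of the French cues listed above (not the full bundles)
CONDITION_CUES = [r"\bsi\b", r"\ben cas d", r"\bchez\b",
                  r"\blorsque\b", r"\bdans le cas où\b"]
ACTION_CUES = [r"\brecommand", r"\bprescri", r"\bdoit\b",
               r"\bdevoir\b", r"\bpeut\b", r"\bpouvoir\b"]

def classify_segment(sentence):
    """Type a clause as 'condition' or 'recommendation' from surface cues,
    or None when no cue fires. A deliberate single-cue simplification."""
    s = sentence.lower()
    if any(re.search(c, s) for c in CONDITION_CUES):
        return "condition"
    if any(re.search(c, s) for c in ACTION_CUES):
        return "recommendation"
    return None
```

Untyped sentences are simply left untagged, mirroring the behaviour described above.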
## 3.4 **Computing Frames And Scopes**
As for quantifiers, a conditional element may have a scope (a *frame*) that extends over several basic segments. It has been shown by several authors (Halliday and Matthiessen, 2004; Charolles, 2005), working on different types of texts, that conditions detached from the sentence most of the time have a scope beyond the current sentence, whereas conditions included in a sentence (but not at the beginning of a sentence) have a scope limited to the current sentence. Accordingly, we propose a two-step strategy: 1) a default segmentation is done, and 2) a revision process corrects the main errors caused by the default segmentation (corresponding to the norm).
## Default Segmentation
We propose a strategy which makes use of the notion of default. By default:

1. The scope of a heading extends up to the next heading;
2. The scope of an enumeration's header covers all the items of the enumeration;
3. If a conditional sequence is detached (at the beginning of a paragraph or a sentence), its scope is the whole paragraph;
4. If the conditional sequence is included in a sentence, its scope is the current sentence.
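The paragraph-level defaults (rules 3 and 4) can be sketched as follows; the flat list of typed, detached-or-not segments standing in for one paragraph is a deliberate simplification of the real representation:

```python
def condition_scopes(paragraph):
    """Apply default scope rules 3 and 4 to one paragraph.

    `paragraph` is a list of (kind, detached) pairs in text order, where
    kind is 'condition' or 'action' and detached=True means the condition
    opens the paragraph or a sentence. Returns a dict mapping each
    condition's index to the indices of actions in its scope.
    A simplified sketch: 'same sentence' is approximated by adjacency."""
    scopes = {}
    for i, (kind, detached) in enumerate(paragraph):
        if kind != "condition":
            continue
        if detached:
            # rule 3: scope runs to the end of the paragraph
            scopes[i] = [j for j in range(i + 1, len(paragraph))
                         if paragraph[j][0] == "action"]
        else:
            # rule 4: scope limited to the current sentence (next segment here)
            scopes[i] = ([i + 1] if i + 1 < len(paragraph)
                         and paragraph[i + 1][0] == "action" else [])
    return scopes
```

The revision step described next would then override these defaults where the surface cues contradict them.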
M. Peleg, A. Boxwala, O. Ogunyemi, Q. Zeng, S. Tu, R. Lacson, E. Bernstam, N. Ash, P. Mork, L. Ohno-Machado, E. Shortliffe and R. Greenes. 2000. "GLIF3: The Evolution of a Guideline Representation Format". In Proceedings of the American Medical Informatics Association. pp. 645–649.

M-P. Péry-Woodley. 1998. "Signalling in written text: a corpus-based approach". In M. Stede, L. Wanner & E. Hovy (Eds.), Proceedings of the Coling '98 Workshop on Discourse Relations and Discourse Markers. pp. 79–85.

B. Séroussi, J. Bouaud, H. Dréau, H. Falcoff, C. Riou, M. Joubert, G. Simon, A. Venot. 2001. "ASTI: A Guideline-based drug-ordering system for primary care". In Proceedings of MedInfo. pp. 528–532.

R.N. Shiffman, B.T. Karras, A. Agrawal, R. Chen, L. Marenco, S. Nath. 2000. "GEM: A proposal for a more comprehensive guideline document model using XML". Journal of the American Medical Informatics Association. n°7(5). pp. 488–498.

R.N. Shiffman, M. George, M.G. Essaihi and E. Thornquist. 2004. "Bridging the guideline implementation gap: a systematic, document-centered approach to guideline implementation". Journal of the American Medical Informatics Association. n°11(5). pp. 418–426.

S. Tu and M. Musen. 2001. "Modeling data and knowledge in the EON Guideline Architecture". In Medinfo. n°10(1). pp. 280–284.

## Annex A. Screenshots Of The System

<image>

<image>
# Temporal Reasoning Without Transitive Tables
Sylviane R. Schwer, LIPN UMR 7030 (Université Paris 13 et CNRS)
[email protected]

Representing and reasoning about qualitative temporal information is an essential part of many artificial intelligence tasks. Many models have been proposed in the literature for representing such temporal information. All derive from a point-based or an interval-based framework. One fundamental reasoning task that arises in applications of these frameworks is given by the following scheme: given possibly indefinite and incomplete knowledge of the binary relationships between some temporal objects, find the consistent scenarios between all these objects. All these models require transitive tables - or, similarly, inference rules - for solving such tasks.

In [30], we defined an alternative model, renamed *S-languages* - for Set-languages - in [31], to represent qualitative temporal information, based on the only two relations of **precedence** and **simultaneity**. In this paper, we show how this model makes it possible to avoid transitive tables or inference rules when handling this kind of problem.

Keywords: Temporal reasoning, formal languages, constraint satisfaction.
## 1. Introduction
Representing and reasoning about qualitative temporal information is an essential part of many artificial intelligence tasks. These tasks appear in such diverse areas as natural language processing, planning, plan recognition, and diagnosis. Allen [1,2] proposed an interval algebra framework and Vilain and Kautz [34] proposed a point algebra framework for representing such qualitative information. All models proposed afterwards in the literature derive from these two frameworks. Placing two intervals on the timeline, regardless of their length, gives thirteen relations, known as Allen's relations [2]. Vilain [33] provided relations for points and intervals, and Kandrashina [18] provided relations for points, intervals and chains of intervals. Relations between two chains of intervals have been studied in depth by Ladkin, who named them non-convex intervals [20]. Al-Khatib [19] used a matricial approach. Ligozat [22] studied relations between chains of points, named generalized intervals.
One fundamental reasoning task that arises in applications of these frameworks is given by the following scheme: given possibly indefinite and incomplete knowledge of the relationships between some temporal objects, find the consistent scenarios between all these objects. All these models have in common that temporal information is depicted as sets of binary relationships and viewed as binary constraint networks. The reasoning is then based on transitive tables that describe the composition of any two binary relations. All these models require transitive tables - or, similarly, inference rules - for solving such tasks. The logical approach of the A.I. community explains this fact.
The framework of formal languages, inside which the model of S-languages has been proposed [30,31], provides the same material both for expressing the temporal objects and the n-ary relationships between them. The reasoning is based on three natural extensions of well-known rational operations on languages: intersection, shuffle and projection. More precisely, we have shown in [31] that binary relations between two generalized intervals are in a natural correspondence with S-languages that express Delannoy paths of order 2. In passing, we provide Henri Delannoy (1833-1915) with a large (though unexpected) domain of applications for his theory of minimal paths of the queen from one corner to any other position on a chessboard [9].
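The shuffle operation at the heart of this framework is easy to state for plain words; the sketch below deliberately omits simultaneity (set-letters of S-words):

```python
def shuffle(u, v):
    """All interleavings of the words u and v (the shuffle product u || v).
    Simultaneity (set-letters of S-words) is deliberately left out."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})
```

Constraint solving then reduces to intersecting such finite languages: an empty intersection signals inconsistency, exactly the emptiness test used in the examples below.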
The main idea for using formal languages for temporal representation and reasoning is that a word can be viewed as a line, with an arrow from
10 S. R. Schwer / Temporal reasoning without transitive tables (Rapport interne 2004 LIPN/LCR)
$$\bigcap_{\substack{1\leq i\leq k\\ 1\leq j\leq s_{i}}}\int_{X}^{\nu_{X}}\mathcal{L}_{I_{i_{j}}}\neq\emptyset$$
In order to have a look at what kind of language is computed, let us revisit the unsatisfiable closed network of [2]. There are four intervals A, B, C, D.

<image>

Fig. 11. Allen's instance of an inconsistent labeling.

We then take the alphabet X = {a, b, c, d} and L(2, 2, 2, 2) on X. The data are:
$$L_1=\int_X^\nu\left\{\left\{\begin{matrix}a\\ d\end{matrix}\right\}da,\ d\left\{\begin{matrix}a\\ d\end{matrix}\right\}a\right\}\qquad L_2=\int_X^\nu\left\{\left\{\begin{matrix}c\\ d\end{matrix}\right\}dc,\ d\left\{\begin{matrix}c\\ d\end{matrix}\right\}c\right\}$$

$$L_3=\int_X^\nu\{dbdb\}\qquad L_4=\int_X^\nu\left\{ca\left\{\begin{matrix}a\\ c\end{matrix}\right\},\ ac\left\{\begin{matrix}a\\ c\end{matrix}\right\}\right\}$$

$$L_5=\int_X^\nu\{bccb,\ cbbc\}\qquad L_6=\int_X^\nu\{adda,\ daad\}$$

The solution is

$$L=L_1\cap L_2\cap L_3\cap L_4\cap L_5\cap L_6,$$

which we now compute.
L1 ∩ L2 =

$$\int_{X}^{\nu}\left\{\left\{\begin{matrix}a\\ c\\ d\end{matrix}\right\}d,\ d\left\{\begin{matrix}a\\ c\\ d\end{matrix}\right\},\ \left\{\begin{matrix}a\\ d\end{matrix}\right\}\left\{\begin{matrix}c\\ d\end{matrix}\right\},\ \left\{\begin{matrix}c\\ d\end{matrix}\right\}\left\{\begin{matrix}a\\ d\end{matrix}\right\}\right\}[a||c]$$

or equivalently, due to the lack of occurrences of b,

$$\left\{\left\{\begin{matrix}a\\ c\\ d\end{matrix}\right\}d,\ d\left\{\begin{matrix}a\\ c\\ d\end{matrix}\right\},\ \left\{\begin{matrix}a\\ d\end{matrix}\right\}\left\{\begin{matrix}c\\ d\end{matrix}\right\},\ \left\{\begin{matrix}c\\ d\end{matrix}\right\}\left\{\begin{matrix}a\\ d\end{matrix}\right\}\right\}[a||c]$$

This language contains 164 words.

L1 ∩ L2 ∩ L3 =

$$\left\{\left\{\begin{matrix}a\\ c\\ d\end{matrix}\right\}bd,\ db\left\{\begin{matrix}a\\ c\\ d\end{matrix}\right\},\ \left\{\begin{matrix}a\\ d\end{matrix}\right\}b\left\{\begin{matrix}c\\ d\end{matrix}\right\},\ \left\{\begin{matrix}c\\ d\end{matrix}\right\}b\left\{\begin{matrix}a\\ d\end{matrix}\right\}\right\}[a||b||c]$$

This language contains 52 words.
L1 ∩ L2 ∩ L3 ∩ L4 =

$$\left\{\left\{\begin{matrix}a\\ d\end{matrix}\right\}b\left\{\begin{matrix}c\\ d\end{matrix}\right\},\ \left\{\begin{matrix}c\\ d\end{matrix}\right\}b\left\{\begin{matrix}a\\ d\end{matrix}\right\}\right\}\left[\left\{\begin{matrix}a\\ c\end{matrix}\right\}||b\right]$$

This language contains 6 words.
L1 ∩ L2 ∩ L3 ∩ L4 ∩ L5 =

$$\left\{\left\{\begin{matrix}a\\ d\end{matrix}\right\}b\left\{\begin{matrix}c\\ d\end{matrix}\right\}\left\{\begin{matrix}a\\ c\end{matrix}\right\}b,\ \left\{\begin{matrix}c\\ d\end{matrix}\right\}b\left\{\begin{matrix}a\\ d\end{matrix}\right\}b\left\{\begin{matrix}a\\ c\end{matrix}\right\}\right\}$$

This language contains 2 words.
L1 ∩ L2 ∩ L3 ∩ L4 ∩ L5 ∩ L6 = ∅
This language is empty: the problem is unsatisfiable.
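The same emptiness test can be mimicked by brute force on a toy network (ours, simpler than Allen's figure 11): enumerate every strict ordering of the interval endpoints, the plain-word analogue of S-words without simultaneity, and keep those satisfying all constraints. An empty result means the network is unsatisfiable, with no transitive table involved:

```python
from itertools import permutations

def positions(order):
    """Map each endpoint to its position in the ordering."""
    return {e: i for i, e in enumerate(order)}

def during(order, x, y):
    """x during y: y starts before x and ends after x."""
    p = positions(order)
    return p[y + "-"] < p[x + "-"] and p[x + "+"] < p[y + "+"]

def before(order, x, y):
    """x wholly before y."""
    p = positions(order)
    return p[x + "+"] < p[y + "-"]

def consistent_orders(intervals, constraints):
    """All strict endpoint orderings satisfying every constraint."""
    endpoints = [i + s for i in intervals for s in "-+"]
    sols = []
    for order in permutations(endpoints):
        p = positions(order)
        if any(p[i + "-"] > p[i + "+"] for i in intervals):
            continue                      # an interval must start before it ends
        if all(c(order) for c in constraints):
            sols.append(order)
    return sols
```

For instance, "A during B, B during C, C before A" is unsatisfiable, since the first two constraints force A during C; the enumeration returns no ordering.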
## 6. Temporal Reasoning About Concurrent Systems
Most of the temporal properties of programs have been studied within either the framework of temporal logic or modal logic. Temporal reasoning about concurrent systems can be partitioned in a natural way into two classes, related to the modality used: necessity, symbolized by ✷, or possibility, symbolized by ✸. Translated into a temporal framework, we use the terms *always* and *sometime*. These modalities were first studied by the Megarians³, then by Aristotle and the Stoics. Despite their variants, these modalities are linked to the universal (∀) and particular (∃) quantifiers.

³ A school founded by Euclid of Megara, a student of Socrates like Plato; this school was a rival of Aristotle's.

Manna and Pnueli [23,24] defined three important classes of temporal properties of concurrent programs that are investigated inside the modal
- An absolute precedence property: *Absence of Unsolicited Response.* An important but often overlooked desired feature is that the resource will not be granted to a party who has not requested it. A similar property in the context of a communication network is that every message received must have been sent by somebody. This is expressible by the temporal formula:

$$\neg g_i \Rightarrow (r_i < g_i)$$

The formula states that if presently $g_i$ is *false*, i.e., $R_i$ does not presently have the resource, then before the resource is granted to $R_i$ the next time, $R_i$ must signal a request by setting $r_i$ to *true*.
- A relative precedence property: *Strict (FIFO) Responsiveness.* Sometimes the weak commitment of eventually responding to a request is not sufficient. At the other extreme, we may insist that responses are ordered in a sequence paralleling the order of arrival of the corresponding requests. Thus if requester $R_i$ succeeded in placing its request before requester $R_j$, the grant to $R_i$ should precede the grant to $R_j$. A straightforward translation of this sentence yields the following intuitive but slightly imprecise expression: $(r_i < r_j) \Rightarrow (g_i < g_j)$. A more precise expression is, for all $i, j$ with $i \neq j$ and $1 \leq i, j \leq k$:

$$(r_i \land \neg r_j \land \neg g_j) \Rightarrow (\neg g_j\,\mathcal{U}\,g_i).$$

It states that if ever we find ourselves in a situation where $r_i$ is presently on, and $r_j$ and $g_j$ are both off, then we are guaranteed to eventually get a $g_i$, and until that moment, no grant will be made to $R_j$. Note that $r_i \land \neg r_j$ implies that $R_i$'s request precedes $R_j$'s request, which has not yet materialized.
We implicitly rely here on the assumption that once a request has been made, it is not withdrawn until it has been honored. This assumption can also be made explicit as part of the specification, using another precedence expression:

$$r_i \Rightarrow (g_i < \neg r_i).$$

Note that while all the earlier properties are requirements on the granter, and should be viewed as the *post-condition* part of the specification, this requirement is the responsibility of the requesters. It can be viewed as part of the *pre-condition* of the specification.
Two assumptions are also implicitly used but not mentioned:

1. a process is not allowed to make another request until it has given the resource back;
2. there are as many requests from $R_i$ as grants to $R_i$.
We now leave the way Manna and Pnueli resolved the problem inside the modal logic framework: finding an abstract computation model based on sequences of transitions and states, and a proof system [24].
6.3. Revisiting the allocation problem inside the S-languages framework

6.3.1. Objects and relations representations

Our formalization attempts to translate any information into temporal information. The two boolean variables introduced in the formulation of the problem do not belong to the problem but to one of its data interpretations. These boolean variables evolve through the timeline. It is natural to represent the values of boolean variables evolving through time as characteristic functions of boolean variables on a linear order, which can be called temporal boolean functions [7,15,16]. Inside a determined and bounded period of time, the temporal boolean function $r_i$ (resp. $g_i$) can be interpreted as a chain of intervals with $n_i$ intervals, where $n_i$ is the number of requests (resp. grants). This number is exactly known at the end of the fixed period of time. Any interval is a maximal period inside which the value of $r_i$ (resp. $g_i$) is *true*.

A priori, for the general specification, we can either choose to write $n_i$ as an indeterminate number or set it to $*$, meaning that there is a finite but unknown number. For the requesters, we provide $2k$ chains of intervals written on their identity alphabet $X = \{r_1, g_1, \ldots, r_k, g_k\}$. Any language $L$ that satisfies the problem is such that its Parikh number is

$$\vec{L} \subseteq \underbrace{((2,2)\mathbb{N}, \ldots, (2,2)\mathbb{N})}_{k\ \text{times}}.$$

But for the sake of easier reading, we will use the alphabet $X = \{r_1, \bar{r}_1, g_1, \bar{g}_1, \ldots, r_k, \bar{r}_k, g_k, \bar{g}_k\}$, which allows us to distinguish between the beginning of an interval (unmarked letter) and its end (marked letter), so that any language $L$ that satisfies the problem is such that its Parikh number is

$$\vec{L} \subseteq \underbrace{((1,1,1,1)\mathbb{N}, \ldots, (1,1,1,1)\mathbb{N})}_{k\ \text{times}}.$$
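A Parikh vector just counts letter occurrences in alphabet order; a minimal sketch (multi-character letters such as "r1" are passed as list elements):

```python
from collections import Counter

def parikh(word, alphabet):
    """Parikh vector of a word: occurrence count of each letter,
    listed in the order of the given alphabet."""
    counts = Counter(word)
    return tuple(counts[a] for a in alphabet)
```

For a request/grant chain such as $r_1 g_1 \bar{r}_1 \bar{g}_1$, every letter occurs exactly once, matching the $(1,1,1,1)$ blocks above.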
6.3.2. The specification.
*The invariant property.* Ensuring that the resource is granted to at most one requester at a time can be interpreted as follows: this constraint only concerns granters. After reading a letter $g_i$ - the beginning of an allocation to $R_i$ - the first following occurrence of a letter of type $g$ or $\bar{g}$ inside the S-word is necessarily a $\bar{g}_i$ letter (the end of the current allocation to $R_i$). This constraint is formulated by the S-language

$$L_{1}=\int_{X}(g_{1}\bar{g}_{1},\ldots,g_{k}\bar{g}_{k})^{*}.$$

Hence we have $L \subseteq L_1$.
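Membership in $L_1$, restricted to the $g$-letters, amounts to a one-pass mutual-exclusion check; the event encoding below is an illustrative stand-in for S-words:

```python
def satisfies_L1(g_events):
    """Check that a grant-event sequence lies in (g1 ḡ1, ..., gk ḡk)*:
    after a begin-of-grant ('g', i), the next g-type event must be its
    own end ('gbar', i), so at most one requester holds the resource."""
    holder = None
    for kind, i in g_events:
        if kind == "g":
            if holder is not None:
                return False        # someone already holds the resource
            holder = i
        elif kind == "gbar":
            if holder != i:
                return False        # release without matching grant
            holder = None
    return holder is None           # every allocation must be closed
```

This is the automaton one would intersect with the other constraint languages, rather than a composition-table lookup.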
*The liveness property.* The guarantee that every request $r_i$ will eventually be granted concerns each $R_i$ individually. This organizes every couple of S-words $(r_i\bar{r}_i)^*$ and $(g_i\bar{g}_i)^*$, saying that after each occurrence of a letter $r_i$, the first occurrence of a letter among the set $\{r_i, \bar{r}_i, g_i, \bar{g}_i\}$ is an occurrence of the letter $g_i$. There is no constraint between the occurrences of the letters $\bar{r}_i$ and $\bar{g}_i$, that is to say we get

$$L\subseteq\bigcap_{1\leq i\leq k}\int_{X}(r_{i}g_{i}[\bar{r}_{i},\bar{g}_{i}])^{*}$$

But we can give a more precise S-language, because we know that the sequence of actions related to any request is: request, allocation, release and deallocation. So more precisely we have

$$L\subseteq L_{2}=\bigcap_{1\leq i\leq k}\int_{X}(r_{i}g_{i}\bar{r}_{i}\bar{g}_{i})^{*}.$$
*The two precedence properties.*

- *Absence of Unsolicited Response.* A resource will not be granted to a party who has not requested it. This constraint is already expressed by the preceding constraint.

- *Strict (FIFO) Responsiveness.* Grants are ordered in the same order as the corresponding requests. This concerns how the occurrences of $r_i, r_j, g_i, g_j$ are shuffled together. The case where $i = j$ has already been taken into account in $L_2$. Restricted to the alphabet $\{r_i, r_j, g_i, g_j\}$, if $R_i$ requests before $R_j$, then either $R_i$ is granted before the request of $R_j$ or after it⁴. But $R_j$ can request before $R_i$. That is, we get four cases that can be organized as a product but not as a shuffle. Hence we have

$$L\subseteq L_{3}=\bigcap_{1\leq i<j\leq k}\int_{X}(r_{i}g_{i},\ r_{j}g_{j},\ r_{i}r_{j}g_{i}g_{j},\ r_{j}r_{i}g_{j}g_{i})^{*}$$

⁴ The system is not allowed to receive simultaneous requests.
6.3.3. The solution.
All constraints are now specified in terms of S-languages. Every S-word contained in $L_1 \cap L_2 \cap L_3$ satisfies the problem, hence the solution is

$$L=\int_{X}(g_{1}\bar{g}_{1},\ldots,g_{k}\bar{g}_{k})^{*}\ \cap\ \bigcap_{1\leq i\leq k}\int_{X}(r_{i}g_{i}\bar{r}_{i}\bar{g}_{i})^{*}\ \cap\ \bigcap_{1\leq i\neq j\leq k}\int_{X}(r_{i}g_{i},\ r_{j}g_{j},\ r_{i}r_{j}g_{i}g_{j},\ r_{j}r_{i}g_{j}g_{i})^{*}$$

If the system can't realize two tasks simultaneously, the solution will be $L \cap X^*$.
6.3.4. Example

Let us suppose that there are three requesters and the order of the requests is $R_1R_2R_3R_1R_3$. The temporal objects are the four chains expressed with the S-words - which are also words - $r_1\bar{r}_1r_1\bar{r}_1$, $g_1\bar{g}_1g_1\bar{g}_1$, $r_3\bar{r}_3r_3\bar{r}_3$, $g_3\bar{g}_3g_3\bar{g}_3$, and the intervals expressed with the S-words $r_2\bar{r}_2$, $g_2\bar{g}_2$. The set of all possible situations is given by the language

$$[r_1\bar{r}_1r_1\bar{r}_1\,||\,g_1\bar{g}_1g_1\bar{g}_1\,||\,r_3\bar{r}_3r_3\bar{r}_3\,||\,g_3\bar{g}_3g_3\bar{g}_3\,||\,r_2\bar{r}_2\,||\,g_2\bar{g}_2].$$

That is the S-language with Parikh vector $((2,2),(2,2),(1,1),(1,1),(2,2),(2,2))$ on the ordered alphabet $\{r_1, \bar{r}_1, g_1, \bar{g}_1, r_2, \bar{r}_2, g_2, \bar{g}_2, r_3, \bar{r}_3, g_3, \bar{g}_3\}$.

The request order $R_1R_2R_3R_1R_3$ induces the following sequence between the letters of type $r$: $r_1r_2r_3r_1r_3$.

$$L_2=\int_{X}\{r_1g_1\bar{r}_1\bar{g}_1r_1g_1\bar{r}_1\bar{g}_1,\ r_2g_2\bar{r}_2\bar{g}_2,\ r_3g_3\bar{r}_3\bar{g}_3r_3g_3\bar{r}_3\bar{g}_3\}.$$
The condition on the $r$ letters and the expression of $L_3$ allow us to substitute for $L_3$ the language $L'_3$:

$$L'_{3}=\int_{X}g_1g_2g_3g_1g_3.$$

$$L_{1}=\int_{X}g_1\bar{g}_1g_2\bar{g}_2g_3\bar{g}_3g_1\bar{g}_1g_3\bar{g}_3.$$

The shuffle of all these fragments is depicted by a Hasse graph of the precedence ordering on the instants corresponding to the bounds of the intervals, that is, the letters, in figure 12. This is a graphical representation of the resulting S-language $L$.
## 7. Conclusion
In this paper, we have presented the S-languages framework and shown how to represent and reason on qualitative temporal problems. The Hasse diagram we provide for the allocation problem should be compared with the temporally labeled graph of Gerevini and Schubert [12].
<image>
Different implementations have been made in order to improve the complexity of the computations. They take advantage of the algorithms used for computing operations in formal language theory, and of automata theory. The problems come, as usual, from the parts of S-languages that have to be broken into disjoint parts in order to continue the computation.

Two implementations have already been made concerning the interval algebra. The first one is based on the notion of pattern [11]. The second one [27] applies it to the linguistic model of Desclés based on topological intervals [10]. Bounds of intervals are labelled in order to indicate whether an interval is open or closed. As $L(p_1, \cdots, p_n)$ is a lattice, we work on convex parts, following the approach of Freksa [14]. These prototypes suggest that it may be better to compute in two steps: first accepting all situations, even disallowed ones, and then removing the forbidden situations, rather than computing the correct solution directly.
## Acknowledgements
I would like to thank Jean-Michel Autebert and Jérôme Cardot for their fruitful comments on the text.
## References

[1] James F. Allen, An Interval-Based Representation of Temporal Knowledge. Proc. 7th IJCAI, Vancouver, Canada, August 1981, pp. 221–226.

[2] James F. Allen, Maintaining Knowledge about Temporal Intervals. Comm. ACM 26(11) (1983) 832–843.

[3] Jean-Michel Autebert, Langages Algébriques. Masson, Paris, 1987.

[4] Jean-Michel Autebert, Matthieu Latapy, and Sylviane R. Schwer, Le treillis des chemins de Delannoy, Discrete Math. 258 (2002) 225–234.

[5] Jean-Michel Autebert, Sylviane R. Schwer, On Generalized Delannoy Paths, SIAM Journal on Discrete Mathematics 16(2) (2003) 208–223.

[6] Hélène Bestougeff and Gérard Ligozat, Outils logiques pour le traitement du temps. Masson, 1989.

[7] Maroua Bouzid, Antoni Ligeza, Temporal Logic Based on Characteristic Function. 19th Annual German Conference on Artificial Intelligence, Lecture Notes in Artificial Intelligence 981 (1995) 221–232.

[8] Roy H. Campbell and Nico Haberman, The Specification of Process Synchronization by Path Expressions. Lecture Notes in Computer Science 16, Springer-Verlag (1974) 89–102.

[9] Henri Delannoy, Emploi de l'échiquier pour la résolution de certains problèmes de probabilités, Comptes-Rendus du Congrès annuel de l'Association Française pour l'Avancement des Sciences, vol. 24, pp. 70–90, Bordeaux, 1895.

[10] Jean-Pierre Desclés, State, Event, Process, and Topology. General Linguistics 29(3) (1990) 159–200.

[11] Michel Dubois, Sylviane R. Schwer, Classification topologique des ensembles convexes de Allen, Proceedings of the 12th Congrès Francophone AFRIF-AFIA R.F.I.A. 2000, Paris, 2–4 février 2000, III 59–68.

[12] A. Gerevini, L. Schubert, Efficient algorithms for qualitative reasoning about time. Artificial Intelligence 74 (1995) 207–248.

[13] Seymour Ginsburg, The Mathematical Theory of Context-Free Languages. McGraw-Hill, 1966.

[14] Christian Freksa, Temporal reasoning based on semi-intervals, Artificial Intelligence 54 (1991) 199–227.

[15] Setthachaï Jungjariyanonn, Sylviane R. Schwer, Conditions as Temporal Functions. Proceedings of the Third Basque International Workshop on Information Technology, Biarritz, 2–4 juillet 1997, 148–156.

[16] Setthachaï Jungjariyanonn, Sylviane R. Schwer, Extended Boolean Computations. Proceedings of Workshop 8 (Spatial and Temporal Reasoning) of the 15th European Conference on Artificial Intelligence, Lyon, France, 23 juillet 2002, 63–68.

[17] Hans Kamp, Events, Instants and Temporal Reference, in Semantics from Different Points of View, Bäuerle, R., Egli, U., von Stechow, A. (eds), Springer-Verlag, pp. 376–417, 1979.

[18] E. Yu. Kandrashina, Representation of Temporal Knowledge. Proceedings of the 8th International Joint Conference on Artificial Intelligence, 8–12 August 1983, Karlsruhe, West Germany.

[19] Lina al-Khatib, Reasoning with non-convex time intervals. Thesis, Melbourne, Florida, September 1994.

[20] Peter Ladkin, Time Representation: A Taxonomy of Interval Relations. AAAI (1986) 360–366.

[21] Peter B. Ladkin and Roger D. Maddux, On Binary Constraint Problems, Journal of the ACM 41(3) (1994) 435–469. This paper is a substantial reworking of the technical report: On Binary Constraint Networks, by Peter B. Ladkin and Roger Maddux, Technical Report KES.U.88.8, Kestrel Institute, 1988.

[22] Gérard Ligozat, On generalized interval calculi, Proceedings of the AAAI, Anaheim, CA (1991) 234–240.

[23] Zohar Manna, Amir Pnueli, Verification of Concurrent Programs: The temporal proof principles. Automata, Languages and Programming, Lecture Notes in Computer Science 131, Springer, Berlin (1983) 200–252.

[24] Zohar Manna, Amir Pnueli, Proving precedence properties: The temporal way. Automata, Languages and Programming, Lecture Notes in Computer Science 154, Springer, Berlin (1983) 491–512.

[25] Klaus Nökel, Convex Relations Between Time Intervals. Proceedings of the AAAI, Boston, MA (1990) 721–727.

[26] Bernard A. Nudel, Consistent-labelling Problems and their Algorithms, Artificial Intelligence 21 (1983) 135–178.

[27] Étienne Picard, Étude des structures de données et des algorithmes pour une implantation des éléments et relations temporelles en vue du traitement automatique de la temporalité dans le langage naturel. Mémoire de DEA, Université Paris Sorbonne (Paris IV), septembre 2003.

[28] David A. Randell, Anthony G. Cohn, Zhan Cui, Computing Transitivity Tables: A Challenge For Automated Theorem Provers. Proc. CADE, LNCS, Springer-Verlag, 1992.

[29] Bertrand Russell, Our Knowledge of the External World, G. Allen & Unwin, London, 1914; reprinted 1924.

[30] Sylviane R. Schwer, Raisonnement Temporel : les mots pour le dire. Rapport interne LIPN, 1997.

[31] Sylviane R. Schwer, S-arrangements avec répétitions, Comptes Rendus de l'Académie des Sciences de Paris, Série I 334 (2002) 261–266.

[32] Neil J. A. Sloane, sequence A001850/M2942, An On-Line Version of the Encyclopedia of Integer Sequences, http://www.research.att.com/˜njas/sequences/eisonline.html

[33] Marc Vilain, A system for reasoning about time. Proceedings of the AAAI (1982) 197–201.

[34] Marc Vilain, Henry Kautz, Constraint Propagation Algorithms for Temporal Reasoning, Proceedings of the AAAI (1986) 377–382.

[35] Eric Weisstein, CRC Concise Encyclopedia of Mathematics, CRC Press, 1999.
left to right (the way of reading in European languages). Hence, assigning a letter to each temporal object, as its identity, and using as many occurrences of this identity as the object has points or interval bounds, it is possible to describe an atomic temporal relation between $n$ objects on the timeline, as far as there is no simultaneity, with a word on an n-alphabet (an alphabet with $n$ letters). Simultaneity requires being able to write several letters inside a same box. This is the aim of the theory of S-languages.
In this paper, we show how the S-languages framework allows one to represent $n$-ary qualitative temporal relations and to reason without any transitivity tables.
In the next part, we recall the basics of formal languages, following [3,13], and of S-languages, and we examine the usual operations of relational algebra [21] in the context of S-languages.
We then provide two examples of how to reason without transitivity tables. The first one is a revisitation of the well-known unsatisfiable closed network of Allen [2]. The second one revisits Manna and Pnueli's problem of the allocation of a resource between several requesters [24]. This aims to show how a problem of concurrency for complex systems, written in modal temporal logic, can be solved within the S-languages framework.
## 2. Formal Languages
Let us first recall some basis on formal languages.
## 2.1. Basis
An alphabet $X$ is a finite nonempty set of symbols called letters. A word (of length $k \geq 0$) over an alphabet $X$ is a finite sequence $x_1, \ldots, x_k$ of letters in $X$, usually written $x_1 \ldots x_k$. The unique word having no letter, i.e. of length zero, called the empty word, is denoted by $\varepsilon$. The length of a word $f$ is denoted by $|f|$, and the number of occurrences of a letter $a$ in the word $f$ by $|f|_a$. The set of all words (resp. of all words of length $n$) on $X$ is denoted by $X^*$ (resp. $X^n$); let us remark that $X^* = \bigcup_{n \geq 0} X^n$. A subset of $X^*$ is called a language. The empty set $\emptyset$ is the least language, and $X^*$ is the greatest language, for the order of inclusion.
Let $u$ and $v$ be words in $X^*$. If $u = u_1 \ldots u_r$ and $v = v_1 \ldots v_s$, then $u.v$ (usually written $uv$), called the concatenation of $u$ and $v$, is the word $u_1 \ldots u_r v_1 \ldots v_s$. For instance, let $X = \{x, y\}$, $u = xx$ and $v = yy$; then the concatenation is $uv = xxyy$. Let us notice that $uv \neq vu$. We also set $u^0 = \varepsilon$, $u^1 = u$, $u^{n+1} = uu^n$. One has $v.\varepsilon = \varepsilon.v = v$.
Concatenation can be extended to languages on $X$ by setting $L.L' = \{uv \mid u \in L, v \in L'\}$. This operation endows $2^{X^*}$ with a structure of non-commutative monoid. We also have $L^0 = \{\varepsilon\}$, $L^1 = L$, $L^{n+1} = LL^n$, $L^* = \bigcup_{n \geq 0} L^n$ and $u^* = \bigcup_{n \geq 0} u^n$. Even if $u^*$ is a set, it can be handled like an element, so we will take this liberty and use $u^*$ as a word or S-word.
The shuffle is a very useful operator in concurrency applications: it describes all the possibilities of performing two concurrent sequences of actions in a sequential manner. It is therefore not a binary operation on $X^*$, because from two words it provides a set of words, that is, a language. Its definition is the following: let $u$ and $v$ be two words over an alphabet $X$. The shuffle of $u$ and $v$ is the language $u \vee\!\!\vee v = \{\alpha_1\beta_1 \ldots \alpha_k\beta_k \in X^* \mid \alpha_1, \beta_k \in X^*,\ \alpha_2, \ldots, \alpha_k, \beta_1, \ldots, \beta_{k-1} \in X^+,\ u = \alpha_1 \ldots \alpha_k,\ v = \beta_1 \ldots \beta_k\}$. For instance, let $X = \{x, y\}$; then $xx \vee\!\!\vee yy = \{xxyy, xyxy, yxxy, xyyx, yxyx, yyxx\}$. The concatenation $uv$ imposes an order between $u$ and $v$; it is one word of the language $u \vee\!\!\vee v$. One has $\varepsilon \vee\!\!\vee v = v \vee\!\!\vee \varepsilon = v$ for any word $v$ of $X^*$. The shuffle can be naturally extended to languages on $X$ by setting $L \vee\!\!\vee L' = \bigcup_{u \in L, v \in L'} u \vee\!\!\vee v$. The shuffle endows $2^{X^*}$ with a structure of commutative monoid.
Words are read from left to right, so that reading induces a natural arrow of Time. Any occurrence of a letter can be viewed as an instant, numbered by its position inside the word. Trace languages or path expressions [8] are used for such a purpose: planning the order of execution of events. But with these languages, it is not possible to differentiate two occurrences that are concurrent (i.e. one may be before, at the same time or after the other) from those that must occur at the same time: these two events are said to commute. It is presupposed that the granularity of the time measurement is fine enough to avoid the case at the same time.
## 2.2. S-alphabets, S-words, S-languages

In order to model concurrency explicitly with words, various tools have been proposed, such as event structures or equivalence relations on words, i.e. traces. In those theories, it is not possible to model synchronization alone: one is able to say that two events can be done at the same time, but it is not possible to express that they have to be done at the same time. This is due to the fact that concurrency is modelled inside a deeply sequential framework; hence, synchronization is simulated with commutativity. But one has to handle instants, in the sense of Russell [29]. This is why we introduce the concept of an S-alphabet, which is a set of non-empty subsets of a usual alphabet.
## 2.2.1. Basic definitions

Let us set:

Definition 2.1 If $X$ is an alphabet, an S-alphabet over $X$ is a non-empty subset of $2^X - \{\emptyset\}$. An element of an S-alphabet is an S-letter. A word on an S-alphabet is an S-word. A set of words on an S-alphabet is an S-language.
S-letters are written either horizontally or vertically: the S-letter $\{a, b\}$ may be displayed as the column $\left\{\begin{smallmatrix}a\\ b\end{smallmatrix}\right\}$. For S-letters with only two letters, we also write $\begin{smallmatrix}a\\ b\end{smallmatrix}$, without the braces.
Examples of S-alphabets over $X$ are:

1. the natural one $\dot{X} = \{\{a\} \mid a \in X\}$, which is identified with $X$;
2. the full S-alphabet over $X$, i.e. $\widehat{X} = 2^X - \{\emptyset\}$;
3. S-alphabets obtained from other S-alphabets by the following construction: for an S-alphabet $Y$ over $X$, define $\overbrace{Y} = \{A \mid \exists A_1, \ldots, A_k \in Y : A = \bigcup_{i=1}^{k} A_i\}$; then $\overbrace{Y}$ is also an S-alphabet over $X$.

Note that, for all S-alphabets $Y$ and $Z$ over $X$, we have $\overbrace{\overbrace{Y}} = \overbrace{Y}$ and $\overbrace{\widehat{Y} \cup Z} = \overbrace{Y \cup Z}$. An S-word on the full S-alphabet over $X$ will simply be called an S-word on $X$.

In this work, we use the full S-alphabet $\widehat{X} = 2^X - \{\emptyset\}$. Identifying any singleton with its letter, we write $X \subset \widehat{X}$ and $X^* \subset \widehat{X}^*$.
In order to link S-words on $X$ with the letters of $X$, we set:

Definition 2.2 Let $X = \{x_1, \ldots, x_n\}$ be an n-alphabet and $f \in \widehat{X}^*$. We denote by $\|f\|_x$, for $x \in X$, the number of occurrences of $x$ appearing inside the S-letters of $f$, and by $\|f\|$ the integer $\sum_{1 \leq i \leq n} \|f\|_{x_i}$. The Parikh vector of $f$, denoted $\vec{f}$, is the n-tuple $(\|f\|_{x_1}, \ldots, \|f\|_{x_n})$.

_Example 1:_
$$f=\left\{\begin{matrix}a\\ b\end{matrix}\right\}cba\left\{\begin{matrix}a\\ c\end{matrix}\right\}c\left\{\begin{matrix}a\\ b\end{matrix}\right\}a\left\{\begin{matrix}a\\ b\\ c\end{matrix}\right\}aaaa$$
_is an S-word such that $\vec{f}=(10,4,4)$._
## 2.2.2. Concatenation and shuffle

The concatenation of two S-words or of two S-languages is defined exactly as in formal languages. The shuffle has to be generalized in the following way:

Definition 2.3 Let $X$ and $Y$ be two disjoint alphabets, $f \in \widehat{X}^*$, $g \in \widehat{Y}^*$. The S-shuffle of $f$ and $g$ is the language $[f\|g] = \{h_1 \ldots h_r \mid h_i \in \widehat{X \cup Y}$, with $\max(|f|, |g|) \leq r \leq |f| + |g|$, and such that there are decompositions $f = f_1 \ldots f_r$ and $g = g_1 \ldots g_r$ satisfying (i) $\forall i \in [r]: |f_i| \leq 1$ and $|g_i| \leq 1$, (ii) $1 \leq |f_i| + |g_i|$, and (iii) $h_i = f_i \cup g_i\}$.

For instance, $[aa\|bb] = \{f \in \widehat{\{a, b\}}^* \mid \vec{f} = (2, 2)\}$: the six plain shuffles $aabb$, $abab$, $abba$, $baab$, $baba$, $bbaa$, the six S-words consisting of one occurrence of the S-letter $\{a, b\}$ together with one $a$ and one $b$, and the S-word $\{a, b\}\{a, b\}$; thirteen S-words in all.
The S-shuffle of two S-languages $L$ and $L'$ written on two disjoint alphabets is the language $[L\|L'] = \bigcup_{f \in L, f' \in L'} [f\|f']$. The S-shuffle is, like the shuffle, an associative and commutative operation, which allows us to write $[u_1\| \cdots \|u_n]$ for the S-shuffle of $n$ S-words or S-languages.
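As a concrete check of Definition 2.3, the S-shuffle of two S-words over disjoint alphabets can be enumerated recursively: at each position, take the next letter of $f$, the next letter of $g$, or merge the two into one S-letter. A minimal sketch (S-letters modelled as Python frozensets; the function name is ours, not the paper's):

```python
def s_shuffle(f, g):
    """Enumerate [f || g] for S-words over disjoint alphabets.
    At each step: take the head of f, the head of g, or their union."""
    if not f:
        return [g]
    if not g:
        return [f]
    return ([[f[0]] + h for h in s_shuffle(f[1:], g)]
            + [[g[0]] + h for h in s_shuffle(f, g[1:])]
            + [[f[0] | g[0]] + h for h in s_shuffle(f[1:], g[1:])])

aa = [frozenset('a')] * 2
bb = [frozenset('b')] * 2
print(len(s_shuffle(aa, bb)))  # 13, the Delannoy number D(2,2)
```

Because the alphabets are disjoint, the three branches never produce the same S-word twice, so the count matches the Delannoy number directly.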
In the case where all S-words of a language $L$ share the same Parikh vector, we denote by $\vec{L}$ this common Parikh vector. In particular, on the n-alphabet $X = \{x_1, \cdots, x_n\}$, the language
$$\mathcal{L}(p_{1},\cdots,p_{n})=\{f\in\widehat{X}^{*}\mid\vec{f}=(p_{1},\cdots,p_{n})\}$$
which we call the $(p_1, \cdots, p_n)$-Delannoy language on $X$, is of particular interest for qualitative temporal reasoning, as we will show in the next section. Let us just recall [31] that the cardinality $D(p_1, \ldots, p_n)$ of a $(p_1, \cdots, p_n)$-Delannoy language is given by the following functional equation:
$$D(p_{1},\ldots,p_{n})=\sum_{\mathcal{P}red(p_{1},\ldots,p_{n})}D(\widetilde{p_{1}},\ldots,\widetilde{p_{n}})$$
where, for $p > 0$, $\widetilde{p} = \{p, p-1\}$, $\widetilde{0} = \{0\}$, and $\mathcal{P}red((p_{1},\cdots,p_{n})) = \{(\widetilde{p_{1}},\cdots,\widetilde{p_{n}})\} - \{(p_{1},\cdots,p_{n})\}$. In particular, $D(p, q) = D(p, q-1) + D(p-1, q-1) + D(p-1, q)$, with the initial values $D(0, 0) = D(0, 1) = D(1, 0) = 1$.
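The functional equation translates directly into a memoized recursion, each predecessor being obtained by decreasing a non-empty subset of the non-zero coordinates by one. A small sketch (our own illustration, not from the paper):

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def delannoy(*p):
    """Cardinality D(p1,...,pn) of the (p1,...,pn)-Delannoy language."""
    if all(x == 0 for x in p):
        return 1
    # predecessors: each coordinate either stays or decreases by 1 (if > 0),
    # excluding the tuple itself
    choices = [(x,) if x == 0 else (x, x - 1) for x in p]
    return sum(delannoy(*q) for q in product(*choices) if q != p)

print(delannoy(2, 2), delannoy(6, 6))  # 13 8989
```

The values 13 and 8989 match the counts of situations between two intervals and between two sequences of six points given later in the text.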
Like Pascal's table for computing binomial numbers, there is a Delannoy table for computing Delannoy numbers, given in Table 1.
| $p \backslash q$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15 | 17 |
| 2 | 1 | 5 | 13 | 25 | 41 | 61 | 85 | 113 | 145 |
| 3 | 1 | 7 | 25 | 63 | 129 | 231 | 377 | 575 | 833 |
| 4 | 1 | 9 | 41 | 129 | 321 | 681 | 1289 | 2241 | 3649 |
| 5 | 1 | 11 | 61 | 231 | 681 | 1683 | 3653 | 7183 | 13073 |
| 6 | 1 | 13 | 85 | 377 | 1289 | 3653 | 8989 | 19825 | 40081 |
| 7 | 1 | 15 | 113 | 575 | 2241 | 7183 | 19825 | 48639 | 108545 |
| 8 | 1 | 17 | 145 | 833 | 3649 | 13073 | 40081 | 108545 | 265729 |
| 9 | 1 | 19 | 181 | 1159 | 5641 | 22363 | 75517 | 224143 | 598417 |

Table 1
$D(p, q)$ are the well-known Delannoy numbers [9,35,32] that enumerate Delannoy paths in a $(p,q)$-rectangular chessboard.

<image>

A Delannoy path is a path drawn on a rectangular grid, starting from the southwest corner and going to the northeast corner, using only three kinds of elementary steps: north, east, and northeast. Hence they are minimal paths with diagonal steps. The natural correspondence between $(p,q)$-Delannoy paths and $\mathcal{L}(p, q)$ on the alphabet $\{a, b\}$ is: the S-letter $a$ corresponds to a north step, the S-letter $b$ to an east step, and the S-letter $\{a, b\}$ to a northeast step.
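Under this correspondence, enumerating $\mathcal{L}(p, q)$ amounts to enumerating the Delannoy paths step by step. A sketch (names ours), writing `a` for a north step, `b` for an east step and `[ab]` for the S-letter $\{a, b\}$ (a northeast step):

```python
def delannoy_words(p, q):
    """All S-words of L(p, q) over {a, b}, as strings."""
    if p == 0 and q == 0:
        return ['']
    words = []
    if p > 0:                      # north step: the S-letter a
        words += ['a' + w for w in delannoy_words(p - 1, q)]
    if q > 0:                      # east step: the S-letter b
        words += ['b' + w for w in delannoy_words(p, q - 1)]
    if p > 0 and q > 0:            # northeast step: the S-letter {a, b}
        words += ['[ab]' + w for w in delannoy_words(p - 1, q - 1)]
    return words

print(len(delannoy_words(2, 2)))  # 13
```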
## 2.2.3. Projection

We extend the well-known notion of projection from formal language theory to S-languages. The aim is to erase all occurrences of some letters in an S-word, obtaining as result a new S-word. The problem is how to handle an S-letter all of whose letters have been erased. For that purpose we set: let $X$ be an alphabet and $f \in \widehat{X}^*$; then $X_f = \{x \in X \mid \|f\|_x \neq 0\}$.

Definition 2.4 Let $X$ be an alphabet and $Y \subseteq X$. The S-projection from $\widehat{X}^*$ to $\widehat{Y}^*$ is the monoid morphism $\pi^X_Y$ defined by the image of the S-letters: for $s \subset X$, its image is $\pi^X_Y(s) = s \cap Y$ if this intersection is not empty, and $\varepsilon$ otherwise. The projection on $Y$ of an S-word $f$ is denoted $f_{|Y}$ instead of $\pi^X_Y(f)$.

Example 2 (Example 1 continued) Let $f = \left\{\begin{smallmatrix}a\\ b\end{smallmatrix}\right\}cba\left\{\begin{smallmatrix}a\\ c\end{smallmatrix}\right\}c\left\{\begin{smallmatrix}a\\ b\end{smallmatrix}\right\}a\left\{\begin{smallmatrix}a\\ b\\ c\end{smallmatrix}\right\}aaaa$. Then:

- $f_{|\{a\}} = aaaaaaaaaa$
- $f_{|\{b\}} = bbbb$
- $f_{|\{c\}} = cccc$
- $f_{|\{a,b\}} = \left\{\begin{smallmatrix}a\\ b\end{smallmatrix}\right\}baa\left\{\begin{smallmatrix}a\\ b\end{smallmatrix}\right\}a\left\{\begin{smallmatrix}a\\ b\end{smallmatrix}\right\}aaaa$
- $f_{|\{b,c\}} = bcbccb\left\{\begin{smallmatrix}b\\ c\end{smallmatrix}\right\}$
- $f_{|\{a,c\}} = aca\left\{\begin{smallmatrix}a\\ c\end{smallmatrix}\right\}caa\left\{\begin{smallmatrix}a\\ c\end{smallmatrix}\right\}aaaa$
- $f_{|\{a,b,c\}} = f$
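The S-projection of Definition 2.4 is straightforward to implement once S-words are modelled, say, as lists of sets (our own sketch, not the paper's code):

```python
def s_project(f, Y):
    """S-projection f|Y: intersect each S-letter with Y, dropping emptied ones."""
    return [s & Y for s in f if s & Y]

# the S-word f of Example 1, with S-letters as sets
f = [set(s) for s in ["ab", "c", "b", "a", "ac", "c",
                      "ab", "a", "abc", "a", "a", "a", "a"]]
print(s_project(f, {"b", "c"}) ==
      [{"b"}, {"c"}, {"b"}, {"c"}, {"c"}, {"b"}, {"b", "c"}])  # True
```

The printed comparison reproduces $f_{|\{b,c\}}$ of Example 2.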
## 3. Qualitative Temporal Objects And Relations In The Binary Algebra And Their Transitivity Tables
We examine qualitative temporal objects and relations inside the framework of relational algebra, as Ladkin and Maddux initiated it [21]. We recall the usual qualitative temporal binary algebras: the point algebra, the interval algebra [2], the point-interval algebra [33,34], and chain algebras.
In this paper, we use the term situation for the description of a unique temporal relation (complete information) between objects, which is sometimes called an atomic relation.
Let us recall the principle of a transitivity table. Given a particular theory $\Sigma$ supporting a set of mutually exhaustive and pairwise disjoint dyadic situations, three individuals $a$, $b$ and $c$, and a pair of dyadic relations $R_1$ and $R_2$ selected from $\Sigma$ such that $R_1(a, b)$ and $R_2(b, c)$, the transitive closure $R_3(a, c)$ represents a disjunction of all the possible dyadic situations holding between $a$ and $c$ in $\Sigma$. Each $R_3(a, c)$ result can be represented as one entry of a matrix, one for each ordered pair $R_1(a, b)$, $R_2(b, c)$. If there are $n$ dyadic situations supported by $\Sigma$, then there are $n \times n$ entries in the matrix. This matrix is a transitivity table. Transitivity tables for binary situations have been written
for convex intervals (Allen), for points, and for points and convex intervals: knowing an atomic relation between objects A and B and an atomic relation between objects B and C, derive the possible (that is, not prohibited) relations between objects A and C.
Cohn et al. [28] have studied transitivity tables for reasoning about both time and space in a more general context. They noted the difficulty of building such secure transitivity tables.
## 3.1. The Point Algebra

The three situations are the three basic temporal relations: before ($<$), equals ($=$) and after ($>$), as shown in Figure 1.

Fig. 1. The point qualitative situations of the black point with respect to the white point.

The set of point qualitative temporal binary relations is the set $\{<, >, =, \leq, \geq, \neq, \bot, \top\}$, where $\bot$ is the empty relation (no feasible relation) and $\top$ the universal relation (any relation is feasible). The transitivity table is given in Figure 2.

<image>

Fig. 2. The point transitivity table. For instance, if $A < B$ and $B > C$, then $A \top C$.
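The content of this table reduces to the compositions of the three basic relations. A sketch of the table-based reasoning the paper seeks to avoid (assuming the standard point algebra; names ours):

```python
# composition of basic point relations; values are sets of possible relations
COMPOSE = {
    ('<', '<'): {'<'},           ('<', '='): {'<'},  ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'},           ('=', '='): {'='},  ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'},  ('>', '>'): {'>'},
}

def compose(r1, r2):
    """Possible relations A?C given A r1 B and B r2 C (r1, r2: sets of basics)."""
    return set().union(*(COMPOSE[(x, y)] for x in r1 for y in r2))

print(compose({'<'}, {'>'}) == {'<', '=', '>'})  # True: A < B, B > C gives A T C
```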
## 3.2. The Interval Algebra

In Figure 3, we recall the thirteen situations between two intervals studied by Allen [1].
The transitivity table is given in Figure 4, where¹:

- $\square = \top$, that is, there is no constraint: every situation is allowed;
- $\lozenge = \top - \{<, m, m^\sim, >\}$, which means that the two intervals intersect on an interval; this is exactly what Kamp named the overlapping relation $\circ$ on two processes [17] and Freksa the contemporary relation [14];
- $\Gamma^\sim = \{<, m, o, s, d\}$, that is, the relation begin before;
- $\Lambda = \{<, m, o, d^\sim, f^\sim\}$, that is, the relation end after;
- $\alpha = \{<, m, o\}$, $\delta^\sim = \{o, s, d\}$, $\rho^\sim = \{o^\sim, d, f\}$, $\hat{s} = \{s, =, s^\sim\}$, $\hat{f} = \{f, =, f^\sim\}$;

and using the following property: $\forall A \subseteq \top$, $x \in A$ iff $x^\sim \in A^\sim$.

¹The notation is taken, whenever possible, from Delannoy paths, drawn as kinds of Greek letters on the chessboard, with the following convention: upper case for 5-subsets, lower case for 3-subsets.

<image>

Rel OR Rel∼
- equals (=) OR equals (=)
- started-by (s∼) OR starts (s)
- contains (d∼) OR during (d)
- finished-by (f∼) OR finishes (f)
- overlaps (o) OR overlapped-by (o∼)
- meets (m) OR met-by (m∼)
- before (<) OR after (>)

Fig. 3. The set of 13 situations between two intervals on a line.
<image>
Fig. 4. The interval transitivity table.
## 3.3. The Point-Interval Algebra

In order to take into account both instantaneous and durative processes, Vilain provided a model with points and intervals [33]. Figure 5 shows the five situations between a point and an interval.

<image>

Fig. 5. The 5 situations between a point and an interval (each point designates a situation).

Besides the two preceding transitivity tables, six more transitivity tables are needed:
(i) points/intervals-intervals/points,
(ii) points/intervals-intervals/intervals,
(iii) points/points-points/intervals,
(iv) intervals/intervals-intervals/points,
(v) intervals/points-points/intervals,
(vi) intervals/points-points/points.
## 3.4. Chain Algebras

The T-model of Kandrashina [18] has three qualitative basic notions: the point, the interval, and the sequence of intervals. Situations between two sequences of intervals are derived from situations between intervals. Some frequent situations have names, like the one shown in Figure 6.

Fig. 6. S1 Alternates S2

These objects have been revisited and studied in their own right by Ladkin [20] under the name of non-convex intervals. Ligozat [22] generalized them to sequences of points and/or intervals under the name of generalized intervals.

There are 3 situations between two points, 5 between a point and an interval, 13 between two intervals, and 8989 between two sequences of three intervals (or two sequences of 6 points). Ladkin [20, Theorem 1] proved that the number of situations between two chains of intervals is at least exponential in the number of intervals. The exact number of situations between a sequence of p points and a sequence of q points was provided in [6, p. 83], without making the connection with Delannoy numbers.
Freksa studied transitivity tables with respect to convex sets of intervals [14], and Randell et al. [28] studied transitivity tables for reasoning about both time and space in a more general context, both in order to lower the complexity of the computations. That was also the aim of Vilain et al., who studied the fragment of the interval algebra that can be written without disjunction inside the point algebra, based on the fact that relations between intervals can be translated, inside the point algebra, in terms of their bounds. An interval $A$ is a couple of its bounds $(a, \bar{a})$, viewed as points it may or may not contain, with the constraint $a < \bar{a}$. Situations between intervals are represented in terms of the situations of their bounds:

- $A$ before $B$ iff $a < \bar{a} < b < \bar{b}$
- $A$ meets $B$ iff $a < \bar{a} = b < \bar{b}$
- $A$ overlaps $B$ iff $a < b < \bar{a} < \bar{b}$
- $A$ starts $B$ iff $a = b < \bar{a} < \bar{b}$
- $A$ during $B$ iff $b < a < \bar{a} < \bar{b}$
- $A$ finishes $B$ iff $b < a < \bar{a} = \bar{b}$
- $A$ equals $B$ iff $a = b < \bar{a} = \bar{b}$
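This translation of interval situations into point relations between bounds can be sketched directly (our illustration; only the seven relations listed above, the six inverses being symmetric):

```python
def allen(a, a_, b, b_):
    """Basic Allen relation of A = (a, a_) w.r.t. B = (b, b_), from the bounds."""
    assert a < a_ and b < b_
    if a_ < b:                  return '<'    # before
    if a_ == b:                 return 'm'    # meets
    if a == b and a_ == b_:     return '='    # equals
    if a == b and a_ < b_:      return 's'    # starts
    if b < a and a_ < b_:       return 'd'    # during
    if b < a and a_ == b_:      return 'f'    # finishes
    if a < b and b < a_ < b_:   return 'o'    # overlaps
    return 'inv'  # one of the inverse relations >, m~, o~, s~, d~, f~

print(allen(0, 2, 1, 3))  # 'o'
```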
## 4. Qualitative Temporal Objects And Relations In The S-Languages Framework

## 4.1. Temporal Objects

All the temporal items previously reviewed are based on points or on maximal convex intervals, that is, on isolated points or on paired points. The idea is to assign an identity to each temporal object. The set of these identities is the alphabet on which the S-languages will be written. A temporal object with identity $a$ and $p$ bounds and/or isolated points is depicted by the (S-)word $a^p$. To distinguish between points and bounds, it is possible to mark the right bound of an interval: if a non-marked letter follows a non-marked letter, then the first one depicts a point. For instance, the S-word $a\,a\bar{a}\,a\,a\,a\bar{a}\,a\bar{a}$ depicts the sequence: point, interval, point, point, interval, interval.
<image>

Fig. 7. A relation between 3 chains of intervals

## 4.2. Temporal Relations

A relation between $n$ temporal items, using the alphabet $X = \{x_1, \cdots, x_n\}$, is an S-word on $\widehat{X}^*$ that describes exactly the situation of the points on the timeline described by the relation. For instance:

Example 3 (Examples 1 and 2 continued) Let A, B, C be three temporal items as depicted in Figure 7. On the alphabet $X = \{a, b, c\}$, item A is written aaaaaaaaaa, item B bbbb and item C cccc. The situation between them is given by the S-word f of Example 1, that is
$$f=\left\{\begin{matrix}a\\ b\end{matrix}\right\}cba\left\{\begin{matrix}a\\ c\end{matrix}\right\}c\left\{\begin{matrix}a\\ b\end{matrix}\right\}a\left\{\begin{matrix}a\\ b\\ c\end{matrix}\right\}aaaa.$$
Hence, in Example 2, we have computed:

- $f_{|\{a\}}$, which is item A,
- $f_{|\{b\}}$, which is item B,
- $f_{|\{c\}}$, which is item C,
- $f_{|\{a,b\}}$, which is the relation between A and B,
- $f_{|\{b,c\}}$, which is the relation between B and C,
- $f_{|\{a,c\}}$, which is the relation between A and C,
- $f_{|\{a,b,c\}}$, which is the relation between A, B and C.
The following theorem [31] is the most important for our purpose:

Theorem 4.1 For any integer $n \geq 1$, let $T_1, \cdots, T_n$ be $n$ temporal items, $X_n = \{x_1, \ldots, x_n\}$ be an alphabet and $x_1^{p_1}, \cdots, x_n^{p_n}$ their temporal words on $X_n$ (writing $x^p$ for the word $\underbrace{x \ldots x}_{p \text{ times}}$). Let us denote by $\Pi(p_1, \ldots, p_n)$ the set of all n-ary situations among $T_1, \cdots, T_n$; then $\mathcal{L}(p_1, \cdots, p_n)$ is its corresponding language.
In dimension 2, it is obvious that there is a natural correspondence between $\mathcal{L}(p, q)$ and Delannoy paths in a $(p,q)$-rectangular chessboard. The correspondence between interval situations, $\mathcal{L}(2, 2)$ and (2,2)-Delannoy paths is shown in Figure 9, inside the Nökel lattice [25]. The arrow means, for S-words, the Thue rewriting rules [3] $ab \to \{a, b\} \to ba$, which is exactly the

<image>

point lattice, as we can see in Figure 8. Autebert et al. have proved [4] that the S-language $\mathcal{L}(p, q)$ on the alphabet $\{a, b\}$ (that is, any set of situations between a sequence of $p$ points and a sequence of $q$ points, or Ligozat's $\Pi(p, q)$ set [22]) can be generated from the single S-word $a^p b^q$ and these Thue rewriting rules. They also rigorously proved that these rules endow the S-language of Parikh vector $(p, q)$ with a structure of distributive lattice, and they characterized the subset of union-irreducible S-words, namely $\{a^{p-1}b^{q-1} \mid p > 0,\ q > 0\} \cup \{a^{p-k}b^{l}a^{k}b^{q-l} \mid 0 < l \leq q,\ 0 < k \leq p\}$, which generates the lattice of ideals of the language; its cardinality is $2pq$.
Autebert and Schwer [5] generalized these results to the n-ary case, proving that $\mathcal{L}(p_1, \cdots, p_n)$, with the following Thue rewriting rules, is also a lattice, but not a distributive one, because not modular, as soon as $n \geq 3$, as Figure 10 shows. Given an arbitrary order over the letters of $X$, $a_1 < a_2 < \cdots < a_n$, this induces over the S-letters a partial order $P < Q \iff [\forall x \in P, \forall y \in Q : x < y]$. The Thue system, denoted $\longrightarrow$, on $\widehat{X}^*$ is then given by the following: for all $P, Q, R \in \widehat{X}$ such that $P < Q$ and $R = P \cup Q$, set $PQ \longrightarrow R$ and $R \longrightarrow QP$.
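A one-step implementation of this Thue system, assuming S-letters as frozensets and the alphabetical letter order (our sketch, not the paper's code):

```python
def less(P, Q, order):
    """P < Q: every letter of P precedes every letter of Q in the given order."""
    return all(order.index(x) < order.index(y) for x in P for y in Q)

def rewrite_once(word, order='abc'):
    """All S-words reachable in one step: PQ -> P∪Q, or R = P∪Q -> QP."""
    out = []
    for i in range(len(word) - 1):               # merge rule PQ -> P ∪ Q
        P, Q = word[i], word[i + 1]
        if less(P, Q, order):
            out.append(word[:i] + [P | Q] + word[i + 2:])
    for i, R in enumerate(word):                 # split rule R -> QP
        for x in R:
            P = frozenset(y for y in R if order.index(y) <= order.index(x))
            Q = R - P
            if Q and less(P, Q, order):
                out.append(word[:i] + [Q, P] + word[i + 1:])
    return out

ab = [frozenset('a'), frozenset('b')]
print([frozenset('ab')] in rewrite_once(ab))  # True: ab -> {a,b}
```

Applying the split rule to the result yields $ba$, reproducing the chain $ab \to \{a,b\} \to ba$ of the point lattice.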
<image>
<image>
## 4.3. Operations On Temporal Relations

In a relational algebra, relations are the basic objects on which operators operate. Apart from set operators like union, intersection and complementation, we have already seen two operations: the composition $\circ$ and the inverse $\sim$. The inverse operation exchanges the roles of the objects: $aSb \iff b S^\sim a$. There is another unary operation, related to the arrow of Time: the symmetry function, which inverses the arrow of Time. The symmetrical of $aSb$ is $a S^\sim b$. In the framework of S-languages, the transposition is the identity function; the symmetry function is the mirror one, that is, reading from right to left.

The third operation, the composition, is the fundamental operation for reasoning. We now show how the S-language framework avoids such material.

These operations have their correspondents inside the S-languages framework, but we prefer to simulate them with two new operators, from S-words to S-languages, which we now introduce. The first one, named integration, is the inverse of the projection; it is a unary operator. The second one is the main operator; it is close to the composition of relations. It aims to answer the following question: having three worlds X, Y, Z (with possible intersections), and having information $f$ in world X and information $g$ in world Y, what possible (i.e. not forbidden) information can be deduced from them in world Z? This operator is the one which allows avoiding transitivity tables.
Definition 4.2 For any alphabet $Z$ and any S-word $f \in \widehat{X}^*$:

- The free integration of the S-word $f$ on the alphabet $Z$, denoted $\int_Z f$, is the S-language
$$\int_{Z}f=[\pi_{X_{f}}^{X_{f}\cup Z}]^{-1}(f)=\{g\in\widehat{Z\cup X_{f}}^{*}\mid g_{|X_{f}}=f\}$$
- For any distinct letters $t_1, \cdots, t_n$ of $Z$, and $\nu = (t_1^{p_1}, \cdots, t_n^{p_n})$, the integration bounded to $\nu$ of the S-word $f$ on the alphabet $Z$, denoted $\int_Z^\nu f$, is the S-language
$$\int_{Z}^{\nu}f=(\int_{Z}f)\cap(\int_{Z\cup X_{f}}\mathcal{L}(p_{1},\ldots,p_{n}))=\{g\in\widehat{Z\cup X_{f}}^{*}\mid g_{|X_{f}}=f,\ \forall i: \|g\|_{t_{i}}=p_{i}\}$$

These definitions are extended to languages in the following natural way. The free integration of the S-language $L$ on the alphabet $Z$, denoted $\int_Z L$, is the S-language
$$\int_{Z}L=\bigcup_{f\in L}\int_{Z}f$$
The integration bounded to $\nu$ of the S-language $L$ on the alphabet $Z$, denoted $\int_Z^\nu L$, is the S-language
$$\int_{Z}^{\nu}L=\bigcup_{f\in L}\int_{Z}^{\nu}f$$
For instance (cf. the end of Section 2), let $Z = \{a, b, c\}$ and $\nu = (a^{10}, b^4, c^4)$. Then $\int_Z^\nu a^{10} = \int_Z^\nu b^4 = \int_Z^\nu c^4 = \mathcal{L}_Z(10, 4, 4)$, $\int_Z^\nu f_{|\{a,b\}} = [f_{|\{a,b\}} \,\|\, c^4]$ and $\int_Z^\nu f_{|\{b,c\}} = [f_{|\{b,c\}} \,\|\, a^{10}]$. We then have
$$\int_{Z}^{\nu}f_{|\{a,b\}}\cap\int_{Z}^{\nu}f_{|\{b,c\}}=\left\{\begin{matrix}a\\ b\end{matrix}\right\}cb\,([aa\|cc])\,\left\{\begin{matrix}a\\ b\end{matrix}\right\}a\left\{\begin{matrix}a\\ b\\ c\end{matrix}\right\}aaaa$$
which contains the word $f$.

We never compute the integration. It is just an artifact in order to have every constraint written on the same alphabet. The operation which costs the most is the intersection. In fact, we do not perform it directly. We operate a kind of join, which consists in (i) computing the set of letters common to the two integrands, (ii) verifying that all occurrences of these letters are ordered in the same manner under the two integrals, and (iii) shuffling the two subwords which lie between two such consecutive occurrences. Steps (i) and (ii) cause no problem. If the common letters are isolated (that is, not inside a shuffle part), the complexity of (iii) is linear, but in the worst case it can be exponential. We are studying convex parts of lattices and heuristics in order to improve the complexity of the computation, in the spirit of [11].
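The effect of the intersected bounded integrations can be checked on the example without ever materializing them: here an S-word $g$ belongs to the intersection exactly when its projections reproduce the two given constraints. A brute-force membership test (our sketch, with S-letters as sets):

```python
def project(g, Y):
    """S-projection g|Y (S-letters as sets)."""
    return [s & Y for s in g if s & Y]

def satisfies(g, constraints):
    """Membership test: g is a solution iff each projection matches."""
    return all(project(g, Y) == h for Y, h in constraints)

# the S-word f of Example 1
f = [set(s) for s in ["ab", "c", "b", "a", "ac", "c",
                      "ab", "a", "abc", "a", "a", "a", "a"]]
f_ab = project(f, {"a", "b"})     # the relation between A and B
f_bc = project(f, {"b", "c"})     # the relation between B and C
print(satisfies(f, [({"a", "b"}, f_ab), ({"b", "c"}, f_bc)]))  # True
```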
## 5. Reasoning Inside The S-Language Framework
It is usual in temporal applications that information arrives from many different sources, or that a same source completes the knowledge about a same set of intervals. The usual way to deal with that, when no weight of credibility or plausibility is given, is to intersect all the information. The knowledge about some set of intervals interferes with other sets of intervals by transitivity: if you know that Marie left before your arrival, and you are waiting for Ivan who is hoping to see Marie, you can tell him that he has missed her.
Vilain and Kautz [34] argued that there are two kinds of problems:
**Problem number 1** Let $R_1(A, C)$ and $R_2(A, C)$ be two sets of constraints between intervals $A$ and $C$; what is the resulting set of constraints for $A$ and $C$?

**Problem number 2** Let $A$, $B$, $C$ be three intervals and $R(A, B)$ and $R(B, C)$ the sets of constraints between $A$ and $B$ and between $B$ and $C$, respectively. What is the deduced set of constraints between $A$ and $C$?
The first problem requires a logical *and* operator or a set intersection operator. The second problem requires a transitivity operator based on tables.
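Both operators can be emulated with shuffles and intersections of languages of words. The following Python sketch is a minimal illustration, not the paper's implementation: each interval is encoded by two occurrences of its letter (its bounds), only the strict-precedence relation is shown, and the helper names are hypothetical. It solves an instance of the second problem without transitivity tables:

```python
def shuffle(u, v):
    """All interleavings of u and v."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)}
            | {v[0] + w for w in shuffle(u, v[1:])})

def integrate(lang, extra):
    """Integration: freely shuffle the occurrences of the single
    missing letter (the word `extra`) into every word of `lang`."""
    return {w for s in lang for w in shuffle(s, extra)}

def project(lang, keep):
    """Projection: erase every letter not in `keep`."""
    return {"".join(c for c in w if c in keep) for w in lang}

# Problem 2 (transitivity): "a precedes b" and "b precedes c".
R1 = {"aabb"}   # language of "a precedes b" over {a, b}
R2 = {"bbcc"}   # language of "b precedes c" over {b, c}
R = project(integrate(R1, "cc") & integrate(R2, "aa"), {"a", "c"})
# R == {"aacc"}: a precedes c
```

Both integrands live over the common alphabet $\{a,b,c\}$, so the intersection is a plain set intersection; the projection then forgets the shared interval $b$.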
In our framework, the answers to these two problems are described in exactly the same manner, the difference being just a matter of integration alphabet. Let $\mathcal{R}_1(a,b)$ [resp. $\mathcal{R}_2(b,c)$, $\mathcal{R}(a,c)$] denote the language associated with the constraint set $R_1(A,B)$ [resp. $R_2(B,C)$, $R(A,C)$]; the first answer is
$${\mathcal{R}}(a,c)=\pi_{Z}^{X}(\int_{X}^{(2,2,2)}{\mathcal{R}}_{1}(a,c)\cap\int_{X}^{(2,2,2)}{\mathcal{R}}_{2}(a,c))$$
and the second answer is
$${\mathcal{R}}(a,c)=\pi_{Z}^{X}(\int_{X}^{(2,2,2)}{\mathcal{R}}_{1}(a,b)\cap\int_{X}^{(2,2,2)}{\mathcal{R}}_{2}(b,c))$$
with in both cases $Z = \{a, c\}$. More generally, our main result, set in terms of intersection and integration, is:
**Theorem 5.1** Let $I = \{I_1, \dots, I_n\}$ be a set of $n$ temporal items, $X = \{x_1, \dots, x_n\}$ be the corresponding alphabet and $\nu = (x_1^{p_1}, \dots, x_n^{p_n})$ their Parikh vector. Let $J_1, \dots, J_k$ be $k$ non-empty subsets of $I$, $Y_1, \dots, Y_k$ their corresponding alphabets and $\nu_{Y_i}$ their Parikh vectors. For $1 \leq i \leq k$, let $\{\mathcal{L}_{i_1}, \dots, \mathcal{L}_{i_{s_i}}\} \subseteq L(\nu_{Y_i})$ be a set of languages describing $s_i$-ary temporal qualitative constraints among $J_i$.
- The all-solutions problem for $I$ is given by the language

$$\bigcap_{\substack{1\leq i\leq k\\ 1\leq j\leq s_{i}}}\int_{X}^{\nu}\mathcal{L}_{i_{j}}$$
- The temporal satisfaction problem for $I$ is satisfiable if and only if this language is not empty.
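Theorem 5.1 can be exercised directly on small instances. The Python sketch below uses illustrative names and assumptions not taken from the text: each item is reduced to two occurrences of its letter, constraints are given as (language, alphabet) pairs, and only strict precedence is shown:

```python
def shuffle(u, v):
    """All interleavings of u and v."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)}
            | {v[0] + w for w in shuffle(u, v[1:])})

def integrate(lang, missing):
    """Shuffle in p free occurrences of each missing letter x,
    one letter at a time (missing: dict letter -> multiplicity)."""
    out = set(lang)
    for x, p in missing.items():
        out = {w for s in out for w in shuffle(s, x * p)}
    return out

def all_solutions(constraints, nu):
    """constraints: list of (language, alphabet) pairs; nu: dict mapping
    each letter of X to its multiplicity.  Returns the intersection,
    over every constraint, of its integration over X (Theorem 5.1)."""
    langs = []
    for lang, Y in constraints:
        missing = {x: p for x, p in nu.items() if x not in Y}
        langs.append(integrate(lang, missing))
    return set.intersection(*langs)

def satisfiable(constraints, nu):
    """Temporal satisfaction: is the solution language non-empty?"""
    return bool(all_solutions(constraints, nu))

nu = {"a": 2, "b": 2, "c": 2}
chain = [({"aabb"}, {"a", "b"}), ({"bbcc"}, {"b", "c"})]
# all_solutions(chain, nu) == {"aabbcc"}; adding the contradictory
# constraint ({"ccaa"}, {"a", "c"}) makes the instance unsatisfiable
```

The exponential blow-up discussed in Section 4 shows up in `integrate`; in practice one would replace the full intersection by the join sketched there.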