Four-Legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm

Susanne Still, NEC Research Institute, 4 Independence Way, Princeton NJ 08540, USA, [email protected]
Klaus Hepp, Institute of Theoretical Physics, ETH Zurich, Switzerland
Bernhard Scholkopf, Microsoft Research, 1 Guildhall Street, Cambridge, UK, [email protected]
Rodney J. Douglas, Institute of Neuroinformatics, ETH/UNI Zurich, Switzerland

Abstract

To control the walking gaits of a four-legged robot we present a novel neuromorphic VLSI chip that coordinates the relative phasing of the robot's legs, similar to how spinal Central Pattern Generators are believed to control vertebrate locomotion [3]. The chip controls the leg movements by driving motors with time-varying voltages which are the outputs of a small network of coupled oscillators. The characteristics of the chip's output voltages depend on a set of input parameters. The relationship between input parameters and output voltages can be computed analytically for an idealized system. In practice, however, this ideal relationship holds only approximately, due to transistor mismatch and offsets. Fine tuning of the chip's input parameters is done automatically by the robotic system, using a recently introduced unsupervised Support Vector (SV) learning algorithm [7]. The learning requires only that a description of the desired output is given. The machine learns from (unlabeled) examples how to set the parameters of the chip in order to obtain a desired motor behavior.

1 Introduction

Modern robots still lag far behind animals in their capability for legged locomotion. Four-legged animals use distinct walking gaits [1], resulting for example in reduced energy consumption at high speeds [5]. Similarly, the use of different gaits can allow legged robots to adjust their walking behavior not only for speed but also to the terrain they encounter. Coordinating the rhythmic movement patterns necessary for locomotion is a difficult task involving a large number of mechanical degrees of freedom (DOF) and input from many sensors, and considerable advantages may be gained by emulating control architectures found in animals. Neuroscientists have found increasingly strong evidence during the past century to support the hypothesis that centers in the nervous system, called Central Pattern Generators (CPGs), generate rhythmic output responsible for coordinating the large number of muscles needed for locomotion [2]. CPGs are influenced by signals from the brain stem and the cerebellum, brain structures in which locomotive adaptation is believed to take place [11]. This architecture greatly simplifies the control problem for the brain: the brain only needs to set the general level of activity, leaving it to the CPG to coordinate the complex pattern of muscle activity required to generate locomotion [3]. We make use of these biological findings by implementing a similar control architecture to control a walking machine.

Figure 1: A: Sketch of the control architecture of the robot: an SV learning algorithm, a parameter setting program and a data acquisition program run on a computer that is connected, via a DAC and an input port, to the walking gait control chip, which drives the motors and reads the sensors. Thick arrows indicate the learning loop. B: Sketch of the simplified control architecture for locomotion of vertebrates: brain, CPG, muscles and sensors.
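As a concrete illustration of the CPG idea (ours, not part of the original system), the following minimal Python sketch simulates a chain of four phase oscillators with a fixed phase lag of 0.75 between neighbors and a duty cycle of 1/2, which is the walk pattern the robot learns later in the paper; all constants and variable names are illustrative assumptions.

```python
import numpy as np

# Idealized software analogue of a CPG chain: oscillator 1 acts as the
# master; each follower runs at the same frequency with a fixed phase lag.
# All constants are illustrative assumptions, not parameters of the chip.
period = 0.89          # oscillation period in seconds (value from Sec. 4)
phase_lag = 0.75       # phase lag between neighboring oscillators
duty_cycle = 0.5       # fraction of the period spent in the stance phase

dt = 0.001
t = np.arange(0.0, 2 * period, dt)
phases = [((t / period) - j * phase_lag) % 1.0 for j in range(4)]

# A leg is in stance while the phase of its oscillator is below the duty
# cycle; the stance windows of the four legs follow each other in the
# succession characteristic of a walk gait.
for j, phase in enumerate(phases):
    stance = phase < duty_cycle
    print(f"oscillator {j + 1}: stance fraction = {stance.mean():.2f}")
```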
A neuromorphic Gait Controller (GC) chip produces time-varying output voltages that control the movement of the robot's legs. Their frequency, phase relationships and duty cycles determine the resulting walking behavior (step frequency, walking gait and direction of motion) and depend on a small set of control parameters. For an ideal system, the relationship between control parameters and output voltages can be determined analytically, but deviations of the chip from the ideal have to be compensated for by other means. Since the goal here is that the resulting machine works autonomously, we propose a learning procedure to solve this problem. The robot is given the specifications of the desired movement sequence and explores parameter combinations in the input parameter space of its GC chip, keeping only those leading to a movement that is correct within some tolerance. It then locates the region in input parameter space that contains most of these parameter combinations, using an algorithm [7] that extends SV learning to unlabelled data.

2 The robotic system

The robotic system consists of (i) a body with one degree of freedom per leg (see footnote 1) and a potentiometer attached to each motor that serves as a sensor providing information about the angular displacement of the leg, (ii) the neuromorphic Gait Controller (GC) chip and (iii) a PC on which algorithms are run to (a) acquire data from chip and sensors, (b) set the chip's input parameters and (c) implement the learning algorithm. The control architecture is inspired by the architecture used for locomotion in vertebrates (see Fig. 1 and Sec. 1). As in biology, the existence of the GC chip considerably simplifies the control task: the computer only needs to set the input parameters of the GC chip, leaving the chip to coordinate the pattern of motor movements necessary to generate locomotion of the robot. The circuitry on the GC chip is based on an analysis [8] of circuits originating from M. W. Tilden (e.g. [4]). The GC chip contains five oscillators which can be inter-connected in different ways. The chip can be used in three reasonable configurations, one of which, a chain of four oscillators (Fig. 2), is used in the present work. Each oscillator (see Fig. 3) consists of two similar sub-circuits. In the following, subscripts $i \in \{1,\dots,4\}$ denote the oscillator identity and subscripts $k \in \{l,r\}$ denote one of the two sub-circuits within an oscillator; here $l$ stands for the left side of the oscillator circuit and $r$ for the right side. Each sub-circuit has a capacitor connecting an input node to a node $V_{i,k}$, to which the input node of an inverter is connected. The output node of the inverter is called $V_{out,i,k}$.

Footnote 1: The robot's aluminum body is 12 cm long and 6.2 cm wide. It has four DC motors which drive aluminum legs attached at right angles to the plane of the body. Each leg ends in a foot that contains a small electromagnet which is activated during the stance phase of the leg and deactivated during the swing phase. Leg and foot together have a length of 6.5 cm. The robot walks on a metal floor so that the electromagnet increases the friction during the stance phase. For further details see [7].

Figure 2: Sketch of the configuration in which the chip is used: a chain of four coupled oscillators whose outputs drive the left front, right front, left hind and right hind legs. Each of the thin lines is connected to a pad. The round symbols stand for oscillators, numbered corresponding to the text.
The thick arrows stand for the transmission gates which couple the oscillators (see circuit diagram in Fig. 3), and the arrows that lead to the four legs represent the outputs of the oscillators.

Finally, an n-FET transistor with gate voltage $V_{b,i,k}$ is connected between $V_{i,k}$ and ground. An oscillator is obtained by connecting the input node of one sub-circuit to the output node of the other and vice versa. The output voltages of a single oscillator are two mirror-image step functions at $V_{out,i,l}$ and $V_{out,i,r}$. These voltages control the stepping movements of one leg. Two oscillators, $j$ and $j+1$ ($j \in \{1,\dots,3\}$), are coupled with two transmission gates. One is connected between $V_{out,j,l}$ and $V_{j+1,l}$; the current that flows through it depends on the bias voltage $V_{b,j\,j+1,l}$. Likewise, the other transmission gate connects $V_{out,j,r}$ and $V_{j+1,r}$ and has the bias voltage $V_{b,j\,j+1,r}$. Note that the coupling is asymmetric, affecting only oscillator $j+1$. The voltages at the input nodes to the inverters of oscillator $j$ are not affected by the coupling, since the inverters act as impedance buffers. The chip's output is characterized by the frequency (common to all oscillators), the four duty cycles of the oscillators and three phase lags between oscillators. The phase lags determine which gait the robot adopts. The duty cycles of the oscillators set the ratio between stance and swing phase of the legs; certain combinations of duty cycles differing from 50% make the robot turn [8]. For a set of constant input parameters, $\{V_{b,i,r}, V_{b,i,l}, V_{b,j\,j+1,r}, V_{b,j\,j+1,l}\}$, a rhythmic output is produced with oscillation period $P$, duty cycles $D_i$ and phase shifts $\phi_j$, where $i \in \{1,\dots,4\}$ and $j \in \{1,\dots,3\}$. Analysis of the resulting circuit reveals [8] how the output characteristics of the chip depend on the input parameters. Assume that all transistors on the chip are identical and that the peak voltages at node $V_{1,l}$ and at node $V_{1,r}$ are identical and equal to $V_{max}$. For a certain range of input parameters, the period of the oscillators is given by the period of the first oscillator in the chain (called the master oscillator),

$$P = \frac{C\,(V_{max} - v_{th})}{I_{on}}\left(e^{-\frac{q}{kT}\kappa V_{b,1,l}} + e^{-\frac{q}{kT}\kappa V_{b,1,r}}\right) \qquad (1)$$

where $C = 5.159 \times 10^{-10}$ F is the capacitance and $I_{on}$ is the drain-source leakage current of the n-FET. The threshold voltage of the inverter, $v_{th} = 1.345$ V, is calculated from the process parameters [9]. $V_{max} = 3.23$ V, $I_{on} = 2.2095 \times 10^{-16}$ A and $\kappa = 0.6202$ are estimated with a least squares fit to the data (Fig. 4a). $T$ is the temperature, $k$ the Boltzmann constant and $q$ the electron charge. Let the duty cycle be defined as the fraction of the period during which $V_{out,i,l}$ is high. The master oscillator's duty cycle is

$$D_1 = \frac{1}{1 + e^{\frac{q}{kT}\kappa\,(V_{b,1,r} - V_{b,1,l})}} \qquad (2)$$

A very simple requirement for the controller is to produce a symmetric waveform for straight forward locomotion. For this, all oscillators must have a duty cycle of 1/2 (= 50%) [8]. This can be implemented by a symmetric circuit (identical control voltages on both right and left side: $V_{b,j\,j+1,l} = V_{b,j\,j+1,r} =: V_{b,j\,j+1}$ for all $j \in \{1,\dots,3\}$ and $V_{b,i,l} = V_{b,i,r} =: V_{b,i}$ for all $i \in \{1,\dots,4\}$). For simplicity, let $V_{b,i} = V_b$ for all $i$. Then the phase lag between oscillators $j$ and $j+1$ is given by (compare Fig. 4b)

$$\phi_j = \frac{1}{2} + \frac{kT/q}{2\,(V_{max} - v_{th})}\,\ln\!\left[\frac{\gamma(v_{th}) - \mu(v_{th})\,e^{\frac{q}{kT}\kappa(V_{b,j\,j+1} + V_b)}}{\left(\beta\,e^{-\frac{q}{kT}\kappa(V_{b,j\,j+1} + V_b)} - 1\right)\left(\gamma(V_0) - \mu(V_0)\,e^{\frac{q}{kT}\kappa(V_{b,j\,j+1} + V_b)}\right)}\right] \qquad (3)$$

where $V_0 = 0.1$ V, $\beta = (I_{op}/I_{on})\,e^{\frac{q}{kT}\kappa V_{dd}}$, $\gamma(V) = (I_{on} + I_{op}\,e^{\frac{q}{kT}V})\,e^{\frac{q}{kT}\kappa V_{dd}}$ and $\mu(V) = I_{on}\,e^{\frac{q}{kT}V}$.
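As a numerical sanity check of Eqs. (1) and (2), the sketch below evaluates the predicted period and duty cycle of the master oscillator from its bias voltages, using the fitted constants quoted above. It implements only the ideal-transistor model, not the fabricated chip; the example bias voltage is an assumed value chosen to give a period near the 0.89 s used later.

```python
import numpy as np

# Constants quoted in the text for the ideal-transistor model of the chip.
C     = 5.159e-10    # capacitance in farads
V_TH  = 1.345        # inverter threshold voltage in volts
V_MAX = 3.23         # peak voltage at the oscillator nodes in volts
I_ON  = 2.2095e-16   # drain-source leakage current in amperes
KAPPA = 0.6202
KT_Q  = 0.0259       # thermal voltage kT/q at room temperature, in volts

def period(v_bl, v_br):
    """Oscillation period P of the master oscillator, Eq. (1)."""
    return (C * (V_MAX - V_TH) / I_ON) * (
        np.exp(-KAPPA * v_bl / KT_Q) + np.exp(-KAPPA * v_br / KT_Q))

def duty_cycle(v_bl, v_br):
    """Duty cycle D1 of the master oscillator, Eq. (2)."""
    return 1.0 / (1.0 + np.exp(KAPPA * (v_br - v_bl) / KT_Q))

# Symmetric biases give a duty cycle of 1/2; the bias value is assumed.
print(period(0.67, 0.67), duty_cycle(0.67, 0.67))
```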
Figure 3: Two oscillators, $j$ and $j+1$, are coupled through a transmission gate. The gate voltage on the n-FET of each transmission gate is set to the complementary voltage of the p-FET of the same transmission gate by the circuits drawn next to the transmission gates. These circuits are controlled by the bias voltages $V_{b,j\,j+1,l}$ and $V_{b,j\,j+1,r}$ and copy these voltages to nodes 2 and 4, respectively, while the voltages at nodes 1 and 3 are $(V_{dd} - V_{b,j\,j+1,l})$ and $(V_{dd} - V_{b,j\,j+1,r})$, respectively. The symbols on the right correspond to the symbols in Fig. 2.

3 Learning

In theory, all duty cycles should be 1/2 for the symmetric circuit. In the real system, the duty cycle changes with the phase lag (see Fig. 4c) due to transistor mismatch. Thus, to obtain a duty cycle of 1/2, $V_{b,j\,j+1,l}$ might have to differ from $V_{b,j\,j+1,r}$. Parameter points which lead to both a desired phase lag and a duty cycle of 1/2 lie in a two-dimensional space spanned by $V_{b,j\,j+1,l}$ and $V_{b,j\,j+1,r}$. These parameters are learned (see footnote 2). First, a subset $X$ of this input parameter space is chosen according to the estimate given by (3). $X$ is scanned and, at each point, the output characteristics of the GC chip are determined. If they match the desired output characteristics (within specified tolerances), this point is added to the training data set $V \subset X$. After the scan is completed, the training data is transformed by a feature map $\Phi: X \to F$ into a feature space $F$ such that, for $x, y \in X$, the dot product $(\Phi(x)\cdot\Phi(y))$ can be computed by evaluation of the Gaussian kernel (which fulfills Mercer's condition [6])

$$k(x,y) = (\Phi(x)\cdot\Phi(y)) = e^{-\|x-y\|^2/2\sigma^2} \qquad (4)$$

Footnote 2: The desired duty cycle of 1/2 is an example, leading to forward locomotion of the test robot. In the same way, any other value for the duty cycle can be learned (for examples see [8]).

Figure 4: (a): Oscillation period $P$ (points = data, 3% error) as a function of the bias voltage $V_{b,i,l} = V_{b,i,r} =: V_b$. $V_{b,i,l} = V_{b,i,r}$ implies that the duty cycle of the oscillation is 1/2 (see (2)). The oscillation period follows (1) (solid line). (b): Phase lag between the first two oscillators in a chain of 4 oscillators, plotted against $V_{b,12}$; the function given in (3) (solid line) and data points. $V_{max}$ and $I_{on}$ are as determined in (a); $I_{op}$, as estimated by data fitting, is $1.56 \times 10^{-19}$ A. (c): Duty cycle of the second oscillator in a chain of 4 oscillators as a function of the phase lag $\phi_1$ between oscillators 1 and 2.
In feature space, a hyperplane $(w\cdot\Phi(x)) - \rho = 0$ separating most of the data from the origin with large margin is found by solving the constrained optimization problem (see [7])

$$\min_{w\in F,\ \xi\in\mathbb{R}^l,\ \rho\in\mathbb{R}} \quad \frac{1}{2}\|w\|^2 + \frac{1}{\nu l}\sum_{i=1}^{l}\xi_i - \rho \qquad (5)$$

$$\text{subject to} \quad (w\cdot\Phi(v_i)) \geq \rho - \xi_i, \qquad \xi_i \geq 0 \qquad (6)$$

A decision function $f$ is computed, which is $+1$ on a region in input space capturing most of the training data points and $-1$ elsewhere. The approximate geometrical center of this region is used as the input to the GC chip. The algorithmic implementation of the learning procedure uses the quadratic optimizer LOQO, implementing a primal-dual interior-point method [10]. The parameter $\nu$ upper-bounds the fraction of outliers (see [7], Proposition 4), which is related to the noise that the training data is subject to. In our experiments, $\nu = 0.2$ is chosen such that the algorithm disregards approximately as many points as can be expected to be falsely included in the training data, given the noise of the data acquisition.
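The optimization (5)-(6) is what is now commonly called a one-class SVM. The sketch below reproduces the region-finding step with scikit-learn's OneClassSVM on synthetic "good parameter" points; the data, the kernel width, and the use of this particular library are our assumptions for illustration, not the paper's original LOQO-based implementation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set V: bias-voltage pairs
# (V_b,12,l, V_b,12,r) that passed the tolerance test during the scan.
good_points = rng.normal(loc=[4.27, 4.31], scale=0.005, size=(60, 2))

# nu = 0.2 upper-bounds the fraction of outliers, as chosen in the paper;
# gamma = 1/(2*sigma^2) encodes an assumed kernel width sigma = 0.01 V.
clf = OneClassSVM(kernel="rbf", nu=0.2, gamma=1.0 / (2 * 0.01 ** 2))
clf.fit(good_points)

# Approximate geometrical center of the +1 region: the mean of the
# training points that the learned decision function accepts.
accepted = good_points[clf.predict(good_points) == 1]
print("bias voltages to apply to the chip:", accepted.mean(axis=0))
```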
4 Results

As an example, the input parameters are learned for a forward walk, requiring phase shifts of $\phi_1 = \phi_2 = \phi_3 = 0.75$ and duty cycles of $D_1 = D_2 = D_3 = D_4 = 0.5$. The oscillation period $P = 0.89$ s and the duty cycle $D_1 = 0.5$ are set according to (1) and (2). The value of $P$ takes the mechanics of the robot into account [8]. The scanning step size is 2 mV and the tolerances are chosen to be $\pm 0.015$ for the phase lags and $\pm 0.05$ for the duty cycles. The parameters $V_{b,j\,j+1,l}$ and $V_{b,j\,j+1,r}$ are learned in sequence, first for $j = 1$ (see Fig. 5). The result is applied to the GC chip. Then $V_{b,23,l}$ and $V_{b,23,r}$ are learned and the result is also applied to the GC chip. Finally, $V_{b,34,l}$ and $V_{b,34,r}$ are learned. All input parameters of the GC chip are set to the learned values and the robot moves forward using a walk gait (see Fig. 6). The phase relationships of the robot's leg movements are measured. Simultaneously, the robot's trajectory is tracked using a video camera monitoring the robot from above, and two Light Emitting Diodes (LEDs) attached to the robot's front and rear. The robot has learned to move in the forward direction, using a walk gait, as desired, despite the inability to theoretically predict the exact values of the GC chip's bias voltages.

Figure 5: Result of learning values of the bias voltages $V_{b,12,l}$ and $V_{b,12,r}$ which lead to $\phi_1 = 0.75$ and $D_2 = 0.5$. Correctly classified (stars) and misclassified (crosses) training data (left) and test data (right). Outlined regions: learned by the algorithm from the training data. The training data is obtained from one scan of the displayed rectangular region $X$; the test data is a set of points obtained from three scans.

5 Discussion

We have introduced a novel neuromorphic chip for inter-leg coordination of a walking machine. This chip successfully controls a four-legged robot. A Support Vector algorithm enables the robotic system to learn a desired movement sequence. We have demonstrated this here using the walk gait as an example; other gaits have also been learned [8]. The architecture we used reduced the learning of a complex motor behavior to a classification task. The classifier we used requires only a few examples, making the learning efficient, and it can handle noisy data, making the learning robust.

The chip need not be interfaced to a computer; it can control the robot without any need of software once the input parameters of the chip are known. Note that the chip's bias voltages can also be changed by simple sensors in a direct way, enabling the robot to adapt its behavior according to sensory information. This point is elaborated in [8]. However, the chip-computer interface creates a hybrid system in which the complex movement pattern required to make a four-legged machine locomote is controlled by the chip, while algorithms running on the computer can focus on more demanding tasks. This architecture enables the robotic system to exploit the motor abilities it has due to the GC chip, independent of the particular physical shape of the robot. The hybrid system could also be useful for the development of a second generation of neuromorphic motor control chips, able to solve more complex tasks. Furthermore, the control circuit could easily be extended to the control of six (or more) legs simply by the addition of two (or more) oscillators, without increasing drastically in complexity, as the number of control parameters is small and scales linearly with the number of oscillators. Similarly, the circuit could be expanded to control n-jointed legs if each of the four oscillators becomes itself the master of a chain of n oscillators. Finally, the learning procedure introduced here could be used as a general method for fine tuning of neuromorphic aVLSI chips.

Acknowledgments

S. S. is grateful to the late Misha Mahowald for inspiring discussions and indebted to Mark W. Tilden for discussing his circuits. We thank Adrian M. Whatley for useful comments and technical assistance. For helpful discussions we thank William Bialek, Gert Cauwenberghs, Giacomo Indiveri, Shih-Chii Liu, John C. Platt, Alex J. Smola, John Shawe-Taylor and Robert C. Williamson. S. S. was supported by CSEM, Neuchâtel, the Physics Department of ETH Zurich and the SPP of the Swiss National Science Foundation.

Figure 6: Left: Control voltages (upper plot) and angular displacements of the legs as measured by potentiometers attached to the motors (lower plot) as a function of time, shown for one cycle of the rhythmic movement. The four legs are distinguished by the abbreviations: left front (LF; dots), right front (RF; circles), left hind (LH; crosses) and right hind (RH; stars). The legs move in succession with a phase shift of 90 degrees: LF, RH, RF, and finally LH, a typical walk sequence [1]. Note that the data is acquired with a limited sampling rate; thus the duty cycles of the control voltages appear to deviate from 50%. However, the data on the right shows that the duty cycles are sufficiently close to 50% to cause the robot to walk forward in a straight line, as desired. Right: Position of the robot's center of gravity as a function of time; upper plot: x-coordinate, lower plot: y-coordinate. Errors are due mainly to the extension of the images of the LEDs in the image frames obtained from the CCD camera. The y-coordinate is constant within the error, which shows that the robot moves forward on a straight line. The robot moves at roughly 4.7 cm/s.

References

[1] R. McN. Alexander, The Gaits of Bipedal and Quadrupedal Animals. Intl. J. Robotics Research, 1984, 3, pp. 49-59
[2] F. Delcomyn, Neural Basis of Rhythmic Behaviour in Animals. Science, 1980, 210, pp. 492-498
[3] S. Grillner, 1981, Control of locomotion in bipeds, tetrapods and fish. In: Handbook of Physiology II, M. D. Bethesda (ed.), Am. Physiol. Soc., pp. 1179-1236; S. Grillner, 1998, Vertebrate Locomotion - A Lamprey Perspective. In: Neuronal Mechanisms for Generating Locomotor Activity, O. Kiehn et al. (eds.), New York Academy of Science
[4] B. Hasslacher & M. W. Tilden, Living Machines. Robotics and Autonomous Systems: The Biology and Technology of Intelligent Autonomous Agents, 1995, L. Steels (ed.), Elsevier; S. Still & M. W. Tilden, Controller for a four legged walking machine. In: Neuromorphic Systems, 1998, L. S. Smith & A. Hamilton (eds.), World Scientific
[5] D. F. Hoyt & R. C. Taylor, Gait and the energetics of locomotion in horses. Nature, 1981, 292, pp. 239-240
[6] J. Mercer, Functions of positive and negative type and their connection with the theory of integral equations. Phil. Trans. Roy. Soc. London A, 1909, 209, pp. 415-446
[7] B. Scholkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola and R. C. Williamson, Estimating the Support of a High-Dimensional Distribution. Technical Report MSR-TR-99-87, Microsoft Research, Redmond, WA, 1999. To appear in Neural Computation.
[8] S. Still, Walking Gait Control for Four-legged Robots, PhD Thesis, ETH Zürich, 2000
[9] N. H. E. Weste & K. Eshraghian, Principles of CMOS VLSI Design, 1993, Addison Wesley
[10] R. J. Vanderbei, LOQO User's Manual - Version 3.10, Technical Report SOR-97-08, Princeton University, Statistics and Operations Research, 1997
[11] D. Yanagihara, M. Udo, I. Kondo and T. Yoshida, Neuroscience Research, 1993, 18, pp. 241-244
Learning curves for Gaussian processes regression: A framework for good approximations

Dorthe Malzahn, Manfred Opper
Neural Computing Research Group, School of Engineering and Applied Science, Aston University, Birmingham B4 7ET, United Kingdom. {malzahnd, opperm}@aston.ac.uk

Abstract

Based on a statistical mechanics approach, we develop a method for approximately computing average-case learning curves for Gaussian process regression models. The approximation works well in the large sample size limit and for arbitrary dimensionality of the input space. We explain how the approximation can be systematically improved and argue that similar techniques can be applied to general likelihood models.

1 Introduction

Gaussian process (GP) models have gained considerable interest in the Neural Computation community (see e.g. [1, 2, 3, 4]) in recent years. Being non-parametric models by construction, their theoretical understanding seems to be less well developed compared to simpler parametric models like neural networks. We are especially interested in developing theoretical approaches which will at least give good approximations to generalization errors when the number of training data is sufficiently large. In this paper we present a step in this direction which is based on a statistical mechanics approach. In contrast to most previous applications of statistical mechanics to learning theory, we are not limited to the so-called "thermodynamic" limit, which would require a high dimensional input space. Our work is very much motivated by recent papers of Peter Sollich (see e.g. [5]), who presented a nice approximate treatment of the Bayesian generalization error of GP regression which actually gives good results even in the case of a one-dimensional input space. His method is based on an exact recursion for the generalization error of the regression problem, together with approximations that decouple certain correlations of random variables. Unfortunately, the method seems to be limited, because the exact recursion is an artifact of the Gaussianity of the regression model and is not available for other cases such as classification models. Second, it is not clear how to assess the quality of the approximations made and how one may systematically improve on them. Finally, the calculation is (so far) restricted to a full Bayesian scenario, where a prior average over the unknown data generating function simplifies the analysis. Our approach has the advantage that it is more general and may also be applied to other likelihoods. It allows us to compute other quantities besides the generalization error. Finally, it is possible to compute the corrections to our approximations.

2 Regression with Gaussian processes

To explain the Gaussian process scenario for regression problems [2], we assume that we observe corrupted values $y(x) \in \mathbb{R}$ of an unknown function $f(x)$ at input points $x \in \mathbb{R}^d$. If the corruption is due to independent Gaussian noise with variance $\sigma^2$, the likelihood for a set of $m$ example data $D = (y(x_1), \dots, y(x_m))$ is given by

$$P(D|f) = \frac{\exp\left(-\sum_{i=1}^{m}\frac{(y_i - f(x_i))^2}{2\sigma^2}\right)}{(2\pi\sigma^2)^{m/2}} \qquad (1)$$

where $y_i \equiv y(x_i)$. The goal of a learner is to give an estimate of the function $f(x)$. The available prior information is that $f$ is a realization of a Gaussian process (random field) with zero mean and covariance $C(x,x') = E[f(x)f(x')]$, where $E$ denotes the expectation over the Gaussian process. We assume that the prediction at a test point $x$ is given by the posterior expectation of $f(x)$, i.e.
$$\hat{f}(x) = E\{f(x)|D\} = \frac{E\,f(x)\,P(D|f)}{Z} \qquad (2)$$

where the partition function $Z$ normalises the posterior. Calling the true data generating function $f^*$ (in order to distinguish it from the functions over which we integrate in the expectations), we are interested in the learning curve, i.e. the generalization (mean square) error averaged over independent draws of example data, $\varepsilon_g = [\langle (f^*(x) - \hat{f}(x))^2\rangle]_D$, as a function of $m$, the sample size. The brackets $[\dots]_D$ denote averages over example data sets, where we assume that the inputs $x_i$ are drawn independently at random from a density $p(x)$. $\langle\cdots\rangle$ denotes an average over test inputs drawn from the same density. Later, the same brackets will also be used for averages over several different test points and for joint averages over test inputs and test outputs.

3 The Partition Function

As typical of statistical mechanics approaches, we base our analysis on the averaged "free energy" $[-\ln Z]_D$, where the partition function $Z$ (see Eq. (2)) is

$$Z = E\,P(D|f). \qquad (3)$$

$[\ln Z]_D$ serves as a generating function for suitable posterior averages; the concrete application to $\varepsilon_g$ will be given in the next section. The computation of $[\ln Z]_D$ is based on the replica trick $\ln Z = \lim_{n\to 0}\frac{Z^n - 1}{n}$, where we compute $[Z^n]_D$ for integer $n$ and perform the continuation at the end. Introducing a set of auxiliary integration variables $z_{ka}$ in order to decouple the squares, we get

$$[Z^n]_D = \int\prod_{k,a}\frac{dz_{ka}}{\sqrt{2\pi}}\,\exp\Big(-\frac{\sigma^2}{2}\sum_{k,a}z_{ka}^2\Big)\,\Big[\mathbf{E}_n\exp\Big(i\sum_{k,a}z_{ka}\big(f_a(x_k)-y_k\big)\Big)\Big]_D \qquad (4)$$

where $\mathbf{E}_n$ denotes the expectation over the $n$-times replicated GP measure. In general, it seems impossible to perform the average over the data; using a cumulant expansion, an infinite series of terms would be created. However, one may be tempted to try the following heuristic approximation: if (for fixed function $f$) the distribution of $f(x_k) - y_k$ were a zero-mean Gaussian, we would simply end up with only the second cumulant and

$$[Z^n]_D \approx \int\prod_{k,a}\frac{dz_{ka}}{\sqrt{2\pi}}\,\exp\Big(-\frac{\sigma^2}{2}\sum_{k,a}z_{ka}^2\Big)\,\mathbf{E}_n\exp\Big(-\frac{1}{2}\sum_{a,b}\sum_k z_{ka}z_{kb}\,\langle(f_a(x)-y)(f_b(x)-y)\rangle\Big) \qquad (5)$$

Although such a reasoning may be justified in cases where the dimensionality of the inputs $x$ is large, the assumption of approximate Gaussianity is typically (in the sense of the prior measure over functions $f$) completely wrong for small dimensions. Nevertheless, we will argue in the next section that the expression Eq. (5) (justified by a different reason) is a good approximation for large sample sizes and nonzero noise level. We will postpone the argument and proceed to evaluate Eq. (5) following a fairly standard recipe: the high dimensional integrals over $z_{ka}$ are turned into low dimensional integrals by the introduction of "order parameters" $\eta_{ab} = \sum_{k=1}^m z_{ka}z_{kb}$, so that

$$[Z^n]_D \approx \int\prod_{a\le b} d\eta_{ab}\,\exp\Big(-\frac{\sigma^2}{2}\sum_a \eta_{aa} + G(\{\eta\})\Big)\,\mathbf{E}_n\exp\Big(-\frac{1}{2}\sum_{a,b}\eta_{ab}\,\langle(f_a(x)-y)(f_b(x)-y)\rangle\Big) \qquad (6)$$

where $e^{G(\{\eta\})} = \int\prod_{k,a}\frac{dz_{ka}}{\sqrt{2\pi}}\prod_{a\le b}\delta\big(\sum_{k=1}^m z_{ka}z_{kb} - \eta_{ab}\big)$. We expect that in the limit of large sample size $m$ the integrals are well approximated by the saddle-point method. To perform the limit $n \to 0$, we make the assumption that the saddle point of the matrix $\eta$ is replica symmetric, i.e. $\eta_{ab} = \eta$ for $a \ne b$ and $\eta_{aa} = \eta_0$. After some calculations we arrive at

$$[\ln Z]_D = -\frac{\sigma^2\eta_0}{2} + \frac{m}{2}\ln(\eta_0 - \eta) + \frac{m\eta}{2(\eta_0 - \eta)} - \frac{\eta}{2}\langle \tilde{E}f^2(x)\rangle + \ln \tilde{E}\exp\Big[-\frac{\eta_0 - \eta}{2}\langle(f(x)-y)^2\rangle\Big] - \frac{m}{2}\big(\ln(2\pi m) - 1\big) \qquad (7)$$

into which we have to insert the values of $\eta$ and $\eta_0$ that make the right hand side an extremum. We have defined the new auxiliary (translated) Gaussian measure $\tilde{E}\{\Phi(f)\}$ over functions by (8), where $\Phi$ is a functional of $f$.
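Before the replica analysis, it is worth recalling that for the Gaussian likelihood (1) the posterior expectation (2) is available in closed form, $\hat{f}(x) = \mathbf{k}(x)^\top(K + \sigma^2 I)^{-1}\mathbf{y}$; this is what the simulations in Sec. 6 evaluate. A minimal sketch, with an illustrative squared-exponential covariance:

```python
import numpy as np

def cov(a, b, length=0.1):
    # Illustrative squared-exponential covariance C(x, x'); the experiments
    # in Sec. 6 use a periodized version of this kernel.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length ** 2))

def posterior_mean(x_train, y_train, x_test, noise_var=0.01):
    # Posterior expectation of f, Eq. (2), in closed form for the
    # Gaussian likelihood (1).
    K = cov(x_train, x_train) + noise_var * np.eye(len(x_train))
    return cov(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.sort(np.random.rand(50))
y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(50)
print(posterior_mean(x, y, np.array([0.25, 0.5])))
```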
For a given input distribution it is possible to compute the required expectations in terms of sums over eigenvalues and eigenfunctions of the covariance kernel $C(x,x')$. We will give the details, as well as the explicit order parameter equations, in a full version of the paper.

4 Generalization error

To relate the generalization error to the order parameters, note that in the replica framework (assuming the approximation Eq. (5)) $\varepsilon_g + \sigma^2$ is obtained as a derivative of $[Z^n]_D$ in Eq. (6) with respect to the off-diagonal order parameters $\eta_{ab}$, $a \ne b$, in the limit $n \to 0$. A partial integration and a subsequent saddle-point integration yield

$$\varepsilon_g = -\frac{m\eta}{(\eta_0 - \eta)^2} - \sigma^2. \qquad (9)$$

It is also possible to compute other error measures in terms of the order parameters, like the expected error on the (noisy) training data, defined by

$$\varepsilon_t = \frac{1}{m}\sum_{i=1}^{m}\big[(y_i - \hat{f}(x_i))^2\big]_D \qquad (10)$$

The "true" training error, which compares the prediction with the data generating function $f^*$, is somewhat more complicated and will be given elsewhere.

5 Why (and when) the approximation works

Our intuition behind the approximation Eq. (5) is that, for sufficiently large sample size, the partition function is dominated by regions in function space which are close to the data generating function $f^*$, such that terms like $\langle(f_a(x)-y)(f_b(x)-y)\rangle$ are typically small and higher order polynomials in $f_a(x) - y$ generated by a cumulant expansion are less important. This intuition can be checked self-consistently by estimating the omitted terms perturbatively. We use the following modified partition function

$$[Z^n(\lambda)]_D = \int\prod_{k,a}\frac{dz_{ka}}{\sqrt{2\pi}}\, e^{-\frac{\sigma^2}{2}\sum_{k,a}z_{ka}^2}\,\Big[\mathbf{E}_n\exp\Big(i\lambda\sum_{k,a}z_{ka}(f_a(x_k)-y_k) - \frac{1-\lambda^2}{2}\sum_{a,b}\sum_k z_{ka}z_{kb}\,\langle(f_a(x)-y)(f_b(x)-y)\rangle\Big)\Big]_D \qquad (11)$$

which for $\lambda = 1$ becomes the "true" partition function, whereas Eq. (5) is obtained for $\lambda = 0$. Expanding in powers of $\lambda$ (the terms with odd powers vanish) is equivalent to generating the cumulant expansion and subsequently expanding the non-quadratic terms down. Within the saddle-point approximation, the first nonzero correction to our approximation of $[\ln Z]$ is given by

$$\lambda^4\,\frac{(\eta_0-\eta)^2}{2}\Big(\sigma^2\langle\tilde{C}(x,x)\rangle + \langle\tilde{C}(x,x)F^2(x)\rangle - \langle\tilde{C}(x,x')F(x)F(x')\rangle + \eta\,\langle\tilde{C}(x,x')\tilde{C}(x,x'')\tilde{C}(x',x'')\rangle - \eta\,\langle\tilde{C}(x,x)\tilde{C}^2(x,x')\rangle\Big) + \frac{\lambda^4}{2}\Big(-\eta + \frac{\eta^2}{m}\Big)\Big(\langle\tilde{C}^2(x,x)\rangle - \langle\tilde{C}^2(x,x')\rangle\Big). \qquad (12)$$

Here $\tilde{C}(x,x') = \tilde{E}\{f(x)f(x')\}$ denotes the covariance with respect to the auxiliary measure, and $F(x) \equiv f^*(x) - \langle \tilde{C}(x,x')f^*(x')\rangle$. The significance of the individual terms as $m \to \infty$ can be estimated from the following scaling. We find that $(\eta_0 - \eta) = O(m)$ is a positive quantity, whereas $\eta = O(m)$ is negative, and $\tilde{C}(x,x') = O(1/m)$. Using these relations, we can show that Eq. (12) remains finite as $m \to \infty$, whereas the leading approximation Eq. (7) diverges with $m$. We have not (yet) computed the resulting correction to $\varepsilon_g$. However, we have studied the somewhat simpler error measure $\varepsilon' \equiv \frac{1}{m}\sum_i [E\{(f^*(x_i) - f(x_i))^2|D\}]_D$, which can be obtained from a derivative of $[\ln Z]_D$ with respect to $\sigma^2$. It equals the error of a Gibbs algorithm (sampling from the posterior) on the training data. We can show that the correction to $\varepsilon'$ is typically by a factor of $O(1/m)$ smaller than the leading term. However, our approximation becomes worse with decreasing noise variance $\sigma^2$. $\sigma = 0$ is a singular case, for which (at least for some GPs with slowly decreasing eigenvalues) it can be shown that our approximation for $\varepsilon_g$ decays to zero at the wrong rate. For small values of $\sigma$, $\sigma \to 0$, we expect that higher order terms in the perturbation expansion will become relevant.
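The learning curves studied in the next section can be reproduced in outline by averaging the squared error of the GP posterior mean over many random data sets. A minimal Monte Carlo sketch (restating the posterior mean from the previous example in compact form; sample sizes and repeat counts are illustrative):

```python
import numpy as np

def rbf(a, b, l=0.1):
    # Illustrative squared-exponential covariance C(x, x').
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * l ** 2))

def posterior_mean(x, y, xs, s2):
    # GP posterior expectation, Eq. (2), for the Gaussian likelihood (1).
    return rbf(xs, x) @ np.linalg.solve(rbf(x, x) + s2 * np.eye(len(x)), y)

def learning_curve(f_star, sizes, repeats=50, s2=0.01):
    # Monte Carlo estimate of eps_g(m) = [<(f*(x) - f_hat(x))^2>]_D.
    xs = np.linspace(0.0, 1.0, 200)
    curve = []
    for m in sizes:
        errs = []
        for _ in range(repeats):
            x = np.random.rand(m)
            y = f_star(x) + np.sqrt(s2) * np.random.randn(m)
            errs.append(np.mean((f_star(xs) - posterior_mean(x, y, xs, s2)) ** 2))
        curve.append(np.mean(errs))
    return np.array(curve)

print(learning_curve(lambda x: np.sin(2 * np.pi * x), [10, 25, 50, 100, 200]))
```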
6 Results

We compare our analytical results for the error measures $\varepsilon_g$ and $\varepsilon_t$ with simulations of GP regression. For simplicity, we have chosen periodic processes of the form $f(x) = \sqrt{2}\sum_n (a_n\cos(2\pi nx) + b_n\sin(2\pi nx))$ for $x \in [0,1]$, where the coefficients $a_n, b_n$ are independent Gaussians with $E\{a_n^2\} = E\{b_n^2\} = \Lambda_n$. This choice is convenient for analytical calculations by the orthogonality of the trigonometric functions when we sample the $x_i$ from a uniform density in $[0,1]$. The $\Lambda_n$ and the translation invariant covariance kernel are related by $c(x-y) \equiv C(x,y) = 2\sum_n \Lambda_n\cos(2\pi n(x-y))$ and $\Lambda_n = \int_0^1 c(x)\cos(2\pi nx)\,dx$. We specialise to the (periodic) RBF kernel $c(x) = \sum_{k=-\infty}^{\infty}\exp[-(x-k)^2/2l^2]$ with $l = 0.1$. For an illustration, we generated learning curves for two target functions $f^*$ as displayed in Fig. 1. One function is a sine wave $f^*(x) = \sqrt{2\Lambda_1}\sin(2\pi x)$, while the other is a random realisation from the prior distribution. The symbols in the left panel of Fig. 1 represent example sets of fifty data points. The data points have been obtained by corruption of the target function with Gaussian noise of variance $\sigma^2 = 0.01$. The right panel of Fig. 1 shows the data-averaged generalization and training errors $\varepsilon_g$, $\varepsilon_t$ as a function of the number $m$ of example data. Solid curves display simulation results, while the results of our theory, Eqs. (9), (10), are given by dashed lines. The training error $\varepsilon_t$ converges to the noise level $\sigma^2$. As one can see from the pictures, our theory is very accurate when the number $m$ of example data is sufficiently large. While the generalization error $\varepsilon_g$ differs initially, the asymptotic decay is the same.

7 The Bayes error

We can also apply our method to the Bayesian generalization error (previously approximated by Peter Sollich [5]). The Bayes error is obtained by averaging the generalization error over "true" functions $f^*$ drawn at random from the prior distribution. Within our approach this can be achieved by an average of Eq. (7) over $f^*$. The resulting order parameter equations and their relation to the Bayes error turn out to be identical to Sollich's result. Hence, we have managed to re-derive his approximation within a broader framework from which also possible corrections can be obtained.

Figure 1: The left panels show two data generating functions $f^*(x)$ and example sets of 50 data points. The right panels display the corresponding averaged learning curves. Solid curves display simulation results for generalization and training errors $\varepsilon_g$, $\varepsilon_t$. The results of our theory, Eqs. (9), (10), are given by dashed lines.

8 Future work

At present, we extend our method in the following directions:

- The statistical mechanics framework presented in this paper is based on a partition function $Z$ which can be used to generate a variety of other data averages for posterior expectations. An obvious interesting quantity is given by the sample fluctuations of the generalization error, which give confidence intervals on $\varepsilon_g$.

- Obviously, our method is not restricted to a regression model (in this case, however, all resulting integrals are elementary) but can also be directly generalized to other likelihoods such as the classification case [4, 6]. A further application to Support Vector Machines should be possible.
- The saddle-point approximation neglects fluctuations of the order parameters. This may be well justified when $m$ is sufficiently large. It is possible to improve on this by including the quadratic expansion around the saddle point.

- Finally, one may criticise our method as being of minor relevance to practical applications, because our calculations require the knowledge of the unknown function $f^*$ and the density of the inputs $x$. However, Eqs. (9) and (10) show that important error measures are expressed solely by the order parameters $\eta$ and $\eta_0$. Hence, estimating some error measures and the posterior variance at the data points empirically would allow us to predict values for the order parameters. Those, in turn, could be used to make predictions for the unknown generalization error.

Acknowledgement

This work has been supported by EPSRC grant GR/M81601.

References

[1] D. J. C. Mackay, Gaussian Processes - A Replacement for Neural Networks, NIPS tutorial 1997. May be obtained from http://wol.ra.phy.cam.ac.uk/pub/mackay/.
[2] C. K. I. Williams and C. E. Rasmussen, Gaussian Processes for Regression, in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer and M. E. Hasselmo, eds., 514-520, MIT Press (1996).
[3] C. K. I. Williams, Computing with Infinite Networks, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 295-301, MIT Press (1997).
[4] D. Barber and C. K. I. Williams, Gaussian Processes for Bayesian Classification via Hybrid Monte Carlo, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 340-346, MIT Press (1997).
[5] P. Sollich, Learning curves for Gaussian processes, in Neural Information Processing Systems 11, M. S. Kearns, S. A. Solla and D. A. Cohn, eds., 344-350, MIT Press (1999).
[6] L. Csató, E. Fokoué, M. Opper, B. Schottky, and O. Winther, Efficient approaches to Gaussian process classification. In Advances in Neural Information Processing Systems, volume 12, 2000.
Algebraic Information Geometry for Learning Machines with Singularities

Sumio Watanabe
Precision and Intelligence Laboratory, Tokyo Institute of Technology, 4259 Nagatsuta, Midori-ku, Yokohama, 226-8503 Japan
[email protected]

Abstract

Algebraic geometry is essential to learning theory. In hierarchical learning machines such as layered neural networks and gaussian mixtures, the asymptotic normality does not hold, since Fisher information matrices are singular. In this paper, the rigorous asymptotic form of the stochastic complexity is clarified based on resolution of singularities, and two different problems are studied. (1) If the prior is positive, then the stochastic complexity is far smaller than BIC, resulting in a smaller generalization error than regular statistical models, even when the true distribution is not contained in the parametric model. (2) If Jeffreys' prior, which is coordinate free and equal to zero at singularities, is employed, then the stochastic complexity has the same form as BIC. It is useful for model selection, but not for generalization.

1 Introduction

The Fisher information matrix determines a metric of the set of all parameters of a learning machine [2]. If it is positive definite, then a learning machine can be understood as a Riemannian manifold. However, almost all learning machines such as layered neural networks, gaussian mixtures, and Boltzmann machines have singular Fisher metrics. For example, in a three-layer perceptron, the Fisher information matrix $J(w)$ for a parameter $w$ is singular ($\det J(w) = 0$) if and only if $w$ represents a small model which can be realized with fewer hidden units than the learning model. Therefore, when the learning machine is in an almost redundant state, any method in statistics and physics that uses a quadratic approximation of the loss function cannot be applied. In fact, the maximum likelihood estimator is not subject to the asymptotic normal distribution [4]. The Bayesian posterior probability converges to a distribution which is quite different from the normal one [8]. To construct a mathematical foundation for such learning machines, we clarified the essential relation between algebraic geometry and Bayesian statistics [9, 10]. In this paper, we show that the asymptotic form of the Bayesian stochastic complexity is rigorously obtained by resolution of singularities. The Bayesian method gives powerful tools for both generalization and model selection; however, the appropriate prior for each purpose is quite different.

2 Stochastic Complexity

Let $p(x|w)$ be a learning machine, where $x$ is a pair of an input and an output, and $w \in \mathbb{R}^d$ is a parameter. We prepare a prior distribution $\varphi(w)$ on $\mathbb{R}^d$. Training samples $x^n = (x_1, x_2, \dots, x_n)$ are independently taken from the true distribution $q(x)$, which is not contained in $p(x|w)$ in general. The stochastic complexity $F(x^n)$ and its average $F(n)$ are defined by

$$F(x^n) = -\log\int\prod_{i=1}^{n}p(x_i|w)\,\varphi(w)\,dw$$

and $F(n) = E_{x^n}\{F(x^n)\}$, respectively, where $E_{x^n}\{\cdot\}$ denotes the expectation value over all training sets. The stochastic complexity plays a central role in Bayesian statistics. Firstly, $F(n+1) - F(n) - S$, where $S = -\int q(x)\log q(x)\,dx$, is equal to the average Kullback distance from $q(x)$ to the Bayes predictive distribution $p(x|x^n)$, which is called the generalization error and denoted by $G(n)$. Secondly, $\exp(-F(x^n))$ is in proportion to the posterior probability of the model; hence, the best model is selected by minimization of $F(x^n)$ [7]. And lastly, if the prior distribution has a hyperparameter $\theta$, that is to say, $\varphi(w) = \varphi(w|\theta)$, then it is optimized by minimization of $F(x^n)$ [1]. We define a function $F_0(n)$ using the Kullback distance $H(w)$,

$$F_0(n) = -\log\int\exp(-nH(w))\,\varphi(w)\,dw, \qquad H(w) = \int q(x)\log\frac{q(x)}{p(x|w)}\,dx.$$

Then, by Jensen's inequality, $F(n) - Sn \le F_0(n)$. Moreover, we assume that $L(x,w) \equiv \log q(x) - \log p(x|w)$ is an analytic function from $w$ to the Hilbert space of all square integrable functions with the measure $q(x)dx$, and that the support of the prior, $W = \mathrm{supp}\,\varphi$, is compact. Then $H(w)$ is an analytic function on $W$, and there exists a constant $c_1 > 0$ such that, for an arbitrary $n$,

$$F_0\Big(\frac{n}{2}\Big) - c_1 \le F(n) - Sn \le F_0(n). \qquad (1)$$
And lastly, if the prior distribution has a hyperparameter (), that is to say, 'fJ(w) = 'fJ(wl()), then it is optimized by minimization of F(xn) [1]. We define a function Fo(n) using the Kullback distance H(w), Fo(n) = - log J exp( -nH(w))'fJ(w)dw, H(w) = J q(x) log pf~~2) dx. Then by Jensen's inequality, F(n) - Sn ::; Fo(n). Moreover, we assume that L(x,w) == logq(x) - logp(xlw) is an analytic function from w to the Hilbert space of all square integrable functions with the measure q(x)dx , and that the support of the prior W = supp 'fJ is compact . Then H(w) is an analytic function on W, and there exists a constant CI > 0 such that, for an arbitrary n, n FO("2) - 3 CI ::; F(n) - Sn ::; Fo(n). (1) General Learning Machines In this section, we study a case when the true distribution is contained in the parametric model, that is to say, there exists a parameter Wo E W such that q(x) = p(x lwo). Let us introduce a zeta function J(z) (z E C) of H(w) and a state density function v(t) by J(z) = J H(wY'fJ(w)dw, v(t) = J J(t - H(w))'fJ(w)dw. Then, J( z ) and Fo(n) are represented by the Mellin and the Laplace transform of v(t), respectively. J(z) = lh tZv(t)dt, Fo(n) = - log lh exp(-nt)v(t)dt, where h = maxWEW H(w). Therefore Fo(n), v(t), and J(z) are mathematically connected. It is obvious that J(z) is a holomorphic function in Re( z) > O. Moreover, by using the existence of Sato-Bernstein's b-function [6], it can be analytically continued to a meromorphic function on the entire complex plane, whose poles are real, negative, and rational numbers. Let -AI > -A2 > -A3 > ... be the poles of J (z) and mk be the order of - Ak. Then, by using the inverse Mellin tansform, it follows that v(t) has an asymptotic expansion with coefficients {Ckm}, 00 v(t) ~ mk LL Ckm tAk - 1(- logt)m-l (t ---> +0). k=lm=1 Therefore, also Fo (n) has an asymptotic expansion, by putting A = Al and m = ml, Fo (n) = A log n - (m - 1) log log n + 0 (1) , which ensures the asymptotic expansion of F(n) by eq.(l), F(n) = Sn + Alogn - (m - 1) log log n + 0(1). The Kullback distance H(w) depends on the analytic set Wo = {w E W; H(w) = O} , resulting that both A and m depend on Woo Note that, if the Bayes generalization error G(n) = F(n + 1) - F(n) - S has an asymptotic expansion, it should be AI n - (m - 1) I (n log n). The following lemma is proven using the definition of Fo(n) and its asymptotic expansion. Lemma 1 (1) Let (Ai, mi) (i = 1,2) be constants corresponding to (Hi(W), rpi(W)) (i = 1, 2). If H 1(w) :::::: H 2(w) and rpl(W) 2': rp2(W), then 'AI < A2' or 'AI = A2 and ml 2': m2 '. (2) Let (Ai , mi) (i = 1, 2) be constants corresponding to (Hi(Wi), rpi(Wi)) (i = 1, 2). Let W = (WI, W2), H(w) = HI (wI) + H 2(W2), and rp(w) = rpI(Wl)rp2(W2). Then the constants of (H(w) , rp(w)) are A = Al + A2 and m = ml + m2 - 1. The concrete values of A and m can be algorithmically obtained by the following theorem. Let Wi be the open kernel of W (the maximal open set contained in W). Theorem 1 (Resolution of Singularities, Hironaka [5}) Let H(w) 2': 0 be a real analytic function on Wi. Th en there exist both a real d-dimensional manifold U and a real analytic function g : U ---> Wi such that, in a neighborhood of an arbitrary U E U, (2) where a( u) > 0 is an analytic function and {sd are non-negative integers. M oreover, for arbitrary compact set K c W, g-1 (K) c U is a compact set. Such a function g( u) can be found by finite blowing-ups. Remark. 
Remark. By applying eq. (2) to the definition of $J(z)$, one can see that the integral in $J(z)$ decomposes into a direct product of integrals over the individual variables [3]. Applications to learning theory are shown in [9, 10]. In general it is not so easy to find a $g(u)$ that gives the complete resolution of singularities; however, in this paper we show that even a partial resolution mapping gives an upper bound on $\lambda$.

Definition. We introduce two different priors. (1) The prior distribution $\varphi(w)$ is called positive if $\varphi(w) > 0$ for an arbitrary $w \in W^*$ ($W = \mathrm{supp}\,\varphi$). (2) The prior distribution $\varphi(w)$ is called Jeffreys' prior if

$$\varphi(w) = \frac{1}{Z}\sqrt{\det I(w)}, \qquad I_{ij}(w) = \int\frac{\partial L}{\partial w_i}\frac{\partial L}{\partial w_j}\,p(x|w)\,dx,$$

where $Z$ is a normalizing constant and $I(w)$ is the Fisher information matrix. In neural networks and gaussian mixtures, Jeffreys' prior is not positive, since $\det I(w) = 0$ on the parameters which represent the smaller models.

Theorem 2 Assume that there exists a parameter $w_0 \in W^*$ such that $q(x) = p(x|w_0)$. Then the following hold. (1) If the prior is positive, then $0 < \lambda \le d/2$ and $1 \le m \le d$. If $p(x|w)$ satisfies the condition of asymptotic normality, then $\lambda = d/2$ and $m = 1$. (2) If Jeffreys' prior is applied, then '$\lambda > d/2$' or '$\lambda = d/2$ and $m = 1$'.

(Outline of the Proof) (1) In order to examine the poles of $J(z)$, we can divide the parameter space into a sum of neighborhoods. Since $H(w)$ is an analytic function, in an arbitrary neighborhood of a point $w_0$ that satisfies $H(w_0) = 0$ we can find a positive definite quadratic form that bounds $H(w)$; a positive definite quadratic form satisfies $\lambda = d/2$ and $m = 1$. By using Lemma 1 (1), we obtain the first half. (2) Because Jeffreys' prior is coordinate free, we can study the problem on the parameter space $U$ of eq. (2) instead of $W^*$. Hence, there exists an analytic function $t(x,u)$ such that, in each local coordinate,

$$L(x,u) = L(x,g(u)) = t(x,u)\,u_1^{s_1}\cdots u_d^{s_d}, \qquad \frac{\partial L}{\partial u_i} = \Big(\frac{\partial t}{\partial u_i}u_i + s_i t\Big)\,u_1^{s_1}\cdots u_i^{s_i-1}\cdots u_d^{s_d} \quad (i = 1,2,\dots,d).$$

For simplicity, we assume that $s_i > 0$ ($i = 1,2,\dots,d$). By using the blowing-ups $u_i = v_1 v_2\cdots v_i$ ($i = 1,2,\dots,d$) and the notation $\sigma_p = s_p + s_{p+1} + \cdots + s_d$, it is easy to show that

$$\det I(v) \le \prod_{p=1}^{d} v_p^{2d\sigma_p + 2p - 2d - 2}, \qquad du = \Big(\prod_{p=1}^{d}|v_p|^{d-p}\Big)\,dv. \qquad (3)$$

By using $H(g(u))^z = a^z\prod_p v_p^{2\sigma_p z}$ and Lemma 1 (1), in order to prove the latter half of the theorem it is sufficient to prove that the resulting integral has a pole at $z = -d/2$ with order $m = 1$. Direct calculation of the integrals in $J(z)$ completes the theorem. (Q.E.D.)

4 Three-Layer Perceptron

In this section, we study cases where the learner is a three-layer perceptron and the true distribution is either contained or not contained in the model. We define the three-layer perceptron $p(x,v|w)$ with $M$ input units, $K$ hidden units, and $N$ output units, where $x$ is an input, $v$ is an output, and $w$ is a parameter:

$$p(x,v|w) = r(x)\,\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\Big(-\frac{1}{2\sigma^2}\|v - f_K(x,w)\|^2\Big), \qquad f_K(x,w) = \sum_{k=1}^{K}a_k\,\sigma(b_k\cdot x + c_k),$$

where $w = \{(a_k, b_k, c_k);\ a_k \in \mathbb{R}^N,\ b_k \in \mathbb{R}^M,\ c_k \in \mathbb{R}^1\}$, $\sigma(\cdot)$ is the sigmoidal activation, $r(x)$ is the probability density on the input, and $\sigma^2$ is the variance of the output (neither $r(x)$ nor $\sigma$ is estimated).

Theorem 3 If the true distribution is represented by a three-layer perceptron with $K_0 \le K$ hidden units, and if a positive prior is employed, then

$$\lambda \le \frac{1}{2}\Big\{K_0(M+N+1) + (K - K_0)\min(M+1,\,N)\Big\}. \qquad (4)$$

(Outline of Proof) Firstly, we consider the case when the true regression function is $g(x) = 0$. Then

$$H(w) = \frac{1}{2\sigma^2}\int \|f_K(x,w)\|^2\,r(x)\,dx. \qquad (5)$$

Let $a_k = (a_{k1},\dots,a_{kN})$ and $b_k = (b_{k1},\dots,b_{kM})$. Let us consider the blowing-up $a_{11} = \alpha$, $a_{kj} = \alpha\,a'_{kj}$ ($(k,j) \ne (1,1)$), $b_{kj} = b'_{kj}$, $c_k = c'_k$.
(Outline of Proof) Firstly, we consider the case g(x) = 0. Then,

H(w) = \frac{1}{2\sigma^2} \int \|f_K(x,w)\|^2\, r(x)\,dx.   (5)

Let a_k = (a_{k1}, ..., a_{kN}) and b_k = (b_{k1}, ..., b_{kM}). Let us consider a blowing-up,

a_{11} = \alpha, \quad a_{kj} = \alpha\, a'_{kj} \ \ ((k,j) \ne (1,1)), \quad b_k = b'_k, \quad c_k = c'_k.

Then da db dc = α^{KN-1} dα da' db' dc', and there exists an analytic function H_1(a', b', c') such that H(a, b, c) = α^2 H_1(a', b', c'). Therefore J(z) has a pole at z = -KN/2. Also, by using another blowing-up, da db dc = α^{(M+1)K-1} dα da'' db'' dc'', and there exists an analytic function H_2(a'', b'', c'') such that H(a, b, c) = α^2 H_2(a'', b'', c''), which shows that J(z) has a pole at z = -K(M+1)/2. By combining both results, we obtain λ ≤ (K/2) min(M+1, N).

Secondly, we prove the general case, 0 < K_0 ≤ K. In a neighborhood of a true parameter, w can be divided as w = (w^{(1)}, w^{(2)}), where w^{(1)} collects the parameters of K_0 hidden units that realize the true distribution and w^{(2)} those of the remaining K - K_0 redundant units, so that for some c > 0,

H(w) \le c\,\{H_1(w^{(1)}) + H_2(w^{(2)})\},   (6)

where H_1 is the Kullback distance of the regular model with K_0 hidden units and H_2 has the form of eq. (5) with K - K_0 hidden units. By combining Lemma 1 (2) and the above result, we obtain the theorem. (Q.E.D.)

If the true regression function g(x) is not contained in the learning model, we assume that, for each 0 ≤ k ≤ K, there exists a parameter w_0^{(k)} ∈ W that minimizes the square error

E(k) = \min_{w} \frac{1}{2} \int \|g(x) - f_k(x,w)\|^2\, r(x)\,dx.

We use this notation E(k) together with λ(k) = (1/2){k(M + N + 1) + (K - k) min(M + 1, N)}.

Theorem 4 If the true regression function is not contained in the learning model and a positive prior is applied, then

F(n) \preceq \min_{0 \le k \le K} \Big[\frac{nE(k)}{\sigma^2} + \lambda(k)\log n\Big] + O(1).

(Outline of Proof) This theorem can be shown by the same procedure as eq. (6) in the preceding theorem. (Q.E.D.)

If G(n) has an asymptotic expansion G(n) = \sum_{q=1}^{Q} a_q f_q(n), where f_q(n) is a decreasing function of n that satisfies f_{q+1}(n) = o(f_q(n)) and f_Q(n) = 1/n, then

G(n) \preceq \min_{0 \le k \le K} \Big[\frac{E(k)}{\sigma^2} + \frac{\lambda(k)}{n}\Big],

which shows that the generalization error of the layered network is smaller than that of regular statistical models even when the true distribution is not contained in the learning model. It should be emphasized that the optimal k that minimizes G(n) is smaller than the learning model when n is not so large, and it becomes larger as n increases. This fact shows that the positive prior is useful for generalization but not appropriate for model selection. Under the condition that the true distribution is contained in the parametric model, Jeffreys' prior may enable us to find the true model with higher probability.
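The bound on G(n) is a bias-complexity trade-off, and the growth of the optimal k with sample size can be made concrete in a few lines. In the sketch below, the approximation errors E(k) are invented for illustration, while λ(k) follows the formula given above:

```python
# Bias-complexity trade-off of the G(n) bound: the optimal number k of
# hidden units grows with the sample size n.  The E(k) values are
# invented; lambda(k) follows the formula in the text.
import numpy as np

M, N, K, sigma2 = 10, 1, 8, 1.0
k = np.arange(K + 1)
E = 2.0 ** -k                                     # hypothetical E(k)
lam = 0.5 * (k * (M + N + 1) + (K - k) * min(M + 1, N))

for n in [10, 100, 1000, 10_000, 100_000]:
    k_opt = int(np.argmin(E / sigma2 + lam / n))
    print(f"n = {n:>6}   optimal k = {k_opt}")
```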
Theorem 5 If the true regression function is contained in the three-layer perceptron and Jeffreys' prior is applied, then λ = d/2 and m = 1, even if the Fisher metric is degenerate at the true parameter.

(Outline of Proof) For simplicity, we prove the theorem for the case g(x) = 0; the general case can be proven by the same method. By direct calculation of the Fisher information matrix, there exists an analytic function D(b, c) ≥ 0 such that

\det I(w) = \prod_{k=1}^{K} \Big(\sum_{p=1}^{N} a_{kp}^2\Big)^{M+1} D(b, c).

By using a blowing-up we obtain H(w) = α^2 H_1(a', b', c') the same as in eq. (5), det I(w) ∝ α^{2(M+1)K}, and da db dc = α^{NK-1} dα da' db' dc'. The integral

J(z) = \int \alpha^{2z}\, \alpha^{(M+1)K + NK - 1}\, d\alpha \int \cdots

has a pole at z = -(M + N + 1)K/2. By combining this result with Theorem 2, we obtain Theorem 5. (Q.E.D.)

5 Discussion

In many applications of neural networks, rather complex machines are employed compared with the number of training samples. In such cases, the set of optimal parameters is not one point but an analytic set with singularities, and the set of almost optimal parameters {w ; H(w) < ε} is not an 'ellipsoid'. Hence the Kullback distance can neither be approximated by any quadratic form, nor can the saddle point approximation be used in integration over the parameter space. The zeta function of the Kullback distance clarifies the behavior of the stochastic complexity, and resolution of singularities enables us to calculate the learning efficiency.

6 Conclusion

The relation between algebraic geometry and learning theory is clarified, and two different facts are proven. (1) If the true distribution is not contained in a hierarchical learning model, then by using a positive prior the generalization error is made smaller than that of regular statistical models. (2) If the true distribution is contained in the learning model and Jeffreys' prior is used, then the average Bayesian factor has the same form as BIC.

Acknowledgments

This research was partially supported by the Ministry of Education, Science, Sports and Culture in Japan, Grant-in-Aid for Scientific Research 12680370.

References

[1] Akaike, H. (1980) Likelihood and Bayes procedure. Bayesian Statistics (Bernardo, J.M. et al., eds.), University Press, Valencia, Spain, 143-166.
[2] Amari, S. (1985) Differential-Geometrical Methods in Statistics. Lecture Notes in Statistics, Springer.
[3] Atiyah, M.F. (1970) Resolution of singularities and division of distributions. Comm. Pure and Appl. Math., 23, 145-150.
[4] Dacunha-Castelle, D., & Gassiat, E. (1997) Testing in locally conic models, and application to mixture models. Probability and Statistics, 1, 285-317.
[5] Hironaka, H. (1964) Resolution of singularities of an algebraic variety over a field of characteristic zero. Annals of Math., 79, 109-326.
[6] Kashiwara, M. (1976) B-functions and holonomic systems. Inventiones Math., 38, 33-53.
[7] Schwarz, G. (1978) Estimating the dimension of a model. Ann. of Stat., 6(2), 461-464.
[8] Watanabe, S. (1998) On the generalization error by a layered statistical model with Bayesian estimation. IEICE Transactions, J81-A(10), 1442-1452. English version: (2000) Electronics and Communications in Japan, Part 3, 83(6), 95-104.
[9] Watanabe, S. (2000) Algebraic analysis for non-regular learning machines. Advances in Neural Information Processing Systems, 12, 356-362.
[10] Watanabe, S. (2001) Algebraic analysis for non-identifiable learning machines. Neural Computation, to appear.
A productive, systematic framework for the representation of visual structure

Shimon Edelman
232 Uris Hall, Dept. of Psychology
Cornell University, Ithaca, NY 14853-7601
[email protected]

Nathan Intrator
Institute for Brain and Neural Systems
Box 1843, Brown University, Providence, RI 02912
[email protected]

Abstract

We describe a unified framework for the understanding of structure representation in primate vision. A model derived from this framework is shown to be effectively systematic in that it has the ability to interpret and associate together objects that are related through a rearrangement of common "middle-scale" parts, represented as image fragments. The model addresses the same concerns as previous work on compositional representation through the use of what+where receptive fields and attentional gain modulation. It does not require prior exposure to the individual parts, and avoids the need for abstract symbolic binding.

1 The problem of structure representation

The focus of theoretical discussion in visual object processing has recently started to shift from problems of recognition and categorization to the representation of object structure. Although view- or appearance-based solutions for these problems proved effective on a variety of object classes [1], the "holistic" nature of this approach - the lack of explicit representation of relational structure - limits its appeal as a general framework for visual representation [2]. The main challenges in the processing of structure are productivity and systematicity, two traits commonly attributed to human cognition. A visual system is productive if it is open-ended, that is, if it can deal effectively with a potentially infinite set of objects. A visual representation is systematic if a well-defined change in the spatial configuration of the object (e.g., swapping top and bottom parts) causes a principled change in the representation (e.g., the interchange of the representations of top and bottom parts [3, 2]). A solution commonly offered to the twin problems of productivity and systematicity is compositional representation, in which symbols standing for generic parts drawn from a small repertoire are bound together by categorical, symbolically coded relations [4].

2 The Chorus of Fragments

In visual representation, the need for symbolic binding may be alleviated by using location in the visual field in lieu of the abstract frame that encodes object structure. Intuitively, the constituents of the object are then bound to each other by virtue of residing in their proper places in the visual field; this can be thought of as a pegboard, whose spatial structure supports the arrangement of parts suspended from its pegs. This scheme exhibits shallow compositionality, which can be enhanced by allowing the "pegboard" mechanism to operate at different spatial scales, yielding effective systematicity across levels of resolution. Coarse coding the constituents (e.g., representing each object fragment in terms of its similarities to some basis shapes) will render the scheme productive. We call this approach to the representation of structure the Chorus of Fragments (CoF; [5]).

2.1 Neurobiological building blocks

What+where cells. The representation of spatially anchored object fragments postulated by the CoF model can be supported by what+where neurons, each tuned both to a certain shape class and to a certain range of locations in the visual field.
Such cells have been found in the monkey in areas V4 and posterior IT [6], and in the prefrontal cortex [7].

Attentional gain fields. To decouple the representation of object structure from its location in the visual field, one needs a version of the what+where mechanism in which the response of the cell depends not merely on the location of the stimulus with respect to fixation (as in classical receptive fields), but also on its location with respect to the focus of attention. Indeed, modulatory effects of object-centered attention on classical RF structure (gain fields) have been found in area V4 [8].

2.2 Implemented model

Our implementation of the CoF model involves what+where cells with attention-modulated gain fields, and is aimed at productive and systematic treatment of composite shapes in object-centered coordinates. It operates directly on gray-level images, pre-processed by a model of the primary visual cortex [9], with complex-cell responses modified to use the MAX operation suggested in [10]. In the model, one what+where unit is assigned to the top and one to the bottom fragment of the visual field, each extracted by an appropriately configured Gaussian gain profile (Figure 2, left). The units are trained (1) to discriminate among five objects, (2) to tolerate translation within the hemifield, and (3) to provide an estimate of the reliability of their output, through an autoassociation mechanism attempting to reconstruct the stimulus image [11, 12]. Within each hemifield, the five outputs of a unit can provide a coarse coding of novel objects belonging to the familiar category, in a manner useful for translation-tolerant recognition [13]. The reliability estimate carries information about category, allowing outputs for objects from other categories to be squelched. Most importantly, due to the spatial localization of the unit's receptive field, the system can distinguish between different configurations of the same shapes, while noting the fragment-wise similarities. We assume that during learning the system performs multiple fixations of the target object, effectively providing the what+where units with a basis for spanning the space of stimulus translations.

Figure 1: Left: the CoF model conceptualized as a "computation cube" trained to distinguish among three fragments (1, 6, 9), each possibly appearing at two locations (above or below the center of attention). A parallel may be drawn between the computation cube and a cortical hypercolumn; in the inferotemporal cortex, cells selective for specific shapes may be arranged in columns, with the dimension perpendicular to the cortical surface encoding different variants of the same shape [14]. It is not known whether the attention-centered location of the shape, which affects the responses of V4 cells [8], is mapped in an orderly fashion onto some physical dimension(s) of the cortex. Right: the estimation of the marginal probabilities of shapes, which can be used to decide whether to allocate a unit coding for their composition, can be carried out simply by summing the activities of units along the different dimensions of the computation cube.

It is up to the model, however, to figure out that the objects may be composed of recurring fragments, and to self-organize in a manner that would allow it to deal with novel configurations of those fragments.
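The fragment-steering step is simple to express in code. The sketch below is our own illustration (image size, window centers, and the width σ are arbitrary choices): a Gaussian profile over image rows plays the role of the gain field that routes the top or bottom fragment to its what+where unit.

```python
# Gain-field sketch: a Gaussian profile over image rows steers the top or
# bottom fragment of the stimulus to its what+where unit.  Image size,
# window centers, and sigma are arbitrary illustrative choices.
import numpy as np

def gain_field(image, center_row, sigma=4.0):
    rows = np.arange(image.shape[0])
    g = np.exp(-0.5 * ((rows - center_row) / sigma) ** 2)
    return image * g[:, None]          # weight each row by the gain profile

stimulus = np.random.rand(32, 32)      # stand-in for a preprocessed image
top_fragment = gain_field(stimulus, center_row=8)
bottom_fragment = gain_field(stimulus, center_row=24)
```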
This problem, which arises both at the level of fragments and of their constituent features, can be addressed within the Minimum Description Length (MDL) framework. Specifically, we propose to construct receptive fields (RFs) for composite objects so as to capture the deviation from independence between the probability distributions of the responses of RFs tuned to their fragments. This implies a savings in the description length of the composite object. Suppose, for example, that r_t is the response of a unit tuned roughly to the top half of the character 6 and r_b the response of a unit tuned to its bottom half. The construction of a more complex RF combining the responses of these two units will be justified when

P(r_t, r_b) \gg P(r_t)\, P(r_b)   (1)

or, more practically, when some measure of deviation from independence between P(r_t) and P(r_b) is large (the simplest such measure would be the covariance, namely, the second moment of the joint distribution, but we believe that higher moments may also be required, as suggested by the extensive work on measuring deviation from Gaussian distributions). By this criterion, a composite RF will be constructed that recognizes the two "parts" of the character 6 when they are appropriately located: the probability on the LHS of eq. 1 in that case would be proportional to 1/10, while the probability of the RHS would be proportional to 1/100 (assuming that all characters are equiprobable, and that their fragments never appear in isolation). At the same time, a composite RF tuned, say, to 6 above 3 (see section 3) will not be allocated, because the probability of such a complex feature as measured by either the RHS or the LHS of eq. 1 is proportional to 1/100. We note that this feature analysis can be performed on the marginal probabilities of the corresponding fragments, which are by definition less sensitive to image parameters such as the exact location or scale, and can be based on a family of features (cf. Figure 1). A discussion of this approach and of its relationship to the reconstruction constraint we impose when training the fragment-tuned modules is beyond the scope of this paper.

A parallel can be drawn between the MDL framework just outlined and the findings concerning what+where cells and gain fields in the shape processing pathway in the monkey cortex. Under the interpretation we propose, the features at all levels of the hierarchy are coarsely coded, and each feature is associated with a rough location in the visual field, so that composite features necessarily represent more complex spatial structure than their constituents, without separately implemented binding, and without a combinatorial proliferation of features. The computational experiments described below concentrate on these novel characteristics of our model, rather than on the standard MDL machinery.
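The allocation test of eq. 1 can be sketched as follows (our own illustration; the response statistics are simulated stand-ins for what+where unit outputs). The correlation between fragment responses serves as the second-moment measure of deviation from independence:

```python
# Allocation test of eq. 1: a composite RF is warranted only when the
# joint statistics of two fragment responses deviate from independence.
# The responses below are simulated stand-ins for what+where unit outputs.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
co = rng.random(n) < 0.1                      # trials where "6" is shown
r_top = co + 0.05 * rng.standard_normal(n)    # unit tuned to top of "6"
r_bot = co + 0.05 * rng.standard_normal(n)    # unit tuned to bottom of "6"
r_other = (rng.random(n) < 0.1) + 0.05 * rng.standard_normal(n)  # "3" bottom

def deviation_from_independence(x, y):
    # Correlation: the second-moment measure of P(x, y) >> P(x) P(y);
    # higher moments could be added, as suggested in the text.
    return np.corrcoef(x, y)[0, 1]

print(deviation_from_independence(r_top, r_bot))    # large -> allocate RF
print(deviation_from_independence(r_top, r_other))  # near 0 -> do not
```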
Figure 2: The CoF model, trained on five composite objects (1 over 6, 2 over 7, etc.). Left: the model consists of two what+where units, responsible for the bottom and the top fragments of the stimulus, respectively. Gain fields (boxes labeled below center and above center) steer each input fragment to the appropriate unit. The learning mechanism (R/C, for Reconstruction/Classification) was implemented as a radial basis function network. The reconstruction error modulates the classification outputs. Right: training the model, viewed as a computation cube. Multiple fixations of the stimulus (of which three are illustrated), along with Gaussian windows selecting stimulus fragments, allow the system to learn what+where responses. A cell would only be allocated to a given fragment if it recurs in the company of a variety of other fragments, as warranted by the ratio between their joint probability and the product of the corresponding marginal probabilities (cf. eq. 1 and Figure 1, right; this criterion has not yet been incorporated into the CoF training scheme).

3 Computational experiments

We conducted three experiments that examined the properties of the structured representations emerging from the CoF model. The first experiment (reported elsewhere [13]) involved animal-like shapes and aimed at demonstrating basic productivity and systematicity. We found that the CoF model is capable of systematically interpreting composite objects to which it was not previously exposed (for example, a half-goat and half-lion chimera is represented as such, by an ensemble of units trained to discriminate between three altogether different animals). In the second experiment, a version of the CoF model (Figure 2) was charged with learning to reuse fragments of the members of the training set - five bipartite objects composed of shapes of numerals from 1 through 0 - in interpreting novel compositions of the same fragments. The gain field mechanism built into the CoF model allowed it to respond largely systematically to the learned fragments even when these were shown in novel locations, both absolute and relative (Figure 3, left). The third experiment addressed a basic prediction of the CoF model, stemming from its reliance on what+where mechanisms: the interaction between effects of shape and location in object representation. Such interaction had been found in a psychophysical study [15], in which the task was 4-alternative forced-choice classification of two-part stimuli consisting of simple geometric shapes (cube, cylinder, sphere, cone). The composite stimuli were defined by two variables, shape and location, each of which could be same, neutral, or different in the prime and the target (yielding 9 conditions altogether). Response times of human subjects revealed effects of shape and location (what+where), but not of shape alone; the pattern of priming across the nine conditions was replicated by the CoF model (correlation between model and human data r = 0.85), using the same stimuli as in the psychophysical experiment.

4 Discussion

Because CoF relies on retinotopy rather than on abstract binding, its representation of spatial structure is location-specific; so is the treatment of structure by the human visual system, as indicated by a number of findings. For example, priming in a subliminal perception task was found to be confined to a quadrant of the visual field [16]. The notion that the representation of an object may be tied to a particular location in the visual field where it is first observed is compatible with the concept of an object file, a hypothetical record created by the visual system for every encountered object, which persists as long as the object is observed. Moreover, location (as it figures in the CoF model) should be interpreted relative to the focus of attention, rather than retinotopically [17]. The idea that global relationships (hence, large-scale structure) have precedence over local ones [18], which is central to our approach, has withstood extensive testing in the past two decades.
Even with the perceptual salience of the global and local structure equated, subjects are able to process the relations among elements before the elements themselves are identified [19]. More generally, humans are limited in their ability to represent spatial structure, in that the representation of spatial relations requires spatial attention. For example, visual search is difficult when targets differ from distractors only in the spatial relation between their elements, as if "... attention is required to bind features ..." [20].

Figure 3: Left: the response of the CoF model to a novel composite object, 6 (which only appeared in the bottom position in the training set) over 3 (which was only seen in the top position). The interpretations offered by the model were correct in 94 out of the 100 possible test cases (10 digits on top x 10 digits on the bottom) in this experiment. Note: in the test scenario, each unit (above and below) must be fed each of the two input fragments (above and below), hence the 20 bars in the plots of the model's output. Right: the non-monotonic dependence of the mean entropy per output unit (ordinate axis on the right; dashed line) on the spread constant σ of the radial basis functions (abscissa) indicates that entropy alone should not be used as a training criterion in object representation systems.

The CoF model offers a unified framework, rooted in the MDL principle, for the understanding of these behavioral findings and of the functional significance of what+where receptive fields and attentional gain modulation. It extends the previous use of gain fields in the modeling of translation invariance [21] and of object-centered hemi-neglect [22], and highlights a parallel between what+where cells and probabilistic approaches to structure representation in computational vision (e.g., [23]). The representational framework we described is both productive and effectively systematic. Specifically, it has the ability, as a matter of principle, to recognize as such objects that are related through a rearrangement of mesoscopic parts, without being taught those parts individually, and without the need for abstract symbolic binding.

References

[1] S. Edelman. Computational theories of object recognition. Trends in Cognitive Science, 1:296-304, 1997.
[2] J. E. Hummel. Where view-based theories of human object recognition break down: the role of structure in human shape perception. In E. Dietrich and A. Markman, eds., Cognitive Dynamics: conceptual change in humans and machines, ch. 7. Erlbaum, Hillsdale, NJ, 2000.
[3] R. F. Hadley. Cognition, systematicity, and nomic necessity. Mind and Language, 12:137-153, 1997.
[4] E. Bienenstock, S. Geman, and D. Potter. Compositionality, MDL priors, and object recognition. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, NIPS 9. MIT Press, 1997.
[5] S. Edelman. Representation and Recognition in Vision. MIT Press, Cambridge, MA, 1999.
[6] E. Kobatake and K. Tanaka. Neuronal selectivities to complex object features in the ventral visual pathway of the macaque cerebral cortex. J. Neurophysiol., 71:856-867, 1994.
[7] S. C. Rao, G. Rainer, and E. K. Miller. Integration of what and where in the primate prefrontal cortex. Science, 276:821-824, 1997.
[8] C. E. Connor, D. C. Preddie, J. L. Gallant, and D. C. Van Essen. Spatial attention effects in macaque area V4. J. of Neuroscience, 17:3201-3214, 1997.
[9] D. J. Heeger, E. P. Simoncelli, and J. A. Movshon. Computational models of cortical visual processing. Proc. Nat. Acad. Sci., 93:623-627, 1996.
[10] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2:1019-1025, 1999.
[11] D. Pomerleau. Input reconstruction reliability estimation. In C. L. Giles, S. J. Hanson, and J. D. Cowan, editors, NIPS 5, pages 279-286. Morgan Kaufmann, 1993.
[12] I. Stainvas, N. Intrator, and A. Moshaiov. Improving recognition via reconstruction, 2000. Preprint.
[13] S. Edelman and N. Intrator. (Coarse Coding of Shape Fragments) + (Retinotopy) ≈ Representation of Structure. Spatial Vision, 13:255-264, 2000.
[14] I. Fujita, K. Tanaka, M. Ito, and K. Cheng. Columns for visual features of objects in monkey inferotemporal cortex. Nature, 360:343-346, 1992.
[15] S. Edelman and F. N. Newell. On the representation of object structure in human vision: evidence from differential priming of shape and location. CSRP 500, University of Sussex, 1998.
[16] M. Bar and I. Biederman. Subliminal visual priming. Psychological Science, 9(6):464-469, 1998.
[17] A. Treisman. Perceiving and re-perceiving objects. American Psychologist, 47:862-875, 1992.
[18] D. Navon. Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9:353-383, 1977.
[19] B. C. Love, J. N. Rouder, and E. J. Wisniewski. A structural account of global and local processing. Cognitive Psychology, 38:291-316, 1999.
[20] A. M. Treisman and N. G. Kanwisher. Perceiving visually presented objects: recognition, awareness, and modularity. Current Opinion in Neurobiology, 8:218-226, 1998.
[21] E. Salinas and L. F. Abbott. Invariant visual responses from attentional gain fields. J. of Neurophysiology, 77:3267-3272, 1997.
[22] S. Deneve and A. Pouget. Neural basis of object-centered representations. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, NIPS 11, Cambridge, MA, 1998. MIT Press.
[23] M. C. Burl, M. Weber, and P. Perona. A probabilistic approach to object recognition using local photometry and global geometry. In Proc. 4th Europ. Conf. Comput. Vision, H. Burkhardt and B. Neumann (Eds.), LNCS-Series Vol. 1406-1407, Springer-Verlag, pages 628-641, June 1998.
Spike-Timing-Dependent Learning for Oscillatory Networks

Silvia Scarpetta
Dept. of Physics "E.R. Caianiello", Salerno University, 84081 (SA), Italy
and INFM, Sezione di Salerno, Italy
[email protected]

Zhaoping Li
Gatsby Comp. Neurosci. Unit
University College London, WC1N 3AR, United Kingdom
[email protected]

John Hertz
Nordita
2100 Copenhagen Ø, Denmark
[email protected]

Abstract

We apply to oscillatory networks a class of learning rules in which synaptic weights change proportional to pre- and post-synaptic activity, with a kernel A(τ) measuring the effect for a postsynaptic spike a time τ after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(τ) enhances the resonant response to a learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations.

1 Introduction

Recent studies of synapses between pyramidal neocortical and hippocampal neurons [1, 2, 3, 4] have revealed that changes in synaptic efficacy can depend on the relative timing of pre- and postsynaptic spikes. Typically, a presynaptic spike followed by a postsynaptic one leads to an increase in efficacy (LTP), while the reverse temporal order leads to a decrease (LTD). The dependence of the change in synaptic efficacy on the difference τ between the two spike times may be characterized by a kernel which we denote A(τ) [4]. For hippocampal pyramidal neurons, the half-width of this kernel is around 20 ms. Many important neural structures, notably hippocampus and olfactory cortex, exhibit oscillatory activity in the 20-50 Hz range. Here the temporal variation of the neuronal firing can clearly affect the synaptic dynamics, and vice versa. In this paper we study a simple model for learning oscillatory patterns, based on the structure of the kernel A(τ) and other known physiology of these areas. We will assume that these synaptic changes in long-range lateral connections are driven by oscillatory, patterned input to a network that initially has only local synaptic connections. The result is an imprinting of the oscillatory patterns in the synapses, such that subsequent input of a similar pattern will evoke a strong resonant response. It can be viewed as a generalization to oscillatory networks with spike-timing-dependent learning of the standard scenario whereby stationary patterns are stored in Hopfield networks using the conventional Hebb rule.

2 Model

The computational neurons of the model represent local populations of biological neurons that share common input. They follow the equations of motion [5]

\dot{u}_i = -\alpha u_i - \beta_i^0\, g_v(v_i) + \sum_j J_{ij}^0\, g_u(u_j) + I_i,   (1)

\dot{v}_i = -\alpha v_i + \gamma_i^0\, g_u(u_i) + \sum_{j \ne i} W_{ij}^0\, g_u(u_j).   (2)

Here u_i and v_i are membrane potentials for excitatory and inhibitory (formal) neuron i, α^{-1} is their membrane time constant, and the sigmoidal functions g_u( ) and g_v( ) model the dependence of their outputs (interpreted as instantaneous firing rates) on their membrane potentials. The couplings β_i^0 and γ_i^0 are inhibitory-to-excitatory (resp. excitatory-to-inhibitory) connection strengths within local excitatory-inhibitory pairs, and for simplicity we take the external drive I_i(t) to act only on the excitatory units.
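For concreteness, a minimal Euler integration of one local excitatory-inhibitory pair of eqs. (1)-(2) might look as follows; the long-range couplings are switched off (J^0 = W^0 = 0), as in the learning phase below, and all parameter values are our own choices rather than the paper's.

```python
# Euler integration of one local E-I pair, eqs. (1)-(2), with long-range
# couplings off (J0 = W0 = 0) as in the learning phase.  All parameter
# values are our own choices, not taken from the paper.
import numpy as np

alpha = 100.0                        # 1/s, inverse membrane time constant
beta0 = gamma0 = 237.0               # local E<->I coupling strengths
omega0 = 2 * np.pi * 41.0            # 41 Hz oscillatory drive (cf. Fig. 2)
g = np.tanh                          # stand-in for the sigmoids g_u, g_v

dt, steps = 1e-4, 20_000
u = v = 0.0
u_trace = np.empty(steps)
for k in range(steps):
    I = 0.1 * np.cos(omega0 * k * dt)          # external drive I_i(t)
    du = -alpha * u - beta0 * g(v) + I
    dv = -alpha * v + gamma0 * g(u)
    u, v = u + dt * du, v + dt * dv
    u_trace[k] = u
# u_trace settles into an oscillation at the drive frequency, as in eq. (7).
```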
We include nonlocal excitatory couplings J_{ij}^0 between excitatory units and W_{ij}^0 from excitatory units to inhibitory ones. In this minimal model, we ignore long-range inhibitory couplings, appealing to the fact that real anatomical inhibitory connections are predominantly short-ranged. (In what follows, we will sometimes use bold and sans serif notation (e.g., u, J) for vectors and matrices, respectively.) The structure of the couplings is shown in Fig. 1A. The model is nonlinear, but here we will limit our treatment to an analysis of small oscillations around a stable fixed point {ū, v̄} determined by the DC part of the input. Performing the linearization and eliminating the inhibitory units [6, 5], we obtain

\ddot{u} + [2\alpha - J]\,\dot{u} + [\alpha^2 + \beta(\gamma + W) - \alpha J]\, u = (\partial_t + \alpha)\,\delta I.   (3)

Here u is now measured from the fixed point ū, δI is the time-varying part of the input, and the elements of J and W are related to those of J^0 and W^0 by W_{ij} = g'_u(ū_j) W_{ij}^0 and J_{ij} = g'_u(ū_j) J_{ij}^0. For simplicity, we have assumed that the effective local couplings β_i = g'_v(v̄_i) β_i^0 and γ_i = g'_u(ū_i) γ_i^0 are independent of i: β_i = β, γ_i = γ. With oscillatory inputs δI = ξ e^{-iωt} + c.c., the oscillatory pattern elements ξ_i = |ξ_i| e^{-iφ_i} are complex, reflecting possible phase differences across the units. We likewise separate the response u = u^+ + u^- (after the initial transients) into positive- and negative-frequency components u^± (with u^- = u^{+*} and u^± ∝ e^{∓iωt}). Since \dot{u}^\pm = \mp i\omega u^\pm, eq. (3) can be written

\Big[2\alpha \pm \frac{i}{\omega}(\alpha^2 + \beta\gamma - \omega^2)\Big]\, u^\pm = M^\pm u^\pm + \Big(1 \pm \frac{i\alpha}{\omega}\Big)\, \delta I^\pm,   (4)

a form that shows how the matrix

M^\pm(\omega) \equiv J \mp \frac{i}{\omega}\,(\beta W - \alpha J)   (5)

describes the effective coupling between local oscillators. Here 2α is the intrinsic damping and α^2 + βγ the squared frequency of the individual oscillators.

Figure 1: A. The model: In addition to the local excitatory-inhibitory connections (vertical solid lines), there are nonlocal long-range connections (dashed lines) between excitatory units (J_{ij}) and from excitatory to inhibitory units (W_{ij}). External inputs are fed to the excitatory units. B: Activation functions used in simulations for excitatory units (B.1) and inhibitory units (B.2). Crosses mark the equilibrium point (ū, v̄) of the system.

2.1 Learning phase

We employ a generalized Hebb rule of the form

\delta C_{ij}(t) = \eta \int_0^T dt \int_{-\infty}^{\infty} d\tau\; y_i(t+\tau)\, A(\tau)\, x_j(t)   (6)

for synaptic weight C_{ij}, where x_j and y_i are the pre- and postsynaptic activities, measured relative to stationary levels at which no changes in synaptic strength occur. We consider a general kernel A(τ), although experimentally A(τ) > 0 (< 0) for τ > 0 (< 0). We will apply the rule to both J and W in our linearized network, where the firing rates g_u(u_i) and g_v(v_i) vary linearly with u_i and v_i, so we will use eq. (6) with x_j = u_j and y_i = u_i or v_i (measured from the fixed points ū_i, v̄_i), respectively. We assume oscillatory input δI = ξ^0 e^{-iω_0 t} + c.c. during learning. In the brain structures we are modeling, cholinergic modulation makes the long-range connections ineffective during learning [7]. Thus we set J = W = 0 in eq. (3) and find

u_i^+ = \frac{(\omega_0 + i\alpha)\,\xi_i^0}{2\alpha\omega_0 + i(\alpha^2 + \beta\gamma - \omega_0^2)}\; e^{-i\omega_0 t} \equiv U_0\, \xi_i^0\, e^{-i\omega_0 t},   (7)

and, from (\partial_t + \alpha)\, v_i = \gamma\, u_i,

v_i^+ = \frac{\gamma}{\alpha - i\omega_0}\; u_i^+.   (8)

Using these in the learning rule (6) leads to

J_{ij} = J_0\,\big[\tilde{A}(\omega_0)\,\xi_i^0 \xi_j^{0*} + c.c.\big], \qquad W_{ij} = \frac{\eta_W}{\eta_J}\,\frac{\gamma}{\alpha - i\omega_0}\, J_0\, \tilde{A}(\omega_0)\,\xi_i^0 \xi_j^{0*} + c.c.,   (9)

where \tilde{A}(\omega) = \int_{-\infty}^{\infty} d\tau\, A(\tau)\, e^{-i\omega\tau} is the Fourier transform of A(τ), J_0 = 2\pi\eta_J |U_0|^2/\omega_0, and η_J, η_W are the respective learning rates.
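The imprinting step can be sketched directly from the outer-product form of eq. (9). Everything numeric below (the pattern, the kernel value Ã(ω_0), and the constants) is an assumed illustrative value, and the learning-rate ratio is taken in the tuned case η_J = η_W γβ/(α^2 + ω_0^2) discussed next:

```python
# Imprinting via the outer-product form of eq. (9).  The pattern, the
# kernel value A~(omega0), and all constants are assumed illustrative
# values; with the tuned learning-rate ratio, the coefficient of the
# W term reduces to (alpha + i*omega0)/beta.
import numpy as np

N = 10
alpha, beta, gamma = 100.0, 237.0, 237.0
omega0 = 2 * np.pi * 41.0
J0 = 1.0
A_w0 = 1.0 - 0.3j                          # assumed value of A~(omega0)

rng = np.random.default_rng(1)
xi = np.exp(1j * rng.uniform(0, 2 * np.pi, N)) / np.sqrt(N)  # unit norm

P = np.outer(xi, xi.conj())                # xi_i * conj(xi_j)
J = 2 * (J0 * A_w0 * P).real               # J0 [A~(w0) P + c.c.]
W = 2 * (J0 * (alpha + 1j * omega0) / beta * A_w0 * P).real
np.fill_diagonal(W, 0.0)                   # no self-terms: sum over j != i
```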
When the rates are tuned such that η_J = η_W γβ/(α^2 + ω_0^2) and when ω = ω_0, we have M^+_{ij} = J_0\,\tilde{A}(\omega_0)\,\xi_i \xi_j^*, a generalization of the outer-product learning rule to the complex patterns ξ^μ from the Hopfield-Hebb form for real-valued patterns. For learning multiple patterns μ = 1, 2, ..., the learned weights are simply sums of contributions from individual patterns like eqns. (9), with ξ replaced by ξ^μ.

2.2 Recall phase

We return to the single-pattern problem and study the simple case when η_J = η_W γβ/(α^2 + ω_0^2). Consider first an input pattern δI = ξ e^{-iωt} + c.c. that matches the stored pattern exactly (ξ = ξ^0), but possibly oscillating at a different frequency. We then find, using eqns. (9) in eq. (3), the (positive-frequency) response

u^+ = \frac{(\omega + i\alpha)\,\xi^0 e^{-i\omega t}}{2\alpha\omega - \frac{J_0}{2}(\omega + \omega_0)\tilde{A}'(\omega_0) + i\big[\alpha^2 + \beta\gamma - \frac{J_0}{2}(\omega + \omega_0)\tilde{A}''(\omega_0) - \omega^2\big]},   (10)

where \tilde{A}'(\omega_0) \equiv \mathrm{Re}\,\tilde{A}(\omega_0) and \tilde{A}''(\omega_0) \equiv \mathrm{Im}\,\tilde{A}(\omega_0). For a strong response at ω = ω_0, we require

\omega_0^2 = \alpha^2 + \beta\gamma - J_0\,\omega_0\,\tilde{A}''(\omega_0), \qquad 2\alpha - J_0\,\tilde{A}'(\omega_0) \approx 0.   (11)

This means (1) the resonance frequency ω_0 is determined by \tilde{A}'', (2) the effective damping 2α - J_0\tilde{A}'(\omega_0) should be small, and (3) deviation of ω from ω_0 reduces the responses. It is instructive to consider the case where the width of the time window for synaptic change is small compared with the oscillation period. Then we can expand \tilde{A}(\omega_0) in ω_0:

\tilde{A}(\omega_0) \approx a_0 - i a_1 \omega_0, \qquad a_0 = \int d\tau\, A(\tau), \quad a_1 = \int d\tau\, \tau A(\tau).   (12)

In particular, A(τ) = δ(τ) gives a_0 = 1 and a_1 = 0, and the conventional Hebbian learning [5]. Experimentally, a_1 > 0, implying a resonant frequency greater than the intrinsic local frequency √(α^2 + βγ) obtained in the absence of long-range coupling.

If the drive ξ does not match the stored pattern (in phase and amplitude), the response will consist of two terms. The first has the form of eq. (10) but reduced in amplitude by an overlap factor ξ^{0*} · ξ. (For convenience we use normalized pattern vectors.) The second term is proportional to the part of ξ orthogonal to the stored pattern. The J and W matrices do not act in this subspace, so the frequency dependence of this term is just that of uncoupled oscillators, i.e., eq. (10) with J_0 set equal to zero. This response is always highly damped and therefore small. It is straightforward to extend this analysis to multiple imprinted patterns. The response consists of a sum of terms, one for each stored pattern. The term for each stored pattern is just like that described in the single-stored-pattern case: it has one part for the input component parallel to the stored pattern and another part for the component orthogonal to the stored pattern. We note that, in this linear analysis, an input which overlaps several stored patterns will (if the imprinting and input frequencies match) evoke a resonant response which is a linear combination of the stored patterns. Thus, a network tuned to operate in a nearly linear regime is able to interpolate in forming its representation of the input. For categorical associative memory, on the other hand, a network has to work in the extreme nonlinear limit, responding with only the strongest stored pattern in an input mixture. As our network operates near the threshold for spontaneous oscillations, we expect that it should exhibit properties intermediate between these limits.
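A quick way to see the frequency selectivity implied by eq. (10) is to evaluate the response amplitude |u^+| across drive frequencies; with the effective damping 2α - J_0 Ã'(ω_0) made small and βγ chosen so that the resonance condition (11) holds at ω_0, the curve peaks sharply near 41 Hz, as in Fig. 2A. The parameter values below are our own illustrative choices:

```python
# Response amplitude |u+| of eq. (10) versus drive frequency.  beta*gamma
# is chosen so that the resonance condition (11) holds exactly at omega0,
# and A~'(omega0) is chosen to make the effective damping small.  All
# numbers are our own illustrative values.
import numpy as np

alpha, J0 = 100.0, 1.0
omega0 = 2 * np.pi * 41.0
A_re, A_im = 190.0, 50.0                   # assumed A~'(w0), A~''(w0)
beta_gamma = omega0**2 - alpha**2 + J0 * omega0 * A_im   # from eq. (11)

def amplitude(omega):
    denom = (2 * alpha * omega - 0.5 * J0 * (omega + omega0) * A_re
             + 1j * (alpha**2 + beta_gamma
                     - 0.5 * J0 * (omega + omega0) * A_im - omega**2))
    return np.abs((omega + 1j * alpha) / denom)

freqs = 2 * np.pi * np.linspace(20, 60, 400)
print(freqs[np.argmax(amplitude(freqs))] / (2 * np.pi))  # peaks near 41 Hz
```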
Figure 2: Circles show non-linear simulation results, stars show linear simulation results, and the dotted line shows the analytical prediction for the linearized model. A: Importance of frequency match: amplitude of the response of the output units as a function of the frequency of the current input. The frequency of the imprinted pattern is 41 Hz. B: Importance of amplitude and phase mismatch: amplitude of the response as a function of the overlap between the current input and the imprinted pattern (i.e., |ξ^{0*} · ξ|), for different presented input patterns ξ. C: Input-output relationship when two orthogonal patterns ξ^1 and ξ^2 have been imprinted at the same frequency ω = 41 Hz. The angle of the input pattern with respect to ξ^1 is shown as a function of the angle of the output pattern with respect to ξ^1, for many different input patterns.

We find that this is indeed the case in the simulations reported in the next section. From our analysis it turns out that the network behaves like a Hopfield memory (separate basins, without interpolation capability) for patterns with different imprinting frequencies, but at the same time it is able to interpolate among patterns which share a common frequency.

3 Simulations

Checking the validity of our linear approximation in the analysis, we performed numerical simulations of both the non-linear equations (1, 2) and the linearized ones (3). We simulated the recall phase of a network consisting of 10 excitatory and 10 inhibitory cells. The connections J_{ij} and W_{ij} were calculated from eqns. (9), where we used the approximation (12) for the kernel shape A(τ). Parameters were set in such a way that the selective resonance was in the 40-Hz range. In the non-linear simulations we used different piecewise linear activation functions for g_u() and g_v(), as shown in Fig. 1B. We chose the parameters of the functions g_u() and g_v() so that the network equilibrium points ū_i, v̄_i were close to, but below, the high-gain region, i.e. at the points marked with crosses in Fig. 1B. The results confirm that when the input pattern matches the imprinted one in frequency, amplitude and phase, the network responds with strong resonant oscillations. However, it does not resonate if the frequencies do not match, as shown in the frequency tuning curve in Fig. 2A. The behavior when the two frequencies are close to each other differs in the linear and nonlinear cases. However, in both cases a sharp selectivity in frequency is observed. The dependence on the overlap between the input and the stored pattern is shown in Fig. 2B. The non-linear case, indicated by circles, should be compared with the linear case, where the amplitude is always linear in the overlap. In the nonlinear case, the linearity holds roughly only for overlaps lower than about 0.4; for larger overlaps the amplification is as high as for the perfect-match case. This means that input patterns with an overlap with the imprinted one greater than 0.4 lie within the basin of attraction of the imprinted pattern.

Figure 3: Frequency selectivity: response evoked on 3 of the 10 neurons. Oscillatory patterns ξ^1 e^{-iω_1 t} + c.c. and ξ^2 e^{-iω_2 t} + c.c. have been imprinted, with ξ^1 ⊥ ξ^2 and ω_1 = 41 Hz, ω_2 = 63 Hz. During the learning phases the parameter a_1 of the kernel was tuned appropriately, i.e. a_1 = 0.1 when imprinting ξ^1 and a_1 = 1.1 when imprinting ξ^2.

The response elicited when two orthogonal patterns have been imprinted with the same frequency is shown in Fig. 2C. Let ξ^1 e^{-iω_0 t} + c.c.
and ξ^2 e^{-iω_0 t} + c.c. denote the imprinted patterns, and ξ e^{-iω_0 t} + c.c. be the input to the trained network. In both linear and non-linear simulations the network responds vigorously (with high-amplitude oscillations) to the drive if ξ is in the subspace spanned by the imprinted patterns, and fails to respond appreciably if ξ is orthogonal to that plane. When the input pattern ξ is in the plane spanned by the stored patterns, the resonant response u also lies in this plane. However, while in the linear case the output is proportional to the input, in the nonlinear case there are preferred directions in the stored-pattern plane. The figure shows that, in the case simulated here, there are three stable attractors: ξ^1, ξ^2, and the symmetric linear combination (ξ^1 + ξ^2)/√2.

Finally, we performed linear simulations storing two orthogonal patterns ξ^1 e^{-iω_1 t} + c.c. and ξ^2 e^{-iω_2 t} + c.c. with two different imprinting frequencies. Fig. 3 shows the good performance of the network in separating the basins of attraction in this case. The response to a linear combination of the two patterns, (a ξ^1 + b ξ^2) e^{-iω_2 t} + c.c., is proportional to the part of the input whose imprinting frequency matches the current driving frequency. Linear combinations of the two imprinted patterns are not attractors if the two patterns do not share the same imprinting frequency.

4 Summary and Discussion

We have presented a model of learning for memory or input representations in neural networks with input-driven oscillatory activity. The model structure is an abstraction of the hippocampus or the olfactory cortex. We propose a simple generalized Hebbian rule, using temporal-activity-dependent LTP and LTD, to encode both magnitudes and phases of oscillatory patterns into the synapses in the network. After learning, the model responds resonantly to inputs which have been learned (or, for networks which operate essentially linearly, to linear combinations of learned inputs), but negligibly to other input patterns. Encoding both amplitude and phase enhances computational capacity, for which the price is having to learn both the excitatory-to-excitatory and the excitatory-to-inhibitory connections. Our model puts constraints on the form of the learning kernel A(τ) that should be experimentally observed; e.g., for small oscillation frequencies, it requires that the overall LTP dominate the overall LTD, but this requirement should be modified if the stored oscillations are of high frequencies. Plasticity in the excitatory-to-inhibitory connections (for which experimental evidence and investigation is still scarce) is required by our model for storing phase-locked but unsynchronized oscillation patterns. As for the Hopfield model, we distinguish two functional phases: (1) the learning phase, in which the system is clamped dynamically to the external inputs, and (2) the recall phase, in which the system dynamics is determined by both the external inputs and the internal interactions. A special property of our model in the linear regime is the following interpolation capability: under a given oscillation frequency, once the system has learned a set of representation states, all other states in the subspace spanned by the learned states can also evoke vigorous responses. Hippocampal place cells could employ such a representation. Each cell has a localised "place field", and the superposition of activity of several cells with nearby place fields can represent continuously-varying position.
The locality of the place fields also means that this representation is conservative (and thus robust), in the sense that interpolation does not extend beyond the spatial range of the experienced locations or to locations in between two learned but distant and disjoint spatial regions. Of course, this interpolation property is not always desirable. For instance, in categorical memory, one does not want inputs which are linear combinations of stored patterns to elicit responses which are similar linear combinations. Suitable nonlinearity can (as we saw in the last section) enable the system to perform categorization: one way involves storing different patterns (or, by implication, different classes of patterns) at different frequencies. For instance, in a multimodal area, "place fields" might be stored at one oscillation frequency, and (say) odor memories at another. It seems likely to us that the brain may employ different kinds and degrees of nonlinearity in different areas or at different times to enhance the versatility of its computations.

References

[1] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, Science 275, 213 (1997).
[2] J. C. Magee and D. Johnston, Science 275, 209 (1997).
[3] D. Debanne, B. H. Gähwiler, and S. M. Thompson, J. Physiol. 507, 237 (1998).
[4] G. Q. Bi and M. M. Poo, J. Neurosci. 18, 10464 (1998).
[5] Z. Li and J. Hertz, Network: Computation in Neural Systems 11, 83-102 (2000).
[6] Z. Li and J. J. Hopfield, Biol. Cybern. 61, 379-392 (1989).
[7] M. E. Hasselmo, Neural Comp. 5, 32-44 (1993).
905
1,829
Learning winner-take-all competition between groups of neurons in lateral inhibitory networks

Xiaohui Xie, Richard Hahnloser and H. Sebastian Seung
E25-210, MIT, Cambridge, MA 02139
{xhxie|rh|seung}@mit.edu

Abstract
It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons.

In traditional winner-take-all networks, lateral inhibition is used to enforce a localized, or "grandmother cell," representation in which only a single neuron is active [1, 2, 3, 4]. When used for unsupervised learning, winner-take-all networks discover representations similar to those learned by vector quantization [5]. Recently many research efforts have focused on unsupervised learning algorithms for sparsely distributed representations [6, 7]. These algorithms lead to networks in which groups of multiple neurons are coactivated to represent an object. Therefore, it is of great interest to find ways of using lateral inhibition to mediate winner-take-all competition between groups of neurons, as this could be useful for learning sparsely distributed representations.

In this paper, we show how winner-take-all competition between groups of neurons can be learned. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a simple formula that can be interpreted as arising from an online learning rule. To show that the resulting network functions as advertised, we perform a stability analysis. If the strength of inhibition is sufficiently great, and the group organization satisfies certain conditions, we show that the only sets of neurons that can be coactivated at a stable steady state are the given groups and their subsets. Because of the competition between groups, only one group can be activated at a time. In general, the identity of the winning group depends on the initial conditions of the network dynamics. If the groups are ordered by the aggregate input that each receives, the possible winners are those above a cutoff that is set by inequalities to be specified.

1 Basic definitions

Let $m$ groups of neurons be given, where group membership is specified by the matrix
$$\xi_i^a = \begin{cases} 1 & \text{if the $i$th neuron is in the $a$th group} \\ 0 & \text{otherwise.} \end{cases} \qquad (1)$$
We will assume that every neuron belongs to at least one group (this condition can be relaxed, but is kept for simplicity), and every group contains at least one neuron. A neuron is allowed to belong to more than one group, so that the groups are potentially overlapping. The inhibitory synaptic connectivity of the network is defined in terms of the group membership,
$$J_{ij} = \prod_{a=1}^{m}\left(1 - \xi_i^a \xi_j^a\right) = \begin{cases} 0 & \text{if $i$ and $j$ both belong to a group} \\ 1 & \text{otherwise.} \end{cases} \qquad (2)$$
One can imagine this pattern of connectivity arising by a simple learning mechanism.
Suppose that all elements of $J$ are initialized to be unity, and the groups are presented sequentially as binary vectors $\xi^1,\ldots,\xi^m$. The $a$th group is learned through the update
$$J_{ij} \leftarrow J_{ij}\,(1 - \xi_i^a \xi_j^a). \qquad (3)$$
In other words, if neurons $i$ and $j$ both belong to group $a$, then the connection between them is removed. After presentation of all $m$ groups, this leads to Eq. (2). At the start of the learning process, the initial state of $J$ corresponds to uniform inhibition, which is known to implement winner-take-all competition between individual neurons. It will be seen that, as inhibitory connections are removed during learning, the competition evolves to mediate competition between groups of neurons rather than individual neurons.

The dynamics of the network is given by
$$\frac{dx_i}{dt} + x_i = \Big[b_i + \alpha x_i - \beta \sum_j J_{ij} x_j\Big]_+ \qquad (4)$$
where $[z]_+ = \max\{z,0\}$ denotes rectification, $\beta > 0$ is the strength of lateral inhibition, and $\alpha > 0$ is the strength of self-excitation.

Equivalently, the dynamics can be written in matrix-vector form as $\dot{x} + x = [b + Wx]_+$, where $W = \alpha I - \beta J$ includes both self-excitation and lateral inhibition. The state of the network is specified by the vector $x$, and the external input by the vector $b$. A vector $v$ is said to be nonnegative, $v \ge 0$, if all of its components are nonnegative. The nonnegative orthant is the set of all nonnegative vectors. It can be shown that any trajectory of Eq. (4) starting in the nonnegative orthant remains there. Therefore, for simplicity we will consider trajectories that are confined to the nonnegative orthant $x \ge 0$. However, we will consider input vectors $b$ whose components are of arbitrary sign.

2 Global stability

The goal of this paper is to characterize the steady state response of the dynamics Eq. (4) to an input $b$ that is constant in time. For this to be a sensible goal, we need some guarantee that the dynamics converges to a steady state, and does not diverge to infinity. This is provided by the following theorem.

Theorem 1 Consider the network Eq. (4). The following statements are equivalent:
1. For any input $b$, there is a nonempty set of steady states that is globally asymptotically stable, except for initial conditions in a set of measure zero.
2. The strength $\alpha$ of self-excitation is less than one.

Proof sketch:
(2) $\Rightarrow$ (1): If $\alpha < 1$, the function $\frac{1}{2}(1-\alpha)x^T x + \frac{\beta}{2}x^T J x - b^T x$ is bounded below and radially unbounded in the nonnegative orthant. Furthermore it is nonincreasing under the dynamics Eq. (4), and constant only at steady states. Therefore it is a Lyapunov function, and its local minima are globally asymptotically stable.
(1) $\Rightarrow$ (2): Suppose that (2) is false. If $\alpha \ge 1$, it is possible to choose $b$ and an initial condition for $x$ so that only one neuron is active, and the activity of this neuron diverges, so that (1) is contradicted.
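To make Eqs. (2)-(4) concrete, here is a minimal NumPy sketch (not from the paper; the parameter values and the five-neuron example are our own) that builds $J$ by the learning rule (3) and integrates the dynamics (4) with simple Euler steps:

```python
import numpy as np

def group_inhibition(groups, n):
    """Build the inhibitory matrix J of Eq. (2) via the update rule (3):
    start from all ones and knock out entries for pairs sharing a group."""
    J = np.ones((n, n))
    for xi in groups:                      # xi is a binary membership vector
        xi = np.asarray(xi, dtype=float)
        J *= 1.0 - np.outer(xi, xi)       # J_ij <- J_ij (1 - xi_i xi_j)
    return J

def run_dynamics(J, b, alpha=0.4, beta=1.0, dt=0.01, steps=20000):
    """Euler-integrate dx/dt + x = [b + alpha*x - beta*J@x]_+  (Eq. 4)."""
    x = np.zeros(len(b))
    for _ in range(steps):
        drive = b + alpha * x - beta * (J @ x)
        x += dt * (np.maximum(drive, 0.0) - x)
    return x

# Two overlapping groups over 5 neurons; beta > 1 - alpha, so group WTA holds.
groups = [(1, 1, 1, 0, 0), (0, 0, 1, 1, 1)]
J = group_inhibition(groups, 5)
x = run_dynamics(J, b=np.array([1.0, 1.0, 1.0, 0.9, 0.9]))
print(np.round(x, 3))  # typically neurons 0-2 active, neurons 3-4 suppressed
```

With $\beta > 1 - \alpha$, the run settles with only one group active, at the gain $1/(1-\alpha)$ discussed below.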
3 Relationship between groups and permitted sets

In this section we characterize the conditions under which the lateral inhibition of Eq. (4) enforces winner-take-all competition between the groups of neurons. That is, the only sets of neurons that can be coactivated at a stable steady state are the groups and their subsets. This is done by performing a linear stability analysis, which allows us to classify active sets using the following definition.

Definition 1 If a set of neurons can be coactivated by some input at an asymptotically stable steady state, it is called permitted. Otherwise, it is forbidden.

Elsewhere we have shown that whether a set is permitted or forbidden depends on the submatrix of synaptic connections between neurons in that set [1]. If the largest eigenvalue of the submatrix is less than unity, then the set is permitted. Otherwise, it is forbidden. We have also proved that any superset of a forbidden set is forbidden, while any subset of a permitted set is also permitted.

Our goal in constructing the network (4) is to make the groups and their subsets the only permitted sets of the network. To determine whether this is the case, we must answer two questions. First, are all groups and their subsets permitted? Second, are all permitted sets contained in groups? The first question is answered by the following lemma.

Lemma 1 All groups and their subsets are permitted.

Proof: If a set is contained in a group, then there is no lateral inhibition between the neurons in the set. Provided that $\alpha < 1$, all eigenvalues of the submatrix are less than unity, and the set is permitted.

The answer to the second question, whether all permitted sets are contained in groups, is not necessarily affirmative. For example, consider the network defined by the group membership matrix $\xi = \{(1,1,0), (0,1,1), (1,0,1)\}$. Since every pair of neurons belongs to some group, there is no lateral inhibition ($J = 0$), which means that there are no forbidden sets. As a result, $(1,1,1)$ is a permitted set, but obviously it is not contained in any group.

Let's define a spurious permitted set to be one that is not contained in any group. For example, $\{1,1,1\}$ is a spurious permitted set in the above example. To eliminate all the spurious permitted sets in the network, certain conditions on the group membership matrix $\xi$ have to be satisfied.

Definition 2 The membership $\xi$ is degenerate if there exists a set of $n \ge 3$ neurons that is not contained in any group, but all of its subsets with $n-1$ neurons belong to some group. Otherwise, $\xi$ is called nondegenerate. For example, $\xi = \{(1,1,0), (0,1,1), (1,0,1)\}$ is degenerate.

Using this definition, we can formulate the following theorem.

Theorem 2 The neural dynamics Eq. (4) with $\alpha < 1$ and $\beta > 1 - \alpha$ has a spurious permitted set if and only if $\xi$ is degenerate.

Before we prove this theorem, we will need the following lemma.

Lemma 2 If $\beta > 1 - \alpha$, any set containing two neurons not in the same group is forbidden under the neural dynamics Eq. (4).

Proof sketch: We will start by analyzing a very simple case, where there are two neurons belonging to two different groups. Let the group membership be $\{(1,0), (0,1)\}$. In this case, $W = \begin{pmatrix} \alpha & -\beta \\ -\beta & \alpha \end{pmatrix}$. This matrix has eigenvectors $(1,1)$ and $(1,-1)$ and eigenvalues $\alpha - \beta$ and $\alpha + \beta$. Since $\alpha < 1$ for global stability and $\beta > 0$ by definition, the $(1,1)$ mode is always stable. But if $\beta > 1 - \alpha$, the $(1,-1)$ mode is unstable. This means that it is impossible for the two neurons to be coactivated at a stable steady state. Since any superset of a forbidden set is also forbidden, the general result of the lemma follows.
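The eigenvalue test quoted above from [1] is straightforward to apply numerically. The following sketch (our own; the degenerate three-neuron example is the one from the text) classifies an active set as permitted or forbidden:

```python
import numpy as np

def is_permitted(active, W):
    """A set is permitted iff the largest eigenvalue of the submatrix of
    W = alpha*I - beta*J, restricted to the set, is less than unity."""
    idx = np.asarray(active)
    sub = W[np.ix_(idx, idx)]
    return np.max(np.real(np.linalg.eigvals(sub))) < 1.0

alpha, beta = 0.4, 1.0
# J for the degenerate example xi = {(1,1,0), (0,1,1), (1,0,1)}: J = 0.
J = np.zeros((3, 3))
W = alpha * np.eye(3) - beta * J
print(is_permitted([0, 1, 2], W))  # True: {1,1,1} is a spurious permitted set
```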
Proof of Theorem 2 (sketch):
$\Leftarrow$: If $\xi$ is degenerate, there must exist a set of $n \ge 3$ neurons that is not contained in any group, but all of its subsets with $n-1$ neurons belong to some group. There is no lateral inhibition between these $n$ neurons, since every pair of neurons belongs to some group. Thus the set containing all $n$ neurons is permitted and spurious.
$\Rightarrow$: If there exists a spurious permitted set $P$, we need to prove that $\xi$ must be degenerate. We will prove this by contradiction and induction. Let's assume $\xi$ is nondegenerate. $P$ must contain at least 2 neurons, since any one-neuron subset is permitted and not spurious. By Lemma 2, these 2 neurons must be contained in some group, or else the set is forbidden. Thus $P$ must contain at least 3 neurons to be spurious, and any pair of neurons in $P$ belongs to some group by Lemma 2. If $P$ contains at least $n$ neurons and all of its subsets with $n-1$ neurons belong to some group, then the set with these $n$ neurons must belong to some group, otherwise $\xi$ is degenerate. Thus $P$ must contain at least $n+1$ neurons to be spurious, and all its $n$-neuron subsets belong to some group. By induction, this implies that $P$ must contain all neurons in the network, in which case $P$ is either forbidden or nonspurious. This contradicts the assumption that $P$ is a spurious permitted set.

From Theorem 2, we can easily derive the following result.

Corollary 1 If every group contains some neuron that does not belong to any other group, then there are no spurious permitted sets.

4 The potential winners

We have seen that if $\xi$ is nondegenerate, the active set must be contained in a group, provided that lateral inhibition is strong ($\beta > 1 - \alpha$). The group that contains the active set will be called the "winner" of the competition between groups. The identity of the winner depends on the input $b$, and also on the initial conditions of the dynamics. For a given input, we need to characterize which pattern could potentially be the winner. Suppose that the group inputs $B^a = \sum_i [b_i]_+ \xi_i^a$ are distinct. Without loss of generality, we order the group inputs as $B^1 > \cdots > B^m$. Let's denote the largest input as $b_{\max} = \max_i\{b_i\}$ and assume $b_{\max} > 0$.

Theorem 3 For nonoverlapping groups, the top $c$ groups with the largest group input could end up the winner, depending on the initial conditions of the dynamics, where $c$ is determined by $B^c \ge (1-\alpha)\beta^{-1} b_{\max} > B^{c+1}$.

Proof sketch: Suppose the $a$th group is the winner. For all neurons not in this group to be inactive, the self-consistency condition should read
$$\frac{\beta}{1-\alpha} \sum_i [b_i]_+\, \xi_i^a \ \ge\ \max_{j \notin a} [b_j]_+. \qquad (5)$$
If a group contains the neuron with the largest input, this condition can always be satisfied. Moreover, this group is always in the top $c$ groups. For groups not containing the neuron with the largest input, this condition can be satisfied if and only if they are in the top $c$ groups.

The winner-take-all competition described above holds only for the case of strong inhibition $\beta > 1 - \alpha$. On the other hand, if $\beta$ is small, the competition will be weak and may not result in group-winner-take-all. In particular, if $\beta < (1-\alpha)/\lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of $-J$, then the set of all neurons is permitted. Since every subset of a permitted set is permitted, that means there are no forbidden sets and the network is monostable. Hence, group-winner-take-all does not hold. If $(1-\alpha)/\lambda_{\max} < \beta < 1 - \alpha$, the network has forbidden sets, but the possibility of spurious permitted sets cannot be excluded.
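The cutoff in Theorem 3 is a one-line computation. Here is a small sketch (ours, with an illustrative two-group example) for nonoverlapping groups:

```python
import numpy as np

def potential_winners(groups, b, alpha, beta):
    """For nonoverlapping groups, return the groups that can win (Theorem 3):
    those whose aggregate input B^a = sum_i [b_i]_+ xi_i^a satisfies
    B^a >= (1 - alpha) / beta * max_i b_i."""
    bp = np.maximum(np.asarray(b, dtype=float), 0.0)
    B = np.array([bp @ np.asarray(xi) for xi in groups])
    cutoff = (1.0 - alpha) / beta * bp.max()
    return [a for a in range(len(groups)) if B[a] >= cutoff]

groups = [(1, 1, 0, 0), (0, 0, 1, 1)]
print(potential_winners(groups, b=[1.0, 0.2, 0.7, 0.6], alpha=0.4, beta=1.0))
# B = [1.2, 1.3], cutoff = 0.6 -> both groups are potential winners
```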
5 Examples

Traditional winner-take-all network. This is a special case of our network with $N$ groups, each containing one of the $N$ neurons. Therefore, the group membership matrix $\xi$ is the identity matrix, and $J = \mathbf{1}\mathbf{1}^T - I$, where $\mathbf{1}$ denotes the vector of all ones. According to Corollary 1, only one neuron is permitted to be active at a stable steady state, provided that $\beta > 1 - \alpha$. We refer to the active neuron as the "winner" of the competition mediated by the lateral inhibition. If we assume that the inputs $b_i$ have distinct values, they can be ordered as $b_1 > b_2 > \cdots > b_N$, without loss of generality. According to Theorem 3, any of the neurons 1 to $k$ can be the winner, where $k$ is defined by $b_k \ge (1-\alpha)\beta^{-1} b_1 > b_{k+1}$. The winner depends on the initial condition of the network dynamics. In other words, any neuron whose input is greater than $(1-\alpha)/\beta$ times the largest input can end up the winner.

Topographic organization. Let the $N$ neurons be organized into a ring, and let every set of $d$ contiguous neurons be a group; $d$ will be called the width. For example, in a network with $N = 4$ neurons and group width $d = 2$, the membership matrix is $\xi = \{(1,1,0,0), (0,1,1,0), (0,0,1,1), (1,0,0,1)\}$. This ring network is similar to the one proposed by Ben-Yishai et al. in the modeling of orientation tuning in visual cortex [8]. Unlike the WTA network, where all groups are non-overlapping (which implies that $\xi$ is always nondegenerate), in the ring network neurons are shared among different groups, and $\xi$ will become degenerate when the width of the groups is large. To guarantee that all permitted sets are subsets of some group, we have the following corollary, which can be derived from Theorem 2.

Figure 1: Permitted sets of the ring network. The ring network is comprised of 15 neurons with $\alpha = 0.4$ and $\beta = 1$. In panels A and D, the 15 groups are represented by columns. Black refers to active neurons and white refers to inactive neurons. (A) 15 groups of width $d = 5$. (B) All permitted sets corresponding to the groups in A. (C) The 15 permitted sets in B that have no permitted supersets. They are the same as the groups in A. (D) 15 groups with width $d = 6$. (E) All permitted sets corresponding to the groups in D. (F) There are 20 permitted sets in E that have no permitted supersets. Note that there are 5 spurious permitted sets.

Corollary 2 In the ring network with $N$ neurons, if the width $d < N/3 + 1$, then there is no spurious permitted set.

Fig. 1 shows the permitted sets of a ring network with 15 neurons. From Corollary 2, we know that if the group width is no larger than 5 neurons, there will not exist any spurious permitted set. In the left three panels of Fig. 1, the group width is 5 and all permitted sets are subsets of these groups. However, when the group width is 6 (right three panels), there exist 5 spurious permitted sets, as shown in panel F.

As we have mentioned earlier, the lateral inhibition strength $\beta$ plays a critical role in determining the dynamics of the network. Fig. 2 shows four types of steady states of a ring network corresponding to different values of $\beta$.
Hence the neurons in the winning group operate in a purely analog regime. The coexistence of analog filtering with logical constraints on neural activation represents a form of hybrid analog-digital computation that may be especially appropriate for perceptual tasks. It might be possible to apply a similar method to the problem of data reconstruction using a constrained set of basis vectors. The constraints on the linear combination of basis vectors could, for example, implement sparsity or nonnegativity constraints.

As we have shown in Theorem 2, there are some degenerate cases of overlapping groups, to which our method does not apply. It is an interesting open question whether there exists a general way to translate arbitrary groups of coactive neurons into permitted sets without involving spurious permitted sets.

In the past, a great deal of research has been inspired by the idea of storing memories as dynamical attractors in neural networks [10]. Our theory suggests an alternative viewpoint, which is to regard permitted sets as memories latent in the synaptic connections. From this viewpoint, the contribution of the present paper is a method of storing and retrieving memories as permitted sets in neural networks.

Figure 2: Lateral inhibition strength $\beta$ determines the behavior of the network. The network is a ring network of 15 neurons with width $d = 5$, where $\alpha = 0.4$ and the input is $b_i = 1$ for all $i$. The panels show the steady state activities of the 15 neurons. (A) There are no forbidden sets. (B) The marginal state $\beta = (1-\alpha)/\lambda_{\max} = 0.874$, in which the network forms a continuous attractor. (C) Forbidden sets exist, and so do spurious permitted sets. (D) Group-winner-take-all case, no spurious permitted sets.

References
[1] R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. Douglas, and H. S. Seung. Digital selection and analog amplification coexist in an electronic circuit inspired by neocortex. Nature, 3:609-616, 2000.
[2] Shun-Ichi Amari and Michael A. Arbib. Competition and cooperation in neural nets. In J. Metzler (ed.), Systems Neuroscience, pages 119-165. Academic Press, 1977.
[3] J. Feng and K. P. Hadeler. Qualitative behaviour of some simple networks. J. Phys. A, 29:5019-5033, 1996.
[4] Richard H. R. Hahnloser. About the piecewise analysis of networks of linear threshold neurons. Neural Networks, 11:691-697, 1998.
[5] T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag, Berlin, 3rd edition, 1989.
[6] D. D. Lee and H. S. Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788-791, 1999.
[7] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[8] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA, 92:3844-3848, 1995.
[9] J. J. Hopfield. Neurons with graded response have collective properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088-3092, 1984.
Further Explorations in Visually-Guided Reaching: Making MURPHY Smarter

Bartlett W. Mel
Center for Complex Systems Research
Beckman Institute, University of Illinois
405 North Matheus Street
Urbana, IL 61801

ABSTRACT
MURPHY is a vision-based kinematic controller and path planner based on a connectionist architecture, and implemented with a video camera and Rhino XR-series robot arm. Imitative of the layout of sensory and motor maps in cerebral cortex, MURPHY's internal representations consist of four coarse-coded populations of simple units representing both static and dynamic aspects of the sensory-motor environment. In previously reported work [4], MURPHY first learned a direct kinematic model of his camera-arm system during a period of extended practice, and then used this "mental model" to heuristically guide his hand to unobstructed visual targets. MURPHY has since been extended in two ways: First, he now learns the inverse differential kinematics of his arm in addition to ordinary direct kinematics, which allows him to push his hand directly towards a visual target without the need for search. Secondly, he now deals with the much more difficult problem of reaching in the presence of obstacles.

INTRODUCTION
Visual guidance of a multi-link arm through a cluttered workspace is known to be an extremely difficult computational problem. Classical approaches in the field of robotics have typically broken the problem into pieces of manageable size, including modules for direct and inverse kinematics and dynamics [7], along with a variety of highly complex algorithms for motion planning in the configuration space of a multi-link arm (e.g. [3]). Workers in the field of robotics have rarely (until recently) emphasized neural plausibility at the level of representation and algorithm, opting instead for explicit mathematical computations or complex, multi-stage algorithms using general-purpose data structures. More peculiarly, very little emphasis has been placed on full use of the visual channel for robot control, other than as a source of target shape or coordinates.

Figure 1: MURPHY's Connectionist Architecture. Four interconnected populations of neuron-like units (visual, hand-direction, joint-angle, and joint-velocity) implement a variety of sensory-motor mappings.

Much has been learned of the neural substrate for vision-guided limb control in humans and non-human primates (see [2] for review), albeit at a level too far removed from concrete algorithmic specification to be of direct engineering utility. Nonetheless, a number of general principles of cortical organization have inspired the current approach to vision-based kinematic learning and motion planning. MURPHY's connectionist architecture has been based on the observation that a surprisingly large fraction of the vertebrate brain is devoted to the explicit representation of the animal's sensory and motor state [6]. During normal behavior, each of these neural representations carries behaviorally relevant state information, some yoked to the sensory epithelia, others to the motor system. The effect is a rich set of online associative learning opportunities.
Moreover, the visual modality is by far the dominant one in the primate brain by measures of sheer real estate, including a number of areas that are known to be concerned with the representation of limb control in immediate extrapersonal space [2], suggesting that visual processing may overshadow what has usually been perceived as primarily a motor process.

MURPHY's ORGANIZATION
In the interests of space, we present here a highly reduced description of MURPHY's organization; the reader is referred to [5] for a much more comprehensive treatment, including a lengthy discussion of the relevance of MURPHY's structure and function to the psychology, motor physiology, and neural basis of visually-guided limb control in primates.

The Physical Setup
MURPHY's physical setup consists of a 512 x 512 JVC video camera pointed at a Rhino XR-3 robotic arm, whose wrist, elbow, and shoulder rotate freely in the image plane of the camera. White spots are stuck to the arm in convenient places; when the image is thresholded, only the white spots appear in the image (see fig. 2). This arrangement allows continuous control over the complexity of the visual image of the arm, which in turn affects computation time during learning. The arm is software controllable, with a stepper motor for each joint. Arm dynamics are not dealt with in this work.

The Connectionist Architecture
MURPHY is currently based on four interconnected populations of neuron-like units (fig. 1), encoding both static and dynamic aspects of the sensory-motor environment (only two were used in a previous report [4]).

Visual Populations. The principal sensory population is organized as a rectangular, visuotopically-mapped 64 x 64 grid of coarsely-tuned visual units, each of which responds when a visual feature (such as a white spot on the arm) falls into its receptive field (fig. 1, upper left). The second population of 24 units encodes the direction of MURPHY's hand motion through the visual field (fig. 1, lower left); vector magnitude is ignored at present. These units are thus "fired" only by the distinct visual image of the hand, but are selective for the direction of hand motion through the visual field as MURPHY moves his arm in the workspace.

Joint Populations. The third population of 273 units consists of three subpopulations encoding the static joint configuration; the angle of each joint is value-coded individually in a subpopulation dedicated to that joint, consisting of units with overlapping triangular receptive fields (fig. 1, upper right). The fourth and final population of 24 units also consists of three subpopulations, each value-coding the velocity of one of the three joints (fig. 1, lower right). A sketch of this style of value coding appears below.

During both his learning and performance phases, to be described in subsequent sections, MURPHY is also able to carry out simple sequential operations that are driven by a control structure external to his connectionist architecture.
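The paper does not give the exact tuning-curve equations, so the following is a minimal sketch (the unit count, angle range, and receptive-field width are illustrative assumptions) of value coding with overlapping triangular receptive fields:

```python
import numpy as np

def triangular_population(theta, centers, width):
    """Value-code a joint angle as activities of units with overlapping
    triangular receptive fields, as in MURPHY's joint-angle population."""
    return np.maximum(0.0, 1.0 - np.abs(theta - centers) / width)

centers = np.linspace(0.0, 180.0, 13)      # 13 units spanning 0-180 degrees
act = triangular_population(47.0, centers, width=30.0)
print(np.round(act, 2))                    # a small bump of activity near 45
```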
This smooth nonlinear mapping comprises both the kinematics of the arm and the optical parameters and global geometry of Further Explorations in Visually-Guided Reaching the camera/imaging system, and is learned and represented as a synaptic projection from the joint-angle to visual-field populations (fig. 1). Post-training, MURPHY can assume an arbitrary joint posture "mentally", i.e. by setting up the appropriate pattern of activation on his joint-angle population without allowing the arm to move. The learned mapping will then synaptically activate a mental image of the arm, in that configuration, on the "post-synaptic" visual-field population. Contemplated movements of the arm can thus be evaluated without overt action-this is the heart of MURPHY'S mental model. MURPHY is also able to learn the inverse differential-kinematics of his arm, a mapping which translates a desired direction of motion through the workspace into the requisite commands to the joints, allowing MURPHY to guide his hand along a desired trajectory through his field of view. This mapping is learned and represented as a synaptic projection originating from both i) the hand-vector population, encoding the desired visual-field direction, and ii) the joint-angle population encoding the current state of the arm, and terminating on the joint-move population, which specifies the appropriate pertubation to the joints (fig. 1, see arrows labelled "Inverse Jacobian"). In the next section, we describe how this learning takes place. HOW MURPHY LEARNS As described in [4,5], MURPHY learns by doing. Thus, during an initial training period for the direct kinematics, MURPHY steps his arm systematically through a small representative sample of the 3.3 billion legal arm configurations (visiting 17,000 configs. in 5 hours). Each step constitutes a standard connectionist training example between his joint-angle and visual-field populations. A novel synaptic learning scheme called sigma-pi learning is used for weight modification [4,5], described elsewhere in great detail [5]. This scheme essentially treats each post-synaptic sigma-pi neuron (see [5]) as an interpolating lookup table of the kind discussed by Albus and others [1], rather than as a standard linear threshold unit. Sigma-pi learning has been inspired by the physical structure and membrane properties of biological neurons, and yields several advantages in performance and simplicity of implementation for the learning of smooth low-dimensional functions [5]. As an implementation note, once the sigma-pi units have been appropriately trained, they are reimplemented using k-d trees, a much more efficient data-structure for a sequential computer (giving a speedup on the order of 50-100). MURPHY'S inverse-differential mapping is learned analogously, where each movement of the arm (rather than each state) is used as a training example. Thus, as the hand sweeps through the visual field during either physical or mental practice, each of the three relevant populations are activated (hand-vector and joint-angle as inputs, joint-move as output), acting again as a single input-output training example for the learning procedure. 351 352 Mel "" " ' -.':.~~- I . Mt! " . . ? ..~~ . ~,\ ...... ~~ ?? . '., ?? Ie- ,, ?? ? f Figure 2: Four Visual Representations. The first frame shows the unprocessed camera view of MURPHY'S arm. White spots have been stuck to the arm at various places, such that a thresholded image contains only the white spots. 
Figure 2: Four Visual Representations. The first frame shows the unprocessed camera view of MURPHY's arm. White spots have been stuck to the arm at various places, such that a thresholded image contains only the white spots. The second frame shows the resulting pattern of activation over the 64 x 64 grid of coarsely-tuned visual units as driven by the camera. The third frame depicts an internally-produced "mental" image of the arm in the same configuration, as driven by weighted connections from the joint-angle population. Note that the mental image is a sloppy, but highly recognizable approximation to the camera-driven trace. The fourth frame shows the mental image generated using k-d trees in the place of sigma-pi units.

MURPHY IN ACTION

Reaching to Targets
In a previous report, MURPHY was only able to reach to a visual target by mentally flailing his way to the target (i.e. by generating a small random change in joint position, evaluating the induced mental image of the arm for proximity to the target, and keeping only those moves that reduced this distance), and then moving the arm physically in one fell swoop [4]. On repeated reaches to the same or similar targets, MURPHY was doomed to repeatedly wander his way stupidly and aimlessly to the target. Typical trajectories generated in this way can be seen in fig. 3ABC. Using only the steps in these three trajectories as training examples for MURPHY's inverse-differential mapping, and then allowing this map to generate "guesses" as to the appropriate joint-move at each step, the trajectories for similar targets are substantially more direct (fig. 3DEF).

Figure 3: Improving with Practice. Frames A, B, and C show MURPHY's initially random search trajectories from start to goal. Joint moves made during these three "mental" reaching episodes were used to train MURPHY's inverse differential-kinematic mapping. Frames D, E, and F show improvement in 3 subsequent reaching trials to nearby targets.

Avoiding Obstacles
Augmenting this direct search approach with only a few additional visual heuristics, MURPHY is able to find circuitous paths through complicated obstacle layouts, even when contrived with significant local minima designed to trap the arm (fig. 4). For problems of this kind, MURPHY uses a non-replacement, best-first search with backtracking on a quantized grid in configuration space, sketched in code below. Mental images of the arm were generated in sequence, and evaluated according to several criteria: moves that brought the hand closer to the target without collision with obstacles were accepted, marked, and pursued; moves that either had been tried before, pushed the hand out of the visual field, or resulted in collision were rejected (i.e. popped). Collision detection, usually considered a combinatorially expensive operation under typical representational assumptions (see [3]), is here represented as a single, parallel virtual-machine operation that detects superposition between arbitrary obstacle blobs in the visual field and the mental image of the arm.
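A minimal sketch of that search follows (ours; the grid, wall, and distance function are toy stand-ins for MURPHY's configuration grid and mental-image collision test):

```python
import heapq
import numpy as np

def best_first_reach(start, goal_dist, collides, neighbors):
    """Non-replacement best-first search with backtracking over a quantized
    configuration grid; 'collides' stands in for the parallel mental-image
    superposition test described in the text."""
    frontier = [(goal_dist(start), start)]
    visited, parent = {start}, {start: None}
    while frontier:
        d, q = heapq.heappop(frontier)        # popping = backtracking
        if d < 1e-9:                          # reached the goal
            path = []
            while q is not None:
                path.append(q); q = parent[q]
            return path[::-1]
        for nxt in neighbors(q):
            if nxt not in visited and not collides(nxt):
                visited.add(nxt); parent[nxt] = q
                heapq.heappush(frontier, (goal_dist(nxt), nxt))
    return None

# Toy 2-joint problem: reach (8, 8) on a 10x10 grid, avoiding a wall.
wall = {(5, j) for j in range(9)}             # gap at (5, 9)
path = best_first_reach(
    start=(0, 0),
    goal_dist=lambda q: float(np.hypot(q[0] - 8, q[1] - 8)),
    collides=lambda q: q in wall or not (0 <= q[0] < 10 and 0 <= q[1] < 10),
    neighbors=lambda q: [(q[0] + dx, q[1] + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))],
)
print(path)
```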
First, the approach illustrates that neurally-inspired representational structures can, without equations, implement the core functional-mappings used in robot control. The approach also demonstrates that a great deal of useful knowledge can be extracted from the environment without need of a teacher, i.e. simply by do- MURPHY'S 353 354 Mel Figure 4: Reaching for a target (white cross) in the presence of obstacles (miscellaneous other white blobs). MURPHY typically used fewer than 100 internal search steps for problems of this approximate difficulty. Further Explorations in Visually-Guided Reaching ing. Thirdly, the approach illustrates that planning can be naturally carried out simultaneously in joint and workspace coordinates, that is, can be "administered" in joint space, but evaluated using massively parallel visual machine operations. Thus, the use of a massively-parallel architecture makes direct heuristic search through the configuration space of an arm computationally feasible, since a single plan step (i.e. running the direct kinematics and evaluating for progress and/or collision) is reduced to 0(1) virtual machine operations. This feature of the approach is that which most distinguishes MURPHY from other motion-planning schemes. A detailed analysis of the scaling behavior of this approach was carried out in [4] suggesting that a real-time, 3-d vision/6 degree-of-freedom super-MURPHY could be built with state-of-the-art 1988 hardware, though it must be stressed that the competitiveness of the approach depends heavily on massive hardware parallelism that is not conveniently available at this time. Questions also remain as to the scaling of problem difficulty in the jump to a practical real world systems. Acknowledgements This work was supported in part by a University of Illinois Cognitive Science/AI fellowship, the National Center for Supercomputing Applications, Champaign, Illinois, and NSF grant Phy 86-58062. Thanks are also due to Stephen Omohundro for encouragement and scientific support throughout the course of the project. References [1] Albus, J.S. A new approach to manipulator control: the cerebellar model articulation controller (CMAC). ASME J. of Dynamic Systems, Measurement, & Control, September 1975, 220-227. [2] Humphrey, D.R. On the cortical control of visually directed reaching: contributions by nonprecentral motor areas. In Posture and movement, R.E. Talbott & D.R. Humphrey, (Eds.), New York: Raven Press, 1979. [3] Lozano-Perez, T. A simple motion-planning algorithm for general robot manipulators. IEEE J. of Robotics & Automation, 1987, RA-9, 224-238. [4] Mel, B.W. MURPHY: A robot that learns by doing. In Neural information processing systems, p. 544-553, American Institute of Physics, New York, 1988. [5] Mel, B.W. A neurally-inspired connectionist approach to learning and performance in vision-based robot motion planning. Technical Report CCSR-89-17, Center for Complex Systems Research, Beckman Institute, University of Illinois, 405 N. Matheus, Urbana, IL 61801. [6] Merzenich, M.M & Kaas, J. Principles of organization of sensory-perceptual systems in mammals. In Progress in psychobiology and physiological psychology, vol. 9, 1980. [7] Paul, R. Robot manipulators: mathematics, programming, and control. Cambridge: MIT Press, 1981. 355
183 |@word trial:2 briefly:1 joh:1 manageable:1 heuristically:1 tried:1 mammal:1 carry:2 phy:1 configuration:7 series:1 extrapersonal:1 contains:1 initial:1 tuned:2 current:2 activation:2 yet:1 must:1 grain:1 subsequent:2 shape:1 motor:10 designed:1 pursued:1 fewer:1 guess:1 plane:1 core:1 mental:11 coarse:1 quantized:1 mathematical:1 along:2 direct:10 differential:4 competitiveness:1 consists:3 epithelium:1 circuitous:1 recognizable:1 ra:1 behavior:2 planning:6 multi:3 brain:2 inspired:4 detects:1 little:1 humphrey:2 vertebrate:1 elbow:1 project:1 moreover:1 what:1 evolved:1 kind:2 substantially:1 finding:1 demonstrates:1 control:12 unit:15 internally:1 grant:1 appear:1 before:1 engineering:1 local:1 treat:2 encoding:4 path:3 emphasis:1 directed:1 practical:1 camera:8 wrist:1 practice:3 implement:2 xr:2 spot:5 procedure:1 cmac:1 area:2 physiology:1 thought:1 convenient:1 projection:2 subpopulation:3 map:2 center:3 layout:2 cluttered:1 rectangular:1 simplicity:1 peculiarly:1 his:16 population:23 coordinate:2 target:13 heavily:1 massive:1 substrate:1 programming:1 us:1 velocity:1 expensive:1 module:1 sun:1 episode:1 movement:3 removed:1 environment:3 broken:1 complexity:1 dynamic:5 terminating:1 trained:1 tight:1 basis:1 joint:24 represented:3 various:1 train:1 distinct:1 describe:1 activate:1 whose:1 heuristic:2 triangular:1 pertubation:1 final:1 online:1 associative:1 advantage:1 sequence:1 blob:1 ment:1 interconnected:2 relevant:2 fired:1 representational:2 albus:2 description:1 billion:1 contrived:1 generating:1 ccsr:1 augmenting:1 school:1 progress:2 implemented:1 direction:4 guided:7 exploration:5 human:2 virtual:2 imitative:1 biological:1 secondly:1 proximity:1 considered:1 normal:1 visually:7 great:2 mapping:10 algorithmic:1 purpose:1 perceived:1 overt:1 beckman:2 yoked:1 currently:1 superposition:1 him:2 individually:1 combinatorially:1 weighted:1 brought:1 mit:1 behaviorally:1 super:1 reaching:12 rather:2 command:1 improvement:1 consistently:1 typically:2 initially:1 rhino:2 relation:1 originating:1 selective:1 animal:1 plan:1 art:1 field:14 once:1 constitutes:1 connectionist:7 others:2 report:3 primarily:1 few:1 distinguishes:1 simultaneously:1 resulted:1 comprehensive:1 national:1 murphy:41 phase:1 consisting:1 geometry:1 replacement:1 freedom:1 detection:1 organization:4 interest:1 highly:3 kinematic:5 perez:1 activated:1 devoted:1 closer:1 worker:1 tree:2 desired:3 guidance:1 stepper:1 obstacle:4 ordinary:1 visuotopically:1 too:2 graphic:1 reported:1 teacher:2 thanks:1 ie:1 workspace:4 physic:1 analogously:1 concrete:1 again:1 external:1 cognitive:1 american:1 style:1 suggesting:2 lookup:1 coding:1 north:1 automation:1 depends:1 piece:1 view:2 kaas:1 doing:2 start:1 complicated:1 parallel:3 contribution:1 il:2 yield:1 dealt:1 produced:1 trajectory:5 psychobiology:1 reach:2 synaptic:5 lengthy:1 ed:1 nonetheless:1 naturally:1 associated:1 static:4 knowledge:1 organized:1 evaluated:3 though:1 rejected:1 stage:1 until:1 hand:12 opting:1 nonlinear:1 overlapping:1 scientific:1 manipulator:3 effect:1 lozano:1 merzenich:1 deal:2 white:7 during:7 mel:6 criterion:1 asme:1 omohundro:1 workhorse:1 motion:8 dedicated:1 image:14 novel:2 recently:1 unobstructed:1 mentally:2 mt:1 physical:4 functional:1 cerebral:1 thirdly:1 discussed:1 he:3 doomed:1 significant:1 measurement:1 cambridge:1 ai:1 encouragement:1 grid:3 mathematics:1 pointed:1 illinois:4 had:1 moving:1 specification:1 robot:8 cortex:1 dominant:1 driven:4 massively:2 seen:1 minimum:1 additional:1 freely:1 period:2 living:1 ii:1 
relates:1 full:2 neurally:2 stephen:1 smooth:2 ing:1 champaign:1 technical:1 plausibility:1 cross:1 post:3 prevented:1 coded:2 controller:2 vision:6 essentially:1 physically:1 cerebellar:1 smarter:1 robotics:4 synaptically:1 addition:1 fellowship:1 source:1 modality:1 appropriately:1 fell:1 induced:1 estate:1 presence:2 concerned:1 variety:2 affect:1 psychology:2 architecture:6 translates:1 consumed:1 administered:1 unprocessed:1 bartlett:1 utility:1 york:2 action:2 repeatedly:1 ignored:1 useful:1 collision:4 detailed:2 clear:1 overshadow:1 hardware:2 reduced:3 generate:1 specifies:1 nsf:1 vol:1 coarsely:2 four:5 sheer:1 threshold:1 thresholded:2 imaging:1 fraction:1 inverse:7 angle:8 fourth:2 place:4 planner:1 reader:1 throughout:1 scaling:2 pushed:1 def:1 software:1 encodes:1 nearby:1 aspect:2 extremely:1 optical:1 speedup:1 according:1 membrane:1 remain:1 wi:1 making:1 primate:3 modification:1 heart:1 legal:1 equation:2 computationally:1 previously:1 turn:1 kinematics:8 popped:1 available:1 operation:5 limb:3 hierarchical:1 appropriate:3 inefficiently:1 running:2 opportunity:1 giving:1 classical:1 sweep:1 move:9 arrangement:1 question:1 posture:2 receptive:2 primary:1 responds:1 visiting:1 september:1 distance:1 link:2 mapped:1 street:1 contemplated:1 difficult:2 setup:2 sigma:5 trace:1 implementation:2 design:1 allowing:3 upper:2 neuron:4 observation:1 urbana:2 attacked:1 immediate:1 matheus:2 extended:2 shoulder:1 frame:6 arbitrary:2 connection:1 learned:6 hour:1 able:4 usually:2 pattern:2 reimplemented:1 parallelism:1 articulation:1 built:1 including:3 video:2 difficulty:2 arm:31 representing:1 scheme:3 carried:2 review:1 acknowledgement:1 wander:1 sloppy:1 degree:1 principle:2 systematically:1 pi:5 elsewhere:1 jvc:1 course:1 placed:1 surprisingly:1 keeping:1 supported:1 guide:2 institute:3 fall:1 cortical:2 evaluating:2 world:1 rich:1 sensory:8 stuck:2 made:1 jump:1 supercomputing:1 far:2 approximate:1 global:1 robotic:1 search:7 continuous:1 table:1 channel:1 learn:1 controllable:1 improving:1 complex:4 interpolating:1 arrow:1 paul:1 repeated:1 fig:12 referred:1 representative:1 depicts:1 position:1 comprises:1 explicit:2 perceptual:1 third:2 jacobian:1 learns:4 minute:1 emphasized:1 physiological:1 consist:1 trap:1 quantization:2 albeit:1 sequential:2 raven:1 magnitude:1 illustrates:2 push:1 backtracking:1 simply:1 visual:27 failed:1 conveniently:1 flailing:1 abc:1 extracted:1 goal:1 marked:1 towards:1 miscellaneous:1 labelled:1 feasible:1 change:1 typical:2 acting:1 principal:1 called:1 accepted:1 rarely:1 internal:2 support:1 rotate:1 stressed:1 relevance:1 requisite:1 avoiding:1
907
1,830
Learning Segmentation by Random Walks

Marina Meila, University of Washington, mmp@stat.washington.edu
Jianbo Shi, Carnegie Mellon University, jshi@cs.cmu.edu

Abstract
We present a new view of image segmentation by pairwise similarities. We interpret the similarities as edge flows in a Markov random walk and study the eigenvalues and eigenvectors of the walk's transition matrix. This interpretation shows that spectral methods for clustering and segmentation have a probabilistic foundation. In particular, we prove that the Normalized Cut method arises naturally from our framework. Finally, the framework provides a principled method for learning the similarity function as a combination of features.

1 Introduction
This paper focuses on pairwise (or similarity-based) clustering and image segmentation. In contrast to statistical clustering methods, which assume a probabilistic model that generates the observed data points (or pixels), pairwise clustering defines a similarity function between pairs of points and then formulates a criterion (e.g. maximum total intracluster similarity) that the clustering must optimize. The optimality criteria quantify the intuitive notion that points in a cluster (or pixels in a segment) are similar, whereas points in different clusters are dissimilar.

An increasingly popular approach to similarity-based clustering and segmentation is by spectral methods. These methods use eigenvalues and eigenvectors of a matrix constructed from the pairwise similarity function. Spectral methods are sometimes regarded as continuous approximations of previously formulated discrete graph-theoretical criteria, as in the image segmentation method of [9], or as in the web clustering method of [4, 2]. As demonstrated in [9, 4], these methods are capable of delivering impressive segmentation/clustering results using simple low-level features. In spite of their practical successes, spectral methods are still incompletely understood.

The main achievement of this work is to show that there is a simple probabilistic interpretation that can offer insights and serve as an analysis tool for all the spectral methods cited above. We view the pairwise similarities as edge flows in a Markov random walk and study the properties of the eigenvectors and values of the resulting transition matrix. Using this view, we were able to show that several of the above methods are subsumed by the Normalized Cut (NCut) segmentation algorithm of [9], in a sense that will be described. Therefore, in the following, we will focus on the NCut algorithm and will adopt the terminology of image segmentation (i.e. the data points are pixels and the set of all pixels is the image), keeping in mind that all the results are also valid for similarity-based clustering.

A probabilistic interpretation of NCut as a Markov random walk not only sheds new light on why and how spectral methods work in segmentation, but also offers a principled way of learning the similarity function. A segmented image can provide a "target" transition matrix, which a learning algorithm matches in KL divergence with the "learned" transition probabilities. The latter are output by a model as a function of a set of features measured from the training image. This is described in section 5. Experimental results on learning to segment objects with smooth, rounded shapes are described in section 6.

2 The Normalized Cut criterion and algorithm
Here and in the following, an image will be represented by a set of pixels I.
A segmentation is a partitioning of $I$ into mutually disjoint subsets. For each pair of pixels $i, j \in I$ a similarity $S_{ij} = S_{ji} \ge 0$ is given. In the NCut framework the similarities $S_{ij}$ are viewed as weights on the edges $ij$ of a graph $G$ over $I$. The matrix $S = [S_{ij}]$ plays the role of a "real-valued" adjacency matrix for $G$. Let $d_i = \sum_{j \in I} S_{ij}$ be called the degree of node $i$, and the volume of a set $A \subseteq I$ be $\mathrm{vol}\,A = \sum_{i \in A} d_i$. The set of edges between $A$ and its complement $\bar{A}$ is an edge cut, or shortly a cut. The normalized cut (NCut) criterion of [9] is a graph-theoretical criterion for segmenting an image into two, obtained by minimizing
$$\mathrm{NCut}(A, \bar{A}) = \frac{\sum_{i \in A,\, j \in \bar{A}} S_{ij}}{\mathrm{vol}\,A} + \frac{\sum_{i \in A,\, j \in \bar{A}} S_{ij}}{\mathrm{vol}\,\bar{A}} \qquad (1)$$
over all cuts $A, \bar{A}$. Minimizing NCut means finding a cut of relatively small weight between two subsets with strong internal connections. In [9] it is shown that optimizing NCut is NP-hard.

The NCut algorithm was introduced in [9] as a continuous approximation for solving the discrete minimum NCut problem by way of eigenvalues and eigenvectors. It uses the Laplacian matrix $L = D - S$, where $D$ is a diagonal matrix formed with the degrees of the nodes. The algorithm consists of solving the generalized eigenvalue/eigenvector problem
$$Lx = \lambda Dx. \qquad (2)$$
The NCut algorithm focuses on the second smallest eigenvalue of (2) and its corresponding eigenvector, call them $\lambda^L$ and $x^L$ respectively. In [9] it is shown that when there is a partitioning $A, \bar{A}$ of $I$ such that
$$x^L_i = \begin{cases} \alpha, & i \in A \\ \beta, & i \in \bar{A} \end{cases} \qquad (3)$$
then $A, \bar{A}$ is the optimal NCut and the value of the cut itself is $\mathrm{NCut}(A,\bar{A}) = \lambda^L$. This result represents the basis of spectral segmentation by normalized cuts. One solves the generalized spectral problem (2), then finds a partitioning of the elements of $x^L$ into two sets containing roughly equal values. The partitioning can be done by thresholding the elements. The partitioning of the eigenvector induces a partition on $I$, which is the desired segmentation.

As presented above, the NCut algorithm lacks a satisfactory intuitive explanation. In particular, the NCut algorithm and criterion offer little intuition about (1) what causes $x^L$ to be piecewise constant, (2) what happens when there are more than two segments, and (3) how the algorithm degrades when $x^L$ is not piecewise constant. The random walk interpretation that we describe now will answer the first two questions as well as give a better understanding of what spectral clustering is achieving. We shall not approach the third issue here: instead, we point to the results of [2], which apply to the NCut algorithm as well.

3 Markov walks and normalized cuts
By "normalizing" the similarity matrix $S$ one obtains the stochastic matrix
$$P = D^{-1} S \qquad (4)$$
whose row sums are all 1. As is known from the theory of Markov random walks, $P_{ij}$ represents the probability of moving from node $i$ to $j$ in one step, given that we are in $i$. The eigenvalues of $P$ are $\lambda_1 = 1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge -1$; $x^1, \ldots, x^n$ are the eigenvectors. The first eigenvector of $P$ is $x^1 = \mathbf{1}$, the vector whose elements are all 1s. W.l.o.g. we assume that no node has degree 0. Let us now examine the spectral problem for the matrix $P$, namely the solutions of the equation
$$Px = \lambda x. \qquad (5)$$

Proposition 1 If $\lambda, x$ are solutions of (5) and $P = D^{-1}S$, then $(1 - \lambda), x$ are solutions of (2).

In other words, the NCut algorithm and the matrix $P$ have the same eigenvectors; the eigenvalues of $P$ are identical to 1 minus the generalized eigenvalues in (2). Proposition 1 shows the equivalence between the spectral problem formulated by the NCut algorithm and the eigenvalues/vectors of the stochastic matrix $P$.
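Proposition 1 is easy to check numerically. A small sketch follows (the 4-pixel similarity matrix is an illustrative toy, not from the paper):

```python
import numpy as np

# A toy similarity matrix with two well-connected pairs of pixels.
S = np.array([[0.0, 0.9, 0.2, 0.0],
              [0.9, 0.0, 0.0, 0.1],
              [0.2, 0.0, 0.0, 0.9],
              [0.0, 0.1, 0.9, 0.0]])
d = S.sum(axis=1)
P = S / d[:, None]                       # Eq. (4): P = D^{-1} S

# The generalized problem (2), L x = lambda D x with L = D - S, is
# equivalent to (I - P) x = lambda x, so its eigenvalues are 1 minus
# the eigenvalues of P.
mu = np.linalg.eigvals(P)
lam = np.linalg.eigvals(np.eye(4) - P)
print(np.sort(np.real(mu)))              # includes mu_1 = 1 (eigenvector 1)
print(np.sort(1.0 - np.real(lam)))       # same spectrum, confirming Prop. 1
```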
This also helps explain why the NCut algorithm uses the second smallest generalized eigenvector: the smallest eigenvector of (2) corresponds to the largest eigenvector of $P$, which in most cases of interest is equal to $\mathbf{1}$ and thus contains no information.

The NCut criterion can also be understood in this framework. First define $\pi^\infty = [\pi^\infty_i]_{i \in I}$ by $\pi^\infty_i = d_i / \mathrm{vol}\,I$. It is easy to verify that $P^T \pi^\infty = \pi^\infty$ and thus that $\pi^\infty$ is a stationary distribution of the Markov chain. If the chain is ergodic, which happens under mild conditions [1], then $\pi^\infty$ is the only distribution over $I$ with this property. Note also that the Markov chain is reversible, because $\pi^\infty_i P_{ij} = \pi^\infty_j P_{ji} = S_{ij}/\mathrm{vol}\,I$. Define $P_{AB} = \Pr[A \to B \mid A]$ as the probability of the random walk transitioning from set $A \subseteq I$ to set $B \subseteq I$ in one step if the current state is in $A$ and the random walk is started in its stationary distribution:
$$P_{AB} = \frac{\sum_{i \in A,\, j \in B} S_{ij}}{\mathrm{vol}\,A}. \qquad (6)$$
From this it follows that
$$\mathrm{NCut}(A, \bar{A}) = P_{A\bar{A}} + P_{\bar{A}A}. \qquad (7)$$
If the NCut is small for a certain partition $A, \bar{A}$, then it means that the probabilities of evading set $A$, once the walk is in it, and of evading its complement $\bar{A}$ are both small. Intuitively, we have partitioned the set $I$ into two parts such that the random walk, once in one of the parts, tends to remain in it.

The NCut is strongly related to the concept of low conductivity sets in a Markov random walk. A low conductivity set $A$ is a subset of $I$ such that $h(A) = \max(P_{A\bar{A}}, P_{\bar{A}A})$ is small. Such sets have been studied in spectral graph theory in connection with the mixing time of Markov random walks [1]. More recently, [2] uses them to define a new criterion for clustering. Not coincidentally, the heuristic analyzed there is strongly similar to the NCut algorithm.

4 Stochastic matrices with piecewise constant eigenvectors
In the following we will use the transition matrix $P$ to achieve a better understanding of the NCut algorithm. Recall that the NCut algorithm looks at the second "largest" eigenvector of $P$, denoted by $x^2$ and equal to $x^L$, in order to obtain a partitioning of $I$. We define a vector $x$ to be piecewise constant relative to a partition $\Delta = (A_1, A_2, \ldots, A_k)$ of $I$ iff $x_i = x_j$ for $i, j$ pixels in the same set $A_s$, $s = 1, \ldots, k$. Since having piecewise constant eigenvectors is the ideal case for spectral segmentation, it is important to understand when the matrix $P$ has this desired property. We study when the first $k$ out of $n$ eigenvectors are piecewise constant.

Proposition 2 Let $P$ be a matrix with rows and columns indexed by $I$ that has independent eigenvectors. Let $\Delta = (A_1, A_2, \ldots, A_k)$ be a partition of $I$. Then $P$ has $k$ eigenvectors that are piecewise constant w.r.t. $\Delta$ and correspond to non-zero eigenvalues if and only if the sums $P_{is'} = \sum_{j \in A_{s'}} P_{ij}$ are constant for all $i \in A_s$ and all $s, s' = 1, \ldots, k$, and the matrix $R = [\hat{P}_{ss'}]_{s,s'=1,\ldots,k}$ (with $\hat{P}_{ss'} = \sum_{j \in A_{s'}} P_{ij}$ for $i \in A_s$) is non-singular.

Lemma 3 If the matrix $P$ of dimension $n$ is of the form $P = D^{-1}S$ with $S$ symmetric and $D$ non-singular, then $P$ has $n$ independent eigenvectors.

We call a stochastic matrix $P$ satisfying the conditions of Proposition 2 a block-stochastic matrix. Intuitively, Proposition 2 says that a stochastic matrix has piecewise constant eigenvectors if the underlying Markov chain can be aggregated into a Markov chain with state space $\Delta = \{A_1, \ldots, A_k\}$ and transition probability matrix $R$. This opens interesting connections between the field of spectral segmentation and the body of work on aggregability (or lumpability) [3] of Markov chains. The proof of Proposition 2 is provided in [5]. Proposition 2 shows that a much broader condition exists for the NCut algorithm to produce an exact segmentation/clustering solution: in fact, spectral clustering is able to group pixels by the similarity of their transition probabilities to subsets of $I$. Experiments [9] show that NCut works well on many graphs that have a sparse, complex connection structure, supporting this result with practical evidence. Proposition 2 generalizes previous results of [10].
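As a numerical illustration of Proposition 2 (the 4-state chain is our own toy example), the following sketch builds a block-stochastic $P$ over the partition $\{0,1\}, \{2,3\}$ and prints its top two eigenvectors, which come out piecewise constant:

```python
import numpy as np

# Every row puts mass 0.8 into its own block and 0.2 into the other,
# so the aggregated 2-state chain R = [[0.8, 0.2], [0.2, 0.8]] exists.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.4, 0.4, 0.2, 0.0],
              [0.1, 0.1, 0.6, 0.2],
              [0.0, 0.2, 0.3, 0.5]])
assert np.allclose(P[:2, :2].sum(1), 0.8) and np.allclose(P[2:, 2:].sum(1), 0.8)

mu, U = np.linalg.eig(P)
order = np.argsort(-np.real(mu))
# The two leading eigenvectors are constant on each of the two blocks.
for k in order[:2]:
    print(np.round(np.real(U[:, k] / U[0, k]), 3))  # (1,1,1,1), (1,1,-1,-1)
```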
The proof of Proposition 2 is provided in [5]. Proposition 2 shows that a much broader condition exists for the NCut algorithm to produce an exact segmentation/clustering solution. This condition shows that spectral clustering in fact groups pixels by the similarity of their transition probabilities to subsets of I. Experiments [9] show that NCut works well on many graphs that have a sparse, complex connection structure, supporting this result with practical evidence. Proposition 2 generalizes previous results of [10].

The NCut algorithm and criterion is one of the recently proposed spectral segmentation methods. In image segmentation, there are the algorithms of Perona and Freeman (PF) [7] and Scott and Longuet-Higgins (SLH) [8]. In web clustering, there are the algorithm of Kleinberg [4] (K), the long-known latent semantic analysis (LSA), and the variant proposed by Kannan, Vempala and Vetta (KVV) [2]. It is easy to show that each of the above ideal situations implies that the resulting stochastic matrix P satisfies the conditions of Proposition 2, and thus the NCut algorithm will also work exactly in these situations. In this sense NCut subsumes PF, SLH and (certain variants of) K. Moreover, none of the three other methods takes into account more information than NCut does. Another important aspect of a spectral clustering algorithm is robustness. Empirical results of [10] show that NCut is at least as robust as PF and SLH.

5 The framework for learning image segmentation

The previous section stressed the connection between NCut as a criterion for image segmentation and searching for low conductivity sets in a random walk. Here we will exploit this connection to develop a framework for supervised learning of image segmentation. Our goal is to obtain an algorithm that starts with a training set of segmented images and with a set of features and learns a function of the features that produces correct segmentations, as shown in figure 1.

Figure 1: The general framework for learning image segmentation. (Diagram: image features f^q_ij are mapped by the learned model to similarities and transition probabilities P_ij, which are compared against a human-labeled segmentation.)

For simplicity, assume the training set consists of one image only and its correct segmentation. From the latter it is easy to obtain "ideal" or target transition probabilities

P*_ij = 1/|A| if j ∈ A,   P*_ij = 0 if j ∉ A,    (8)

for i in segment A with |A| elements. We also have a predefined set of features f^q_ij, q = 1, ..., Q, which measure similarity between two pixels according to different criteria, and their values for the training image. The model is the part of the framework that is subject to learning. It takes the features f^q_ij as inputs and outputs the global similarity measure S_ij. For the present experiments we use the simple model S_ij = exp(Σ_q λ_q f^q_ij). Intuitively, it represents a set of independent "experts," the factors e^{λ_q f^q_ij}, voting on the probability of a transition i → j. In our framework, based on the fact that a segmentation is equivalent to a random walk, optimality is defined as the minimization of the conditional Kullback-Leibler (KL) divergence between the target probabilities P*_ij and the transition probabilities P_ij obtained by normalizing S_ij. Because P* is fixed, this minimization is equivalent to maximizing the cross entropy between the two (conditional) distributions, i.e.
max J, where

J = (1/|I|) Σ_{i∈I} Σ_{j∈I} P*_ij log P_ij.    (9)

If we interpret the factor 1/|I| as a uniform distribution π^0 over states, then the criterion in (9) is equivalent to the KL divergence KL(P̃*_{i→j} || P̃_{i→j}) between two distributions over transitions, where P̃*_{i→j} = π^0_i P*_ij and P̃_{i→j} = π^0_i P_ij. Maximizing J can be done via gradient ascent in the parameters λ. We obtain

∂J/∂λ_q = (1/|I|) ( Σ_{ij} P*_ij f^q_ij − Σ_{ij} P_ij f^q_ij ).    (10)

One can further note that the optimum of J corresponds to the solution of the following maximum entropy problem:

max H(j|i)   s.t.   ⟨f^q⟩_{π^0 P} = ⟨f^q⟩_{π^0 P*}   for q = 1, ..., Q.    (11)

Since this is a convex optimization problem, it has a unique optimum.

6 Segmentation with shape and region information

In this section we exemplify our approach on a set of synthetic and real images, using features carrying contour and shape information. First we use a set of local filter banks as edge detectors. They capture both edge strength and orientation. From this basic information we construct two features: the intervening contour (IC) and the co-linearity/co-circularity (CL).

Figure 2: Features for segmenting objects with smooth rounded shape. (a) The edge strength provides a cue of region boundary. It biases against random walks in a direction orthogonal to an edge. (b) Edge orientation provides a cue for the object's shape. The induced edge flow is used to bias the random walk along the edge, and transitions between co-circular edge flows are encouraged. (c) Edge flow for the bump in figure 3.

Figure 3: "Bump" images (a)-(f) with gradually reduced contrast are used for training. (g) shows the relation between the image edge contrast and the learned value of λ_IC, demonstrating automatic adaptation to the dynamic range of the IC. (h) shows the dependence on image contrast of λ_CL. At low image contrast, CL becomes more important.

The first feature is based on the assumption that if two pixels are separated by an edge, then they are less likely to belong together (figure 2). In the random walk interpretation, we are less likely to walk in a direction perpendicular to an edge. The intervening contour [6] is computed by f^IC_ij = max_{k∈l(i,j)} Edge(k), where l(i,j) is a line connecting pixels i and j, and Edge(k) is the edge strength at pixel k.

While the IC provides a cue for region boundaries, the edge orientation provides a cue for object shape. Human visual studies suggest that the shape of an object's boundary has a strong influence on how objects are grouped. For example, a convex region is more likely to be perceived as a single object. Thinking of segmentation as a random walk provides a natural way of exploiting this knowledge. Each discrete edge in the image induces an edge flow in its neighborhood. To favor convex regions, we can further bias the random walk by enhancing the transition probabilities between pixels with co-circular edge flow. Thus we define the CL feature as

f^CL_ij = ( 2 − cos(2a_i + a_j) ) / ( 2 − cos(2a_i) − cos(2a_j) ),

where a_i and a_j are defined as in figure 2(b).

For training, we have constructed the set of "bump" images with varying image contrast, as shown in figure 3. Figure 4 shows segmentation results using the weights trained with the "bump" image in figure 3(c).
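As an illustration of the learning loop just described, here is a compact Python sketch (my own paraphrase of equations (9) and (10), with made-up feature arrays and a hypothetical step size, not the authors' code) of gradient ascent on the cross entropy J:

```python
import numpy as np

def gradient_step(F, P_target, lam, step=0.1):
    """One gradient-ascent step on J (eq. 9) for the model
    S_ij = exp(sum_q lam_q F[q, i, j]).

    F        : array (Q, n, n) of pairwise features f^q_ij
    P_target : array (n, n) of target transition probabilities P*_ij
    lam      : array (Q,) of current feature weights lambda_q
    """
    n = F.shape[1]
    S = np.exp(np.einsum('q,qij->ij', lam, F))   # similarities S_ij
    P = S / S.sum(axis=1, keepdims=True)         # P = D^{-1} S

    # Gradient (eq. 10): expected features under P* minus under P.
    grad = (np.einsum('ij,qij->q', P_target, F)
            - np.einsum('ij,qij->q', P, F)) / n
    return lam + step * grad

# Tiny made-up example: 4 pixels, 1 feature, two 2-pixel segments.
F = np.random.rand(1, 4, 4)
F = (F + F.transpose(0, 2, 1)) / 2               # symmetric feature
P_target = np.kron(np.eye(2), np.full((2, 2), 0.5))  # eq. (8) targets
lam = np.zeros(1)
for _ in range(100):
    lam = gradient_step(F, P_target, lam)
```

Because the problem is convex in λ (it is the maximum entropy problem (11) in disguise), this simple ascent converges to the unique optimum.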
Figure 4: Testing on real images: (a) test images; (b) Canny edges computed with the Matlab "edge" function; (c) NCut segmentation computed using the weights learned on the image in figure 3(c). The system learns to prefer contiguous groups with smooth boundaries. The Canny edge map indicates that simply looking for edges is likely to give brittle and less meaningful segmentations.

7 Conclusion

The main contribution of our paper is showing that spectral segmentation methods have a probabilistic foundation. In the framework of random walks, we give a new interpretation to the NCut criterion and algorithm and a better understanding of its motivation. The probabilistic framework also allows us to define a principled criterion for supervised learning of image segmentation.

Acknowledgment: J.S. is supported by DARPA N00014-00-1-0915 and NSF IRI-9817496.

References

[1] Fan R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.

[2] Ravi Kannan, Santosh Vempala, and Adrian Vetta. On clusterings: good, bad and spectral. In Proc. 41st Symposium on the Foundations of Computer Science, 2000.

[3] J. R. Kemeny and J. L. Snell. Finite Markov Chains. Van Nostrand, New York, 1960.

[4] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. Technical report, IBM Research Division, Almaden Research Center, 1997.

[5] M. Meila and J. Shi. A random walks view of spectral segmentation. In Proc. International Workshop on AI and Statistics (AISTATS), 2001.

[6] Jitendra Malik, Serge Belongie, Thomas Leung, and Jianbo Shi. Contour and texture analysis for image segmentation. International Journal of Computer Vision, 2000.

[7] P. Perona and W. Freeman. A factorization approach to grouping. In European Conference on Computer Vision, 1998.

[8] G. L. Scott and H. C. Longuet-Higgins. Feature grouping by relocalisation of eigenvectors of the proximity matrix. In Proc. British Machine Vision Conference, 1990.

[9] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000. An earlier version appeared in CVPR 1997.

[10] Y. Weiss. Segmentation using eigenvectors: a unifying view. In International Conference on Computer Vision, 1999.
Balancing Multiple Sources of Reward in Reinforcement Learning

Christian R. Shelton
Artificial Intelligence Lab
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]

Abstract

For many problems which would be natural for reinforcement learning, the reward signal is not a single scalar value but has multiple scalar components. Examples of such problems include agents with multiple goals and agents with multiple users. Creating a single reward value by combining the multiple components can throw away vital information and can lead to incorrect solutions. We describe the multiple reward source problem and discuss the problems with applying traditional reinforcement learning. We then present a new algorithm for finding a solution and results on simulated environments.

1 Introduction

In the traditional reinforcement learning framework, the learning agent is given a single scalar value of reward at each time step. The goal is for the agent to optimize the sum of these rewards over time (the return). For many applications, there is more information available. Consider the case of a home entertainment system designed to sense which residents are currently in the room and automatically select a television program to suit their tastes. We might construct the reward signal to be the total number of people paying attention to the system. However, a reward signal of 2 ignores important information about which two users are watching. The users of the system change as people leave and enter the room. We could, in theory, learn the relationship among the users present, who is watching, and the reward. In general, it is better to use the domain knowledge we have instead of requiring the system to learn it. We know which users are contributing to the reward and that only present users can contribute.

In other cases, the multiple sources aren't users, but goals. For elevator scheduling we might be trading off people serviced per minute against average waiting time. For financial portfolio managing, we might be weighing profit against risk. In these cases, we may wish to change the weighting over time. In order to keep from having to relearn the solution from scratch each time the weighting is changed, we need to keep track of which rewards to attribute to which goals.

There is a separate difficulty if the rewards are not designed functions of the state but rather are given by other agents or people in the environment. Consider the case of the entertainment system above, but where every resident has a dial by which they can give the system feedback or reward. The rewards are incomparable. One user may decide to reward the system with values twice as large as those of another, which should not result in that user having twice the control over the entertainment. This isn't limited to scalings, but also includes any other monotonic transforms of the returns. If the users of the system know they are training it, they will employ all kinds of reward strategies to try to steer the system to the desired behavior [2]. By keeping track of the sources of the rewards, we will derive an algorithm to overcome these difficulties.

1.1 Related Work

The work presented here is related to recent work on multiagent reinforcement learning [1, 4, 5, 7] in that multiple reward signals are present and game theory provides a solution. This work is different in that it attacks a simpler problem, in which the computation is consolidated on a single agent.
Work in multiple goals (see [3, 8] as examples) is also related but assumes either that the returns of the goals are to be linearly combined for an overall value function, or that only one goal is to be solved at a time.

1.2 Problem Setup

We will be working with partially observable environments with discrete actions and discrete observations. We make no assumptions about the world model and thus do not use belief states. x(t) and a(t) are the observation and action, respectively, at time t. We consider only reactive policies (although the observations could be expanded to include history). π(x, a) is the policy, or the probability that the agent will take action a when observing x. At each time step, the agent receives a set of rewards (one for each source in the environment); r_s(t) is the reward at time t from source s. We use the average reward formulation, and so

R^π_s = lim_{n→∞} (1/n) E[ r_s(1) + r_s(2) + ... + r_s(n) | π ]

is the expected return from source s for following policy π. It is this return that we want to maximize for each source. We will also assume that the algorithm knows the set of sources present at each time step. Sources which are not present provide a constant reward, regardless of the state or action, which we will assume to be zero. All sums over sources will be assumed to be taken over only the present sources.

The goal is to produce an algorithm that will produce a policy based on previous experience and the sources present. The agent's experience will take the form of prior interactions with the world. Each experience is a sequence of observation, action, and reward triplets for a particular run of a particular policy.

2 Balancing Multiple Rewards

2.1 Policy Votes

If rewards are not directly comparable, we need to find a property of the sources which is comparable and a metric to optimize. We begin by noting that we want to limit the amount of control any given source has over the behavior of the agent. To that end, we construct the policy as the average of a set of votes, one for each source present. The votes for a source must sum to 1 and must all be non-negative (thus giving each source an equal "say" in the agent's policy). We will first consider restricting the rewards from a given source to only affect the votes for that source. The form for the policy is therefore

π(x, a) = Σ_s α_s(x) v_s(x, a) / Σ_s α_s(x),    (1)

where for each present source s, Σ_x α_s(x) = 1, α_s(x) ≥ 0 for all x, Σ_a v_s(x, a) = 1 for all x, and v_s(x, a) ≥ 0 for all x and a. We have broken the vote from a source into two parts, α and v. α_s(x) is how much effort source s is putting into affecting the policy for observation x. v_s(x, a) is the vote by source s for the policy for observation x. Mathematically this is the same as constructing a single vote (v'_s(x, a) = α_s(x) v_s(x, a)), but we find α and v to be more interpretable.

We have constrained the total effort and vote any one source can apply. Unfortunately, these votes are not quite the correct parameters for our policy. They are not invariant to the other sources present. To illustrate this, consider the example of a single state with two actions, two sources, and a learning agent with the voting method from above. If s_1 prefers only a_1 and s_2 likes an equal mix of a_1 and a_2, the agent will learn a vote of (1, 0) for s_1, and s_2 can reward the agent to cause it to learn a vote of (0, 1) for s_2, resulting in a policy of (0.5, 0.5). Whether this is the correct final policy depends on the problem definition.
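The vote combination in equation (1) is straightforward to implement. The sketch below (illustrative Python with made-up dimensions, not the authors' code) computes the policy from the α- and v-votes of the currently present sources, and reproduces the two-source example just described:

```python
import numpy as np

def policy_from_votes(alpha, v, present):
    """Combine per-source votes into a policy, following eq. (1).

    alpha   : array (S, X)    effort alpha_s(x); each row sums to 1
    v       : array (S, X, A) votes v_s(x, a); sums to 1 over actions
    present : list of indices of the sources currently present
    """
    a = alpha[present]                       # (S', X)
    vv = v[present]                          # (S', X, A)
    num = np.einsum('sx,sxa->xa', a, vv)     # sum_s alpha_s(x) v_s(x, a)
    den = a.sum(axis=0)[:, None]             # sum_s alpha_s(x)
    return num / den                         # pi(x, a)

# Two sources, one observation, two actions (the example in the text).
alpha = np.array([[1.0], [1.0]])
v = np.array([[[1.0, 0.0]],                  # s_1 votes only for a_1
              [[0.0, 1.0]]])                 # s_2's learned counter-vote
print(policy_from_votes(alpha, v, [0, 1]))   # [[0.5 0.5]]
print(policy_from_votes(alpha, v, [1]))      # [[0. 1.]]  (s_2 alone)
```

The last line shows exactly the failure discussed next: with s_1 removed, s_2's learned vote gives a policy far from its desired (0.5, 0.5).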
However, the real problem arises when we consider what happens if s_1 is removed. The policy reverts to (0, 1), which is far from s_2's (the only present source's) desired (0.5, 0.5). Clearly, the learned votes for s_2 are meaningless when s_1 is not present. Thus, while the voting scheme does limit the control each present source has over the agent, it does not provide a description of the source's preferences which would allow for the removal or addition (or reweighting) of sources.

2.2 Returns as Preferences

While rewards (or returns) are not comparable across sources, they are comparable within a source. In particular, we know that if R^{π_1}_s > R^{π_2}_s, then source s prefers policy π_1 to policy π_2. We do not know how to weigh that preference against a different source's preference, so an explicit tradeoff is still impossible, but we can limit (using the voting scheme of equation 1) how much one source's preference can override another source's preference. We allow a source's preference for a change to prevail in as much as its votes are sufficient to affect the change in the presence of the other sources' votes.

We have a type of general-sum game (letting the sources be the players of game theory jargon). The value to source s' of the set of all sources' votes is R^π_{s'}, where π is the function of the votes defined in equation 1. Each source s' would like to set its particular votes, α_{s'}(x) and v_{s'}(x, a), to maximize its value (or return). Our algorithm will set each source's vote in this way, thus insuring that no source could do better by "lying" about its true reward function. In game theory, a "solution" to such a game is called a Nash Equilibrium [6], a point at which each player (source) is playing (voting) its best response to the other players. At a Nash Equilibrium, no single player can change its play and achieve a gain. Because the votes are real-valued, we are looking for the equilibrium of a continuous game. We will derive a fictitious play algorithm to find an equilibrium for this game.

3 Multiple Reward Source Algorithm

3.1 Return Parameterization

In order to apply the ideas of the previous section, we must find a method for finding a Nash Equilibrium. To do that, we will pick a parametric form for R̂^π_s (the estimate of the return): linear in the KL-divergence between a target vote and π. Letting a_s, b_s, β_s(x), and p_s(x, a) be the parameters of R̂^π_s,

R̂^π_s = a_s Σ_x β_s(x) Σ_a p_s(x, a) log( π(x, a) / p_s(x, a) ) + b_s,    (2)

where a_s ≥ 0, β_s(x) ≥ 0, Σ_x β_s(x) = 1, p_s(x, a) ≥ 0, and Σ_a p_s(x, a) = 1. Just as α_s(x) was the amount of vote source s was putting towards the policy for observation x, β_s(x) is the importance for source s of the policy for observation x. And, while v_s(x, a) was the policy vote for observation x for source s, p_s(x, a) is the preferred policy for observation x for source s. The constants a_s and b_s allow for scaling and translation of the return.

If we let p'_s(x, a) = a_s β_s(x) p_s(x, a), then, given experiences of different policies and their empirical returns, we can estimate p'_s(x, a) using linear least-squares. Imposing the constraints just involves finding the normal least-squares fit with the constraint that all p'_s(x, a) be non-negative. From p'_s(x, a) we can calculate a_s = Σ_{x,a} p'_s(x, a), β_s(x) = (1/a_s) Σ_a p'_s(x, a), and p_s(x, a) = p'_s(x, a) / (a_s β_s(x)). We now have a method for solving for a_s, β_s(x), and p_s(x, a) given experience. We now need to find a way to compute the agent's policy.
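A minimal sketch of this fitting step is given below (my own illustration; the paper gives no code, and the non-negativity constraint is handled here by simple clipping rather than a proper constrained least-squares solver):

```python
import numpy as np

def fit_return_model(policies, returns):
    """Least-squares fit of p'_s(x, a) from (policy, return) pairs.

    Rewrites eq. (2) as R ~ sum_{x,a} p'(x, a) log pi(x, a) + const,
    so each observed policy contributes one linear equation in the
    unknown p' values and the constant.

    policies : array (T, X, A) of executed policies pi_t(x, a)
    returns  : array (T,) of empirical average returns for source s
    """
    T, X, A = policies.shape
    Phi = np.log(np.clip(policies, 1e-12, None)).reshape(T, X * A)
    Phi = np.hstack([Phi, np.ones((T, 1))])          # constant column
    w, *_ = np.linalg.lstsq(Phi, returns, rcond=None)
    p_prime = np.clip(w[:-1], 0.0, None).reshape(X, A)  # crude non-negativity

    a_s = p_prime.sum()                              # a_s = sum_{x,a} p'
    beta = p_prime.sum(axis=1) / a_s                 # beta_s(x)
    p = p_prime / (p_prime.sum(axis=1, keepdims=True) + 1e-12)
    return a_s, beta, p
```

The constant regression term absorbs b_s together with the entropy of p_s, which does not depend on the executed policy.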
3.2 Best Response Algorithm

To produce an algorithm for finding a Nash Equilibrium, let us start by deriving an algorithm for finding the best response for source s to a set of votes. We need to find the set of α_s(x) and v_s(x, a) that satisfy the constraints on the votes and maximize equation 2, which is the same as minimizing

− Σ_x β_s(x) Σ_a p_s(x, a) log( Σ_{s'} α_{s'}(x) v_{s'}(x, a) / Σ_{s'} α_{s'}(x) )    (3)

over α_s(x) and v_s(x, a) for the given s, because the other terms depend on neither α_s(x) nor v_s(x, a).

To minimize equation 3, let's first fix the α-values and optimize v_s(x, a). We will ignore the non-negativity constraints on v_s(x, a) and just impose the constraint that Σ_a v_s(x, a) = 1. The solution, whose derivation is simple and omitted due to space, is

v_s(x, a) = [ ( Σ_{s'} α_{s'}(x) ) p_s(x, a) − Σ_{s'≠s} α_{s'}(x) v_{s'}(x, a) ] / α_s(x).    (4)

We impose the non-negativity constraints by setting to zero any v_s(x, a) which are negative and renormalizing. Unfortunately, we have not been able to find such a nice solution for α_s(x). Instead, we use gradient descent to optimize equation 3, yielding

∂/∂α_s(x) = − β_s(x) Σ_a p_s(x, a) [ v_s(x, a) / Σ_{s'} α_{s'}(x) v_{s'}(x, a)  −  1 / Σ_{s'} α_{s'}(x) ].    (5)

We constrain the gradient to fit the constraints. We can find the best response for source s by iterating between the two steps above. First we initialize α_s(x) = β_s(x) for all x. We then solve for a new set of v_s(x, a) with equation 4. Using those v-values, we take a step along the gradient of α_s(x) with equation 5. We keep repeating until the solution converges (reducing the step size each iteration), which usually takes only a few tens of steps.

Figure 1: Load-unload problem. The right is the state diagram; cargo is loaded in state 1. Delivery to a boxed state results in reward from the source associated with that state (s_left, s_bottom, s_right). The left is the solution found: for state 5, from left to right are shown the p-values p_s(5, a), the v-values v_s(5, a), and the policy π(5, a).

Figure 2: Transfer of the load-unload solution: plots of the same values as in figure 1 but with the left source absent. No additional learning was allowed (the left-side plots are the same). The votes, however, change, and thus so does the final policy.

3.3 Nash Equilibrium Algorithm

To find a Nash Equilibrium, we start with α_s(x) = β_s(x) and v_s(x, a) = p_s(x, a) and iterate to an equilibrium by repeatedly finding the best response for each source and simultaneously replacing the old solution with the new best responses. To prevent oscillation, whenever the change in α_s(x) v_s(x, a) grows from one step to the next, we replace the old solution with one halfway between the old and new solutions and continue the iteration.

4 Example Results

In all of these examples we used the same learning scheme. We ran the algorithm for a series of epochs. At each epoch, we calculated π using the Nash Equilibrium algorithm. With probability ε, we replace π with one chosen uniformly over the simplex of conditional distributions. This insures some exploration. We follow π for a fixed number of time steps and record the average reward for each source. We add these average rewards and the empirical estimate of the policy followed as data to the least-squares estimate of the returns. We then repeat for the next epoch.

4.1 Multiple Delivery Load-Unload Problem

We extend the classic load-unload problem to multiple receivers. The observation state is shown in figure 1.
The hidden state is whether the agent is currently carrying cargo. Whenever the agent enters the top state (state 1), cargo is placed on the agent. Whenever the agent arrives in any of the boxed states while carrying cargo, the cargo is removed and the agent receives reward. For each boxed state, there is one reward source who only rewards for deliveries to that state (a reward of 1 for a delivery and 0 for all other time steps). In state 5, the agent has the choice of four actions, each of which moves the agent to the corresponding state without error. Since the agent can observe neither whether it has cargo nor its history, the optimal policy for state 5 is stochastic.

The algorithm set all α- and β-values to 0 for states other than state 5. We started ε at 0.5 and reduced it to 0.1 by the end of the run. We ran for 300 epochs of 200 iterations, by which point the algorithm consistently settled on the solution shown in figure 1. For each source, the algorithm found the best solution of randomly picking between the load state and the source's delivery state (as shown by the p-values). The votes are heavily weighted towards the delivery actions to overcome the other sources' preferences, resulting in an approximately uniform policy. The important point is that, without additional learning, the policy can be changed if the left source leaves. The learned α- and p-values are kept the same, but the Nash Equilibrium is different, resulting in the policy in figure 2.

4.2 One-way Door Problem

In this case we consider the environment shown in figure 3. From each state the agent can move to the left or right, except in states 1, 9, 10, and 15, where there is only one possible action. We can think of states 1 and 9 as one-way doors. Once the agent enters state 1 or 9, it may not pass back through except by going around through state 5. Source 1 gives reward when the agent passes through state 1. Source 2 gives reward when the agent passes through state 9. Actions fail (move in the opposite direction than intended) 0.1 of the time.

Figure 3: One-way door state diagram. At every state there are two actions (right and left) available to the agent. In states 1, 9, 10, and 15, where there is only a single outgoing edge, both actions follow the same edge. With probability 0.1, an action will actually follow the other edge. Source 1 rewards entering state 1, whereas source 2 rewards entering state 9.

Figure 4: One-way door solution. From left to right, for sources s_1 and s_2: the sources' ideal policies p_s(x, right), the votes α_s(x) and v_s(x, right), and the final agent's policy π(x, right). Light bars are for states for which both actions lead to the same state.

We ran the learning scheme for 1000 epochs of 100 iterations, starting ε at 0.5 and reducing it to 0.015 by the last epoch. The algorithm consistently converged to the solution shown in figure 4. Source 1 considers the left-side states (2-5 and 11-12) the most important, while source 2 considers the right-side states (5-8 and 13-14) the most important. The ideal policies captured by the p-values show that source 1 wants the agent to move left and source 2 wants the agent to move right for the upper states (2-8), while the sources agree that for the lower states (11-14) the agent should move towards state 5. The votes reflect this preference and agreement. Both sources spend most of their vote on state 5, the state they both feel is important and on which they disagree.
The other states (states for which only one source has a strong opinion or on which they agree) do not require much of their vote. The resulting policy is the natural one: in state 5, the agent randomly picks a direction, after which the agent moves around the chosen loop quickly to return to state 5. Just as in the load-unload problem, if we remove one source, the agent automatically adapts to the ideal policy for the remaining source (with only one source, s_0, present, π(x, a) = p_{s_0}(x, a)).

Estimating the optimal policies and then taking the mixture of these two policies would produce a far worse result. For states 2-8, both sources would have differing opinions and the mixture model would produce a uniform policy in those states; the agent would spend most of its time near state 5. Constructing a reward signal that is the sum of the sources' rewards does not lead to a good solution either. The agent will find that circling either the left or right loop is optimal and will have no incentive to ever travel along the other loop.

5 Conclusions

It is difficult to conceive of a method for providing a single reward signal that would result in the solution shown in figure 4 and still automatically change when one of the reward sources was removed. The biggest improvement in the algorithm will come from changing the form of the R̂^π_s estimator. For problems in which there is a single best solution, the KL-divergence measure seems to work well. However, we would like to be able to extend the load-unload result to the situation where the agent has a memory bit. In this case, the returns as a function of π are bimodal (due to the symmetry in the interpretation of the bit). In general, allowing each source's preference to be modelled in a more complex manner could help extend these results.

Acknowledgments

We would like to thank Charles Isbell, Tommi Jaakkola, Leslie Kaelbling, Michael Kearns, Satinder Singh, and Peter Stone for their discussions and comments. This report describes research done within CBCL in the Department of Brain and Cognitive Sciences and in the AI Lab at MIT. This research is sponsored by grants from ONR contracts Nos. N00014-93-1-3085 & N00014-95-1-0600, and NSF contracts Nos. IIS-9800032 & DMS-9872936. Additional support was provided by: AT&T, Central Research Institute of Electric Power Industry, Eastman Kodak Company, Daimler-Chrysler, Digital Equipment Corporation, Honda R&D Co., Ltd., NEC Fund, Nippon Telegraph & Telephone, and Siemens Corporate Research, Inc.

References

[1] J. Hu and M. P. Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In Proc. of the 15th International Conf. on Machine Learning, pages 242-250, 1998.

[2] C. L. Isbell, C. R. Shelton, M. Kearns, S. Singh, and P. Stone. A social reinforcement learning agent. 2000. Submitted to Autonomous Agents 2001.

[3] J. Karlsson. Learning to Solve Multiple Goals. PhD thesis, University of Rochester, 1997.

[4] M. Kearns, Y. Mansour, and S. Singh. Fast planning in stochastic games. In Proc. of the 16th Conference on Uncertainty in Artificial Intelligence, 2000.

[5] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proc. of the 11th International Conference on Machine Learning, pages 157-163, 1994.

[6] G. Owen. Game Theory. Academic Press, UK, 1995.

[7] S. Singh, M. Kearns, and Y. Mansour. Nash convergence of gradient dynamics in general-sum games. In Proc. of the 16th Conference on Uncertainty in Artificial Intelligence, 2000.
[8] S. P. Singh. The efficient learning of multiple task sequences. In NIPS, volume 4, 1992.
1831 |@word seems:1 hu:1 r:1 pick:2 profit:1 series:1 must:3 christian:1 remove:1 designed:2 interpretable:1 aps:1 plot:2 v:15 sponsored:1 intelligence:3 leaf:1 weighing:1 fund:1 parameterization:1 record:1 provides:1 contribute:1 honda:1 preference:10 simpler:1 along:1 incorrect:1 manner:1 expected:1 behavior:2 nor:2 planning:1 multi:1 brain:1 automatically:3 company:1 begin:1 estimating:1 provided:1 what:1 kind:1 consolidated:1 differing:1 finding:6 corporation:1 every:2 voting:4 uk:1 control:3 grant:1 limit:3 ap:2 approximately:1 might:3 twice:2 co:1 limited:1 acknowledgment:1 sq:1 empirical:2 cannot:1 scheduling:1 risk:1 applying:1 impossible:1 optimize:4 attention:1 regardless:1 starting:1 q:1 estimator:1 deriving:1 financial:1 classic:1 autonomous:1 feel:1 slas:1 target:1 play:2 heavily:1 user:10 nippon:1 agreement:1 solved:1 enters:2 calculate:1 removed:3 ran:3 weigh:1 environment:5 broken:1 nash:9 reward:45 littman:1 dynamic:1 depend:1 solving:1 carrying:2 singh:5 derivation:1 fast:1 describe:1 artificial:3 quite:1 whose:1 spend:3 valued:1 solve:2 say:1 think:1 final:3 sequence:2 interaction:1 if2:1 combining:1 loop:3 achieve:1 adapts:1 description:1 convergence:1 p:11 produce:5 renormalizing:1 leave:1 converges:1 help:1 derive:2 illustrate:1 paying:1 strong:1 involves:1 trading:1 come:1 tommi:1 direction:3 correct:2 attribute:1 stochastic:2 exploration:1 opinion:2 fix:1 mathematically:1 lying:1 around:2 normal:1 cbcl:1 equilibrium:11 a2:1 omitted:1 proc:3 travel:1 currently:2 circling:1 weighted:1 mit:2 clearly:1 rather:1 og:1 jaakkola:1 improvement:1 consistently:2 equipment:1 sense:1 hidden:1 going:1 overall:1 among:1 constrained:1 initialize:1 equal:2 construct:2 once:1 having:2 unload:6 simplex:1 report:1 employ:1 few:1 randomly:2 simultaneously:1 divergence:2 elevator:1 intended:1 suit:1 karlsson:1 mixture:2 arrives:1 wellman:1 yielding:1 light:1 edge:3 experience:5 old:3 desired:2 theoretical:1 industry:1 steer:1 leslie:1 kaelbling:1 uniform:2 combined:1 international:2 contract:2 off:1 telegraph:1 picking:1 michael:1 quickly:1 thesis:1 reflect:1 settled:1 central:1 watching:2 worse:1 creating:1 cognitive:1 return:15 includes:1 inc:1 satisfy:1 depends:1 try:1 lab:2 observing:1 start:2 rochester:1 minimize:1 square:3 bright:2 loaded:1 who:2 conceive:1 modelled:1 history:2 converged:1 submitted:1 whenever:3 definition:1 against:3 dm:1 associated:1 con:1 gain:1 massachusetts:1 knowledge:1 if1:1 actually:1 back:1 noo014:1 follow:3 response:6 formulation:1 done:1 xa:1 just:4 until:1 relearn:1 working:1 receives:2 replacing:1 reweighting:1 resident:2 grows:1 requiring:1 true:1 entering:2 jargon:1 game:11 stone:2 override:1 charles:1 volume:1 extend:3 interpretation:1 eft:1 cambridge:1 imposing:1 ai:2 enter:1 portfolio:1 add:1 recent:1 apart:1 onr:1 continue:1 captured:1 additional:3 impose:2 managing:1 attacking:1 maximize:3 signal:6 ii:4 multiple:15 mix:1 corporate:1 academic:1 a1:2 metric:1 iteration:4 bimodal:1 affecting:1 want:4 addition:1 whereas:1 diagram:2 source:81 limn:1 i7f:1 meaningless:1 pass:2 comment:1 near:1 noting:1 presence:1 door:4 vital:1 ideal:3 iterate:1 affect:2 serviced:1 fit:2 opposite:1 incomparable:1 idea:1 tradeoff:1 absent:1 whether:3 ltd:1 effort:2 peter:1 cause:1 repeatedly:1 action:14 prefers:2 iterating:1 transforms:1 amount:2 repeating:1 ten:1 daimler:1 reduced:1 nsf:1 per:1 track:2 cargo:6 discrete:2 incentive:1 waiting:1 putting:2 four:1 changing:1 prevent:1 neither:2 kept:1 halfway:1 sum:6 run:2 uncertainty:2 decide:1 fjs:1 home:1 oscillation:1 
delivery:6 scaling:2 comparable:4 bit:2 followed:1 constraint:7 constrain:1 isbell:2 anyone:1 expanded:1 department:1 across:1 describes:1 b:3 happens:1 invariant:1 taken:1 equation:7 agree:2 f3s:8 discus:1 fail:1 know:5 letting:2 end:2 available:2 apply:2 observe:1 kodak:1 assumes:1 top:1 include:2 entertainment:3 remaining:1 dial:1 giving:1 move:7 strategy:1 parametric:1 traditional:2 gradient:4 separate:1 thank:1 simulated:1 considers:2 relationship:1 psx:1 providing:1 minimizing:1 setup:1 unfortunately:2 difficult:1 negative:5 policy:38 allowing:1 upper:1 av:1 observation:11 disagree:1 markov:1 alog:1 descent:1 t:4 situation:1 looking:1 ever:1 mansour:1 kl:2 learned:2 nip:1 able:2 bar:1 usually:1 program:1 reverts:1 memory:1 belief:1 power:1 natural:2 difficulty:2 fromp:1 scheme:4 technology:1 started:1 isn:1 prior:1 nice:1 taste:1 removal:1 epoch:6 contributing:1 multiagent:2 fictitious:1 digital:1 agent:42 sufficient:1 throwaway:1 playing:1 balancing:2 translation:1 changed:2 repeat:1 placed:1 keeping:1 last:1 side:3 allow:3 institute:2 taking:1 feedback:1 overcome:2 calculated:1 world:2 ignores:1 insuring:1 reinforcement:8 far:2 social:1 observable:1 ignore:1 preferred:1 keep:3 satinder:1 receiver:1 assumed:1 continuous:1 cxl:1 triplet:1 scratch:1 learn:4 transfer:1 symmetry:1 boxed:3 complex:1 constructing:2 domain:1 electric:1 linearly:1 allowed:1 biggest:1 wish:1 explicit:1 weighting:2 minute:1 load:7 restricting:1 prevail:1 importance:1 phd:1 nec:1 television:1 aren:1 eastman:1 insures:1 partially:1 scalar:3 monotonic:1 ma:1 conditional:1 goal:9 towards:3 room:2 replace:2 owen:1 change:8 telephone:1 except:2 reducing:2 uniformly:1 kearns:4 total:2 called:1 pas:1 player:4 vote:30 siemens:1 select:1 people:4 support:1 arises:1 reactive:1 outgoing:1 shelton:2
909
1,832
Generalized Belief Propagation

Jonathan S. Yedidia
MERL
201 Broadway
Cambridge, MA 02139
Phone: 617-621-7544
[email protected]

William T. Freeman
MERL
201 Broadway
Cambridge, MA 02139
Phone: 617-621-7527
[email protected]

Yair Weiss
Computer Science Division
UC Berkeley, 485 Soda Hall
Berkeley, CA 94720-1776
Phone: 510-642-5029
[email protected]

Abstract

Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs. We show that BP can only converge to a stationary point of an approximate free energy, known as the Bethe free energy in statistical physics. This result characterizes BP fixed-points and makes connections with variational approaches to approximate inference. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe's approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe's approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message-passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP.

1 Introduction

Local "belief propagation" (BP) algorithms such as those introduced by Pearl are guaranteed to converge to the correct marginal posterior probabilities in tree-like graphical models. For general networks with loops, the situation is much less clear. On the one hand, a number of researchers have empirically demonstrated good performance for BP algorithms applied to networks with loops. One dramatic case is the near Shannon-limit performance of "turbo codes," whose decoding algorithm is equivalent to BP on a loopy network [2, 6]. For some problems in computer vision involving networks with loops, BP has also been shown to be accurate and to converge very quickly [2, 1, 7]. On the other hand, for other networks with loops, BP may give poor results or fail to converge [7]. For a general graph, little has been understood about what approximation BP represents, and how it might be improved. This paper's goal is to provide that understanding and to introduce a set of new algorithms resulting from that understanding.

We show that BP is the first in a progression of local message-passing algorithms, each giving equivalent results to a corresponding approximation from statistical physics known as the "Kikuchi" approximation to the Gibbs free energy. These algorithms have the attractive property of being user-adjustable: by paying some additional computational cost, one can obtain considerable improvement in the accuracy of one's approximation, and can sometimes obtain a convergent message-passing algorithm when ordinary BP does not converge.

2 Belief propagation fixed-points are zero gradient points of the Bethe free energy

We assume that we are given an undirected graphical model of N nodes with pairwise potentials (a Markov network). Such a model is very general, as essentially any graphical model can be converted into this form.
The state of each node i is denoted by x_i, and the joint probability distribution function is given by

P(x_1, x_2, ..., x_N) = (1/Z) Π_{ij} ψ_ij(x_i, x_j) Π_i ψ_i(x_i),    (1)

where ψ_i(x_i) is the local "evidence" for node i, ψ_ij(x_i, x_j) is the compatibility matrix between nodes i and j, and Z is a normalization constant. Note that we are subsuming any fixed evidence nodes into our definition of ψ_i(x_i). The standard BP update rules are:

m_ij(x_j) ← α Σ_{x_i} ψ_ij(x_i, x_j) ψ_i(x_i) Π_{k∈N(i)\j} m_ki(x_i)    (2)

b_i(x_i) ← α ψ_i(x_i) Π_{k∈N(i)} m_ki(x_i)    (3)

where α denotes a normalization constant and N(i)\j means all nodes neighboring node i, except j. Here m_ij refers to the message that node i sends to node j, and b_i is the belief (approximate marginal posterior probability) at node i, obtained by multiplying all incoming messages to that node by the local evidence. Similarly, we can define the belief b_ij(x_i, x_j) at the pair of nodes (x_i, x_j) as the product of the local potentials and all messages incoming to the pair of nodes: b_ij(x_i, x_j) = α φ_ij(x_i, x_j) Π_{k∈N(i)\j} m_ki(x_i) Π_{l∈N(j)\i} m_lj(x_j), where φ_ij(x_i, x_j) = ψ_ij(x_i, x_j) ψ_i(x_i) ψ_j(x_j).

Claim 1: Let {m_ij} be a set of BP messages and let {b_ij, b_i} be the beliefs calculated from those messages. Then the beliefs are fixed-points of the BP algorithm if and only if they are zero gradient points of the Bethe free energy,

F_β = Σ_{ij} Σ_{x_i, x_j} b_ij(x_i, x_j) [ ln b_ij(x_i, x_j) − ln φ_ij(x_i, x_j) ] − Σ_i (q_i − 1) Σ_{x_i} b_i(x_i) [ ln b_i(x_i) − ln ψ_i(x_i) ],    (4)

subject to the normalization and marginalization constraints Σ_{x_i} b_i(x_i) = 1 and Σ_{x_i} b_ij(x_i, x_j) = b_j(x_j). (q_i is the number of neighbors of node i.)

To prove this claim we add Lagrange multipliers to form a Lagrangian L: λ_ij(x_j) is the multiplier corresponding to the constraint that b_ij(x_i, x_j) marginalizes down to b_j(x_j), and γ_ij, γ_i are multipliers corresponding to the normalization constraints. The equation ∂L/∂b_ij(x_i, x_j) = 0 gives: ln b_ij(x_i, x_j) = ln φ_ij(x_i, x_j) + λ_ij(x_j) + λ_ji(x_i) + γ_ij − 1. The equation ∂L/∂b_i(x_i) = 0 gives: (q_i − 1)(ln b_i(x_i) + 1) = (q_i − 1) ln ψ_i(x_i) + Σ_{j∈N(i)} λ_ji(x_i) + γ_i. Setting λ_ij(x_j) = ln Π_{k∈N(j)\i} m_kj(x_j) and using the marginalization constraints, we find that the stationary conditions on the Lagrangian are equivalent to the BP fixed-point conditions. (Empirically, we find that stable BP fixed-points correspond to local minima of the Bethe free energy, rather than maxima or saddle points.)

2.1 Implications

The fact that F_β({b_ij, b_i}) is bounded below implies that the BP equations always possess a fixed-point (obtained at the global minimum of F). To our knowledge, this is the first proof of existence of fixed-points for a general graph with arbitrary potentials (see [9] for a complicated proof for a special case). The free energy formulation clarifies the relationship to variational approaches which also minimize an approximate free energy [3]. For example, the mean field approximation finds a set of {b_i} that minimize

F_MF({b_i}) = − Σ_{ij} Σ_{x_i, x_j} b_i(x_i) b_j(x_j) ln ψ_ij(x_i, x_j) + Σ_i Σ_{x_i} b_i(x_i) [ ln b_i(x_i) − ln ψ_i(x_i) ],    (5)

subject to the constraint Σ_{x_i} b_i(x_i) = 1. The BP free energy includes first-order terms b_i(x_i) as well as second-order terms b_ij(x_i, x_j), while the mean field free energy uses only the first-order ones. It is easy to show that the BP free energy is exact for trees while the mean field one is not. Furthermore, the optimization methods are different: typically F_MF is minimized directly in the primal variables {b_i}, while F_β is minimized using the messages, which are a combination of the dual variables {λ_ij(x_j)}.
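For concreteness, here is a small Python sketch of the ordinary BP updates (2)-(3) on a pairwise Markov network (my own illustration, not code from the paper; `psi_i` and `psi_ij` are assumed inputs holding the potentials):

```python
import numpy as np

def loopy_bp(psi_i, psi_ij, n_iters=50):
    """Ordinary BP, eqs. (2)-(3). psi_i[i] is a vector over states of
    node i; psi_ij[(i, j)] is the compatibility matrix (given once per edge)."""
    nbrs = {i: set() for i in psi_i}
    for (i, j) in psi_ij:
        nbrs[i].add(j); nbrs[j].add(i)

    def pot(i, j):  # psi_ij(x_i, x_j) with rows indexed by x_i
        return psi_ij[(i, j)] if (i, j) in psi_ij else psi_ij[(j, i)].T

    # Messages m[(i, j)]: vector over states of j, initialized uniform.
    m = {(i, j): np.ones(len(psi_i[j])) / len(psi_i[j])
         for i in nbrs for j in nbrs[i]}

    for _ in range(n_iters):
        new = {}
        for (i, j) in m:
            prod = psi_i[i].copy()
            for k in nbrs[i] - {j}:
                prod *= m[(k, i)]
            msg = pot(i, j).T @ prod           # sum over x_i (eq. 2)
            new[(i, j)] = msg / msg.sum()
        m = new

    beliefs = {}
    for i in nbrs:
        b = psi_i[i].copy()
        for k in nbrs[i]:
            b *= m[(k, i)]
        beliefs[i] = b / b.sum()               # eq. (3)
    return beliefs
```

On a tree this converges to the exact marginals; on a loopy graph its fixed points are exactly the stationary points of F_β in Claim 1.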
Kabashima and Saad [4] have previously pointed out the correspondence between BP and the Bethe approximation (expressed using the TAP formalism) for some specific graphical models with random disorder. Our proof answers in the affirmative their question about whether there is a "deep general link between the two methods" [4].

3 Kikuchi Approximations to the Gibbs Free Energy

The Bethe approximation, for which the energy and entropy are approximated by terms that involve at most pairs of nodes, is the simplest version of the Kikuchi "cluster variational method" [5, 10]. In a general Kikuchi approximation, the free energy is approximated as a sum of the free energies of basic clusters of nodes, minus the free energy of over-counted cluster intersections, minus the free energy of the over-counted intersections of intersections, and so on.

Let R be a set of regions that include some chosen basic clusters of nodes, their intersections, the intersections of the intersections, and so on. The choice of basic clusters determines the Kikuchi approximation; for the Bethe approximation, the basic clusters consist of all linked pairs of nodes. Let x_r be the state of the nodes in region r and b_r(x_r) be the "belief" in x_r. We define the energy of a region by E_r(x_r) ≡ −ln Π_{ij} ψ_ij(x_i, x_j) − ln Π_i ψ_i(x_i) ≡ −ln ψ_r(x_r), where the products are over all interactions contained within the region r. For models with higher than pairwise interactions, the region energy is generalized to include those interactions as well. The Kikuchi free energy is

F_K = Σ_{r∈R} c_r ( Σ_{x_r} b_r(x_r) E_r(x_r) + Σ_{x_r} b_r(x_r) log b_r(x_r) ),    (6)

where c_r is the over-counting number of region r, defined by c_r = 1 − Σ_{s∈super(r)} c_s, where super(r) is the set of all super-regions of r. For the largest regions in R, c_r = 1. The belief b_r(x_r) in region r has several constraints: it must sum to one and be consistent with the beliefs in regions which intersect with r. In general, increasing the size of the basic clusters improves the approximation one obtains by minimizing the Kikuchi free energy.

4 Generalized belief propagation (GBP)

Minimizing the Kikuchi free energy subject to the constraints on the beliefs is not simple. Nearly all applications of the Kikuchi approximation in the physics literature exploit symmetries in the underlying physical system and the choice of clusters to reduce the number of equations that need to be solved from O(N) to O(1). But just as the Bethe free energy can be minimized by the BP algorithm, we introduce a class of analogous generalized belief propagation (GBP) algorithms that minimize an arbitrary Kikuchi free energy. These algorithms represent an advance in physics, in that they open the way to the exploitation of Kikuchi approximations for inhomogeneous physical systems.

There are in fact many possible GBP algorithms which all correspond to the same Kikuchi approximation. We present a "canonical" GBP algorithm which has the nice property of reducing to ordinary BP at the Bethe level. We introduce messages m_rs(x_s) between all regions r and their "direct sub-regions" s. (Define the set subd(r) of direct sub-regions of r to be those regions that are sub-regions of r but have no super-regions that are also sub-regions of r, and similarly for the set superd(r) of "direct super-regions.") It is helpful to think of this as a message from those nodes in r but not in s (which we denote by r\s) to the nodes in s.
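As a concrete illustration of this region bookkeeping (my own sketch, with regions represented simply as frozensets of node indices), the over-counting numbers c_r can be computed by recursing from the largest regions down:

```python
from itertools import combinations

def overcounting_numbers(regions):
    """Compute c_r = 1 - sum of c_s over all super-regions s of r.
    `regions` is an iterable of frozensets of nodes: the basic clusters
    plus all their intersections, intersections of intersections, etc."""
    regions = sorted(set(regions), key=len, reverse=True)  # largest first
    c = {}
    for r in regions:
        supers = [s for s in regions if r < s]             # strict supersets
        c[r] = 1 - sum(c[s] for s in supers)
    return c

# Bethe-level example: basic clusters are the edges of a 3-node chain;
# their only intersection is the shared node {1}.
edges = [frozenset({0, 1}), frozenset({1, 2})]
nodes = [a & b for a, b in combinations(edges, 2) if a & b]
print(overcounting_numbers(edges + nodes))
# Edges get c = 1; node {1} gets c = 1 - 2 = -1, i.e. -(q_i - 1),
# recovering the single-node coefficient of the Bethe free energy (4).
```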
Intuitively, we want messages to propagate information that lies outside of a region into it. Thus, for a given region r, we want the belief b_r(x_r) to depend on exactly those messages m_{r's'} that start outside of the region r and go into the region r. We define this set of messages M(r) to be those messages m_{r's'}(x_{s'}) such that region r'\s' has no nodes in common with region r, and such that region s' is a sub-region of r or the same as region r. We also define the set M(r, s) of messages to be all those messages that start in a sub-region of r and also belong to M(s), and we define M(r)\M(s) to be those messages that are in M(r) but not in M(s). The canonical generalized belief propagation update rules are:

m_rs ← α [ Σ_{x_{r\s}} ψ_{r\s}(x_{r\s}) Π_{m_{r's'} ∈ M(r)\M(s)} m_{r's'} ] / Π_{m_{r''s''} ∈ M(r,s)} m_{r''s''}    (7)

b_r ← α ψ_r(x_r) Π_{m_{r's'} ∈ M(r)} m_{r's'}    (8)

where for brevity we have suppressed the functional dependences of the beliefs and messages. The messages are updated starting with the messages into the smallest regions first. One can then use the newly computed messages in the product over M(r, s) of the message-update rule. Empirically, this helps convergence.

Claim 2: Let {m_rs(x_s)} be a set of canonical GBP messages and let {b_r(x_r)} be the beliefs calculated from those messages. Then the beliefs are fixed-points of the canonical GBP algorithm if and only if they are zero gradient points of the constrained Kikuchi free energy F_K.

We prove this claim by adding Lagrange multipliers: γ_r to enforce the normalization of b_r and λ_rs(x_s) to enforce the consistency of each region r with all of its direct sub-regions s. This set of consistency constraints is actually more than sufficient, but there is no harm in adding extra constraints. We then rotate to another set of Lagrange multipliers μ_rs(x_s) of equal dimensionality which enforce a linear combination of the original constraints: μ_rs(x_s) enforces all those constraints involving marginalizations by all direct super-regions r' of s into s, except that of region r itself. The rotation matrix is in a block form which can be guaranteed to be full rank. We can then show that the terms involving the μ_rs(x_s) constraints can be written in the form μ_rs(x_s) Σ_{r'∈R(μ_rs)} c_{r'} Σ_{x_{r'}} b_{r'}(x_{r'}), where R(μ_rs) is the set of all regions which receive the message μ_rs in the belief update rule of the canonical algorithm. We then re-arrange the sum over all μ's into a sum over all regions, which has the form Σ_{r∈R} c_r Σ_{x_r} b_r(x_r) Σ_{μ_rs∈M(r)} μ_rs(x_s). (M(r) is here a set of μ_{r's'} in one-to-one correspondence with the m_{r's'} in M(r).) Finally, we differentiate the Kikuchi free energy with respect to b_r(x_r), and identify μ_rs(x_s) = ln m_rs(x_s) to obtain the canonical GBP belief update rules, Eq. 8. Using the belief update rules in the marginalization constraints, we obtain the canonical GBP message update rules, Eq. 7.

It is clear from this proof outline that other GBP message-passing algorithms which are equivalent to the Kikuchi approximation exist. If one writes any set of constraints which are sufficient to insure the consistency of all Kikuchi regions, one can associate the exponentiated Lagrange multipliers of those constraints with a set of messages. The GBP algorithms we have described solve exactly those networks which have the topology of a tree of basic clusters. This is reminiscent of Pearl's method of clustering [8], wherein one groups clusters of nodes into "super-nodes" and then applies a belief propagation method to the equivalent super-node lattice.
We can show that the clustering method, using Kikuchi clusters as super-nodes, also gives results equivalent to the Kikuchi approximation for those lattices and cluster choices where there are no intersections between the intersections of the Kikuchi basic clusters. For those networks and cluster choices which do not obey this condition (a simple example that we discuss below is the square lattice with clusters that consist of all square plaquettes of four nodes), Pearl's clustering method must be modified by adding additional update conditions to agree with GBP algorithms and the Kikuchi approximation.

5 Application to Specific Lattices

We illustrate the canonical GBP algorithm for the Kikuchi approximation of overlapping 4-node clusters on a square lattice of nodes. Figure 1 (a), (b), (c) illustrates the beliefs at a node, a pair of nodes, and a cluster of 4 nodes, in terms of messages propagated in the network. Vectors are the single-index messages also used in ordinary BP. Vectors with line segments indicate the double-indexed messages arising from the Kikuchi approximation used here. These can be thought of as correction terms accounting for correlations between messages that ordinary BP treats as independent. (For comparison, Fig. 1 (d), (e), (f) shows the corresponding marginal computations for the triangular lattice with all triangles chosen as the basic Kikuchi clusters.) We find the message update rules by equating marginalizations of Fig. 1 (b) and (c) with the beliefs in Fig. 1 (a) and (b), respectively. Figure 2 (a) and (b) show (graphically) the resulting fixed-point equations. The update rule (a) is like that for ordinary BP, with the addition of two double-indexed messages. The update rule for the double-indexed messages involves division by the newly computed single-indexed messages. Fixed points of these message update equations give beliefs that are stationary points (empirically minima) of the corresponding Kikuchi approximation to the free energy.

Figure 1: Marginal probabilities in terms of the node links and GBP messages, for (a) a node, (b) a line, (c) a square cluster, using a Kikuchi approximation with 4-node clusters on a square lattice. E.g., (b) depicts (a special case of Eq. 8, written here using node labels): b_ab(x_a, x_b) = α ψ_ab(x_a, x_b) ψ_a(x_a) ψ_b(x_b) times the product of the single- and double-indexed messages M coming into nodes a and b, with super- and subscripts indicating which nodes each message goes from and to. (d), (e), (f): Marginal probabilities for the triangular lattice with 3-node Kikuchi clusters.

Figure 2: Graphical depiction of the message update equations (Eq. 7; marginalize over the nodes shown unfilled) for GBP using overlapping 4-node Kikuchi clusters. (a) Update equation for the single-index messages: M^b_a(x_a) = α Σ_{x_b} ψ_b(x_b) ψ_ab(x_a, x_b) times a product of incoming single- and double-indexed messages. (b) Update equation for the double-indexed messages (which involves a division by the newly computed single-index messages).

6 Experimental Results

Ordinary BP is expected to perform relatively poorly for networks with many tight loops, conflicting interactions, and weak evidence. We constructed such a network, known in the physics literature as the square lattice Ising spin glass in a random magnetic field. The nodes are on a square lattice, with nearest-neighbor nodes connected by a compatibility matrix of the form

ψ_ij = [ exp(J_ij)   exp(−J_ij) ; exp(−J_ij)   exp(J_ij) ]

and local evidence vectors of the form ψ_i = (exp(h_i); exp(−h_i)).
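A sketch of how such a network might be instantiated (illustrative Python of my own; the Gaussian parameters follow the description in the next paragraph):

```python
import numpy as np

def make_spin_glass(n, J_std=1.0, h_std=0.1, seed=0):
    """Random-field Ising spin glass on an n-by-n toroidal lattice.
    Returns node potentials psi_i and edge potentials psi_ij."""
    rng = np.random.default_rng(seed)
    psi_i, psi_ij = {}, {}
    for r in range(n):
        for c in range(n):
            h = rng.normal(0.0, h_std)
            psi_i[(r, c)] = np.array([np.exp(h), np.exp(-h)])
            # Couple to the right and down neighbors (wrapping on the torus).
            for nb in [(r, (c + 1) % n), ((r + 1) % n, c)]:
                J = rng.normal(0.0, J_std)
                psi_ij[((r, c), nb)] = np.array([[np.exp(J), np.exp(-J)],
                                                 [np.exp(-J), np.exp(J)]])
    return psi_i, psi_ij

psi_i, psi_ij = make_spin_glass(10)   # the n = 10 case discussed below
```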
To instantiate a particular network, the J_ij and h_i parameters are chosen randomly and independently from zero-mean Gaussian probability distributions with standard deviations J and h respectively. The following results are for n by n lattices with toroidal boundary conditions and with J = 1 and h = 0.1. This model is designed to show off the weaknesses of ordinary BP, which performs well for many other networks. Ordinary BP is a special case of canonical GBP, so we exploited this to use the same general-purpose GBP code for both ordinary BP and canonical GBP using overlapping square four-node clusters, thus making computational cost comparisons reasonable. We started with randomized messages and only stepped half-way towards the computed values of the messages at each iteration in order to help convergence. We found that canonical GBP took about twice as long as ordinary BP per iteration, but would typically reach a given level of convergence in many fewer iterations. In fact, for the majority of the dozens of samples that we looked at, BP did not converge at all, while canonical GBP always converged for this model, and always to accurate answers. (We found that for the zero-field 3-dimensional spin glass with toroidal boundary conditions, which is an even more difficult model, canonical GBP with 2x2x2 cubic clusters would also fail to converge.) For n = 20 or larger, it was difficult to make comparisons with any other algorithm, because ordinary BP did not converge and Monte Carlo simulations suffered from extremely slow equilibration. However, generalized belief propagation converged reasonably rapidly to plausible-looking beliefs. For small n, we could compare with exact results by using Pearl's clustering method on a chain of n by 1 super-nodes. To give a qualitative feel for the results, we compare ordinary BP, canonical GBP, and the exact results for an n = 10 lattice where ordinary BP did converge. Listing the values of the one-node marginal probabilities in one of the rows, we find that ordinary BP gives (.0043807, .74502, .32866, .62190, .37745, .41243, .57842, .74555, .85315, .99632), canonical GBP gives (.40255, .54115, .49184, .54232, .44812, .48014, .51501, .57693, .57710, .59757), and the exact results were (.40131, .54038, .48923, .54506, .44537, .47856, .51686, .58108, .57791, .59881).

References
[1] W. T. Freeman and E. Pasztor. Learning low-level vision. In 7th Intl. Conf. Computer Vision, pages 1182-1189, 1999.
[2] B. J. Frey. Graphical Models for Machine Learning and Digital Communication. MIT Press, 1998.
[3] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. In M. Jordan, editor, Learning in Graphical Models. MIT Press, 1998.
[4] Y. Kabashima and D. Saad. Belief propagation vs. TAP for decoding corrupted messages. Euro. Phys. Lett., 44:668, 1998.
[5] R. Kikuchi. Phys. Rev., 81:988, 1951.
[6] R. McEliece, D. MacKay, and J. Cheng. Turbo decoding as an instance of Pearl's 'belief propagation' algorithm. IEEE J. on Sel. Areas in Comm., 16(2):140-152, 1998.
[7] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: an empirical study. In Proc. Uncertainty in AI, 1999.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[9] T. J. Richardson. The geometry of turbo-decoding dynamics. IEEE Trans. Info. Theory, 46(1):9-23, Jan. 2000.
[10] Special issue on Kikuchi methods. Progr. Theor. Phys. Suppl., vol. 115, 1994.
Regularized Winnow Methods

Tong Zhang
Mathematical Sciences Department
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]

Abstract
In theory, the Winnow multiplicative update has certain advantages over the Perceptron additive update when there are many irrelevant attributes. Recently, there has been much effort on enhancing the Perceptron algorithm by using regularization, leading to a class of linear classification methods called support vector machines. Similarly, it is also possible to apply the regularization idea to the Winnow algorithm, which gives methods we call regularized Winnows. We show that the resulting methods compare with the basic Winnows in a similar way that a support vector machine compares with the Perceptron. We investigate algorithmic issues and learning properties of the derived methods. Some experimental results will also be provided to illustrate different methods.

1 Introduction

In this paper, we consider the binary classification problem that is to determine a label y ∈ {−1, 1} associated with an input vector x. A useful method for solving this problem is through linear discriminant functions, which consist of linear combinations of the components of the input variable. Specifically, we seek a weight vector w and a threshold θ such that w^T x < θ if its label y = −1 and w^T x ≥ θ if its label y = 1. Given a training set of labeled data (x^1, y^1), ..., (x^n, y^n), a number of approaches to finding linear discriminant functions have been advanced over the years. In this paper, we are especially interested in the following two families of online algorithms: Perceptron [12] and Winnow [10]. These algorithms typically fix the threshold θ and update the weight vector w by going through the training data repeatedly. They are mistake driven in the sense that the weight vector is updated only when the algorithm is not able to correctly classify an example. For the Perceptron algorithm, the update rule is additive: if the linear discriminant function misclassifies an input training vector x^i with true label y^i, then we update each component j of the weight vector w as: w_j ← w_j + η x_j^i y^i, where η > 0 is a parameter called the learning rate. The initial weight vector can be taken as w = 0. For the (unnormalized) Winnow algorithm (with positive weights), the update rule is multiplicative: if the linear discriminant function misclassifies an input training vector x^i with true label y^i, then we update each component j of the weight vector w as: w_j ← w_j exp(η x_j^i y^i), where η > 0 is the learning rate parameter, and the initial weight vector can be taken as w_j = μ_j > 0. The Winnow algorithm belongs to a general family of algorithms called exponentiated gradient descent with unnormalized weights (EGU) [9]. There can be several variants. One is called balanced Winnow, which is equivalent to an embedding of the input space into a higher dimensional space as: x̃ = [x, −x]. This modification allows the positive-weight Winnow algorithm for the augmented input x̃ to have the effect of both positive and negative weights for the original input x. Another modification is to normalize the one-norm of the weight w so that Σ_j w_j = W, leading to the normalized Winnow. Theoretical properties of multiplicative update algorithms have been extensively studied since the introduction of Winnow.
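For concreteness, the two online rules can be put side by side in a few lines of code (a hedged sketch with the threshold fixed at zero and names of our choosing, not the paper's code):

```python
import numpy as np

def train_online(X, y, rule="winnow", eta=0.1, mu=1.0, epochs=10):
    """Mistake-driven training with threshold theta = 0.
    rule="perceptron": additive update   w_j <- w_j + eta * x_j * y
    rule="winnow":     multiplicative    w_j <- w_j * exp(eta * x_j * y)
    """
    n, d = X.shape
    w = np.zeros(d) if rule == "perceptron" else np.full(d, mu)
    for _ in range(epochs):
        for i in range(n):
            if np.sign(X[i] @ w) != y[i]:          # update only on mistakes
                if rule == "perceptron":
                    w = w + eta * y[i] * X[i]
                else:
                    w = w * np.exp(eta * y[i] * X[i])
    return w
```

Balanced Winnow corresponds to calling the same routine on the augmented input [x, −x] described above.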
For linearly separable binary-classification problems, both Perceptron and Winnow are able to find a weight that separates the in-class vectors from the out-of-class vectors in the training set within a finite number of steps. However, the number of mistakes (updates) before finding a separating hyperplane can be very different [10, 9]. This difference suggests that the two algorithms serve different purposes. For linearly separable problems, Vapnik proposed a method that optimizes the Perceptron mistake bound, which he calls the "optimal hyperplane" (see [15]). The same method has also appeared in the statistical mechanical learning literature (see [1, 8, 11]), and is referred to as achieving optimal stability. For non-separable problems, a generalization of the optimal hyperplane was proposed in [2] by introducing a "soft-margin" loss term. In this paper, we derive regularized Winnow methods by constructing "optimal hyperplanes" that minimize the Winnow mistake bound (rather than the Perceptron mistake bound as in an SVM). We then derive a "soft-margin" version of the algorithms for non-separable problems. For simplicity, we shall assume θ = 0 in this paper. The restriction does not cause problems in practice since one can always append a constant feature to the input data x, which offsets the effect of θ. The formulation with θ = 0 can be more amenable to theoretical analysis. For an SVM, a fixed threshold also allows a simple Perceptron-like numerical algorithm as described in chapter 12 of [13], and in [7]. Although more complex, a non-fixed θ does not introduce any fundamental difficulty. The paper is organized as follows. In Section 2, we review mistake bounds for Perceptron and Winnow. Based on the bounds, we show how regularized Winnow methods can be derived by mimicking the optimal stability method (and SVM) for Perceptron. We also discuss the relationship of the newly derived methods with related methods. In Section 3, we investigate learning aspects of the newly proposed methods in a context similar to some known SVM results. An example will be given in Section 4 to illustrate these methods.

2 SVM and regularized Winnow

2.1 From Perceptron to SVM

We review the derivation of SVM from Perceptron, which serves as a reference for our derivation of regularized Winnow. Consider linearly separable problems and let w be a weight that separates the in-class vectors from the out-of-class vectors in the training set. It is well known that the Perceptron algorithm computes a weight that correctly classifies all training data after at most M updates (a proof can be found in [15]), where

M = ||w||_2^2 max_i ||x^i||_2^2 / (min_i w^T x^i y^i)^2.

The weight vector w* that minimizes the right-hand side of the bound is called the optimal hyperplane in [15] or the optimal stability hyperplane in [1, 8, 11]. This optimal hyperplane is the solution to the following quadratic programming problem:

min_w (1/2) w^T w   s.t.  w^T x^i y^i ≥ 1 for i = 1, ..., n.

For non-separable problems, we introduce a slack variable ξ_i for each data point (x^i, y^i) (i = 1, ..., n), and compute a weight vector w*(C) that solves

min_w (1/2) w^T w + C Σ_i ξ_i   s.t.  w^T x^i y^i ≥ 1 − ξ_i,  ξ_i ≥ 0,

where C > 0 is a given parameter [15]. It is known that when C → ∞, ξ_i → 0 and w*(C) converges to the weight vector w* of the optimal hyperplane. We can write down the KKT condition for the above optimization problem, and let α_i be the Lagrangian multiplier for w^T x^i y^i ≥ 1 − ξ_i. After elimination of w and ξ, we obtain the following dual optimization problem of the dual variable α (see [15], chapter 10 for details):

max_α Σ_i α_i − (1/2)(Σ_i α_i x^i y^i)^2   s.t.
α_i ∈ [0, C] for i = 1, ..., n.

The weight w*(C) is given by w*(C) = Σ_i α_i x^i y^i at the optimal solution. To solve this problem, one can use the following modification of the Perceptron update algorithm (see [7] and chapter 12 of [13]): at each data point (x^i, y^i), we fix all α_k with k ≠ i, and update α_i to maximize the dual objective functional, which gives:

α_i ← max(min(C, α_i + η(1 − w^T x^i y^i)), 0),   where w = Σ_i α_i x^i y^i.

The learning rate η can be set as η = 1/(x^{iT} x^i), which corresponds to the exact maximization of the dual objective functional.

2.2 From Winnow to regularized Winnow

Similar to Perceptron, if a problem is linearly separable with a positive weight w, then Winnow computes a solution that correctly classifies all training data after at most M updates with

M = 2W (Σ_j w_j ln(w_j ||μ||_1 / (μ_j ||w||_1))) max_i ||x^i||_∞^2 / δ^2,

where 0 < δ ≤ min_i w^T x^i y^i, W ≥ ||w||_1, and the learning rate is η = δ/(W max_i ||x^i||_∞^2). The proof of this specific bound can be found in [16], which employed techniques in [5] (also see [10] for earlier results). Note that unlike the Perceptron mistake bound, the above bound is learning-rate dependent. It also depends on the prior μ_j > 0, which is the initial value of w in the basic Winnows. For problems separable with positive weights, to obtain an optimal stability hyperplane associated with the Winnow mistake bound, we consider fixing ||w||_1 such that ||w||_1 = W > 0. It is then natural to define the optimal hyperplane as the (positive weight) solution to the following convex programming problem:

min_w Σ_j w_j ln(w_j / (e μ_j))   s.t.  w^T x^i y^i ≥ 1 for i = 1, ..., n.

We use e to denote the base of the natural logarithm. Similar to the derivation of SVM, for non-separable problems, we introduce a slack variable ξ_i for each data point (x^i, y^i), and compute a weight vector w*(C) that solves

min_w Σ_j w_j ln(w_j / (e μ_j)) + C Σ_i ξ_i   s.t.  w^T x^i y^i ≥ 1 − ξ_i,  ξ_i ≥ 0,

where C > 0 is a given parameter. Note that to derive the above methods, we have assumed that ||w||_1 is fixed at ||w||_1 = ||μ||_1 = W, where W is a given parameter. This implies that the derived methods are in fact regularized versions of the normalized Winnow. One can also ignore this normalization constraint so that the derived methods correspond to regularized versions of the unnormalized Winnow. The entropy regularization condition is natural to all exponentiated gradient methods [9], as can be observed from the theoretical results in [9]. The regularized normalized Winnow is closely related to the maximum entropy discrimination [6] (the two methods are almost identical for linearly separable problems). However, in the framework of maximum entropy discrimination, the Winnow connection is non-obvious. As we shall show later, it is possible to derive interesting learning bounds for our methods that are connected with the Winnow mistake bound. Similar to the SVM formulation, the non-separable formulation of regularized Winnow approaches the separable formulation as C → ∞. We shall thus only focus on the non-separable case below. Also similar to an SVM, we can write down the KKT condition and let α_i be the Lagrangian multiplier for w^T x^i y^i ≥ 1 − ξ_i. After elimination of w and ξ, we obtain (the algebra resembles that of [15], chapter 10, which we shall skip due to the limitation of space) the following dual formulation for regularized unnormalized Winnow:

max_α Σ_i α_i − Σ_j μ_j exp(Σ_i α_i x_j^i y^i)   s.t.  α_i ∈ [0, C] for i = 1, ..., n.

The j-th component of the weight w*(C) is given by w*(C)_j = μ_j exp(Σ_i α_i x_j^i y^i) at the optimal solution.
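Numerically, this dual can be handled exactly like the SVM dual above: cycle through the examples and take a projected gradient step on each α_i, recovering the primal weight from the exponential formula. A minimal sketch under those assumptions (our own code and names, with a fixed learning rate; the paper states the resulting closed-form α update below):

```python
import numpy as np

def reg_unnorm_winnow_dual(X, y, C=1.0, mu=1.0, eta=0.01, epochs=100):
    """Projected coordinate-wise gradient ascent on
        max_alpha  sum_i alpha_i - sum_j mu_j exp(sum_i alpha_i x_ij y_i),
        alpha_i in [0, C],
    with the primal weight w_j = mu_j exp(sum_i alpha_i x_ij y_i)."""
    n, d = X.shape
    alpha = np.zeros(n)
    u = np.zeros(d)                     # u_j = sum_i alpha_i x_ij y_i
    for _ in range(epochs):
        for i in range(n):
            w = mu * np.exp(u)          # current primal weight
            grad = 1.0 - y[i] * (w @ X[i])      # d(dual)/d(alpha_i)
            new_a = np.clip(alpha[i] + eta * grad, 0.0, C)
            u += (new_a - alpha[i]) * y[i] * X[i]   # incremental update of u
            alpha[i] = new_a
    return mu * np.exp(u), alpha
```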
For regularized normalized Winnow with ||w||_1 = W > 0, we obtain

max_α Σ_i α_i − W ln(Σ_j μ_j exp(Σ_i α_i x_j^i y^i))   s.t.  α_i ∈ [0, C] for i = 1, ..., n.

The weight w*(C) is given by w*(C)_j = W μ_j exp(Σ_i α_i x_j^i y^i) / Σ_j μ_j exp(Σ_i α_i x_j^i y^i) at the optimal solution. Similar to the Perceptron-like update rule for the dual SVM formulation, it is possible to derive Winnow-like update rules for the regularized Winnow formulations. At each data point (x^i, y^i), we fix all α_k with k ≠ i, and update α_i to maximize the dual objective functionals. We shall not try to derive an analytical solution, but rather use a gradient ascent method with a learning rate η: α_i ← α_i + η ∂L_D(α_i)/∂α_i, where we use L_D to denote the dual objective function to be maximized. η can be either fixed as a small number or computed by Newton's method. It is not hard to verify that we obtain the following update rule for regularized unnormalized Winnow:

α_i ← max(min(C, α_i + η(1 − w^T x^i y^i)), 0),   where w_j = μ_j exp(Σ_i α_i x_j^i y^i).

This gradient ascent on the dual variable gives an EGU rule as in [9]. Compared with the SVM dual update rule, which is a soft-margin version of the Perceptron update rule, this method naturally corresponds to a soft-margin version of the unnormalized Winnow update. Similarly, we obtain the following dual update rule for regularized normalized Winnow:

α_i ← max(min(C, α_i + η(1 − w^T x^i y^i)), 0),   where w_j = W μ_j exp(Σ_i α_i x_j^i y^i) / Σ_j μ_j exp(Σ_i α_i x_j^i y^i).

Again, this rule (which is an EG rule in [9]) can be naturally regarded as the soft-margin version of the normalized Winnow update. In our experience, these update rules are numerically very efficient. Note that for regularized normalized Winnow, the normalization constant W needs to be carefully chosen based on the data. For example, if the data is infinity-norm bounded by 1, then it does not seem appropriate to choose W ≤ 1, since then |w^T x| ≤ 1: a hyperplane with ||w||_1 ≤ 1 does not achieve a reasonable margin. This problem is less crucial for unnormalized Winnow, but the norm of the initial weight μ_j still affects the solution. Besides maximum entropy discrimination, which is closely related to regularized normalized Winnow, a large margin version of unnormalized Winnow has also been proposed based on some heuristics [3, 4]. However, their algorithm was purely mistake driven without dual variables α_i (the algorithm does not compute an optimal stability hyperplane for the Winnow mistake bound). In addition, they did not include a regularization parameter C, which in practice may be important for non-separable problems.

3 Some statistical properties of regularized Winnows

In this section, we derive some learning bounds based on our formulations that minimize the Winnow mistake bound. The following result is an analogy of a leave-one-out cross-validation bound for separable SVMs - Theorem 10.7 in [15].

Theorem 3.1 The expected misclassification error err_n with the true distribution by using the hyperplane ŵ obtained from the linearly separable (C = ∞) unnormalized regularized Winnow algorithm with n training samples is bounded by

err_n ≤ (1/(n+1)) E min(K, 1.5 W (Σ_j ŵ_j ln(ŵ_j/μ_j)) max_i ||x^i||_∞^2),

where the right-hand-side expectation is taken with n + 1 random samples (x^1, y^1), ..., (x^{n+1}, y^{n+1}), and K is the number of support vectors of the solution. Let ŵ be the optimal solution using all the samples, with dual α^i for i = 1, ..., n + 1. Let w^k be the weight obtained from setting α^k = 0; then W = max(||ŵ||_1, ||w^1||_1, ..., ||w^{n+1}||_1). Proof Sketch.
We only describe the major steps due to the limitation of space. Denote by ŵ^k the weight obtained from the optimal solution by removing (x^k, y^k) from the training sample. Similar to the proof of Theorem 10.7 in [15], we need to bound the leave-one-out cross-validation error, which is at most K. Also note that the leave-one-out cross-validation error is at most |{k : ||ŵ^k − ŵ||_1 ||x^k||_∞ ≥ 1}|. We then use the following two inequalities:

||ŵ^k − ŵ||_1^2 ≤ 2W (Σ_j ŵ_j^k − ŵ_j − ŵ_j ln(ŵ_j^k/ŵ_j)),   and
Σ_j ŵ_j^k − ŵ_j − ŵ_j ln(ŵ_j^k/ŵ_j) ≤ Σ_j w_j^k − ŵ_j − ŵ_j ln(w_j^k/ŵ_j);

the latter inequality can be obtained by comparing the dual objective functionals and by using the corresponding KKT condition of the dual problem. The remaining problem is now reduced to proving that

|{k : Σ_j w_j^k − ŵ_j − ŵ_j ln(w_j^k/ŵ_j) ≥ 1/(2W ||x^k||_∞^2)}| ≤ 1.5 W (Σ_j ŵ_j ln(ŵ_j/μ_j)) max_i ||x^i||_∞^2.

For the dual formulation, by summing over the index k of the KKT first-order condition with respect to the dual α^k, multiplied by α^k, one obtains Σ_k α^k = Σ_j ŵ_j ln(ŵ_j/μ_j). We thus only need to show that if Σ_j w_j^k − ŵ_j − ŵ_j ln(w_j^k/ŵ_j) ≥ 1/(2W ||x^k||_∞^2), then α^k ≥ 2/(3W ||x^k||_∞^2). This can be checked directly through a Taylor expansion. □

By using the same technique, we may also obtain a bound for regularized normalized Winnow. One disadvantage of the above bound is that it is the expectation of a random estimator that is no better than the leave-one-out cross-validation error based on observed data. However, the bound does convey some useful information: for example, we can observe that the expected misclassification error (learning curve) converges at a rate of O(1/n) as long as W (Σ_j w_j ln(w_j/μ_j)) and sup ||x||_∞ are reasonably bounded. It is also not difficult to obtain interesting PAC-style bounds by using the covering number result for entropy regularization in [16] and ideas in [14]. Although the PAC analysis would imply a slightly suboptimal learning curve of O(log n/n) for linearly separable problems, the bound itself provides a probability confidence and can be generalized to non-separable problems. We state below an example for non-separable problems, which justifies the entropy regularization. The bound itself is a direct consequence of Theorem 2.2 and a covering number result with entropy regularization in [16]. Note that as in [14], the square root can be removed if k_γ = 0; γ can also be made data-dependent.

Theorem 3.2 If the data is infinity-norm bounded as ||x||_∞ ≤ b, then consider the family Γ of hyperplanes w such that ||w||_1 ≤ a and Σ_j w_j ln(w_j ||μ||_1/(μ_j ||w||_1)) ≤ c. Denote by err(w) the misclassification error of w with the true distribution. Then there is a constant C such that for any γ > 0, with probability 1 − η over n random samples, any w ∈ Γ satisfies

err(w) ≤ k_γ/n + sqrt( (C/n) [ (b^2 (a^2 + ac)/γ^2) ln(nab/γ + 2) + ln(1/η) ] ),

where k_γ = |{i : w^T x^i y^i < γ}| is the number of samples with margin less than γ.

4 An example

We use an artificial dataset to show that a regularized Winnow can enhance a Winnow just like an SVM can enhance a Perceptron. In addition, it shows that for problems with many irrelevant features, the Winnow algorithms are superior to the Perceptron family algorithms. The data in this experiment are generated as follows. We select an input data dimension d, with d = 500 or d = 5000. The first 5 components of the target linear weight w are set to ones; the 6th component is −1; and the remaining components are zeros. The linear threshold θ is 2.
Data are generated as random vectors with each component randomly chosen to be either 0 or 1 with probability 0.5 each. Five percent of the data are given wrong labels. The remaining data are given correct labels, but we remove data with margins less than 1. One thousand training and one thousand test data are generated. We shall only consider balanced versions of the Winnows. We also compensate for the effect of θ by appending a constant 1 to each data point, as mentioned earlier. We use UWin and NWin to denote the basic unnormalized and normalized Winnows respectively; LM-UWin and LM-NWin denote the corresponding large margin versions. The SVM-style large margin Perceptron is denoted as LM-Perc. We use 200 iterations over the training data for all algorithms. The initial values for the Winnows are set to be the priors: μ_j = 0.01. For online algorithms, we fix the learning rates at 0.01. For large margin Winnows, we use learning rates η = 0.01 in the gradient ascent update. For the (2-norm regularized) large margin Perceptron, we use the exact update, which corresponds to the choice η = 1/(x^{iT} x^i). Accuracies (in percentage) of the different methods are listed in Table 1. For regularization methods, accuracies are reported with the optimal regularization parameters. The superiority of the regularized Winnows is obvious, especially for high dimensional data. Accuracies of the regularized algorithms with different regularization parameters are plotted in Figure 1. These behaviors are very typical for regularized algorithms. In practice, the optimal regularization parameter can be found by cross-validation.

Table 1: Testset accuracy (in percentage) on the artificial dataset

method       d = 500   d = 5000
Perceptron   82.2      67.9
LM-NWin      94.3      88.6

Figure 1: Testset accuracy (in percentage) as a function of the regularization parameter λ, for d = 500 (left) and d = 5000 (right).

5 Conclusion

In this paper, we derived regularized versions of the Winnow online update algorithms. We studied algorithmic and theoretical properties of the newly obtained algorithms, and compared them to the Perceptron family algorithms. Experimental results indicated that for problems with many irrelevant features, the Winnow family algorithms are superior to the Perceptron family algorithms. This is consistent with the implications from both online learning theory and the learning bounds obtained in this paper.

References
[1] J.K. Anlauf and M. Biehl. The AdaTron: an adaptive perceptron algorithm. Europhys. Lett., 10(7):687-692, 1989.
[2] C. Cortes and V.N. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
[3] I. Dagan, Y. Karov, and D. Roth. Mistake-driven learning in text categorization. In Proceedings of the Second Conference on Empirical Methods in NLP, 1997.
[4] A. Grove and D. Roth. Linear concepts and hidden variables. Machine Learning, 2000. To appear; early version appeared in NIPS-10.
[5] A.J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. In Proc. 10th Annu. Conf. on Comput. Learning Theory, pages 171-183, 1997.
[6] Tommi Jaakkola, Marina Meila, and Tony Jebara. Maximum entropy discrimination. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 470-476. MIT Press, 2000.
[7] T.S. Jaakkola, Mark Diekhans, and D. Haussler.
A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, to appear.
[8] W. Kinzel. Statistical mechanics of the perceptron with maximal stability. In Lecture Notes in Physics, volume 368, pages 175-188. Springer-Verlag, 1990.
[9] J. Kivinen and M.K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132:1-64, 1997.
[10] N. Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[11] M. Opper. Learning times of neural networks: Exact solution for a perceptron algorithm. Phys. Rev. A, 38(7):3824-3826, 1988.
[12] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan, New York, 1962.
[13] Bernhard Scholkopf, Christopher J.C. Burges, and Alexander J. Smola, editors. Advances in Kernel Methods: Support Vector Learning. MIT Press, 1999.
[14] J. Shawe-Taylor, P.L. Bartlett, R.C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Inf. Theory, 44(5):1926-1940, 1998.
[15] V.N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
[16] Tong Zhang. Analysis of regularized linear functions for classification problems. Technical Report RC-21572, IBM, 1999. Abstract in NIPS'99, pp. 370-376.
A new model of spatial representations in multimodal brain areas

Sophie Deneve
Department of Brain and Cognitive Science
University of Rochester
Rochester, NY 14620
[email protected]

Jean-Rene Duhamel
Institut des Sciences Cognitives, C.N.R.S.
Bron, France 69675
[email protected]

Alexandre Pouget
Department of Brain and Cognitive Science
University of Rochester
Rochester, NY 14620
[email protected]

Abstract
Most models of spatial representations in the cortex assume cells with limited receptive fields that are defined in a particular egocentric frame of reference. However, cells outside of primary sensory cortex are either gain modulated by postural input or partially shifting. We show that solving classical spatial tasks, like sensory prediction, multi-sensory integration, sensory-motor transformation and motor control, requires more complicated intermediate representations that are not invariant in one frame of reference. We present an iterative basis function map that performs these spatial tasks optimally with gain-modulated and partially shifting units, and test it against neurophysiological and neuropsychological data.

In order to perform an action directed toward an object, it is necessary to have a representation of its spatial location. The brain must be able to use spatial cues coming from different modalities (e.g. vision, audition, touch, proprioception), combine them to infer the position of the object, and compute the appropriate movement. These cues are in different frames of reference corresponding to different sensory or motor modalities. Visual inputs are primarily encoded in retinotopic maps, auditory inputs are encoded in head-centered maps and tactile cues are encoded in skin-centered maps. Going from one frame of reference to the other might seem easy. For example, the head-centered position of an object can be approximated by the sum of its retinotopic position and the eye position. However, positions are represented by population codes in the brain, and computing a head-centered map from a retinotopic map is a more complex computation than the underlying sum. Moreover, as we get closer to sensory-motor areas it seems reasonable to assume
Andersen et al, for example, found retinotopic cells that were gain modulated by eye position in LIP [1], but none of these cells had a headcentered receptive fields. Subsequent studies confirmed that gain-modulation by eye position is a very general phenomena in the cortex, whereas truly head-centered or arm-centered cells have rarely been reported. More recently, in VIP, Duhamel et al. found cells that were neither eye nor headcentered, but whose receptive fields were partially moving with the eyes [2]. As a consequence, the receptive fields appeared to be moving both in the retinotopic and head-centered frames of reference (see figure 1). The amount of shift with gaze varied from cell to cell, and was continuously distributed between 0% (head-centered) and 100% (retinotopic). Partially shifting cells where also found for auditory targets in LIP [5] and in the superior colliculus [3]. We will show in this paper that the nature of the problem of integrating postural and sensory inputs from different modalities, and providing motor outputs with distributed population codes lead us to postulate the existence of these gain modulated and/or partially moving receptive fields in the associative brain areas, instead of invariant representations. We present an interconnected network that can perform multi-directional coordinate and sensory-motor transforms by using intermediate basis function units. These intermediate units are gain modulated by eye position, have partially shifting receptive field and, as a result, represent space in a mixture of frames of reference. They provide a new model of spatial representations in multimodal areas according to which cells responses are not determined solely by the position of the stimulus in a particular egocentric frame of reference, but by the interactions between the dominant input modalities. 1 Sensory predictions and sensory-motor transformations with distributed population codes We will focus on the eye/head system which deals with two frames of reference (retinotopic and head-centered) and one postural input (the eye position). Sensory predictions consist of anticipating a stimulus in one sensory modality from a stimulus originating from the same location, but in another sensory modality. Predictions of auditory stimuli from visual stimuli, for example, requires the computation of a head-centered map from a retinotopic map. 1.1 Coordinate transforms and sensory predictions We assume that the tuned response of a retinotopic cell can be modeled by a Gaussian BT(R - Ri) of the distance between the stimulus position R and the receptive field center R i , and that the response of a postural cell to eye position can be modeled by a gaussian Be(E - Ej ) of the difference between the eye position E and the preferred angle E j . In addition we suppose that cells are organized topographically in each layer, so that a stimulus at position r and for eye position 9 will give rise to a hill of activity peaking at position r on the retinotopic map and 9 on the eye position map. We wish to compute a head-centered map where cells responses are described by head-centered gaussian tuning curves Bh(H - Hk) where H is the head-centered position and Hk the preferred position. Given the geometry of the eye/head system, we have approximately H = R + E, but this does not simplify the computation of coordinate transform with population codes. We certainly cannot have Bh(H - H k ) = Be(E - E j ) + BT(R - Rk). 
1.2 Basis function map To solve this problem we could use an intermediate neural layer that implements a product between visual and postural tuning curves [4]. Products of Gaussians are basis functions and thus a population of retinotopic cells gain modulated by eye position, whose responses are described by BT(R - Ri)Be(E - Ej ) implement a basis function map of Rand E. Any function f(R, E) can be approximated by a linear combination of these cells responses: f(R,E) =L wijBT(R - Ri)Be(E - Ej ). (1) ij In particular, a head centered map is a function of retinotopic position and eye position and can be computed very easily from the basis function map (by a simple linear combination). Even more importantly, any sensory-motor transform can be implemented by feedforward weights coming from the basis function layer. The basis function map itself can be readily implemented from a retinotopic map and an eye position map, by connecting each unit with one visual cell and one eye position cell, and computing a product between these two inputs [4]. Similarly, another basis function map could be implemented by making the product between auditory and postural tuning curves, BT(R - Ri)Bh(H - Hk), in order to predict the position of a visual cue from the sound it makes, or to compute reaching toward auditory cues. However it would be better to combine these two basis function maps in a common architecture, especially if we want to integrate visual and auditory inputs or implement motor feedback to sensory representation, both of which require a multi-directional computation. 2 Multi-directional coordinate transforms with distributed population codes If we want to combine these two basis function maps without giving the priority to one modality, we can intuitively use basis functions that are a product between the three tuning curves: (2) From this intermediate representation, the three sensory maps Br(R- R i ), Be(EEj ) and Bh(H - Hi+j) can be computed by simple projections. This ensures that this basis function units can use the two sensory maps as both input and output. We implemented this idea in an interconnected neural network that non-linearly combines visual, auditory and postural inputs in an intermediate layer (the basis function map), which in turn is used to reconstruct the activities on the auditory, visual, and postural layers. This network is completely symmetric, similarly processing visual, postural and auditory inputs. It converges to stable hills of activity on the three neural maps that simultaneously gives the retinotopic position, headcentered position, and the eye position in the input (see figure 2A), performing multi-directional sensory prediction. For this reason, we called this model an iterative basis function network. 3 The iterative basis function network The network is represented on figure 2A. It has four layers: three visible, one dimensional layers (visual, auditory and postural) and a two dimensional hidden layer. The three input layers are not directly connected to one another, but they are all interconnected with the hidden layer. These interconnections are symmetric, i.e. the connection between neuron A and B has the same strength as the connection between neuron Band A. This ensures that the network will converge towards a stable state. We note W r , W h , we the respective weights of the retinotopic, head-centered and eye position layers with the hidden layer. All three weight matrices are circular symmetric gaussian filters. 
The connection between the ith unit in each input layers, and the hidden layer l, mare wr(i, l, m) = B(l-i), we(i, l, m) = B(m - i), Wh(i, l, m) = B((l + m) - i), where B is a circular gaussian matrix: (3) a w governs the width of the weight, Z is a constant that controls the dominance of the corresponding sensory or postural modality on the intermediate layer, and N is the number of units in the input layers. Note that with these weights, the hidden unit l, m is maximally connected to the unit l in the retinotopic layer, m in the eye position layer, and l +m in the head-centered layer. This connectivity is responsible A. H=R+G B. I~ct ? 6 0 4 .:( 5 ~ 00 2 0 o -100 ~ 0 100 ---B--,G = _100 - B - G = -100 Preferred head centered l OCaIJOIl 000000000 ]Wh Olil il il il il il '"'0000'"'0 OIilIilI.H.OIil R 00 2 /W~ OOOOO \ 000000 10 80 60 40 0000000 1il1il1il1il001il 1il1il1il1il1il01il G 0 0 0 0 0 0 0 00000 8 60 40 20 0 00 000 0, wh>wr 20 40 60 80 Retinal Loca1J. on o o 10 o e,:ltJ j:lKJ .~ ." 0 20 40 60 80 l::~.?'?". 1 .:(5 00 ~ 20 40 60 80 Retmal Localton Figure 2: A- Architecture of the iterative BF map. The intermediate cells look like partially shifting cell in VIP. B- An intermediate cell's response properties when one varies the ratio Zr/Zh of modality dominance (strength of the weights). The gain of the shift varies from 0 to 1 depending of the relative strength of Wh (the auditory weights) and W r (the visual weights). for the fact that the network will compute H = R+E. This approach can generalize to arbitrary mapping M = f(R, E) if we replace Wh(i, l, m) = B((l + m) - i) by Wh(i, l, m) = B(f(l, m) - i). Activities on the inputs layers are pooled linearly on the intermediate layers, according to the connection matrices. Then these pooled inputs are squared and normalized with a divisive inhibition. The resulting activities on the intermediate layer are then sent back to the input layers, through the symmetric connections, and in turn squared and normalized. The inputs are modeled by bell-shaped distribution of activities clamped on the input layers at time O. The amplitude of these initial hills of activity represents the contrasts of the stimuli. A purely visual stimulus, for example, would have an auditory contrast of 0 on the head-centered layer. Except for very low contrasts in all modalities, the network converges toward non-zero stable states when provided with visual, auditory, or bimodal input. These stable states are stable hills of activity on the visual, auditory and postural layers, so that the position of the hill on the head-centered layer is the sum of the position of the hill on the visual layer, and the position of the hill on the postural layer . When provided with visual and postural input, the network predicts the auditory position of the stimulus. When provided with auditory and postural input, the retinotopic position can be read from the position of the stable hill on the visual layer. Thus, the network is automatically doing coordinate transforms in both directions. The whole process takes no more than 2 iterations. 4 Spatial representation in the intermediate layer The cells in the intermediate layer provide a multimodal representation of space that we can characterize and compare to neurophysiological data. We will focus on the unit's response after the network reached its stable state. 
The final state depends only on the position encoded in the input, which implies that the unit's responses are identical regardless of the input modality (visual, auditory or bimodal). The receptive fields in different modalities are spatially congruent, like the receptive fields of most multimodal cells in the brain. In figure 2B, we plotted for different eye positions the activity of an intermediate cell as a function of the retinotopic position of the stimulus. Note that because of the symmetries in the network, all the other intermediate cells responses are translated version of this one. The critical parameter that will govern the intermediate representation is ratio Zr/Zh that defines the relative strength of visual and auditory weights. This is the only parameter we manipulated in this study. When neither the visual nor the auditory representation dominates (that is, when Zr/Zh = 1, see figure 2B, top panel), the intermediate cell's receptive field on the retina shift with the eyes, but it does not shift as much as the eyes do. This is a partially shifting cell, gain modulated by eye position. The amount of receptive field shift with the gaze is 50%. In fact we found that this cell's response was very close to a product between a gaussian of retinotopic position, head-centered position and eye position, thus implementing the basis function we already proposed as a solution to the multi-directional computation problem. This cell looks very much like a one dimensional version of the particular VIP cell plotted in figure 1A. Varying the ratio x = i~ does not affect the performance of the network for coordinate transform (the only change occurring on the input layers is a change in the amplitude of the stable hills) but it changes the intermediate representation, particularly the amount of receptive field shift with gaze. There is a continuum between a gain modulated retinotopic cell for a high value of x ( 0% shift, figure 2B, middle panel) and a gain modulated head-centered cell for a low value of x (100% shift, figure 2B, bottom panel). This behavior is easy to understand: an intermediate cell receives tuned retinotopic, head-centered and eye position inputs. This three tuned inputs will more or less influence the unit's response, depending on their strength. Thus, the whole distribution of shifts found in VIP could belong to an iterative basis function map with varying ratio between visual and "head-centered" weights. In the case of VIP, "head-centered" would correspond to tactile, as VIP is a visuo/tactile area. On the other hand, if one modality dominates in all cells (e.g. in LIP for vision), we can predict that the distribution of responses will be displaced toward the frame of reference of this modality. 5 Lesion of the iterative basis function map In order to link the intermediate representation with spatial representations in the human parietal cortex, we studied the consequences of a lesion to this network. Unilateral right parietal lesions result in a syndrome called hemineglect: The patient is slower to react to, and has difficulty detecting stimuli in the contralesional space. This is usually coupled with extinction of leftward stimuli by rightward stimuli. Two striking characteristics of hemineglect are that it is usually in a mixture of frames of reference, challenging the view that parietal cortex is a mosaic of areas devoted to spatial processing in different frames of reference. Additionally, extinction is frequently cross-modal. 
For example, tactile stimuli can be extinguished by visual stimuli, suggesting that the lesioned spatial representation are themselves multimodal. We modeled a right parietal lesion by implementing a gradient of units in the intermediate layer, so that there are more cells tuned to contralateral retinotopic (visual) and contralateral head-centered (auditory) positions. This correspond to the observed hemispheric asymmetries in the monkey's brain. This modification did not strongly affect the final estimates of position by the network, but the processing was slower (taking more time to reach the stable state) and the contrast threshold (minimal visual and auditory contrasts that drives the network) was higher for the leftward retinal and head-centered locations. Thus the network "neglected" stimuli in a mixture of frames of reference: The severity of neglect gradually increased from right to left both in retinotopic and head-centered coordinates. Furthermore when we entered two simultaneous inputs to the network, we observed that the leftward stimulus was always extinguished by the rightward stimulus (the final stable state reflected only the rightward stimulus), regardless of the modality. Thus we obtained extinction of auditory stimuli by visual stimuli, and vice-versa. In our model, these two aspects of neglect (mixture of frames of reference and cross modal extinction) can be explained by a lesion in only one multimodal brain area. 6 Conclusion Our approach can be easily generalized to sensory-motor transformations. In this case, the implementation of motor control (the feedback from the motor representations to the sensory representations) will lead to intermediate cells that partially shift in the sensory as well as the motor frame of reference. This model has other (related) interesting properties that we develop elsewhere. In the presence of noisy input, it can perform optimal multi-sensory cue integration, and allows an adaptative bayesian approach to cue integration, in a biologically realistic way. Iterative basis function maps provide a new model of spatial representations and processing that can be applied to neurophysiological and neuropsychological data. References [1] R. Andersen, R. Bracewell, S. Barash, J. Gnadt, and L. Fogassi. Eye position effect on visual memory and saccade-related activity in areas LIP and 7a of macaque. Journal of Neuroscience, 10:1176-1196,1990. [2] J. Duhamel, F. Bremmer, S. BenHamed, and W. Graf. Spacial invariance of visual receptive fields in parietal cortex. Nature, 389(6653):845-848,1997. [3] M. Jay and D. Sparks. Sensorimotor integration in the primate superior colliculus:l. motor convergence. Journal of Neurophysiology, 57:22-34, 1987. [4] A. Pouget and T. Sejnowski. Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience, 9(2), 1997. [5] B. Stricanne, P. Mazzoni, and R. Andersen. Modulation by the eye position of auditory responses of macaque area LIP in an auditory memory saccade task. In Society For Neuroscience Abstracts, page 26, Washington, D.C., 1993.
1834 |@word neurophysiology:1 version:2 middle:1 seems:1 bf:1 extinction:4 grey:1 initial:1 tuned:4 must:1 readily:1 visible:1 realistic:1 subsequent:1 motor:17 cue:8 ith:1 fogassi:1 detecting:1 location:3 fixation:1 combine:4 behavior:1 themselves:1 nor:2 frequently:1 multi:7 brain:9 automatically:1 provided:3 retinotopic:23 underlying:1 moreover:2 panel:3 monkey:1 barash:1 transformation:5 stricanne:1 control:3 unit:13 consequence:2 solely:1 modulation:2 approximately:1 might:1 studied:1 mare:1 challenging:1 limited:1 neuropsychological:2 directed:1 responsible:1 implement:3 area:10 bell:1 projection:1 integrating:1 get:1 cannot:1 close:1 bh:4 influence:1 map:30 center:1 regardless:2 spark:1 react:1 pouget:2 importantly:1 population:6 coordinate:7 discharge:1 target:1 suppose:1 mosaic:1 approximated:2 particularly:1 predicts:1 bottom:1 observed:2 ensures:2 connected:2 grasping:1 movement:1 govern:1 lesioned:1 neglected:1 solving:1 topographically:1 purely:1 basis:21 completely:1 translated:1 rightward:3 multimodal:6 easily:2 headcentered:3 represented:3 sejnowski:1 outside:1 jean:1 encoded:4 whose:3 solve:1 reconstruct:1 interconnection:1 transform:3 itself:1 noisy:1 final:3 associative:1 interconnected:3 coming:3 interaction:1 fr:1 cognitives:1 product:6 entered:1 ltj:1 convergence:1 asymmetry:1 congruent:1 bron:1 converges:2 object:3 depending:2 develop:1 ij:1 implemented:4 implies:1 direction:1 filter:1 centered:27 human:1 implementing:2 require:1 mapping:1 predict:2 continuum:1 vice:1 gaussian:6 always:2 rather:1 reaching:2 ej:3 varying:2 encode:1 focus:2 hk:3 contrast:5 cnrs:1 bt:4 hidden:5 originating:1 going:1 france:1 spatial:14 integration:4 field:16 shaped:1 washington:1 identical:1 represents:1 look:2 stimulus:23 simplify:1 extinguished:2 primarily:1 retina:1 manipulated:1 simultaneously:1 geometry:1 circular:2 certainly:1 truly:2 mixture:4 devoted:1 isc:1 closer:1 necessary:1 respective:1 bremmer:1 institut:1 plotted:2 minimal:1 increased:1 challenged:1 contralateral:2 optimally:1 reported:1 characterize:1 varies:2 gaze:4 connecting:1 continuously:1 connectivity:1 andersen:3 postulate:1 squared:2 bracewell:1 priority:1 cognitive:3 audition:1 suggesting:1 de:1 retinal:2 pooled:2 depends:1 view:2 doing:1 reached:1 complicated:1 rochester:6 il:5 characteristic:1 correspond:2 directional:5 generalize:1 bayesian:1 none:1 confirmed:1 drive:1 simultaneous:1 reach:1 against:1 sensorimotor:1 frequency:1 visuo:1 gain:12 auditory:24 wh:6 adaptative:1 organized:1 amplitude:2 anticipating:1 back:1 alexandre:1 higher:1 reflected:1 response:15 maximally:1 rand:1 modal:2 strongly:1 furthermore:1 hand:1 receives:1 touch:1 defines:1 effect:1 normalized:2 read:1 symmetric:4 spatially:1 white:1 deal:1 width:1 hemispheric:1 generalized:1 hill:9 performs:1 recently:1 common:2 superior:2 belong:1 rene:1 versa:1 tuning:4 similarly:2 had:1 moving:4 stable:10 cortex:8 inhibition:1 etc:1 dominant:1 leftward:3 syndrome:1 converge:1 sound:1 bcs:2 jrd:1 infer:1 cross:3 prediction:6 vision:2 patient:1 iteration:1 represent:2 bimodal:2 cell:41 whereas:1 addition:1 want:2 modality:17 sent:1 proprioception:1 seem:1 presence:1 intermediate:22 feedforward:1 easy:2 affect:2 architecture:2 idea:1 br:1 shift:12 tactile:4 unilateral:1 action:1 useful:1 governs:1 ooooo:1 amount:3 transforms:4 band:1 neuroscience:3 per:1 wr:2 dominance:2 four:1 threshold:1 neither:2 deneve:1 egocentric:3 sum:3 colliculus:2 angle:1 striking:1 reasonable:1 layer:34 hi:1 ct:1 activity:10 adapted:1 strength:5 alex:1 ri:4 aspect:1 
performing:1 department:2 according:3 combination:2 making:1 modification:1 biologically:1 primate:1 peaking:1 invariant:4 intuitively:1 gradually:1 explained:1 turn:3 vip:7 gaussians:1 appropriate:1 appearing:1 slower:2 existence:1 top:1 neglect:2 giving:1 especially:1 postural:15 classical:1 society:1 skin:1 already:1 spike:1 mazzoni:1 receptive:16 primary:1 gradient:1 distance:1 link:1 toward:4 reason:1 code:5 modeled:4 providing:1 ratio:4 hemineglect:2 rise:1 implementation:1 perform:3 neuron:2 duhamel:4 displaced:1 parietal:7 severity:1 head:28 frame:16 varied:1 arbitrary:1 connection:5 macaque:2 able:1 lkj:1 usually:2 remapped:1 appeared:1 memory:2 shifting:6 critical:1 difficulty:1 zr:3 arm:1 eye:31 mediated:1 coupled:1 zh:3 relative:2 graf:1 interesting:1 integrate:1 elsewhere:1 understand:1 taking:1 distributed:4 curve:4 feedback:2 sensory:27 preferred:3 iterative:7 spacial:1 anchored:1 lip:5 additionally:1 nature:2 symmetry:1 complex:1 did:1 linearly:2 whole:2 lesion:5 body:1 screen:1 ny:2 position:50 wish:1 clamped:1 jay:1 rk:1 dominates:2 consist:1 occurring:1 neurophysiological:3 visual:28 failed:1 partially:10 saccade:2 towards:1 replace:1 change:3 determined:1 except:1 sophie:1 total:1 called:2 invariance:1 divisive:1 rarely:1 modulated:10 phenomenon:1
912
1,835
Keeping flexible active contours on track using Metropolis updates

Trausti T. Kristjansson
University of Waterloo
[email protected]

Brendan J. Frey
University of Waterloo
[email protected]

Abstract

Condensation, a form of likelihood-weighted particle filtering, has been successfully used to infer the shapes of highly constrained "active" contours in video sequences. However, when the contours are highly flexible (e.g. for tracking fingers of a hand), a computationally burdensome number of particles is needed to successfully approximate the contour distribution. We show how the Metropolis algorithm can be used to update a particle set representing a distribution over contours at each frame in a video sequence. We compare this method to condensation using a video sequence that requires highly flexible contours, and show that the new algorithm performs dramatically better than the condensation algorithm. We discuss the incorporation of this method into the "active contour" framework, where a shape subspace is used to constrain shape variation.

1 Introduction

Tracking objects with flexible shapes in video sequences is currently an important topic in the vision community. Methods include curve fitting [9], layered models [1, 2, 3], Bayesian reconstruction of 3-D models from video [6], and active contour models [10, 14, 15]. Fitting curves to the outlines of objects has been attempted using various methods, including "snakes" [8, 9], where an energy function is minimized so as to find the best fit. As with other optimization methods, this approach suffers from local maxima. This problem is amplified when using real data, where edge noise can prevent the fit of the contour to the desired object outline.

In contrast, Blake et al. [10] introduced a probabilistic framework for curve fitting and tracking. Instead of proposing one single best fit for the contour, a probability distribution over contours is found. The distribution is represented as a particle set where each particle represents one contour shape. Inference in these "active contour" models is accomplished using particle filtering.

In the "active contour" method, a probabilistic dynamic system is used to model the distribution over the outline of the object (the contour) Y_t and the observations Z_t at time t. Tracking is performed by inference in this model. The outline of an object is tracked through successive frames in a video by using a particle distribution.

Figure 1: (a) Condensation with Gaussian dynamics (result for best σ = 2 shown) applied to a video sequence. The 200 contours corresponding to 200 particles fail to track the complex outline of the hand. The pictures show every 24th frame of a 211-frame sequence. (b) Metropolis updates with only 12 particles keep the contours on track. At each step, 4 iterations of Metropolis updates are applied with σ = 3.

Each particle x_n represents a single contour Y¹ that approximates the outline of the object. For any given frame, a set of particles represents the probability distribution over positions and shapes of an object. In order to find the likelihood of an observation Z_t, given a particle x_n, lines perpendicular to the contour are examined and edges are detected. A variety of distributions can be used to model the likelihood of the edge positions along each line.
We assume that the position of the edge belonging to the object is drawn from a Gaussian with mean position at the intersection of the contour and the measurement line, Y(s_m), and that the positions of the other edges are drawn from a Poisson distribution. The observation likelihood for a single measurement line z_m can be simplified to [10]

p(z_m \mid x_n) \propto 1 + \frac{1}{\sqrt{2\pi}\,\sigma_m Q} \sum_j \exp\left[-\frac{|z_{m,j} - B(s_m)x_n|^2}{2\sigma_m^2}\right],   (1)

where z_{m,j} denotes the coordinates of an edge on measurement line m, and B(s_m)x_n = Y_n(s_m) is the intersection of the contour and the measurement line (see later). Q = qλ, where q is the probability of not observing the edge, and λ is the rate of the Poisson process. σ_m defines the standard deviation in pixels. A multitude of measurement lines is used along the contour, and (assuming independence) the contour likelihood is

p(Z \mid x_n) = \prod_{m \in M} p(z_m \mid x_n),   (2)

where M is the set of measurement lines.

¹Notation: We will use Y to refer to a curve, parameterized by s, and Y(s) for a particular point on the curve. x refers to a particle consisting of subspace parameters, or in our case, control points. n indexes a particle in a particle set, i indexes a component of a particle (i.e., a single control point), m indexes measurement lines, and t is used as a frame index.

As mentioned, in the condensation algorithm a particle set is used to represent the distribution of contours. Starting from an initial distribution, a new distribution for a successive frame is produced by propagating each particle using the system dynamics P(x_t | x_{t-1}). Now the observation likelihood P(Z_t | x_t) is calculated for each particle, and the particle set is resampled with replacement, using the likelihoods as weights. The resulting set of particles approximates the posterior distribution at time t and is then propagated to the next frame.

Figure 1(a) shows the results of using condensation with 200 particles. As can be seen, the result is poor. Intuitively, the reason condensation fails is that it is highly unlikely to draw a particle that has raised control points over the four fingers while keeping the remainder fixed. Figure 1(b) shows the result of using Metropolis updates and 12 particles (an equivalent amount of computation).

2 Keeping contours on track using Metropolis updates

To reduce the dimensionality of the inference, a subspace is often used. For example, a fixed shape is only allowed horizontal and vertical translation. Using a subspace reduces the size of the required particle set, allowing for successful tracking using standard condensation. If the object can deform, a subspace that captures the allowed deformations may be used [15]. This increases the flexibility of the contour, but at the cost of enlarged dimensionality. In order to learn such a subspace, a large number of training samples is used, which are supplied by hand-fitting contour shapes to a large number of frames. However, even moderately detailed contours (say, the outline of a hand) will have many control points that interact in complex ways, making subspace modeling difficult or impractical.

2.1 Metropolis sampling

Metropolis sampling is a popular Markov chain Monte Carlo method for problems of large dimensionality [16, 17]. A new particle is drawn from a proposal density Q(x'; x_t), where in our case x_t is a particle (i.e., a set of control points) at time t, and x' is a tentative new particle produced by perturbing a subset of the control points:

Q_i(x' \mid x_t) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(x'_i - x_{t,i})^2}{2\sigma^2}\right].   (3)
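To make the measurement model concrete, the following is a minimal sketch of how the single-line likelihood in (1) and the contour likelihood in (2) might be computed. The array layout and parameter names (edge_positions, sigma_m, and so on) are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def line_likelihood(edge_positions, predicted, sigma_m, q, lam):
    """Clutter-robust likelihood for one measurement line, cf. Eq. (1).

    edge_positions: 1-D array of detected edge coordinates z_{m,j} along the line.
    predicted: scalar coordinate of the contour/line intersection B(s_m) x_n.
    q: probability of not observing the true edge; lam: Poisson clutter rate.
    """
    Q = q * lam
    gauss = np.exp(-(edge_positions - predicted) ** 2 / (2.0 * sigma_m ** 2))
    return 1.0 + gauss.sum() / (np.sqrt(2.0 * np.pi) * sigma_m * Q)

def contour_likelihood(edges_per_line, predictions, sigma_m, q, lam):
    """Product over measurement lines, cf. Eq. (2) (independence assumption)."""
    p = 1.0
    for z_m, y_m in zip(edges_per_line, predictions):
        p *= line_likelihood(np.asarray(z_m), y_m, sigma_m, q, lam)
    return p
```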
We then calculate

\alpha = \frac{p(x' \mid x_{t-1})\, p(Z_t \mid x')}{p(x_t \mid x_{t-1})\, p(Z_t \mid x_t)} \cdot \frac{Q(x_t; x')}{Q(x'; x_t)},   (4)

where p(x_t | x_{t-1}) p(Z_t | x_t) is proportional to the posterior probability of observing the contour in that position. If α ≥ 1 the proposed particle is accepted. If α < 1, it is accepted with probability α. Since Q is symmetric, the second factor Q(x_t; x')/Q(x'; x_t) = 1.

Metropolis sampling can be used in the framework of particle propagation in two ways. It can either be used to fit splines around contours of a training set that is used to construct a shape subspace, e.g. by PCA, or it can be used to refine the shapes of the subspace to the actual data during tracking.

2.2 B-splines

B-splines, or basis function splines, are parametric curves defined as follows:

Y(s) = B(s)C,   (5)

where Y(s) is a two-dimensional vector consisting of the 2-D coordinates of a point on the curve, B(s) is a matrix of polynomial basis functions, and C is a vector of control points. In other words, a point along the curve Y(s) is a weighted sum of the values of the basis functions B(s) for a particular value of s, where the weights are given by the values of C.

The basis functions of B-splines have the characteristic that they are non-zero over a limited range of s. Thus a particular control point will only affect a portion of the curve. For regular B-splines of order 4 (the basis functions are 3rd degree polynomials), a single control point will only affect Y(s) over a range of s of length 4. Conversely, for a particular s_m (m : s_m ∈ support(x_i), where i indexes the component of x that has been altered), Y(s_m) is affected by at most 4 control points (fewer towards the ends).

As mentioned before, a detailed contour can have a large number of control points, and thus high dimensionality, and so it is common to use a subspace. In this case C can be written as C = Wx + C_0, where W defines a linear subspace, C_0 is the template of control points, and x represents perturbations from the template in the subspace. In this work we examine unconstrained models, where no prior knowledge about the deformations or dynamics of the object is presumed. In this case W is the identity matrix, C_0 = 0, and x are the actual coordinates of the control points. This allows the contour to deform in any way.

2.3 Metropolis updates in condensation

The new algorithm consists of two steps: a Metropolis step, followed by a resampling step.

1. Iterate over control points:
   - For one control point at a time, draw a proposal particle by drawing a new control point x'_i from a 2-D Gaussian centered at the current control point x_{t,i}, Eq. (3), and keeping all others unchanged.
   - Calculate the observation likelihood for the new control point, Eq. (2).
   - Calculate α (Eq. (4)) and reject or accept the new particle.
2. Resample.
3. Get the next image in the video.

If the particle distribution at t − 1 reflects P(x_{t-1} | Z_1, …, Z_{t-1}), the Metropolis updates will converge to P(x_t | Z_1, …, Z_t) [16]. As mentioned above, the effect of altering the position of a control point is to change the shape of the contour locally, since the basis functions have limited support. Thus, when evaluating p(x'_t | x_{t-1}) p(Z_t | x'_t) for a proposed particle, we only need to re-examine measurement lines and evaluate p(z_{m,t} | x'_t) for lines in the affected interval, and similarly for p(x'_{n,t} | x_{n,t-1}). This allows for an efficient implementation of the algorithm. The computation C_M required to update a single particle using Metropolis, compared to condensation, is C_M = o · i_t · C_C, where o is the order of the B-spline, i_t is the number of iterations, and C_C is the number of computations required to update a particle using condensation.
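As an illustration, here is a minimal sketch of the per-control-point Metropolis sweep described in Section 2.3, under simplifying assumptions of our own (a generic log-posterior callback and symmetric Gaussian proposals); it is not the authors' implementation.

```python
import numpy as np

def metropolis_sweep(particle, log_posterior, sigma=3.0, iterations=4, rng=None):
    """Metropolis passes over the control points of a single particle.

    particle: (n_points, 2) array of 2-D control points.
    log_posterior: function mapping a full particle to
        log[p(x_t | x_{t-1}) p(Z_t | x_t)] (up to an additive constant).
    The Gaussian proposal is symmetric, so the Hastings correction in
    Eq. (4) reduces to a ratio of posteriors.
    """
    rng = rng or np.random.default_rng()
    current_lp = log_posterior(particle)
    for _ in range(iterations):
        for j in range(len(particle)):
            proposal = particle.copy()
            proposal[j] += rng.normal(0.0, sigma, size=2)  # perturb one point
            proposal_lp = log_posterior(proposal)
            # accept if alpha >= 1, else accept with probability alpha
            if np.log(rng.uniform()) < proposal_lp - current_lp:
                particle, current_lp = proposal, proposal_lp
    return particle
```

Note that the sketch recomputes the full posterior for clarity; as the paper points out, only the measurement lines in the interval affected by the altered control point actually need to be re-evaluated.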
Thus, in the case of fourth-order splines such as the ones we use, the increase in computation for a single particle is only four for a single iteration, and eight for two iterations. However, we have seen that far fewer particles are required.

Figure 2: The behavior of the algorithm with Metropolis updates is shown at frame 100 (t = 100) as a function of iterations and σ. The columns show, from left to right, 1, 2, 4 and 8 iterations, and the rows, from top to bottom, show σ = {1, 2, 3, 4}. The rejection ratio (i.e. the ratio of rejected proposal particles to the total number of proposed particles) is shown as a bar on the right side of each image.

3 Results

We tested our algorithm on the video sequence shown in Figure 1. The contour had 56 2-D control points, i.e. a state space of 112 dimensions. Such high dimensionality is required for the detailed contours needed to properly outline the fingers of the hand. The results presented are for relatively noise-free data, i.e. free from background clutter. This allows us to contrast the performance of using Metropolis updates and standard condensation for the scenarios of interest, i.e. the learning of subspace models and contour refinement.

Figure 1(b) shows the results for the Metropolis updates for 12 particles, 4 iterations and σ = 3. The figure shows every 24th frame from frame 1 to frame 211. The outline of the splayed fingers is tracked very successfully. Figure 1(a) shows every 24th frame for the condensation algorithm of equivalent complexity, using 200 particles and σ = 2. This value of σ gave the best results for 200 particles. As can be seen, the little finger is tracked moderately well. However, the other parts of the hand are very poorly tracked. For lower values of σ the contour distribution did not track the hand, but stayed in roughly the position of the initial contour distribution. For higher values of σ, the contour looped around in the general area of the fingers.

Figure 2 shows the contour distribution for frame 100 and 12 particles, for different numbers of iterations and values of σ. When σ = 1 and 2 the contour distribution does not keep up with the deformation. For σ = 4 the contour is correctly tracked except for the case of a single iteration. The rejection ratio (i.e. the ratio of rejected proposal particles to the total number of proposed particles) is shown as a bar on the right side of each image. Notice that the general trend is that the rejection ratio increases as σ increases, and decreases as the number of iterations is increased (due to a smaller σ at each step).

Intuitively, it is not surprising that our new algorithm outperforms standard condensation. In the case of condensation, Gaussian noise is added to each control point at each time step. One particle may be correctly positioned for the little finger and poorly positioned for the forefinger, whereas another particle may be well positioned around the forefinger and poorly positioned around the little finger. In order to track the deformation of the hand, some particles are required that track both the little finger and the forefinger (and all other parts too). In contrast, the Metropolis updates are likely to reject particles that are locally worse than the current particle, but accept local improvements.
It should be noted that for lower-dimensional problems, the increase in tracking performance is not as dramatic. E.g., in the case of tracking a rotating head using a 12-control-point B-spline, the two algorithms performed comparably.

4 Future work and conclusion

We are currently examining the effects of background clutter on the performance of the algorithm. We are also investigating other sequences and groupings of control points for generating proposal particles, and ways of using subspace models in combination with Metropolis updates.

In this paper we showed how Metropolis updates can be used to keep highly flexible active contours on track, and an efficient implementation strategy was presented. For high-dimensional problems, which are common for detailed shapes, the new algorithm presented produces dramatically better results than standard condensation.

Acknowledgments

We thank Andrew Blake and Dale Schuurmans for helpful discussions.

References

[1] J. Y. A. Wang and E. H. Adelson. "Representing moving images with layers." IEEE Transactions on Image Processing, Special Issue: Image Sequence Compression, vol. 3, no. 5, 1994, pp. 625-638.
[2] Y. Weiss. "Smoothness in layers: Motion segmentation using nonparametric mixture estimation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1997.
[3] A. Jepson and M. J. Black. "Mixture models for optical flow computation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[4] W. T. Freeman and P. A. Viola. "Bayesian model of surface perception." Advances in Neural Information Processing Systems 10, MIT Press, 1998.
[5] W. Freeman and E. Pasztor. "Learning low-level vision." Proceedings of the International Conference on Computer Vision, 1999, pp. 1182-1189.
[6] N. R. Howe, M. E. Leventon, and W. T. Freeman. "Bayesian reconstruction of 3D human motion from single-camera video." In S. A. Solla, T. K. Leen, and K.-R. Muller (eds.), Advances in Neural Information Processing Systems 12, 2000. TR9937.
[7] G. E. Hinton, Z. Ghahramani, and Y. W. Teh. "Learning to parse images." In S. A. Solla, T. K. Leen, and K.-R. Muller (eds.), Advances in Neural Information Processing Systems 12, MIT Press, 2000.
[8] D. Terzopoulos and R. Szeliski. "Tracking with Kalman snakes." In A. Blake and A. Yuille (eds.), Active Vision, pp. 3-20. MIT Press, Cambridge, MA, 1992.
[9] N. Papanikolopoulos, P. Khosla, and T. Kanade. "Vision and control techniques for robotic visual tracking." In Proc. IEEE Int. Conf. Robotics and Automation 1, 1991, pp. 851-856.
[10] A. Blake and M. Isard. Active Contours. Springer-Verlag, 1998. ISBN 3540762175.
[11] J. MacCormick and A. Blake. "A probabilistic exclusion principle for tracking multiple objects." Proc. 7th IEEE Int. Conf. Computer Vision, 1999.
[12] M. Isard and A. Blake. "ICONDENSATION: Unifying low-level and high-level tracking in a stochastic framework." Proc. 5th European Conf. Computer Vision, vol. 1, 1998, pp. 893-908.
[13] J. Sullivan, A. Blake, M. Isard, and J. MacCormick. "Object localization by Bayesian correlation." Proc. Int. Conf. Computer Vision, 1999.
[14] T. F. Cootes, G. H. Edwards, and C. J. Taylor. "Active appearance models." Proceedings of the European Conference on Computer Vision, vol. 2, 1998, pp. 484-498.
[15] I. Matthews, J. A. Bangham, R. Harvey, and S. Cox. Proc. Auditory-Visual Speech Processing (AVSP), 1998, pp. 73-78.
[16] R. M. Neal. "Probabilistic inference using Markov chain Monte Carlo methods." Technical Report CRG-TR-93-1, University of Toronto, 1993.
[17] D. J. C. MacKay. "Introduction to Monte Carlo methods." In M. I. Jordan (ed.), Learning in Graphical Models, MIT Press, Cambridge, MA, 1999.
Partially Observable SDE Models for Image Sequence Recognition Tasks

Javier R. Movellan
Institute for Neural Computation
University of California San Diego

Paul Mineiro
Department of Cognitive Science
University of California San Diego

R. J. Williams
Department of Mathematics
University of California San Diego

Abstract

This paper explores a framework for recognition of image sequences using partially observable stochastic differential equation (SDE) models. Monte-Carlo importance sampling techniques are used for efficient estimation of sequence likelihoods and sequence likelihood gradients. Once the network dynamics are learned, we apply the SDE models to sequence recognition tasks in a manner similar to the way hidden Markov models (HMMs) are commonly applied. The potential advantage of SDEs over HMMs is the use of continuous state dynamics. We present encouraging results for a video sequence recognition task in which SDE models provided excellent performance when compared to hidden Markov models.

1 Introduction

This paper explores a framework for recognition of image sequences using partially observable stochastic differential equations (SDEs). In particular we use SDE models of low-power non-linear RC circuits with a significant thermal noise component. We call them diffusion networks. A diffusion network consists of a set of n nodes coupled via a vector of adaptive impedance parameters λ which are tuned to optimize the network's behavior. The temporal evolution of the n nodes defines a continuous stochastic process X that satisfies the following Ito SDE:

dX(t) = \mu(X(t), \lambda)\, dt + \sigma\, dB(t),   (1)
X(0) \sim \nu,   (2)

where ν represents the (stochastic) initial conditions and B is standard Brownian motion. The drift is defined by a non-linear RC charging equation

\mu_j(X(t), \lambda) = \frac{1}{\kappa_j}\left(\xi_j + \hat{X}_j(t) - \frac{1}{\rho_j} X_j(t)\right), \quad j = 1, \ldots, n,   (3)

where μ_j is the drift of unit j, i.e., the jth component of μ. Here X_j is the internal potential at node j, κ_j > 0 is the input capacitance, ρ_j the node resistance, ξ_j a constant input current to the unit, and X̂_j the net electrical current input to the node,

\hat{X}_j(t) = \sum_{m=1}^{n} w_{j,m}\, \varphi(X_m(t)), \quad j = 1, \ldots, n,   (4)

\varphi(x) = \frac{1}{1 + e^{-x}}, \quad x \in \mathbb{R},   (5)

where φ is the input-output amplification characteristic, and 1/w_{j,m} is the impedance between the output X_m and the node j.

Figure 1: An illustration of the differences between stochastic differential equation models (SDE), ordinary differential equation models (ODE) and hidden Markov models (HMM). In ODEs the state dynamics are continuous and deterministic. In SDEs the state dynamics are continuous and stochastic. In HMMs the state dynamics are discrete and probabilistic.

Intuition for equation (3) can be achieved by thinking of it as the limit of a discrete-time stochastic difference equation,

X(t + \Delta t) = X(t) + \mu(X(t), \lambda)\,\Delta t + \sigma \sqrt{\Delta t}\, Z(t),   (6)

where Z(t) is an n-dimensional vector of independent standard Gaussian random variables. For a fixed state at time t there are two forces controlling the change in activation: the drift, which is deterministic, and the dispersion, which is stochastic (see Figure 1). This results in a distribution of states at time t + Δt. As Δt goes to zero, the solution of the difference equation (6) converges to the diffusion process defined in (1). Figures 1 and 2 show the relationship between SDE models and other approaches in the neural network and the stochastic filtering literature.
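For intuition, the following is a minimal sketch of simulating a diffusion network by the forward-Euler (Euler-Maruyama) scheme in (6); the network size, weights, and step size are illustrative assumptions of ours, not values from the paper.

```python
import numpy as np

def phi(x):
    """Logistic input-output characteristic, Eq. (5)."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate_diffusion_net(W, xi, kappa, rho, sigma, x0, dt=0.01, steps=1000, rng=None):
    """Forward-Euler simulation of the diffusion network, Eqs. (1)-(6).

    W: (n, n) coupling weights; xi: constant input currents;
    kappa, rho: per-unit capacitances and resistances; sigma: dispersion.
    Returns the (steps+1, n) array of sampled states.
    """
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        drift = (xi + W @ phi(x) - x / rho) / kappa  # Eqs. (3)-(4)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(x))
        path.append(x.copy())
    return np.asarray(path)
```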
The main difference between ODE models, like standard recurrent neural networks, and SDE models is that the former have deterministic dynamics while the latter have probabilistic dynamics. The two approaches are similar in that the states are continuous. The main difference between HMMs and SDEs is that the former have discrete state dynamics while the latter have continuous state dynamics. The main similarity is that both are probabilistic. Kalman filters are linear SDE models. If the impedance matrix is symmetric and the network is given enough time to approximate stochastic equilibrium, diffusion networks behave like continuous Boltzmann machines (Ackley, Hinton & Sejnowski, 1985). If the network is discretized in state and time it becomes a standard HMM. Finally, if the dispersion constant is set to zero the network behaves like a deterministic recurrent neural network. In order to use SDE models we need a method for finding the likelihood and the likelihood gradient of observed sequences.

Figure 2: Relationship between diffusion networks and other approaches in the neural network and stochastic filtering literature (Kalman-Bucy filters: linear dynamics; Boltzmann machines: stochastic equilibrium; hidden Markov models: discrete space and time; recurrent neural networks: zero noise).

2 Observed sequence likelihoods

We regard the first d components of an SDE model as observable and denote them by O. The last n − d components are denoted by H and named unobservable or hidden. Hidden components are included for modeling non-Markovian dependencies in the observable components. Let Ω_o, Ω_h be the outcome spaces for the observable and hidden processes, and let Ω = Ω_o × Ω_h be the joint outcome space. Here each outcome ω is a continuous path ω : [0, T] → ℝⁿ. For each ω ∈ Ω, we write ω = (ω_o, ω_h), where ω_o represents the observable dimensions of the path and ω_h the hidden dimensions. Let Q^λ(A) represent the probability that a network with parameter λ generates paths in the set A, Q^λ_o(A_o) the probability that the observable components generate paths in A_o, and Q^λ_h(A_h) the probability that the hidden components generate paths in A_h.

To apply the familiar techniques of maximum likelihood and Bayesian estimation we use as reference the probability distribution of a diffusion network with zero drift, i.e., the paths generated by this network are Brownian motion scaled by σ. We denote such a reference distribution as R, and its observable and hidden components as R_o, R_h. Using Girsanov's theorem (Karatzas & Shreve, 1991, p. 303) we have that

L^{\lambda}_{o}(\omega_o) = \frac{dQ^{\lambda}_{o}}{dR_o}(\omega_o) = \int_{\Omega_h} L^{\lambda}_{o,h}(\omega_o, \omega_h)\, dR_h(\omega_h), \quad \omega_o \in \Omega_o,   (7)

where

L^{\lambda}_{o,h}(\omega) = \frac{dQ^{\lambda}}{dR}(\omega) = \exp\left\{\frac{1}{\sigma^2}\int_0^T \mu(\omega(t), \lambda) \cdot d\omega(t) - \frac{1}{2\sigma^2}\int_0^T |\mu(\omega(t), \lambda)|^2\, dt\right\}.   (8)

The first integral in (8) is an Ito stochastic integral; the second is a standard Lebesgue integral. The term L^λ_o is a Radon-Nikodym derivative that represents the probability density of Q^λ_o with respect to R_o. For a fixed path ω_o the term L^λ_o(ω_o) is a likelihood function of λ that can be used for maximum likelihood estimation. To obtain the likelihood gradient, we differentiate (7), which yields

\nabla_{\lambda} \log L^{\lambda}_{o}(\omega_o) = \int_{\Omega_h} L^{\lambda}_{h|o}(\omega_h \mid \omega_o)\, \nabla_{\lambda} \log L^{\lambda}_{o,h}(\omega_o, \omega_h)\, dR_h(\omega_h),   (9)

where L^λ_{h|o}(ω_h | ω_o) = L^λ_{o,h}(ω_o, ω_h) / L^λ_o(ω_o) is the conditional density of hidden paths given the observed path, the gradient ∇_λ log L^λ_{o,h} follows by differentiating (8) (equations (10)-(13)), and ε^λ is the joint innovation process

\varepsilon^{\lambda}(t, \omega) = \omega(t) - \omega(0) - \int_0^t \mu(\omega(u), \lambda)\, du.   (14)

2.1 Importance sampling

The likelihood of observed paths (7) and the gradient of the likelihood (9) require averaging with respect to the distribution of hidden paths R_h.
We estimate these averages using importance sampling in the space of sample paths. Instead of sampling from R_h we sample from a distribution that weights more heavily the regions where L^λ_{o,h} is large. Each sample is then weighted by the density of the sampling distribution with respect to R_h. This weighting function is commonly known as the importance function in the Monte-Carlo literature (Fishman, 1996, p. 257). In particular, for each observable path ω_o we let the sampling distribution S^λ_{ω_o} be the probability distribution generated by a diffusion network with parameter λ which has been forced to exhibit the path ω_o over the observable units. The approach is reminiscent of the technique of teacher forcing from deterministic neural networks. In practice, we generate i.i.d. sampled hidden paths {h^{(i)}}_{i=1}^{m} from S^λ_{ω_o} by numerically simulating a diffusion network with the observable units forced to exhibit the path ω_o; these hidden paths are then weighted by the density of S^λ_{ω_o} with respect to R_h, which acts as a Monte-Carlo importance function.

In practice we have obtained good results with m on the order of 20, i.e., we sample 20 hidden sequences per observed sequence. One interesting property of this approach is that the sampling distributions S^λ_{ω_o} change as learning progresses, since they depend on λ.

Figure 3 shows the results of a computer simulation in which a 2-unit network was trained to oscillate. We tried an oscillation pattern because of its relevance for the application we explore in a later section, which involves recognizing sequences of lip movements. The figure shows the "training" path and a couple of sample paths, one obtained with the σ parameter set to 0, and one with the parameter set to 0.5.

Figure 3: Training a 2-unit network to maximize the likelihood of a sinusoidal path. The top graph shows the training path. It consists of two sinusoids out of phase, each representing the activation of one of the two units in the network. The center graph shows a sample path obtained after training the network and setting σ = 0, i.e., no noise. The bottom graph shows a sample path obtained with σ = 0.5.
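The following is a minimal sketch of the resulting Monte-Carlo likelihood estimate, using a time-discretized version of the Girsanov ratio in (8); the helper names and the simple Euler discretization are assumptions of ours rather than the authors' code.

```python
import numpy as np

def log_girsanov_ratio(path, drift_fn, sigma, dt):
    """Discretized log of Eq. (8) along one joint path.

    path: (T+1, n) array of states; drift_fn maps a state to its drift mu.
    Approximates (1/sigma^2) int mu . dX  -  1/(2 sigma^2) int |mu|^2 dt.
    """
    mus = np.array([drift_fn(x) for x in path[:-1]])
    dx = np.diff(path, axis=0)
    ito_term = (mus * dx).sum() / sigma ** 2
    lebesgue_term = (mus ** 2).sum() * dt / (2.0 * sigma ** 2)
    return ito_term - lebesgue_term

def estimate_log_likelihood(obs_path, sample_hidden_path, drift_fn,
                            log_importance_weight, sigma, dt, m=20):
    """Importance-sampled estimate of log L_o(omega_o), cf. Eq. (7).

    sample_hidden_path() draws one hidden path from the teacher-forced network;
    log_importance_weight(h) returns log dR_h/dS for that sample.
    """
    logs = []
    for _ in range(m):
        h = sample_hidden_path()
        joint = np.concatenate([obs_path, h], axis=1)  # observable + hidden dims
        logs.append(log_girsanov_ratio(joint, drift_fn, sigma, dt)
                    + log_importance_weight(h))
    logs = np.array(logs)
    # log-mean-exp for numerical stability
    return logs.max() + np.log(np.mean(np.exp(logs - logs.max())))
```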
3 Recognizing video sequences

In this section we illustrate the use of SDE models on a sequence classification task of reasonable difficulty with a body of realistic data. We chose this task since we know of SDE models used for tracking problems but know of no SDE models used for sequence recognition tasks. The potential advantage of SDEs over more established approaches such as HMMs is that they enforce continuity constraints, an aspect that may be beneficial when the actual signals are better described using continuous state dynamics. We compared a diffusion network approach with classic hidden Markov model approaches. We used Tulips1 (Movellan, 1995), a database consisting of 96 movies of 9 male and 3 female undergraduate students from the Cognitive Science Department at the University of California, San Diego. For each student two sample utterances were taken for each of the digits "one" through "four". The database is available at http://cogsci.ucsd.edu. We compared the performance of diffusion networks and HMMs using two different image processing techniques (contours and contours plus intensity) in combination with two different recognition engines (HMMs and diffusion networks).

The image processing was performed by Luettin and colleagues (Luettin, 1997). They employ point density models, where each lip contour is represented by a set of points; in this case both the inner and outer lip contour are represented, corresponding to Luettin's double contour model. The dimensionality of the representation of the contours is reduced using principal component analysis. For the work presented here 10 principal components were used to approximate the contour, along with a scale parameter which measured the pixel distance between the mouth corners; associated with each of these 11 parameters was a corresponding "delta component", the left-hand temporal difference of the component (defined to be zero for the first frame). In this manner a total of 22 parameters were used to represent lip contour information for each still frame. These 22 parameters were represented using diffusion networks with 22 observation units, one per parameter value. We also tested the performance of a representation that used intensity information in addition to contour shape information. This approach used 62 parameters, which were represented using diffusion networks with 62 observation units.

Approach                                                  Correct Generalization
Best HMM, shape information only                          82.3%
Best diffusion network, shape information only            85.4%
Untrained human subjects                                  89.9%
Best HMM, shape and intensity information                 90.6%
Best diffusion network, shape and intensity information   91.7%
Trained human subjects                                    95.5%

Table 1: Average generalization performance on the Tulips1 database. Shown in order are: the performance of the best performing HMM from (Luettin et al., 1996), which uses only shape information; the best diffusion network obtained using only shape information; the performance of untrained human subjects (Movellan, 1995); the HMM from Luettin's thesis (Luettin, 1997), which uses both shape and intensity information; the best diffusion network obtained using both shape and intensity information; and the performance of trained human lipreaders (Movellan, 1995).

We independently trained 4 diffusion networks to approximate the distributions of lip-contour trajectories of each of the four words to be recognized, i.e., the first network was trained with examples of the word "one", and the last network with examples of the word "four". Each network had the same number of nodes, and the drift of each network was given by (3) with κ_j = 1 and 1/ρ_j = 0 for all units, and with ξ_j being part of the adaptive vector λ. Thus λ = (ξ_1, …, ξ_n, w_{1,1}, w_{1,2}, …, w_{n,n})′. The number of hidden units was varied from one to 5. We obtained optimal results with 4 hidden units. The initial state of the hidden units was set to (1, …, 1) with probability 1, and σ was set to 1 for all networks. The diffusion network dynamics were simulated using a forward-Euler technique, i.e., equation (1) is approximated in discrete time using (6). In our simulations we set Δt = 1/30 seconds, the time between video frame samples. Each diffusion network was trained with examples of one of the 4 digits using the cost function

\Phi(\lambda) = \sum_i \log L^{\lambda}_{o}(y^{(i)}) - \frac{1}{2}\alpha |\lambda|^2,   (16)

where {y^{(i)}} are samples from the desired empirical distribution P_o and α is the strength of a Gaussian prior on the network parameters. Best results were obtained with diffusion networks with 4 hidden units. The log-likelihood gradients were estimated using the importance sampling approach with m = 20, i.e., we generated 20 hidden sample paths per observed path.
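At recognition time, each test sequence can then be assigned to the word whose trained network gives it the highest estimated likelihood. A minimal sketch follows, reusing the hypothetical estimate_log_likelihood helper from above; the per-word model container is our own illustrative structure, not the authors' code.

```python
def classify_sequence(obs_path, word_models):
    """Pick the word whose diffusion network best explains the sequence.

    word_models: dict mapping a word (e.g. "one") to a callable that returns
    the estimated log-likelihood of obs_path under that word's network.
    """
    scores = {word: loglik(obs_path) for word, loglik in word_models.items()}
    return max(scores, key=scores.get)
```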
With this number of samples, training took about 10 times longer with diffusion networks than with HMMs. At test time, computation of the likelihood estimates was very fast and could have been done in real time using a fast Pentium II. The generalization performance was estimated using a jackknife (leave-one-out) technique: we trained on all subjects but one, which was used for testing. The process was repeated leaving a different subject out every time. Results are shown in Table 1. The table includes HMM results reported by Luettin (1997), who tried a variety of HMM architectures and reported the best results obtained with them. The only difference between Luettin's approach and our approach is the recognition engine, which was a bank of HMMs in his case and a bank of diffusion networks in our case. If anything we were at a disadvantage, since the image representations mentioned above were optimized by Luettin to work best with HMMs. In all cases the best diffusion networks outperformed the best HMMs reported in the literature using exactly the same visual preprocessing. The difference in performance was not large; however, obtaining even a 1% increment in performance on this database is very difficult.

4 Discussion

While we presented results for a video sequence recognition task, the same framework can be used for tasks such as sequence recognition, object tracking and sequence generation. Our work was inspired by the rich literature on continuous stochastic filtering and stochastic neural networks. The idea was to combine the versatility of recurrent neural networks and the well-known advantages of stochastic modeling approaches. The continuous-time nature of the networks is convenient for data with dropouts or variable sample rates, since the models we use define all the finite-dimensional distributions. The continuous-state representation is well suited to problems involving inference about continuous unobservable quantities, as in visual tracking tasks. Since these networks enforce continuity constraints on the observable paths, they may not have the well-known problems encountered when HMMs are used as generative models of continuous sequences.

We have presented encouraging results on a realistic sequence recognition task. However, more work needs to be done, since the database we used is relatively small. At this point the main disadvantage of diffusion networks relative to conventional hidden Markov models is training speed. The diffusion networks used here were approximately 10 times slower to train than HMMs. Fortunately the Monte Carlo approximations employed herein, which represent the bulk of the computational burden, lend themselves to parallel and hardware implementations. Moreover, once a network is trained, the computation of the density functions needed in recognition tasks can be done in real time. We are exploring applications of diffusion networks to stochastic filtering problems (e.g., contour tracking) and sequence generation problems, not just sequence recognition problems. Our work shows that diffusion networks may be a feasible alternative to HMMs for problems in which state continuity is advantageous. The results obtained for the visual speech recognition task are encouraging, and reinforce the possibility that diffusion networks may become a versatile tool for a very wide variety of continuous signal processing tasks.

References

Ackley, D. H., Hinton, G. E., & Sejnowski, T. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9(2), 147-169.
Fishman, G. S. (1996). Monte Carlo Sampling: Concepts, Algorithms and Applications. New York: Springer-Verlag.
Karatzas, I. & Shreve, S. (1991). Brownian Motion and Stochastic Calculus. New York: Springer.
Luettin, J. (1997). Visual Speech and Speaker Recognition. PhD thesis, University of Sheffield.
Movellan, J. (1995). Visual speech recognition with stochastic neural networks. In G. Tesauro, D. Touretzky, & T. Leen (Eds.), Advances in Neural Information Processing Systems, volume 7. MIT Press.
Oksendal, B. (1992). Stochastic Differential Equations. Berlin: Springer-Verlag.
Hierarchical Memory-Based Reinforcement Learning

Natalia Hernandez-Gardiol
Artificial Intelligence Lab
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]

Sridhar Mahadevan
Department of Computer Science
Michigan State University
East Lansing, MI 48824
[email protected]

Abstract

A key challenge for reinforcement learning is scaling up to large partially observable domains. In this paper, we show how a hierarchy of behaviors can be used to create and select among variable-length short-term memories appropriate for a task. At higher levels in the hierarchy, the agent abstracts over lower-level details and looks back over a variable number of high-level decisions in time. We formalize this idea in a framework called Hierarchical Suffix Memory (HSM). HSM uses a memory-based SMDP learning method to rapidly propagate delayed reward across long decision sequences. We describe a detailed experimental study comparing memory vs. hierarchy using the HSM framework on a realistic corridor navigation task.

1 Introduction

Reinforcement learning encompasses a class of machine learning problems in which an agent learns from experience as it interacts with its environment. One fundamental challenge faced by reinforcement learning agents in real-world problems is that the state space can be very large, and consequently there may be a long delay before reward is received. Previous work has addressed this issue by breaking down a large task into a hierarchy of subtasks or abstract behaviors [1, 3, 5]. Another difficult issue is the problem of perceptual aliasing: different real-world states can often generate the same observations. One strategy to deal with perceptual aliasing is to add memory about past percepts. Short-term memory consisting of a linear (or tree-based) sequence of primitive actions and observations has been shown to be a useful strategy [2]. However, considering short-term memory at a flat, uniform resolution of primitive actions would likely scale poorly to tasks with long decision sequences. Thus, just as spatio-temporal abstraction of the state space improves scaling in completely observable environments, for large partially observable environments a similar benefit may result if we consider the space of past experience at variable resolution. Given a task, we want a hierarchical strategy for rapidly bringing to bear past experience that is appropriate to the grain-size of the decisions being considered.

Figure 1: This figure illustrates memory-based decision making at two levels in the hierarchy of a navigation task (abstraction levels: navigation, corridor traversal, and primitive actions; the navigation-level sketch shows corners, T-junctions, and dead ends). At each level, each decision point (shown with a star) examines its past experience to find states with similar history (shown with shadows). At the abstract (navigation) level, observations and decisions occur at intersections. At the lower (corridor-traversal) level, observations and decisions occur within the corridor.

In this paper, we show that considering past experience at a variable, task-appropriate resolution can speed up learning and greatly improve performance under perceptual aliasing. The resulting approach, which we call Hierarchical Suffix Memory (HSM), is a general technique for solving large, perceptually aliased tasks.
2 Hierarchical Suffix Memory

By employing short-term memory over abstract decisions, each of which involves a hierarchy of behaviors, we can apply memory at a more informative level of abstraction. An important side-effect is that the agent can look at a decision point many steps back in time while ignoring the exact sequence of low-level observations and actions that transpired. Figure 1 illustrates the HSM framework.

The problem of learning under perceptual aliasing can be viewed as discovering an informative sequence of past actions and observations (that is, a history suffix) for a given world state that enables an agent to act optimally in the world. We can think of each situation in which an agent must choose an action (a choice point) as being labeled with a pair [σ, l]: l refers to the abstraction level and σ refers to the history suffix. In the completely observable case, σ has a length of one, and decisions are made based on the current observation. In the partially observable case, we must additionally consider past history when making decisions. In this case, the suffix σ is some sequence of past observations and actions that must be learned. This idea of representing memory as a variable-length suffix derives from work on learning approximations of probabilistic suffix automata [2, 4].

Here is the general HSM procedure (including model-free and model-based updates):

1. Given an abstraction level l and choice point s within l: for each potential future decision d, examine the history at level l to find a set of past choice points that have executed d and whose incoming (suffix) history most closely matches that of the current point. Call this set of instances the "voting set" for decision d.

2. Choose d_t as the decision with the highest average discounted sum of reward over the voting set. Occasionally, choose d_t using an exploration strategy. Here, t is the event counter of the current choice point at level l.

3. Execute the decision d_t and record: o_t, the resulting observation; r_t, the reward received; and n_t, the duration of abstract action d_t (measured by the number of primitive environment transitions executed by the abstract action). Note that for every environment transition from state s_{i-1} to state s_i with reward r_i and discount γ, we accumulate any reward and update the discount factor:

r_t \leftarrow r_t + \gamma_t r_i, \qquad \gamma_t \leftarrow \gamma\, \gamma_t.

4. Update the Q-value for the current decision point and for each instance in the voting set using the decision, reward, and duration values recorded along with the instance.

Model-free: use an SMDP Q-learning update rule (β is the learning rate):

Q_l(s_t, d_t) \leftarrow (1 - \beta)\, Q_l(s_t, d_t) + \beta \left( r_t + \gamma_t \max_d Q_l(s_{t+n_t}, d) \right).

Model-based: if a state-transition model is being used, a sweep of value iteration can be executed¹. Let the state corresponding to the decision point at time t be represented by the suffix s:

Q_l(s, d_t) \leftarrow R_l(s, d_t) + \sum_{s'} F_l(s' \mid s, d_t)\, V_l(s')\, \gamma^{N_{d_t}},

where R_l(s, d_t) is the estimated immediate reward from executing decision d_t from the choice point [s, l]; F_l(s' | s, d_t) is the estimated probability that the agent arrives in [s', l] given that it executed d_t from [s, l]; V_l(s') is the utility of the situation [s', l]; and N_{d_t} is the average duration of the transition from [s, l] to [s', l] under abstract action d_t.

¹In this context, "state" is represented by the history suffix. That is, an instance is in a "state" if the instance's incoming history matches the suffix representing the state. In this case, the voting set is exactly the set of instances in the same state as the current choice point s_t.
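To ground step 4, here is a minimal sketch of the model-free SMDP Q-learning update with the reward and discount accumulation above; the table-based representation and default parameter values are illustrative assumptions, not the authors' code.

```python
from collections import defaultdict

class SMDPQLearner:
    """Tabular SMDP Q-learning for one abstraction level (a sketch)."""

    def __init__(self, beta=0.1, gamma=0.9):
        self.q = defaultdict(float)   # keyed by (suffix, decision)
        self.beta, self.gamma = beta, gamma

    def accumulate(self, r_t, gamma_t, r_i):
        """Fold one primitive transition's reward into the abstract decision:
        r_t <- r_t + gamma_t * r_i;  gamma_t <- gamma * gamma_t."""
        return r_t + gamma_t * r_i, self.gamma * gamma_t

    def update(self, s_t, d_t, r_t, gamma_t, s_next, decisions):
        """Q(s_t,d_t) <- (1-beta) Q(s_t,d_t) + beta (r_t + gamma_t max_d Q(s',d))."""
        best_next = max(self.q[(s_next, d)] for d in decisions)
        key = (s_t, d_t)
        self.q[key] = (1 - self.beta) * self.q[key] + \
                      self.beta * (r_t + gamma_t * best_next)
```

In HSM, the same update would also be applied to every instance in the voting set, using the decision, reward, and duration values recorded with each instance.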
HSM requires a technique for short-term memory. We implemented the Nearest Sequence Memory (NSM) and Utile Suffix Memory (USM) algorithms proposed by McCallum [2]. NSM records each of its raw experiences as a linear chain. To choose the next action, the agent evaluates the outcomes of the k "nearest" neighbors in the experience chain. NSM evaluates the closeness between two states according to the match length of the suffix chain preceding the states. The chain can either be grown indefinitely, or old experiences can be replaced after the chain reaches a maximum length. With NSM, a model-free learning method, HSM uses an SMDP Q-learning rule as described above.

USM also records experience in a linear time chain. However, instead of attempting to choose actions based on a greedy history match, USM tries to explicitly determine how much memory is useful for predicting reward. To do this, the agent builds a tree-like structure for state representation online, selectively adding depth to the tree if the additional history distinction helps to predict reward. With USM, which learns a model, HSM updates the Q-values by doing one sweep of value iteration with the leaves of the tree as states.

Finally, to implement the hierarchy of behaviors, in principle any hierarchical reinforcement learning method may be used. For our implementation, we used the Hierarchy of Abstract Machines (HAM) framework proposed by Parr and Russell [3]. When executed, an abstract machine executes a partial policy and returns control to the caller upon termination. The HAM architecture uses a Q-learning rule modified for SMDPs.
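The suffix matching that NSM uses above can be sketched as follows; the flat (observation, action) experience-chain encoding is an assumption of ours for illustration.

```python
def match_length(history, i, j):
    """Length of the identical (observation, action) suffix ending at
    positions i and j in the experience chain."""
    n = 0
    while i - n >= 0 and j - n >= 0 and history[i - n] == history[j - n]:
        n += 1
    return n

def nearest_neighbors(history, t, k):
    """Indices of the k past experiences whose preceding suffix best
    matches the suffix ending at the current position t."""
    scored = [(match_length(history, i, t), i) for i in range(t)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]
```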
Figure 2: The corridor environment in the Nomad 200 robot simulator. The goal is the 4-way junction. The robot is shown at the middle T-junction. The robot is equipped with 16 short-range infrared and long-range sonar sensors. The other figures in the environment are obstacles around which the robot must maneuver.

3 The Navigation Task

To test the HSM framework, we devised a navigation task in a simulated corridor environment (see Figure 2). The task is for the robot to find its way from the start, the center T-junction, to the goal, the four-way junction. The robot receives a reward at the goal intersection and a small negative reward for each primitive step taken. Our primary testbed was a simulated agent using a Nomad 200 robot simulator. This simulated robot is equipped with 20 bumper and 16 sonar and infrared sensors, arranged radially. The dynamics of the simulator are not "grid world" dynamics: the Nomad 200 simulator represents continuous, noisy sensor input and the occasional unreliability of actuators. The environment presents significant perceptual ambiguity. Additionally, sensor readings can be noisy; even if the agent is at the goal or an intersection, it might not "see" it. Note the size of the robot relative to the environment in Figure 2.

What makes the task difficult are the several activities that must be executed concurrently. Conceptually, there are two levels to our navigation problem. At the top, most abstract, level is the root task of navigating to the goal. At the lower level is the task of physically traversing the corridors, avoiding obstacles, maintaining alignment with the walls, etc.

4 Implementation of the Learning Agents

In our experiments, we compared several learning agents: a basic HAM agent, four agents using HSM (each using a different short-term memory technique), and a "flat" NSM agent.

To build a set of behaviors for hallway navigation, we used a three-level hierarchy. The top abstract level is basically a choice state for choosing a hallway navigation direction (see Figure 3a). In each of the four nominal directions (front, back, left, right), the agent can make one of three observations: wall, open, or unknown. The agent must learn to choose among the four abstract machines to reach the next intersection.

Figure 3: Hierarchical structure of behaviors for hallway navigation. Figure (a) shows the most abstract level, responsible for navigating in the environment. Figures (b) and (c) show two implementations of the hall-traversal machines. The machine in Figure (b) is reactive, and Figure (c) is a machine with a choice point.

This top-level machine has control initially, and it regains control at intersections. The second level of the hierarchy contains the machines for traversing the hallway. The traversal behavior is shown in Figure 3b. Each of the four machines at this level executes a reactive strategy for traversing a corridor. Finally, the third level of the hierarchy implements the follow-wall and avoid-obstacle strategies using primitive actions. Both the avoid-obstacle and the follow-wall strategies were themselves trained previously using Q-learning, to exploit the power of reuse in the hierarchical framework.

The HAM agent uses a three-level behavior hierarchy as described above. There is a single choice state, at the top level, and the agent learns to coordinate its choices by keeping a table of Q-values. The Q-value table is indexed by the current percepts and the chosen action (one of four abstract machines). The HAM agent uses a discount of 0.9 and a learning rate of 0.1. Exploration is done with a simple epsilon-greedy strategy.

The first pair of HSM agents use the same behavior hierarchy as the HAM agent. However, they use short-term memory at the most abstract level to learn a strategy for navigating the corridor. The first of these agents uses NSM at the top level with a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1. The second agent uses USM at the top level with a discount of 0.95. The performance of these top-level memory agents was studied as a control against the more complex multi-level memory agents described next.

The next pair of HSM agents use short-term memory both at the abstract navigation level and at the intermediate level. The behavior decomposition at the abstract navigation level is the same as for the previous agents; however, the traversal behavior is in turn composed of machines that must make a decision based on short-term memory. Each of the machines at the traversal level uses short-term memory to learn to coordinate a strategy of behaviors for traversing a corridor. The memory-based version of the traversal machine is shown in Figure 3c. The first of these agents uses NSM as the short-term memory technique at both levels of the hierarchy. It uses a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1.
The first pair of HSM agents use the same behavior hierarchy as the HAM agent. However, they use short-term memory at the most abstract level to learn a strategy for navigating the corridor. The first of these agents uses NSM at the top level with a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1. The second agent uses USM at the top level with a discount of 0.95. The performance of these top-level memory agents was studied as a control against the more complex multi-level memory agents described next.

The next pair of HSM agents use short-term memory both at the abstract navigation level and at the intermediate level. The behavior decomposition at the abstract navigation level is the same as for the previous agents; however, the traversal behavior is in turn composed of machines that must make a decision based on short-term memory. Each of the machines at the traversal level uses short-term memory to learn to coordinate a strategy of behaviors for traversing a corridor. The memory-based version of the traversal machine is shown in Figure 3c. The first of these agents uses NSM as the short-term memory technique at both levels of the hierarchy. It uses a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1. The second agent uses USM as the short-term memory technique at the top level with a discount of 0.95. At the intermediate level, it uses NSM with the same learning parameters as the preceding agent. Exploration is done with a simple epsilon-greedy strategy in all cases.

Finally, we study the behavior of a "flat" NSM agent. The flat agent must keep track of the following perceptual data: first, it needs the same perceptual information as the top-level HAM (so it can identify the goal); second, it needs the additional perceptual data for aligning to walls and for avoiding obstacles: whether it was bumped, and the angle to the wall (binned into 4 groups of 45° each). The flat agent chooses among four primitive actions: go-forward, veer-left, veer-right, and back-up. Not only must it learn to make it to the goal, it must simultaneously learn to align itself to walls and avoid obstacles. The NSM agent uses a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1. Exploration is done with a simple epsilon-greedy strategy.

5 Experimental Results

In Figure 4, we see the learning performance of each agent in the navigation task. The graphs show the performance advantage of both multi-level HSM agents over the other agents. In particular, we find that the flat memory-based agent does considerably worse than the other three, as expected. The flat agent must carry around the perceptual data to perform both high- and low-level behaviors. From the point of view of navigation, this results in long strings of uninformative corridor states between the more informative intersection states. Since it takes such an agent longer to discover patterns in its experience, it never quite learns to navigate successfully to the goal.

Next, both multi-level memory-based hierarchical agents outperform the HAM agent. The HAM agent does better at navigation than the flat agent since it abstracts away the perceptually aliased corridor states. However, it is unable to distinguish between all of the intersections. Without the ability to tell which T-junctions lead to the goal, and which to a dead end, the HAM agent does not perform as well. The multi-level HSM agents also outperform the single-level ones. The multi-level agents can tune their traversing strategy to the characteristics of the cluttered hallway by using short-term memory at the intermediate level. Finally, although it initially does worse, the multi-level HSM agent with USM soon outperforms the multi-level HSM agent with NSM. This is because the USM algorithm forces the agent to learn a state representation that uses only as much incoming history as needed to predict reward. That is, it tries to learn the right history suffix for each situation rather than approximating the suffix by simply matching greedily on incoming history. Learning such a representation takes some time, but, once learned, it produces better performance.

6 Conclusions and Future Work

In this paper we described a framework for solving large perceptually aliased tasks called Hierarchical Suffix Memory (HSM). This approach uses a hierarchical behavioral structure to index into past memory at multiple levels of resolution. Organizing past experience hierarchically scales better to problems with long decision sequences.
We presented an experiment comparing six different learning methods, showing that hierarchical short-term memory produces overall the best performance in a perceptually aliased corridor navigation task.

Figure 4: Learning performance in the navigation task. Each curve is averaged over eight trials for each agent. (Left panel: the multi-level memory agents (USM+HAM, NSM+HAM), the no-memory HAM agent, and the flat-memory NSM agent. Right panel: the multi-level memory agents against the top-level-only memory agents. The horizontal axis is the number of primitive steps, up to 40,000.)

One key limitation of the current HSM framework is that each abstraction level examines only the history at its own level. Allowing interaction between the memory streams at each level of the hierarchy would be beneficial. Consider a navigation task in which the decision at a given intersection depends on an observation seen while traversing the corridor. In this case, the abstract level should have the ability to "zoom in" to inspect a particular low-level experience in greater detail. We expect that pursuing general frameworks such as HSM to manage past experience at variable granularity will lead to strategies for control that are able to gracefully scale to large, partially observable problems.

Acknowledgements

This research was carried out while the first author was at the Department of Computer Science and Engineering, Michigan State University. This research is supported in part by a KDI grant from the National Science Foundation (ECS9873531).

References

[1] Thomas G. Dietterich. The MAXQ method for hierarchical reinforcement learning. In Autonomous Robots Journal, Special Issue on Learning in Autonomous Robots, 1998.
[2] Andrew K. McCallum. Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester, 1995.
[3] Ron Parr. Hierarchical Control and Learning for Markov Decision Processes. PhD thesis, University of California at Berkeley, 1998.
[4] Dana Ron, Yoram Singer, and Naftali Tishby. The power of amnesia: Learning probabilistic automata with variable memory length. Machine Learning, 25:117-149, 1996.
[5] R. Sutton, D. Precup, and S. Singh. Intra-option learning about temporally abstract actions. In Proceedings of the 15th International Conference on Machine Learning, pages 556-564, 1998.
Feature Correspondence: A Markov Chain Monte Carlo Approach

Frank Dellaert, Steven M. Seitz, Sebastian Thrun, and Charles Thorpe
Department of Computer Science & Robotics Institute
Carnegie Mellon University, Pittsburgh, PA 15213
{dellaert,seitz,thrun,cet}@cs.cmu.edu

Abstract

When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one "best" correspondence, and instead consider the distribution over all possible correspondences. We treat both a fully Bayesian approach that yields a posterior distribution, and a MAP approach that makes use of EM to maximize this posterior. We show how Markov chain Monte Carlo methods can be used to implement these techniques in practice, and present experimental results on real data.

1 Introduction

Structure from motion (SFM) addresses the problem of simultaneously recovering camera pose and a three-dimensional model from a collection of images. This problem has received considerable attention in the computer vision community [1, 2, 3]. Methods that can robustly reconstruct the 3D structure of environments have a potentially large impact in many areas of societal importance, such as architecture, entertainment, space exploration and mobile robotics.

A fundamental problem in SFM is data association, i.e., the question of determining correspondence between features observed in different images. This problem has been referred to as the most difficult part of structure recovery [4], and is particularly challenging if the images have been taken from widely separated viewpoints. Virtually all existing approaches assume that either the correspondence is known a priori, or that features can be tracked from frame to frame [1, 2]. Methods based on the robust recovery of epipolar geometry [3, 4] can cope with larger inter-frame displacements, but still depend on the ability to identify a set of initial correspondences to seed the robust matching process.

In this paper, we are interested in cases where individual camera images are recorded from vastly different viewpoints, which renders existing SFM approaches inapplicable. Traditional approaches for establishing correspondence between sets of 2D features [5, 6, 7] are of limited use in this domain, as the projected 3D structure can look very different in each image. This paper proposes a Bayesian approach to data association. Instead of considering a single correspondence only (which we conjecture to be brittle), our approach considers whole distributions over correspondences. As a result, our approach is more robust, and from a Bayesian perspective it is also sound. Unfortunately, no closed-form solution exists for calculating these distributions conditioned on the camera images. Therefore, we propose to use the Metropolis-Hastings algorithm, a popular Markov chain Monte Carlo (MCMC) method, to sample from the posterior. In particular, we propose two different algorithms. The first method, discussed in Section 2, is mathematically more powerful but computationally expensive. It uses MCMC to sample from the joint distribution over both correspondences and three-dimensional scene structure.
While this approach is mathematically elegant from a Bayesian point of view, we have so far only been able to obtain results for simple, artificial domains. Thus, to cope with large-scale data sets, we propose in Section 3 a maximum a posteriori (MAP) approach using the Expectation-Maximization (EM) algorithm to maximize the posterior. Here we use MCMC sampling only for the data association problem. Simulated annealing is used to reduce the danger of getting stuck in local minima. Experimental results obtained in realistic domains and presented in Section 4 suggest that this approach works well in the general SFM case, and that it scales favorably to complex computer vision problems.

The idea of using MCMC for data association has been used before by [8] in the context of a traffic surveillance application. However, their approach is not directly applicable to SFM, as the computer vision domain is characterized by a large number of local minima. Our paper goes beyond theirs in two important aspects: first, we develop a framework for MCMC sampling over both the data association and the model, and second, we apply annealing to smooth the posterior so as to reduce the chance of getting stuck in local minima. In a previous paper [9] we have discussed the idea of using EM for SFM, but without the unifying framework presented below.

2 A Fully Bayesian Approach using MCMC

Below we derive the general approach for MCMC sampling from the joint posterior over data association and models. We only show results for a simple example from pose estimation, as this approach is computationally very demanding. An EM approach based on the general principles described here, but applicable to larger-scale problems, will be described in the next section.

2.1 Structure from Motion

The structure from motion problem is this: given a set of images of a scene, taken from different viewpoints, recover the 3D structure of the scene along with the camera parameters. In the feature-based approach to SFM, we consider the situation in which a set of N 3D features x_j is viewed by a set of m cameras with parameters m_i. As input data we are given the set of 2D measurements u_ik in the images, where k ∈ {1..K_i} and K_i is the number of measurements in the i-th image. To model correspondence information, we introduce for each measurement u_ik the indicator variable j_ik, indicating that u_ik is a measurement of the j_ik-th feature x_{j_ik}. The choice of feature type and camera model determines the measurement function h(m_i, x_j), predicting the measurement u_ik given m_i and x_j (with j = j_ik): u_ik = h(m_i, x_j) + n, where n is the measurement noise. Without loss of generality, let us consider the case in which the features x_j are 3D points and the measurements u_ik are points in the 2D image. In this case the measurement function can be written as a 3D rigid displacement followed by a projection:

    u_ik = Φ(R_i x_j + t_i)    (1)

where R_i and t_i are the rotation matrix and translation of the i-th camera, respectively, and Φ : ℝ³ → ℝ² is the camera projection model. A small sketch of this measurement function is given below.
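The following is a minimal sketch of the measurement function (1). For concreteness it assumes an orthographic projection Φ that simply drops the depth coordinate (the experiments in Section 4 also use an orthographic model); the function names are ours.

```python
import numpy as np

def project_orthographic(p):
    """Phi: R^3 -> R^2. Orthographic projection drops the depth coordinate."""
    return p[:2]

def measure(R_i, t_i, x_j, sigma=0.0):
    """Measurement function h(m_i, x_j): rigid displacement followed by a
    projection, plus optional isotropic Gaussian noise n."""
    noise = np.random.normal(0.0, sigma, size=2) if sigma > 0 else 0.0
    return project_orthographic(R_i @ x_j + t_i) + noise
```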
2.2 Deriving the Posterior

Whereas previous methods single out a single "best" correspondence across images, in a Bayesian framework we are interested in characterizing our knowledge about the unknowns conditioned on the data only, averaging over all possible correspondences. Thus, we are interested in the posterior distribution P(Θ|U), where Θ collects the unknown model parameters m_i and x_j. In the case of unknown correspondence, we need to sum over all possible assignments J = {j_ik} to obtain

    P(Θ|U) = Σ_J P(J, Θ|U) ∝ P(Θ) Σ_J P(U|J, Θ) P(J|Θ)    (2)

where we have applied Bayes law and the chain rule. Let us assume for now that there are no occlusions or spurious measurements, so that K_i = N and J is a set of m permutations J_i of the indices 1..N. Then, assuming i.i.d. normally distributed noise on the measurements, each term in (2) can be calculated using

    P(U|J, Θ) P(J|Θ) = (1/N!)^m ∏_{i=1}^m ∏_{k=1}^N N(u_ik; h(m_i, x_{j_ik}), σ)    (3)

if each J_i is a permutation, and 0 otherwise. Here N(·; μ, σ) denotes the normal distribution with mean μ and standard deviation σ. The first identity in (3) holds if we assume each of the N! possible permutations to be equally likely a priori.

2.3 Sampling from the Posterior using MCMC

Unfortunately, direct computation of the total posterior distribution P(Θ|U) in (2) is intractable in general, because the number of correspondence assignments J is combinatorial in the number of features and images. As a solution to this computational challenge we propose to instead sample from P(Θ|U). Sampling directly from P(Θ|U) is equally difficult, but if we can obtain a sample {(Θ^(r), J^(r))} from the joint distribution P(Θ, J|U), we can simply discard the correspondence part J^(r) to obtain a sample {Θ^(r)} from the marginal distribution P(Θ|U). To sample from the joint distribution P(Θ, J|U) we propose to use MCMC sampling, in particular the Metropolis-Hastings algorithm [10]. This method involves simulating a Markov chain whose equilibrium distribution is the desired posterior distribution P(Θ, J|U). Defining X ≜ (J, Θ), the algorithm is:

1. Start with a random initial state X^(0).
2. Propose a new state X' using a chosen proposal density Q(X'; X^(r)).
3. Compute the ratio

    a = [P(X'|U) Q(X^(r); X')] / [P(X^(r)|U) Q(X'; X^(r))]    (4)

4. Accept X' as X^(r+1) with probability min(a, 1); otherwise X^(r+1) = X^(r).

The sequence of tuples (Θ^(r), J^(r)) thus generated will be a sample from P(Θ, J|U), if the sampler is run sufficiently long. To calculate the acceptance ratio a, we assume that the noise on the feature measurements is normally distributed and isotropic. Using Bayes law and eq. (3), we can then rewrite a from (4) as

    a = [∏_{i=1}^m ∏_{k=1}^N N(u_ik; h(m'_i, x'_{j'_ik}), σ)] / [∏_{i=1}^m ∏_{k=1}^N N(u_ik; h(m_i^(r), x_{j_ik}^(r)), σ)] · Q(X^(r); X') / Q(X'; X^(r))

Simplifying the notation by defining h_ik^(r) ≜ h(m_i^(r), x_{j_ik}^(r)), we obtain

    a = [Q(X^(r); X') / Q(X'; X^(r))] exp[(1/(2σ²)) Σ_ik (‖u_ik − h_ik^(r)‖² − ‖u_ik − h'_ik‖²)]    (5)

The proposal density Q(·;·) is application dependent, and an example is given below.
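A minimal sketch of the sampler in steps 1-4 above, specialized to the acceptance ratio (5) with a symmetric proposal (so the Q-ratio is 1, as in the example of Section 2.4). The state encoding, `propose`, and `predict` are our own simplified assumptions.

```python
import numpy as np

def metropolis_hastings(u, state0, propose, predict, sigma, n_steps):
    """Sample from P(Theta, J | U) with a symmetric proposal density.

    u:       measurements u_ik, array of shape (num_measurements, 2)
    propose: state -> candidate state (symmetric, so the Q-ratio is 1)
    predict: state -> predicted measurements h_ik, same shape as u
    """
    state, samples = state0, []
    h = predict(state)
    for _ in range(n_steps):
        cand = propose(state)
        h_cand = predict(cand)
        # Logarithm of the acceptance ratio (5):
        log_a = (np.sum((u - h) ** 2) - np.sum((u - h_cand) ** 2)) / (2 * sigma ** 2)
        if np.log(np.random.rand()) < min(0.0, log_a):
            state, h = cand, h_cand  # accept the candidate
        samples.append(state)
    return samples
```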
2.4 Example: A 2D Pose Estimation Problem

To illustrate this method, we present a simple example from pose estimation. Assume we have a 2D model shape, given in the form of a set of 2D points x_j, as shown in Figure 1. We observe an image of this shape which has undergone a rotation θ to be estimated. This rotated shape is shown at right in the figure, along with 6 noisy measurements u_k on the feature points.

Figure 1: Left: A 2D model shape, defined by the 6 feature points x_j. Right: Transformed shape (by a simple rotation) and 6 noisy measurements u_k of the transformed features. The true rotation is 70 degrees, noise is zero-mean Gaussian.

In Figure 2 at left we show the posterior distribution over the rotation parameter θ, given the measurements from Figure 1 and with known correspondence. In this case, the posterior is unimodal. In the case of unknown correspondence, the posterior conditioned on the data alone is shown at right in Figure 2 and is a mixture of 6! = 720 functions of the form (3), with 6 equally likely modes induced by the symmetry of the model shape.

Figure 2: (Left) The posterior distribution over rotation θ with known correspondence, and (Right) with unknown correspondence, a mixture with 720 components.

In order to perform MCMC sampling, we implement the proposal step by choosing randomly between two strategies. (a) In a "small perturbation" we keep the correspondence assignment J but add a small amount of noise to θ. This serves to explore the values of θ within a mode of the posterior probability. (b) In a "long jump", we completely randomize both θ and J. This provides a way to jump between probability modes. Note that Q(X^(r); X') / Q(X'; X^(r)) = 1 for this proposal density. The result of the sampling procedure is shown as a histogram of the rotation parameter θ in Figure 3. The histogram is a non-parametric approximation to the analytic posterior shown in Figure 2. The figure shows the results of running a sampler for 100,000 steps, the first 1000 of which were discarded as a transient. Note that even for this simple example, there is still considerable correlation in the sample of 100,000 states, as evidenced by the uneven mass in each of the 6 analytically predicted modes.

Figure 3: Histogram of the values of θ obtained in one MCMC run, for the situation in Figure 1. The MCMC sampler was run for 100,000 steps.

3 Maximum a Posteriori Estimation using MCEM

As illustrated above, sampling from the joint probability over assignments J and parameters Θ using MCMC can be very expensive. However, if only a maximum a posteriori (MAP) estimate is needed, sampling over the joint space can be avoided by means of the EM algorithm. To obtain the MAP estimate, we need to maximize P(Θ|U) as given by (2). This is intractable in general because of the combinatorial number of terms. The EM algorithm provides a tractable alternative to maximizing P(Θ|U), using the correspondence J as a hidden variable [11]. It iterates over:

E-step: Calculate the expected log-posterior Q^t(Θ):

    Q^t(Θ) ≜ E[log P(Θ|U, J) | U, Θ^t] = Σ_J P(J|U, Θ^t) log P(Θ|U, J)    (6)

where the expectation is taken with respect to the posterior distribution P(J|U, Θ^t) over all possible correspondence assignments J, given the measurement data U and a current guess Θ^t for the parameters.

M-step: Re-estimate Θ^{t+1} by maximizing Q^t(Θ), i.e., Θ^{t+1} = argmax_Θ Q^t(Θ).

Instead of calculating Q^t(Θ) exactly using (6), which again involves summing over a combinatorial number of terms, we can replace it by a Monte Carlo approximation:

    Q^t(Θ) ≈ (1/R) Σ_{r=1}^R log P(Θ|U, J^(r))    (7)

where {J^(r)} is a sample from P(J|U, Θ^t) obtained by MCMC sampling. Formally this can be justified in the context of a Monte Carlo EM or MCEM, a version of the EM algorithm where the E-step is executed by a Monte-Carlo process [11].
The sampling proceeds as in the previous section, using the Metropolis-Hastings algorithm, but now with the parameters held fixed at Θ^t. Note that at each iteration the estimate Θ^t changes and we sample from a different posterior distribution P(J|U, Θ^t).

In practice it is important to add annealing to this basic EM scheme, to avoid getting stuck in local minima. In simulated annealing we artificially increase the noise parameter σ for the early iterations, gradually decreasing it to its correct value. This has two beneficial consequences. First, the posterior distribution P(J|U, Θ^t) is less peaked when σ is high, allowing the MCMC sampler to explore the space of assignments J more easily. Second, the expected log-posterior Q^t(Θ) is smoother and has fewer local maxima for higher values of σ. A minimal sketch of the resulting annealed loop is given below.
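The following sketch shows the annealed MCEM loop just described. It is our own simplification: `sample_assignments` stands for the fixed-Θ Metropolis-Hastings sampler of the previous paragraph, `maximize` for the M-step optimizer, and the geometric annealing schedule is an assumption.

```python
def annealed_mcem(u, theta0, sample_assignments, maximize,
                  sigma_start, sigma_final, n_iters, n_samples):
    """Monte Carlo EM with simulated annealing on the noise parameter sigma.

    sample_assignments(u, theta, sigma, n): MCMC sample {J^(r)} from P(J|U, theta)
    maximize(u, samples): Theta maximizing the Monte Carlo approximation (7)
    """
    theta = theta0
    for t in range(n_iters):
        # Anneal: start with an artificially large sigma, decay geometrically.
        frac = t / max(1, n_iters - 1)
        sigma = sigma_start * (sigma_final / sigma_start) ** frac
        # E-step (Monte Carlo): sample correspondences with theta held fixed.
        samples = sample_assignments(u, theta, sigma, n_samples)
        # M-step: maximize the sampled approximation of Q^t(Theta).
        theta = maximize(u, samples)
    return theta
```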
4 Results

To validate our approach we have conducted a number of experiments, one of which is presented here. The input data in this experiment consisted of 55 manually selected measurements in each of 11 input images, three of which are shown in Figure 4. Note that features are not tracked from frame to frame and the images can be presented in arbitrary order. To initialize, the 11 cameras m_i are all placed at the origin, looking towards the 55 model points x_j, which themselves are normally distributed at unit distance from the cameras. We used an orthographic projection model. The EM algorithm was run for 100 iterations, and the sampler for 10000 steps per image. For this data set the algorithm took about a minute to complete on a standard PC.

Figure 4: Three out of 11 cube images. Although the images were originally taken as a sequence in time, the ordering of the images is irrelevant to our method.

The algorithm converges consistently and fast to an estimate for the structure and motion where the correct correspondence is the most probable one, and where all assignments in the different images agree with each other. A typical run of the algorithm is shown in Figure 5, where we have shown a wireframe model of the recovered structure at several points during the run. There are two important points to note: (a) the gross structure is recovered in the very first iteration, starting from random initial structure, and (b) finer details of the structure are gradually resolved as the annealing parameter σ is decreased. The estimate for the structure after convergence is almost identical to the one found by the factorization method [1] when this is provided with the correct correspondence.

Figure 5: Starting from random structure (t=0) we recover gross 3D structure in the very first iteration (t=1). As the annealing parameter σ is gradually decreased, successively finer details are resolved (iterations 1, 10, 20, and 100 are shown). (Panel labels: t=0, σ=0.0; t=1, σ=25.1; t=10, σ=18.7; t=20, σ=13.5; t=100, σ=1.0.)

5 Conclusions and Future Directions

In this paper we presented a theoretically sound method to deal with ambiguous feature correspondence, and have shown how Markov chain Monte Carlo sampling can be used to obtain practical algorithms. We have detailed this for two cases: (1) obtaining a posterior distribution over the parameters Θ, and (2) obtaining a MAP estimate by means of EM. In future work, we would like to apply these methods in other domains where data association plays a central role. In particular, in the highly active area of mobile robot mapping, the data association problem is currently a major obstacle to building large-scale maps [12, 13]. We conjecture that our approach is equally applicable to the robotic mapping problem, and can lead to qualitatively new solutions in that domain.

References

[1] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. Int. J. of Computer Vision, 9(2):137-154, Nov. 1992.
[2] R.I. Hartley. Euclidean reconstruction from uncalibrated views. In Applications of Invariance in Computer Vision, pages 237-256, 1994.
[3] P.A. Beardsley, P.H.S. Torr, and A. Zisserman. 3D model acquisition from extended image sequences. In Eur. Conf. on Computer Vision (ECCV), pages II:683-695, 1996.
[4] P. Torr, A. Fitzgibbon, and A. Zisserman. Maintaining multiple motion model hypotheses over many views to recover matching and structure. In Int. Conf. on Computer Vision (ICCV), pages 485-491, 1998.
[5] G.L. Scott and H.C. Longuet-Higgins. An algorithm for associating the features of two images. Proceedings of the Royal Society of London, B-244:21-26, 1991.
[6] L.S. Shapiro and J.M. Brady. Feature-based correspondence: An eigenvector approach. Image and Vision Computing, 10(5):283-288, June 1992.
[7] S. Gold, A. Rangarajan, C. Lu, S. Pappu, and E. Mjolsness. New algorithms for 2D and 3D point matching. Pattern Recognition, 31(8):1019-1031, 1998.
[8] H. Pasula, S. Russell, M. Ostland, and Y. Ritov. Tracking many objects with many sensors. In Int. Joint Conf. on Artificial Intelligence (IJCAI), Stockholm, 1999.
[9] F. Dellaert, S.M. Seitz, C.E. Thorpe, and S. Thrun. Structure from motion without correspondence. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2000.
[10] W.R. Gilks, S. Richardson, and D.J. Spiegelhalter, editors. Markov chain Monte Carlo in practice. Chapman and Hall, 1996.
[11] M.A. Tanner. Tools for Statistical Inference. Springer, 1996.
[12] J.J. Leonard and H.J.S. Feder. A computationally efficient method for large-scale concurrent mapping and localization. In Proceedings of the Ninth International Symposium on Robotics Research, Salt Lake City, Utah, 1999.
[13] J.A. Castellanos and J.D. Tardós. Mobile Robot Localization and Map Building: A Multisensor Fusion Approach. Kluwer Academic Publishers, Boston, MA, 2000.
A Neural Probabilistic Language Model

Yoshua Bengio*, Réjean Ducharme and Pascal Vincent
Département d'Informatique et Recherche Opérationnelle
Centre de Recherche Mathématiques
Université de Montréal
Montréal, Québec, Canada, H3C 3J7
{bengioy,ducharme,vincentp}@iro.umontreal.ca

* Y.B. was also with AT&T Research while doing this research.

Abstract

A goal of statistical language modeling is to learn the joint probability function of sequences of words. This is intrinsically difficult because of the curse of dimensionality: we propose to fight it with its own weapons. In the proposed approach one learns simultaneously (1) a distributed representation for each word (i.e. a similarity between words) along with (2) the probability function for word sequences, expressed with these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar to words forming an already seen sentence. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach very significantly improves on a state-of-the-art trigram model.

1 Introduction

A fundamental problem that makes language modeling and other learning problems difficult is the curse of dimensionality. It is particularly obvious in the case when one wants to model the joint distribution between many discrete random variables (such as words in a sentence, or discrete attributes in a data-mining task). For example, if one wants to model the joint distribution of 10 consecutive words in a natural language with a vocabulary V of size 100,000, there are potentially 100,000^10 − 1 = 10^50 − 1 free parameters.

A statistical model of language can be represented by the conditional probability of the next word given all the previous ones in the sequence, since P(w_1^T) = ∏_{t=1}^T P(w_t | w_1^{t-1}), where w_t is the t-th word, writing the subsequence w_i^j = (w_i, w_{i+1}, ..., w_{j-1}, w_j). When building statistical models of natural language, one reduces the difficulty by taking advantage of word order, and the fact that temporally closer words in the word sequence are statistically more dependent. Thus, n-gram models construct tables of conditional probabilities for the next word, for each one of a large number of contexts, i.e. combinations of the last n − 1 words: P(w_t | w_1^{t-1}) ≈ P(w_t | w_{t-n+1}^{t-1}). Only those combinations of successive words that actually occur in the training corpus (or that occur frequently enough) are considered. What happens when a new combination of n words appears that was not seen in the training corpus? A simple answer is to look at the probability predicted using a smaller context size, as done in back-off trigram models [7] or in smoothed (or interpolated) trigram models [6]. So, in such models, how is generalization basically obtained from sequences of words seen in the training corpus to new sequences of words? Simply by looking at a short enough context: the probability for a long sequence of words is obtained by "gluing" very short pieces of length 1, 2 or 3 words that have been seen frequently enough in the training data. Obviously there is much more information in the sequence that precedes the word to predict than just the identity of the previous couple of words. A minimal sketch of such an interpolated model is given below.
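As a concrete illustration of the interpolated n-gram idea just described, here is a minimal sketch. It is a common textbook formulation with fixed mixture weights, not the conditional-mixture smoothing of [6] used later as the benchmark; the weight values are arbitrary assumptions.

```python
from collections import Counter

class InterpolatedTrigram:
    """P(w | u, v) as a fixed mixture of trigram, bigram, unigram, and
    uniform ("zero-gram") relative frequencies."""

    def __init__(self, corpus, vocab, lambdas=(0.5, 0.3, 0.15, 0.05)):
        self.l3, self.l2, self.l1, self.l0 = lambdas
        self.vocab_size = len(vocab)
        self.uni = Counter(corpus)
        self.bi = Counter(zip(corpus, corpus[1:]))
        self.tri = Counter(zip(corpus, corpus[1:], corpus[2:]))
        self.total = len(corpus)

    def prob(self, w, u, v):
        """Estimate P(w_t = w | w_{t-2} = u, w_{t-1} = v)."""
        p3 = self.tri[(u, v, w)] / self.bi[(u, v)] if self.bi[(u, v)] else 0.0
        p2 = self.bi[(v, w)] / self.uni[v] if self.uni[v] else 0.0
        p1 = self.uni[w] / self.total
        p0 = 1.0 / self.vocab_size
        return self.l3 * p3 + self.l2 * p2 + self.l1 * p1 + self.l0 * p0
```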
There are at least two obvious flaws in this approach (which however has turned out to be very difficult to beat): first, it is not taking into account contexts farther than 1 or 2 words; second, it is not taking account of the "similarity" between words. For example, having seen the sentence "The cat is walking in the bedroom" in the training corpus should help us generalize to make the sentence "A dog was running in a room" almost as likely, simply because "dog" and "cat" (resp. "the" and "a", "room" and "bedroom", etc.) have similar semantics and grammatical roles.

1.1 Fighting the Curse of Dimensionality with its Own Weapons

In a nutshell, the idea of the proposed approach can be summarized as follows:

1. associate with each word in the vocabulary a distributed "feature vector" (a real-valued vector in ℝ^m), thereby creating a notion of similarity between words,
2. express the joint probability function of word sequences in terms of the feature vectors of these words in the sequence, and
3. learn simultaneously the word feature vectors and the parameters of that function.

The feature vector represents different aspects of a word: each word is associated with a point in a vector space. The number of features (e.g. m = 30, 60 or 100 in the experiments) is much smaller than the size of the vocabulary. The probability function is expressed as a product of conditional probabilities of the next word given the previous ones (e.g. using a multi-layer neural network in the experiments). This function has parameters that can be iteratively tuned in order to maximize the log-likelihood of the training data or a regularized criterion, e.g. by adding a weight decay penalty. The feature vectors associated with each word are learned, but they can be initialized using prior knowledge.

Why does it work? In the previous example, if we knew that "dog" and "cat" played similar roles (semantically and syntactically), and similarly for (the, a), (bedroom, room), (is, was), (running, walking), we could naturally generalize from "The cat is walking in the bedroom" to "A dog was running in a room" and likewise to many other combinations. In the proposed model, it will so generalize because "similar" words should have a similar feature vector, and because the probability function is a smooth function of these feature values, so a small change in the features (to obtain similar words) induces a small change in the probability: seeing only one of the above sentences will increase the probability not only of that sentence but also of its combinatorial number of "neighbors" in sentence space (as represented by sequences of feature vectors).

1.2 Relation to Previous Work

The idea of using neural networks to model high-dimensional discrete distributions has already been found useful in [3], where the joint probability of Z_1 ... Z_n is decomposed as a product of conditional probabilities: P(Z_1 = z_1, ..., Z_n = z_n) = ∏_i P(Z_i = z_i | g_i(z_{i-1}, z_{i-2}, ..., z_1)), where g(·) is a function represented by part of a neural network, and it yields parameters for expressing the distribution of Z_i. Experiments on four UCI data sets show this approach to work comparatively very well [3, 2]. The idea of a distributed representation for symbols dates from the early days of connectionism [5]. More recently, Hinton's approach was improved and successfully demonstrated on learning several symbolic relations [9]. The idea of using neural networks for language modeling is not new either, e.g. [8].
In contrast, here we push this idea to a large scale, and concentrate on learning a statistical model of the distribution of word sequences, rather than learning the role of words in a sentence. The proposed approach is also related to previous proposals of character-based text compression using neural networks [11]. Learning a clustering of words [10, 1] is also a way to discover similarities between words. In the model proposed here, instead of characterizing the similarity with a discrete random or deterministic variable (which corresponds to a soft or hard partition of the set of words), we use a continuous real vector for each word, i.e. a distributed feature vector, to indirectly represent similarity between words. The idea of using a vector-space representation for words has been well exploited in the area of information retrieval (for example see [12]), where feature vectors for words are learned on the basis of their probability of co-occurring in the same documents (Latent Semantic Indexing [4]). An important difference is that here we look for a representation for words that is helpful in representing compactly the probability distribution of word sequences from natural language text. Experiments indicate that learning jointly the representation (word features) and the model makes a big difference in performance.

2 The Proposed Model: two Architectures

The training set is a sequence w_1 ... w_T of words w_t ∈ V, where the vocabulary V is a large but finite set. The objective is to learn a good model f(w_t, ..., w_{t-n}) = P(w_t | w_1^{t-1}), in the sense that it gives high out-of-sample likelihood. In the experiments, we will report the geometric average of 1/P(w_t | w_1^{t-1}), also known as perplexity, which is also the exponential of the average negative log-likelihood. The only constraint on the model is that for any choice of w_{t-n}, ..., w_{t-1}, Σ_{i=1}^{|V|} f(i, w_{t-1}, ..., w_{t-n}) = 1. By the product of these conditional probabilities, one obtains a model of the joint probability of any sequence of words. The basic form of the model is described here. Refinements to speed it up and extend it will be described in the following sections. We decompose the function f(w_t, ..., w_{t-n}) = P(w_t | w_1^{t-1}) in two parts:

1. A mapping C from any element of V to a real vector C(i) ∈ ℝ^m. It represents the "distributed feature vector" associated with each word in the vocabulary. In practice, C is represented by a |V| × m matrix (of free parameters).

2. The probability function over words, expressed with C. We have considered two alternative formulations:

(a) The direct architecture: a function g maps a sequence of feature vectors for words in context (C(w_{t-n}), ..., C(w_{t-1})) to a probability distribution over words in V. It is a vector function whose i-th element estimates the probability P(w_t = i | w_1^{t-1}), as in Figure 1: f(i, w_{t-1}, ..., w_{t-n}) = g(i, C(w_{t-1}), ..., C(w_{t-n})). We used the "softmax" in the output layer of a neural net: P(w_t = i | w_1^{t-1}) = e^{h_i} / Σ_j e^{h_j}, where h_i is the neural network output score for word i.

(b) The cycling architecture: a function h maps a sequence of feature vectors (C(w_{t-n}), ..., C(w_{t-1}), C(i)) (i.e. including the context words and a candidate next word i) to a scalar h_i, and again using a softmax, P(w_t = i | w_1^{t-1}) = e^{h_i} / Σ_j e^{h_j}, with f(w_t, w_{t-1}, ..., w_{t-n}) = g(C(w_t), C(w_{t-1}), ..., C(w_{t-n})). We call this architecture "cycling" because one repeatedly runs h (e.g. a neural net), each time putting in input the feature vector C(i) for a candidate next word i.

The function f is a composition of these two mappings (C and g), with C being shared across all the words in the context. To each of these two parts are associated some parameters. The parameters of the mapping C are simply the feature vectors themselves (represented by a |V| × m matrix C whose row i is the feature vector C(i) for word i). The function g may be implemented by a feed-forward or recurrent neural network or another parameterized function, with parameters θ. A minimal sketch of the direct architecture's forward pass is given below.
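The following is a minimal sketch of the direct architecture's forward pass, a plain NumPy rendering under our own assumptions about layer sizes and initialization (the experiments reported below used more elaborate training machinery, short lists, and mixtures).

```python
import numpy as np

rng = np.random.default_rng(0)

V, m, n, H = 1000, 30, 3, 40           # vocab size, feature dim, context, hidden
C = rng.normal(0, 0.01, (V, m))        # word feature matrix, one row per word
W_h = rng.normal(0, 0.01, (H, n * m))  # hidden layer weights
W_o = rng.normal(0, 0.01, (V, H))      # output scores, one per vocabulary word

def next_word_probs(context):
    """P(w_t = i | context) for all i; context = [w_{t-n}, ..., w_{t-1}]."""
    x = C[context].reshape(-1)         # concatenate the n feature vectors
    hidden = np.tanh(W_h @ x)          # one hidden layer
    scores = W_o @ hidden              # score h_i for every word i
    e = np.exp(scores - scores.max())  # softmax, shifted for stability
    return e / e.sum()

probs = next_word_probs([12, 7, 42])   # indices of the 3 context words
assert abs(probs.sum() - 1.0) < 1e-9
```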
Figure 1: "Direct Architecture": f(i, w_{t-1}, ..., w_{t-n}) = g(i, C(w_{t-1}), ..., C(w_{t-n})), where g is the neural network and C(i) is the i-th word feature vector. (The i-th output is P(w_t = i | context), computed only for words in the short list; the indices for w_{t-n} through w_{t-1} are mapped to feature vectors by table look-up in C.)

Training is achieved by looking for (θ, C) that maximize the training corpus penalized log-likelihood: L = (1/T) Σ_t log p_{w_t}(C(w_{t-n}), ..., C(w_{t-1}); θ) + R(θ, C), where R(θ, C) is a regularization term (e.g. a weight decay λ‖θ‖²) that penalizes slightly the norm of θ.

3 Speeding-up and other Tricks

Short list. The main idea is to focus the effort of the neural network on a "short list" of words that have the highest probability. This can save much computation because in both of the proposed architectures the time to compute the probability of the observed next word scales almost linearly with the number of words in the vocabulary (because the scores h_i associated with each word i in the vocabulary must be computed for properly normalizing probabilities with the softmax). The idea of the speed-up trick is the following: instead of computing the actual probability of the next word, the neural network is used to compute the relative probability of the next word within that short list. The choice of the short list depends on the current context (the previous n words). We have used our smoothed trigram model to pre-compute a short list containing the most probable next words associated to the previous two words. The conditional probabilities P(w_t = i | h_t) are thus computed as follows, denoting with h_t the history (context) before w_t, and L_t the short list of words for the prediction of w_t. If i ∈ L_t then the probability is P_NN(w_t = i | w_t ∈ L_t, h_t) P_trigram(w_t ∈ L_t | h_t); else it is P_trigram(w_t = i | h_t). Here P_NN(w_t = i | w_t ∈ L_t, h_t) are the normalized scores of the words computed by the neural network, where the "softmax" is only normalized over the words in the short list L_t, and P_trigram(w_t ∈ L_t | h_t) = Σ_{i∈L_t} P_trigram(i | h_t), with P_trigram(i | h_t) standing for the next-word probabilities computed by the smoothed trigram. Note that both L_t and P_trigram(w_t ∈ L_t | h_t) can be pre-computed (and stored in a hash table indexed by the last two words). A small sketch of this redistribution is given after the next paragraph.

Table look-up for recognition. To speed up application of the trained model, one can pre-compute in a hash table the output of the neural network, at least for the most frequent input contexts. In that case, the neural network will only be rarely called upon, and the average computation time will be very small. Note that in a speech recognition system, one needs only compute the relative probabilities of the acoustically ambiguous words in each context, also reducing drastically the amount of computations.
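A minimal sketch of the short-list redistribution described above (our own simplification; `nn_scores` and `p_trigram` are assumed to be given).

```python
import math

def shortlist_prob(i, context, short_list, nn_scores, p_trigram):
    """P(w_t = i | h_t), combining the network (within the short list)
    with the smoothed trigram (outside of it).

    nn_scores(context):    dict word -> raw network score h_i (short-list words)
    p_trigram(i, context): smoothed trigram probability P(i | h_t)
    """
    p_in_list = sum(p_trigram(j, context) for j in short_list)
    if i in short_list:
        scores = nn_scores(context)
        z = sum(math.exp(s) for s in scores.values())  # softmax over L_t only
        p_nn = math.exp(scores[i]) / z
        return p_nn * p_in_list
    return p_trigram(i, context)
```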
Stochastic gradient descent. Since we have millions of examples, it is important to converge within only a few passes through the data. For very large data sets, stochastic gradient descent convergence time seems to increase sub-linearly with the size of the data set (see the experiments on Brown vs Hansard below). To speed up training using stochastic gradient descent, we have found it useful to break the corpus into paragraphs and to randomly permute them. In this way, some of the non-stationarity in the word stream is eliminated, yielding faster convergence.

Capacity control. For the "smaller corpora" like Brown (1.2 million examples), we have found early stopping and weight decay useful to avoid over-fitting. For the larger corpora, our networks still under-fit. For the larger corpora, we have found double-precision computation to be very important to obtain good results.

Mixture of models. We have found improved performance by combining the probability predictions of the neural network with those of the smoothed trigram, with weights that were conditional on the frequency of the context (same procedure used to combine trigram, bigram, and unigram in the smoothed trigram).

Initialization of word feature vectors. We have tried both random initialization (uniform between −.01 and .01) and a "smarter" method based on a Singular Value Decomposition (SVD) of a very large matrix of "context features". These context features are formed by counting the frequency of occurrence of each word in each one of the most frequent contexts (word sequences) in the corpus. The idea is that "similar" words should occur with similar frequency in the same contexts. We used about 9000 most frequent contexts, and compressed these to 30 features with the SVD.

Out-of-vocabulary words. For an out-of-vocabulary word w_t we need to come up with a feature vector in order to predict the words that follow, or predict its probability (the latter is only possible with the cycling architecture). We used as feature vector the weighted average feature vector of all the words in the short list, with the weights being the relative probabilities of those words: E[C(w_t)|h_t] = Σ_i C(i) P(w_t = i | h_t).

4 Experimental Results

Comparative experiments were performed on the Brown and Hansard corpora. The Brown corpus is a stream of 1,181,041 words (from a large variety of English texts and books). The first 800,000 words were used for training, the following 200,000 for validation (model selection, weight decay, early stopping) and the remaining 181,041 for testing. The number of different words is 47,578 (including punctuation, distinguishing between upper and lower case, and including the syntactical marks used to separate texts and paragraphs). Rare words with frequency ≤ 3 were merged into a single token, reducing the vocabulary size to |V| = 16,383. The Hansard corpus (Canadian parliament proceedings, French version) is a stream of about 34 million words, of which 32 million (set A) were used for training, 1.1 million (set B) for validation, and 1.2 million (set C) for out-of-sample tests. The original data has 106,936 different words, and those with frequency ≤ 10 were merged into a single token, yielding |V| = 30,959 different words.

The benchmark against which the neural network was compared is an interpolated or smoothed trigram model [6]. Let q_t = l(freq(w_{t-1}, w_{t-2})) represent the discretized frequency of occurrence of the context (w_{t-1}, w_{t-2}) (we used l(x) = ⌈−log((1 + x)/T)⌉, where x is the frequency of occurrence of the context and T is the size of the training corpus). A conditional mixture of the trigram, bigram, unigram and zero-gram was learned on the validation set, with mixture weights conditional on the discretized frequency.
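The perplexity figures reported next are computed as the exponential of the average negative log-likelihood, as defined in Section 2. A minimal sketch of that computation (the fixed context window is our simplification):

```python
import math

def perplexity(model_prob, test_words, n):
    """Geometric average of 1 / P(w_t | context): exp of the average
    negative log-likelihood. model_prob(w, context) returns P(w | context)."""
    nll, count = 0.0, 0
    for t in range(n, len(test_words)):
        context = test_words[t - n:t]
        nll -= math.log(model_prob(test_words[t], context))
        count += 1
    return math.exp(nll / count)
```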
A conditional mixture of the trigram, bigram, unigram and zero-gram was learned on the validation set, with mixture weights conditional on discretized frequency. r Below are measures of test set perplexity (geometric average of 1/ p( Wt Iwi- 1 ) for different models P. Apparent convergence of the stochastic gradient descent procedure was obtained after around 10 epochs for Hansard and after about 50 epochs for Brown, with a learning rate gradually decreased from approximately 10- 3 to 10- 5 . Weight decay of 10- 4 or 10- 5 was used in all the experiments (based on a few experiments compared on the validation set). The main result is that the neural network performs much better than the smoothed trigram. On Brown the best neural network system, according to validation perplexity (among different architectures tried, see below) yielded a perplexity of 258, while the smoothed trigram yields a perplexity of 348, which is about 35% worse. This is obtained using a network with the direct architecture mixed with the trigram (conditional mixture), with 30 word features initialized with the SVD method, 40 hidden units, and n = 5 words of context. On Hansard, the corresponding figures are 44.8 for the neural network and 54.1 for the smoothed trigram, which is 20.7% worse. This is obtained with a network with the direct architecture, 100 randomly initialized words features, 120 hidden units, and n = 8 words of context. More context is useful. Experiments with the cycling architecture on Brown, with 30 word features, and 30 hidden units, varying the number of context words: n = 1 (like the bigram) yields a test perplexity of 302, n = 3 yields 291 , n = 5 yields 281 , n = 8 yields 279 (N.B. the smoothed trigram yields 348). Hidden units help. Experiments with the direct architecture on Brown (with direct input to output connections), with 30 word features, 5 words of context, varying the number of hidden units: 0 yields a test perplexity of 275, 10 yields 267, 20 yields 266, 40 yields 265, 80 yields 265. Learning the word features jointly is important. Experiments with the direct architecture on Brown (40 hidden units, 5 words of context), in which the word features initialized with the SVD method are kept fixed during training yield a test perplexity of 345.8 whereas if the word features are trained jointly with the rest of the parameters, the perplexity is 265. Initialization not so useful. Experiments on Brown with both architectures reveal that the SVD initialization of the word features does not bring much improvement with respect to random initialization: it speeds up initial convergence (saving about 2 epochs), and yields a perplexity improvement of less than 0.3 %. Direct architecture works a bit better. The direct architecture was found about 2% better than the cycling architecture. Conditional mixture helps but even without it the neural net is better. On Brown, the best neural net without the mixture yields a test perplexity of 265, the smoothed trigram yields 348, and their conditional mixture yields 258 (i.e., better than both). On Hansard the improvement is less: a neural network yielding 46.7 perplexity, mixed with the trigram (54.1), yields a mixture with perplexity 45.1. 5 Conclusions and Proposed Extensions The experiments on two corpora, a medium one 0.2 million words), and a large one (34 million words) have shown that the proposed approach yields much better perplexity than a state-of-the-art method, the smoothed trigram, with differences on the order of 20% to 35 %. 
We believe that the main reason for these improvements is that the proposed approach allows one to take advantage of the learned distributed representation to fight the curse of dimensionality with its own weapons: each training sentence informs the model about a combinatorial number of other sentences. Note that if we had a separate feature vector for each "context" (short sequence of words), the model would have much more capacity (which could grow like that of n-grams) but it would not naturally generalize between the many different ways a word can be used. A more reasonable alternative would be to explore language units other than words (e.g. some short word sequences, or alternatively some sub-word morphemic units).

There is probably much more to be done to improve the model, at the level of architecture, computational efficiency, and taking advantage of prior knowledge. An important priority of future research should be to evaluate and improve the speeding-up tricks proposed here, and to find ways to increase capacity without increasing training time too much (to deal with corpora with hundreds of millions of words). A simple idea to take advantage of temporal structure and extend the size of the input window to include possibly a whole paragraph, without increasing too much the number of parameters, is to use a time-delay and possibly recurrent neural network. In such a multi-layered network the computation that has been performed for small groups of consecutive words does not need to be redone when the network input window is shifted. Similarly, one could use a recurrent network to capture potentially even longer-term information about the subject of the text. A very important area in which the proposed model could be improved is in the use of prior linguistic knowledge: semantic (e.g. WordNet), syntactic (e.g. a tagger), and morphological (radix and morphemes). Looking at the word features learned by the model should help to understand and improve it. Finally, future research should establish how useful the proposed approach will be in applications to speech recognition, language translation, and information retrieval.

Acknowledgments

The authors would like to thank Léon Bottou and Yann Le Cun for useful discussions. This research was made possible by funding from the NSERC granting agency.

References

[1] D. Baker and A. McCallum. Distributional clustering of words for text classification. In SIGIR'98, 1998.
[2] S. Bengio and Y. Bengio. Taking on the curse of dimensionality in joint distributions using neural networks. IEEE Transactions on Neural Networks, special issue on Data Mining and Knowledge Discovery, 11(3):550-557, 2000.
[3] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 400-406. MIT Press, 2000.
[4] S. Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407, 1990.
[5] G.E. Hinton. Learning distributed representations of concepts. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society, pages 1-12, Amherst, 1986. Lawrence Erlbaum, Hillsdale.
[6] F. Jelinek and R. L. Mercer. Interpolated estimation of Markov source parameters from sparse data. In E. S. Gelsema and L. N. Kanal, editors, Pattern Recognition in Practice. North-Holland, Amsterdam, 1980.
DOES THE NEURON "LEARN" LIKE THE SYNAPSE?

RAOUL TAWEL
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109

Abstract. An improved learning paradigm that offers a significant reduction in computation time during the supervised learning phase is described. It is based on extending the role that the neuron plays in artificial neural systems. Prior work has regarded the neuron as a strictly passive, non-linear processing element, and the synapse on the other hand as the primary source of information processing and knowledge retention. In this work, the role of the neuron is extended insofar as allowing its parameters to adaptively participate in the learning phase. The temperature of the sigmoid function is an example of such a parameter. During learning, both the synaptic interconnection weights $w_{ij}$ and the neuronal temperatures $T_i$ are optimized so as to capture the knowledge contained within the training set. The method allows each neuron to possess and update its own characteristic local temperature. This algorithm has been applied to logic type of problems such as the XOR or parity problem, resulting in a significant decrease in the required number of training cycles.

INTRODUCTION

One of the current issues in the theory of supervised learning concerns the scaling properties of neural networks. While low-order neural computations are easily handled on sequential or parallel processors, high-order problems prove to be intractable. The computational burden involved in implementing supervised learning algorithms, such as back-propagation, on networks with large connectivity and/or large training sets is immense and impractical at present. Therefore the treatment of 'real' applications in such areas as image recognition or pattern classification requires the development of computationally efficient learning rules. This paper reports such an algorithm.

Current neuromorphic models regard the neuron as a strictly passive non-linear element, and the synapse on the other hand as the primary source of knowledge retention. In these models, information processing is performed by propagating the synaptically weighed neuronal contributions in either a feed-forward, feed-backward, or fully recurrent fashion [1]-[3]. Artificial neural networks commonly take the point of view that the neuron can be modeled by a simple non-linear 'wire' type of device. However, evidence exists that information processing in biological neural networks does occur at the neuronal level [4]. Although neuromorphic nets based on simple neurons are useful as a first approximation, a considerable richness is to be gained by extending 'learning' to the neuron. In this work, such an extension is made. The neuron is then seen to provide an additional or secondary source of information processing and knowledge retention. This is achieved by treating both the neuronal and synaptic variables as optimization parameters. The temperature of the sigmoid function is an example of such a neuronal parameter. In much the same way that the synaptic interconnection weights require optimization to reflect the knowledge contained within the training set, so should the temperature terms be optimized. It should be emphasized that the method does not optimize a global neuronal temperature for the whole network, but rather allows each neuron to possess and update its own characteristic local value.
ADAPTIVE NEURON MODEL

Although the principle of neuronal optimization is an entirely general concept, and therefore applicable to any learning scheme, the popular feed-forward back-propagation (BP) learning rule has been selected for its implementation and performance evaluation. In this section we develop the mathematical formalism necessary to implement the adaptive neuron model (ANM). Back-propagation is an example of supervised learning where, for each presentation consisting of an input vector and its associated target vector $\vec{t}_p$, the algorithm attempts to adjust the synaptic weights so as to minimize the sum-squared error $E$ over all patterns $p$. In its simplest form, back-propagation treats the interconnection weights as the only variables and consequently executes gradient descent in weight space. The error term is given by

$$E = \sum_p E_p = \frac{1}{2} \sum_p \sum_i \left[ t_i^p - o_i^n \right]^2$$

The quantity $t_i^p$ is the $i$th component of the $p$th desired output vector pattern and $o_i^n$ is the activation of the corresponding neuron in the final layer $n$. For notational ease the summation over $p$ is dropped and a single pattern is considered. On completion of learning, the synaptic weights capture the transformation linking the input to output variables. In applications other than toy problems, a major drawback of this algorithm is the excessive convergence time. In this paper it is shown that a significant decrease in convergence time can be realized by allowing the neurons to adaptively participate in the learning process. This means that each neuron is to be characterized by a set of parameters, such as temperature, whose values are optimized according to a rule, and not in a heuristic fashion as in simulated annealing. Upon training completion, learning is thus captured in both the synaptic and neuronal parameters.

The activation of a unit - say the $i$th neuron on the $m$th layer - is computed by a non-linear operation on the weighed responses of neurons from the previous layer, as seen in Figure 1. A common function to use is the logistic function,

$$o_i^m = \frac{1}{1 + e^{-\beta s_i^m}}$$

where $\beta = 1/T$ and $T$ is the temperature of the network. The net weighed input to the neuron is found by summing products of the synaptic weights and corresponding neuronal outputs from units on the previous layer,

$$s_i^m = \sum_j w_{ij}^{m-1} o_j^{m-1}$$

[Figure 1 omitted: a single neuron receiving inputs $o_j^{m-1}$ from the previous layer, forming net input $s_i^m$, and producing the sigmoidal output $o_i^m$.]

Figure 1. Each neuron in a network is characterized by a local, temperature-dependent, sigmoidal activation function, where $o_j^{m-1}$ represents the fan-in units and $w_{ij}^{m-1}$ represents the pairwise connection strength between neuron $i$ in layer $m$ and neuron $j$ in layer $m-1$.

We have investigated several mathematical methods for the determination of the optimal neuronal temperatures. In this paper, the rule that was selected to optimize these parameters is based on executing gradient descent in the sum-squared error $E$ in temperature space. The method requires that the incremental change in the temperature term be proportional to the negative of the derivative of the error term with respect to the temperature. Focussing on the $i$th neuron on the output layer $n$, we have

$$\Delta T_i^n = -\tilde{\eta} \, \frac{\partial E}{\partial T_i^n}$$

In this expression, $\tilde{\eta}$ is the temperature learning rate.
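For reference, the logistic form above and the two partial derivatives used throughout the following derivation can be written down directly. This is a small sketch; the helper names are ours, and the form of $\partial f / \partial T$ follows from elementary calculus on the logistic given above.

```python
import numpy as np

def f(s, T):
    """Temperature-dependent logistic activation: f(s, T) = 1 / (1 + exp(-s/T))."""
    return 1.0 / (1.0 + np.exp(-s / T))

def df_ds(s, T):
    """df/ds = f(1 - f) / T, used in the synaptic weight updates."""
    o = f(s, T)
    return o * (1.0 - o) / T

def df_dT(s, T):
    """df/dT = -(s / T^2) f(1 - f), used in the temperature updates."""
    o = f(s, T)
    return -(s / T**2) * o * (1.0 - o)
```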
This equation can be expressed as the product of two terms by the chain rule

$$\frac{\partial E}{\partial T_i^n} = \frac{\partial E}{\partial o_i^n} \frac{\partial o_i^n}{\partial T_i^n}$$

Substituting expressions and leaving the explicit functional form of the activation function unspecified, i.e. $o_i^n = f(T_i^n, s_i^n)$, we obtain

$$\frac{\partial E}{\partial T_i^n} = -\left[ t_i - o_i^n \right] \frac{\partial f}{\partial T_i^n}$$

In a similar fashion, the temperature update equation for the previous layer is given by

$$\Delta T_k^{n-1} = -\tilde{\eta} \, \frac{\partial E}{\partial T_k^{n-1}}$$

Using the chain rule, this can be expressed as

$$\frac{\partial E}{\partial T_k^{n-1}} = \sum_i \frac{\partial E}{\partial o_i^n} \frac{\partial o_i^n}{\partial s_i^n} \frac{\partial s_i^n}{\partial o_k^{n-1}} \frac{\partial o_k^{n-1}}{\partial T_k^{n-1}}$$

Substituting expressions and simplifying reduces the above to

$$\frac{\partial E}{\partial T_k^{n-1}} = -\left[ \sum_i \left[ t_i - o_i^n \right] \frac{\partial f}{\partial s_i^n} \, w_{ik}^{n-1} \right] \frac{\partial f}{\partial T_k^{n-1}}$$

By repeating the above derivation for the previous layer, i.e. determining the partial derivative of $E$ with respect to $T_j^{n-2}$ etc., a simple recursive relationship emerges for the temperature terms. Specifically, the updating scheme for the $k$th neuronal temperature on the $m$th layer is given by

$$\Delta T_k^m = -\tilde{\eta} \, \frac{\partial E}{\partial T_k^m} = \tilde{\eta} \, \delta_k^m \, \frac{\partial f}{\partial T_k^m}$$

In the above expression, the error signal $\delta_k^m$ takes on the value

$$\delta_k^m = t_k - o_k^m$$

if the neuron lies on the output layer ($m = n$), or

$$\delta_k^m = \sum_l \delta_l^{m+1} \frac{\partial f}{\partial s_l^{m+1}} \, w_{lk}^{m+1}$$

if the neuron lies on a hidden layer.

SIMULATION RESULTS OF TEMPERATURE OPTIMIZATION

The new algorithm was applied to logic problems. The network was trained on a standard benchmark - the exclusive-or logic problem. This is a classic problem requiring hidden units, and many problems involve an XOR as a subproblem. As in plain BP, the application of the proposed learning rule involves two passes. In the first, an input pattern is presented and propagated forward through the network to compute the output values $o_i^n$. This output is compared to its target value, resulting in an error signal for each output unit. The second pass involves a backward pass through the network during which the error signal is passed along the network and the appropriate weight and temperature changes are made. Note that since the synapses and neurons have their own characteristic learning rates, i.e. $\eta$ and $\tilde{\eta}$ respectively, an additional degree of freedom is introduced in the simulation. This is equivalent to allowing for relative updating time scales for the weights and temperatures, i.e. $\tau_w$ and $\tau_T$ respectively. We have now generated a gradient descent method for finding weights and temperatures in a feed-forward network.

In deriving the learning rule for temperature optimization in the above section, the derivative of the activation function of a neuron played a key role. We have used a sigmoidal type of function in our simulations whose explicit form is given by

$$f(s_k^m, T_k^m) = \frac{1}{1 + e^{-s_k^m / T_k^m}}$$

and in Figure 2 it is shown to be extremely sensitive to small changes in temperature.

[Figure 2 omitted.] Figure 2. Activation function shown plotted for several different temperatures. The sigmoid is shown plotted against the net input to a neuron for temperatures ranging from 0.2 to 2.0, in increments of 0.2. However, the steepest curve was for a temperature of 0.01.

The derivative of the activation function taken with respect to the temperature is given by

$$\frac{\partial f}{\partial T_k^m} = -\frac{s_k^m}{(T_k^m)^2} \, f \, (1 - f)$$

As shown in Figure 3, the XOR architecture selected has two input units, two hidden units, and a single output unit. Each neuron is characterized by a temperature, and neurons are connected by weights. Prior to training the network, both the weights and temperatures were randomized.
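The following sketch puts the weight and temperature updates together and trains the 2-2-1 XOR network just described. The initial ranges ([-2.0, 2.0] for weights, [0.9, 1.1] for temperatures) and the learning rates of 0.1 follow the benchmark settings reported below; the random seed and the positivity guard on the temperatures are our own additions, and, as with plain BP, whether a given run converges depends on the initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(s, T):
    """Temperature-dependent logistic activation."""
    return 1.0 / (1.0 + np.exp(-s / T))

# 2-2-1 network; each weight matrix carries an extra bias column,
# and every neuron has its own local temperature.
W1 = rng.uniform(-2.0, 2.0, (2, 3))
W2 = rng.uniform(-2.0, 2.0, (1, 3))
T1 = rng.uniform(0.9, 1.1, 2)
T2 = rng.uniform(0.9, 1.1, 1)
eta, eta_T = 0.1, 0.1               # synaptic and temperature learning rates

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([0., 1., 1., 0.])

for epoch in range(50000):
    sse = 0.0
    for x, t in zip(X, Y):
        # forward pass
        x1 = np.append(x, 1.0)
        s1 = W1 @ x1
        o1 = f(s1, T1)
        x2 = np.append(o1, 1.0)
        s2 = W2 @ x2
        o2 = f(s2, T2)
        sse += 0.5 * ((t - o2) ** 2).sum()

        # backward pass: output error e2, backpropagated hidden error e1
        e2 = t - o2
        d2 = e2 * o2 * (1 - o2) / T2        # e2 times df/ds at the output
        e1 = W2[:, :2].T @ d2
        d1 = e1 * o1 * (1 - o1) / T1

        # synaptic updates: gradient descent on E = 0.5 * (t - o)^2
        W2 += eta * np.outer(d2, x2)
        W1 += eta * np.outer(d1, x1)
        # temperature updates, with df/dT = -(s / T^2) f(1 - f)
        T2 += eta_T * e2 * (-s2 / T2**2) * o2 * (1 - o2)
        T1 += eta_T * e1 * (-s1 / T1**2) * o1 * (1 - o1)
        # keep the temperatures positive (a practical guard, not in the paper)
        T1, T2 = np.maximum(T1, 0.05), np.maximum(T2, 0.05)
    if sse < 1e-6:
        break

for x in X:
    o = f(W2 @ np.append(f(W1 @ np.append(x, 1.0), T1), 1.0), T2)
    print(x, round(o.item(), 4))
```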
The initial and final optimization parameters for a sample training exercise are shown in Figures 3(a) and (b). Specifically, Figure 3(a) shows the values of the randomized weights and temperatures prior to training, and Figure 3(b) shows their values after training the network for 1000 iterations. This is a case where the network has reached a global minimum. In both figures, the numbers associated with the dashed arrows represent the thresholds of the neurons, and the numbers written next to the solid arrows represent the excitatory/inhibitory strengths of the pairwise connections.

[Figure 3 omitted.] Figure 3. Architecture of the neural network for the XOR problem showing neuronal temperatures and synaptic weights before (a) and after training (b).

To fully evaluate the convergence speed of the proposed algorithm, a benchmark comparison between it and plain BP was made. In both cases the training was started with identical initial random synaptic weights lying within the range [-2.0, +2.0] and the same synaptic weight learning rate $\eta = 0.1$. The temperatures of the neurons in the ANM model were randomly selected to lie within the narrow range of [0.9, 1.1] and the temperature learning rate $\tilde{\eta}$ was set at 0.1. Figures 4(a) and (b) summarize the training statistics of this comparison.

[Figure 4 omitted.] Figure 4. Comparison of training statistics between the adaptive neuron model and plain back propagation.

In both figures, the solid lines represent the ANM and the dashed lines represent the plain BP model. In Figure 4(a), the error is plotted against the training iteration number. In Figure 4(b), the standard deviation of the error over the training set is plotted against the training iteration. In the first few hundred training iterations in Figure 4(a), the performance of BP and the ANM is similar and appears as a broad shoulder in the curve. Recall that both the weights and temperatures are randomized prior to training, and are therefore far from their final values. As a consequence of the low values of the learning rates used, the error is large, and will only begin to get smaller when the weights and temperatures begin to fall in the right domain of values. In the ANM, the shoulder terminus is marked by a phase-transition-like discontinuity in both error and standard deviation. For the particular example shown, this occurred at the 637th iteration. A drop of several orders of magnitude in the error and standard deviation is observed within the next 10 iterations. This sharp drop-off is followed by a much more gradual decrease in both the error and standard deviation. A more detailed analysis of these results will be published in a longer paper.

In learning the XOR problem using standard BP, it has been observed that the network frequently gets trapped in local minima. In Figures 5(a) and (b) we observe such a case, as shown by the dotted line. In numerous simulations on this problem, we have determined that the ANM is much less likely to become trapped in local minima.

[Figure 5 omitted.] Figure 5.
Training case where the adaptive neuron model escapes a local minimum and plain back propagation does not.

CONCLUSIONS

In this paper we have attempted to upgrade and enrich the model of the neuron from a simple static non-linear wire-type construct to a dynamically reconfigurable one. From a purely computational point of view, there are definite advantages in such an extension. Recall that if $N$ is the number of neurons in a network, then the number of synaptic connections typically increases as $O(N^2)$. Since the activation function is extremely sensitive to small changes in temperature, and since there are far fewer neuronal parameters to update than synaptic weights, the adaptive neuron model should offer a significant reduction in convergence time. In this paper we have also shown that the active participation of the neurons during the supervised learning phase led to a significant reduction in the number of training cycles required to learn logic type of problems. In the adaptive neuron model both the synaptic weight interconnection strengths and the neuronal temperature terms are treated as optimization parameters and have their own updating schemes and time scales. This learning rule is based on implementing gradient descent in the sum-squared error $E$ with respect to both the weights $w_{ij}^m$ and temperatures $T_i^m$. Preliminary results indicate that the new algorithm can significantly outperform back propagation by reducing the learning time by several orders of magnitude. Specifically, the XOR problem was learnt to a very high precision by the network in $\approx 10^3$ training iterations with a mean square error of $\approx 10^{-6}$, versus over $10^6$ iterations with a corresponding mean square error of $\approx 10^{-3}$ for plain back propagation.

Acknowledgements. The work described in this paper was performed by the Jet Propulsion Laboratory, California Institute of Technology, and was supported in part by the National Aeronautics and Space Administration and the Defense Advanced Research Projects Agency through an agreement with the National Aeronautics and Space Administration.

REFERENCES

1. D. Rumelhart and J. McClelland, "Parallel Distributed Processing," M.I.T. Press, Cambridge, MA, 1986.
2. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences USA 79 (1982), 2554-2558.
3. F. J. Pineda, Generalization of backpropagation to recurrent and higher order neural networks, in "Neural Information Processing Systems Proceedings," AIP, New York, 1988.
4. L. R. Carley, Presynaptic neural information processing, in "Neural Information Processing Systems Proceedings," AIP, New York, 1988.
APRICODD: Approximate Policy Construction using Decision Diagrams

Robert St-Aubin
Dept. of Computer Science
University of British Columbia
Vancouver, BC V6T 1Z4
[email protected]

Jesse Hoey
Dept. of Computer Science
University of British Columbia
Vancouver, BC V6T 1Z4
[email protected]

Craig Boutilier
Dept. of Computer Science
University of Toronto
Toronto, ON M5S 3H5
[email protected]

Abstract

We propose a method of approximate dynamic programming for Markov decision processes (MDPs) using algebraic decision diagrams (ADDs). We produce near-optimal value functions and policies with much lower time and space requirements than exact dynamic programming. Our method reduces the sizes of the intermediate value functions generated during value iteration by replacing the values at the terminals of the ADD with ranges of values. Our method is demonstrated on a class of large MDPs (with up to 34 billion states), and we compare the results with the optimal value functions.

1 Introduction

The last decade has seen much interest in structured approaches to solving planning problems under uncertainty formulated as Markov decision processes (MDPs). Structured algorithms allow problems to be solved without explicit state-space enumeration by aggregating states of identical value. Structured approaches using decision trees have been applied to classical dynamic programming (DP) algorithms such as value iteration and policy iteration [7, 3]. Recently, Hoey et al. [8] have shown that significant computational advantages can be obtained by using an Algebraic Decision Diagram (ADD) representation [1, 4, 5]. Notwithstanding such advances, large MDPs must often be solved approximately. This can be accomplished by reducing the "level of detail" in the representation and aggregating states with similar (rather than identical) value. Approximations of this kind have been examined in the context of tree-structured approaches [2]; this paper extends this research by applying them to ADDs. Specifically, the terminals of an ADD will be labeled with the range of values taken by the corresponding set of states. As we will see, ADDs have a number of advantages over trees. We develop two approximation methods for ADD-structured value functions, and apply them to the value diagrams generated during dynamic programming. The result is a near-optimal value function and policy. We examine the tradeoff between computation time and decision quality, and consider several variable reordering strategies that facilitate approximate aggregation.

2 Solving MDPs using Algebraic Decision Diagrams

We assume a fully-observable MDP [10] with finite sets of states $S$ and actions $A$, transition function $\Pr(s, a, t)$, reward function $R$, and a discounted infinite-horizon optimality criterion with discount factor $\beta$. Value iteration can be used to compute an optimal stationary policy $\pi : S \rightarrow A$ by constructing a series of $n$-stage-to-go value functions, where:

$$V^{n+1}(s) = R(s) + \max_{a \in A} \left\{ \beta \sum_{t \in S} \Pr(s, a, t) \cdot V^n(t) \right\} \qquad (1)$$

The sequence of value functions $V^n$ produced by value iteration converges linearly to the optimal value function $V^*$. For some finite $n$, the actions that maximize Equation 1 form an optimal policy, and $V^n$ approximates its value.
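Equation 1 can be read directly as code. As a point of reference, here is a minimal flat (state-enumerating) sketch of this backup - precisely the enumeration that the ADD representation described next is designed to avoid. The dict-based interface is our own illustration.

```python
def value_iteration(states, actions, P, R, beta=0.9, n_iters=100):
    """Tabular value iteration implementing the backup of Equation 1.

    P[(s, a)] is a dict {t: Pr(s, a, t)} and R[s] is the immediate reward.
    """
    V = {s: 0.0 for s in states}
    for _ in range(n_iters):
        V = {s: R[s] + max(beta * sum(p * V[t] for t, p in P[(s, a)].items())
                           for a in actions)
             for s in states}
    # greedy policy extracted from the resulting value function
    pi = {s: max(actions, key=lambda a: sum(p * V[t] for t, p in P[(s, a)].items()))
          for s in states}
    return V, pi
```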
ADDs [1, 4, 5] are a compact, efficiently manipulable data structure for representing real-valued functions over boolean variables, $B^n \rightarrow \mathbb{R}$. They generalize a tree-structured representation by allowing nodes to have multiple parents, leading to the recombination of isomorphic subgraphs and hence to a possible reduction in the representation size. A more precise definition of the semantics of ADDs can be found in [9]. Recently, we applied ADDs to the solution of large MDPs [8], yielding significant space/time savings over related tree-structured approaches.

We assume the state of an MDP is characterized by a set of variables $X = \{X_1, \ldots, X_n\}$. Values of variable $X_i$ will be denoted in lowercase (e.g., $x_i$). We assume each $X_i$ is boolean (an extension to multi-valued variables would be straightforward). Actions are described using dynamic Bayesian networks (DBNs) [6, 3] with ADDs representing their conditional probability tables. Specifically, a DBN for action $a$ requires two sets of variables, one set $X = \{X_1, \ldots, X_n\}$ referring to the state of the system before action $a$ has been executed, and $X' = \{X'_1, \ldots, X'_n\}$ denoting the state after $a$ has been executed. Directed arcs from variables in $X$ to variables in $X'$ indicate direct causal influence. The conditional probability table (CPT) for each post-action variable $X'_i$ defines a conditional distribution $P^a_{X'_i}$ over $X'_i$ - i.e., $a$'s effect on $X_i$ - for each instantiation of its parents. This can be viewed as a function $P^a_{X'_i}(X_1, \ldots, X_n)$, but where the function value (distribution) depends only on those $X_j$ that are parents of $X'_i$. We represent this function using an ADD. Reward functions can also be represented using ADDs. Figure 1(a) shows a simple example of a single action represented as a DBN as well as a reward function.

We use the method of Hoey et al. [8] to perform value iteration using ADDs. We refer to that paper for full details on the algorithm, and present only a brief outline here. The ADD representations of the CPTs for each action, $P^a_{X'_i}(X)$, are referred to as action diagrams, as shown in Figure 1(b), where $X$ represents the set of pre-action variables $\{X_1, \ldots, X_n\}$. These action diagrams can be combined into a complete action diagram (Figure 1(c)):

$$P^a(X', X) = \prod_{i=1}^{n} \left[ X'_i \cdot P^a_{X'_i}(X) + \overline{X'_i} \cdot (1 - P^a_{X'_i}(X)) \right] \qquad (2)$$

The complete action diagram represents all the effects of pre-action variables on post-action variables for a given action. The immediate reward function $R(X')$ is also represented as an ADD, as are the $n$-stage-to-go value functions $V^n(X)$. Given the complete action diagrams for each action and the immediate reward function, value iteration can be performed by setting $V^0 = R$ and applying Eq. 1,

$$V^{n+1}(X) = R(X) + \max_{a} \left\{ \beta \sum_{X'} P^a(X', X) \cdot V^n(X') \right\} \qquad (3)$$

followed by swapping all unprimed variables with primed ones. All operations in Equation 3 are well defined in terms of ADDs [8, 12]. The value iteration loop is continued until some stopping criterion is met. Various optimizations are applied to make this calculation as efficient as possible in both space and time.

[Figure 1 omitted.] Figure 1: ADD representation of an MDP: (a) action network for a single action (top) and the immediate reward network (bottom); (b) matrix and ADD representation of CPTs (action diagrams); (c) complete action diagram.

[Figure 2 omitted.] Figure 2: Approximation of original value diagram (a) with errors of 0.1 (b) and 0.5 (c).
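A minimal sketch of Equation 2's product construction, with plain Python functions over boolean tuples standing in for ADDs; the toy CPTs are illustrative, not from the paper.

```python
from itertools import product

def complete_action_model(cpts):
    """Combine per-variable CPTs into the complete transition model of Eq. (2):
    P^a(x', x) = prod_i [ x'_i * P_i(x) + (1 - x'_i) * (1 - P_i(x)) ].

    `cpts` is a list of functions; cpts[i](x) gives Pr(X'_i = 1 | x) under action a.
    Returns a function P(x_next, x) over boolean tuples.
    """
    def P(x_next, x):
        prob = 1.0
        for xi_next, cpt in zip(x_next, cpts):
            p1 = cpt(x)
            prob *= p1 if xi_next else (1.0 - p1)
        return prob
    return P

# toy example with two variables (illustrative CPTs):
cpts = [lambda x: 0.9 if x[0] else 0.1,   # X'_0 tends to keep X_0's value
        lambda x: 0.8 if x[0] else 0.2]   # X'_1 depends on X_0
P = complete_action_model(cpts)
# the product construction yields a proper distribution over next states
assert abs(sum(P(xn, (1, 0)) for xn in product((0, 1), repeat=2)) - 1.0) < 1e-9
```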
3 Approximating Value Functions

While structured solution techniques offer many advantages, the exact solution of MDPs in this way can only work if there are "few" distinct values in a value function. Even if a DBN representation shows little dependence among variables from one stage to another, the influence of variables tends to "bleed" through a DBN over time, and many variables become relevant to predicting value. Thus, even using structured methods, we must often relax the optimality constraint and generate only approximate value functions, from which near-optimal policies will hopefully arise. It is generally the case that many of the values distinguished by DP are similar. Replacing such values with a single approximate value leads to size reduction, while not significantly affecting the precision of the value diagrams.

3.1 Decision Diagrams and Approximation

Consider the value diagram shown in Figure 2(a), which has eight distinct values as shown. The value of each state $s$ is represented as a pair $[l, u]$, where the lower, $l$, and upper, $u$, bounds on the values are both represented. The span of a state $s$ is given by $\mathrm{span}(s) = u - l$. Point values are represented by setting $u = l$, and have zero span. Now suppose that the diagram in Figure 2(a) exceeds resource limits, and a reduction in size is necessary to continue the value iteration process. If we choose to no longer distinguish values which are within 0.1 or 0.5 of each other, the diagrams in Figure 2(b) or (c) result, respectively. The states which had proximal values have been merged, where merging a set of states $s_1, s_2, \ldots, s_n$ with values $[l_1, u_1], \ldots, [l_n, u_n]$ results in an aggregate state $t$ with a ranged value $[\min(l_1, \ldots, l_n), \max(u_1, \ldots, u_n)]$. The midpoint of the range estimates the true value of the states with minimal error, namely $\mathrm{span}(t)/2$. The span of $V$ is the maximum of all spans in the value diagram, and therefore the maximum error in $V$ is simply $\mathrm{span}(V)/2$ [2]. The combined span of a set of states is the span of the pair that would result from merging them all. The extent of a value diagram $V$ is the combined span of the portion of the state space which it represents. The span of the diagram in Figure 2(c) is 0.5, but its extent is 8.7.

ADD-structured value functions can be leveraged by approximation techniques because approximations can always be performed directly without pre-processing techniques such as variable reordering. Of course, variable reordering can still play an important computational role in ADD-structured methods, but it is not needed for discovering approximations.

3.2 Value Iteration with Approximate Value Functions

Approximate value iteration simply means applying an approximation technique to the $n$-stage-to-go value function generated at each iteration of Eq. 3. Available resources might dictate that ADDs be kept below some fixed size. In contrast, decision quality might require errors below some fixed value, referred to as the pruning strength, $\delta$. The remainder of this paper will focus on the latter, although we have examined the former as well [9]. Thus, the objective of a single approximation step is a reduction in the size of a ranged value ADD by replacing all leaves which have combined spans less than the specified error bound by a single leaf. Given a leaf $[l, u]$ in $V$, the set of all leaves $[l_i, u_i]$ such that the combined span of $[l_i, u_i]$ with $[l, u]$ is less than the specified error are merged. Repeating this process until no more merges are possible gives the desired result.
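The merging step can be sketched as follows. This single greedy pass over sorted leaves is a simplification of the repeat-until-fixpoint procedure just described (not the ADD-level implementation), and the leaf values below are illustrative rather than those of Figure 2.

```python
def merge_leaves(leaves, delta):
    """Greedily merge ranged ADD leaves whose combined span stays within delta.

    `leaves` is a list of (lo, hi) pairs; merging a group yields
    (min of lows, max of highs), as in Section 3.1.
    """
    merged = []
    for lo, hi in sorted(leaves):
        if merged and max(hi, merged[-1][1]) - min(lo, merged[-1][0]) <= delta:
            prev_lo, prev_hi = merged.pop()
            merged.append((min(lo, prev_lo), max(hi, prev_hi)))
        else:
            merged.append((lo, hi))
    return merged

# six point-valued leaves; delta = 0.5 collapses them to three ranged leaves
leaves = [(0.5, 0.5), (0.9, 0.9), (2.6, 2.6), (3.0, 3.0), (9.7, 9.7), (9.8, 9.8)]
print(merge_leaves(leaves, 0.5))   # [(0.5, 0.9), (2.6, 3.0), (9.7, 9.8)]
```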
We have also examined a quicker, but less exact, method for approximation, which exploits the fact that simply reducing the precision of the values at the leaves of an ADD merges the similar values. We defer explanations to the longer version of this paper [9].

The sequence of ranged value functions, $V^n$, converges after $n'$ iterations to an approximate (non-ranged) value function, $\tilde{V}$, obtained by taking the midpoints of each ranged terminal node in $V^{n'}$. The pruning strength, $\delta$, then gives the percentage difference between $\tilde{V}$ and the optimal $n'$-stage-to-go value function $V^{n'}$. The value function $\tilde{V}$ induces a policy, $\tilde{\pi}$, whose value is $V^{\tilde{\pi}}$. In general, however, $V^{\tilde{\pi}} \neq \tilde{V}$ [11]; in fact, equality arises if and only if $\tilde{V} = V^*$, where $V^*$ is the optimal value function.

3.3 Variable Reordering

As previously mentioned, variable reordering can have a significant effect on the size of an ADD, but finding the variable ordering which gives rise to the smallest ADD for a boolean function is co-NP-complete [4]. We examine three reordering methods. The first two are standard for reordering variables in BDDs: Rudell's sifting algorithm and random reordering [12]. The last reordering method we consider arises in the decision-tree induction literature, and is related to the information gain criterion. Given a value diagram $V$ with extent $\delta$, each variable $x$ is considered in turn. The value diagram is restricted first with $x = \mathrm{true}$, and the extent $\delta_t$ and the number of leaves $n_t$ are calculated for the restricted ADD. Similar values $\delta_f$ and $n_f$ are found for the $x = \mathrm{false}$ restriction. If we collapsed the entire ADD into a single node, assuming a uniform distribution over values in the resulting range, the entropy of the entire ADD would be

$$E = -\int p(v) \log(p(v)) \, dv = \log(\delta), \qquad (4)$$

which represents our degree of uncertainty about the values in the diagram. Splitting the values with the variable $x$ results in two new value diagrams, for each of which the entropy is calculated. The gains in information (decreases in entropy) are used to rank the variables, and the resulting order is applied to the diagram. This method will be referred to as the minimum span method.

4 Results

The procedures described above were implemented using a modified version of the CUDD package [12], a library of C routines which provides support for the manipulation of ADDs. Experimental results in this section were all obtained using one processor on a dual-processor Pentium II PC running at 400 MHz with 0.5 GB of RAM. Our approximation methods were tested on various adaptations of a process planning problem taken from [7, 8] (see [9] for details).

4.1 Approximation

All experiments in this section were performed on problem domains where the variable ordering was the one selected implicitly by the constructors of the domains. (Experiments showed that the conclusions in this section are independent of variable order.)

Table 1: Comparing optimal with approximate value iteration on a domain with 28 boolean variables.

    delta (%)     time (s)   iter   nodes (internal)   leaves   |V^pi~ - V*| (%)
    0 (optimal)     270.91     44          22170          527        0.00
    1               562.35     44          17108          117        0.13
    2               547.00     44          15960           77        0.14
    3               112.70     15          15230           58        5.45
    4                68.53     12          14510           48        1.20
    5                38.06     10          11208           38        2.48
    10                6.24      6           3739           15       11.33
    15                0.70      4            580            9       14.11
    20                0.57      4            299            6       16.66
    30                0.05      2             50            3       25.98
    40                0.07      2             10            2       30.28
    50                0.04      1              0            1       31.25

In Table 1 we compare optimal value iteration using ADDs (SPUDD, as presented in [8]) with approximate value iteration using different pruning strengths $\delta$. In order to avoid overly aggressive pruning in the early stages of value iteration, we need to take into account the size of the value function at every iteration. Therefore, we use a sliding pruning strength specified as $\delta \sum_{j=0}^{n} \beta^j \, \mathrm{extent}(R)$, where $R$ is the initial reward diagram, $\beta$ is the discount factor introduced earlier, and $n$ is the iteration number. We report running time, value function size (internal nodes and leaf nodes), number of iterations, and the average sum of squared differences between the optimal value function, $V^*$, and the value of the approximate policy, $V^{\tilde{\pi}}$. It is important to note that the pruning strength is an upper bound on the approximation error. That is, the optimal values are guaranteed to lie within the ranges of the approximate ranged value function. However, as noted earlier, this bound does not hold for the value of an induced policy, as can be seen at 3% pruning in the last column of Table 1.

The effects of approximation on the performance of the value iteration algorithm are threefold. First, the approximation itself introduces an overhead which depends on the size of the value function being approximated. This effect can be seen in Table 1 at low pruning strengths (1-2%), where the running time is increased over that taken by optimal value iteration. Second, the ranges in the value function reduce the number of iterations needed to attain convergence, as can be seen in Table 1 for pruning strengths greater than 2%. However, for the lower pruning strengths this effect is not observed. This can be explained by the fact that a small number of states with values much greater (or much lower) than those of the rest of the state space may never be approximated; therefore, to converge, this portion of the state space requires the same number of iterations as in the optimal case. (We are currently looking into alleviating this effect in order to increase convergence speed for low pruning strengths.) The third effect of approximation is to reduce the size of the value functions, thus reducing the per-iteration computation time during value iteration. This effect is clearly seen at pruning strengths greater than 2%, where it overtakes the cost of approximation and generates significant time and space savings. Speed-ups of 2 and 4 fold are obtained for pruning strengths of 3% and 4%, respectively. Furthermore, fewer than 60 leaf nodes represent the entire state space, while value errors in the policy do not exceed 6%. This confirms our initial hypothesis that many values within a given domain are very similar, and thus replacing such values with ranges drastically reduces the size of the resulting diagram without significantly affecting the quality of the resulting policy. Pruning above 5% incurs a larger error and takes a very short time to converge. Pruning strengths of more than 40% generate policies which are close to trivial, where a single action is always taken.

4.2 Variable Reordering

[Figure 3 omitted.] Figure 3: Sizes of final value diagrams plotted as a function of the problem domain size (15-35 boolean variables), for the intuitive (unshuffled) order with no reordering, and for shuffled orders with no reordering, min-span reordering, random reordering, and sifting.

Results in the previous section were all generated using the "intuitive" variable ordering for the problem at hand. It is probable that such an ordering is close to optimal, but such orderings may not always be obvious, and the effects of a poor ordering on the resources required for policy generation can be extreme. Therefore, to characterize the reordering methods discussed in Section 3.3, we start with initially randomly shuffled orders and compare the sizes of the final value diagrams with those found using the intuitive order.

In Figure 3 we present results obtained from approximate value iteration with a pruning strength of 3% applied to a range of problem domain sizes. In the absence of any reordering, diagrams produced with randomly shuffled variable orders are up to 3 times larger than those produced with the intuitive (unshuffled) order. The minimum span reordering method, starting from a randomly shuffled order, finds orders which are equivalent to the intuitive one, producing value diagrams of nearly identical size. The sifting and random reordering methods find orders which reduce the sizes further, by up to a factor of 7. Reordering attempts take time, but on the other hand, DP is faster with smaller diagrams. Value iteration with the sifting reordering method (starting with shuffled orders) was found to run in time similar to that of value iteration with the intuitive ordering, while the other reordering methods took slightly longer. All reordering methods, however, reduced running times and diagram sizes relative to using no reordering, by factors of 3 to 5.

5 Concluding Remarks

We examined a method for approximate dynamic programming for MDPs using ADDs. ADDs are found to be ideally suited to this task. The results we present have clearly shown their applicability on a range of MDPs with up to 34 billion states. Investigations into the use of variable reordering during value iteration have also proved fruitful, and yield large improvements in the sizes of value diagrams. Results show that our policy generator is robust to the variable order, so this is no longer a constraint for problem specification.

References

[1] R. Iris Bahar, Erica A. Frohm, Charles M. Gaona, Gary D. Hachtel, Enrico Macii, Abelardo Pardo, and Fabio Somenzi. Algebraic decision diagrams and their applications. In International Conference on Computer-Aided Design, pages 188-191. IEEE, 1993.
[2] Craig Boutilier and Richard Dearden. Approximating value trees in structured dynamic programming. In Proceedings ICML-96, Bari, Italy, 1996.
[3] Craig Boutilier, Richard Dearden, and Moises Goldszmidt. Exploiting structure in policy construction. In Proceedings of the Fourteenth International Joint Conference on AI (IJCAI-95), 1995.
[4] Randal E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers, C-35(8):677-691, 1986.
[5] E. M. Clarke, K. L. McMillan, X. Zhao, M. Fujita, and J. Yang. Spectral transforms for large boolean functions with applications to technology mapping. In DAC, pages 54-60. ACM/IEEE, 1993.
[6] Thomas Dean and Keiji Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5(3):142-150, 1989.
[7] Richard Dearden and Craig Boutilier. Abstraction and approximate decision theoretic planning. Artificial Intelligence, 89:219-283, 1997.
[8] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic planning using decision diagrams. In Proceedings of UAI-99, Stockholm, 1999.
[9] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. Optimal and approximate planning using decision diagrams. Technical Report TR-00-05, UBC, June 2000.
[10] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York, NY, 1994.
[11] Satinder P. Singh and Richard C. Yee. An upper bound on the loss from approximate optimal-value functions. Machine Learning, 16:227-233, 1994.
[12] Fabio Somenzi. CUDD: CU decision diagram package. Available from ftp://vlsi.colorado.edu/pub/, 1998.
Robust Reinforcement Learning

Jun Morimoto
Graduate School of Information Science
Nara Institute of Science and Technology;
Kawato Dynamic Brain Project, JST
2-2 Hikaridai Seika-cho Soraku-gun Kyoto 619-0288 JAPAN
[email protected]

Kenji Doya
ATR International; CREST, JST
2-2 Hikaridai Seika-cho Soraku-gun Kyoto 619-0288 JAPAN
[email protected]

Abstract

This paper proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular both for off-line learning by simulations and for on-line action planning. However, the difference between the model and the real environment can lead to unpredictable, often unwanted results. Based on the theory of $H_\infty$ control, we consider a differential game in which a 'disturbing' agent (disturber) tries to make the worst possible disturbance while a 'control' agent (actor) tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the norm of the output deviation and the norm of the disturbance. We derive on-line learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call "Robust Reinforcement Learning (RRL)," in the task of an inverted pendulum. In the linear domain, the policy and the value function learned by the on-line algorithms coincided with those derived analytically by linear $H_\infty$ theory. For a fully nonlinear swing-up task, the control by RRL achieved robust performance against changes in the pendulum weight and friction, while a standard RL control could not deal with such environmental changes.

1 Introduction

In this study, we propose a new reinforcement learning paradigm that we call "Robust Reinforcement Learning (RRL)." Plain, model-free reinforcement learning (RL) is desperately slow to be applied to on-line learning of real-world problems. Thus the use of environmental models has been quite common both for on-line action planning [3] and for off-line learning by simulation [4]. However, no model can be perfect, and modeling errors can cause unpredictable results, sometimes worse than with no model at all. In fact, robustness against model uncertainty has been the main subject of research in the control community for the last twenty years, and the result is formalized as the $H_\infty$ control theory [6].

In general, a modeling error causes a deviation of the real system state from the state predicted by the model. This can be re-interpreted as a disturbance to the model. However, the problem is that the disturbance due to a modeling error can have a strong correlation, and thus a standard Gaussian assumption may not be valid. The basic strategy to achieve robustness is to keep the sensitivity $\gamma$ of the feedback control loop against a disturbance input small enough so that any disturbance due to the modeling error can be suppressed if the gain of the mapping from the state error to the disturbance is bounded by $\frac{1}{\gamma}$. In the $H_\infty$ paradigm, those 'disturbance-to-error' and 'error-to-disturbance' gains are measured by the max norms of the functional mappings in order to assure stability for any modes of disturbance. In the following, we briefly introduce the $H_\infty$ paradigm and show that the design of a robust controller can be achieved by finding a min-max solution of a value function, which is formulated as the Hamilton-Jacobi-Isaacs (HJI) equation. We then derive on-line algorithms for estimating the value function and for simultaneously deriving the worst disturbance and the best control that, respectively, maximizes and minimizes the value function. We test the validity of the algorithms first in a linear inverted pendulum task. It is verified that the value function as well as the disturbance and control policies derived by the on-line algorithm coincide with the solution of the Riccati equations given by $H_\infty$ theory. We then compare the performance of the robust RL algorithm with a standard model-based RL in a nonlinear task of pendulum swing-up [3]. It is shown that the robust RL controller can accommodate changes in the weight and the friction of the pendulum, which a standard RL controller cannot cope with.

2 $H_\infty$ Control
We then derive on-line algorithms for estimating the value function and for simultaneously deriving the worst disturbance and the best control, which respectively maximize and minimize the value function. We test the validity of the algorithms first in a linear inverted pendulum task. It is verified that the value function as well as the disturbance and control policies derived by the on-line algorithm coincide with the solution of the Riccati equations given by $\mathcal{H}_\infty$ theory. We then compare the performance of the robust RL algorithm with a standard model-based RL in a nonlinear pendulum swing-up task [3]. It is shown that the robust RL controller can accommodate changes in the weight and the friction of the pendulum, which a standard RL controller cannot cope with.

2 $\mathcal{H}_\infty$ Control

Figure 1: (a) Generalized plant and controller; (b) small gain theorem.

The standard $\mathcal{H}_\infty$ control problem [6] deals with the system shown in Fig. 1(a), where G is the plant, K is the controller, u is the control input, y is the measurement available to the controller (in the following we assume all states are observable, i.e. y = x), w is an unknown disturbance, and z is the error output that is desired to be kept small. In general, the controller K is designed to stabilize the closed-loop system based on a model of the plant G. However, when there is a discrepancy between the model and the actual plant dynamics, the feedback loop can become unstable. The effect of a modeling error can be equivalently represented as a disturbance w generated by an unknown mapping $\Delta$ of the plant output z, as shown in Fig. 1(b). The goal of the $\mathcal{H}_\infty$ control problem is to design a controller K that brings the error z to zero while minimizing the $\mathcal{H}_\infty$ norm of the closed-loop transfer function from the disturbance w to the output z,

$\|T_{zw}\|_\infty = \sup_{w \ne 0} \frac{\|z\|_2}{\|w\|_2} = \sup_{\omega} \bar{\sigma}\left(T_{zw}(j\omega)\right).$   (1)

Here $\|\cdot\|_2$ denotes the $\ell_2$ norm and $\bar{\sigma}$ denotes the maximum singular value. The small gain theorem assures that if $\|T_{zw}\|_\infty \le \gamma$, then the system shown in Fig. 1(b) will be stable for any stable mapping $\Delta: z \mapsto w$ with $\|\Delta\|_\infty < \frac{1}{\gamma}$.

2.1 Min-max Solution to the $\mathcal{H}_\infty$ Problem

We consider a dynamical system $\dot{x} = f(x, u, w)$. The $\mathcal{H}_\infty$ control problem is equivalent to finding a control output u that satisfies the constraint

$\int_0^\infty z^T(t)z(t)\,dt \le \gamma^2 \int_0^\infty w^T(t)w(t)\,dt$   (2)

against all possible disturbances w with x(0) = 0, because it implies

$\frac{\|z\|_2}{\|w\|_2} \le \gamma.$   (3)

We can view this problem as a differential game [5] in which the best control output u that minimizes V is sought while the worst disturbance w that maximizes V is chosen. Thus an optimal value function $V^*$ is defined as

$V^* = \min_u \max_w \int_0^\infty \left( z^T(t)z(t) - \gamma^2 w^T(t)w(t) \right) dt.$   (4)

The condition for the optimal value function is given by

$0 = \min_u \max_w \left[ z^Tz - \gamma^2 w^Tw + \frac{\partial V^*}{\partial x} f(x, u, w) \right],$   (5)

which is known as the Hamilton-Jacobi-Isaacs (HJI) equation. From (5), we can derive the optimal control output $u_{op}$ and the worst disturbance $w_{op}$ by solving

$\frac{\partial z^Tz}{\partial u} + \frac{\partial V}{\partial x}\frac{\partial f(x, u, w)}{\partial u} = 0 \quad\text{and}\quad \frac{\partial z^Tz}{\partial w} - 2\gamma^2 w^T + \frac{\partial V}{\partial x}\frac{\partial f(x, u, w)}{\partial w} = 0.$   (6)
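To make the small gain condition concrete: for a linear closed loop, the $\mathcal{H}_\infty$ norm in (1) can be estimated by sweeping frequencies and taking the largest maximum singular value of the transfer matrix. The following sketch is our own illustration (the plant matrices, frequency grid, and function name hinf_norm are assumptions, not taken from the paper):

import numpy as np

def hinf_norm(A, B, C, D, omegas):
    # ||T_zw||_inf = sup_omega sigma_max( C (j*omega*I - A)^{-1} B + D ), cf. eq. (1)
    n = A.shape[0]
    gains = [np.linalg.svd(C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D,
                           compute_uv=False).max() for w in omegas]
    return max(gains)

# Toy stable closed loop (assumed values, for illustration only):
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])   # disturbance input w
C = np.array([[1.0, 0.0]])     # error output z
D = np.zeros((1, 1))
norm = hinf_norm(A, B, C, D, np.logspace(-2, 2, 2000))
# Small gain theorem: the loop of Fig. 1(b) stays stable for any stable
# mapping Delta with ||Delta||_inf < 1 / norm.
print(norm, 1.0 / norm)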
3 Robust Reinforcement Learning

Here we consider a continuous-time formulation of reinforcement learning [3] with system dynamics $\dot{x} = f(x, u)$ and reward $r(x, u)$. The basic goal is to find a policy $u = g(x)$ that maximizes the cumulative future reward $\int_t^\infty e^{-\frac{s-t}{\tau}} r(x(s), u(s))\,ds$ for any given state x(t), where $\tau$ is the time constant of evaluation. However, a particular policy that was optimized for a certain environment may perform badly when the environmental setting changes. In order to assure robust performance under a changing environment or unknown disturbance, we introduce the notion of the worst disturbance from $\mathcal{H}_\infty$ control into the reinforcement learning paradigm. In this framework, we consider an augmented reward

$q(t) = r(x(t), u(t)) + s(w(t)),$   (7)

where s(w(t)) is an additional reward for withstanding a disturbing input, for example $s(w) = \gamma^2 w^Tw$. The augmented value function is then defined as

$V(x(t)) = \int_t^\infty e^{-\frac{s-t}{\tau}}\, q(x(s), u(s), w(s))\,ds.$   (8)

The optimal value function is given by the solution of a variant of the HJI equation

$\frac{1}{\tau} V^*(x) = \max_u \min_w \left[ r(x, u) + s(w) + \frac{\partial V^*}{\partial x} f(x, u, w) \right].$   (9)

Note that we cannot find appropriate policies (i.e. solutions of the HJI equation) if we choose $\gamma$ too small. In the robust reinforcement learning (RRL) paradigm, the value function is updated using the temporal difference (TD) error [3] $\delta(t) = q(t) - \frac{1}{\tau}V(t) + \dot{V}(t)$, while the best action and the worst disturbance are generated by maximizing and minimizing, respectively, the right-hand side of the HJI equation (9). We use a function approximator to implement the value function $V(x(t); \mathbf{v})$, where $\mathbf{v}$ is a parameter vector. As in standard continuous-time RL, we define the eligibility trace for a parameter $v_i$ as $e_i(s) = \int_0^s e^{-\frac{s-t}{\kappa}} \frac{\partial V(x(t); \mathbf{v})}{\partial v_i}\,dt$, with update rule $\dot{e}_i(t) = -\frac{1}{\kappa} e_i(t) + \frac{\partial V(t)}{\partial v_i}$, where $\kappa$ is the time constant of the eligibility trace [3]. We can then derive the learning rule for the value function approximator [3] as $\dot{v}_i = \eta\, \delta(t)\, e_i(t)$, where $\eta$ denotes the learning rate. Note that we do not assume $f(x = 0) = 0$, because the error output z is generalized as the reward r(x, u) in the RRL framework.

3.1 Actor-disturber-critic

We propose the actor-disturber-critic architecture, by which we can implement robust RL in a model-free fashion analogous to the actor-critic architecture [1]. We implement the policies of the actor and the disturber as $u(t) = A_u(x(t); \mathbf{v}^u) + n_u(t)$ and $w(t) = A_w(x(t); \mathbf{v}^w) + n_w(t)$, respectively, where $A_u(x(t); \mathbf{v}^u)$ and $A_w(x(t); \mathbf{v}^w)$ are function approximators with parameter vectors $\mathbf{v}^u$ and $\mathbf{v}^w$, and $n_u(t)$ and $n_w(t)$ are noise terms for exploration. The parameters of the actor and the disturber are updated by

$\dot{v}_i^u = \eta_u\, \delta(t)\, n_u(t)\, \frac{\partial A_u(x(t); \mathbf{v}^u)}{\partial v_i^u} \quad\text{and}\quad \dot{v}_i^w = -\eta_w\, \delta(t)\, n_w(t)\, \frac{\partial A_w(x(t); \mathbf{v}^w)}{\partial v_i^w},$   (10)

where $\eta_u$ and $\eta_w$ denote the learning rates.

3.2 Robust Policy by Value Gradient

Now we assume that an input-affine model of the system dynamics and quadratic models of the costs for the inputs are available:

$\dot{x} = f(x) + g_1(x)w + g_2(x)u, \quad r(x, u) = Q(x) - u^TR(x)u, \quad s(w) = \gamma^2 w^Tw.$

In this case, we can derive the best action and the worst disturbance in reference to the value function V as

$u_{op} = \frac{1}{2} R(x)^{-1} g_2^T(x) \left( \frac{\partial V}{\partial x} \right)^T \quad\text{and}\quad w_{op} = -\frac{1}{2\gamma^2}\, g_1^T(x) \left( \frac{\partial V}{\partial x} \right)^T.$   (11)

We can use the policy (11) with the value gradient $\frac{\partial V}{\partial x}$ derived from the value function approximator.

3.3 Linear Quadratic Case

Here we consider the case in which a linear dynamic model and quadratic reward models are available:

$\dot{x} = Ax + B_1w + B_2u, \quad r(x, u) = -x^TQx - u^TRu.$

In this case, the value function is given by the quadratic form $V = -x^TPx$, where P is the solution of the Riccati equation

$A^TP + PA + P\left( \frac{1}{\gamma^2} B_1B_1^T - B_2R^{-1}B_2^T \right)P + Q = 0.$   (12)

Thus we can derive the best action and the worst disturbance as

$u = -R^{-1}B_2^TPx \quad\text{and}\quad w = \frac{1}{\gamma^2} B_1^TPx.$   (13)
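For concreteness, the TD error above and the updates (10) can be combined into a single on-line step. The sketch below is our own simplification: scalar action and disturbance, linear approximators over user-supplied features, Euler discretisation, and no eligibility trace (the limit of small kappa); none of these choices are prescribed by the paper.

import numpy as np

def rrl_step(x, v, vu, vw, env_step, phi, dt=0.01, tau=1.0, gamma=2.0,
             eta=0.1, eta_u=0.05, eta_w=0.05, sigma=0.1):
    # Linear critic/actor/disturber on features phi(x):
    # V = v.phi(x), A_u = vu.phi(x), A_w = vw.phi(x).
    f = phi(x)
    n_u = sigma * np.random.randn()          # exploration noise of the actor
    n_w = sigma * np.random.randn()          # exploration noise of the disturber
    u = vu @ f + n_u
    w = vw @ f + n_w
    x_new, r = env_step(x, u, w, dt)         # user-supplied plant, returns (x', r)
    q = r + gamma**2 * w * w                 # augmented reward, eq. (7)
    V, V_new = v @ f, v @ phi(x_new)
    delta = q - V / tau + (V_new - V) / dt   # continuous-time TD error
    v = v + eta * delta * f                  # critic update
    vu = vu + eta_u * delta * n_u * f        # actor ascends the TD error, eq. (10)
    vw = vw - eta_w * delta * n_w * f        # disturber descends it
    return x_new, v, vu, vw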
4 Simulation

We tested the robust RL algorithm in the task of swinging up a pendulum. The dynamics of the pendulum are given by $ml^2\ddot{\theta} = -\mu\dot{\theta} + mgl\sin\theta + T$, where $\theta$ is the angle from the upright position, T is the input torque, $\mu = 0.01$ is the coefficient of friction, m = 1.0 [kg] is the weight of the pendulum, l = 1.0 [m] is the length of the pendulum, and g = 9.8 [m/s$^2$] is the gravitational acceleration. The state vector is defined as $x = (\theta, \dot{\theta})^T$.

4.1 Linear Case

We first considered a linear problem in order to test whether the value function and the policy learned by robust RL coincide with the analytic solution of the $\mathcal{H}_\infty$ control problem. We therefore use the locally linearized dynamics near the unstable equilibrium point $x = (0, 0)^T$. The matrices for the linear model are given by

$A = \begin{pmatrix} 0 & 1 \\ \frac{g}{l} & -\frac{\mu}{ml^2} \end{pmatrix}, \quad B_1 = B_2 = \begin{pmatrix} 0 \\ \frac{1}{ml^2} \end{pmatrix}, \quad Q = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad R = 1.$   (14)

The reward function is given by $q(t) = -x^TQx - u^2 + \gamma^2w^2$, where the robustness criterion is $\gamma = 2.0$. The value function, $V = -x^TPx$, is parameterized by a symmetric matrix P. For on-line estimation of P, we define the vectors $\hat{x} = (x_1^2,\, 2x_1x_2,\, x_2^2)^T$ and $p = (P_{11}, P_{12}, P_{22})^T$ and reformulate V as $V = -p^T\hat{x}$. Each element of p is updated using the recursive least squares method [2]. Note that we used a pre-designed stabilizing controller as the initial setting of the RRL controller for stable learning [2].

4.1.1 Learning of the value function

Here we used the policy by value gradient shown in section 3.2. Figure 2(a) shows that each element of the vector p converged to the solution of the Riccati equation (12).

4.1.2 Actor-disturber-critic

Here we used robust RL implemented by the actor-disturber-critic shown in section 3.1. In the linear case, the actor and the disturber are represented by the linear controllers $A_u(x; \mathbf{v}^u) = \mathbf{v}^ux$ and $A_w(x; \mathbf{v}^w) = \mathbf{v}^wx$, respectively. The actor and the disturber converged almost exactly to the policy (13) derived from the Riccati equation (12) (Fig. 2(b)).

Figure 2: Time course of (a) the elements of the vector $p = (P_{11}, P_{12}, P_{22})$ and (b) the elements of the gain vectors of the actor $\mathbf{v}^u = (v_1^u, v_2^u)$ and the disturber $\mathbf{v}^w = (v_1^w, v_2^w)$, plotted over 300 trials. The dash-dotted lines show the solution of the Riccati equation.

4.2 Applying Robust RL to Non-linear Dynamics

We consider the non-linear dynamical system (11), where

$f(x) = \begin{pmatrix} \dot{\theta} \\ \frac{g}{l}\sin\theta - \frac{\mu}{ml^2}\dot{\theta} \end{pmatrix}, \quad g_1(x) = g_2(x) = \begin{pmatrix} 0 \\ \frac{1}{ml^2} \end{pmatrix}, \quad Q(x) = \cos\theta - 1, \quad R(x) = 0.04.$   (15)

From (7) and (15), the reward function is given by $q(t) = \cos\theta - 1 - 0.04u^2 + \gamma^2w^2$, where the robustness criterion is $\gamma = 0.22$. For approximating the value function, we used a Normalized Gaussian Network (NGnet) [3]. Note that the input gain g(x) was also learned [3]. Fig. 3 shows the value functions acquired by robust RL and by standard model-based RL [3]. The value function acquired by robust RL has a sharper ridge (Fig. 3(a)) that attracts swing-up trajectories than the one learned with standard RL. In Fig. 4, we compare the robustness of the robust RL and the standard RL controllers. Both the robust RL controller and the standard RL controller learned to swing up and hold the pendulum with weight m = 1.0 [kg] and coefficient of friction $\mu = 0.01$ (Fig. 4(a)). The robust RL controller could also successfully swing up pendulums with a different weight, m = 3.0 [kg], and coefficient of friction, $\mu = 0.3$ (Fig. 4(b)). This result shows the robustness of the robust RL controller. The standard RL controller could achieve the task in fewer swings for m = 1.0 [kg] and $\mu = 0.01$ (Fig. 4(a)); however, it could not swing up the pendulum with the different weight and friction (Fig. 4(b)).

Figure 3: Shape of the value function after 1000 learning trials with m = 1.0 [kg], l = 1.0 [m], and $\mu = 0.01$: (a) robust RL; (b) standard RL.
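The analytic reference solution of section 4.1 (the dash-dotted lines in Fig. 2) can be computed directly from the game Riccati equation (12). The sketch below is our own code; the Hamiltonian-eigenvector method and the choice Q = I in (14) are assumptions:

import numpy as np

g, mu, gamma = 9.8, 0.01, 2.0
A = np.array([[0.0, 1.0], [g, -mu]])   # linearized pendulum with m = l = 1
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

# Solve A'P + PA + P(B1 B1'/gamma^2 - B2 R^{-1} B2')P + Q = 0 (eq. 12) via the
# stable invariant subspace of the associated Hamiltonian matrix.
S = B2 @ np.linalg.solve(R, B2.T) - B1 @ B1.T / gamma**2
H = np.block([[A, -S], [-Q, -A.T]])
evals, evecs = np.linalg.eig(H)
V = evecs[:, evals.real < 0]           # the two stable eigenvectors
P = np.real(V[2:, :] @ np.linalg.inv(V[:2, :]))

K_u = -np.linalg.solve(R, B2.T @ P)    # best control:      u = K_u x, eq. (13)
K_w = B1.T @ P / gamma**2              # worst disturbance: w = K_w x
print(P, K_u, K_w)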
Figure 4: Swing-up trajectories of the pendulum with different weight and friction: (a) m = 1.0, $\mu = 0.01$; (b) m = 3.0, $\mu = 0.3$. The dash-dotted lines show the upright position.

5 Conclusions

In this study, we proposed a new RL paradigm called "Robust Reinforcement Learning (RRL)." We showed that RRL can learn the analytic solution of the $\mathcal{H}_\infty$ controller for the linearized inverted pendulum dynamics, and that RRL can deal with modeling errors which standard RL cannot, as demonstrated in the non-linear inverted pendulum swing-up example. We will apply RRL to more complex tasks, such as learning stand-up behavior [4].

References

[1] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 13:834-846, 1983.
[2] S. J. Bradtke. Reinforcement learning applied to linear quadratic regulation. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 295-302. Morgan Kaufmann, San Mateo, CA, 1993.
[3] K. Doya. Reinforcement learning in continuous time and space. Neural Computation, 12(1):219-245, 2000.
[4] J. Morimoto and K. Doya. Acquisition of stand-up behavior by a real robot using hierarchical reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 623-630, San Francisco, CA, 2000. Morgan Kaufmann.
[5] S. Weiland. Linear quadratic games, $\mathcal{H}_\infty$, and the Riccati equation. In Proceedings of the Workshop on the Riccati Equation in Control, Systems, and Signals, pages 156-159, 1989.
[6] K. Zhou, J. C. Doyle, and K. Glover. Robust Optimal Control. Prentice Hall, New Jersey, 1996.
920
1,842
Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
Szabolcs Káli¹,²  Peter Dayan¹
¹Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London, England, WC1N 3AR.
²Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
[email protected]

Abstract
In memory consolidation, declarative memories which initially require the hippocampus for their recall ultimately become independent of it. Consolidation has been the focus of numerous experimental and qualitative modeling studies, but only little quantitative exploration. We present a consolidation model in which hierarchical connections in the cortex, which initially instantiate purely semantic information acquired through probabilistic unsupervised learning, come to instantiate episodic information as well. The hippocampus is responsible for helping complete partial input patterns before consolidation is complete, while also training the cortex to perform appropriate completion by itself.

1 Introduction
The hippocampal formation and adjacent cortical areas have long been believed to be involved in the acquisition and retrieval of long-term memory for events and other declarative information. Clinical studies in humans and animal experiments indicate that damage to these regions results in amnesia, whereby the ability to acquire new declarative memories is impaired and some of the memories acquired before the damage are lost [1]. The observation that recent memories are more likely to be lost than old memories in these cases has generally been interpreted as evidence that the role of these medial temporal lobe structures in the storage and/or retrieval of declarative memories is only temporary. In particular, several investigators have advocated the general idea that, over a relatively long time period (from several days in rats up to decades in humans), memories are reorganized (or consolidated) so that memories whose successful recall initially depends on the hippocampus gradually become independent of this structure (see Refs. 2-4). However, other possible interpretations of the data have also been proposed [5].

There have been several analyses of the computational issues underlying consolidation. There is a general consensus that memory recall involves the reinstatement of the cortical activation patterns which characterize the original episodes, based only on partial or noisy input. Thus the computational goal for the memory systems is cortical pattern completion; this should be possible after just a single presentation of the particular pattern when the hippocampus is intact, and should be possible independent of the presence or absence of the hippocampus once consolidation is complete. The hippocampus plays a double role: a) supporting one-shot learning and subsequent completion of patterns in the cortical areas it is directly connected to, and b) directing consolidation by reinstating these stored patterns in those same cortical regions and allowing the efficacies of cortical synapses to change.

Despite the popularity of the ideas outlined above, there have been surprisingly few attempts to construct quantitative models of memory consolidation. Alvarez and Squire (1994) is the only model we could find that has actually been implemented and tested quantitatively.
Although it embodies the general principles above, the authors themselves acknowledge that the model has some rather serious limitations, largely due to its spartan simplicity (e.g. it only considers 2 perfectly orthogonal patterns over 2 cortical areas of 8 units each), which also makes it hard to test comprehensively. Perhaps most importantly, though (and this feature is shared with qualitative models such as Murre (1997)), the model requires some way of establishing and/or strengthening functional connections between neurons in disparate areas of neocortex (representing different aspects of the same episode) which would not normally be expected to enjoy substantial reciprocal anatomical connections.

In this paper, we consider consolidation using a model whose complexity brings to the fore consideration of computational issues that are invisible to simpler proposals. In particular, it treats cortex as a hierarchical structure, with hierarchical codes for input patterns acquired through a process of unsupervised learning. This allows us to study the relationship between the coding of generic patterns, which forms a sort of semantic memory, and the coding of specific patterns through consolidation. It also allows us to consider consolidation as happening in hierarchical connections (in which the cortex abounds), as an alternative to consolidation only between disparate areas at the same level of the hierarchy. The next section of the paper describes the model in detail and section 3 shows its performance.

2 The Model

Figure 1a shows the architecture of the model, which involves three cortical areas (A, B, and C) that represent different aspects of the world. We can understand consolidation as follows: across the whole spectrum of possible inputs, there is structure in the activity within each area, but there are no strong correlations between the activities in different areas (these are the generic patterns referred to above). Thus, for instance, nothing in particular can be concluded about the pattern of activity in area C given just the activities in areas A and B. However, for the specific patterns that form particular episodes, there are correlations between these activities. As a result, it becomes possible to be much more definite about the pattern in C given activities in A and B that reinstate part of the episode. Before consolidation, information about these correlations is stored in the hippocampus and related structures; after consolidation, the information is stored directly in the weights that construct the cortical representations. The model does not assume that there are any direct connections between the cortical areas. Instead, as a closer match to the available anatomical data, we assume a hierarchy of cortical regions (in the present model having just two layers) below the hippocampus. It is hard to establish an exact correspondence between model components and anatomical regions, so we tentatively call the model region at the top of the cortical hierarchy the entorhinal/parahippocampal/perirhinal area (E/P), and lump together all parts of the hippocampal formation into an entity we call the hippocampus (HC). E/P is connected bidirectionally to all the cortical areas.
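The completion task itself is easy to state in code. The toy sketch below is entirely our own illustration (it is not the hierarchical network of this paper): episodes over the three areas are stored one-shot, and a cue with a missing area is completed by the best-matching stored episode.

import numpy as np

class ToyEpisodicStore:
    # One-shot storage and pattern completion; NaNs in the cue mark the
    # units (e.g. all of area C) that must be reinstated.
    def __init__(self):
        self.episodes = []

    def store(self, pattern):
        self.episodes.append(np.asarray(pattern, dtype=float))

    def complete(self, cue):
        cue = np.asarray(cue, dtype=float)
        known = ~np.isnan(cue)
        dists = [np.sum((e[known] - cue[known]) ** 2) for e in self.episodes]
        best = self.episodes[int(np.argmin(dists))]
        out = cue.copy()
        out[~known] = best[~known]
        return out

store = ToyEpisodicStore()
store.store([1, 0, 1, 0, 1, 1])  # one episode over areas A|B|C (two units each)
print(store.complete([1, 0, 1, 0, np.nan, np.nan]))  # cue A and B, recall C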
921
1,843
From Mixtures of Mixtures to Adaptive Transform Coding
Cynthia Archer and Todd K. Leen
Department of Computer Science and Engineering, Oregon Graduate Institute of Science & Technology, 20000 N.W. Walker Rd, Beaverton, OR 97006-1000. E-mail: archer, [email protected]

Abstract
We establish a principled framework for adaptive transform coding. Transform coders are often constructed by concatenating an ad hoc choice of transform with suboptimal bit allocation and quantizer design. Instead, we start from a probabilistic latent variable model in the form of a mixture of constrained Gaussian mixtures. From this model we derive a transform coding algorithm, which is a constrained version of the generalized Lloyd algorithm for vector quantizer design. A byproduct of our derivation is the introduction of a new transform basis, which unlike other transforms (PCA, DCT, etc.) is explicitly optimized for coding. Image compression experiments show that adaptive transform coders designed with our algorithm improve compressed image signal-to-noise ratio by up to 3 dB compared to global transform coding and by 0.5 to 2 dB compared to other adaptive transform coders.

1 Introduction
Compression algorithms for image and video signals often use transform coding as a low-complexity alternative to vector quantization (VQ). Transform coders compress multi-dimensional data by transforming the signal vectors to new coordinates and coding the transform coefficients independently of one another with scalar quantizers. The coordinate transform may be fixed a priori, as in the discrete cosine transform (DCT). It can also be adapted to the signal statistics using, for example, principal component analysis (PCA), where the goal is to concentrate the signal energy in a few signal components. Noting that signals such as images and speech are nonstationary, several researchers have developed non-linear [1, 2] and local linear or adaptive [3, 4] PCA transforms for dimension reduction¹. None of these transforms is designed to minimize compression distortion, nor are they designed in concert with quantizer development.

¹In dimension reduction the original d-dimensional signal is projected onto a subspace or submanifold of lower dimension. The retained coordinates are not quantized.

Several researchers have extended the idea of local linear transforms to transform coding [5, 6, 7]. In these adaptive transform coders, the signal space is partitioned into disjoint regions, and a transform and a set of scalar quantizers are designed for each region. In our own previous work [7], we use k-means partitioning to define the regions. Dony and Haykin [5] partition the space to minimize dimension-reduction error. Tipping and Bishop [6] use soft partitioning according to a probabilistic rule that reduces, in the appropriate limit, to partitioning by dimension-reduction error. None of these systems designs transforms or partitions the signal space with the goal of minimizing compression distortion. This ad hoc construction contrasts sharply with the solid grounding of vector quantization. Nowlan [8] develops a probabilistic framework for VQ by demonstrating the correspondence between a VQ and a mixture of spherically symmetric Gaussians. In the limit that the mixture component variance goes to zero, the Expectation-Maximization (EM) procedure for fitting the mixture model to data becomes identical to the Linde-Buzo-Gray (LBG) algorithm [9] for vector quantizer design. This paper develops a similar grounding for both global and adaptive (local) transform coding.
We define a constrained mixture of Gaussians model that provides a framework for transform coder design. Our new design algorithm is simply a constrained version of the LBG algorithm. It iteratively optimizes the signal space partition, the local transforms, the allocation of coding bits, and the scalar quantizer reproduction values until it reaches a local distortion minimum. This approach leads to two new results: an orthogonal transform and a method of partitioning the signal space, both designed to minimize coding error.

2 Global Transform Coder Model

In this section, we develop a constrained mixture of Gaussians model that provides a probabilistic framework for global transform coding.

2.1 Latent Variable Model

A transform coder converts a signal to new coordinates and then codes the coordinate values independently of one another with scalar quantizers. To replicate this structure, we envision the data as drawn from a d-dimensional latent data space, S, in which the density $p(s) = p(s_1, s_2, \ldots, s_d)$ is a product of the marginal densities $p_J(s_J)$, $J = 1 \ldots d$.

Figure 1: Structure of the latent variable space, S, and the mapping to the observed space, X. The latent data density consists of a mixture of spherical Gaussians with component means $q_a$ constrained to lie at the vertices of a rectangular grid. The latent data is mapped to the observed space by an orthogonal transform, W.

We model the density in the latent space with a constrained mixture of Gaussian densities

$p(s) = \sum_{a=1}^{K} \pi_a\, p(s|a),$   (1)

where the $\pi_a$ are the mixing coefficients and $p(s|a) = N(q_a, \Sigma_a)$ is Gaussian with mean $q_a$ and covariance $\Sigma_a$. The mixture component means $q_a$ lie at the vertices of a rectangular grid, as illustrated in figure 1. The coordinates of $q_a$ are $[r_{1i_1}, r_{2i_2}, \ldots, r_{di_d}]^T$, where $r_{Ji_J}$ is the $i_J$-th grid mark on the $s_J$ axis. There are $K_J$ grid mark values on the $s_J$ axis, so the total number of grid vertices is $K = \prod_J K_J$. We constrain the mixture component covariances $\Sigma_a$ to be spherically symmetric with the same variance, $\Sigma_a = \sigma^2 I$, with I the identity matrix. We do not fit $\sigma^2$ to the data, but treat it as a "knob" which we will turn to zero to reveal a transform coder. These mean and variance constraints yield marginal densities $p_J(s_J|i_J) = N(r_{Ji_J}, \sigma^2)$. We write the density of s conditioned on a as

$p(s|a) = p(s_1, \ldots, s_d \mid a(i_1, \ldots, i_d)) = \prod_{J=1}^{d} p_J(s_J|i_J),$   (2)

and constrain each $\pi_a$ to be a product of prior probabilities, $\pi_{a(i_1,\ldots,i_d)} = \prod_J p_{Ji_J}$. Incorporating these constraints into (1) and noting that the sum over the mixture components a is equivalent to sums over all grid mark values, the latent density becomes

$p(s) = \sum_{i_1=1}^{K_1} \sum_{i_2=1}^{K_2} \cdots \sum_{i_d=1}^{K_d} \prod_{J} p_{Ji_J}\, p_J(s_J|i_J) = \prod_{J=1}^{d} \sum_{i_J=1}^{K_J} p_{Ji_J}\, p_J(s_J|i_J),$   (3)

where the second equality comes from regrouping terms. The latent data is mapped to the observation space by an orthogonal transformation, W (figure 1). Using $p(x|s) = \delta(x - Ws - \mu)$ and (1), the density on observed data x conditioned on component a is $p(x|a) = N(Wq_a + \mu, \sigma^2 I)$. The total density on x is

$p(x) = \sum_{a=1}^{K} \pi_a\, p(x|a).$   (4)

The data log likelihood for N data vectors, $\{x_n,\, n = 1 \ldots N\}$, averaged over the posterior probabilities $p(a|x_n)$, is

$E = \frac{1}{N} \sum_{n=1}^{N} \sum_{a=1}^{K} p(a|x_n) \ln\left[ \pi_a\, p(x_n|a) \right].$   (5)
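The constrained model of equations (1)-(4) is easy to sample from, which makes the constraints concrete. The sketch below is our own illustration; the grid sizes, uniform priors, and the random orthogonal W are assumptions:

import numpy as np

rng = np.random.default_rng(0)
d, K_J, sigma = 2, [4, 3], 0.05                        # latent dim, marks per axis, noise
marks = [np.sort(rng.uniform(-1, 1, k)) for k in K_J]  # grid marks r_{J i_J}
priors = [np.full(k, 1.0 / k) for k in K_J]            # p_{J i_J}, uniform here
W, _ = np.linalg.qr(rng.standard_normal((d, d)))       # orthogonal transform
mu = np.zeros(d)

def sample(n):
    s = np.empty((n, d))
    for J in range(d):                 # factorized latent density, eq. (3)
        idx = rng.choice(K_J[J], size=n, p=priors[J])
        s[:, J] = marks[J][idx] + sigma * rng.standard_normal(n)
    return s @ W.T + mu                # x = W s + mu, cf. eq. (4)

x = sample(1000)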
2.2 Model Fitting and Transform Coder Design

The model (4) can be fit to data using the EM algorithm. In the limit that the variance of the mixture components goes to zero, the EM procedure for fitting the mixture model to data corresponds to a constrained LBG (CLBG) algorithm for optimal transform coder design. In the limit $\sigma^2 \to 0$ the entropy term, $\ln \pi_a$, becomes insignificant and the component posteriors collapse to hard assignments

$p(a|x) = \begin{cases} 1 & \text{if } a = \arg\min_{a'} \|x - Wq_{a'} - \mu\|^2 \\ 0 & \text{otherwise.} \end{cases}$   (6)

Each data vector is assigned to the component whose mean has the smallest Euclidean distance to it. These assignments minimize mean squared error. In the limit that $\sigma^2 \to 0$, maximizing the likelihood (5) is equivalent to minimizing the compression distortion

$D = \sum_a \pi_a \frac{1}{N_a} \sum_{x \in R_a} \|x - Wq_a - \mu\|^2,$   (7)

where $R_a = \{x \mid p(a|x) = 1\}$, $N_a$ is the number of $x \in R_a$, and $\pi_a = N_a/N$. To optimize the transform, we find the orientation of the current quantizer grid which minimizes (7). The transform W is constrained to be orthogonal, that is, $W^TW = I$. We first define the matrix of outer products

$Q = \sum_a \pi_a\, q_a \left( \frac{1}{N_a} \sum_{x \in R_a} (x - \mu)^T \right).$   (8)

Minimizing the distortion (7) with respect to the elements of W and using Lagrange multipliers to enforce the orthogonality of W yields the condition

$QW = W^TQ^T,$   (9)

i.e., QW is symmetric. This symmetry condition and the orthogonality condition $W^TW = I$ uniquely determine the coding optimal transform (COT) W. The COT reduces to the PCA transform when the data is Gaussian. In general, however, the COT differs from PCA. For instance, in global transform coding trials on a variety of grayscale images, the COT improves the signal-to-noise ratio (SNR) relative to PCA by 0.2 to 0.35 dB for fixed-rate coding at 1.0 bits per pixel (bpp). For variable-rate coding, the SNR improvement due to the COT is substantial: 0.3 to 1.2 dB for entropies of 0.25 to 1.25 bpp.

We next minimize (7) with respect to the grid mark values $r_{Ji_J}$, for $J = 1 \ldots d$ and $i_J = 1 \ldots K_J$, and the number of grid values $K_J$ for each coordinate. It is advantageous to rewrite the compression distortion as the sum of distortions $D = \sum_J D_J$ due to quantizing the transform coefficients $s_J = w_J^T x$, where $w_J$ is the J-th column vector of W. The grid mark values $r_{Ji_J}$ that minimize each $D_J$ are the reproduction values of a scalar Lloyd quantizer [10] designed for the transform coefficients $s_J$. $K_J$ is the number of reproduction values in the quantizer for transform coordinate J. Allocating the $\log_2(K)$ coding bits among the transform coordinates so as to minimize distortion [11] determines the optimal $K_J$'s.
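One orthogonal W satisfying the symmetry condition (9) can be read off from the singular value decomposition of Q: if $Q = A S B^T$, then $W = BA^T$ gives $QW = ASA^T$, which is symmetric. The sketch below is our own code built on that observation; the paper does not spell out its solver, so this is an assumption about one admissible construction:

import numpy as np

def coding_optimal_transform(X, mu, assign, q):
    # X: (N, d) data; mu: (d,) mean; assign: (N,) component index per vector;
    # q: (K, d) quantizer grid vertices in latent coordinates.
    N, d = X.shape
    Qmat = np.zeros((d, d))
    for a in range(q.shape[0]):        # accumulate Q of eq. (8)
        Ra = X[assign == a]
        if len(Ra) == 0:
            continue
        Qmat += (len(Ra) / N) * np.outer(q[a], (Ra - mu).mean(axis=0))
    A, s, Bt = np.linalg.svd(Qmat)
    return Bt.T @ A.T                  # orthogonal, and Qmat @ W = A diag(s) A^T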
3 Local Transform Coder Model

In this section, we develop a mixture of constrained Gaussian mixtures model that provides a probabilistic framework for adaptive transform coding.

3.1 Latent Variable Model

A local or adaptive transform coder identifies regions in data space that require different quantizer grids and orthogonal transforms. A separate transform coder is designed for each of these regions. To replicate this structure, we envision the observed data as drawn from one of M grids in the latent space. The latent variables s are modeled with a mixture of Gaussian densities, where the mixture components are constrained to lie at the grid vertices. Each grid has the same number of mixture components, K; however, the number and spacing of the grid marks on each axis can differ. This is illustrated schematically (in the hard-clustering limit) in figure 2.

Figure 2: Nonstationary data model: structure of the latent variable space, S, and the mapping (in the hard-clustering limit) to the observed space, X. The density in the latent space consists of mixtures of spherically symmetric Gaussians. The mixture component means, $q_\alpha^{(m)}$, lie at the vertices of the m-th grid. Latent data is mapped to the observation space by $W^{(m)}$.

The density on s conditioned on a single mixture component, $\alpha$ in grid m², is $p(s|\alpha, m) = N(q_\alpha^{(m)}, \sigma^2 I)$. The latent density is a mixture of constrained Gaussian mixture densities,

$p(s) = \sum_{m=1}^{M} \pi_m \sum_{\alpha=1}^{K} p(\alpha|m)\, p(s|\alpha, m).$   (10)

The latent data is mapped to the observation space by orthonormal transforms $W^{(m)}$. The density on x conditioned on $\alpha$ in grid m is $p(x|\alpha, m) = N(W^{(m)}q_\alpha^{(m)} + \mu^{(m)}, \sigma^2 I)$. The observed data density is

$p(x) = \sum_{m=1}^{M} \pi_m \sum_{\alpha=1}^{K} p(\alpha|m)\, p(x|\alpha, m).$   (11)

²Each grid has its own mixture component index, $\alpha_m$. We drop the m subscript from $\alpha_m$ to simplify notation.

3.2 Optimal Adaptive Transform Coder Design

In the limit that $\sigma^2 \to 0$, the EM procedure for fitting this model corresponds to a constrained LBG algorithm for adaptive transform coder design. As before, a single mixture component becomes responsible for each $x_n$:

$p(\alpha, m|x) = \begin{cases} 1 & \text{if } \|x - W^{(m)}q_\alpha^{(m)} - \mu^{(m)}\|^2 \le \|x - W^{(\tilde{m})}q_{\tilde{\alpha}}^{(\tilde{m})} - \mu^{(\tilde{m})}\|^2 \;\; \forall\, \tilde{m}, \tilde{\alpha} \\ 0 & \text{otherwise.} \end{cases}$   (12)

The coding optimal partition assigns each data vector to the region m whose transform coder compresses it with the least distortion. This differs from prior methods, which use other partitioning criteria such as k-means clustering or local PCA partitioning. In k-means clustering, a data vector is assigned to the coder whose mean has the smallest Euclidean distance to it. Local PCA partitions the data space to minimize dimension-reduction error [3], not the coding error. Local PCA also requires a priori selection of a target dimension, instead of allowing the dimension to be optimized for the desired level of compression. To minimize distortion with respect to the transform coders, we can optimize the parameters of each region separately. A region's parameters are estimated from just the data vectors assigned to it. We find each region's transform and the number and placement of its grid mark values as we did for the global transform coder.

4 Adaptive Transform Coding Results

We find the adaptive transform coder for a set of images by applying our algorithm to a training image. The data vectors are 8 x 8 blocks of image pixels. We then compress a test image using the resulting transform coder. We measure compressed test image quality with the signal-to-noise ratio, SNR = $10 \log_{10}$(pixel variance/MSE), where MSE is the per-pixel mean-squared coding error. Our implementation modifies the codebook optimization to reduce computational requirements. First, instead of using optimal bit allocation, we use a greedy algorithm [12], which allocates bits one at a time to the coordinate with the largest distortion. In global transform coding trials (0.375 to 0.75 bpp), this substitution reduced SNR by less than 0.1 dB. Second, instead of using the coding optimal transform (9), we use the PCA transform. In global transform coding trials (0.25 to 0.75 bpp), this substitution reduced SNR by 0.05 to 0.27 dB. We report on compression experiments using two types of images: magnetic resonance images (MRI) and gray-scale natural images of traffic moving through street intersections. The MRI images were used by Dony and Haykin in [5], and we duplicate their image pre-processing. One MRI image is decomposed into overlapping 8 x 8 blocks to form 15,625 training vectors; a second image is used for testing. The traffic images are frames from two video sequences. We use frames from the first half of both sequences for training and frames from the last halves for testing.
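The resulting design loop alternates the coding-optimal partition (12) with per-region refits. The sketch below is our own code and mirrors the simplifications described above: it substitutes the PCA transform for the COT and a uniform scalar quantizer for the Lloyd quantizer, and it omits bit allocation.

import numpy as np

def design_adaptive_coder(X, M, iters=10, bits=4, seed=0):
    # X: (N, d) training vectors. Returns per-region (mu, W) and assignments.
    rng = np.random.default_rng(seed)
    N, d = X.shape
    assign = rng.integers(0, M, size=N)        # random initial partition
    regions = [None] * M
    for _ in range(iters):
        for m in range(M):                     # refit each region's coder
            Xm = X[assign == m]
            if len(Xm) < d:
                Xm = X                         # degenerate region: fall back to all data
            mu = Xm.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xm - mu, full_matrices=False)
            regions[m] = (mu, Vt.T)            # PCA transform (stand-in for the COT)
        dists = np.empty((N, M))
        for m, (mu, W) in enumerate(regions):
            s = (X - mu) @ W
            step = (s.max(0) - s.min(0)) / 2**bits + 1e-12
            s_hat = np.round(s / step) * step  # uniform scalar quantizer per coordinate
            dists[:, m] = ((X - (s_hat @ W.T + mu)) ** 2).sum(axis=1)
        assign = dists.argmin(axis=1)          # coding-optimal partition, eq. (12)
    return regions, assign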
Figure 3: Compressed test image SNR versus compressed bit-rate (bpp) for (a) the MRI test image, where all adaptive coders have 16 regions, and (b) the traffic test image, where all adaptive coders have 32 regions. The x is our coding optimal partition, o the local PCA partition with dimension eight, ● k-means clustering, and + global PCA. The dotted-line values are the local PCA results from [5]. Error bars indicate the standard deviation over 8 trials.

Figure 3 shows compressed test image SNR for four compressed bit-rates and four compression methods. The quoted bit-rates include the bits necessary to specify the region assignments. The x results are for our transform coder, which uses coding-optimal partitioning. Our system increases SNR compared to global PCA (+) by 2.3 to 3.0 dB, to k-means clustering (●) by 1.1 to 1.8 dB, and to local PCA partitioning with target dimension eight (o) by 0.5 to 2.0 dB. In addition, our system yields image SNRs 1.6 to 3.0 dB higher than Dony and Haykin's local PCA transform coder (dimension eight) [5]. Their local PCA coder does not use optimal bit allocation or Lloyd quantizers, which further reduces compressed image SNR.

5 Summary

In this paper, we cast the design of both conventional and adaptive transform coders as a constrained optimization procedure. We derive our algorithm from the EM procedure for fitting a mixture of mixtures model to data. In contrast to standard transform coder design, all operations (partitioning the signal space in the adaptive case, transform design, allocation of coding bits, and quantizer design) are coupled together to minimize compression distortion. This approach leads to a new transform basis that is optimized for coding; the coding optimal transform is in general different from PCA. It also leads to a method of data space partitioning that is optimized for coding: each signal vector is assigned to the coder that compresses it with the least distortion. Our empirical results show a marked SNR improvement (0.5 to 2 dB) relative to other partitioning methods.

Acknowledgements

The authors wish to thank Robert Dony and Simon Haykin for the use of their MRI image data and the Institut für Algorithmen und Kognitive Systeme, Universität Karlsruhe, for making their traffic images available. This work was funded by NSF under grants ECS-9704094 and ECS-9976452.

References

[1] Mark A. Kramer. Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal, 37(2):233-243, February 1991.
[2] David DeMers and Garrison Cottrell. Non-linear dimensionality reduction. In Giles, Hanson, and Cowan, editors, Advances in Neural Information Processing Systems 5, San Mateo, CA, 1993. Morgan Kaufmann.
[3] Nanda Kambhatla and Todd K. Leen. Fast non-linear dimension reduction. In Cowan, Tesauro, and Alspector, editors, Advances in Neural Information Processing Systems 6, pages 152-159. Morgan Kaufmann, February 1994.
[4] G. Hinton, M. Revow, and P. Dayan. Recognizing handwritten digits using mixtures of linear models. In Tesauro, Touretzky, and Leen, editors, Advances in Neural Information Processing Systems 7, pages 1015-1022. MIT Press, 1995.
[5] Robert D. Dony and Simon Haykin. Optimally adaptive transform coding. IEEE Transactions on Image Processing, 4(10):1358-1370, 1995.
[6] M. Tipping and C. Bishop.
Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2):443-483, 1999.
[7] C. Archer and T. K. Leen. Optimal dimension reduction and transform coding with mixture principal components. In Proceedings of the International Joint Conference on Neural Networks, July 1999.
[8] Steve Nowlan. Soft Competitive Adaptation: Neural Network Learning Algorithms Based on Fitting Statistical Mixtures. PhD thesis, School of Computer Science, Carnegie Mellon University, 1991.
[9] Y. Linde, A. Buzo, and R. M. Gray. An algorithm for vector quantizer design. IEEE Transactions on Communications, 28(1):84-95, January 1980.
[10] S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
[11] Eve A. Riskin. Optimal bit allocation via the generalized BFOS algorithm. IEEE Transactions on Information Theory, 37(2):400-402, 1991.
[12] A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer Academic, 1992.
922
1,844
A PAC-Bayesian Margin Bound for Linear Classifiers: Why SVMs work
Ralf Herbrich
Statistics Research Group, Computer Science Department, Technical University of Berlin. [email protected]
Thore Graepel
Statistics Research Group, Computer Science Department, Technical University of Berlin. [email protected]

Abstract
We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training set. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound by Shawe-Taylor et al. [8] and scales logarithmically in the inverse margin. Even in the case of fewer training examples than input dimensions, sufficiently large margins lead to non-trivial bound values and, for maximum margins, to a vanishing complexity term. Furthermore, the classical margin is too coarse a measure for the essential quantity that controls the generalisation error: the volume ratio between the whole hypothesis space and the subset of consistent hypotheses. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal w.r.t. the new bound only if the feature vectors are all of the same length. As a consequence, we recommend using SVMs on normalised feature vectors only, a recommendation that is well supported by our numerical experiments on two benchmark data sets.

1 Introduction
Linear classifiers are exceedingly popular in the machine learning community due to their straightforward applicability and high flexibility, which has recently been boosted by the so-called kernel methods [13]. A natural and popular framework for the theoretical analysis of classifiers is the PAC (probably approximately correct) framework [11], which is closely related to Vapnik's work on the generalisation error [12]. For binary classifiers it turned out that the growth function is an appropriate measure of "complexity" and can be tightly upper bounded by the VC (Vapnik-Chervonenkis) dimension [14]. Later, structural risk minimisation [12] was suggested for directly minimising the VC dimension based on a training set and an a priori structuring of the hypothesis space. In practice, e.g. in the case of linear classifiers, often a thresholded real-valued function is used for classification. In 1993, Kearns [4] demonstrated that considerably tighter bounds can be obtained by considering a scale-sensitive complexity measure known as the fat shattering dimension. Further results [1] provided bounds on the growth function similar to those proved by Vapnik and others [14, 6]. The popularity of the theory was boosted by the invention of the support vector machine (SVM) [13], which aims at directly minimising the complexity as suggested by theory. Until recently, however, the success of the SVM remained somewhat obscure, because in PAC/VC theory the structuring of the hypothesis space must be independent of the training data, in contrast to the data-dependence of the canonical hyperplane. As a consequence, Shawe-Taylor et al. [8] developed the luckiness framework, where luckiness refers to a complexity measure that is a function of both hypothesis and training sample.
Recently, David McAllester presented some PAC-Bayesian theorems [5] that bound the generalisation error of Bayesian classifiers independently of the correctness of the prior and regardless of the underlying data distribution, thus fulfilling the basic desiderata of PAC theory. In [3] McAllester's bounds on the Gibbs classifier were extended to the Bayes (optimal) classifier. The PAC-Bayesian framework provides a posteriori bounds and is thus closely related in spirit to the luckiness framework¹.

In this paper we give a tight margin bound for linear classifiers in the PAC-Bayesian framework. The main idea is to identify the generalisation error of the classifier h of interest with that of the Bayes (optimal) classifier of a (point-symmetric) subset Q that is summarised by h. We show that for a uniform prior the normalised margin of h is directly related to the volume of a large subset Q summarised by h. In particular, the result suggests that a learning algorithm for linear classifiers should aim at maximising the normalised margin instead of the classical margin. In Sections 2 and 3 we review the basic PAC-Bayesian theorem and show how it can be applied to single classifiers. In Section 4 we give our main result and outline its proof. In Section 5 we discuss the consequences of the new result for the application of SVMs and demonstrate experimentally that a normalisation of the feature vectors in fact leads to considerably superior generalisation performance.

We denote n-tuples by italic bold letters (e.g. $x = (x_1, \ldots, x_n)$), vectors by roman bold letters (e.g. $\mathbf{x}$), random variables by sans serif font (e.g. X) and vector spaces by calligraphic capitalised letters (e.g. $\mathcal{X}$). The symbols $\mathbf{P}$, $\mathbf{E}$, $\mathbf{I}$ and $\ell_2^n$ denote a probability measure, the expectation of a random variable, the indicator function and the normed space (2-norm) of sequences of length n, respectively.

2 A PAC Margin Bound

We consider learning in the PAC framework. Let $\mathcal{X}$ be the input space, and let $\mathcal{Y} = \{-1, +1\}$. Let a labelled training sample $z = (x, y) \in (\mathcal{X} \times \mathcal{Y})^m = \mathcal{Z}^m$ be drawn iid according to some unknown probability measure $\mathbf{P}_Z = \mathbf{P}_{Y|X}\mathbf{P}_X$. Furthermore, for a given hypothesis space $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ we assume the existence of a "true" hypothesis $h^* \in \mathcal{H}$ that labelled the data,

$\mathbf{P}_{Y|X=x}(y) = \mathbf{I}_{y = h^*(x)}.$   (1)

We consider linear hypotheses

$\mathcal{H} = \{h_{\mathbf{w}}: x \mapsto \mathrm{sign}(\langle \mathbf{w}, \phi(x) \rangle_{\mathcal{K}}) \mid \mathbf{w} \in \mathcal{W}\}, \quad \mathcal{W} = \{\mathbf{w} \in \mathcal{K} \mid \|\mathbf{w}\|_{\mathcal{K}} = 1\},$   (2)

¹In fact, even Shawe-Taylor et al. concede that "... a Bayesian might say that luckiness is just a complicated way of encoding a prior. The sole justification for our particular way of encoding is that it allows us to get the PAC like results we sought ..." [9, p. 4].

where the mapping $\phi: \mathcal{X} \to \mathcal{K} \subseteq \ell_2^n$ maps² the input data to some feature space $\mathcal{K}$, and $\|\mathbf{w}\|_{\mathcal{K}} = 1$ gives a one-to-one correspondence between hypotheses $h_{\mathbf{w}}$ and their parameters $\mathbf{w}$. From the existence of $h^*$ we know that there exists a version space $V(z) \subseteq \mathcal{W}$,

$V(z) = \{\mathbf{w} \in \mathcal{W} \mid \forall (x_i, y_i) \in z:\ h_{\mathbf{w}}(x_i) = y_i\}.$

Our analysis aims at bounding the true risk $R[\mathbf{w}]$ of consistent hypotheses $h_{\mathbf{w}}$, $R[\mathbf{w}] = \mathbf{P}_{XY}(h_{\mathbf{w}}(X) \ne Y)$. Since all classifiers $\mathbf{w} \in V(z)$ are indistinguishable in terms of the number of errors committed on the given training set z, let us introduce the concept of the margin $\gamma_z(\mathbf{w})$ of a classifier $\mathbf{w}$, i.e.

$\gamma_z(\mathbf{w}) = \min_{(x_i, y_i) \in z} \frac{y_i \langle \mathbf{w}, \mathbf{x}_i \rangle_{\mathcal{K}}}{\|\mathbf{w}\|_{\mathcal{K}}}.$   (3)
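In coordinates, definition (3) is a one-liner; the sketch below is our own illustration and assumes the feature map has already been applied to the data:

import numpy as np

def classical_margin(w, X, y):
    # gamma_z(w) = min_i y_i <w, x_i> / ||w||, eq. (3); X: (m, n), y in {-1, +1}^m
    return np.min(y * (X @ w)) / np.linalg.norm(w)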
The following theorem, due to Shawe-Taylor et al. [8], bounds the generalisation error $R[\mathbf{w}]$ of all classifiers $\mathbf{w} \in V(z)$ in terms of the margin $\gamma_z(\mathbf{w})$.

Theorem 1 (PAC margin bound). For all probability measures $\mathbf{P}_Z$ such that $\mathbf{P}_X(\|\phi(X)\|_{\mathcal{K}} \le \xi) = 1$, for any $\delta > 0$, with probability at least $1 - \delta$ over the random draw of the training set z: if we succeed in correctly classifying m samples z with a linear classifier $\mathbf{w}$ achieving a positive margin $\gamma_z(\mathbf{w}) > \sqrt{32\xi^2/m}$, then the generalisation error $R[\mathbf{w}]$ of $\mathbf{w}$ is bounded from above by

$\frac{2}{m}\left( \kappa \log_2\left(\frac{8em}{\kappa}\right) \log_2(32m) + \log_2\left(\frac{8m}{\delta}\right) \right), \quad \kappa = \left\lceil \frac{64\xi^2}{\gamma_z^2(\mathbf{w})} \right\rceil.$   (4)

As the bound on $R[\mathbf{w}]$ depends linearly on $\gamma_z^{-2}(\mathbf{w})$, we see that Theorem 1 provides a theoretical foundation for all algorithms that aim at maximising $\gamma_z(\mathbf{w})$, e.g. SVMs and boosting [13, 7].

3 PAC-Bayesian Analysis

We first present a result [5] that bounds the risk of the generalised Gibbs classification strategy $\mathrm{Gibbs}_{W(z)}$ by the measure $\mathbf{P}_W(W(z))$ of a consistent subset $W(z) \subseteq V(z)$. This average risk is then related via the Bayes-Gibbs lemma to the risk of the Bayes classification strategy $\mathrm{Bayes}_{W(z)}$ on $W(z)$. For a single consistent hypothesis $\mathbf{w} \in \mathcal{W}$ it is then necessary to identify a consistent subset $Q(\mathbf{w})$ such that the Bayes strategy $\mathrm{Bayes}_{Q(\mathbf{w})}$ on $Q(\mathbf{w})$ always agrees with $\mathbf{w}$. Let us define the Gibbs classification strategy $\mathrm{Gibbs}_{W(z)}$ w.r.t. the subset $W(z) \subseteq V(z)$ by

$\mathrm{Gibbs}_{W(z)}(x) = h_{\mathbf{w}}(x), \quad \mathbf{w} \sim \mathbf{P}_{W|W \in W(z)}.$   (5)

Then the following theorem [5] holds for the risk of $\mathrm{Gibbs}_{W(z)}$.

Theorem 2 (PAC-Bayesian bound for subsets of classifiers). For any measure $\mathbf{P}_W$ and any measure $\mathbf{P}_Z$, for any $\delta > 0$, with probability at least $1 - \delta$ over the random draw of the training set z, for all subsets $W(z) \subseteq V(z)$ such that $\mathbf{P}_W(W(z)) > 0$, the generalisation error of the associated Gibbs classification strategy $\mathrm{Gibbs}_{W(z)}$ is bounded from above by

$R[\mathrm{Gibbs}_{W(z)}] \le \frac{1}{m}\left( \ln\left(\frac{1}{\mathbf{P}_W(W(z))}\right) + 2\ln(m) + \ln\left(\frac{1}{\delta}\right) + 1 \right).$   (6)

²For notational simplicity we sometimes abbreviate $\phi(x)$ by $\mathbf{x}$, which should not be confused with the sample x of training objects.

Now consider the Bayes classifier $\mathrm{Bayes}_{W(z)}$,

$\mathrm{Bayes}_{W(z)}(x) = \mathrm{sign}\left( \mathbf{E}_{W|W \in W(z)}\left[ h_{\mathsf{W}}(x) \right] \right),$

where the expectation $\mathbf{E}_{W|W \in W(z)}$ is taken over a cut-off posterior given by combining the PAC likelihood (1) and the prior $\mathbf{P}_W$.

Lemma 1 (Bayes-Gibbs lemma). For any two measures $\mathbf{P}_W$ and $\mathbf{P}_{XY}$ and any set $W \subseteq \mathcal{W}$,

$\mathbf{P}_{XY}(\mathrm{Bayes}_W(X) \ne Y) \le 2 \cdot \mathbf{P}_{XY}(\mathrm{Gibbs}_W(X) \ne Y).$   (7)

Proof. (Sketch) Consider only the simple PAC setting we need. At all those points $x \in \mathcal{X}$ at which $\mathrm{Bayes}_W$ is wrong, by definition at least half of the classifiers $\mathbf{w} \in W$ under consideration make a mistake as well. □

The combination of Lemma 1 with Theorem 2 yields a bound on the risk of $\mathrm{Bayes}_{W(z)}$. For a single hypothesis $\mathbf{w} \in \mathcal{W}$ let us find a (Bayes-admissible) subset $Q(\mathbf{w})$ of version space $V(z)$ such that $\mathrm{Bayes}_{Q(\mathbf{w})}$ on $Q(\mathbf{w})$ agrees with $\mathbf{w}$ on every point in $\mathcal{X}$.

Definition 1 (Bayes admissibility). Given the hypothesis space in (2) and a prior measure $\mathbf{P}_W$ over $\mathcal{W}$, we call a subset $Q(\mathbf{w}) \subseteq \mathcal{W}$ Bayes admissible w.r.t. $\mathbf{w}$ and $\mathbf{P}_W$ if and only if

$\forall x \in \mathcal{X}:\ h_{\mathbf{w}}(x) = \mathrm{Bayes}_{Q(\mathbf{w})}(x).$

Although difficult to achieve in general, the following geometrically plausible lemma establishes Bayes admissibility for the case of interest.

Lemma 2 (Bayes admissibility for linear classifiers). For the uniform measure $\mathbf{P}_W$ over $\mathcal{W}$, each ball $Q(\mathbf{w}) = \{\mathbf{v} \in \mathcal{W} \mid \|\mathbf{w} - \mathbf{v}\|_{\mathcal{K}} \le r\}$ is Bayes admissible w.r.t. its centre $\mathbf{w}$.

Please note that by considering a ball $Q(\mathbf{w})$ rather than just $\mathbf{w}$ we make use of the fact that $\mathbf{w}$ summarises all its neighbouring classifiers $\mathbf{v} \in Q(\mathbf{w})$. Now, using a uniform prior $\mathbf{P}_W$, the normalised margin

$\Gamma_z(\mathbf{w}) = \min_{(x_i, y_i) \in z} \frac{y_i \langle \mathbf{w}, \mathbf{x}_i \rangle_{\mathcal{K}}}{\|\mathbf{w}\|_{\mathcal{K}}\, \|\mathbf{x}_i\|_{\mathcal{K}}}$   (8)

quantifies the relative volume of classifiers summarised by $\mathbf{w}$ and thus allows us to bound its risk.
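The normalised margin (8) differs from (3) only by the per-example normalisation; a companion sketch (again our own code) makes the comparison explicit:

import numpy as np

def normalised_margin(w, X, y):
    # Gamma_z(w) = min_i y_i <w, x_i> / (||w|| ||x_i||), eq. (8)
    return np.min(y * (X @ w) / np.linalg.norm(X, axis=1)) / np.linalg.norm(w)

# If all feature vectors have the same length c, then Gamma_z = gamma_z / c, so an
# SVM (which maximises gamma_z) also maximises Gamma_z only in that special case.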
Note that in contrast to the classical margin γ_z (see (3)), this normalised margin is a dimensionless quantity and constitutes a measure for the relative size of the version space that is invariant under rescaling of both the weight vector w and the feature vectors x_i.

4 A PAC-Bayesian Margin Bound

Combining the ideas outlined in the previous section allows us to derive a generalisation error bound for linear classifiers w ∈ V(z) in terms of their normalised margin Γ_z(w).

Figure 1: Illustration of the volume ratio for the classifier at the north pole. Four training points, shown as great circles, make up version space - the polyhedron on top of the sphere. The radius of the "cap" of the sphere is proportional to the margin Γ_z, which only for constant ‖x_i‖_K is maximised by the SVM.

Theorem 3 (PAC-Bayesian margin bound). Suppose K ⊆ ℓ₂ⁿ is a given feature space of dimensionality n. For all probability measures P_Z, for any δ > 0, with probability at least 1 - δ over the random draw of the training set z: if we succeed in correctly classifying m samples z with a linear classifier w achieving a positive margin Γ_z(w) > 0, then the generalisation error R[w] of w is bounded from above by

    (2/m) · ( d · ln( 1 / (1 - √(1 - Γ_z²(w))) ) + 2 ln(m) + ln(1/δ) + 2 ) ,   (9)

where d = min(m, n).

Proof. Geometrically, the hypothesis space W is the unit sphere in ℝⁿ (see Figure 1). Let us assume that P_W is uniform on the unit sphere, as suggested by symmetry. Given the training set z and a classifier w, all classifiers v ∈ Q(w),

    Q(w) = { v ∈ W | ⟨w, v⟩_K > √(1 - Γ_z²(w)) } ,                            (10)

are within V(z) (for a proof see [2]). Such a set Q(w) is Bayes-admissible by Lemma 2, and hence we can use P_W(Q(w)) to bound the generalisation error of w. Since P_W is uniform, the value -ln(P_W(Q(w))) is simply the logarithm of the volume ratio between the surface of the unit sphere and the surface of all v fulfilling equation (10). In [2] it is shown that this ratio is exactly given by

    ln( ∫₀^π sin^{n-2}(θ) dθ  /  ∫₀^{arccos(√(1 - Γ_z²(w)))} sin^{n-2}(θ) dθ ) ,

and that it is tightly bounded from above by

    n · ln( 1 / (1 - √(1 - Γ_z²(w))) ) + ln(2) .

With ln(2) < 1 we obtain the desired result. Note that m points maximally span an m-dimensional subspace, and thus we can marginalise over the remaining n - m dimensions of feature space K. This gives d = min(m, n). □

Figure 2: Generalisation errors of classifiers learned by an SVM with (dashed line) and without (solid line) normalisation of the feature vectors x_i, as a function of the polynomial degree p. The error bars indicate one standard deviation over 100 random splits of the data sets. The two plots are obtained on the (a) thyroid and (b) sonar data sets.

An appealing feature of equation (9) is that for Γ_z(w) = 1 the bound reduces to (2/m)(2 ln(m) + ln(1/δ) + 2), which decays rapidly to zero as m increases. In the case of margins Γ_z(w) > 0.91, the troublesome situation of d = m, which occurs e.g. for RBF kernels, is compensated for. Furthermore, upper bounding 1/(1 - √(1 - Γ_z²(w))) by 2/Γ_z²(w), we see that Theorem 3 is an exponential improvement of Theorem 1 in terms of the attained margins. It should be noted, however, that the new bound depends on the dimensionality of the input space via d = min(m, n).
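To see the behaviour of the bound numerically, one can simply evaluate the right-hand side of (9). The sketch below is ours, with hypothetical sample sizes and margins chosen only for illustration:

```python
import numpy as np

def pac_bayes_margin_bound(gamma_norm, m, n, delta=0.05):
    """Numerical value of the right-hand side of equation (9).
    gamma_norm is the normalised margin Gamma_z(w) in (0, 1]."""
    d = min(m, n)
    vol_term = d * np.log(1.0 / (1.0 - np.sqrt(1.0 - gamma_norm ** 2)))
    return (2.0 / m) * (vol_term + 2.0 * np.log(m) + np.log(1.0 / delta) + 2.0)

# e.g. m = 1000 samples in an n = 10 dimensional feature space:
for g in (0.999, 0.9, 0.5):
    print(g, pac_bayes_margin_bound(g, m=1000, n=10))
```

Consistent with the discussion above, the value approaches (2/m)(2 ln(m) + ln(1/δ) + 2) as Γ_z(w) → 1 and degrades gracefully for smaller normalised margins.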
5 Experimental Study

Theorem 3 suggests the following learning algorithm: given a version space V(z) (determined by a given training set z), find the classifier w that maximises Γ_z(w). This algorithm, however, is given by the SVM only if the training data in feature space K are normalised. We investigate the influence of such a normalisation on the generalisation error in the feature space K of all monomials up to the p-th degree (well known from handwritten digit recognition, see [13]). Since the SVM learning algorithm as well as the resulting classifier only refer to inner products in K, it suffices to use an easy-to-calculate kernel function k : X × X → ℝ such that for all x, x' ∈ X, k(x, x') = ⟨φ(x), φ(x')⟩_K, given in our case by the polynomial kernel

    ∀p ∈ ℕ :  k(x, x') = ( ⟨x, x'⟩_X + 1 )^p .

Earlier experiments have shown [13] that without normalisation, too large values of p may lead to "overfitting". We used the UCI [10] data sets thyroid (d = 5, m = 140, m_test = 75) and sonar (d = 60, m = 124, m_test = 60) and plotted the generalisation error of SVM solutions (estimated over 100 different splits of the data set) as a function of p (see Figure 2). As suggested by Theorem 3, in almost all cases the normalisation improved the performance of the support vector machine solution at a statistically significant level. As a consequence, we recommend: When training an SVM, always normalise your data in feature space. Intuitively, it is only the spatial direction of both the weight vector and the feature vectors that determines the classification. Hence the differing lengths of the feature vectors in the training set should not enter the SVM optimisation problem.
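Since both the SVM and the resulting classifier access the data only through inner products, this normalisation never requires the feature vectors explicitly: rescaling φ(x) to unit length corresponds to the normalised kernel k̃(x, x') = k(x, x') / √(k(x, x) · k(x', x')). A minimal sketch of ours, for illustration:

```python
import numpy as np

def polynomial_kernel(X1, X2, p):
    """k(x, x') = (<x, x'> + 1)^p for all pairs of rows of X1 and X2."""
    return (X1 @ X2.T + 1.0) ** p

def normalised_kernel(X1, X2, p):
    """Kernel of the feature vectors rescaled to unit length in K:
    k~(x, x') = k(x, x') / sqrt(k(x, x) * k(x', x'))."""
    d1 = ((X1 * X1).sum(axis=1) + 1.0) ** p   # k(x, x) for rows of X1
    d2 = ((X2 * X2).sum(axis=1) + 1.0) ** p   # k(x', x') for rows of X2
    return polynomial_kernel(X1, X2, p) / np.sqrt(np.outer(d1, d2))
```

Feeding the normalised Gram matrix to any standard SVM solver implements the recommendation above without touching the data itself.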
6 Conclusion

The PAC-Bayesian framework together with simple geometrical arguments yields the tightest margin bound for linear classifiers so far. The role of the normalised margin Γ_z in the new bound suggests that the SVM is theoretically justified only for input vectors of constant length. We hope that this result is recognised as a useful bridge between theory and practice, in the spirit of Vapnik's famous statement: Nothing is more practical than a good theory.

Acknowledgements

We would like to thank David McAllester, John Shawe-Taylor, Bob Williamson, Olivier Chapelle, John Langford, Alex Smola and Bernhard Scholkopf for interesting discussions and useful suggestions on earlier drafts.

References

[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence and learnability. Journal of the ACM, 44(4):615-631, 1997.
[2] R. Herbrich. Learning Linear Classifiers - Theory and Algorithms. PhD thesis, Technische Universitat Berlin, 2000. Accepted for publication by MIT Press.
[3] R. Herbrich, T. Graepel, and C. Campbell. Bayesian learning in reproducing kernel Hilbert spaces. Technical Report TR 99-11, Technical University of Berlin, 1999.
[4] M. J. Kearns and R. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48(2):464-497, 1993.
[5] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin, 1998.
[6] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13:145-147, 1972.
[7] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In Proceedings of the 14th International Conference on Machine Learning, 1997.
[8] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[9] J. Shawe-Taylor and R. C. Williamson. A PAC analysis of a Bayesian estimator. Technical Report NC2-TR-1997-013, Royal Holloway, University of London, 1997.
[10] UCI. University of California Irvine: Machine Learning Repository, 1990.
[11] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
[12] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer, 1982.
[13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[14] V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-281, 1971.
Structure learning in human causal induction

Joshua B. Tenenbaum & Thomas L. Griffiths
Department of Psychology
Stanford University, Stanford, CA 94305
{jbt,gruffydd}@psych.stanford.edu

Abstract

We use graphical models to explore the question of how people learn simple causal relationships from data. The two leading psychological theories can both be seen as estimating the parameters of a fixed graph. We argue that a complete account of causal induction should also consider how people learn the underlying causal graph structure, and we propose to model this inductive process as a Bayesian inference. Our argument is supported through the discussion of three data sets.

1 Introduction

Causality plays a central role in human mental life. Our behavior depends upon our understanding of the causal structure of our environment, and we are remarkably good at inferring causation from mere observation. Constructing formal models of causal induction is currently a major focus of attention in computer science [7], psychology [3, 6], and philosophy [5]. This paper attempts to connect these literatures, by framing the debate between two major psychological theories in the computational language of graphical models. We show that existing theories equate human causal induction with maximum likelihood parameter estimation on a fixed graphical structure, and we argue that to fully account for human behavioral data, we must also postulate that people make Bayesian inferences about the underlying causal graph structure itself.

Psychological models of causal induction address the question of how people learn associations between causes and effects, such as P(C→E), the probability that some event C causes outcome E. This question might seem trivial at first; why isn't P(C→E) simply P(e+|c+), the conditional probability that E occurs (E = e+ as opposed to e-) given that C occurs? But consider the following scenarios.

Three case studies have been done to evaluate the probability that certain chemicals, when injected into rats, cause certain genes to be expressed. In case 1, levels of gene 1 were measured in 100 rats injected with chemical 1, as well as in 100 uninjected rats; cases 2 and 3 were conducted likewise but with different chemicals and genes. In case 1, 40 out of 100 injected rats were found to have expressed the gene, while 0 out of 100 uninjected rats expressed the gene. We will denote these results as {40/100, 0/100}. Case 2 produced the results {7/100, 0/100}, while case 3 yielded {53/100, 46/100}. For each case, we would like to know the probability that the chemical causes the gene to be expressed, P(C→E), where C denotes the chemical and E denotes gene expression. People typically rate P(C→E) highest for case 1, followed by case 2 and then case 3. In an experiment described below, these cases received mean ratings (on a 0-20 scale) of 14.9 ± .8, 8.6 ± .9, and 4.9 ± .7, respectively. Clearly P(C→E) ≠ P(e+|c+), because case 3 has the highest value of P(e+|c+) but receives the lowest rating for P(C→E).

The two leading psychological models of causal induction elaborate upon this basis in attempting to specify P(C→E). The ΔP model [6] claims that people estimate P(C→E) according to

    ΔP = P(e+|c+) - P(e+|c-) .                                       (1)

(We restrict our attention here to facilitatory causes, in which case ΔP is always between 0 and 1.) Equation 1 captures the intuition that C is perceived to cause E to the extent that C's occurrence increases the likelihood of observing E.
Recently, Cheng [3] has identified several shortcomings of ΔP and proposed that P(C→E) instead corresponds to causal power, the probability that C produces E in the absence of all other causes. Formally, the power model can be expressed as:

    power = ΔP / ( 1 - P(e+|c-) ) .                                  (2)

There are a variety of normative arguments in favor of either of these models [3, 7]. Empirically, however, neither model is fully adequate to explain human causal induction. We will present ample evidence for this claim below, but for now, the basic problem can be illustrated with the three scenarios above. While people rate P(C→E) higher for case 2, {7/100, 0/100}, than for case 3, {53/100, 46/100}, ΔP rates them equally and the power model ranks case 3 over case 2. To understand this discrepancy, we have to distinguish between two possible senses of P(C→E): "the probability that C causes E (on any given trial when C is present)" versus "the probability that C is a cause of E (in general, as opposed to being causally independent of E)". Our claim is that the ΔP and power models concern only the former sense, while people's intuitions about P(C→E) are often concerned with the latter. In our example, while the effect of C on any given trial in case 3 may be equal to (according to ΔP) or stronger than (according to power) its effect in case 2, the general pattern of results seems more likely in case 2 than in case 3 to be due to a genuine causal influence, as opposed to a spurious correlation between random samples of two independent variables. In the following section, we formalize this distinction in terms of parameter estimation versus structure learning on a graphical model. Section 3 then compares two variants of our structure learning model with the parameter estimation models (ΔP and power) in light of data from three experiments on human causal induction.

2 Graphical models of causal induction

The language of causal graphical models provides a useful framework for thinking about people's causal intuitions [5, 7]. All the induction models we consider here can be viewed as computations on a simple directed graph (Graph1 in Figure 1). The effect node E is the child of two binary-valued parent nodes: C, the putative cause, and B, a constant background. Let X = (C_1, E_1), ..., (C_N, E_N) denote a sequence of N trials in which C and E are each observed to be present or absent; B is assumed to be present on all trials. (To keep notation concise in this section, we use 1 or 0 in addition to + or - to denote the presence or absence of an event, e.g. C_i = 1 if the cause is present on the ith trial.) Each parent node is associated with a parameter, w_B or w_C, that defines the strength of its effect on E. In the ΔP model, the probability of E occurring is a linear function of C:

    Q(e+|c; w_B, w_C) = w_B + w_C · c .                              (3)

(We use Q to denote model probabilities and P for empirical probabilities in the sample X.) In the causal power model, as first shown by Glymour [5], E is a noisy-OR gate:

    Q(e+|c; w_B, w_C) = 1 - (1 - w_B)(1 - w_C)^c .                   (4)

2.1 Parameter inferences: ΔP and Causal Power

In this framework, both the ΔP and power models' predictions for P(C→E) can be seen as maximum likelihood estimates of the causal strength parameter w_C in Graph1, but under different parameterizations. For either model, the log-likelihood of the data is given by

    ℓ(X | w_B, w_C) = Σ_{i=1}^{N} log [ Q(e+|c_i)^{e_i} (1 - Q(e+|c_i))^{1-e_i} ]   (5)

                    = Σ_{i=1}^{N} e_i log Q(e+|c_i) + (1 - e_i) log(1 - Q(e+|c_i)) ,   (6)

where we have suppressed the dependence of Q(e+|c_i) on w_B and w_C.
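Before unpacking this sum, a quick concrete illustration: the two estimators are easy to compute from the raw trial counts. The sketch below is ours (hypothetical function names), anticipating the maximum likelihood identities derived next; it reproduces the ΔP and power values for the three scenarios from the introduction.

```python
def empirical_probs(n_e_c, n_c, n_e_notc, n_notc):
    """P(e+|c+) and P(e+|c-) from counts, e.g. (40, 100, 0, 100)."""
    return n_e_c / n_c, n_e_notc / n_notc

def delta_p(p_ec, p_enc):
    """Equation (1): the linear model's ML estimate of w_C."""
    return p_ec - p_enc

def causal_power(p_ec, p_enc):
    """Equation (2): the noisy-OR model's ML estimate of w_C."""
    return (p_ec - p_enc) / (1.0 - p_enc)

# The three scenarios from the introduction:
for counts in [(40, 100, 0, 100), (7, 100, 0, 100), (53, 100, 46, 100)]:
    p_ec, p_enc = empirical_probs(*counts)
    print(counts, delta_p(p_ec, p_enc), causal_power(p_ec, p_enc))
```

As the text notes, ΔP rates cases 2 and 3 equally (0.07), while power ranks case 3 (0.13) over case 2 (0.07).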
Breaking this sum into four parts, one for each possible combination of {e+, e-} and {c+, c-} that could be observed, ℓ(X | w_B, w_C) can be written as

    ℓ(X | w_B, w_C) = N P(c+) [ P(e+|c+) log Q(e+|c+) + (1 - P(e+|c+)) log(1 - Q(e+|c+)) ]
                    + N P(c-) [ P(e+|c-) log Q(e+|c-) + (1 - P(e+|c-)) log(1 - Q(e+|c-)) ] .   (7)

By the information inequality [4], Equation 7 is maximized whenever w_B and w_C can be chosen to make the model probabilities equal to the empirical probabilities:

    Q(e+|c+; ŵ_B, ŵ_C) = P(e+|c+) ,                                  (8)
    Q(e+|c-; ŵ_B, ŵ_C) = P(e+|c-) .                                  (9)

To show that the ΔP model's predictions for P(C→E) correspond to maximum likelihood estimates of w_C under a linear parameterization of Graph1, we identify w_C in Equation 3 with ΔP (Equation 1), and w_B with P(e+|c-). Equation 3 then reduces to P(e+|c+) for the case c = c+ (i.e., c = 1) and to P(e+|c-) for the case c = c- (i.e., c = 0), thus satisfying the sufficient conditions in Equations 8-9 for w_B and w_C to be maximum likelihood estimates. To show that the causal power model's predictions for P(C→E) correspond to maximum likelihood estimates of w_C under a noisy-OR parameterization, we follow the analogous procedure: identify w_C in Equation 4 with power (Equation 2), and w_B with P(e+|c-). Then Equation 4 reduces to P(e+|c+) for c = c+ and to P(e+|c-) for c = c-, again satisfying the conditions for w_B and w_C to be maximum likelihood estimates.

2.2 Structural inferences: Causal Support and χ²

The central claim of this paper is that people's judgments of P(C→E) reflect something other than estimates of causal strength parameters - the quantities that we have just shown to be computed by ΔP and the power model. Rather, people's judgments may correspond to inferences about the underlying causal structure, such as the probability that C is a direct cause of E. In terms of the graphical model in Figure 1, human causal induction may be focused on trying to distinguish between Graph1, in which C is a parent of E, and the "null hypothesis" of Graph0, in which C is not. This structural inference can be formalized as a Bayesian decision. Let h_C be a binary variable indicating whether or not the link C→E exists in the true causal model responsible for generating our observations. We will assume a noisy-OR gate, and thus our model is closely related to causal power. However, we propose to model human estimates of P(C→E) as causal support, the log posterior odds in favor of Graph1 (h_C = 1) over Graph0 (h_C = 0):

    support = log [ P(h_C = 1 | X) / P(h_C = 0 | X) ] .              (10)

Via Bayes' rule, we can express P(h_C = 1 | X) in terms of the marginal likelihood or evidence, P(X | h_C = 1), and the prior probability that C is a cause of E, P(h_C = 1):

    P(h_C = 1 | X) ∝ P(X | h_C = 1) P(h_C = 1) .                     (11)

For now, we take P(h_C = 1) = P(h_C = 0) = 1/2. Computing the evidence requires integrating the likelihood P(X | w_B, w_C) over all possible values of the strength parameters:

    P(X | h_C = 1) = ∫₀¹ ∫₀¹ P(X | w_B, w_C) p(w_B, w_C | h_C = 1) dw_B dw_C .   (12)

We take p(w_B, w_C | h_C = 1) to be a uniform density, and we note that P(X | w_B, w_C) is simply the exponential of ℓ(X | w_B, w_C) as defined in Equation 5. P(X | h_C = 0), the marginal likelihood for Graph0, is computed similarly, but with the prior p(w_B, w_C | h_C = 1) in Equation 12 replaced by p(w_B | h_C = 0) δ(w_C). We again take p(w_B | h_C = 0) to be uniform. The Dirac delta distribution on w_C = 0 enforces the restriction that the C→E link is absent.
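Causal support has no closed form, but for a 2x2 contingency table the two-dimensional integral in Equation 12 is easy to approximate on a grid. The following sketch is our illustration (not the authors' code), assuming the noisy-OR likelihood and uniform priors described above:

```python
import numpy as np

def loglik(wb, wc, counts):
    """Equation (5) under the noisy-OR gate of Equation (4).
    counts = (N(e+,c+), N(e-,c+), N(e+,c-), N(e-,c-))."""
    n_epcp, n_emcp, n_epcm, n_emcm = counts
    q_c1 = 1.0 - (1.0 - wb) * (1.0 - wc)   # Q(e+|c+)
    q_c0 = wb                              # Q(e+|c-)
    eps = 1e-12
    return (n_epcp * np.log(q_c1 + eps) + n_emcp * np.log(1 - q_c1 + eps)
          + n_epcm * np.log(q_c0 + eps) + n_emcm * np.log(1 - q_c0 + eps))

def causal_support(counts, grid=200):
    """Log posterior odds (10) with uniform priors, via midpoint-grid
    integration of the evidence (12); the equal graph priors cancel."""
    w = (np.arange(grid) + 0.5) / grid
    wb, wc = np.meshgrid(w, w, indexing="ij")
    ll1 = loglik(wb, wc, counts)      # Graph1: (wb, wc) uniform on [0,1]^2
    ll0 = loglik(w, 0.0, counts)      # Graph0: wc pinned at 0
    log_ev1 = np.log(np.mean(np.exp(ll1 - ll1.max()))) + ll1.max()
    log_ev0 = np.log(np.mean(np.exp(ll0 - ll0.max()))) + ll0.max()
    return log_ev1 - log_ev0
```

Applied to the counts of the three introductory scenarios, this routine lets one compare the support values directly against the ΔP and power numbers computed earlier.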
By making these assumptions, we eliminate the need for any free numerical parameters in our probabilistic model (in contrast to a similar Bayesian account proposed by Anderson [1]). Because causal support depends on the full likelihood functions for both Graph1 and Graph0, we may expect the support model to be modulated by causal power - which is based strictly on the maximum likelihood estimate for Graph1 - but only in interaction with other factors that determine how much of the posterior probability mass for w_C in Graph1 is bounded away from zero (where it is pinned in Graph0). In general, evaluating causal support may require fairly involved computations, but in the limit of large N and weak causal strength w_C, it can be approximated by the familiar χ² statistic for independence,

    χ² = N Σ_{c,e} ( P(c, e) - P₀(c, e) )² / P₀(c, e) .

Here P₀(c, e) = P(c) P(e) is the factorized approximation to P(c, e), which assumes C and E to be independent (as they are in Graph0).

3 Comparison with experiments

In this section we examine the strengths and weaknesses of the two parameter inference models, ΔP and causal power, and the two structural inference models, causal support and χ², as accounts of data from three behavioral experiments, each designed to address different aspects of human causal induction. To compensate for possible nonlinearities in people's use of numerical rating scales on these tasks, all model predictions have been scaled by power-law transformations, f(x) = sign(x)·|x|^γ, with γ chosen separately for each model and each data set to maximize their linear correlation. In the figures, predictions are expressed over the same range as the data, with minimum and maximum values aligned.

Figure 2 presents data from a study by Buehner & Cheng [2], designed to contrast the predictions of ΔP and causal power. People judged P(C→E) for hypothetical medical studies much like the gene expression scenarios described above, seeing eight cases in which C occurred and eight in which C did not occur. Some trends in the data are clearly captured by the causal power model but not by ΔP, such as the monotonic decrease in P(C→E) from {1.00, 0.75} to {.25, 0.00}, as ΔP stays constant but P(e+|c-) (and hence power) decreases (columns 6-9). Other trends are clearly captured by ΔP but not by the power model, like the monotonic increase in P(C→E) as P(e+|c+) stays constant at 1.0 but P(e+|c-) decreases, from {1.00, 1.00} to {1.00, 0.00} (columns 1, 6, 10, 13, 15). However, one of the most salient trends is captured by neither model: the decrease in P(C→E) as ΔP stays constant at 0 but P(e+|c-) decreases (columns 1-5). The causal support model predicts this decrease, as well as the other trends. The intuition behind the model's predictions for ΔP = 0 is that decreasing the base rate P(e+|c-) increases the opportunity to observe the cause's influence and thus increases the statistical force behind the inference that C does not cause E, given ΔP = 0. This effect is most obvious when P(e+|c+) = P(e+|c-) = 1, yielding a ceiling effect with no statistical leverage [3], but it also occurs to a lesser extent for P(e+|c+) < 1. While χ² generally approximates the support model rather well, it also fails to explain the cases with P(e+|c+) = P(e+|c-), which always yield χ² = 0. The superior fit of the support model is reflected in its correlation with the data, giving R² = 0.95, while the power, ΔP, and χ² models gave R² values of 0.81, 0.82, and 0.82 respectively.
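For reference, the χ² values used in these comparisons follow directly from the 2x2 table of trial counts. A minimal sketch of ours, which leaves aside degenerate tables with an empty row or column margin:

```python
import numpy as np

def chi_square(counts):
    """chi^2 statistic for independence of C and E.
    counts = (N(e+,c+), N(e-,c+), N(e+,c-), N(e-,c-))."""
    table = np.array(counts, dtype=float).reshape(2, 2)  # rows: c+/c-
    n = table.sum()
    p = table / n                                  # joint P(c, e)
    p0 = np.outer(p.sum(axis=1), p.sum(axis=0))    # P0(c, e) = P(c) P(e)
    return n * np.sum((p - p0) ** 2 / p0)
```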
Figure 3 shows results from an experiment conducted by Lober and Shanks [6], designed to explore the trend in Buehner and Cheng's experiment that was predicted by ΔP but not by the power model. Columns 4-7 replicated the monotonic increase in P(C→E) when P(e+|c+) remains constant at 1.0 but P(e+|c-) decreases, this time with 28 cases in which C occurred and 28 in which C did not occur. Columns 1-3 show a second situation in which the predictions of the power model are constant, but judgments of P(C→E) increase. Columns 8-10 feature three scenarios with equal ΔP, for which the causal power model predicts a decreasing trend. These effects were explored by presenting a total of 60 trials, rather than the 56 used in columns 4-7. For each of these trends the ΔP model outperforms the causal power model, with overall R² values of 0.95 and 0.35 respectively. However, it is important to note that the responses of the human subjects in columns 8-10 (contingencies {1.00, 0.50}, {0.80, 0.40}, {0.40, 0.00}) are not quite consistent with the predictions of ΔP: they show a slight V-shaped non-linearity, with P(C→E) judged to be smaller for {0.80, 0.40} than for either of the extreme cases. This trend is predicted by the causal support model and its χ² approximation, however, which both give the slightly better R² of 0.99.

Figure 4 shows data that we collected in a similar survey, aiming to explore this non-linear effect in greater depth. 35 students in an introductory psychology class completed the survey for partial course credit. They each provided a judgment of P(C→E) in 14 different medical scenarios, where information about P(e+|c+) and P(e+|c-) was provided in terms of how many mice from a sample of 100 expressed a particular gene. Columns 1-3, 5-7, and 9-11 show contingency structures designed to elicit V-shaped trends in P(C→E). Columns 4 and 8 give intermediate values, also consistent with the observed non-linearity. Column 14 attempted to explore the effects of manipulating sample size, with a contingency structure of {7/7, 93/193}. In each case, we observed the predicted non-linearity: in a set of situations with the same ΔP, the situations involving less extreme probabilities show reduced judgments of P(C→E). These non-linearities are not consistent with the ΔP model, but are predicted by both causal support and χ². ΔP actually achieves a correlation comparable to χ² (R² = 0.92 for both models) because the non-linear effects contribute only weakly to the total variance. The support model gives a slightly worse fit than χ², R² = 0.80, while the power model gives a poor account of the data, R² = 0.38.

4 Conclusions and future directions

In each of the studies above, the structural inference models based on causal support or χ² consistently outperformed the parameter estimation models, ΔP and causal power. While causal power and ΔP were each capable of capturing certain trends in the data, causal support was the only model capable of predicting all the trends. For the third data set, χ² provided a significantly better fit to the data than did causal support. This finding merits future investigation in a study designed to tease apart χ² and causal support; in any case, due to the close relationship between the two models, this result does not undermine our claim that probabilistic structural inferences are central to human causal induction.
One unique advantage of the Bayesian causal support model is its ability to draw inferences from very few observations. We have begun a line of experiments, inspired by Gopnik, Sobel & Glymour (submitted), to examine how adults revise their causal judgments when given only one or two observations, rather than the large samples used in the above studies. In one study, subjects were faced with a machine that would inform them whether a pencil placed upon it contained "superlead" or ordinary lead. Subjects were either given prior knowledge that superlead was rare or that it was common. They were then given two pencils, analogous to B and C in Figure 1, and asked to rate how likely these pencils were to have superlead, that is, to cause the detector to activate. Mean responses reflected the induced prior. Next, they were shown that the superlead detector responded when B and C were tested together, and their causal ratings of both B and C increased. Finally, they were shown that B set off the superlead detector on its own, and causal ratings of B increased to ceiling while ratings of C returned to their prior levels. This situation is exactly analogous to that explored in the medical tasks described above, and people were able to perform accurate causal inductions given only one trial of each type. Of the models we have considered, only Bayesian causal support can explain this behavior, by allowing the prior in Equation 11 to adapt depending on whether superlead is rare or common.

We also hope to look at inferences about more complex causal structures, including those with hidden variables. With just a single cause, causal support and χ² are highly correlated, but with more complex structures, the Bayesian computation of causal support becomes increasingly intractable while the χ² approximation becomes less accurate. Through experiments with more complex structures, we hope to discover where and how human causal induction strikes a balance between ponderous rationality and efficient heuristics.

Finally, we should stress that despite the superior performance of the structural inference models here, in many situations estimating causal strength parameters is likely to be just as important as inferring causal structure. Our hope is that by using graphical models to relate and extend upon existing accounts of causal induction, we have provided a framework for exploring the interplay between the different kinds of judgments that people make.

References

[1] J. Anderson (1990). The Adaptive Character of Thought. Erlbaum.
[2] M. Buehner & P. Cheng (1997). Causal induction: The power PC theory versus the Rescorla-Wagner theory. In Proceedings of the 19th Annual Conference of the Cognitive Science Society.
[3] P. Cheng (1997). From covariation to causation: A causal power theory. Psychological Review 104, 367-405.
[4] T. Cover & J. Thomas (1991). Elements of Information Theory. Wiley.
[5] C. Glymour (1998). Learning causes: Psychological explanations of causal explanation. Minds and Machines 8, 39-60.
[6] K. Lober & D. Shanks (2000). Is causal induction based on causal power? Critique of Cheng (1997). Psychological Review 107, 195-212.
[7] J. Pearl (2000). Causality. Cambridge University Press.
Figure 1: Different theories of human causal induction expressed as different operations on a simple graphical model. The ΔP and power models correspond to maximum likelihood parameter estimates on a fixed graph (Graph1), while the support model corresponds to a (Bayesian) inference about which graph is the true causal structure.

Figure 2: Computational models compared with the performance of human participants from Buehner and Cheng [2], Experiment 1B. Numbers along the top of the figure show stimulus contingencies.

Figure 3: Computational models compared with the performance of human participants from Lober and Shanks [6], Experiments 4-6.

Figure 4: Computational models compared with the performance of human participants on a set of stimuli designed to elicit the non-monotonic trends shown in the data of Lober and Shanks [6].
The Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity

David Cohn
Burning Glass Technologies
201 South Craig St, Suite 2W
Pittsburgh, PA 15213
[email protected]

Thomas Hofmann
Department of Computer Science
Brown University
Providence, RI 02192
[email protected]

Abstract

We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics are mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.

1 Introduction

No text, no paper, no book can be isolated from the all-embracing corpus of documents it is embedded in. Ideas, thoughts, and work described in a document inevitably relate to and build upon previously published material.¹ Traditionally, this interdependency has been represented by citations, which allow authors to explicitly make references to related documents. More recently, a vast number of documents have been "published" electronically on the world wide web; here, interdependencies between documents take the form of hyperlinks, and allow instant access to the referenced material. We would like to have some way of modeling these interdependencies, to understand the structure implicit in the contents and connections of a given document base without resorting to manual clustering, classification and ranking of documents.

The main goal of this paper is to present a joint probabilistic model of document content and connectivity, i.e., a parameterized stochastic process which mimics the generation of documents as part of a larger collection, and which could make accurate predictions about the existence of hyperlinks and citations. More precisely, we present an extension of our work on Probabilistic Latent Semantic Analysis (PLSA) [4, 7] and Probabilistic HITS (PHITS) [3, 8] and propose a mixture model to perform a simultaneous decomposition of the contingency tables associated with word occurrences and citations/links into "topic" factors. Such a model can be extremely useful in many applications, a few of which are:

- Identifying topics and common subjects covered by documents. Representing documents in a low-dimensional space can help understanding of relations between documents and the topics they cover. Combining evidence from terms and links yields potentially more meaningful and stable factors and better predictions.

- Identifying authoritative documents on a given topic. The authority of a document is correlated with how frequently it is cited, and by whom. Identifying topic-specific authorities is a key problem for search engines [2].

- Predictive navigation. By predicting what content might be found "behind" a link, a content/connectivity model directly supports navigation in a document collection, either through interaction with human users or for intelligent spidering.

- Web authoring support.

¹ Although the weakness of our memory might make us forget this at times.
Predictions about links based on document contents can support authoring and maintenance of hypertext documents, e.g., by (semi-)automatically improving and updating link structures.

These applications address facets of one of the most pressing challenges of the "information age": how to locate useful information in a semi-structured environment like the world wide web. Much of this difficulty, which has led to the emergence of an entire new industry, is due to the impoverished explicit structure of the web as a whole. Manually created hyperlinks and citations are limited in scope - the annotator can only add links and pointers to other documents they are aware of and have access to. Moreover, these links are static; once the annotator creates a link between documents, it is unchanging. If a different, more relevant document appears (or if the cited document disappears), the link may not get updated appropriately. These and other deficiencies make the web inherently "noisy" - links between relevant documents may not exist and existing links might sometimes be more or less arbitrary. Our model is a step towards a technology that will allow us to dynamically infer more reliable inter-document structure from the impoverished structure we observe.

In the following section, we first review PLSA and PHITS. In Section 3, we show how these two models can be combined into a joint probabilistic term-citation model. Section 4 describes some of the applications of this model, along with preliminary experiments in several areas. In Section 5 we consider future directions and related research.

2 PLSA and PHITS

PLSA [7] is a statistical variant of Latent Semantic Analysis (LSA) [4] that builds a factored multinomial model based on the assumption of an underlying document generation process. The starting point of (P)LSA is the term-document matrix N of word counts, i.e., N_ij denotes how often a term (single word or phrase) t_i occurs in document d_j. In LSA, N is decomposed by an SVD and factors are identified with the left/right principal eigenvectors. In contrast, PLSA performs a probabilistic decomposition which is closely related to the non-negative matrix decomposition presented in [9]. Each factor is identified with a state z_k (1 ≤ k ≤ K) of a latent variable with associated relative frequency estimates P(t_i|z_k) for each term in the corpus. A document d_j is then represented as a convex combination of factors with mixing weights P(z_k|d_j), i.e., the predictive probabilities for terms in a particular document are constrained to be of the functional form

    P(t_i|d_j) = Σ_k P(t_i|z_k) P(z_k|d_j) ,

with non-negative probabilities and two sets of normalization constraints: Σ_i P(t_i|z_k) = 1 for all k, and Σ_k P(z_k|d_j) = 1 for all j. Both the factors and the document-specific mixing weights are learned by maximizing the likelihood of the observed term frequencies. More formally, PLSA aims at maximizing

    L = Σ_{i,j} N_ij log Σ_k P(t_i|z_k) P(z_k|d_j) .

Since factors z_k can be interpreted as states of a latent mixing variable associated with each observation (i.e., word occurrence), the Expectation-Maximization algorithm can be applied to find a local maximum of L. PLSA has been demonstrated to be effective for ad hoc information retrieval, language modeling, and clustering.
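For concreteness, a dense toy version of this EM iteration might look as follows (our sketch, not the authors' implementation; a realistic version would exploit the sparsity of N):

```python
import numpy as np

def plsa(N, K, iters=100, seed=0):
    """Minimal EM for PLSA on a term-document count matrix N (T x D).
    Returns P(t|z) as a (T, K) array and P(z|d) as a (K, D) array."""
    rng = np.random.default_rng(seed)
    p_t_z = rng.random((N.shape[0], K)); p_t_z /= p_t_z.sum(0)
    p_z_d = rng.random((K, N.shape[1])); p_z_d /= p_z_d.sum(0)
    for _ in range(iters):
        # E-step: posterior P(z | t, d) for every (t, d) cell
        joint = p_t_z[:, :, None] * p_z_d[None, :, :]      # (T, K, D)
        post = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate both sets of multinomials
        weighted = N[:, None, :] * post                    # (T, K, D)
        p_t_z = weighted.sum(2); p_t_z /= p_t_z.sum(0)
        p_z_d = weighted.sum(0); p_z_d /= p_z_d.sum(0)
    return p_t_z, p_z_d
```

Running the same routine on the citation matrix A in place of N yields the PHITS decomposition described next.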
Empirically, different factors usually capture distinct "topics" of a document collection; by clustering documents according to their dominant factors, useful topic-specific document clusters often emerge (using the Gaussian factors of LSA, this approach is known as "spectral clustering"). It is important to distinguish the factored model used here from standard probabilistic mixture models. In a mixture model, each object (such as a document) is usually assumed to come from one of a set of latent sources (e.g. a document is either from z_1 or z_2). Credit for the object may be distributed among several sources because of ambiguity, but the model insists that only one of the candidate sources is the true origin of the object. In contrast, a factored model assumes that each object comes from a mixture of sources - without ambiguity, it can assert that a document is half z_1 and half z_2. This is because the latent variables are associated with each observation and not with each document (set of observations).

PHITS [3] performs a probabilistic factoring of document citations used for bibliometric analysis. Bibliometrics attempts to identify topics in a document collection, as well as influential authors and papers on those topics, based on patterns in citation frequency. This analysis has traditionally been applied to references in printed literature, but the same techniques have proven successful in analyzing hyperlink structure on the world wide web [8]. In traditional bibliometrics, one begins with a matrix A of document-citation pairs. Entry A_ij is nonzero if and only if document d_i is cited by document d_j or, equivalently, if d_j contains a hyperlink to d_i.² The principal eigenvectors of AA' are then extracted, with each eigenvector corresponding to a "community" of roughly similar citation patterns. The coefficient of a document in one of these eigenvectors is interpreted as the "authority" of that document within the community - how likely it is to be cited within that community. A document's coefficient in the principal eigenvectors of A'A is interpreted as its "hub" value in the community - how many authoritative documents it cites within the community.

In PHITS, a probabilistic model replaces the eigenvector analysis, yielding a model that has clear statistical interpretations. PHITS is mathematically identical to PLSA, with one distinction: instead of modeling the citations contained within a document (corresponding to PLSA's modeling of terms in a document), PHITS models "inlinks," the citations to a document. It substitutes a citation-source probability estimate P(c_l|z_k) for PLSA's term probability estimate. As with PLSA and spectral clustering, the principal factors of the model are interpreted as indicating the principal citation communities (and by inference, the principal topics). For a given factor/topic z_k, the probability that a document is cited, P(d_j|z_k), is interpreted as the document's authority with respect to that topic.

3 A Joint Probabilistic Model for Content and Connectivity

Linked and hyperlinked documents are generally composed of terms and citations; as such, both term-based PLSA and citation-based PHITS analyses are applicable. Rather than applying each separately, it is reasonable to merge the two analyses into a joint probabilistic model, explaining terms and citations in terms of a common set of underlying factors.
Since both PLSA and PHITS are based on a similar decomposition, one can define the following joint model for predicting citations/links and terms in documents:

    P(t_i|d_j) = Σ_k P(t_i|z_k) P(z_k|d_j) ,   P(c_l|d_j) = Σ_k P(c_l|z_k) P(z_k|d_j) .   (1)

Notice that both decompositions share the same document-specific mixing proportions P(z_k|d_j). This couples the conditional probabilities for terms and citations: each "topic" has some probability P(c_l|z_k) of linking to document d_l as well as some probability P(t_i|z_k) of containing an occurrence of term t_i. The advantage of this joint modeling approach is that it integrates content and link information in a principled manner. Since the mixing proportions are shared, the learned decomposition must be consistent with both content and link statistics. In particular, this coupling allows the model to take evidence about link structure into account when making predictions about document content, and vice versa. Once a decomposition is learned, the model may be used to address questions like "What words are likely to be found in a document with this link structure?" or "What link structure is likely to go with this document?" by simple probabilistic inference.

² In fact, since multiple citations/links may exist, we treat A_ij as a count variable.

The relative importance one assigns to predicting terms and links will depend on the specific application. In general, we propose maximizing the following (normalized) log-likelihood function with a relative weight α:

    L = Σ_j [ α Σ_i ( N_ij / Σ_{i'} N_{i'j} ) log Σ_k P(t_i|z_k) P(z_k|d_j)
            + (1 - α) Σ_l ( A_lj / Σ_{l'} A_{l'j} ) log Σ_k P(c_l|z_k) P(z_k|d_j) ] .   (2)

The normalization by term/citation counts ensures that each document is given the same weight in the decomposition, regardless of the number of observations associated with it. Following the EM approach, it is straightforward to derive a set of re-estimation equations. For the E-step one gets formulae for the posterior probabilities of the latent variables associated with each observation,³ e.g. for term observations

    P(z_k | t_i, d_j) = P(t_i|z_k) P(z_k|d_j) / Σ_{k'} P(t_i|z_{k'}) P(z_{k'}|d_j) ,   (3)

and analogously for citation observations.

³ Our experiments used a tempered version of Equation 3 to minimize overfitting; see [7] for details.

4 Experiments

In the introduction, we described many potential applications of the joint probabilistic model. Some, like classification, are simply extensions of the individual PHITS and PLSA models, relying on the increased power of the joint model to improve their performance. Others, such as intelligent web crawling, are unique to the joint model and require its simultaneous modelling of a document's contents and connections. In this section, we first describe experiments verifying that the joint model does yield improved classification compared with the individual models. We then describe a quantity called "reference flow" which can be computed from the joint model, and demonstrate its use in guiding a web crawler to pages of interest.

Figure 1: Classification accuracy on the WebKB and Cora data sets for PHITS (α = 0), PLSA (α = 1) and the joint model (0 < α < 1).

We used two data sets in our experiments. The WebKB data set [11] consists of approximately 6000 web pages from computer science departments, classified by school and category (student, course, faculty, etc.).
The Cora data set [10] consists of the abstracts and references of approximately 34,000 computer science research papers; of these, we used the approximately 2000 papers categorized into one of seven subfields of machine learning.

4.1 Classification

Although the joint probabilistic model performs unsupervised learning, there are a number of ways it may be used for classification. One way is to associate each document with its dominant factor, in a form of spectral clustering. Each factor is then given the label of the dominant class among its associated documents. Test documents are judged by whether their dominant factor shares their label. Another approach to classification (but one that forgoes clustering) is a factored nearest neighbor approach. Test documents are judged against the label of their nearest neighbor, but the "nearest" neighbor is determined by the cosines of their projections in factor space. This is the method we used for our experiments.

For the Cora and WebKB data, we used seven factors and six factors respectively, arbitrarily selecting the number to correspond to the number of human-derived classes. We compared the power of the joint model with that of the individual models by varying α from zero to one, with the lower and upper extremes corresponding to PHITS and PLSA, respectively. For each value of α, a randomly selected 15% of the data were reserved as a test set. The models were tempered (as per [7]) with a lower limit of β = 0.8, decreasing β by a factor of 0.9 each time the data likelihood stopped increasing.

Figure 1 illustrates several results. First, the accuracy of the joint model (where α is neither 0 nor 1) is greater than that of either model in isolation, indicating that the contents and link structure of a document collection do indeed corroborate each other. Second, the increase in accuracy is robust across a wide range of mixing proportions.

4.2 Reference Flow

The previous subsection demonstrated how the joint model amplifies abilities found in the individual models. But the joint model also provides features found in neither of its progenitors. A document d may be thought of as occupying a point z = {P(z_1|d), ..., P(z_K|d)} in the joint model's space of factor mixtures. The terms in d act as "signposts" describing z, and the links act as directed connections between that point and others. Together, they provide a reference flow, indicating a referential connection between one topic and another. This reference flow exists between arbitrary points in the factor space, even in the absence of documents that map directly to those points.

Consider a reference from document d_i to document d_j, and two points in factor space z_m and z_n, not particularly associated with d_i or d_j. Our model allows us to compute P(d_i|z_m) and P(d_j|z_n), the probabilities that the combinations of factors at z_m and z_n are responsible for d_i and d_j respectively. Their product P(d_i|z_m) P(d_j|z_n) is then the probability that the observed link represents a reference between those two points in factor space. By integrating over all links in the corpus we can compute

    f_mn = Σ_{i,j} A_ij P(d_i|z_m) P(d_j|z_n) ,

an unnormalized "reference flow" between z_m and z_n. Figure 2 shows the principal reference flow between several topics in the WebKB archive.

Figure 2: Principal reference flow between the primary topics identified in the examined subset of the WebKB archive.
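Reference flow can be computed directly from the fitted factors. The sketch below is ours; in particular, the conversion from P(z_k|d) to P(d|z_k) via Bayes' rule with a uniform prior over documents is our assumption rather than a detail spelled out in the paper:

```python
import numpy as np

def reference_flow(A, p_z_d, z_m, z_n):
    """Unnormalized flow f_mn = sum_{i,j} A_ij P(d_i|z_m) P(d_j|z_n).
    A is the (D, D) citation-count matrix (A_ij > 0 iff d_i is cited
    by d_j); p_z_d holds P(z_k|d) as a (K, D) array; z_m and z_n are
    length-K mixture vectors describing the two points in factor space."""
    # P(d|z_k) from P(z_k|d), assuming a uniform document prior.
    p_d_z = p_z_d / (p_z_d.sum(axis=1, keepdims=True) + 1e-12)   # (K, D)
    p_di = z_m @ p_d_z    # P(d_i | z_m), length D
    p_dj = z_n @ p_d_z    # P(d_j | z_n), length D
    return p_di @ A @ p_dj
```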
4.3 Intelligent Web Crawling with Reference Flow

Let us suppose that we want to find new web pages on a certain topic, described by a set of words composed into a target pseudodocument d_t. We can project d_t into our model to identify the point z_t in factor space that represents that topic. Now, when we explore web pages, we want to follow links that will lead us to new documents that also project to z_t. To do so, we can use reference flow. Consider a web page d_s (or section of a web page⁴). Although we don't know where its links point, we do know what words it contains. We can project them as a pseudodocument to find z_s, the point in factor space the page/section occupies, prior to any information about its links. We can then use our model to compute the reference flow f_st, indicating the (unnormalized) probability that a document at z_s would contain a link to one at z_t.

As a greedy solution, we could simply follow links in documents or sections that have the highest reference flow toward the target topic; a sketch of this ranking appears below. Or, if computation is no barrier, we could (in theory) use reference flow as state transition probabilities and find an optimal link to follow by treating the system as a continuous-state Markov decision process.

⁴Though not described here, we have had success using our model for document segmentation, following an approach similar to that of [6]. By projecting successive n-sentence windows of a document into the factored model, we can observe its trajectory through "topic space." A large jump in the factor mixture between successive windows indicates a probable topic boundary in the document.

To test our model's utility in intelligent web crawling, we conducted experiments on the WebKB data set using the greedy solution. On each trial, a "target page" d_t was selected at random from the corpus. One "source page" d_s containing a link to the target was identified, and the reference flow f_st computed. The larger the reference flow, the stronger our model's expectation that there is a directed link from the source to the target. We ranked this flow against the reference flow to the target from 100 randomly chosen "distractor" pages d_r1, d_r2, ..., d_r100. As seen in Figure 3, reference flow provides significant predictive power. Based on 2400 runs, the median rank for the "true source" was 27/100, versus a median rank of 50/100 for a "placebo" distractor chosen at random. Note that the distractors were not screened to ensure that they did not also contain links to the target; as such, some of the high-ranking distractors may also have been valid sources for the target in question.

Figure 3: When ranked according to magnitude of reference flow to a designated target, a "true source" scores much higher than a placebo source document drawn at random. (The original plot shows rank histograms for the "true source" and "placebo" series.)
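The greedy ranking mentioned above can be sketched in a few lines. `project_pseudodoc` is a standard PLSA-style fold-in (P(t | z) held fixed), and `reference_flow` is the helper sketched in Section 4.2; the function names and fold-in details are illustrative assumptions.

```python
import numpy as np

def project_pseudodoc(word_counts, P_t_z, n_iter=30):
    """PLSA fold-in (a sketch): estimate P(z_k | d) for a bag of words
    while holding the learned P(t | z) fixed."""
    K = P_t_z.shape[1]
    p_z = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        post = P_t_z * p_z                                  # joint over (term, factor)
        post /= np.maximum(post.sum(axis=1, keepdims=True), 1e-300)
        p_z = (word_counts[:, None] * post).sum(axis=0)     # expected factor counts
        p_z /= p_z.sum()
    return p_z

def rank_candidates(candidate_counts, target_counts, P_t_z, links, P_z_d):
    """Greedy crawling: score each candidate page/section by reference flow
    toward the target topic; returns candidate indices, most promising first."""
    z_t = project_pseudodoc(target_counts, P_t_z)
    scores = [reference_flow(links, P_z_d, project_pseudodoc(c, P_t_z), z_t)
              for c in candidate_counts]
    return np.argsort(scores)[::-1]
```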
5 Discussion and Related Work

There have been many attempts to combine link and term information on web pages, though most approaches are ad hoc and have been aimed at increasing the retrieval of authoritative documents relevant to a given query. Bharat and Henzinger [1] provide a good overview of research in that area, as well as an algorithm that computes bibliometric authority after weighting links based on the relevance of the neighboring terms. The machine learning community has also recently taken an interest in the sort of relational models studied by bibliometrics. Getoor et al. [5] describe a general framework for learning probabilistic relational models from a database, and present experiments in a variety of domains.

In this paper, we have described a specific probabilistic model which attempts to explain both the contents and connections of documents in an unstructured document base. While we have demonstrated preliminary results in several application areas, this paper only scratches the surface of potential applications of a joint probabilistic document model.

References

[1] K. Bharat and M. R. Henzinger. Improved algorithms for topic distillation in hyperlinked environments. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1998.
[2] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. Technical report, Computer Science Department, Stanford University, 1998.
[3] D. Cohn and H. Chang. Learning to probabilistically identify authoritative documents. In Proceedings of the 17th International Conference on Machine Learning, 2000.
[4] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391-407, 1990.
[5] L. Getoor, N. Friedman, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In S. Dzeroski and N. Lavrac, editors, Relational Data Mining. Springer-Verlag, 2001.
[6] M. Hearst. Multi-paragraph segmentation of expository text. In Proceedings of ACL, June 1994.
[7] T. Hofmann. Probabilistic latent semantic analysis. In Proceedings of the 15th Conference on Uncertainty in AI, pages 289-296, 1999.
[8] J. Kleinberg. Authoritative sources in a hyperlinked environment. In Proc. 9th ACM-SIAM Symposium on Discrete Algorithms, 1998.
[9] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, pages 788-791, 1999.
[10] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval Journal, 3:127-163, 2000.
[11] Web->KB. Available electronically at http://www.cs.cmu.edu/~WebKB/.
Stability and noise in biochemical switches

William Bialek
NEC Research Institute
4 Independence Way
Princeton, New Jersey 08540
bialek@research.nj.nec.com

Abstract

Many processes in biology, from the regulation of gene expression in bacteria to memory in the brain, involve switches constructed from networks of biochemical reactions. Crucial molecules are present in small numbers, raising questions about noise and stability. Analysis of noise in simple reaction schemes indicates that switches stable for years and switchable in milliseconds can be built from fewer than one hundred molecules. Prospects for direct tests of this prediction, as well as implications, are discussed.

1 Introduction

The problem of building a reliable switch arises in several different biological contexts. The classical example is the switching on and off of gene expression during development [1], or in simpler systems such as phage λ [2]. It is likely that the cell cycle should also be viewed as a sequence of switching events among discrete states, rather than as a continuously running clock [3]. The stable switching of a specific class of kinase molecules between active and inactive states is believed to play a role in synaptic plasticity, and by implication in the maintenance of stored memories [4]. Although many details of mechanism remain to be discovered, these systems seem to have several common features. First, the stable states of the switches are dissipative, so that they reflect a balance among competing biochemical reactions. Second, the total number of molecules involved in the construction of the switch is not large. Finally, the switch, once flipped, must be stable for a time long compared to the switching time, perhaps (for development and for memory) even for a time comparable to the life of the organism.

Intuitively we might expect that systems with small numbers of molecules would be subject to noise and instability [5], and while this is true we shall see that extremely stable biochemical switches can in fact be built from a few tens of molecules. This has interesting implications for how we think about several cellular processes, and should be testable directly.

Many biological molecules can exist in multiple states, and biochemical switches use this molecular multistability so that the state of the switch can be 'read out' by sampling the states (or enzymatic activities) of individual molecules. Nonetheless, these biochemical switches are based on a network of reactions, with stable states that are collective properties of the network dynamics and not of any individual molecule. Most previous work on the properties of biochemical reaction networks has involved detailed simulation of particular kinetic schemes [6], for example in discussing the kinase switch that is involved in synaptic plasticity [7]. Even the problem of noise has been discussed heuristically in this context [8]. The goal in the present analysis is to separate the problem of noise and stability from other issues, and to see if it is possible to make some general statements about the limits to stability in switches built from a small number of molecules. This effort should be seen as being in the same spirit as recent work on bacterial chemotaxis, where the goal was to understand how certain features of the computations involved in signal processing can emerge robustly from the network of biochemical reactions, independent of kinetic details [9].
2 Stochastic kinetic equations

Imagine that we write down the kinetic equations for some set of biochemical reactions which describe the putative switch. Now let us assume that most of the reactions are fast, so that there is a single molecular species whose concentration varies more slowly than all the others. Then the dynamics of the switch essentially are one dimensional, and this simplification allows a complete discussion using standard analytical methods. In particular, in this limit there are general bounds on the stability of switches, and these bounds are independent of (incompletely known) details in the biochemical kinetics. It should be possible to make progress on multidimensional versions of the problem, but the point here is to show that there exists a limit in which stable switches can be built from small numbers of molecules.

Let the number of molecules of the 'slow species' be n. All the different reactions can be broken into two classes: the synthesis of the slow species at a rate f(n) molecules per second, and its degradation at a rate g(n) molecules per second; the dependencies on n can be complicated because they include the effects of all other species in the system. Then, if we could neglect fluctuations, we would write the effective kinetic equation

    dn/dt = f(n) − g(n).    (1)

If the system is to function as a switch, then the stationarity condition f(n) = g(n) must have multiple solutions with appropriate local stability properties.

The fact that molecules are discrete units means that we need to give the chemical kinetic Eq. (1) another interpretation. It is the mean field approximation to a stochastic process in which there is a probability per unit time f(n) of the transition n → n+1, and a probability per unit time g(n) of the opposite transition n → n−1. Thus if we consider the probability P(n, t) for there being n molecules at time t, this distribution obeys the evolution (or 'master') equation

    ∂P(n, t)/∂t = f(n−1) P(n−1, t) + g(n+1) P(n+1, t) − [f(n) + g(n)] P(n, t),    (2)

with obvious corrections for n = 0, 1. We are interested in the effects of stochasticity for n not too small. Then 1 is small compared with typical values of n, and we can approximate P(n, t) as being a smooth function of n. We can expand Eq. (2) in derivatives of the distribution, and keep the leading terms:

    ∂P(n, t)/∂t = (∂/∂n) { [g(n) − f(n)] P(n, t) + (1/2) (∂/∂n) [f(n) + g(n)] P(n, t) }.    (3)

This is analogous to the diffusion equation for a particle moving in a potential, but this analogy works only if we allow the effective temperature to vary with the position of the particle. As with diffusion or Brownian motion, there is an alternative to the diffusion equation for P(n, t), and this is to write an equation of motion for n(t) which supplements Eq. (1) by the addition of a random or Langevin force ξ(t):

    dn/dt = f(n) − g(n) + ξ(t),    (4)
    ⟨ξ(t) ξ(t')⟩ = [f(n) + g(n)] δ(t − t').    (5)

From the Langevin equation we can also develop the distribution functional for the probability of trajectories n(t). It should be emphasized that all of these approaches are equivalent provided that we are careful to treat the spatial variations of the effective temperature [10].¹ In one dimension this complication does not impede solving the problem. For any particular kinetic scheme we can compute the effective potential and temperature, and kinetic schemes with multiple stable states correspond to potential functions with multiple minima.
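The stochastic interpretation of Eq. (1) can be simulated exactly with the Gillespie algorithm; the sketch below does so for the birth-death process of Eq. (2). The specific rate functions f and g at the end are illustrative assumptions chosen to give two stable states, not rates taken from any real reaction scheme.

```python
import numpy as np

def gillespie_birth_death(f, g, n0, t_max, seed=0):
    """Exact stochastic simulation of the master equation (2):
    n -> n+1 at rate f(n), n -> n-1 at rate g(n)."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_max:
        up, down = f(n), g(n)
        total = up + down
        if total <= 0:
            break                                  # absorbing state
        t += rng.exponential(1.0 / total)          # waiting time to next reaction
        n += 1 if rng.random() < up / total else -1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

# Illustrative bistable kinetics (an assumption, not from the paper):
# cooperative synthesis plus first-order degradation gives two stable
# solutions of f(n) = g(n), near n ~ 5 and n ~ 39, with an unstable
# point at n = 25.
f = lambda n: 5.0 + 40.0 * n**4 / (25.0**4 + n**4)   # synthesis, molecules/s
g = lambda n: 1.0 * n                                 # degradation, molecules/s
times, counts = gillespie_birth_death(f, g, n0=5, t_max=1000.0)
```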
3 Noise induced switching rates

We want to know how the noise term destabilizes the distinct stable states of the switch. If the noise is small, then by analogy with thermal noise we expect that there will be some small jitter around the stable states, but also some rate of spontaneous jumping between the states, analogous to thermal activation over an energy barrier as in a chemical reaction. This jumping rate should be the product of an "attempt frequency" (of order the relaxation rate in the neighborhood of one stable state) and a "Boltzmann factor" that expresses the exponentially small probability of going over the barrier. For ordinary chemical reactions this Boltzmann factor is just exp(−F‡/k_BT), where F‡ is the activation free energy. If we want to build a switch that can be stable for a time much longer than the switching time itself, then the Boltzmann factor has to provide this large ratio of time scales.

There are several ways to calculate the analog of the Boltzmann factor for the dynamics in Eq. (4). The first step is to make more explicit the analogy with Brownian motion and thermal activation. Recall that Brownian motion of an overdamped particle is described by the Langevin equation

    γ dx/dt = −V′(x) + η(t),    (6)

where γ is the drag coefficient of the particle, V(x) is the potential, and the noise force has correlations ⟨η(t) η(t')⟩ = 2γT δ(t − t'), where T is the absolute temperature measured in energy units so that Boltzmann's constant is equal to one. Comparing with Eq. (4), we see that our problem is equivalent to a particle with γ = 1 in an effective potential V_eff(n) such that V′_eff(n) = g(n) − f(n), at an effective temperature T_eff(n) = [f(n) + g(n)]/2. If the temperature were uniform then the equilibrium distribution of n would be P_eq(n) ∝ exp[−V_eff(n)/T_eff]. With nonuniform temperature the result is (up to weakly varying prefactors)¹

    P_eq(n) ∝ exp[−U(n)],    (7)
    U(n) = ∫₀ⁿ dy V′_eff(y)/T_eff(y).    (8)

¹In a review written for a biological audience, McAdams and Arkin [11] state that Langevin methods are unsound and can yield invalid predictions precisely for the case of bistable reaction systems which interest us here; this is part of their argument for the necessity of stochastic simulation methods as opposed to analytic approaches. Their reference for the failure of Langevin methods [12], however, seems to consider only Langevin terms with constant spectral density, thus ignoring (in the present language) the spatial variations of effective temperature. For the present problem this would mean replacing the noise correlation function [f(n) + g(n)]δ(t − t') in Eq. (5) by Qδ(t − t') where Q is a constant. This indeed is wrong, and is not equivalent to the master equation. On the other hand, if the arguments of Refs. [11, 12] were generally correct, they would imply that Langevin methods could not be used for the description of Brownian motion with a spatially varying temperature, and this would be quite a surprise.

One way to identify the Boltzmann factor for spontaneous switching is then to compute the relative equilibrium occupancy of the stable states (n₀ and n₁) and the unstable "transition state" at n*. The result is that the effective activation energy for transitions from the stable state at n = n₀ to the stable state at n = n₁ > n₀ is

    F‡(n₀ → n₁) = 2k_BT ∫ from n₀ to n* of dn [g(n) − f(n)] / [g(n) + f(n)],    (9)

where n* is the unstable point, and similarly for the reverse transition,
    F‡(n₁ → n₀) = 2k_BT ∫ from n* to n₁ of dn [f(n) − g(n)] / [g(n) + f(n)].    (10)

An alternative approach is to note that the distribution of trajectories n(t) includes locally optimal paths that carry the system from each stable point up to the transition state; the effective activation free energy can then be written as an integral along these optimal paths. The use of optimal path ideas in chemical kinetics has a long history, going back at least to Onsager. A discussion in the spirit of the present one is Ref. [13]. For equations of the general form

    dn/dt = −V′_eff(n) + ξ(t),    (11)

with ⟨ξ(t) ξ(t')⟩ = 2T_eff(t) δ(t − t'), the probability distribution for trajectories P[n(t)] can be written as [10]

    P[n(t)] ∝ exp(−S[n(t)]),    (12)
    S[n(t)] = (1/4) ∫ dt (1/T_eff(t)) [ṅ(t) + V′_eff(n(t))]² − (1/2) ∫ dt V″_eff(n(t)).    (13)

If the temperature T_eff is small, then the trajectories that minimize the action should be determined primarily by minimizing the first term in Eq. (13), which is ∼ 1/T_eff. Identifying the effective potential and temperature as above, the relevant term is

    (1/2) ∫ dt [ṅ − f(n) + g(n)]² / [f(n) + g(n)]
        = (1/2) ∫ dt ṅ² / [f(n) + g(n)] + (1/2) ∫ dt [f(n) − g(n)]² / [f(n) + g(n)]
          − ∫ dt ṅ [f(n) − g(n)] / [f(n) + g(n)].    (14)

We are searching for trajectories which take n(t) from a stable point n₀ where f(n₀) = g(n₀) through the unstable point n* where f and g are again equal but the derivative of their difference (the curvature of the potential) has changed sign. For a discussion of the analogous quantum mechanical problem of tunneling in a double well, see Ref. [14]. First we note that along any trajectory from n₀ to n* we can simplify the third term in Eq. (14):

    ∫ dt ṅ [f(n) − g(n)] / [f(n) + g(n)] = ∫ from n₀ to n* of dn [f(n) − g(n)] / [f(n) + g(n)].    (15)

This term thus depends on the endpoints of the trajectory and not on the path, and therefore cannot contribute to the structure of the optimal path. In the analogy to mechanics, the first two terms are equivalent to the (Euclidean) action for a particle with position-dependent mass in a potential; this means that along extremal trajectories there is a conserved energy

    E = (1/2) ṅ² / [f(n) + g(n)] − (1/2) [f(n) − g(n)]² / [f(n) + g(n)].    (16)

At the endpoints of the trajectory we have ṅ = 0 and f(n) = g(n), and so we are looking for zero energy trajectories, along which

    ṅ(t) = ±[f(n(t)) − g(n(t))].    (17)

Substituting back into Eq. (14), and being careful about the signs, we find once again Eqs. (9, 10). Both the 'transition state' and the optimal path method involve approximations, but if the noise is not too large the approximations are good and the results of the two methods agree. Yet another approach is to solve the master equation (2) directly, and again one gets the same answer for the switching rate when the noise is small, as expected since all the different approaches are equivalent if we make consistent approximations. It is much more work to find the prefactors of the rates, but we are concerned here with orders of magnitude, and hence the prefactors aren't so important.

4 Interpretation

The crucial thing to notice in this calculation is that the integrands in Eqs. (9, 10) are bounded by one, so the activation energy (in units of the thermal energy k_BT) is bounded by twice the change in the number of molecules. Translating back to the spontaneous switching rates, the result is that the noise-driven switching time is longer than the relaxation time after switching by a factor that is bounded,
    (spontaneous switching time) / (relaxation time) < exp(Δn),    (18)

where Δn is the change in the number of molecules required to go from one stable 'switched' state to the other. Imagine that we have a reaction scheme in which the difference between the two stable states corresponds to roughly 25 molecules. Then it is possible to have a Boltzmann factor of up to exp(25) ∼ 10¹⁰. Usually we think of this as a limit to stability: with 25 molecules we can have a Boltzmann factor of no more than ∼10¹⁰. But here I want to emphasize the positive statement that there exist kinetic schemes in which just 25 molecules would be sufficient to have this level of stability. This corresponds to years per millisecond: with twenty-five molecules, a biochemical switch that can flip in milliseconds can be stable for years. Real chemical reaction schemes will not saturate this bound, but certainly such stability is possible with roughly 100 molecules. The genetic switch in λ phage operates with roughly 100 copies of the repressor molecules, and even in this simple system there is extreme stability: the genetic switch is flipped spontaneously only once in 10⁵ generations of the host bacterium [2]. Kinetic schemes with greater cooperativity get closer to the bound, achieving greater stability for the same number of molecules.

In electronics, the construction of digital elements provides insulation against fluctuations on a microscopic scale and allows a separation between the logical and physical design of a large system. We see that, once a cell has access to several tens of molecules, it is possible to construct 'digital' switch elements with dynamics that are no longer significantly affected by microscopic fluctuations. Furthermore, weak interactions of these molecules with other cellular components cannot change the basic 'states' of the switch, although these interactions can couple state changes to other events. The importance of this 'digitization' on the scale of 10-100 molecules is illustrated by different models for pattern formation in development. In the classical model due to Turing, patterns are expressed by spatial variations in the concentration of different molecules, and patterns arise because uniform concentrations are rendered unstable through the combination of nonlinearities in the kinetics with the different diffusion constants of different substances. In this picture, the spatial structure of the pattern is linked directly to physical properties of the molecules. An alternative is that each spatial location is labelled by a set of discrete possible states, and patterns evolve out of the 'automaton' rules by which each location changes state in relation to the neighboring states. In this picture states and rules are more abstract, and the dynamics of pattern formation is really at a different level of description from the molecular dynamics of chemical reactions and diffusion. Reliable implementations of automaton rules apparently are accessible as soon as the relevant chemical reactions involve a few dozen molecules.

Biochemical switches have been reconstituted in vitro, but I am not aware of any attempts to verify that stable switching is possible with small numbers of molecules. It would be most interesting to study model systems in which one could confine and monitor sufficiently few molecules that it becomes possible to observe spontaneous switching, that is the breakdown of stability.
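The fixed points and the barriers of Eqs. (9) and (10) are easy to evaluate numerically for any candidate scheme; the sketch below does so for the illustrative rates f and g defined in the simulation sketch above (the bracketing intervals are assumptions tied to those particular rates).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def activation_energies(f, g, brackets):
    """Numerical evaluation of Eqs. (9) and (10): the effective barriers,
    in units of k_B*T, for noise-induced hopping between the two stable
    states of a birth-death switch. `brackets` holds three intervals, each
    containing one root of f(n) = g(n): lower stable point n0, unstable
    point n*, upper stable point n1."""
    drift = lambda n: f(n) - g(n)
    n0, nstar, n1 = (brentq(drift, a, b) for a, b in brackets)
    ratio = lambda n: drift(n) / (f(n) + g(n))
    F_up, _ = quad(lambda n: -2.0 * ratio(n), n0, nstar)     # Eq. (9): n0 -> n1
    F_down, _ = quad(lambda n: 2.0 * ratio(n), nstar, n1)    # Eq. (10): n1 -> n0
    return n0, nstar, n1, F_up, F_down

# Using the illustrative rates f, g from the simulation sketch above:
n0, ns, n1, Fu, Fd = activation_energies(f, g, [(0.1, 15), (15, 32), (32, 60)])
print(f"stable states near {n0:.1f} and {n1:.1f}; "
      f"barriers {Fu:.1f} and {Fd:.1f} k_BT")
```

Because the integrands are bounded by one, each computed barrier must come out at most twice the number of molecules traversed in the corresponding integral, consistent with the bound of Eq. (18).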
Although genetic switches have certain advantages, even the simplest systems would require the full enzymatic apparatus for gene expression (but see Ref. [16] for recent progress on controllable in vitro expression systems).²

²Note also that reactions involving polymer synthesis (mRNA from DNA or protein from mRNA) are not 'elementary' reactions in the sense described by Eq. (2). Synthesis of a single mRNA molecule involves thousands of steps, each of which occurs (conditionally) at constant probability per unit time, and so the noise in the overall synthesis reaction is very different. If the synthesis enzymes are highly processive, so that the polymerization apparatus incorporates many monomers into the polymer before 'backing up' or falling off the template, then synthesis itself involves a delay but relatively little noise; the dominant source of noise becomes the assembly and disassembly of the polymerization complex. Thus there is some subtlety in trying to relate a simple model to the complex sequence of reactions involved in gene expression. On the other hand a detailed simulation is problematic, since there are so many different elementary steps with unknown rates. This combination of circumstances would make experiments on a minimal, in vitro genetic switch especially interesting.

Kinase switches are much simpler, since they can be constructed from just a few proteins and can be triggered by calcium; caged calcium allows for an optical pulse to serve as input. At reasonable protein concentrations, 10-100 molecules are found in a volume of roughly 1 (μm)³. Thus it should be possible to fabricate an array of 'cells' with linear dimensions ranging from 100 nm to 10 μm, such that solutions of kinase and accessory proteins would switch stably in the larger cells but exhibit instability and spontaneous switching in the smaller cells. The state of the switch could be read out by including marker proteins that would serve as substrates of the kinase but have, for example, fluorescence lines that are shifted by phosphorylation, or by having fluorescent probes on the kinase itself; transitions of single enzyme molecules should be observable [15]. A related idea would be to construct vesicles containing ligand-gated ion channels which can conduct calcium, and then have inside the vesicle enzymes for synthesis and degradation of the ligand which are calcium sensitive. The cGMP channels of rod photoreceptors are an example, and in rods the cyclase synthesizing cGMP is calcium sensitive, but the sign is wrong to make a switch [17]; presumably this could be solved by appropriate mixing and matching of protein components from different cells. In such a vesicle the different stable states would be distinguished by different levels of internal calcium (as with adaptation states in the rod), and these could be read out optically using calcium indicators; caged calcium would again provide an optical input to flip the switch. Amusingly, a close-packed array of such vesicles with ∼100 nm dimension would provide an optically addressable and writable memory with storage density comparable to current RAM, albeit with much slower switching.

In summary, it should be possible to build stable biochemical switches from a few tens of molecules, and it seems likely that nature makes use of these. To test our understanding of stability we have to construct systems which cross the threshold for observable instabilities, and this seems accessible experimentally in several systems.

Acknowledgments

Thanks to M.
Dykman, J. J. Hopfield, and A. J. Libchaber for helpful discussions.

References

1. J. M. W. Slack, From Egg to Embryo: Determinative Events in Development (Cambridge University Press, Cambridge, 1983); P. A. Lawrence, The Making of a Fly: The Genetics of Animal Design (Blackwell Science, Oxford, 1992).
2. M. Ptashne, A Genetic Switch: Phage λ and Higher Organisms, 2nd Edition (Blackwell, Cambridge MA, 1992); A. D. Johnson, A. R. Poteete, G. Lauer, R. T. Sauer, G. K. Ackers, and M. Ptashne, Nature 294, 217-223 (1981).
3. A. W. Murray, Nature 359, 599-604 (1992).
4. S. G. Miller and M. B. Kennedy, Cell 44, 861-870 (1986); M. B. Kennedy, Ann. Rev. Biochem. 63, 571-600 (1994).
5. E. Schrodinger, What is Life? (Cambridge University Press, Cambridge, 1944).
6. H. H. McAdams and A. Arkin, Ann. Rev. Biophys. Biomol. Struct. 27, 199-224 (1998); U. S. Bhalla and R. Iyengar, Science 283, 381-387 (1999).
7. J. E. Lisman, Proc. Nat. Acad. Sci. (USA) 82, 3055-3057 (1985).
8. J. E. Lisman and M. A. Goldring, Proc. Nat. Acad. Sci. (USA) 85, 5320-5324 (1988).
9. N. Barkai and S. Leibler, Nature 387, 913-917 (1997).
10. J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Clarendon Press, Oxford, 1989).
11. H. H. McAdams and A. Arkin, Trends Genet. 15, 65-69 (1999).
12. F. Baras, M. Malek Mansour and J. E. Pearson, J. Chem. Phys. 105, 8257-8261 (1996).
13. M. I. Dykman, E. Mori, J. Ross, and P. M. Hunt, J. Chem. Phys. 100, 5735-5750 (1994).
14. S. Coleman, Aspects of Symmetry (Cambridge University Press, Cambridge, 1975).
15. H. P. Lu, L. Xun, and X. S. Xie, Science 282, 1877-1882 (1998); T. Ha, A. Y. Ting, J. Liang, W. B. Caldwell, A. A. Deniz, D. S. Chemla, P. G. Schultz, and S. Weiss, Proc. Nat. Acad. Sci. (USA) 96, 893-898 (1999).
16. G. V. Shivashankar, S. Liu and A. J. Libchaber, Appl. Phys. Lett. 76, 3638-3640 (2000).
17. F. Rieke and D. A. Baylor, Revs. Mod. Phys. 70, 1027-1036 (1998).
N-Body Problems in Statistical Learning

Alexander G. Gray
Department of Computer Science
Carnegie Mellon University
agray@cs.cmu.edu

Andrew W. Moore
Robotics Inst. and Dept. Comp. Sci.
Carnegie Mellon University
awm@cs.cmu.edu

Abstract

We present efficient algorithms for all-point-pairs problems, or 'N-body'-like problems, which are ubiquitous in statistical learning. We focus on six examples, including nearest-neighbor classification, kernel density estimation, outlier detection, and the two-point correlation. These include any problem which abstractly requires a comparison of each of the N points in a dataset with each other point and would naively be solved using N² distance computations. In practice N is often large enough to make this infeasible. We present a suite of new geometric techniques which are applicable in principle to any 'N-body' computation, including large-scale mixtures of Gaussians, RBF neural networks, and HMMs. Our algorithms exhibit favorable asymptotic scaling and are empirically several orders of magnitude faster than the naive computation, even for small datasets. We are aware of no exact algorithms for these problems which are more efficient either empirically or theoretically. In addition, our framework yields simple and elegant algorithms. It also permits two important generalizations beyond the standard all-point-pairs problems, which are more difficult. These are represented by our final examples, the multiple two-point correlation and the notorious n-point correlation.

1 Introduction

This paper is about accelerating a wide class of statistical methods that are naively quadratic in the number of datapoints.¹ We introduce a family of dual kd-tree traversal algorithms for these problems. They are the statistical siblings of powerful state-of-the-art N-body simulation algorithms [1, 4] of computational physics, but the computations within statistical learning present new opportunities for acceleration and require techniques more general than those which have been exploited for the special case of potential-based problems involving forces or charges. We describe in detail a dual-tree algorithm for calculating the two-point correlation, the simplest case of the problems we consider; for the five other statistical problems we consider, we show only performance results for lack of space. The last of our examples, the n-point correlation, illustrates a generalization from all-point-pairs problems to all-n-tuples problems, which are much harder (naively O(N^n)). For all the examples, we believe there exist no exact algorithms which are faster either empirically or theoretically, nor any approximate algorithms that are faster while providing guarantees of acceptably high accuracy (as ours do). For n-tuple N-body problems in particular, this type of algorithm design appears to have surpassed the existing computational barriers.

¹In the general case, when we are computing distances between two different datasets having sizes N₁ and N₂, as in nearest-neighbor classification with separate training and test sets, say, the cost is O(N₁N₂).

Figure 1: A kd-tree. (a) Nodes at level 3. (b) Nodes at level 5. The dots are the individual data points. The sizes and positions of the disks show the node counts and centroids. The ellipses and rectangles show the covariances and bounding boxes. (c) The rectangles show the nodes pruned during a RangeSearch for one (depicted) query and radius. (d) More pruning is possible using RangeCount instead of RangeSearch.
In addition, all the algorithms in this paper can be compactly defined and are easy to implement.

Statistics and geometry. We proceed by viewing these statistical problems as geometric problems, exploiting the data's hyperstructure. Each algorithm utilizes multiresolution kd-trees, providing a geometric partitioning of the data space which is used to reason about entire chunks of the data simultaneously.

A review of kd-trees and mrkd-trees. A kd-tree [3] records a d-dimensional data set containing N records. Each node represents a set of data points by their bounding box. Non-leaf nodes have two children, obtained by splitting the widest dimension of the parent's bounding box. For the purposes of this paper, nodes are split until they contain only one point, where they become leaves. An mrkd-tree [2, 6] is a conventional kd-tree decorated, at each node, with extra statistics about the node's data, such as their count, centroid, and covariance. They are an instance of the idea of cached sufficient statistics [8] and are quite efficient in practice.² See Figure 1.

2 The 2-point correlation function

The two-point correlation is a spatial statistic which is of fundamental importance in many natural sciences, in particular astrophysics and biology. It can be thought of roughly as a measure of the clumpiness of a set of points. It is easily defined as the number of pairs of points in a dataset which lie within a given radius r of each other.

2.1 Previous approaches

Quadratic algorithm. The most naive approach is to simply compare each datum to each other one, incrementing a count if the distance between them is less than r. This has O(N²) cost, unacceptably high for problems of practical interest.

²mrkd-trees can be built quickly, in time O(dN log N + d²N). Although we have not needed to do so, they can be modified to become disk-resident for data sets with billions of records, and they can be efficiently updated incrementally. They scale poorly to higher dimensions but recent work [7] significantly remedies the dimensionality problem.
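For reference, here is a direct quadratic implementation of the two-point count, together with a minimal mrkd-tree node caching the count and bounding box described above. Both are illustrative sketches rather than the paper's code; a median split stands in for the split of the widest dimension, and centroid/covariance statistics are omitted.

```python
import numpy as np

def two_point_naive(X, r):
    """O(N^2) two-point correlation: count unordered pairs within radius r."""
    count = 0
    for i in range(len(X)):
        d = np.linalg.norm(X[i + 1:] - X[i], axis=1)   # distances to later points
        count += int((d < r).sum())
    return count

class Node:
    """mrkd-tree node caching the sufficient statistics used below:
    a point count and a bounding box."""
    def __init__(self, X):
        self.count = len(X)
        self.lo, self.hi = X.min(axis=0), X.max(axis=0)   # bounding box
        self.left = self.right = None
        if len(X) > 1:
            dim = int(np.argmax(self.hi - self.lo))       # widest dimension
            order = np.argsort(X[:, dim])                 # median split, one
            mid = len(X) // 2                             # simple choice
            self.left = Node(X[order[:mid]])
            self.right = Node(X[order[mid:]])
```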
The range searching avoids computing most of the distances between pairs of points further than r apart, which is a considerable saving if r is small. But is it the best we can do? And what if r is large? We now propose several layers of new approaches. 2.2 Better geometric approaches: new algorithms Single-tree search (Range-Counting Algorithm). A straightforward extension can exploit the fact that unlike conventional use of range searching, these statistics frequently don 't need to retrieve all the points in the radius but merely to count them. The mrkd-tree has, in each node, the count of the number of data it contains-the simplest kind of cached sufficient statistic. At a given node, if the distance between the query and the farthest point of the bounding box of the data in the node is smaller than the radius r, clearly every datum in the node is within range of the query. We can then simply add the node 's stored count to the total count. We call this subsumption. 3 (Note that both exclusion and subsumption are simple computations because the geometric regions are always axis-parallel rectangles.) This paper introduces new single-tree algorithms for most of our examples, though it is not our main focus. Dual-tree search. This is the primary topic of this paper. The idea is to consider the query points in chunks as well , as defined by nodes in a kd-tree. In the general case where the query points are different from the data being queried, a separate kd-tree is built for the query points; otherwise a query node and a data node are simply pointers into the same kd-tree. Dual-tree search can be thought of as a simultaneous traversal of two trees, instead of iterating over the query points in an outer loop and only exploiting single-tree-search in the inner loop. Dual-tree search is based on node-node comparisons while Single-tree search was based on point-node comparisons. Pseudocode for a recursive procedure called TwoPointO is shown in Figure 2. It counts the number of pairs of points (x q E QNODE, Xd E DNoDE) such that I X q xdl < r. Before doing any real work, the procedure checks whether it can perform an exclusion pruning (in which case the call terminates, returning 0) or subsumption pruning (in which case the call terminates, returning the product of the number of points in the two nodes). If neither of these prunes occur, then depending on whether QNODE and/or DNODE are leaves, the corresponding recursive calls are made. 3S ubsumption can also be exploited when other aggregate statistics, such as centroids or covariances of sets of points in a range are required [2 , 14, 9]. TwoPoint( QNODE,DNODE ,r) if excludes(QNODE,DNODE,r), r eturn ; if subsumes(QNoDE,DNoDE,r) total = total + ( count(QNoDE) X count(DNoDE) ); r eturn; if le af(QNoDE) and leaf(DNoDE) if distance(QNoDE,DNODE) < r, total = total + 1; if le af(QNoDE) and notleaf(DNoDE) TwoPoint( Q NODE,leftchild (D NODE), r ); Two Point (Q NODE,rightchild (D NODE) ,r ); if notle af(QNoDE) and le af(DNoDE) TwoPoint(leftchild(QNoDE ) ,DNoDE,r); TwoPoint ( rightchild ( QNODE) ,DNoDE,r); if notleaf(QNoDE) and notleaf(DNoDE) TwoPoint(leftchild(QNoDE) ,left child(DNoDE) ,r ); TwoPoint(leftchild ( QNODE) ,rightchild(DNoDE) ,r); TwoPoint(rightchild( QNODE) ,leftchild (DNoDE) ,r); TwoPoint(rightchild(QNoDE) ,rightchild(DNoDE) ,r); Figure 2: A recursive Dual-tree code. All the reported algorithms have a similar brevity. 
Importantly, both kinds of prunings can now apply to many query points at once, instead of each nearby query point rediscovering the same prune during the single-tree search. The intuition behind dual-tree's advantage can be seen by considering two cases. First, if r is so large that all pairs of points are counted, then the single-tree search will perform O(N) operations, where each query point immediately prunes at the root, while dual-tree search will perform O(1) operations. Second, if r is so small that no pairs of points are counted, single-tree search will run to one leaf for each query, meaning total work O(N log N), whereas dual-tree search will visit each leaf once, meaning O(N) work. Note, however, that in the middle case of a medium-size r, dual-tree is theoretically only a constant factor superior to single-tree.⁴

⁴We'll summarize the asymptotic analysis briefly. If the data is uniformly distributed in d-dimensional space, the cost of computing the n-point correlation function on a dataset with N points using the dual-tree (n-tree) algorithm is O(N^α_nd), where α_nd is the dimensionality of the manifold of n-tuples that are just on the border between being matched and not-matched, and is α_nd = n′(1 − (n′ − 1)/(nd)), where n′ = min(n, d). For example, the 2-point correlation function in two dimensions is O(N^(3/2)), considerably better than the O(N²) naive algorithm. Disappointingly, for 2-point, this performance is asymptotically the same cost as single-tree. For n > 2 our algorithm is better. Furthermore, if we can accept an approximate answer, the cost becomes independent of N.

Non-redundant dual-tree search. So far, we have discussed two operations which cut short the need to traverse the tree further: exclusion and subsumption. Another form of pruning is to eliminate node-node comparisons which have been performed already in the reverse order. This can be done [11] simply by (virtually) ranking the datapoints according to their position in a depth-first traversal of the tree, then recording for each node the minimum and maximum ranks of the points it owns, and pruning whenever QNODE's maximum rank is less than DNODE's minimum rank. This is useful for all-pairs problems, but becomes essential for all-n-tuples problems. This kind of pruning is not practical for single-tree search. Figure 3 shows the performance of a two-point correlation algorithm using all the aforementioned pruning methods.

Multiple radii simultaneously. Most often in practice, the two-point is computed for many successive radii so that a curve can be plotted, indicating the clumpiness on different scales. Though the method presented so far is fast, it may have to be run once for each of, say, 1,000 radii. It is possible to perform a single, faster computation for all the radii simultaneously, by taking advantage of the nesting structure of the ordered radii, with an algorithm which recursively narrows the radii which still need to be considered based on the current closest and farthest distances between the nodes. The details are omitted for space, regrettably. The results in Figure 4 confirm that the algorithm quickly focuses on the radii of relevance: for 150,000 data, computing 1,000 2-point correlations took only 7 times as long as computing one.
Algorithm | # Data  | Quadratic   | Single-tree | Dual-tree | ST Speedup | DT Speedup
twopoint  | 10,000  | 132         | 2.2         | 1.2       | 60         | 110
twopoint  | 50,000  | 3300 est.   | 11.8        | 7.0       | 280        | 471
twopoint  | 150,000 | 30899 est.  | 37          | 20        | 835        | 1545
twopoint  | 300,000 | 123599 est. | 76          | 40        | 1626       | 3090
nearest   | 10,000  | 139         | 2.0         | 1.4       | 70         | 99
nearest   | 20,000  | 556 est.    | 11.6        | 9.8       | 48         | 57
nearest   | 50,000  | 3475 est.   | 30.6        | 26.4      | 114        | 132
outliers  | 10,000  | 141         | 2.3         | 1.2       | 61         | 118
outliers  | 50,000  | 3525 est.   | 12          | 6.5       | 294        | 542
outliers  | 150,000 | 33006 est.  | 36          | 21        | 917        | 1572
outliers  | 300,000 | 132026 est. | 72          | 44        | 1834       | 3001

Figure 3: Our experiments timed our algorithms on large astronomical datasets of current scientific interest, consisting of x-y positions of sky objects from the Sloan Digital Sky Survey. All times are given in seconds, and runs were performed on a Pentium III-500 MHz Linux workstation. The larger runtimes for the quadratic algorithm were estimated based on those for smaller datasets. The dual kd-tree method is about a factor of 2 faster than the single kd-tree method, and both are 3 orders of magnitude faster than the quadratic method for a medium-sized dataset of 300,000 points.

(a)
# Data  | 1 radius | 100 radii | 1000 radii | Speedup
10,000  | 1.2      | 1.8       | 2.4        | 500
20,000  | 2.8      | 6.4       | 6.6        | 424
50,000  | 7.0      | 31        | 31         | 226
150,000 | 20       | 133       | 146        | 137

(b)
# Data  | Quadratic   | Dual-tree (looser ε) | Dual-tree (tighter ε) | Speedup
10,000  | 226         | 1.2                  | 3.0                   | 188
50,000  | 5650 est.   | 10.4                 | 16.8                  | 543
150,000 | 50850 est.  | 32                   | 65                    | 1589
300,000 | 203400 est. | 73                   | 151                   | 2786

Figure 4: (a) Runtimes for multiple 2-point correlation with increasing number of radii, and the speedup compared to 1,000 separate dual-tree 2-point correlations. (b) Runtimes for kernel density estimation with decreasing levels of approximation, controlled by parameter ε, and speedup over quadratic (for the looser setting).

3 Kernel density estimation

Approximation accelerations. A fourth major type of pruning opportunity is approximation. This is often needed in all-point-pairs computations which involve computing some real-valued function f(x, y) between every pair of points x and y. An example is kernel density estimation with an infinite-tailed kernel such as a Gaussian, in which every training point has some non-zero (though perhaps infinitesimal) contribution to the density at each test point. For each query point x_q we need to accumulate K Σ_i w(|x_q − x_i|), where K is a normalizing constant and w is a weighting function (which we will need to assume is monotonic). A recursive call of the dual-tree implementation has the following job: for x_q ∈ QNODE, compute the contribution to x_q's summed weights that is due to all points in DNODE. Once again, before doing any real work we use simple rectangle geometry to compute the shortest and furthest possible distances between any (x_q, x_d) pair. This bounds the minimum and maximum possible values of K w(|x_q − x_d|). If these bounds are tight enough (according to an approximation parameter ε) we prune by simply distributing the midpoint weight to all the points in QNODE.
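The pruning rule just described can be rendered as a small extension of the dual-tree recursion. The sketch below is an illustration (reusing min_dist and max_dist from the two-point sketch above), not the paper's code; it accumulates approximate kernel sums on query nodes.

```python
import math

def gauss(u):
    """Unnormalized Gaussian weighting function w."""
    return math.exp(-0.5 * u * u)

def kde_accumulate(q, d, h, eps, kernel=gauss):
    """Dual-tree kernel summation (a sketch). Accumulates, in a `weight`
    attribute on query nodes, the kernel mass contributed by the data points
    under d; `kernel` must decrease monotonically with distance, h is the
    bandwidth, and eps is the approximation parameter."""
    w_hi = kernel(min_dist(q, d) / h)        # largest possible per-point weight
    w_lo = kernel(max_dist(q, d) / h)        # smallest possible per-point weight
    if w_hi - w_lo < eps or (q.left is None and d.left is None):
        # Bounds tight enough (or an exact leaf-leaf distance): midpoint weight.
        q.weight = getattr(q, "weight", 0.0) + d.count * 0.5 * (w_hi + w_lo)
        return
    q_kids = (q,) if q.left is None else (q.left, q.right)
    d_kids = (d,) if d.left is None else (d.left, d.right)
    for qc in q_kids:
        for dc in d_kids:
            kde_accumulate(qc, dc, h, eps, kernel)
```

A final push-down pass, adding each internal node's accumulated weight to its descendants, yields one unnormalized density estimate per query point.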
4 The n-point correlation, for n > 2

The n-point correlation is the generalization of the 2-point correlation, which counts the number of n-tuples of points lying within radius r of each other, or more generally, between some r_min and r_max (see footnote 5). The implementation is entirely analogous to the 2-point case, using n trees in general instead of two, except that there is more benefit in being careful about which of the 2^n possible recursive calls to choose in the cases where you cannot prune, the approximation versions are harder, there is no immediately analogous Single-tree version of the algorithm, and anti-redundancy pruning is much more important. Figure 5 shows the unprecedented efficiency gains, which become more dramatic as n increases.

Footnote 5: The n-point correlation is useful for detailed characterizations of mass distributions (including galaxies and biomasses). Higher-order n-point correlations detect increasingly subtle differences in mass distribution, and are also useful for assessing variance in the lower-order n-point statistics. For example, the three-point correlation, which measures the number of triplets of points meeting the specified geometric constraints, can distinguish between two distributions that have the same 2-point correlations but differ in their degree of "stripiness" versus "spottiness".

(b)  # Data   1000   2000   10000   20000
     Time     1      13     1470    14441

(c)  n \ d    1      2      3       4
     2        <1     <1     <1      <1
     3        <1     3      6       7
     4        <1     23     57      73

Figure 5: (a) Runtimes for approximate n-point correlation with eps = 0.02 and 20,000 data. (b) Runtimes for approximate 4-point with eps = 0.02 and increasing data size. (c) Runtimes for exact n-point, run on 2000 datapoints of galaxies in d-dimensional color space.

Approximating 'exact' computations. Even for algorithms such as 2-point, that return exact counts, bounded approximation is possible. Suppose the true value of the 2-point function is V* but that we can tolerate a fractional error of eps: we'll accept any value V such that |V - V*| < eps V*. It is possible to adapt the dual-tree algorithm using a best-first iterative deepening search strategy to guarantee this result while exploiting permission to approximate, effectively building the count as much as possible from "easy-win" node pairs while doing approximation at hard deep node-pairs.

5 Outlier detection, nearest neighbors, and other problems

One of the main intents of this paper is to point out the broad applicability of this type of algorithm within statistical learning. Figure 3 shows performance results for our outlier detection and nearest neighbors algorithms (see footnote 6). Figure 6 lists many N-body problems which are clear candidates for acceleration in future work.

Footnote 6: In our nearest neighbors algorithm we consider the problem of finding, for each query point, its single nearest neighbor among the data points. (This is exactly the all-nearest-neighbors problem of computational geometry.) The methods are easily generalized to the case of finding the k nearest neighbors, as in k-NN classification and locally weighted regression. Outlier detection is one of the most common statistical operations encountered in data analysis. The question of which procedure is most correct is an open and active one. We present here a natural operation which might be used directly for outlier detection, or within another procedure: for each of the points, find the number of other points that are within distance r of it - those having zero neighbors within r are defined as outliers. (This is exactly the all-range-count problem.)
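The all-range-count operation defined in footnote 6 drops out of the same machinery; a minimal sketch, again reusing the hypothetical Node and bounds helpers, with the counts <= 1 outlier rule reflecting the fact that a self-join always counts each point's self-pair.

    import numpy as np

    def range_count(points, counts, qnode, dnode, r):
        """All-range-count: for every query point, count data points within r."""
        dmin, dmax = bounds(qnode, dnode)
        if dmin > r:
            return                                    # exclusion
        if dmax <= r:
            counts[qnode.idx] += dnode.count          # subsumption
            return
        if qnode.left is None and dnode.left is None: # leaf-leaf: exact counts
            dq, dd = points[qnode.idx], points[dnode.idx]
            dist = np.sqrt(((dq[:, None, :] - dd[None, :, :]) ** 2).sum(-1))
            counts[qnode.idx] += (dist <= r).sum(axis=1)
            return
        if dnode.left is None or (qnode.left is not None and qnode.count >= dnode.count):
            range_count(points, counts, qnode.left, dnode, r)
            range_count(points, counts, qnode.right, dnode, r)
        else:
            range_count(points, counts, qnode, dnode.left, r)
            range_count(points, counts, qnode, dnode.right, r)

    # outliers: points whose only neighbor within r is themselves
    counts = np.zeros(len(pts), dtype=int)
    range_count(pts, counts, root, root, r=0.05)
    outliers = np.where(counts <= 1)[0]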
Statistical Operation                          Results here?   Approximation?   What is N?
2-point function                               Yes             Optional         # Data
n-point function                               Yes             Optional         # Data
Multiple 2-point function                      Yes             Optional         # Data
Batch k-nearest neighbor                       Yes             Optional         # Data
Non-parametric outlier detection / denoising   Yes             Optional         # Data
Batch kernel density / classify / regress      Yes             Yes              # Data
Batch locally weighted regression              No              Yes              # Data
Batch kernel PCA                               No              Yes              # Data
Gaussian process learning and prediction       No              Yes              # Data
K-means                                        No              Optional         # Data, Clusters
Mixture of Gaussians clustering                No              Yes              # Data, Clusters
Hidden Markov model                            No              Yes              # Data, States
RBF neural network                             No              Yes              # Data, Neurons
Finding pairs of correlated attributes         No              Optional         # Attributes
Finding n-tuples of correlated attributes      No              Optional         # Attributes
Dependency-tree learning                       No              Optional         # Attributes

Figure 6: A very brief sample of applicability of Dual-tree search methods.

References

[1] J. Barnes and P. Hut. A Hierarchical O(N log N) Force-Calculation Algorithm. Nature, 324, 1986.
[2] K. Deng and A. W. Moore. Multiresolution instance-based learning. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 1233-1239, San Francisco, 1995. Morgan Kaufmann.
[3] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209-226, September 1977.
[4] L. Greengard and V. Rokhlin. A Fast Algorithm for Particle Simulations. Journal of Computational Physics, 73, 1987.
[5] D. E. Knuth. Sorting and Searching. Addison Wesley, 1973.
[6] A. W. Moore. Very fast mixture-model-based clustering using multiresolution kd-trees. In M. Kearns and D. Cohn, editors, Advances in Neural Information Processing Systems 10, pages 543-549, San Francisco, April 1999. Morgan Kaufmann.
[7] A. W. Moore. The Anchors Hierarchy: Using the triangle inequality to survive high dimensional data. In Twelfth Conference on Uncertainty in Artificial Intelligence (to appear). AAAI Press, 2000.
[8] A. W. Moore and M. S. Lee. Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets. Journal of Artificial Intelligence Research, 8, March 1998.
[9] D. Pelleg and A. W. Moore. Accelerating Exact k-means Algorithms with Geometric Reasoning. In Proceedings of the Fifth International Conference on Knowledge Discovery and Data Mining. AAAI Press, 1999.
[10] F. P. Preparata and M. Shamos. Computational Geometry. Springer-Verlag, 1985.
[11] A. Szalay. Personal Communication, 2000.
[12] I. Szapudi. A New Method for Calculating Counts in Cells. The Astrophysical Journal, 1997.
[13] I. Szapudi, S. Colombi, and F. Bernardeau. Cosmic Statistics of Statistics. Monthly Notices of the Royal Astronomical Society, 1999.
[14] T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An Efficient Data Clustering Method for Very Large Databases. In Proceedings of the Fifteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems: PODS 1996. Association for Computing Machinery, 1996.
Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice

Dirk Ormoneit
Department of Computer Science, Stanford University, Stanford, CA 94305-9010

Peter Glynn
EESOR, Stanford University, Stanford, CA 94305-4023

Abstract

Many approaches to reinforcement learning combine neural networks or other parametric function approximators with a form of temporal-difference learning to estimate the value function of a Markov Decision Process. A significant disadvantage of those procedures is that the resulting learning algorithms are frequently unstable. In this work, we present a new, kernel-based approach to reinforcement learning which overcomes this difficulty and provably converges to a unique solution. By contrast to existing algorithms, our method can also be shown to be consistent in the sense that its costs converge to the optimal costs asymptotically. Our focus is on learning in an average-cost framework and on a practical application to the optimal portfolio choice problem.

1 Introduction

Temporal-difference (TD) learning has been applied successfully to many real-world applications that can be formulated as discrete state Markov Decision Processes (MDPs) with unknown transition probabilities. If the state variables are continuous or high-dimensional, the TD learning rule is typically combined with some sort of function approximator - e.g. a linear combination of feature vectors or a neural network - which may well lead to numerical instabilities (see, for example, [BM95, TR96]). Specifically, the algorithm may fail to converge under several circumstances which, in the authors' opinion, is one of the main obstacles to a more widespread use of reinforcement learning (RL) in industrial applications. As a remedy, we adopt a non-parametric perspective on reinforcement learning in this work and we suggest a new algorithm that always converges to a unique solution in a finite number of steps. In detail, we assign value function estimates to the states in a sample trajectory and we update these estimates in an iterative procedure. The updates are based on local averaging using a so-called "weighting kernel". Besides numerical stability, a second crucial advantage of this algorithm is that additional training data always improve the quality of the approximation and eventually lead to optimal performance - that is, our algorithm is consistent in a statistical sense. To the authors' best knowledge, this is the first reinforcement learning algorithm for which consistency has been demonstrated in a continuous space framework. Specifically, the recently advocated "direct" policy search or perturbation methods can by construction at most be optimal in a local sense [SMSM00, VRK00]. Relevant earlier work on local averaging in the context of reinforcement learning includes [Rus97] and [Gor99]. While these papers pursue related ideas, their approaches differ fundamentally from ours in the assumption that the transition probabilities of the MDP are known and can be used for learning. By contrast, kernel-based reinforcement learning only relies on sample trajectories of the MDP and it is therefore much more widely applicable in practice. While our method addresses both discounted- and average-cost problems, we focus on average-costs here and refer the reader interested in discounted-costs to [OS00]. For brevity, we also defer technical details and proofs to an accompanying paper [OG00].
Note that average-cost reinforcement learning has been discussed by several authors (e.g. [TR99]). The remainder of this work is organized as follows. In Section 2 we provide basic definitions and we describe the kernel-based reinforcement learning algorithm. Section 3 focuses on the practical implementation of the algorithm and on theoretical issues. Sections 4 and 5 present our experimental results and conclusions.

2 Kernel-Based Reinforcement Learning

Consider an MDP defined by a sequence of states X_t taking values in R^d, a sequence of actions a_t taking values in A = {1, 2, ..., M}, and a family of transition kernels {P_a(x, B) | a in A} characterizing the conditional probability of the event X_t in B given X_{t-1} = x and a_{t-1} = a. The cost function c(x, a) represents an immediate penalty for applying action a in state x. Strategies, policies, or controls are understood as mappings of the form mu: R^d -> A, and we let P_{x,mu} denote the probability distribution governing the Markov chain starting from X_0 = x associated with the policy mu. Several regularity conditions are listed in detail in [OG00]. Our goal is to identify policies that are optimal in that they minimize the long-run average cost

    eta_mu := lim_{T->inf} E_{x,mu}[ (1/T) Sum_{t=0}^{T-1} c(X_t, mu(X_t)) ].

An optimal policy, mu*, can be characterized as a solution to the Average-Cost Optimality Equation (ACOE):

    eta* + h*(x) = min_a { c(x, a) + (Gamma_a h*)(x) },          (1)
    mu*(x) = argmin_a { c(x, a) + (Gamma_a h*)(x) },             (2)

where eta* is the minimum average cost and h*(x) has an interpretation as the differential value of starting in x as opposed to drawing a random starting position from the stationary distribution under mu*. Gamma_a denotes the conditional expectation operator (Gamma_a h)(x) := E_{x,a}[h(X_1)], which is assumed to be unknown so that (1) cannot be solved explicitly. Instead, in reinforcement learning we simulate the MDP using a fixed proposal strategy mu-bar to generate a sample trajectory as training data. Formally, let S := {z_0, ..., z_m} denote such an m-step sample trajectory and let A := {a_0, ..., a_{m-1} | a_s = mu-bar(z_s)} and C := {c(z_s, a_s) | 0 <= s < m} be the sequences of actions and costs associated with S. Then our objective can be reformulated as the approximation of mu* based on the information in S, A, and C. In detail, we will construct an approximate expectation operator, Gamma-hat_{m,a}, based on the training data S, and use this approximation in place of the true operator Gamma_a in this work. Formally substituting Gamma-hat_{m,a} for Gamma_a in (1) and (2) gives the Approximate Average-Cost Optimality Equation (AACOE):

    eta-hat_m + h-hat_m(x) = min_a { c(x, a) + (Gamma-hat_{m,a} h-hat_m)(x) },   (3)
    mu-hat_m(x) = argmin_a { c(x, a) + (Gamma-hat_{m,a} h-hat_m)(x) }.           (4)

Note that, if the solutions eta-hat_m and h-hat_m to (3) are well-defined, they can be interpreted as statistical estimates of eta* and h* in equation (1). However, eta-hat_m and h-hat_m need not exist unless Gamma-hat_{m,a} is defined appropriately. We therefore employ local averaging in this work to construct Gamma-hat_{m,a} in a way that guarantees the existence of a unique fixed point of (3). For the derivation of the local averaging operator, note that the task of approximating (Gamma_a h)(x) = E_{x,a}[h(X_1)] can be interpreted alternatively as a regression of the "target" variable h(X_1) onto the "input" X_0 = x. So-called kernel smoothers address regression tasks of this sort by locally averaging the target values in a small neighborhood of x.
This gives the following approximation:

    (Gamma-hat_{m,a} h)(x) = Sum_{s=0}^{m-1} k_{m,a}(z_s, x) h(z_{s+1}),                          (5)
    k_{m,a}(z_s, x) = 1{a_s = a} phi_b(z_s - x) / Sum_{s': a_{s'} = a} phi_b(z_{s'} - x),         (6)

where 1{.} is the indicator function and phi_b is a multivariate Gaussian with bandwidth b. In detail, we employ the weighting function or weighting kernel k_{m,a}(z_s, x) in (6) to determine the weights that are used for averaging in equation (5). Here k_{m,a}(z_s, x) is a multivariate Gaussian, normalized so as to satisfy the constraints k_{m,a}(z_s, x) > 0 if a_s = a, k_{m,a}(z_s, x) = 0 if a_s != a, and Sum_{s=0}^{m-1} k_{m,a}(z_s, x) = 1. Intuitively, (5) assesses the future differential cost of applying action a in state x by looking at all times in the training data where a has been applied previously in a state similar to x, and by averaging the current differential value estimates at the outcomes of these previous transitions. Because the weights k_{m,a}(z_s, x) are related inversely to the distance ||z_s - x||, transitions originating in the neighborhood of x are most influential in this averaging procedure. A more statistical interpretation of (5) would suggest that ideally we could simply generate a large number of independent samples from the conditional distribution P_{x,a} and estimate E_{x,a}[h(X_1)] using Monte Carlo approximation. Practically speaking, this approach is clearly infeasible because in order to assess the value of the simulated successor states we would need to sample recursively, thereby incurring exponentially increasing computational complexity. A more realistic alternative is to estimate (Gamma-hat_{m,a} h)(x) as a local average of the rewards that were generated in previous transitions originating in the neighborhood of x, where the membership of an observation z_s in the neighborhood of x is quantified using k_{m,a}(z_s, x). Here the regularization parameter b determines the width of the Gaussian kernel and thereby also the size of the neighborhood used for averaging. Depending on the application, it may be advisable to choose b either fixed or as a location-dependent function of the training data.

3 "Self-Approximating Property"

As we illustrated above, kernel-based reinforcement learning formally amounts to substituting the approximate expectation operator Gamma-hat_{m,a} for Gamma_a and then applying dynamic programming to derive solutions to the approximate optimality equation (3). In this section, we outline a practical implementation of this approach and we present some of our theoretical results. In particular, we consider the relative value iteration algorithm for average-cost MDPs that is described, for example, in [Ber95]. This procedure iterates a variant of equation (1) to generate a sequence of value function estimates, h-hat^k, that eventually converge to a solution of (1) (or (3), respectively). An important practical problem in continuous state MDPs is that the intermediate functions h-hat^k need to be represented explicitly on a computer. This requires some form of function approximation which may be numerically undesirable and computationally burdensome in practice. In the case of kernel-based reinforcement learning, the so-called "self-approximating" property allows for a much more efficient implementation in vector format (see also [Rus97]). Specifically, because our definition of Gamma-hat_{m,a} h in (5) only depends on the values of h at the states in S, the AACOE (3) can be solved in two steps:

    eta-hat_m + h-hat_m(z_i) = min_a { c(z_i, a) + (Gamma-hat_{m,a} h-hat_m)(z_i) },  z_i in S,   (7)
    h-hat_m(x) = min_a { c(x, a) + (Gamma-hat_{m,a} h-hat_m)(x) } - eta-hat_m.                    (8)

In other words, we first determine the values of h-hat_m at the points in S using (7) and then compute the values at new locations x in a second step using (8). Note that (7) is a finite equation system, by contrast to (3).
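A minimal sketch of the local-averaging operator (5)-(6). The names are our own (Z holds the m+1 sampled states, actions the m actions taken, h_succ the current value estimates at the successor states z_{s+1}), and the Gaussian form of phi_b follows the constraints stated above; the last helper precomputes the weight matrices used by the vectorized iteration in the next section.

    import numpy as np

    def kernel_weights(Z, actions, x, a, b):
        """Normalized Gaussian weights k_{m,a}(z_s, x) over the transitions in the
        sample trajectory that used action a; zero weight elsewhere."""
        w = np.zeros(len(actions))
        mask = (actions == a)
        d2 = ((Z[:-1] - x) ** 2).sum(axis=1)          # ||z_s - x||^2, s = 0..m-1
        w[mask] = np.exp(-0.5 * d2[mask] / b ** 2)    # Gaussian with bandwidth b
        s = w.sum()
        return w / s if s > 0 else w

    def approx_expectation(Z, actions, h_succ, x, a, b):
        """(Gamma-hat_{m,a} h)(x) from equation (5): a local average of the
        differential values h(z_{s+1}) at the observed successor states."""
        return kernel_weights(Z, actions, x, a, b) @ h_succ

    def weight_matrices(Z, actions, action_set, b):
        """Phi_a(i, j) = k_{m,a}(z_j, z_i): the matrices used by the vectorized
        relative value iteration of equation (9) below."""
        return {a: np.vstack([kernel_weights(Z, actions, Z[i], a, b)
                              for i in range(len(Z) - 1)])
                for a in action_set}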
By introducing the vectors and matrices h-hat(i) := h-hat_m(z_i), c_a(i) := c(z_i, a), and Phi_a(i, j) := k_{m,a}(z_j, z_i) for i = 1, ..., m and j = 1, ..., m, the relative value iteration algorithm can thus be written conveniently as (for details, see [Ber95, OG00]):

    h-hat^{k+1} := h-hat^k_new - h-hat^k_new(1) 1,  where  h-hat^k_new := min_a { c_a + Phi_a h-hat^k },   (9)

the minimum being taken component-wise. Hence we end up with an algorithm that is analogous to value iteration, except that we use the weighting matrix Phi_a in place of the usual transition probabilities, and h-hat^k and c_a are vectors indexed by the points in the training set S as opposed to vectors indexed by states. Intuitively, (9) assigns value estimates to the states in the sample trajectory and updates these estimates in an iterative fashion. Here the update of each state is based on a local average over the costs and values of the samples in its neighborhood. Since Phi_a(i, j) >= 0 and Sum_{j=1}^m Phi_a(i, j) = 1, we can further exploit the analogy between (9) and the usual value iteration in an "artificial" MDP with transition probabilities Phi_a to prove the following theorem:

Theorem 1: The relative value iteration (9) converges to a unique fixed point.

For details, the reader is referred to [OS00, OG00]. Note that Theorem 1 illustrates a rather unique property of kernel-based reinforcement learning by comparison to alternative approaches. In addition, we can show that - under suitable regularity conditions - kernel-based reinforcement learning is consistent in the following sense:

Theorem 2: The approximate optimal cost eta-hat_m converges to the true optimal cost eta* in the sense that E_{x_0,mu-bar} |eta-hat_m - eta*| -> 0 as m -> inf. Also, the true cost of the approximate strategy mu-hat_m converges to the optimal cost: E_{x_0,mu-bar} |eta_{mu-hat_m} - eta*| -> 0 as m -> inf.

Hence mu-hat_m performs as well as mu* asymptotically and we can also predict the optimal cost eta* using eta-hat_m. From a practical standpoint, Theorem 2 asserts that the performance of approximate dynamic programming can be improved by increasing the amount of training data. Note, however, that the computational complexity of approximate dynamic programming depends on the sample size m. In detail, the complexity of a single application of (9) is O(m^2) in a naive implementation and O(m log m) in a more elaborate nearest neighbor approach. This complexity issue prevents the use of very large data sets using the "exact" algorithm described above. As in the case of parametric reinforcement learning, we can of course restrict ourselves to a fixed amount of computational resources simply by discarding observations from the training data or by summarizing clusters of data using "sufficient statistics". Note that the convergence property in Theorem 1 remains unaffected by such an approximation.

4 Optimal Portfolio Choice

In this section, we describe the practical application of kernel-based reinforcement learning to an investment problem where an agent in a financial market decides whether to buy or sell stocks depending on the market situation. In the finance and economics literature, this task is known as "optimal portfolio choice" and has created an enormous literature over the past decades. Formally, let S_t symbolize the value of the stock at time t and let the investor choose her portfolio a_t from the set A := {0, 0.1, 0.2, ..., 1}, corresponding to the relative amount of wealth invested in stocks as opposed to an alternative riskless asset. At time t + 1, the stock price changes from S_t to S_{t+1}, and the portfolio of the investor participates in the price movement depending on her investment choice.
Formally, if her wealth at time t is W_t, it becomes W_{t+1} = (1 + a_t (S_{t+1} - S_t)/S_t) W_t at time t + 1. To render this simulation as realistic as possible, our investor is assumed to be risk-averse in that her fear of losses dominates her appreciation of gains of equal magnitude. A standard way to express these preferences formally is to aim at maximizing the expectation of a concave "utility function", U(z), of the final wealth W_T. Using the choice U(z) = log(z), the investor's utility can be written as U(W_T) = Sum_{t=0}^{T-1} log(1 + a_t (S_{t+1} - S_t)/S_t). Hence utilities are additive over time, and the objective of maximizing E[U(W_T)] can be stated in an average-cost framework where c(x, a) = E_{x,a}[log(1 + a (S_{t+1} - S_t)/S_t)]. We present results using simulated and real stock prices. With regard to the simulated data, we adopt the common assumption in the finance literature that stock prices are driven by an Ito process with stochastic, mean-reverting volatility:

    dS_t = mu S_t dt + sqrt(v_t) S_t dB_t,
    dv_t = phi (v-bar - v_t) dt + rho sqrt(v_t) dB~_t.

Here v_t is the time-varying volatility, and B_t and B~_t are independent Brownian motions. The parameters of the model are mu = 1.03, v-bar = 0.3, phi = 10.0, and rho = 5.0. We simulated daily data for the period of 13 years using the usual Euler approximation of these equations. The resulting stock prices, volatilities, and returns are shown in Figure 1.

Figure 1: The simulated time series of stock prices (left), volatility (middle), and daily returns (right; r_t := log(S_t/S_{t-1})) over a period of one year.

Next, we grouped the simulated time series into 10 sets of training and test data such that the last 10 years are used as 10 test sets and the three years preceding each test year are used as training data. Table 1 reports the training and test performances on each of these experiments using kernel-based reinforcement learning and a benchmark buy & hold strategy.

Year   RL Training   RL Test     Buy&Hold Training   Buy&Hold Test
4      0.129753      0.096555    0.058819            0.052533
5      0.125742      0.107905    0.043107            0.081395
6      0.100265      -0.074588   0.053755            -0.064981
7      0.059405      0.201186    0.018023            0.172968
8      0.082622      0.227161    0.041410            0.197319
9      0.077856      0.098172    0.074632            0.092312
10     0.136525      0.199804    0.137416            0.194993
11     0.145992      0.121507    0.147065            0.118656
12     0.126052      -0.018110   0.125978            -0.017869
13     0.127900      -0.022748   0.077196            -0.029886

Table 1: Investment performance on the simulated data (initial wealth W_0 = 100).

Performance is measured using the Sharpe ratio, which is a standard measure of risk-adjusted investment performance. In detail, the Sharpe ratio is defined as SR = log(W_T/W_0)/sigma-hat, where sigma-hat is the standard deviation of log(W_t/W_{t-1}) over time. Note that large values indicate good risk-adjusted performance in years of positive growth, whereas negative values cannot readily be interpreted. We used the root of the volatility (standardized to zero mean and unit variance) as input information and determined a suitable choice for the bandwidth parameter (b = 1) experimentally. Our results in Table 1 demonstrate that reinforcement learning dominates buy & hold in eight out of ten years on the training set and in all seven years with positive growth on the test set. Table 2 shows the results of an experiment where we replaced the artificial time series with eight years of daily German stock index data (DAX index, 1993-2000). We used the years 1996-2000 as test data and the three years preceding each test year for training.
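A sketch of the Euler discretization of the price and volatility equations above; the time step, random seed handling, and the clipping of v_t at zero are our own assumptions rather than details reported in the paper.

    import numpy as np

    def simulate_prices(n_days, dt=1.0/260, mu=1.03, vbar=0.3, phi=10.0, rho=5.0,
                        S0=100.0, v0=0.3, seed=0):
        """Euler scheme for dS = mu*S*dt + sqrt(v)*S*dB and
        dv = phi*(vbar - v)*dt + rho*sqrt(v)*dB~ (independent Brownian motions)."""
        rng = np.random.default_rng(seed)
        S, v = np.empty(n_days + 1), np.empty(n_days + 1)
        S[0], v[0] = S0, v0
        for t in range(n_days):
            dB, dBv = rng.normal(0.0, np.sqrt(dt), size=2)
            vol = np.sqrt(max(v[t], 0.0))
            S[t + 1] = S[t] + mu * S[t] * dt + vol * S[t] * dB
            v[t + 1] = max(v[t] + phi * (vbar - v[t]) * dt + rho * vol * dBv, 0.0)
        return S, v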
As the model input, we computed an approximation of the (root) volatility using a geometric average of historical returns. Note that the training performance of reinforcement learning always dominates the buy & hold strategy, and the test results are also superior to the benchmark except in the year 2000.

Year   RL Training   RL Test     Buy&Hold Training   Buy&Hold Test
1996   0.083925      0.173373    0.038818            0.120107
1997   0.119875      0.121583    0.119875            0.096369
1998   0.123927      0.079584    0.096183            0.035204
1999   0.141242      0.094807    0.035137            0.090541
2000   0.085236      -0.007878   0.081271            0.148203

Table 2: Investment performance on the DAX data.

5 Conclusions

We presented a new, kernel-based reinforcement learning method that overcomes several important shortcomings of temporal-difference learning in continuous-state domains. In particular, we demonstrated that the new approach always converges to a unique approximation of the optimal policy and that the quality of this approximation improves with the amount of training data. Also, we described a financial application where our method consistently outperformed a benchmark model in an artificial and a real market scenario. While the optimal portfolio choice problem is relatively simple, it provides an impressive proof of concept by demonstrating the practical feasibility of our method. Efficient implementations of local averaging for large-scale problems have been discussed in the data mining community. Our work makes these methods applicable to reinforcement learning, which should be valuable to meet the real-time and dimensionality constraints of real-world problems.

Acknowledgements. The work of Dirk Ormoneit was partly supported by the Deutsche Forschungsgemeinschaft. Saunak Sen helped with valuable discussions and suggestions.

References

[Ber95] D. P. Bertsekas. Dynamic Programming and Optimal Control, volumes 1 and 2. Athena Scientific, 1995.
[BM95] J. A. Boyan and A. W. Moore. Generalization in reinforcement learning: Safely approximating the value function. In NIPS 7, 1995.
[Gor99] G. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Computer Science Department, Carnegie Mellon University, 1999.
[OG00] D. Ormoneit and P. Glynn. Kernel-based reinforcement learning in average-cost problems. Working paper, Stanford University. In preparation.
[OS00] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 2001. To appear.
[Rus97] J. Rust. Using randomization to break the curse of dimensionality. Econometrica, 65(3):487-516, 1997.
[SMSM00] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS 12, 2000.
[TR96] J. N. Tsitsiklis and B. Van Roy. Feature-based methods for large-scale dynamic programming. Machine Learning, 22:59-94, 1996.
[TR99] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. Automatica, 35(11):1799-1808, 1999.
[VRK00] V. R. Konda and J. N. Tsitsiklis. Actor-critic algorithms. In NIPS 12, 2000.
Consonant Recognition by Modular Construction of Large Phonemic Time-Delay Neural Networks

Alex Waibel
Carnegie-Mellon University, Pittsburgh, PA 15213, and ATR Interpreting Telephony Research Laboratories, Osaka, Japan

Abstract

In this paper (see footnote 1) we show that neural networks for speech recognition can be constructed in a modular fashion by exploiting the hidden structure of previously trained phonetic subcategory networks. The performance of resulting larger phonetic nets was found to be as good as the performance of the subcomponent nets by themselves. This approach avoids the excessive learning times that would be necessary to train larger networks and allows for incremental learning. Large time-delay neural networks constructed incrementally by applying these modular training techniques achieved a recognition performance of 96.0% for all consonants.

1. Introduction

Recently we have demonstrated that connectionist architectures capable of capturing some critical aspects of the dynamic nature of speech can achieve superior recognition performance for difficult but small phonemic discrimination tasks such as discrimination of the voiced consonants B, D and G [Waibel 89, Waibel 88a]. Encouraged by these results we wanted to explore the question, how we might expand on these models to make them useful for the design of speech recognition systems. A problem that emerges as we attempt to apply neural network models to the full speech recognition problem is the problem of scaling. Simply extending neural networks to ever larger structures and retraining them as one monolithic net quickly exceeds the capabilities of the fastest and largest supercomputers. The search complexity of finding good solutions in a huge space of possible network configurations also soon assumes unmanageable proportions. Moreover, having to decide on all possible classes for recognition ahead of time as well as collecting sufficient data to train such a large monolithic network is impractical to say the least. In an effort to extend our models from small recognition tasks to large scale speech recognition systems, we must therefore explore modularity and incremental learning as design strategies to break up a large learning task into smaller subtasks. Breaking up a large task into subtasks to be tackled by individual black boxes interconnected in ad hoc arrangements, on the other hand, would mean to abandon one of the most attractive aspects of connectionism: the ability to perform complex constraint satisfaction in a massively parallel and interconnected fashion, in view of an overall optimal performance goal. In this paper we demonstrate, based on a set of experiments aimed at phoneme recognition, that it is indeed possible to construct large neural networks incrementally by exploiting the hidden structure of smaller pretrained subcomponent networks.

Footnote 1: An extended version of this paper will also appear in the Proceedings of the 1989 International Conference on Acoustics, Speech and Signal Processing. Copyright: IEEE. Reprinted with permission.

2. Small Phonemic Classes by Time-Delay Neural Networks

In our previous work, we have proposed a Time-Delay Neural Network architecture (as shown on the left of Fig. 1 for B, D, G) as an approach to phoneme discrimination that achieves very high recognition scores [Waibel 89, Waibel 88a].
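As a rough illustration of the shared-weight, time-delayed connectivity of such a layer (our own sketch, not the original implementation; the array shapes and the tanh squashing function are assumptions):

    import numpy as np

    def tdnn_layer(x, W, b):
        """One time-delay layer: every output frame sees a short window of input
        frames, and the same weights W are reused at every time shift.
        x: (T, n_in) input frames; W: (delay, n_in, n_out); b: (n_out,)."""
        delay, _, n_out = W.shape
        T_out = x.shape[0] - delay + 1
        out = np.empty((T_out, n_out))
        for t in range(T_out):                   # slide the window over time
            window = x[t:t + delay]              # (delay, n_in) local context
            out[t] = np.tanh(np.tensordot(window, W, axes=([0, 1], [0, 1])) + b)
        return out

    # e.g. 16 mel scale coefficients over 15 frames, 8 hidden units, 3-frame delay:
    x = np.random.randn(15, 16)
    h1 = tdnn_layer(x, W=np.zeros((3, 16, 8)), b=np.zeros(8))   # -> shape (13, 8)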
Its multilayer architecture, its shift invariance and the time-delayed connections of its units all contributed to its performance by allowing the net to develop complex, non-linear decision surfaces and insensitivity to misalignments, and by incorporating contextual information into decision making (see [Waibel 89, Waibel 88a] for detailed analysis and discussion). It is trained by the back-propagation procedure [Rumelhart 86] using shared weights for different time-shifted positions of the net [Waibel 89, Waibel 88a]. In spirit it has similarities to other models recently proposed [Watrous 88, Tank 87]. This network, however, had only been trained for the voiced stops B, D, G, and we began our extensions by training similar networks for the other phonemic classes in our database.

Figure 1: The TDNN architecture: BDG-net (left), BDGPTK-net (right); inputs arrive at a 10 msec frame rate.

All phoneme tokens in our experiments were extracted using phonetic handlabels from a large vocabulary database of 5240 common Japanese words. Each word in the database was spoken in isolation by one male native Japanese speaker. All utterances were recorded in a sound proof booth and digitized at a 12 kHz sampling rate. The database was then split into a training set and a testing set of 2620 utterances each. A 150 msec range around a phoneme boundary was excised for each phoneme token and 16 mel scale filterbank coefficients computed every 10 msec [Waibel 89, Waibel 88a].
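A sketch of the kind of mel-scale filterbank front end described above; the frame length, windowing, and log compression are our assumptions, with only the 12 kHz rate, the 10 msec frame rate, and the 16 coefficients taken from the text.

    import numpy as np

    def melbank_features(signal, sr=12000, n_mels=16, frame=256, hop=120):
        """16 mel-scale filterbank coefficients every 10 msec (hop = 120 samples
        at 12 kHz); frame length and Hann windowing are assumed details."""
        mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
        imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
        # triangular filters spaced evenly on the mel scale up to Nyquist
        edges = imel(np.linspace(0.0, mel(sr / 2.0), n_mels + 2))
        bins = np.fft.rfftfreq(frame, d=1.0 / sr)
        fb = np.zeros((n_mels, len(bins)))
        for i in range(n_mels):
            lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
            fb[i] = np.clip(np.minimum((bins - lo) / (mid - lo),
                                       (hi - bins) / (hi - mid)), 0.0, None)
        frames = [signal[s:s + frame] * np.hanning(frame)
                  for s in range(0, len(signal) - frame + 1, hop)]
        power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
        return np.log(power @ fb.T + 1e-10)    # (n_frames, 16) log band energies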
An identical net (also with approximately 6000 connections2) can achieve discrimination among the voiceless stops ("P", "T" and "K"). To extend our networks to the recognition of all stops, i.e., the voiced and the unvoiced stops (B,D,G,P,T,K), a larger net is required. We have trained such a network for experimental purposes. To allow for the necessary number of features to develop we have given this net 20 units in the first hidden layer, 6 units in hidden layer 2 and 6 output units. On the right of Fig. 1 we show this net in actual operation with a "G" presented at its input. Eventually a high performance network was obtained that achieves 9S.3% correct recognition over a 1613token BDGPTK-test database, but it took inordinate amounts of learning to arrive at the trained net (IS days on a 4 processor Alliant!). Although going from voiced stops to all stops is only a modest increase in task size, about IS,OOO connections had to be trained. To make matters worse, not only the number of connections should be increased with task size, but in general the amount of training data required for good generalization of a larger net has to be increased as well. Naturally, there are practical limits to the size of a training database, and more training data translates into even more learning time. Learning is further complicated by the increased complexity of the higher dimensional weightspace in large nets as well as the limited preciSion of our simulators. Despite progress towards faster learning algorithms [Haffner 88, Fahlman 88], it is clear that we cannot hope for one single monolithic network to be trained within reasonable time as we 2Note. that these are connettions over which a back-propagation pass is performed during each iteration. Since many of them share the same weights, only a small fraction (about SOO) of them are actually free pararneten. 21 7 218 Waibel increase size to handle larger and larger tasks. Moreover, requiring that all classes be considered and samples of each class be presented during training, is undesirable for practical reasons as we contemplate the design of large neural systems. Alternative ways to modularly construct and incrementally train such large neural systems must therefore be explored. 3.1. Experiments with Modularity Four experiments were performed to explore methodologies for constructing phonetic neural nets from smaller component subnets. As a task we used again stop consonant recognition (BooPTK) although other tasks have recently been explored with similar success (BOO and MNsN) [Waibel 88c]. As in the previous section we used a large database of 5240 common Japanese words spoken in isolation from which the testing an training tokens for the voiced stops (the BOO-set) and for the voiceless stops (the PTKset) was extracted. Two separate TDNNs have been trained. On testing data the BOO-net used here performed 98.3% correct for the BDG-set and the PTK-net achieved 98.7% correct recognition for the PTK-set As a fIrst naive attempt we have now simply run a speech token from either set (i.e., B,D,G,P,T or K) through both a BOO-net and a PTK-net and selected the class with the highest activation from either net as the recognition result. As might have been expected (the component nets had only been trained for their respective classes), poor recognition performance (60.5%) resulted from the 6 class experiment. This is partially due to the inhibitory property of the TDNN that we have observed elsewhere [Waibel 89]. 
To combine the two networks more effectively, therefore, portions of the net had to be retrained. O""tpul L'yt' ... ???? ".t ~~Q.I.~. -=::::-::-: :___ ~ MIdden .?..... . ... .. I..,.., 1 lOG ? ............. . ...... . . . .. ?? ..... ........ . ............ . ~!!I!?III? ?? . .... i ?????????????? ... ? ???? I ????????? ?????? ????????? .?.........?? ...........?? .......... .. ?............... ~I::~:~.::::::: ~: . ~~ ??????????????? Figure 2. BDGPTK-net trained from hidden units from a Boo- and a PTK-net. We start by assuming that the fIrst hidden layer in either net already contains all the lower Consonant Recognition by Modular Construction level acoustic phonetic features we need for proper identification of the stops and freeze the connections from the input layer (the speech data) to the first hidden layer's 8 units in the BOO-net and the 8 units in the PTK-neL Back-propagation learning is then performed only on the connections between these 16 (= 2 X 8) units in hidden layer 1 and hidden layer 2 and between hidden layer 2 and the combined BooPTK-net's output. This network is shown in Fig.2 with a "G" token presented as input. Only the higher layer connections had to be retrained (for about one day) in this case and the resulting network achieved a recognition performance of 98.1 % over the testing data. Combination of the two subnets has therefore yielded a good net although a slight performance degradation compared to the subnets was observed. This degradation could be explained by the increased complexity of the task. but also by the inability of this net to develop lower level acoustic-phonetic features in hidden layer 1. Such features may in fact be needed for discrimination between the two stop classes. in addition to the withinclass features. In a third experiment. we therefore flrst train a separate fiNN to perform the voiced/unvoiced (V/UV) distinction between the Boo- and the PTK-task. The network has a very similar structure as the BOO-net. except that only four hidden units were used in hidden layer 1 and two in hidden layer 2 and at the output. This V/UV-net achieved better than 99% voiCed/unvoiced classification on the test data and its hidden units developed in the process are now used as additional features for the BooPTK-task. The connections from the input to the flrst hidden layer of the Boo-. the PTK- and the V/UV nets are frozen and only the connections that combine the 20 units in hidden layer 1 to the higher layers are retrained. Training of the V/UV-net and subsequent combination training took between one and two days. The resulting net was evaluated as before on our testing database and achieved a recognition score of 98.4% correct. i!~, ' " .....__....... . OutDut llyt' ~t.g'''lan '' Frtt -. Fr . . ; Freel , \ .. ~ ___~_~_ ',....... , MtddtnUl,." .:~: : :. : : . ? ? ? Figure 3. Combination of a BDG-net and a PTK-net using 4 additional units in hidden layer 1 as free "Connectionist Glue". In the previous experiment, good results could be obtained by adding units that we believed to be the useful class distinctive features that were missing in our second experiment. In a fourth experiment. we have now examined an approach that allows for 219 220 Waibel the network to be free to discover any additional features that might be useful to merge the two component networks. In stead of previously training a class distinctive network. we now add four units to hidden layer 1. 
whose connections to the input are free to learn any missing discriminatory features to supplement the 16 frozen BDG and PTK features. We call these units the "connectionist glue" that we apply to merge two distinct networks into a new combined net. This network is shown in Fig. 3. The hidden units of hidden layer 1 from the BDG-net are shown on the left and those from the PTK-net on the right. The connections from the moving input window to these units have been trained individually on BDG- and PTK-data, respectively, and - as before - remain fixed during combination learning. In the middle of hidden layer 1 we show the 4 free "glue" units. Combination learning now searches for an optimal combination of the existing BDG- and PTK-features and also supplements these by learning additional interclass discriminatory features. Combination retraining with "glue" required a two-day training run. Performance evaluation of this network over the BDGPTK test database yielded a recognition rate of 98.4%. In addition to the techniques described so far, it may be useful to free all connections in a large modularly constructed network for an additional small amount of fine tuning. This has been done for the BDGPTK-net shown in Fig. 3, yielding some additional performance improvements. Each iteration of the full network is indeed very slow, but convergence is reached after only a few additional tuning iterations. The resulting network finally achieved (over testing data) a recognition score of 98.6%.

3.2. Steps for the Design of Large Scale Neural Nets

Method                           bdg     ptk     bdgptk
Individual TDNNs                 98.3%   98.7%   -
TDNN: Max. Activation            -       -       60.5%
Retrain BDGPTK                   -       -       98.3%
Retrain Combined Higher Layers   -       -       98.1%
Retrain with V/UV-units          -       -       98.4%
Retrain with Glue                -       -       98.4%
All-Net Fine Tuning              -       -       98.6%

Table 3-1: From BDG to BDGPTK; Modular Scaling Methods.

Table 3-1 summarizes the major results from our experiments. In the first row it shows the recognition performance of the two initial TDNNs trained individually to perform the BDG- and the PTK-tasks, respectively. Underneath, we show the results from the various experiments described in the previous section. The results indicate that larger TDNNs can indeed be trained incrementally, without requiring excessive amounts of training and without loss in performance. The total incremental training time was between one third and one half of a full monolithically trained net, and the resulting networks appear to perform slightly better. Even more astonishingly, they appear to achieve performance as high as the subcomponent BDG- and PTK-nets alone. As a strategy for the efficient construction of larger networks we have found the following concepts to be extremely effective: modular, incremental learning; class distinctive learning; connectionist glue; partial and selective learning; and all-net fine tuning.

4. Recognition of all Consonants

The incremental learning techniques explored so far can now be applied to the design of networks capable of recognizing all consonants.

4.1. Network Architecture

Figure 4: Modular Construction of an All Consonant Network.

Our consonant TDNN (shown in Fig. 4) was constructed modularly from networks aimed at the consonant subcategories, i.e., the BDG-, PTK-, MNsN-, SShHZ-, TsCh- and the RWY-tasks.
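The frozen-features-plus-glue recipe carries over directly to this larger construction; the following sketch (reusing the hypothetical tdnn_layer from the earlier sketch) illustrates the combination scheme under our own naming assumptions.

    import numpy as np

    def combined_forward(x, frozen_stacks, glue, upper):
        """Forward pass of a modularly built net. Hidden layer 1 concatenates
        frozen subnetwork features (e.g. the 8 BDG and 8 PTK units) with free
        "glue" units; only the glue and upper-layer weights are retrained.
        All layer-1 stacks are assumed to use the same delay so that their
        output frame counts match for concatenation."""
        h1 = [tdnn_layer(x, W, b) for (W, b) in frozen_stacks]  # fixed features
        h1.append(tdnn_layer(x, glue["W"], glue["b"]))          # trainable glue
        h1 = np.concatenate(h1, axis=1)
        return tdnn_layer(h1, upper["W"], upper["b"])           # retrained layers

    # during combination learning only these parameters receive updates; the
    # frozen_stacks weights are copied from the pretrained subnets and held fixed:
    # trainable = [glue["W"], glue["b"], upper["W"], upper["b"]]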
Each of these nets had been trained before to discriminate between the consonants within each class. Hidden layers 1 and 2 were then extracted from these nets, i.e., their weights copied and frozen in a new combined consonant TDNN. In addition, an interclass discrimination net was trained that distinguishes between the consonant subclasses and thus hopefully provides missing featural information for interclass discrimination, much like the V/UV network described in the previous section. The structure of this network was very similar to other subcategory TDNNs, except that we have allowed for 20 units in hidden layer 1 and 6 hidden units (one for each coarse consonant class) in hidden layer 2. The weights leading into hidden layers 1 and 2 were then also copied from this interclass discrimination net into the consonant network and frozen. Three connections were then established to each of the 18 consonant output categories (B, D, G, P, T, K, M, N, sN, S, Sh, H, Z, Ch, Ts, R, W and Y): one to connect an output unit with the appropriate interclass discrimination unit in hidden layer 2, one with the appropriate intraclass discrimination unit from hidden layer 2 of the corresponding subcategory net, and one with the always activated threshold unit (not shown in Fig. 4). The overall network architecture is shown in Fig. 4 for the case of an incoming test token (e.g., a "G"). For simplicity, Fig. 4 shows only the hidden layers from the BDG-, PTK-, SShHZ- and the inter-class discrimination nets. At the output, only the two connections leading to the correctly activated "G"-output unit are shown. Units and connections pertaining to the other subcategories as well as connections leading to the 17 other output units are omitted for clarity in Fig. 4. All free weights were initialized with small random weights and then trained.

4.2. Results

Task                  Recognition Rate (%)
bdg                   98.6
ptk                   98.7
mnN                   96.6
sshhz                 99.3
chts                  100.0
rwy                   99.9
cons. class           96.7
All consonant TDNN    95.0
All-Net Fine Tuning   95.9

Table 4-1: Consonant Recognition Performance Results.

Table 4-1 summarizes our results for the consonant recognition task. In the first 6 rows the recognition results (measured over the available test data in their respective subclasses) are given. The entry "cons. class" shows the performance of the interclass discrimination net in identifying the coarse phonemic subclass of an unknown token. 96.7% of all tokens were correctly categorized into one of the six consonant subclasses. After completion of combination learning the entire net was evaluated over 3061 consonant test tokens, and achieved a 95.0% recognition accuracy. All-net fine tuning was then performed by freeing up all connections in the network to allow for small additional adjustments in the interest of better overall performance. After completion of all-net fine tuning, the performance of the network then improved to 96.0% correct. To put these recognition results into perspective, we have compared these results with several other competing recognition techniques and found that our incrementally trained net compares favorably [Waibel 88b].

5. Conclusion

The serious problems associated with scaling smaller phonemic subcomponent networks to larger phonemic tasks are overcome by careful modular design.
Modular design is achieved by several important strategies: selective and incremental learning of subcomponent tasks, exploitation of previously learned hidden structure, the application of connectionist glue or class distinctive features to allow separate networks to "grow" together, partial training of portions of a larger net and, finally, all-net fine tuning for making small additional adjustments in a large net. Our findings suggest that judicious application of a number of connectionist design techniques could lead to the successful design of high performance large scale connectionist speech recognition systems.

References

[Fahlman 88] Fahlman, S.E. An Empirical Study of Learning Speed in Back-Propagation Networks. Technical Report CMU-CS-88-162, Carnegie-Mellon University, June, 1988.

[Haffner 88] Haffner, P., Waibel, A. and Shikano, K. Fast Back-Propagation Learning Methods for Neural Networks in Speech. In Proceedings of the Fall Meeting of the Acoustical Society of Japan. October, 1988.

[Rumelhart 86] Rumelhart, D.E., Hinton, G.E. and Williams, R.J. Learning Internal Representations by Error Propagation. In McClelland, J.L. and Rumelhart, D.E. (editors), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, chapter 8, pages 318-362. MIT Press, Cambridge, MA, 1986.

[Tank 87] Tank, D.W. and Hopfield, J.J. Neural Computation by Concentrating Information in Time. In Proceedings of the National Academy of Sciences, pages 1896-1900. April, 1987.

[Waibel 88a] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. and Lang, K. Phoneme Recognition: Neural Networks vs. Hidden Markov Models. In IEEE International Conference on Acoustics, Speech, and Signal Processing, paper S3.3. April, 1988.

[Waibel 88b] Waibel, A., Sawai, H. and Shikano, K. Modularity and Scaling in Large Phonemic Neural Networks. Technical Report TR-I-0034, ATR Interpreting Telephony Research Laboratories, July, 1988.

[Waibel 88c] Waibel, A. Connectionist Glue: Modular Design of Neural Speech Systems. In Touretzky, D.S., Hinton, G.E. and Sejnowski, T.J. (editors), Proceedings of the 1988 Connectionist Models Summer School. Morgan Kaufmann, 1988.

[Waibel 89] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. and Lang, K. Phoneme Recognition Using Time-Delay Neural Networks. IEEE Transactions on Acoustics, Speech and Signal Processing, March, 1989.

[Watrous 88] Watrous, R. Speech Recognition Using Connectionist Networks. PhD thesis, University of Pennsylvania, October, 1988.
Feature Selection for SVMs

J. Weston†, S. Mukherjee††, O. Chapelle*, M. Pontil††, T. Poggio††, V. Vapnik*,†††
† Barnhill BioInformatics.com, Savannah, Georgia, USA.
†† CBCL MIT, Cambridge, Massachusetts, USA.
* AT&T Research Laboratories, Red Bank, USA.
††† Royal Holloway, University of London, Egham, Surrey, UK.

Abstract

We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data.

1 Introduction

In many supervised learning problems feature selection is important for a variety of reasons: generalization performance, running time requirements, and constraints and interpretational issues imposed by the problem itself. In classification problems we are given $\ell$ data points $x_i \in \mathbb{R}^n$ labeled $y \in \pm 1$ drawn i.i.d. from a probability distribution $P(x, y)$. We would like to select a subset of features while preserving or improving the discriminative ability of a classifier. As a brute force search of all possible features is a combinatorial problem one needs to take into account both the quality of solution and the computational expense of any given algorithm.

Support vector machines (SVMs) have been extensively used as a classification tool with a great deal of success from object recognition [5, 11] to classification of cancer morphologies [10] and a variety of other areas, see e.g. [13]. In this article we introduce feature selection algorithms for SVMs. The methods are based on minimizing generalization bounds via gradient descent and are feasible to compute. This allows several new possibilities: one can speed up time critical applications (e.g. object recognition) and one can perform feature discovery (e.g. cancer diagnosis). We also show how SVMs can perform badly in the situation of many irrelevant features, a problem which is remedied by using our feature selection approach.

The article is organized as follows. In section 2 we describe the feature selection problem, in section 3 we review SVMs and some of their generalization bounds, and in section 4 we introduce the new SVM feature selection method. Section 5 then describes results on toy and real life data indicating the usefulness of our approach.

2 The Feature Selection problem

The feature selection problem can be addressed in the following two ways: (1) given a fixed $m \ll n$, find the $m$ features that give the smallest expected generalization error; or (2) given a maximum allowable generalization error $\gamma$, find the smallest $m$. In both of these problems the expected generalization error is of course unknown, and thus must be estimated. In this article we will consider problem (1). Note that choices of $m$ in problem (1) can usually be reparameterized as choices of $\gamma$ in problem (2).

Problem (1) is formulated as follows. Given a fixed set of functions $y = f(x, \alpha)$ we wish to find a preprocessing of the data $x \mapsto (x * \sigma)$, $\sigma \in \{0,1\}^n$, and the parameters $\alpha$ of the function $f$ that give the minimum value of

$$\tau(\sigma, \alpha) = \int V(y, f((x * \sigma), \alpha))\, dP(x, y) \qquad (1)$$

subject to $\|\sigma\|_0 = m$, where $P(x, y)$ is unknown, $x * \sigma = (x_1 \sigma_1, \ldots, x_n \sigma_n)$ denotes an elementwise product, $V(\cdot, \cdot)$ is a loss functional and $\|\cdot\|_0$ is the 0-norm.
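To make the notation concrete, here is a tiny illustration of the elementwise mask $x * \sigma$ and the 0-norm that counts selected features (the numeric values below are made up for illustration):

```python
# The preprocessing x -> (x * sigma) is an elementwise mask;
# ||sigma||_0 counts its nonzero entries.
import numpy as np

x = np.array([0.5, -1.2, 3.0, 0.7])
sigma = np.array([1.0, 0.0, 1.0, 0.0])   # keep features 1 and 3
print(x * sigma)                         # [ 0.5 -0.   3.   0. ]
print(np.count_nonzero(sigma))           # ||sigma||_0 = 2
```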
In the literature one distinguishes between two types of method to solve this problem: the so-called filter and wrapper methods [2]. Filter methods are defined as a preprocessing step to induction that can remove irrelevant attributes before induction occurs, and thus wish to be valid for any set of functions $f(x, \alpha)$. For example one popular filter method is to use Pearson correlation coefficients. The wrapper method, on the other hand, is defined as a search through the space of feature subsets using the estimated accuracy from an induction algorithm as a measure of goodness of a particular feature subset. Thus, one approximates $\tau(\sigma, \alpha)$ by minimizing

$$\tau_{\mathrm{wrap}}(\sigma, \alpha) = \min_{\sigma} \tau_{\mathrm{alg}}(\sigma) \qquad (2)$$

subject to $\sigma \in \{0,1\}^n$, where $\tau_{\mathrm{alg}}$ is a learning algorithm trained on data preprocessed with fixed $\sigma$. Wrapper methods can provide more accurate solutions than filter methods [9], but in general are more computationally expensive since the induction algorithm $\tau_{\mathrm{alg}}$ must be evaluated over each feature set (vector $\sigma$) considered, typically using performance on a hold out set as a measure of goodness of fit.

In this article we introduce a feature selection algorithm for SVMs that takes advantage of the performance increase of wrapper methods whilst avoiding their computational complexity. Note, some previous work on feature selection for SVMs does exist; however, results have been limited to linear kernels [3, 7] or linear probabilistic models [8]. Our approach can be applied to nonlinear problems. In order to describe this algorithm, we first review the SVM method and some of its properties.

3 Support Vector Learning

Support Vector Machines [13] realize the following idea: they map $x \in \mathbb{R}^n$ into a high (possibly infinite) dimensional space and construct an optimal hyperplane in this space. Different mappings $x \mapsto \Phi(x) \in \mathcal{H}$ construct different SVMs. The mapping $\Phi(\cdot)$ is performed by a kernel function $K(\cdot, \cdot)$ which defines an inner product in $\mathcal{H}$. The decision function given by an SVM is thus:

$$f(x) = w \cdot \Phi(x) + b = \sum_i \alpha_i^0 y_i K(x_i, x) + b. \qquad (3)$$

The optimal hyperplane is the one with the maximal distance (in $\mathcal{H}$ space) to the closest image $\Phi(x_i)$ from the training data (called the maximal margin). This reduces to maximizing the following optimization problem:

$$W^2(\alpha) = \sum_{i=1}^{\ell} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{\ell} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \qquad (4)$$

under constraints $\sum_{i=1}^{\ell} \alpha_i y_i = 0$ and $\alpha_i \ge 0$, $i = 1, \ldots, \ell$. For the non-separable case one can quadratically penalize errors with the modified kernel $K \leftarrow K + \frac{1}{\lambda} I$, where $I$ is the identity matrix and $\lambda$ a constant penalizing the training errors (see [4] for reasons for this choice).

Suppose that the size of the maximal margin is $M$ and the images $\Phi(x_1), \ldots, \Phi(x_\ell)$ of the training vectors are within a sphere of radius $R$. Then the following holds true [13].

Theorem 1. If images of training data of size $\ell$ belonging to a sphere of size $R$ are separable with the corresponding margin $M$, then the expectation of the error probability has the bound

$$E P_{\mathrm{err}} \le \frac{1}{\ell} E\left\{ \frac{R^2}{M^2} \right\} = \frac{1}{\ell} E\left\{ R^2 W^2(\alpha^0) \right\}, \qquad (5)$$

where the expectation is taken over sets of training data of size $\ell$.

This theorem justifies the idea that the performance depends on the ratio $E\{R^2/M^2\}$ and not simply on the large margin $M$, where $R$ is controlled by the mapping function $\Phi(\cdot)$. Other bounds also exist; in particular, Vapnik and Chapelle [4] derived an estimate using the concept of the span of support vectors.

Theorem 2. Under the assumption that the set of support vectors does not change when removing the example $p$,

$$E p_{\mathrm{err}}^{\ell-1} \le \frac{1}{\ell} E\left\{ \sum_{p=1}^{\ell} \Psi\left( \frac{\alpha_p^0}{(K_{SV}^{-1})_{pp}} - 1 \right) \right\} \qquad (6)$$

where $\Psi$ is the step function, $K_{SV}$ is the matrix of dot products between support vectors, $p_{\mathrm{err}}^{\ell-1}$ is the probability of test error for the machine trained on a sample of size $\ell - 1$, and the expectations are taken over the random choice of the sample.
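Before describing the feature selection method, it may help to see how the quantity in Theorem 1 can be estimated numerically. The sketch below fits an SVM on a precomputed kernel and evaluates $W^2(\alpha^0)$ from equation (4) at the solution; for brevity, $R^2$ is approximated by the squared radius of the data around its centroid in feature space rather than by solving a QP, and scikit-learn supplies the solver. Both of these are assumptions of this sketch, not choices made by the paper.

```python
# Estimate R^2 W^2 (the quantity in Theorem 1) for a given kernel matrix K.
# W^2 is equation (4) at the SVM solution; R^2 uses a centroid approximation.
import numpy as np
from sklearn.svm import SVC

def r2w2(K, y, C=10.0):
    svm = SVC(C=C, kernel="precomputed").fit(K, y)
    a = np.abs(svm.dual_coef_[0])                 # alpha_i^0 on the support vectors
    ys = np.asarray(y)[svm.support_]
    Ksv = K[np.ix_(svm.support_, svm.support_)]   # dot products between SVs
    W2 = a.sum() - 0.5 * (a * ys) @ Ksv @ (a * ys)
    # squared distance of each point from the feature-space centroid; take the max
    R2 = (np.diag(K) - 2 * K.mean(axis=1) + K.mean()).max()
    return R2 * W2
```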
4 Feature Selection for SVMs

In the problem of feature selection we wish to minimize equation (1) over $\sigma$ and $\alpha$. The support vector method attempts to find the function from the set $f(x, w, b) = w \cdot \Phi(x) + b$ that minimizes generalization error. We first enlarge the set of functions considered by the algorithm to $f(x, w, b, \sigma) = w \cdot \Phi(x * \sigma) + b$. Note that the mapping $\Phi_\sigma(x) = \Phi(x * \sigma)$ can be represented by choosing the kernel function $K_\sigma$ in equations (3) and (4):

$$K_\sigma(x, y) = K((x * \sigma), (y * \sigma)) = (\Phi_\sigma(x) \cdot \Phi_\sigma(y)) \qquad (7)$$

for any $K$. Thus for these kernels the bounds in Theorems (1) and (2) still hold. Hence, to minimize $\tau(\sigma, \alpha)$ over $\alpha$ and $\sigma$ we minimize the wrapper functional $\tau_{\mathrm{wrap}}$ in equation (2), where $\tau_{\mathrm{alg}}$ is given by the equations (5) or (6), choosing a fixed value of $\sigma$ implemented by the kernel (7). Using equation (5) one minimizes over $\sigma$:

$$R^2 W^2(\sigma) = R^2(\sigma) W^2(\alpha^0, \sigma), \qquad (8)$$

where the radius $R$ for kernel $K_\sigma$ can be computed by maximizing (see, e.g. [13]):

$$R^2(\sigma) = \max_{\beta} \sum_i \beta_i K_\sigma(x_i, x_i) - \sum_{i,j} \beta_i \beta_j K_\sigma(x_i, x_j) \qquad (9)$$

subject to $\sum_i \beta_i = 1$, $\beta_i \ge 0$, $i = 1, \ldots, \ell$, and $W^2(\alpha^0, \sigma)$ is defined by the maximum of functional (4) using kernel (7). In a similar way, one can minimize the span bound over $\sigma$ instead of equation (8).

Finding the minimum of $R^2 W^2$ over $\sigma$ requires searching over all possible subsets of $n$ features, which is a combinatorial problem. To avoid this problem classical methods of search include greedily adding or removing features (forward or backward selection) and hill climbing. All of these methods are expensive to compute if $n$ is large. As an alternative to these approaches we suggest the following method: approximate the binary valued vector $\sigma \in \{0,1\}^n$ with a real valued vector $\sigma \in \mathbb{R}^n$. Then, to find the optimum value of $\sigma$ one can minimize $R^2 W^2$, or some other differentiable criterion, by gradient descent. As explained in [4] the derivative of our criterion is:

$$\frac{\partial R^2 W^2(\sigma)}{\partial \sigma_k} = R^2(\sigma) \frac{\partial W^2(\alpha^0, \sigma)}{\partial \sigma_k} + W^2(\alpha^0, \sigma) \frac{\partial R^2(\sigma)}{\partial \sigma_k} \qquad (10)$$

$$\frac{\partial W^2(\alpha^0, \sigma)}{\partial \sigma_k} = -\frac{1}{2} \sum_{i,j} \alpha_i^0 \alpha_j^0 y_i y_j \frac{\partial K_\sigma(x_i, x_j)}{\partial \sigma_k} \qquad (11)$$

$$\frac{\partial R^2(\sigma)}{\partial \sigma_k} = \sum_i \beta_i^0 \frac{\partial K_\sigma(x_i, x_i)}{\partial \sigma_k} - \sum_{i,j} \beta_i^0 \beta_j^0 \frac{\partial K_\sigma(x_i, x_j)}{\partial \sigma_k} \qquad (12)$$

We estimate the minimum of $\tau(\sigma, \alpha)$ by minimizing equation (8) in the space $\sigma \in \mathbb{R}^n$ using the gradients (10), with the following extra term which approximates integer programming:

$$R^2 W^2(\sigma) + \lambda \sum_i (\sigma_i)^p \qquad (13)$$

subject to $\sum_i \sigma_i = m$, $\sigma_i \ge 0$, $i = 1, \ldots, n$. For large enough $\lambda$, as $p \to 0$, only $m$ elements of $\sigma$ will be nonzero, approximating the optimization problem $\tau(\sigma, \alpha)$.

One can further simplify computations by considering a stepwise approximation procedure to find $m$ features. To do this one can minimize $R^2 W^2(\sigma)$ with $\sigma$ unconstrained. One then sets the $q \ll n$ smallest values of $\sigma$ to zero, and repeats the minimization until only $m$ nonzero elements of $\sigma$ remain. This can mean repeatedly training a SVM just a few times, which can be fast.
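The sketch below illustrates a stepwise wrapper of this kind using the `r2w2` helper from section 3. For brevity it replaces the gradient step with the backward-selection variant mentioned above: at each round it zeroes whichever remaining feature's removal gives the smallest $R^2 W^2$. This is a simplification of the paper's procedure, not its exact algorithm; the RBF base kernel and its width are also assumptions.

```python
# Stepwise feature elimination driven by the R^2 W^2 criterion.
# masked_rbf implements the kernel of equation (7) for an RBF base kernel.
import numpy as np

def masked_rbf(X, sigma, gamma=1.0):
    Z = X * sigma                                      # x * sigma, elementwise
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def select_features(X, y, m):
    sigma = np.ones(X.shape[1])
    while int(sigma.sum()) > m:
        active = np.flatnonzero(sigma)
        scores = []
        for k in active:                               # try dropping each feature
            trial = sigma.copy()
            trial[k] = 0.0
            scores.append(r2w2(masked_rbf(X, trial), y))
        sigma[active[int(np.argmin(scores))]] = 0.0    # keep the best removal
    return np.flatnonzero(sigma)
```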
5 Experiments

5.1 Toy data

We compared standard SVMs, our feature selection algorithms and three classical filter methods to select features followed by SVM training. The three filter methods chose the $m$ largest features according to: Pearson correlation coefficients, the Fisher criterion score¹, and the Kolmogorov-Smirnov test². The Pearson coefficients and Fisher criterion cannot model nonlinear dependencies.

In the two following artificial datasets our objective was to assess the ability of the algorithm to select a small number of target features in the presence of irrelevant and redundant features.

¹ $F(r) = \left| \frac{\mu_r^+ - \mu_r^-}{\sigma_r^{+2} + \sigma_r^{-2}} \right|$, where $\mu_r^{\pm}$ is the mean value for the $r$-th feature in the positive and negative classes and $\sigma_r^{\pm 2}$ is the standard deviation.

² $KS_{\mathrm{tst}}(r) = \sqrt{\ell}\, \sup \left( \hat{P}\{X \le f_r\} - \hat{P}\{X \le f_r, y_r = 1\} \right)$, where $f_r$ denotes the $r$-th feature from each training example, and $\hat{P}$ is the corresponding empirical distribution.

Linear problem. Six dimensions of 202 were relevant. The probability of $y = 1$ or $-1$ was equal. The first three features $\{x_1, x_2, x_3\}$ were drawn as $x_i = y N(i, 1)$ and the second three features $\{x_4, x_5, x_6\}$ were drawn as $x_i = N(0, 1)$ with a probability of 0.7; otherwise the first three were drawn as $x_i = N(0, 1)$ and the second three as $x_i = y N(i - 3, 1)$. The remaining features are noise, $x_i = N(0, 20)$, $i = 7, \ldots, 202$.

Nonlinear problem. Two dimensions of 52 were relevant. The probability of $y = 1$ or $-1$ was equal. The data are drawn from the following: if $y = -1$ then $\{x_1, x_2\}$ are drawn from $N(\mu_1, \Sigma)$ or $N(\mu_2, \Sigma)$ with equal probability, $\mu_1 = \{-\frac{3}{4}, -3\}$ and $\mu_2 = \{\frac{3}{4}, 3\}$ and $\Sigma = I$; if $y = 1$ then $\{x_1, x_2\}$ are drawn again from two normal distributions with equal probability, with $\mu_1 = \{3, -3\}$ and $\mu_2 = \{-3, 3\}$ and the same $\Sigma$ as before. The rest of the features are noise, $x_i = N(0, 20)$, $i = 3, \ldots, 52$.

In the linear problem the first six features have redundancy and the rest of the features are irrelevant. In the nonlinear problem all but the first two features are irrelevant.

We used a linear SVM for the linear problem and a second order polynomial kernel for the nonlinear problem. For the filter methods and the SVM with feature selection we selected the 2 best features. The results are shown in Figure 1 for various training set sizes, taking the average test error on 500 samples over 30 runs of each training set size. The Fisher score (not shown in graphs due to space constraints) performed almost identically to correlation coefficients. In both problems standard SVMs perform poorly: in the linear example using $\ell = 500$ points one obtains a test error of 13% for SVMs, which should be compared to a test error of 3% with $\ell = 50$ using our methods. Our SVM feature selection methods also outperformed the filter methods, with forward selection being marginally better than gradient descent. In the nonlinear problem, among the filter methods only the Kolmogorov-Smirnov test improved performance over standard SVMs.

[Figure: two panels of test error versus number of training points; legend: Span-Bound & Forward Selection; R²W²-Bound & Gradient; Standard SVMs; Correlation Coefficients; Kolmogorov-Smirnov Test.]

Figure 1: A comparison of feature selection methods on (a) a linear problem and (b) a nonlinear problem, both with many irrelevant features. The x-axis is the number of training points, and the y-axis the test error as a fraction of test points.
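The linear toy problem can be reproduced with a short generator; a minimal sketch under the stated distributional assumptions (the text writes $N(0, 20)$ for the noise without saying whether 20 is a variance or a standard deviation; the sketch reads it as a standard deviation):

```python
# Generator for the linear toy problem of section 5.1:
# six relevant features out of 202, the rest broad Gaussian noise.
import numpy as np

def linear_toy(l, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.choice([-1.0, 1.0], size=l)
    X = rng.normal(0.0, 20.0, size=(l, 202))   # noise features, N(0, 20)
    means = np.array([1.0, 2.0, 3.0])          # x_i = y N(i, 1) for i = 1..3
    for j in range(l):
        if rng.random() < 0.7:
            X[j, 0:3] = y[j] * rng.normal(means, 1.0)
            X[j, 3:6] = rng.normal(0.0, 1.0, size=3)
        else:
            X[j, 0:3] = rng.normal(0.0, 1.0, size=3)
            X[j, 3:6] = y[j] * rng.normal(means, 1.0)  # x_i = y N(i-3, 1)
    return X, y
```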
5.2 Real-life data

For the following problems we compared minimizing $R^2 W^2$ via gradient descent to the Fisher criterion score.

Face detection. The face detection experiments described in this section are for the system introduced in [12, 5]. The training set consisted of 2,429 positive images of frontal faces of size 19x19 and 13,229 negative images not containing faces. The test set consisted of 105 positive images and 2,000,000 negative images. A wavelet representation of these images [5] was used, which resulted in 1,740 coefficients for each image. Performance of the system using all coefficients, 725 coefficients, and 120 coefficients is shown in the ROC curve in Figure 2a. The best results were achieved using all features; however, $R^2 W^2$ outperformed the Fisher score. In this case feature selection was not useful for eliminating irrelevant features, but one could obtain a solution with comparable performance but reduced complexity, which could be important for time critical applications.

Pedestrian detection. The pedestrian detection experiments described in this section are for the system introduced in [11]. The training set consisted of 924 positive images of people of size 128x64 and 10,044 negative images not containing pedestrians. The test set consisted of 124 positive images and 800,000 negative images. A wavelet representation of these images [5, 11] was used, which resulted in 1,326 coefficients for each image. Performance of the system using all coefficients and 120 coefficients is shown in the ROC curve in Figure 2b. The results showed the same trends that were observed in the face recognition problem.

[Figure: two ROC panels of detection rate versus false positive rate.]

Figure 2: The solid line is using all features, the solid line with a circle is our feature selection method (minimizing $R^2 W^2$ by gradient descent) and the dotted line is the Fisher score. (a) The top ROC curves are for 725 features and the bottom one for 120 features for face detection. (b) ROC curves using all features and 120 features for pedestrian detection.

Cancer morphology classification. For DNA microarray data analysis one needs to determine the relevant genes in discrimination as well as discriminate accurately. We look at two leukemia discrimination problems [6, 10] and a colon cancer problem [1] (see also [7] for a treatment of both of these problems). The first problem was classifying myeloid and lymphoblastic leukemias based on the expression of 7129 genes. The training set consists of 38 examples and the test set of 34 examples. Using all genes a linear SVM makes 1 error on the test set. Using 20 genes 0 errors are made for $R^2 W^2$ and 3 errors are made using the Fisher score. Using 5 genes 1 error is made for $R^2 W^2$ and 5 errors are made for the Fisher score. The method of [6] performs comparably to the Fisher score. The second problem was discriminating B versus T cells for lymphoblastic cells [6]. Standard linear SVMs make 1 error for this problem. Using 5 genes 0 errors are made for $R^2 W^2$ and 3 errors are made using the Fisher score.

In the colon cancer problem [1], 62 tissue samples probed by oligonucleotide arrays contain 22 normal and 40 colon cancer tissues that must be discriminated based upon the expression of 2000 genes. Splitting the data into a training set of 50 and a test set of 12 in 50 separate trials we obtained a test error of 13% for standard linear SVMs. Taking 15 genes for each feature selection method we obtained 12.8% for $R^2 W^2$, 17.0% for Pearson correlation coefficients, 19.3% for the Fisher score and 19.2% for the Kolmogorov-Smirnov test. Our method is only worse than the best filter method in 8 of the 50 trials.
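The colon-cancer protocol above can be sketched as a short evaluation loop. This is a sketch under stated assumptions: `select_features` is the simplified routine from section 4, the 62 x 2000 expression matrix `X` and labels `y` in {-1, +1} are assumed to be loaded already from the Alon et al. data, and scikit-learn's linear SVM stands in for the paper's solver.

```python
# 50 random splits into 50 training / 12 test samples; feature selection
# is refit on each training split, and the mean test error is reported.
import numpy as np
from sklearn.svm import SVC

def colon_error(X, y, m=15, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(trials):
        perm = rng.permutation(len(y))
        tr, te = perm[:50], perm[50:]
        feats = select_features(X[tr], y[tr], m)   # select m genes on train only
        clf = SVC(kernel="linear").fit(X[tr][:, feats], y[tr])
        errors.append(np.mean(clf.predict(X[te][:, feats]) != y[te]))
    return float(np.mean(errors))
```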
6 Conclusion

In this article we have introduced a method to perform feature selection for SVMs. This method is computationally feasible for high dimensional datasets compared to existing wrapper methods, and experiments on a variety of toy and real datasets show superior performance to the filter methods tried. This method, amongst other applications, speeds up SVMs for time critical applications (e.g. pedestrian detection), and makes possible feature discovery (e.g. gene discovery). Secondly, in simple experiments we showed that SVMs can indeed suffer in high dimensional spaces where many features are irrelevant. Our method provides one way to circumvent this naturally occurring, complex problem.

References

[1] U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack, and A. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon cancer tissues probed by oligonucleotide arrays. Cell Biology, 96:6745-6750, 1999.

[2] A. Blum and P. Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97:245-271, 1997.

[3] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In Proc. 13th International Conference on Machine Learning, pages 82-90, San Francisco, CA, 1998.

[4] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing kernel parameters for support vector machines. Machine Learning, 2000.

[5] T. Evgeniou, M. Pontil, C. Papageorgiou, and T. Poggio. Image representations for object detection using kernel classifiers. In Asian Conference on Computer Vision, 2000.

[6] T. Golub, D. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. Mesirov, H. Coller, M. Loh, J. Downing, M. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286:531-537, 1999.

[7] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, 2000.

[8] T. Jebara and T. Jaakkola. Feature selection and dualities in maximum entropy discrimination. In Uncertainty in Artificial Intelligence, 2000.

[9] R. Kohavi. Wrappers for feature subset selection. Artificial Intelligence, special issue on relevance, 1997.

[10] S. Mukherjee, P. Tamayo, D. Slonim, A. Verri, T. Golub, J. Mesirov, and T. Poggio. Support vector machine classification of microarray data. AI Memo 1677, Massachusetts Institute of Technology, 1999.

[11] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio. Pedestrian detection using wavelet templates. In Proc. Computer Vision and Pattern Recognition, pages 193-199, Puerto Rico, June 16-20, 1997.

[12] C. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection. In International Conference on Computer Vision, Bombay, India, January 1998.

[13] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli

Kevin A. Archie
Neuroscience Program
University of Southern California
Los Angeles, CA 90089-2520

Bartlett W. Mel
Department of Biomedical Engineering
University of Southern California
Los Angeles, CA 90089-1451

Abstract

Neurons in area V4 have relatively large receptive fields (RFs), so multiple visual features are simultaneously "seen" by these cells. Recordings from single V4 neurons suggest that simultaneously presented stimuli compete to set the output firing rate, and that attention acts to isolate individual features by biasing the competition in favor of the attended object. We propose that both stimulus competition and attentional biasing arise from the spatial segregation of afferent synapses onto different regions of the excitable dendritic tree of V4 neurons. The pattern of feedforward, stimulus-driven inputs follows from a Hebbian rule: excitatory afferents with similar RFs tend to group together on the dendritic tree, avoiding randomly located inhibitory inputs with similar RFs. The same principle guides the formation of inputs that mediate attentional modulation. Using both biophysically detailed compartmental models and simplified models of computation in single neurons, we demonstrate that such an architecture could account for the response properties and attentional modulation of V4 neurons. Our results suggest an important role for nonlinear dendritic conductances in extrastriate cortical processing.

1 Introduction

Neurons in higher regions of visual cortex have relatively large receptive fields (RFs): for example, neurons representing the central visual field in macaque area V4 have RFs up to 5° across (Desimone & Schein, 1987). Such large RFs often contain multiple potentially significant features in a single image, leading to the question: how can these neurons extract information about individual objects? Moran and Desimone (1985) showed that when multiple stimuli are present within the RF of a V4 neuron, attention effectively reduces the RF extent of the cell, so that only the attended feature contributes to its output. Desimone (1992) noted that one way this modulation could be performed is to assign input from each RF subregion to a single dendritic branch of the V4 neuron; modulatory inhibition could then "turn off" branches, so that subregions of the RF could be independently gated.

Recent experiments have revealed a more subtle picture regarding both the interactions between simultaneously presented stimuli and the effects of attentional modulation. Recordings from individual V4 neurons have shown that simultaneously presented stimuli compete to set the output firing rate (Luck, Chelazzi, Hillyard, & Desimone, 1997; Reynolds, Chelazzi, & Desimone, 1999). For example, consider a cell for which stimulus S, presented by itself, produces a strong response consisting of s spikes, and stimulus W produces a weak response of w spikes. Presenting the two stimuli S and W together generally produces an output less than s but more than w. Note that the "weak" stimulus W is excitatory for the cell when presented alone, since it increases the response from 0 to w, but effectively inhibitory when presented together with "strong" stimulus S. Attention serves to bias the competition, so that attending to S would increase the output of the V4 cell (moving it closer to s), while attending to W would decrease the output (moving it closer to w).
To describe their results, Reynolds et al. (1999) proposed a mathematical model in which individual stimuli both excite and inhibit the V4 neuron. The sum of excitatory and inhibitory input is acted on by divisive normalization proportional to the total strength of input to produce a competitive interaction between simultaneous stimuli. Attention is then implemented as a multiplicative gain on both excitatory and inhibitory input arising from the attended stimulus. In previous work using biophysically detailed compartmental models of neurons with active dendrites, we observed that increasing the stimulus contrast produced a multiplicative scaling of the tuning curve of a complex cell (Archie & Mel, 2000, Fig. 6g), suggesting an implicit normalization. In the present work, we test the following hypotheses: (1) segregation of input onto different branches of an excitable dendritic tree could produce competitive interactions between simultaneously presented stimuli, and (2) modulatory synapses on active dendrites could be a general mechanism for multiplicative modulation of inputs.

2 Methods

We used both biophysically detailed compartmental models and a simplified model of a single cortical neuron to test whether competition and attentional biasing could arise from interactions between excitatory and inhibitory inputs in a nonlinear dendritic tree. An overview of the input segregation common to both classes of model is shown in Fig. 1.

Biophysically detailed compartmental model. The detailed model included 4 layers of processing: (1) an LGN cell layer with center-surround RFs; (2) a virtual layer of simple-cell-like subunits which were drawn from elongated rows of ON- and OFF-center LGN cells - virtual in that the subunit computations were actually carried out in the dendrites of the overlying complex cells, following Mel, Ruderman, and Archie (1998) and Archie and Mel (2000); (3) an 8 x 8 grid of complex cells, each of which contained 4 subunits with progressively shifted positions/phases; and (4) a single V4 cell in the top layer, which received input from the complex cell layer. Layers 3 and 4 are shown in Fig. 2.

The LGN was modeled as 4 arrays (ON- and OFF-center, left and right eye) of difference-of-Gaussian spatial filters, as described in Archie and Mel (2000). Responses of the cortical cells were calculated using the NEURON simulation environment (Hines & Carnevale, 1997). Complex cells contained 4 basal branches, each 1 µm in diameter and 150 µm long; one apical branch 5 µm in diameter and 250 µm long; a spherical soma 20 µm in diameter; and an axon 0.5 µm in diameter and 1000 µm long with an initial segment 1 µm in diameter and 20 µm long. Hodgkin-Huxley-style Na+ and K+ conductances were present in the membrane of the entire cell, with 10-fold higher density in the axon (gNa = 0.120 S/cm², gK = 0.100 S/cm²) than in the soma and dendrites (gNa = 0.012 S/cm², gK = 0.010 S/cm²). The V4 cell was modeled with the same parameters as the complex cells, but with 8 basal branches instead of 4.

Figure 1: Segregation of excitatory and inhibitory inputs. Two sources of stimulus-driven input are shown, S1 and S2, each corresponding to an independently attendable subregion of the RF of the V4 cell. Note that each source of stimulus-driven input makes both excitatory projections to a specific branch on the V4 cell, and inhibitory projections (through an interneuron) to other branches. Similarly, the modulatory inputs A1 and A2 each direct attention to a particular branch; for example, A1 adds excitatory modulation to the branch corresponding to the S1 RF subregion and (indirectly) inhibitory modulation to other branches.
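For the LGN stage described above, a difference-of-Gaussians spatial filter can be sketched in a few lines. The kernel size and widths below are illustrative assumptions; the paper's exact parameters follow Archie and Mel (2000) and are not reproduced here.

```python
# Difference-of-Gaussians kernel: a narrow excitatory center minus a
# broad inhibitory surround, plus a rectified "LGN response" helper.
import numpy as np
from scipy.signal import correlate2d

def dog_kernel(size=15, sigma_c=1.0, sigma_s=3.0, on_center=True):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    k = center - surround
    return k if on_center else -k   # OFF-center is the sign-flipped kernel

def lgn_response(image, kernel):
    # rectified correlation of the image with the kernel
    return np.maximum(correlate2d(image, kernel, mode="same"), 0.0)
```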
Similarly, the modulatory inputs Al and A2 each direct attention to a particular branch; for example, Al adds excitatory modulation to the branch corresponding to the Sl RF subregion and (indirectly) inhibitory modulation to other branches. V4 inhibitory /interneurons Figure 2: Design of the biophysically detailed model. Complex cells were arranged in a grid with overlapping RFs and similar orientation preferences. Each vertical stripe of cells formed an RF subregion to which attention could be directed. Each complex cell within a subregion formed one excitatory and one (indirect) inhibitory connection onto the V4 cell. Synapse locations were assigned at random within a given V4 branch. All excitatory connections for a given subregion were targeted to a single branch, while the corresponding inhibitory synapses were distributed across all of the other branches of the cell. Attentional modulatory and background synapses, both described in the text, are not shown. Excitatory synapses were modeled as having both voltage-dependent (NMDA) and voltageindependent (AMPAikainate) components, while inhibitory synapses were fast and voltageindependent (GABA-A). All synapses were modeled using the kinetic scheme of Destexhe, Mainen, and Sejnowski (1994), with peak conductance values scaled inversely by the local input resistance to reduce the dependence of local EPSP size on synapse location. The complex cells received input from the LGN, using the spatial arrangement of excitatory and inhibitory inputs described in Archie and Mel (2000), with inhibitory inputs distributed throughout the input branches (rather than, e.g., being restricted to the proximal part of the branch). We have previously shown that this arrangement of inputs produces phase- and contrast-invariant tuning to stimulus orientation, similar to that seen in cortical complex cells. All 64 complex cells had the same preferred orientation, which we will for convenience call vertical. For each stimulus image, each complex cell was simulated for 1 second and the resulting firing rate was used to set the activity level of one excitatory and one inhibitory synapse onto the V4 cell. The inhibition was assumed to emanate from an implicit inhibitory interneuron in V4. The stimulus-driven inputs to the V4 neuron were modeled as Poisson trains whose mean rate was set to the corresponding complex cell firing rate for excitatory synapses, and 1.3 times the corresponding complex cell firing rate for inhibitory synapses. The inputs were arranged on the V4 cell so that all of the complex cells with RFs distributed along a vertical stripe of the V4 RF (i.e., aligned with the preferred orientation of the complex cells) formed one subregion and made their excitatory projections to a single branch (Fig. 2). The inhibitory synapse from each complex cell was placed on a different branch than the corresponding excitatory synapse, with the specific location chosen at random. Attention was implemented by placing two modulatory synapses on each branch, one excitatory and one inhibitory. In the absence of attention, all modulatory synapses had a mean event rate of 0.1 Hz. Attention was directed to a particular subregion by increasing the firing rate of the excitatory modulation on the corresponding branch to 100 Hz, and increasing the inhibitory modulation on all other branches to 67 Hz. Each branch of the V4 cell also received a single excitatory synapse with mean firing rate 25 Hz, representing background (non-stimulus driven) input from the cortical network. 
The rationale for the spatial arrangement of synapses was that coaligned complex cells with overlapping RFs would have correlated responses over the ensemble of images seen during early postnatal development, and would thus tend to congregate within the same dendritic subunits according to Hebbian developmental principles. Similarly, excitatory synapses would tend to avoid inhibitory synapses driven by the same stimuli, since if the two are near each other on the dendrite, the efficacy of the excitation is systematically reduced by the corresponding inhibition.

Sum of squared filters model. We have previously proposed that an individual cortical pyramidal neuron may carry out high-order computations that roughly fit the form of an energy model, i.e., a sum of half-squared linear filters, with electrotonically isolated regions of the dendritic tree performing the quadratic subunit computations. Only excitatory inputs were previously considered, leaving open the question of how inhibition might fit in such a model. An obvious implementation of inhibition is to simply subtract the mean firing rates of inhibitory inputs, just as excitatory inputs are added. The sum-of-squares model thus has the form

$$f(x) = \sum_j \Big( \big( \textstyle\sum_{i \in \Theta_j} w_i x_i \big)^+ \Big)^2,$$

where $y^+$ denotes $y$ if $y \ge 0$, and 0 otherwise; $\Theta_j$ is the set of inputs $i$ that project to branch $j$; and $w_i$ is +1 if input $i$ is excitatory, -1 if inhibitory. We considered both a "paper-and-pencil" model, in which we hand-selected input values for each stimulus with an eye towards ease of interpretation, and also a model in which the tabulated complex cell output rates (from layer 3 of the detailed model) were used as input.

[Figure: strong and weak stimuli at top; bar graph of spikes per second for the five stimulus/attention conditions.]

Figure 3: Results from the biophysically detailed model. In the top row, strong and weak visual stimuli are shown at left, and combined stimuli in three attentional conditions are indicated at right. The bar graph shows the response of the simulated V4 cell under each of these 5 conditions, averaged over 192 runs. The combined stimulus in the absence of attention yields output between the responses to either stimulus alone. Attention to either the strong or the weak stimulus pushes the cell's response toward the individual response for that stimulus.

3 Results

Detailed model. A strong stimulus (a vertical bar) and a weak stimulus (a bar of the same length, turned 60° from vertical) were selected. Figure 3 shows the stimulus images and simulated V4 cell response for each stimulus alone and for the combined stimulus in various attentional states. In the absence of attention within the receptive field (attention away), the response of the cell to the combined image lay between the responses to the strong image alone or the weak image alone. This intermediate response is consistent with the responses of many V4 cells under similar conditions, and is the result of the competition between excitatory and inhibitory inputs: because of the spatial segregation, inhibitory synapses driven by one stimulus selectively undermine the effectiveness of excitation due to the other. This competition between stimuli was also biased by attentional modulation (Fig. 3).
Attending to the strong stimulus elevated the response to the combined image compared to the condition where attention was directed away, thus bringing the response closer to the response to the strong stimulus alone. Similarly, attention to the weak stimulus lowered the response to the combined stimulus.

Sum of squared filters. We used a 4-subunit sum-of-squares model for illustrative purposes. A stimulus in this model is a 4-dimensional vector, with each component representing the total input (excitatory positive, inhibitory negative) to a single subunit. Most stimuli tested had equal excitatory and inhibitory influence, so that the sum of the components was zero, and had excitatory influence confined to one subunit (i.e., the features were small compared to the entire V4 RF). One example set of stimulus vectors follows, with x̄ denoting that stimulus x is attended (implemented by adding a modulatory value of +1 to the attended branch, and -1 to all others):

s = [5, -2, -1, -2]      →  25 + 0 + 0 + 0 = 25
w = [-1, -1, -1, 3]      →  0 + 0 + 0 + 9 = 9
s + w = [4, -3, -2, 1]   →  16 + 0 + 0 + 1 = 17
s̄ + w = [5, -4, -3, 0]   →  25 + 0 + 0 + 0 = 25
s + w̄ = [3, -4, -3, 2]   →  9 + 0 + 0 + 4 = 13

This simple model gave qualitatively correct results. Some stimulus combinations we considered gave results inconsistent with the biased-competition model - e.g., the above situation with w = [-1, -1, 3, -1]. The most common type of failure was that attending to the strong stimulus in the combined image led to a larger response than that produced by the strong stimulus alone. We also saw this happen for certain parameter sets in the biophysically detailed model, as described below; a similar result is seen in some of the data of Reynolds et al. (1999). Nonetheless, this simple model gives qualitatively correct results for a surprisingly large set of input combinations.

When the complex-cell output from the detailed model was used as input to a sum-of-squared-filters model with 8 subunits, results qualitatively similar to the detailed simulation results were obtained. For the stimuli shown in Fig. 3, the following results were seen (all responses in arbitrary units): with no attention, strong: 109, weak: 2.57, combined: 84.3; combined, attention to strong: 106; to weak: 80. This simplified model, like the biophysically detailed model, is rather sensitive to the values used for the modulatory inputs: with slightly different values, for example, attending to the strong stimulus makes the response to the combined image higher than the response to the strong stimulus alone. In continuing studies, we are working to determine whether this parameter sensitivity is a general feature of such models.
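The hand-worked numbers above can be checked directly. A minimal sketch of the 4-subunit model and the attention rule from the text (since each component is already the total input to one subunit, f reduces to a sum of squared rectified components):

```python
# Sum-of-squared-filters model and the worked example:
# attention adds +1 to the attended branch and -1 to every other branch.
import numpy as np

def f(x):
    return float(np.sum(np.maximum(np.asarray(x, dtype=float), 0.0) ** 2))

def attend(x, j):
    mod = -np.ones(len(x))
    mod[j] = 1.0
    return np.asarray(x, dtype=float) + mod

s = [5, -2, -1, -2]
w = [-1, -1, -1, 3]
sw = [4, -3, -2, 1]                          # s + w
print(f(s), f(w), f(sw))                     # 25.0 9.0 17.0
print(f(attend(sw, 0)), f(attend(sw, 3)))    # 25.0 13.0 (attend strong / weak)
```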
4 Discussion

A variety of previous models for attention have considered how the RF of cortical neurons can be dynamically modulated (Olshausen, Anderson, & Van Essen, 1993; Niebur & Koch, 1994; Salinas & Abbott, 1997; Lee, Itti, Koch, & Braun, 1999). Our model, an extension of the proposal of Desimone (1992), specifies a biophysical mechanism for the multiplicative gain used in previous models (Salinas & Abbott, 1997; Reynolds et al., 1999), and suggests that both the stimulus competition and attentional effects seen in area V4 could be implemented by a straightforward mapping of stimulus-driven and modulatory afferents, both excitatory and inhibitory, onto the dendrites of V4 neurons. The results from the sum-of-squared-filters models demonstrate that even a crude model of computation in single neurons can account for the complicated response properties of V4 neurons, given several quasi-independent nonlinear dendritic subunits and a suitable spatial arrangement of synapses. In continuing work, we are exploring the large space of parameters (e.g., density of various ionic conductances, ratio of inhibition to excitation, strength of modulatory inputs) to determine which aspects of the response properties are fundamental to the model, and which are accidents of the particular parameters chosen. This work should help to identify strong vs. weak experimental predictions regarding the contributions of dendritic subunit computation to the response properties of extrastriate neurons.

Acknowledgements

Supported by NSF.

References

Archie, K. A., & Mel, B. W. (2000). A model for intradendritic computation of binocular disparity. Nature Neurosci., 3(1), 54-63.

Connor, C. E., Preddie, D. C., Gallant, J. L., & Van Essen, D. C. (1997). Spatial attention effects in macaque area V4. J. Neurosci., 17(9), 3201-3214.

Desimone, R., & Schein, S. J. (1987). Visual properties of neurons in area V4 of the macaque: sensitivity to stimulus form. J. Neurophysiol., 57(3), 835-868.

Desimone, R. (1992). Neural circuits for visual attention in the primate brain. In Carpenter, G. A., & Grossberg, S. (Eds.), Neural Networks for Vision and Image Processing, chap. 12, pp. 343-364. MIT Press, Cambridge, MA.

Desimone, R. (1998). Visual attention mediated by biased competition in extrastriate visual cortex. Phil. Trans. R. Soc. Lond. B, 353, 1245-1255.

Destexhe, A., Mainen, Z., & Sejnowski, T. J. (1994). Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism. J. Comput. Neurosci., 1, 195-230.

Destexhe, A., & Pare, D. (1999). Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J. Neurophysiol., 81, 1531-1547.

Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Comput., 9, 1179-1209.

Lee, D. K., Itti, L., Koch, C., & Braun, J. (1999). Attention activates winner-take-all competition among visual filters. Nature Neurosci., 2(4), 375-381.

Luck, S. J., Chelazzi, L., Hillyard, S. A., & Desimone, R. (1997). Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J. Neurophysiol., 77, 24-42.

McAdams, C. J., & Maunsell, J. H. R. (1999). Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. J. Neurosci., 19(1), 431-441.

Mel, B. W. (1999). Why have dendrites? A computational perspective. In Stuart, G., Spruston, N., & Hausser, M. (Eds.), Dendrites, chap. 11, pp. 271-289. Oxford University Press.

Mel, B. W., Ruderman, D. L., & Archie, K. A. (1998). Translation-invariant orientation tuning in visual "complex" cells could derive from intradendritic computations. J. Neurosci., 18(11), 4325-4334.

Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229, 782-784.

Motter, B. C. (1993). Focal attention produces spatially selective processing in visual cortical areas V1, V2, and V4 in the presence of competing stimuli. J. Neurophysiol., 70(3), 909-919.

Niebur, E., & Koch, C. (1994). A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons. J. Comput. Neurosci., 1, 141-158.

Olshausen, B.
A., Anderson, C. H., & Van Essen, D. C. (1993). A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J. Neurosci., 13(11), 4700-4719.

Reynolds, J. H., Chelazzi, L., & Desimone, R. (1999). Competitive mechanisms subserve attention in macaque areas V2 and V4. J. Neurosci., 19(5), 1736-1753.

Salinas, E., & Abbott, L. F. (1997). Invariant visual responses from attentional gain fields. J. Neurophysiol., 77, 3267-3272.
Explaining Away in Weight Space

Peter Dayan    Sham Kakade
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR
[email protected]    [email protected]

Abstract

Explaining away has mostly been considered in terms of inference of states in belief networks. We show how it can also arise in a Bayesian context in inference about the weights governing relationships such as those between stimuli and reinforcers in conditioning experiments such as backward blocking. We show how explaining away in weight space can be accounted for using an extension of a Kalman filter model; provide a new approximate way of looking at the Kalman gain matrix as a whitener for the correlation matrix of the observation process; suggest a network implementation of this whitener using an architecture due to Goodall; and show that the resulting model exhibits backward blocking.

1 Introduction

The phenomenon of explaining away is commonplace in inference in belief networks. In this, an explanation (a setting of activities of unobserved units) that is consistent with certain observations is accorded a low posterior probability if another explanation for the same observations is favoured either by the prior or by other observations. Explaining away is typically realized by recurrent inference procedures, such as mean field inference (see Jordan, 1998). However, explaining away is not only important in the space of on-line explanations for data; it is also important in the space of weights. This is a very general problem that we illustrate using a phenomenon from animal conditioning called backward blocking (Shanks, 1985; Miller & Matute, 1996). Conditioning paradigms are important because they provide a window onto processes of successful natural inference, which are frequently statistically normative. Backwards blocking poses a very different problem from standard explaining away, and rather complex theories have been advanced to account for it (eg Wagner & Brandon, 1989). We treat it as a case for Kalman filtering, and suggest a novel network model for Kalman filtering to solve it. Consider three different conditioning paradigms associated with backwards blocking:

  name       set 1      set 2      test
  forward    L → R      L,S → R    S → ·
  backward   L,S → R    L → R      S → ·
  sharing    L,S → R    ---        S → R/2

These paradigms involve one or two sets of multiple learning trials (set 1 and set 2), in which stimuli (a light, L, and/or a sound, S) are conditioned to a reward (R), followed by a test phase, in which the strength of association between the sound S and reward is assessed. This is found to be weak (·) in forward and backward blocking, but stronger (R/2) in the sharing paradigm. The effect that concerns this paper occurs during the second set of trials in backward blocking, in which the association between the sound and the reward is weakened (compared with sharing), even though the sound is not presented during these trials. The apparent association between the sound and the reward established in the first set of trials is explained away in the second set of trials. The standard explanation for this (Wagner's SOP model, see Wagner & Brandon, 1989) suggests that during the first set of trials, the light comes to predict the presence of the sound; and that during the second set of trials, the fact that the sound is expected (on the basis of the light, represented by the activation of 'opponent' sound units) but not presented, weakens the association between the sound and the reward.
Not only does this suggestion lack a statistical basis, but also its network implementation requires that the activation of the opponent sound units makes weaker the weights from the standard sound units to reward. It is unclear how this could work. In this paper, we first extend the Kalman filter based conditioning theory of Sutton (1992) to the case of backward blocking. Next, we show the close relationship between the key quantity for a Kalman filter -- namely the covariance matrix of uncertainty about the relationship between the stimuli and the reward -- and the symmetric whitening matrix for the stimuli. Then we show how the Goodall algorithm for whitening (Goodall, 1960; Atick & Redlich, 1993) makes for an appropriate network implementation for weight updates based on the Kalman filter. The final algorithm is a motivated mixture of unsupervised and reinforcement (or, equivalently in this case, supervised) learning. Last, we demonstrate backward blocking in the full model.

2 The Kalman filter and classical conditioning

Sutton (1992) suggested that one can understand classical conditioning in terms of normative statistical inference. The idea is that on trial n there is a set of true weights w_n mediating the relationship between the presentation of stimuli x_n and the amount of reward r_n that is delivered, where

  r_n = w_n · x_n + ε_n   (1)

and ε_n ~ N[0, τ²] is zero-mean Gaussian noise, independent from one trial to the next.¹ For the cases above, x_n = (x_n^L, x_n^S) might have two dimensions, one each for light and sound, taking on values that are binary, representing the presence and absence of the stimuli. Similarly, w_n = (w_n^L, w_n^S) also has two dimensions. Crucially, to allow for the possibility (realized in most conditioning experiments) that the true weights might change, the model includes a diffusion term

  w_{n+1} = w_n + η_n   (2)

where η_n ~ N[0, σ²I] is also Gaussian. The task for the animal is to take observations of the stimuli {x_n} and rewards {r_n} and infer a distribution over w_n. Provided that the initial uncertainty can be captured as w_0 ~ N[0, Σ_0] for some covariance matrix Σ_0, inference takes the form of a standard recursive Kalman filter, for which P(w_n | r_1 ... r_{n-1}) ~ N[ŵ_n, Σ_n] and

  ŵ_{n+1} = ŵ_n + Σ_n · x_n (r_n - ŵ_n · x_n) / (x_n · Σ_n · x_n + τ²)   (3)

  Σ_{n+1} = Σ_n - Σ_n · [x_n x_n] · Σ_n / (x_n · Σ_n · x_n + τ²) + σ²I   (4)

¹For vectors a, b and matrix C, a · b = Σ_i a_i b_i, a · C · b = Σ_ij a_i C_ij b_j, and matrix [ab]_ij = a_i b_j.

If Σ_n ∝ I, then the update for the mean can be seen as a standard delta rule (Widrow & Stearns, 1985; Rescorla & Wagner, 1972), involving the prediction error (or innovation) δ_n = r_n - ŵ_n · x_n. Note the familiar, but at first sight counterintuitive, result that the update for the covariance matrix does not depend on the innovation or the observed r_n.²

In backward blocking, in the first set of trials, the off-diagonal terms of the covariance matrix Σ_n become negative. This can either be seen from the form of the update equation for the covariance matrix (since x_n ~ (1,1)), or, more intuitively, from the fact that these trials imply a constraint only on w_n^L + w_n^S, therefore forcing ŵ_n^L and ŵ_n^S to be negatively correlated. The consequence of this negative correlation in the second set of trials is that the S component of Σ_n · x_n = Σ_n · (1,0) is less than 0, and so, via equation 3, ŵ_n^S reduces. This is exactly the result in backward blocking. Another way of looking at this is in terms of explaining away in weight space.
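Concretely, iterating equations 3 and 4 on the backward blocking schedule of Section 5 reproduces the effect. The following is a minimal numpy sketch of our own (not the authors' simulation code); the trial counts match Figure 2, and the noise values are read off the Figure 2 caption and treated, as an assumption, as standard deviations.

```python
import numpy as np

def kalman_trial(w, Sigma, x, r, tau2, sigma2):
    # Condition on (x, r) via equations 3 and 4, then add the diffusion term.
    denom = x @ Sigma @ x + tau2
    w_new = w + Sigma @ x * (r - w @ x) / denom
    Sigma_new = Sigma - np.outer(Sigma @ x, Sigma @ x) / denom + sigma2 * np.eye(len(x))
    return w_new, Sigma_new

w, Sigma = np.zeros(2), np.eye(2)       # prior mean and covariance over (w^L, w^S)
tau2, sigma2 = 0.35 ** 2, 0.09 ** 2     # tau and sigma from the Figure 2 caption
trials = [np.array([1.0, 1.0])] * 20 + [np.array([1.0, 0.0])] * 20
for x in trials:                        # 20 trials of L,S -> R, then 20 of L -> R
    w, Sigma = kalman_trial(w, Sigma, x, 1.0, tau2, sigma2)

print("w^L, w^S:", w)             # roughly (1, 0): the sound weight is explained away
print("cov(L, S):", Sigma[0, 1])  # the negative cross-covariance built up in set 1
```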
From the first set of trials, the animal infers that w_n^L + w_n^S = R > 0; from the second, that the prediction owes to w_n^L rather than w_n^S, and so the old value w_n^S = R/2 is explained away by w_n^L. Sutton (1992) actually suggested the approximation of forcing the off-diagonal components of the covariance matrix Σ_n to be 0, which, of course, prevents the system from accounting for backward blocking. We seek a network account of explaining away in the space of weights by implementing an approximate form of Kalman filtering.

3 Whitening and the Kalman filter

In conventional applications of the Kalman filter, x_n would typically be constant. That is, the hidden state (w_n) would be observed through a fixed observation process. In cases such as classical conditioning, though, this is not true -- we are interested in the case that x_n changes over time, possibly even in a random (though fully observable) way. The plan for this section is to derive an approximate relationship between the average covariance matrix over the weights Σ̄ and a whitening matrix for the stimulus inputs. In the next section, we consider an implementation of a particular whitening algorithm as an unsupervised way of estimating the covariance matrix for the Kalman filter and show how to use it to learn the weights ŵ_n appropriately.

Consider the case that x_n are random, with correlation matrix ⟨[x x]⟩ = Q, and consider the mean covariance matrix Σ̄ for the Kalman filter, averaging across the variation in x. Make the approximation that

  ⟨ Σ · [x x] · Σ / (x · Σ · x + τ²) ⟩ ≈ ⟨Σ · [x x] · Σ⟩ / ⟨x · Σ · x + τ²⟩

which is less drastic than it might first appear since the denominator is just a scalar. Then, we can solve for the average of the asymptotic value of Σ̄ in the equation for the update of the Kalman filter as

  Σ̄ Q Σ̄ ∝ I   (5)

Thus Σ̄ is a whitening filter for the correlation matrix Q of the inputs {x}. Symmetric whitening filters (Σ̄ must be symmetric) are generally unique (Atick & Redlich, 1993). This result is very different from the standard relationship between Kalman filtering and whitening. The standard Kalman filter is a whitening filter for the innovations process δ_n = r_n - ŵ_n · x_n, ie as extracting all the systematic variation into ŵ_n, leaving only random variation due to the observation noise and the diffusion process. Equation 5 is an additional level of whitening, saying that one can look at the long-run average covariance matrix of the uncertainty in w_n as whitening the input process x_n.

²Note also the use of the alternative form of the Kalman filter, in which we perform observation/conditioning followed by drift, rather than drift followed by observation/conditioning.

Figure 1: Whitening. A) The lower curve shows the average maximum off-diagonal element of |Σ̄QΣ̄| as a function of v. The upper curve shows the average maximum diagonal element of the same matrix. The off-diagonal components are around an order of magnitude smaller than the on-diagonal components, even in the difficult regime where v is near 0 and thus the matrix Q is nearly singular. B) Network model for Kalman filtering. Identity feedforward weights I map inputs x to a recurrent network y(t) whose output is used to make predictions. Learning of the recurrent weights B is based on Goodall's (1960) rule; learning of the prediction weights is based on the delta rule, only using y(0) to make the predictions and y(∞) to change the weights.
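Equation 5 can be checked numerically. The sketch below (our own illustration, with arbitrary parameter choices, not the simulation behind Figure 1) iterates the covariance update of equation 4 over random stimuli, averages Σ_n, and tests whether the average whitens Q up to a constant.

```python
import numpy as np

rng = np.random.default_rng(0)
d, tau2, sigma2, v = 2, 0.1, 0.01, 0.5
Sigma = np.eye(d)
Sigma_sum = np.zeros((d, d))
T = 50000
for _ in range(T):
    x = rng.normal(loc=[1.0, 1.0], scale=v)   # stimuli with mean (1,1), variance v^2 I
    denom = x @ Sigma @ x + tau2
    Sigma = Sigma - np.outer(Sigma @ x, Sigma @ x) / denom + sigma2 * np.eye(d)
    Sigma_sum += Sigma                        # long-run average (no burn-in, for brevity)

Sigma_bar = Sigma_sum / T
Q = np.ones((d, d)) + v ** 2 * np.eye(d)      # <[x x]> for this input distribution
F = Sigma_bar @ Q @ Sigma_bar
print(F / F[0, 0])  # off-diagonal entries much smaller than diagonal ones, as equation 5 predicts
```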
This interpretation is inherently unsupervised, in that the whitening takes place without any reference to the observed rewards (or even the innovation). Given the approximation, we tested whether Σ̄ really whitens Q by generating x_n from a Gaussian distribution with mean (1,1) and variance v²I, calculating the long-run average value of Σ̄, and assessing whether F = Σ̄QΣ̄ is white. There is no unique measure for the deviation of F from being diagonal; as an example, figure 1A shows, as a function of v, the largest on- and off-diagonal elements of F. The figure shows that the off-diagonal components are comparatively very small, even when v is very small, for which Q has an eigenvalue very near to 0, making the whitening matrix nearly undefined. Equally, in this case, Σ_n tends to have very large values, since, looking at equation 4, the growth in uncertainty coming from σ²I is not balanced by any observation in the direction (1,-1) that is orthogonal to (1,1).

Of course, only the long-run average covariance matrix Σ̄ whitens Q. We make the further approximation of using an on-line estimate of the symmetric whitening matrix as the on-line estimate of the covariance of the weights Σ_n.

4 A network model

Figure 1B shows a network model in which prediction weights w_n adapt in a manner that is appropriately sensitive to a learned, on-line estimate of the whitening matrix. The network has two components: a mapping from input x to output y(t), via recurrent feedback weights B (the Goodall (1960) whitening filter), and a mapping from y, through a set of prediction weights w, to an estimate of the reward. The second part of the network is most straightforward. The feedforward weights from x to y are just the identity matrix I. Therefore, the initial value in the hidden layer in response to stimulus x_n is y(0) = x_n, and so the prediction of reward is just w · y(0) = w · x_n.

The first part of the network is a straightforward implementation of Goodall's whitening filter (Goodall, 1960; Atick & Redlich, 1993). The recurrent dynamics in the y-layer are taken as being purely linear. Therefore, in response to input x (propagated through the identity feedforward weights),

  τ ẏ = -y + x + By   and so   y(∞) = (I - B)^{-1} x,

provided that the inverse exists. Goodall's algorithm changes the recurrent weights B using local, anti-Hebbian learning, according to

  ΔB ∝ -[x y] + I - B.   (6)

This rule stabilizes on average when I = (I - B)^{-1} Q [(I - B)^{-1}]ᵀ, that is, when (I - B)^{-1} is a whitening filter for the correlation matrix Q of the inputs. If B is symmetric, which can be guaranteed by making B = 0 initially (Atick & Redlich, 1993), then, by convergence, we have (I - B)^{-1} = Σ̄ and, given input x_n to the network,

  Σ̄ x_n = (I - B)^{-1} x_n = y_n(∞).

Therefore, we can implement a learning rule for the prediction weights akin to the Kalman filter (equation 3) using

  Δw_n ∝ y_n(∞) (r_n - w_n · y_n(0)).   (7)

This is the standard delta rule, except that the predictions are based on y_n(0) = x_n, whereas the weight changes are based on y_n(∞) = Σ̄ x_n. The learning rule gets wrong the absolute magnitude of the weight changes (since it lacks the x_n · Σ_n · x_n + τ² term in the denominator), but it gets right the direction of the changes.

5 Results

Figure 2 shows the result of learning in backward blocking. In association with r_n = 1, first stimulus x_n = (1,1) was presented for 20 trials, then stimulus x_n = (1,0) was presented for a further 20 trials. Figure 2A shows the development of the weights ŵ_n^L (solid) and ŵ_n^S (dashed).
During the first set of trials, these grow towards 0.5; during the second set, they differentiate sharply, with the weight associated with the light growing towards 1, and that with the sound, which is explained away, growing towards 0. Figure 2B shows the development of two terms in the estimated covariance matrix. The negative covariance between light and sound is evident, and causes the sharp changes in the weights on the 21st trial. Figures 2C & D show the values using the exact Kalman filter, showing qualitatively similar behavior. The increases in the magnitudes of Σ̂_n^{LL} and Σ̂_n^{LS} during the first stage of backwards blocking come from the lack of information in the input about w_n^L - w_n^S, despite its continual diffusion (from equation 2). Thus backwards blocking is a pathological case. Nevertheless, the on-line method for estimating Σ captures the correct behavior. Figures 2E-H show a non-pathological case with observation noise added. The estimates from the model closely match those of the exact Kalman filter, a result that is also true for other non-pathological cases.

Figure 2: Backward blocking in the full model. A) The development of ŵ over 20 trials with x_n = (1,1) and 20 with x_n = (1,0). B) The development of the estimated covariance of the weight for the light Σ̂_n^{LL} and cross-covariance between the light and the sound Σ̂_n^{LS}. The learning rates in equations 6 and 7 were both 0.125. C & D) The development of ŵ and Σ from the exact Kalman filter with parameters σ = 0.09 and τ = 0.35. E) The development of ŵ as in A), except with multiplicative Gaussian noise added (ie noise with standard deviation 0.35 is added only to the representations of stimuli that are present). F & G) The comparison of ŵ in the model (solid line) and in the exact Kalman filter (dashed line), using the same parameters for the Kalman filter as in C) and D). H) A comparison of the true covariance, Σ_n (dashed line), with the rescaled estimate, (I - B)^{-1} (solid line).

6 Discussion

We have shown how the standard Kalman filter produces explaining away in the space of weights, and suggested and proved efficacious a natural network model for implementing the Kalman filter. The model mixes unsupervised learning of a whitener for the observation process (ie the x_n of equation 1), providing the covariance matrix governing the uncertainty in the weights, with supervised (or equivalently reinforcement) learning of the mean values of the weights. Unsupervised learning is reasonable since the evolution of the covariance matrix of the weights is independent of the innovations. The basic result is an approximation, but one that has been shown to match results quite closely. Further work is needed to understand how to set the parameters of the Goodall learning rule to match σ² and τ² exactly. Hinton (personal communication) has suggested an alternative interpretation of Kalman filtering based on a heteroassociative novelty filter. Here, the idea is to use the recurrent network B only once, rather than to equilibrium, with (as for our model) y_n(0) = x_n, the prediction v = w_n · y_n(0), y_n(1) = B_n · x_n, and

  Δw_n ∝ y_n(1) (r_n - w_n · y_n(0)).

This gives B_n a similar role to Σ_n in learning w_n.
For the novelty filter,

  B_{n+1} = B_n - B_n · [x_n x_n] · B_n / |B_n · x_n|²,

which makes the network a perfect heteroassociator between x_n and r_n. If we compare the update for B_n to that for Σ_n (equation 4), we can see that it amounts approximately to assuming neither observation noise nor drift. Thus, whereas our network model approximates the long-run covariance matrix, the novelty filter approximates the instantaneous covariance matrix directly, and could clearly be adapted to take account of noise. Unfortunately, there are few quantitatively precise experimental results on backwards blocking, so it is hard to choose between different possible rules.

There is a further alternative. Sutton (1992) suggested an on-line way of estimating the elements of the covariance matrix, observing that

  E[δ_n²] = τ² + x_n · Σ_n · x_n   (8)

and so considered using a standard delta rule to fit the squared innovation using a quadratic input representation ((x_n^L)², (x_n^S)², x_n^L × x_n^S, 1).³ The weight associated with the last element, the bias, should come to be the observation noise τ²; the weights associated with the other elements are just the components of Σ_n. The most critical concern about this is that it is not obvious how to use the resulting covariance matrix to control learning about the mean values of the weights. There is also the more theoretical concern that the covariance matrix should really be independent of the prediction errors, one manifestation of which is that the occurrence of backward blocking in the model of equation 8 is strongly sensitive to initial conditions.

³Although the x^L × x^S term was omitted from Sutton's diagonal approximation to Σ_n.

Although backward blocking is a robust phenomenon, particularly in human conditioning experiments (Shanks, 1985), it is not observed in all animal conditioning paradigms. One possibility for why not is that the anatomical substrate of the cross-modal recurrent network (the B weights in the model) is not ubiquitously available. In its absence, y(∞) = y(0) = x_n in response to an input x_n, and so the network will perform like the standard delta or Rescorla-Wagner (Rescorla & Wagner, 1972) rule.

The Kalman filter is only one part of a more complicated picture for statistically normative models of conditioning. It makes for a particularly clear example of what is incomplete about some of our own learning rules (notably Kakade & Dayan, 2000), which suggest that, at least in some circumstances, learning about the two different stimuli should progress completely independently. We are presently trying to integrate on-line and learned competitive and additive effects using ideas from mixture models and Kalman filters.

Acknowledgements

We are very grateful to David Shanks, Rich Sutton, Read Montague and Terry Sejnowski for discussions of the Kalman filter model and its relationship to backward blocking, and to Sam Roweis for comments on the paper. This work was funded by the Gatsby Charitable Foundation and the NSF.

References

Atick, JJ & Redlich, AN (1993) Convergent algorithm for sensory receptive field development. Neural Computation 5:45-60.
Goodall, MC (1960) Performance of stochastic net. Nature 185:557-558.
Jordan, MI, editor (1998) Learning in Graphical Models. Dordrecht: Kluwer.
Kakade, S & Dayan, P (2000) Acquisition in autoshaping. In SA Solla, TK Leen & K-R Muller, editors, Advances in Neural Information Processing Systems, 12.
Miller, RR & Matute, H (1996) Biological significance in forward and backward blocking: Resolution of a discrepancy between animal conditioning and human causal judgment. Journal of Experimental Psychology: General 125:370-386.
Rescorla, RA & Wagner, AR (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In AH Black & WF Prokasy, editors, Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts, 64-99.
Shanks, DR (1985) Forward and backward blocking in human contingency judgement. Quarterly Journal of Experimental Psychology 37:1-21.
Sutton, RS (1992) Gain adaptation beats least squares? In Proceedings of the 7th Yale Workshop on Adaptive and Learning Systems.
Wagner, AR & Brandon, SE (1989) Evolution of a structured connectionist model of Pavlovian conditioning (AESOP). In SB Klein & RR Mowrer, editors, Contemporary Learning Theories. Hillsdale, NJ: Erlbaum, 149-189.
Widrow, B & Stearns, SD (1985) Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
Automatic choice of dimensionality for PCA

Thomas P. Minka
MIT Media Lab
20 Ames St, Cambridge, MA 02139
[email protected]

Abstract

A central issue in principal component analysis (PCA) is choosing the number of principal components to be retained. By interpreting PCA as density estimation, we show how to use Bayesian model selection to estimate the true dimensionality of the data. The resulting estimate is simple to compute yet guaranteed to pick the correct dimensionality, given enough data. The estimate involves an integral over the Stiefel manifold of k-frames, which is difficult to compute exactly. But after choosing an appropriate parameterization and applying Laplace's method, an accurate and practical estimator is obtained. In simulations, it is convincingly better than cross-validation and other proposed algorithms, plus it runs much faster.

1 Introduction

Recovering the intrinsic dimensionality of a data set is a classic and fundamental problem in data analysis. A popular method for doing this is PCA or localized PCA. Modeling the data manifold with localized PCA dates back to [4]. Since then, the problem of spacing and sizing the local regions has been solved via the EM algorithm and split/merge techniques [2, 6, 14, 5]. However, the task of dimensionality selection has not been solved in a satisfactory way. On the one hand we have crude methods based on eigenvalue thresholding [4] which are very fast, or we have iterative methods [1] which require excessive computing time. This paper resolves the situation by deriving a method which is both accurate and fast. It is an application of Bayesian model selection to the probabilistic PCA model developed by [12, 15]. The new method operates exclusively on the eigenvalues of the data covariance matrix. In the local PCA context, these would be the eigenvalues of the local responsibility-weighted covariance matrix, as defined by [14]. The method can be used to fit different PCA models to different classes, for use in Bayesian classification [11].

2 Probabilistic PCA

This section reviews the results of [15]. The PCA model is that a d-dimensional vector x was generated from a smaller k-dimensional vector w by a linear transformation (H, m) plus a noise vector e: x = Hw + m + e. Both the noise and the principal component vector w are assumed spherical Gaussian:

  p(w) ~ N(0, I),   p(e) ~ N(0, vI)   (1)

The observation x is therefore Gaussian itself:

  p(x | H, m, v) ~ N(m, HHᵀ + vI)   (2)

The goal of PCA is to estimate the basis vectors H and the noise variance v from a data set D = {x_1, ..., x_N}. The probability of the data set is

  p(D | H, m, v) = (2π)^{-Nd/2} |HHᵀ + vI|^{-N/2} exp(-½ tr((HHᵀ + vI)^{-1} S))   (3)

  S = Σ_i (x_i - m)(x_i - m)ᵀ   (4)

As shown by [15], the maximum-likelihood estimates are:

  m̂ = (1/N) Σ_i x_i,   Ĥ = U (Λ - v̂I)^{1/2} R,   v̂ = Σ_{j=k+1}^{d} λ_j / (d - k)   (5)

where orthogonal matrix U contains the top k eigenvectors of S/N, diagonal matrix Λ contains the corresponding eigenvalues, and R is an arbitrary orthogonal matrix.

3 Bayesian model selection

Bayesian model selection scores models according to the probability they assign the observed data [9, 8]. It is completely analogous to Bayesian classification. It automatically encodes a preference for simpler, more constrained models, as illustrated in figure 1. Simple models only fit a small fraction of data sets, but they assign correspondingly higher probability to those data sets. Flexible models spread themselves out more thinly.
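As a concrete reading of equation 5, here is a minimal numpy sketch of the maximum-likelihood fit. It is our own illustration, not Minka's code; R is taken to be the identity, and it assumes λ_k > v̂ so that the square root is real.

```python
import numpy as np

def ppca_ml(X, k):
    """Maximum-likelihood probabilistic PCA (equation 5); rows of X are observations."""
    N, d = X.shape
    m = X.mean(axis=0)
    lam, U = np.linalg.eigh((X - m).T @ (X - m) / N)   # eigendecomposition of S/N
    lam, U = lam[::-1], U[:, ::-1]                     # eigenvalues in decreasing order
    v = lam[k:].sum() / (d - k)                        # noise variance from the rest
    H = U[:, :k] * np.sqrt(lam[:k] - v)                # H = U (Lambda - vI)^{1/2}, R = I
    return m, H, v

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10))
m_hat, H_hat, v_hat = ppca_ml(X, k=3)
print(v_hat)   # close to the noise variance 0.01 when the model holds
```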
The probability of the data given the model is computed by integrating over the unknown parameter values in that model:

  p(D | M) = ∫_θ p(D | θ) p(θ | M) dθ   (6)

This quantity is called the evidence for model M. A useful property of Bayesian model selection is that it is guaranteed to select the true model, if it is among the candidates, as the size of the dataset grows to infinity.

Figure 1: Why Bayesian model selection prefers simpler models. [Sketch of p(D|M) versus data sets D: the constrained model concentrates its mass and wins where it fits; the flexible model spreads thinly and wins elsewhere.]

3.1 The evidence for probabilistic PCA

For the PCA model, we want to select the subspace dimensionality k. To do this, we compute the probability of the data for each possible dimensionality and pick the maximum. For a given dimensionality, this requires integrating over all PCA parameters (m, H, v). First we need to define a prior density for these parameters. Assuming there is no information other than the data D, the prior should be as noninformative as possible. A noninformative prior for m is uniform, and with such a prior we can integrate out m analytically, leaving

  p(D | H, v) = N^{-d/2} (2π)^{-(N-1)d/2} |HHᵀ + vI|^{-(N-1)/2} exp(-½ tr((HHᵀ + vI)^{-1} S))   (7)

where

  S = Σ_i (x_i - m̂)(x_i - m̂)ᵀ   (8)

Unlike m, H must have a proper prior since it varies in dimension for different models. Let H be decomposed just as in (5):

  H = U (L - vI)^{1/2} R   (9)

where L is diagonal with diagonal elements l_1, ..., l_k. The orthogonal matrix U is the basis, L is the scaling (corrected for noise), and R is a rotation within the subspace (which will turn out to be irrelevant). A conjugate prior for (U, L, R, v), parameterized by α, is

  p(U, L, R, v) ∝ |HHᵀ + vI|^{-(α+2)/2} exp(-(α/2) tr((HHᵀ + vI)^{-1}))   (10)

This distribution happens to factor into p(U)p(L)p(R)p(v), which means the variables are a-priori independent:

  p(L) ∝ |L|^{-(α+2)/2} exp(-(α/2) tr(L^{-1}))   (11)

  p(v) ∝ v^{-(α+2)(d-k)/2} exp(-α(d - k)/(2v))   (12)

  p(U), p(R) constant -- defined in (20)   (13)

The hyperparameter α controls the sharpness of the prior. For a noninformative prior, α should be small, making the prior diffuse. Besides providing a convenient prior, the decomposition (9) is important for removing redundant degrees of freedom (R) and for separating H into independent components, as described in the next section. Combining the likelihood with the prior gives

  p(D | k) = c_k ∫ |HHᵀ + vI|^{-n/2} exp(-½ tr((HHᵀ + vI)^{-1} (S + αI))) dU dL dv   (14)

  n = N + 1 + α   (15)

The constant c_k includes N^{-d/2} and the normalizing terms for p(U), p(L), and p(v) (given in [10]) -- only p(U) will matter in the end. In this formula R has already been integrated out; the likelihood does not involve R, so we just get a multiplicative factor of ∫_R p(R) dR = 1.

3.2 Laplace approximation

Laplace's method is a powerful method for approximating integrals in Bayesian statistics [8]:

  ∫ f(θ) dθ ≈ f(θ̂) (2π)^{rows(A)/2} |A|^{-1/2}   (16)

  A = -(d² log f(θ))/(dθ dθᵀ) |_{θ=θ̂}   (17)

The key to getting a good approximation is choosing a good parameterization for θ = (U, L, v). Since l_i and v are positive scale parameters, it is best to use l_i' = log(l_i) and v' = log(v). This results in

  l̂_i = (N λ_i + α)/(N - 1 + α),   v̂ = (N Σ_{j=k+1}^{d} λ_j + α(d - k))/(n(d - k) - 2)

  d² log f(θ)/(dl_i')² |_{θ=θ̂} = -(N - 1 + α)/2   (18)

  d² log f(θ)/(dv')² |_{θ=θ̂} = -(n(d - k) - 2)/2   (19)

The matrix U is an orthogonal k-frame and therefore lives on the Stiefel manifold [7], which is defined by condition (9). The dimension of the manifold is m = dk - k(k + 1)/2, since we are imposing k(k + 1)/2 constraints on a d × k matrix.
The prior density for U is the reciprocal of the area of the manifold [7]:

  p(U) = 2^{-k} Π_{i=1}^{k} Γ((d - i + 1)/2) π^{-(d-i+1)/2}   (20)

A useful parameterization of this manifold is given by the Euler vector representation:

  U = U_d exp(Z)   (21)

where U_d is a fixed orthogonal matrix and Z is a skew-symmetric matrix of parameters, such as

  Z = [   0      z_12    z_13 ]
      [ -z_12     0      z_23 ]   (22)
      [ -z_13   -z_23     0   ]

The first k rows of Z determine the first k columns of exp(Z), so the free parameters are z_ij with i < j and i ≤ k; the others are constant. This gives d(d-1)/2 - (d-k)(d-k-1)/2 = m parameters, as desired. For example, in the case (d = 3, k = 1) the free parameters are z_12 and z_13, which define a coordinate system for the sphere. As a function of U, the integrand is simply

  p(U | D, L, v) ∝ exp(-½ tr((L^{-1} - v^{-1} I) Uᵀ S U))   (23)

The density is maximized when U contains the top k eigenvectors of S. However, the density is unchanged if we negate any column of U. This means that there are actually 2^k different maxima, and we need to apply Laplace's method to each. Fortunately, these maxima are identical, so we can simply multiply (16) by 2^k to get the integral over the whole manifold. If we set U_d to the eigenvectors of S:

  U_dᵀ S U_d = N Λ   (24)

then we just need to apply Laplace's method at Z = 0. As shown in [10], if we define the estimated eigenvalue matrix

  Λ̂ = [ L̂   0  ]
      [ 0   v̂I ]   (25)

then the second differential at Z = 0 simplifies to

  d² log f(θ) |_{Z=0} = -Σ_{i=1}^{k} Σ_{j=i+1}^{d} (λ̂_j^{-1} - λ̂_i^{-1})(λ_i - λ_j) N dz_ij²   (26)

There are no cross derivatives; the Hessian matrix A_Z is diagonal. So its determinant is the product of these second derivatives:

  |A_Z| = Π_{i=1}^{k} Π_{j=i+1}^{d} (λ̂_j^{-1} - λ̂_i^{-1})(λ_i - λ_j) N   (27)

Laplace's method requires this to be nonsingular, so we must have k < N. The cross-derivatives between the parameters are all zero:

  d² log f(θ)/(dl_i' dZ) |_{θ=θ̂} = d² log f(θ)/(dv' dZ) |_{θ=θ̂} = d² log f(θ)/(dl_i' dv') |_{θ=θ̂} = 0   (28)

so A is block diagonal and |A| = |A_Z||A_L||A_v|. We know A_L from (18), A_v from (19), and A_Z from (27). We now have all of the terms needed in (16), and so the evidence approximation is

  p(D | k) ≈ 2^k c_k (Π_i l̂_i)^{-n/2} v̂^{-n(d-k)/2} e^{-nd/2} (2π)^{(m+k+1)/2} |A_Z|^{-1/2} |A_L|^{-1/2} |A_v|^{-1/2}   (29)

For model selection, the only terms that matter are those that strongly depend on k, and since α is small and N reasonably large we can simplify this to

  p(D | k) ≈ p(U) (Π_{j=1}^{k} λ_j)^{-N/2} v̂^{-N(d-k)/2} (2π)^{(m+k)/2} |A_Z|^{-1/2} N^{-k/2}   (30)

  v̂ = Σ_{j=k+1}^{d} λ_j / (d - k)   (31)

which is the recommended formula. Given the eigenvalues, the cost of computing p(D|k) is O(min(d, N) k), which is less than one loop over the data matrix.

A simplification of Laplace's method is the BIC approximation [8]. This approximation drops all terms which do not grow with N, which in this case leaves only

  p(D | k) ≈ (Π_{j=1}^{k} λ_j)^{-N/2} v̂^{-N(d-k)/2} N^{-(m+k)/2}   (32)

BIC is compared to Laplace in section 4.

4 Results

To test the performance of various algorithms for model selection, we sample data from a known model and see how often the correct dimensionality is recovered. The seven estimators implemented and tested in this study are Laplace's method (30), BIC (32), the two methods of [13] (called RR-N and RR-U), the algorithm in [3] (ER), the ARD algorithm of [1], and 5-fold cross-validation (CV). For cross-validation, the log-probability assigned to the held-out data is the scoring function. ER is the most similar to this paper, since it performs Bayesian model selection on the same model, but uses a different kind of approximation combined with explicit numerical integration.
RR-N and RR-U are maximum likelihood techniques on models slightly different than probabilistic PCA; the details are in [10]. ARD is an iterative estimation algorithm for H which sets columns to zero unless they are supported by the data. The number of nonzero columns at convergence is the estimate of dimensionality. Most of these estimators work exclusively from the eigenvalues of the sample covariance matrix. The exceptions are RR-U, cross-validation, and ARD; the latter two require diagonalizing a series of different matrices constructed from the data. In our implementation, the algorithms are ordered from fastest to slowest as RR-N, BIC, Laplace, cross-validation, RR-U, ARD, and ER (ER is slowest because of the numerical integrations required).

The first experiment tests the data-rich case where N >> d. The data is generated from a 10-dimensional Gaussian distribution with 5 "signal" dimensions and 5 noise dimensions. The eigenvalues of the true covariance matrix are:

  Signal: 10 8 6 4 2    Noise: 1 (x5)    N = 100

The number of times the correct dimensionality (k = 5) was chosen over 60 replications is shown at right [bar chart over ER, Laplace, CV, BIC, ARD, RR-N, RR-U]. The differences between ER, Laplace, and CV are not statistically significant. Results below the dashed line are worse than Laplace with a significance level of 95%.

The second experiment tests the case of sparse data and low noise:

  Signal: 10 8 6 4 2    Noise: 0.1 (x10)    N = 10

The results over 60 replications are shown at right [bar chart over the same seven estimators]. BIC and ER, which are derived from large-N approximations, do poorly. Cross-validation also fails, because it doesn't have enough data to work with.

The third experiment tests the case of high noise dimensionality:

  Signal: 10 8 6 4 2    Noise: 0.25 (x95)    N = 60

The ER algorithm was not run in this case because of its excessive computation time for large d [bar chart over the remaining six estimators].

The final experiment tests the robustness to having a non-Gaussian data distribution within the subspace. We start with four sound fragments of 100 samples each. To make things especially non-Gaussian, the values in the third fragment are squared and the values in the fourth fragment are cubed. All fragments are standardized to zero mean and unit variance. Gaussian noise in 20 dimensions is added to get:

  Signal: 4 sounds    Noise: 0.5 (x20)    N = 100

The results over 60 replications of the noise (the signals were constant) are reported at right [bar chart over the seven estimators].

5 Discussion

Bayesian model selection has been shown to provide excellent performance when the assumed model is correct or partially correct. The evaluation criterion was the number of times the correct dimensionality was chosen. It would also be useful to evaluate the trained model with respect to its performance on new data within an applied setting. In this case, Bayesian model averaging is more appropriate, and it is conceivable that a method like ARD, which encompasses a soft blend between different dimensionalities, might perform better by this criterion than selecting one dimensionality.

It is important to remember that these estimators are for density estimation, i.e. accurate representation of the data, and are not necessarily appropriate for other purposes like reducing computation or extracting salient features. For example, on a database of 301 face images the Laplace evidence picked 120 dimensions, which is far more than one would use for feature extraction. (This result also suggests that probabilistic PCA is not a good generative model for face images.)

References

[1] C. Bishop. Bayesian PCA. In Neural Information Processing Systems 11, pages 382-388, 1998.
[2] C. Bregler and S. M. Omohundro. Surface learning with applications to lipreading. In NIPS, pages 43-50, 1994.
[3] R. Everson and S. Roberts. Inferring the eigenvalues of covariance matrices from limited, noisy data. IEEE Trans Signal Processing, 48(7):2083-2091, 2000. http://www.robots.ox.ac.uk/~sjrob/Pubs/spectrum.ps.gz
[4] K. Fukunaga and D. Olsen. An algorithm for finding intrinsic dimensionality of data. IEEE Trans Computers, 20(2):176-183, 1971.
[5] Z. Ghahramani and M. Beal. Variational inference for Bayesian mixtures of factor analysers. In Neural Information Processing Systems 12, 1999.
[6] Z. Ghahramani and G. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996. http://www.gatsby.ucl.ac.uk/~zoubin/papers.html
[7] A. James. Normal multivariate analysis and the orthogonal group. Annals of Mathematical Statistics, 25(1):40-75, 1954.
[8] R. E. Kass and A. E. Raftery. Bayes factors and model uncertainty. Technical Report 254, University of Washington, 1993. http://www.stat.washington.edu/tech.reports/tr254.ps
[9] D. J. C. MacKay. Probable networks and plausible predictions -- a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems, 6:469-505, 1995. http://wol.ra.phy.cam.ac.uk/mackay/abstracts/network.html
[10] T. Minka. Automatic choice of dimensionality for PCA. Technical Report 514, MIT Media Lab Vision and Modeling Group, 1999. ftp://whitechapel.media.mit.edu/pub/tech-reports/TR-514-ABSTRACT.html
[11] B. Moghaddam, T. Jebara, and A. Pentland. Bayesian modeling of facial similarity. In Neural Information Processing Systems 11, pages 910-916, 1998.
[12] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Trans Pattern Analysis and Machine Intelligence, 19(7):696-710, 1997.
[13] J. J. Rajan and P. J. W. Rayner. Model order selection for the singular value decomposition and the discrete Karhunen-Loeve transform using a Bayesian approach. IEE Vision, Image and Signal Processing, 144(2):116-123, 1997.
[14] M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443-482, 1999. http://citeseer.nj.nec.com/362314.html
[15] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. J Royal Statistical Society B, 61(3), 1999.
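Pulling together equations (20), (27) and (30)-(32) above, the recommended estimator reduces to a short function of the eigenvalue spectrum. The sketch below is our own reading of those formulas (not the implementation behind [10]); it approximates l̂_i by λ_i for small α, and assumes the eigenvalues are sorted in decreasing order with λ_k > v̂ and 0 < k < min(N, d).

```python
import numpy as np
from math import lgamma, log, pi

def log_evidence(lam, N, k, method="laplace"):
    """Approximate log p(D | k) from the eigenvalues lam of S/N,
    via the Laplace formula (30)-(31) or BIC (32)."""
    lam = np.asarray(lam, dtype=float)
    d = len(lam)
    v = lam[k:].sum() / (d - k)                     # eq. 31
    m = d * k - k * (k + 1) // 2                    # Stiefel manifold dimension
    logp = -(N / 2) * np.log(lam[:k]).sum() - (N * (d - k) / 2) * log(v)
    if method == "bic":
        return logp - ((m + k) / 2) * log(N)        # eq. 32
    # log p(U), eq. 20
    log_pU = -k * log(2) + sum(lgamma((d - i + 1) / 2) - (d - i + 1) / 2 * log(pi)
                               for i in range(1, k + 1))
    # log |A_Z|, eq. 27, with lam_hat = (lam_1, ..., lam_k, v, ..., v)
    lam_hat = np.concatenate([lam[:k], np.full(d - k, v)])
    log_Az = sum(log((1 / lam_hat[j] - 1 / lam_hat[i]) * (lam[i] - lam[j]) * N)
                 for i in range(k) for j in range(i + 1, d))
    return (log_pU + logp + ((m + k) / 2) * log(2 * pi)
            - 0.5 * log_Az - (k / 2) * log(N))      # eq. 30

# Choose the dimensionality with maximal evidence, e.g.:
# k_hat = max(range(1, d), key=lambda k: log_evidence(lam, N, k))
```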
Algorithmic Stability and Generalization Performance

Olivier Bousquet
CMAP, Ecole Polytechnique
F-91128 Palaiseau cedex, FRANCE
[email protected]

Andre Elisseeff*
Barnhill Technologies
6709 Waters Avenue
Savannah, GA 31406, USA
[email protected]

*This work was done while the author was at Laboratoire ERIC, Universite Lumiere Lyon 2, 5 avenue Pierre Mendes-France, F-69676 Bron cedex, FRANCE.

Abstract

We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorithms, explicitly using their stability properties. A stable learner is one for which the learned solution does not change much with small changes in the training set. The bounds we obtain do not depend on any measure of the complexity of the hypothesis space (e.g. VC dimension) but rather depend on how the learning algorithm searches this space, and can thus be applied even when the VC dimension is infinite. We demonstrate that regularization networks possess the required stability property and apply our method to obtain new bounds on their generalization performance.

1 Introduction

A key issue in computational learning theory is to bound the generalization error of learning algorithms. Until recently, most of the research in that area has focused on uniform a-priori bounds giving a guarantee that the difference between the training error and the test error is uniformly small for any hypothesis in a given class. These bounds are usually expressed in terms of combinatorial quantities such as VC-dimension. In the last few years, researchers have tried to use more refined quantities to either estimate the complexity of the search space (e.g. covering numbers [1]) or to use a posteriori information about the solution found by the algorithm (e.g. margin [11]). There exist other approaches such as observed VC dimension [12], but all are concerned with structural properties of the learning systems. In this paper we present a novel way of obtaining PAC bounds for specific algorithms explicitly using their stability properties. The notion of stability, introduced by Devroye and Wagner [4] in the context of classification for the analysis of the leave-one-out error and further refined by Kearns and Ron [8], is used here in the context of regression in order to get bounds on the empirical error rather than the leave-one-out error. This method has the nice advantage of providing bounds that do
from an unknown distribution D. A learning algorithm is a function L from into 'lJx mapping a learning set S onto a function Is from X to 'lJ. To avoid complex notations, we consider only deterministic algorithms. It is also assumed that the algorithm A is symmetric with respect to S, i. e. for any permutation over the elements of S, Is yields the same result. Furthermore, we assume that all functions are measurable and all sets are countable which does not limit the interest of the results presented here. zm The empirical error of a function I measured on the training set Sis: 1 Rm(f) =- m m L c(f, Zi) i=l c: 'lJx X X x 'lJ -t 1R+ being a cost function. The risk or generalization error can be written as: R(f) = Ez~D [c(f,z)] The study we describe here intends to bound the difference between empirical and generalization error for specific algorithms. More precisely, our goal is to bound for any E > 0, the term (1) Usually, learning algorithms cannot output just any function in 'lJx but rather pick a function Is in a set :r S;; 'lJX representing the structure or the architecture or the model. Classical VC theory deals with structural properties and aims at bounding the following quantity: PS~Dm [sup IRm(f) - R(f)1 > fE'3' EJ (2) This applies to any algorithm using :r as a hypothesis space and a bound on this quantity directly implies a similar bound on (1). However, classical bounds require the VC dimension of :r to be finite and do not use information about algorithmic properties. For a set :r, there exists many ways to search it which may yield different performance. For instance, multilayer perceptrons can be learned by a simple backpropagation algorithm or combined with a weight decay procedure. The outcome of the algorithm belongs in both cases to the same set of functions, although their performance can be different. VC theory was initially motivated by empirical risk minimization (ERM) in which case the uniform bounds on the quantity (2) give tight error bounds. Intuitively, the empirical risk minimization principle relies on a uniform law of large numbers. Because it is not known in advance, what will be the minimum of the empirical risk, it is necessary to study the difference between empirical and generalization error for all possible functions in 9". If, now, we do not consider this minimum, but instead, we focus on the outcome of a learning algorithm A, we may then know a little bit more what kind of functions will be obtained. This limits the possibilities and restricts the supremum over all the functions in 9" to the possible outcomes of the algorithm. An algorithm which always outputs the null function does not need to be studied by a uniform law of large numbers. Let's introduce a notation for modified training sets: if S denotes the initial training set, S = {Zl, ... ,Zi-l,Zi,Zi+l, ... ,zm}, then Si denotes the training set after Zi has been replaced by a different training example z~, that is Si = {Zl, ... , Zi-l, z~, Zi+1, . .. , zm}. Now, we define a notion of stability for regression. Definition 1 (UniforIll stability) Let S = {Zl, ... ,zm} be a training set, Si = S\Zi be the training set where instance i has been removed and A a symmetric algorithm. 
We say that A is β-stable if the following holds:

  ∀S ∈ Z^m, ∀z'_i, z ∈ Z:  |c(f_S, z) − c(f_{S^i}, z)| ≤ β.   (3)

This condition expresses that, for any possible training set S and any replacement example z'_i, the difference in cost (measured on any instance in Z) incurred by the learning algorithm when training on S and on S^i is smaller than some constant β.

3 Main result

A stable algorithm, i.e. one that is β-stable with a small β, has the property that replacing one element of its learning set does not change its outcome much. As a consequence, the empirical error, thought of as a random variable, should have a small variance. Stable algorithms are thus good candidates for their empirical error to be close to their generalization error. This assertion is formalized in the following theorem:

Theorem 2 Let A be a β-stable algorithm such that c(f_S, z) ≤ M for all z ∈ Z and all learning sets S. Then for all ε > 0 and all m ≥ 1,

  P_{S~D^m}[ |R_m(f_S) − R(f_S)| > ε ] ≤ (M² + 8Mmβ) / (mε²)   (4)

and

  P_{S~D^m}[ R(f_S) − R_m(f_S) > ε + β ] ≤ exp( −mε² / (2(mβ + M)²) ).   (5)

Notice that this theorem gives tight bounds when the stability β is of the order of 1/m. It will be proved in the next section that regularization networks satisfy this requirement.

In order to prove Theorem 2, one has to study the random variable X = R(f_S) − R_m(f_S), which can be done using two different approaches. The first one (corresponding to the exponential inequality) uses a classical martingale inequality and is detailed below. The second one is a bit more technical and requires standard proof techniques such as symmetrization; here we only briefly sketch it and refer the reader to [5] for more details.

Proof of inequality (5). We use the following theorem:

Theorem 3 (McDiarmid [9]) Let Y_1, ..., Y_n be n i.i.d. random variables taking values in a set A, and assume that F: A^n → R satisfies, for 1 ≤ i ≤ n,

  sup |F(y_1, ..., y_n) − F(y_1, ..., y_{i−1}, y'_i, y_{i+1}, ..., y_n)| ≤ c_i.

Then

  P[ |F(Y_1, ..., Y_n) − E[F]| ≥ ε ] ≤ 2 exp( −2ε² / Σ_{i=1}^n c_i² ).

In order to apply Theorem 3, we first have to bound the expectation of X. We begin with a useful lemma:

Lemma 1 For any symmetric learning algorithm we have, for all 1 ≤ i ≤ m,

  E_{S~D^m}[ R(f_S) − R_m(f_S) ] = E_{S,z'_i~D^{m+1}}[ c(f_S, z'_i) − c(f_{S^i}, z'_i) ].

Proof: Notice that

  E_{S~D^m}[ R_m(f_S) ] = (1/m) Σ_{j=1}^m E_{S~D^m}[ c(f_S, z_j) ] = E_{S~D^m}[ c(f_S, z_i) ]  for all i ∈ {1, ..., m}

by symmetry and the i.i.d. assumption. Now, by simple renaming of z_i as z'_i we get

  E_{S~D^m}[ R_m(f_S) ] = E_{S^i~D^m}[ c(f_{S^i}, z'_i) ] = E_{S,z'_i~D^{m+1}}[ c(f_{S^i}, z'_i) ],

which, together with the observation that E_{S~D^m}[ R(f_S) ] = E_{S,z'_i~D^{m+1}}[ c(f_S, z'_i) ], concludes the proof. □

Using the above lemma and the fact that A is β-stable, we easily get

  E_{S~D^m}[ R(f_S) − R_m(f_S) ] ≤ E_{S,z'_i~D^{m+1}}[ β ] = β.

We now have to compute the constants c_i appearing in Theorem 3. We have

  |R(f_S) − R(f_{S^i})| ≤ E_{z~D}[ |c(f_S, z) − c(f_{S^i}, z)| ] ≤ β

and

  |R_m(f_S) − R_m(f_{S^i})| ≤ (1/m) Σ_{j≠i} |c(f_S, z_j) − c(f_{S^i}, z_j)| + (1/m) |c(f_S, z_i) − c(f_{S^i}, z'_i)| ≤ β + 2M/m,

so that c_i ≤ 2β + 2M/m. Theorem 3 applied to R(f_S) − R_m(f_S) then gives inequality (5). □

Sketch of the proof of inequality (4). Recall Chebyshev's inequality:

  P( |X| ≥ ε ) ≤ E[X²] / ε²   (6)

for any random variable X. In order to apply this inequality, we have to bound E[X²]. This can be done with a reasoning similar to the one used for the expectation. The calculations are, however, more involved, and we do not reproduce them here; for details, see [5]. The result is the following:

  E[X²] ≤ M²/m + 8Mβ,

which with (6) gives inequality (4) and concludes the proof. □
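To make Theorem 2 concrete, here is a small sketch (our illustration, not part of the paper; the function name is hypothetical) that evaluates the right-hand sides of (4) and (5) for a given stability constant β, cost bound M, sample size m and deviation ε:

```python
import numpy as np

def stability_bounds(beta, M, m, eps):
    """Right-hand sides of (4) and (5) for a beta-stable algorithm
    whose cost is bounded by M, trained on m samples."""
    chebyshev = (M ** 2 + 8 * M * m * beta) / (m * eps ** 2)         # bound (4)
    exponential = np.exp(-m * eps ** 2 / (2 * (m * beta + M) ** 2))  # bound (5), on R - R_m > eps + beta
    return chebyshev, exponential

# beta = O(1/m) is the regime where the theorem is tight:
for m in [100, 10_000, 1_000_000]:
    print(m, stability_bounds(beta=1.0 / m, M=1.0, m=m, eps=0.1))
```

With β = c/m, the exponential bound decays like exp(−mε²/(2(c + M)²)), so the empirical error of a stable algorithm concentrates exponentially fast around its risk.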
4 Stability of Regularization Networks

4.1 Definitions

Regularization networks have been introduced in machine learning by Poggio and Girosi [10]. The relationship between these networks and Support Vector Machines, as well as their Bayesian interpretation, makes them very attractive. We consider a training set S = {(x_1, y_1), ..., (x_m, y_m)} with x_i ∈ R^d and y_i ∈ R, that is, we are in the regression setting. The regularization network technique consists in finding a function f: R^d → R in a space H which minimizes the following functional:

  C(f) = (1/m) Σ_{j=1}^m (f(x_j) − y_j)² + λ ||f||²_H,   (7)

where ||f||_H denotes the norm in the space H. In this framework, H is chosen to be a reproducing kernel Hilbert space (RKHS), which is basically a functional space endowed with a dot product.¹ An RKHS is defined by a kernel function, that is, a symmetric function k: R^d × R^d → R, which we will assume to be bounded by a constant K in what follows.² In particular, the following property will hold:

  ∀f ∈ H, ∀x:  |f(x)| ≤ √K ||f||_H.   (8)

¹ We do not detail here the properties of such a space and refer the reader to [2, 3] for additional details.
² Once again we do not give the full definition of appropriate kernel functions and refer the reader to [3].

4.2 Stability study

In this section, we show that regularization networks are, furthermore, stable as soon as λ is not too small.

Theorem 4 Assume (f(x) − y)² ≤ M for all (x, y) and all functions considered. Then the regularization network algorithm is β-stable with β = 4MK/(mλ), and, with probability at least 1 − δ over the training set,

  R(f_S) ≤ R_m(f_S) + 4MK/(mλ) + M (4K/λ + 1) √( 2 ln(2/δ) / m )   (9)

and

  R(f_S) ≤ R_m(f_S) + M √( (1 + 32K/λ) / (mδ) ).   (10)

Proof: Let f_S denote the minimizer of C, and let C^i denote the analogous functional for the training set S^i, with minimizer f_{S^i}. Let g denote the difference f_{S^i} − f_S. By simple algebra, we have for t ∈ [0, 1]

  C(f_S) − C(f_S + tg) = −(2t/m) Σ_{j=1}^m (f_S(x_j) − y_j) g(x_j) − 2tλ ⟨f_S, g⟩ − t² Λ(g),

where Λ(g), which we do not write explicitly, is the factor of t². Similarly we have

  C^i(f_{S^i}) − C^i(f_{S^i} − tg) = (2t/m) Σ_{j≠i} (f_{S^i}(x_j) − y_j) g(x_j) + (2t/m) (f_{S^i}(x'_i) − y'_i) g(x'_i) + 2tλ ⟨f_{S^i}, g⟩ − t² Λ^i(g).

By optimality, we have C(f_S) − C(f_S + tg) ≤ 0 and C^i(f_{S^i}) − C^i(f_{S^i} − tg) ≤ 0; thus, summing these inequalities, dividing by t/m and letting t → 0, we get

  2 Σ_{j≠i} g(x_j)² − 2 (f_S(x_i) − y_i) g(x_i) + 2 (f_{S^i}(x'_i) − y'_i) g(x'_i) + 2mλ ||g||²_H ≤ 0,

which gives

  mλ ||g||²_H ≤ (f_S(x_i) − y_i) g(x_i) − (f_{S^i}(x'_i) − y'_i) g(x'_i) ≤ 2 √(MK) ||g||_H,

using (8). We thus obtain

  ||f_{S^i} − f_S||_H ≤ 2 √(MK) / (mλ)   (11)

and also, for all x and y,

  |(f_S(x) − y)² − (f_{S^i}(x) − y)²| ≤ 2√M |f_S(x) − f_{S^i}(x)| ≤ 4MK / (mλ).

We thus proved that the minimization of C is a 4MK/(mλ)-stable procedure, which allows us to apply Theorem 2; inverting (5) and (4) with β = 4MK/(mλ) yields (9) and (10). □

4.3 Discussion

These inequalities are both of interest since the ranges in which they are tight differ. Indeed, (10) has a poor dependence on δ, which makes it deteriorate when high confidence is sought. On the other hand, (9) can give high-confidence bounds, but it becomes looser when λ is small. Moreover, results presented by Evgeniou et al. [6] indicate that the optimal dependence of λ on m is obtained for λ_m = O(ln ln m). If we plug this into the above bounds, we can notice that (9) does not converge as m → ∞. It may be conjectured that the poor estimation of the variance coming from the martingale method in McDiarmid's inequality is responsible for this effect, but a finer analysis is required to fully understand this phenomenon.

One of the interests of these results is to provide a means for choosing the parameter λ by minimizing the right-hand side of the inequality. Usually, λ is determined with a validation set: some of the data is not used during learning, and λ is chosen such that the error of f_S over the validation set is minimized. The drawback of this approach is that it reduces the amount of data available for learning.
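As an illustration of Theorem 4 (our sketch, not code from the paper), one can exploit the fact that the minimizer of (7) has the finite expansion f(x) = Σ_j α_j k(x_j, x) with α = (K + mλI)⁻¹ y, fit such a regularization network, replace one training example, and compare the observed change in cost with the β = 4MK/(mλ) guarantee. Here the Gaussian kernel gives K = 1; the target, noise level, probe points and the rough scale M ≈ 1 are arbitrary choices of ours:

```python
import numpy as np

def fit_regnet(X, y, lam, width=1.0):
    """Minimizer of (1/m) sum_j (f(x_j) - y_j)^2 + lam * ||f||_H^2, Gaussian RKHS."""
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * width ** 2))
    alpha = np.linalg.solve(K + len(y) * lam * np.eye(len(y)), y)
    return lambda t: np.exp(-(np.atleast_1d(t)[:, None] - X[None, :]) ** 2
                            / (2 * width ** 2)) @ alpha

rng = np.random.default_rng(0)
m, lam = 200, 0.1
X = rng.uniform(-1, 1, m)
y = np.sin(3 * X) + 0.1 * rng.normal(size=m)

f = fit_regnet(X, y, lam)
Xi, yi = X.copy(), y.copy()
Xi[0], yi[0] = rng.uniform(-1, 1), 0.0        # replace z_1 by a fresh example z'_1
fi = fit_regnet(Xi, yi, lam)

t = np.linspace(-1, 1, 1000)                   # probe points z = (t, sin(3t))
change = np.max(np.abs((f(t) - np.sin(3 * t)) ** 2 - (fi(t) - np.sin(3 * t)) ** 2))
print("observed cost change:", change, " vs bound 4MK/(m*lam):", 4 / (m * lam))
```

As expected from (11), the observed per-point change in cost stays well below the uniform bound.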
5 Conclusion and future work

We have presented a new approach for obtaining bounds on the generalization performance of learning algorithms which makes use of specific properties of these algorithms. The bounds we obtain do not depend on the complexity of the hypothesis class but on a measure of how stable the algorithm's output is with respect to changes in the training set. Although this work has focused on regression, we believe that it can be extended to classification, in particular by making the stability requirement less demanding (e.g. stability in average instead of uniform stability). Future work will also aim at finding other algorithms that are stable or can be appropriately modified to exhibit the stability property. Finally, a promising application of this work could be the model selection problem, where one has to tune parameters of the algorithm (e.g. λ and the kernel parameters for regularization networks). Instead of using cross-validation, one could measure how stability is influenced by the various parameters of interest and plug these measures into Theorem 2 to derive bounds on the generalization error.

Acknowledgments

We would like to thank G. Lugosi, S. Boucheron and O. Chapelle for interesting discussions on stability and concentration inequalities. Many thanks to A. Smola and to the anonymous reviewers who helped improve the readability.

References

[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence and learnability. Journal of the ACM, 44(4):615-631, 1997.
[2] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337-404, 1950.
[3] M. Atteia. Hilbertian Kernels and Spline Functions. Studies in Computational Mathematics 4. North-Holland, 1992.
[4] L. P. Devroye and T. J. Wagner. Distribution-free performance bounds for potential function rules. IEEE Transactions on Information Theory, 25(5):202-207, 1979.
[5] A. Elisseeff. A study about algorithmic stability and its relation to generalization performances. Technical report, Laboratoire ERIC, Universite Lyon 2, 2000.
[6] T. Evgeniou, M. Pontil, and T. Poggio. A unified framework for regularization networks and support vector machines. Technical Memo AIM-1654, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, December 1999.
[7] Y. Freund. Self bounding learning algorithms. In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT-98), pages 247-258, New York, July 24-26, 1998. ACM Press.
[8] M. Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation, 11(6):1427-1453, 1999.
[9] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, pages 148-188. Cambridge University Press, Cambridge, 1989.
[10] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978-982, 1990.
[11] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. A framework for structural risk minimization. In Proceedings of the 9th Annual Conference on Computational Learning Theory, pages 68-76. ACM Press, New York, NY, 1996.
[12] J. Shawe-Taylor and R. C. Williamson. Generalization performance of classifiers in terms of observed covering numbers. In P. Fischer and H. U. Simon, editors, Proceedings of the 4th European Conference on Computational Learning Theory (EuroCOLT-99), volume 1572 of LNAI, pages 274-284, Berlin, March 29-31, 1999. Springer.
Fast Training of Support Vector Classifiers

F. Perez-Cruz†, P. L. Alarcon-Diana†, A. Navia-Vazquez‡ and A. Artes-Rodriguez‡

† Dpto. Teoria de la Senal y Com., Escuela Politecnica, Universidad de Alcala. 28871 Alcala de Henares (Madrid), Spain. e-mail: [email protected]
‡ Dpto. Tecnologias de las Comunicaciones, Escuela Politecnica Superior, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganes (Madrid), Spain.

Abstract

In this communication we present a new algorithm for solving Support Vector Classifiers (SVC) with large training data sets. The new algorithm is based on an Iterative Re-Weighted Least Squares procedure which is used to optimize the SVC. Moreover, a novel sample selection strategy for the working set is presented, which randomly chooses the working set among the training samples that do not fulfill the stopping criteria. The validity of both proposals, the optimization procedure and the sample selection strategy, is shown by means of computer experiments using well-known data sets.

1 INTRODUCTION

The Support Vector Classifier (SVC) is a powerful tool to solve pattern recognition problems [13, 14], in such a way that the solution is completely described as a linear combination of several training samples, named the Support Vectors. The training procedure for solving the SVC is usually based on Quadratic Programming (QP), which presents some inherent limitations, mainly the computational complexity and memory requirements for large training data sets. This problem is typically avoided by dividing the QP problem into sets of smaller ones [6, 1, 7, 11], which are iteratively solved in order to reach the SVC solution for the whole set of training samples. These schemes rely on an optimizing engine, QP, and on the sample selection strategy for each sub-problem, in order to obtain a fast solution for the SVC.

An Iterative Re-Weighted Least Squares (IRWLS) procedure has already been proposed as an alternative solver for the SVC [10] and for the Support Vector Regressor [9], being computationally efficient in absolute terms. In this communication, we will show that the IRWLS algorithm can replace the QP one in any chunking scheme in order to find the SVC solution for large training data sets. Moreover, we consider that the strategy used to decide which training samples must join the working set is critical to reduce the total number of iterations needed to attain the SVC solution, and the runtime complexity as a consequence. To address this issue, the computer program svcradit has been developed so as to solve the SVC for large training data sets using the IRWLS procedure and fixed-size working sets.

The paper is organized as follows. In Section 2, we start by giving a summary of the IRWLS procedure for the SVC and explain how it can be incorporated into a chunking scheme to obtain an overall implementation which efficiently deals with large training data sets. We present in Section 3 a novel strategy to make up the working set. Section 4 shows the capabilities of the new implementation, compared with the fastest available SVC implementation, SVMlight [6]. We end with some concluding remarks.

2 IRWLS-SVC

In order to solve classification problems, the SVC has to minimize

  L_p = (1/2)||w||² + C Σ_i e_i − Σ_i μ_i e_i − Σ_i α_i ( y_i (φᵀ(x_i) w + b) − 1 + e_i )   (1)

with respect to w, b and e_i, and maximize it with respect to α_i and μ_i, subject to α_i, μ_i ≥ 0, where φ(·) is a nonlinear transformation (usually unknown) to a higher-dimensional space and C is a penalization factor.
The solution to (1) is defined by the Karush-Kuhn-Tucker (KKT) conditions [2]. For further details on the SVC, one can refer to the tutorial survey by Burges [2] and to the work of Vapnik [13, 14]. In order to obtain an IRWLS procedure we will first rearrange (1) in such a way that the terms depending on e_i can be removed, because at the solution C − α_i − μ_i = 0 for all i (one of the KKT conditions [2]) must hold:

  L_p = (1/2)||w||² + Σ_i α_i (1 − y_i (φᵀ(x_i) w + b)) = (1/2)||w||² + Σ_i a_i e_i²,   (2)

where

  e_i = y_i − (φᵀ(x_i) w + b)  and  a_i = α_i / (y_i e_i).

The weighted least squares nature of (2) can be understood if e_i is regarded as the error on each sample and a_i as its associated weight, (1/2)||w||² being a regularizing functional. The minimization of (2) cannot be accomplished in a single step because a_i = a_i(e_i), and we need to apply an IRWLS procedure [4], summarized in three steps:

1. Considering the a_i fixed, minimize (2).
2. Recalculate a_i from the solution of step 1.
3. Repeat until convergence.

In order to work with Reproducing Kernels in Hilbert Space (RKHS), as the QP procedure does, we require that w = Σ_i β_i y_i φ(x_i), and, in order to obtain a non-zero b, that Σ_i β_i y_i = 0. Substituting these into (2), its minimum with respect to β and b for a fixed set of a_i is found by solving the following linear equation system¹:

  [ H + D_a⁻¹   y ] [ β ]   [ 1 ]
  [ yᵀ          0 ] [ b ] = [ 0 ]   (3)

where

  y = [y_1, y_2, ..., y_n]ᵀ,   (4)
  β = [β_1, β_2, ..., β_n]ᵀ,   (5)
  (H)_ij = y_i y_j φᵀ(x_i) φ(x_j) = y_i y_j K(x_i, x_j),  i, j = 1, ..., n,   (6)
  (D_a)_ij = a_i δ[i − j],  i, j = 1, ..., n,   (7)

and δ[·] is the discrete impulse function. Finally, the dependency of a_i upon the Lagrange multipliers is eliminated using the KKT conditions, obtaining

  a_i = 0  if e_i y_i ≤ 0;   a_i = C / (e_i y_i)  if e_i y_i > 0.   (8)

¹ The detailed description of the steps needed to obtain (3) from (2) can be found in [10].

2.1 IRWLS ALGORITHMIC IMPLEMENTATION

The SVC solution with the IRWLS procedure can be simplified by dividing the training samples into three sets. The first set, S1, contains the training samples verifying 0 < β_i < C, which have to be determined by solving (3). The second one, S2, includes every training sample whose β_i = 0. And the last one, S3, is made up of the training samples whose β_i = C. This division into sets is fully justified in [10]. The IRWLS-SVC algorithm is shown in Table 1.

  0. Initialization: S1 contains every training sample, S2 = ∅ and S3 = ∅. Compute H. Set e_a = y, β_a = 0, b_a = 0, G_β = G_βin and G_b = G_bin.
  1. Solve, for (β)_{S1} and b, the system (3) restricted to S1, with right-hand side [ (1)_{S1} − (G_β)_{S1} ; −G_b ] and with (β)_{S3} = C.
  2. Update the errors: e = e_a − D_y H (β − β_a) − (b − b_a) 1.
  3. Recompute the weights (8) for all i ∈ S1 ∪ S2 ∪ S3: a_i = 0 if e_i y_i ≤ 0, a_i = C/(e_i y_i) otherwise.
  4. Sets reordering:
     a. Move every sample in S3 with e_i y_i < 0 to S2.
     b. Move every sample in S1 with β_i = C to S3.
     c. Move every sample in S1 with a_i = 0 to S2.
     d. Move every sample in S2 with a_i ≠ 0 to S1.
  5. Set e_a = e, β_a = β, b_a = b, G_β = (H)_{S1,S3} (β)_{S3} + (G_βin)_{S1} and G_b = −(y)ᵀ_{S3} (β)_{S3} + G_bin.
  6. Go to step 1 and repeat until convergence.

Table 1: IRWLS-SVC algorithm.
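For illustration, here is a self-contained sketch (ours, not the authors' code) that condenses Table 1: every sample with a_i > 0 is kept in S1, system (3) is solved restricted to that set, and clipping plus damping stand in for the explicit S2/S3 bookkeeping; the RBF kernel and all names are our own choices.

```python
import numpy as np

def irwls_svc(X, y, C=1.0, sigma=1.0, n_iter=30):
    """Simplified IRWLS-SVC: solve (3) on the active set, reweight, repeat."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))            # RBF kernel
    H = (y[:, None] * y[None, :]) * K             # (H)_ij = y_i y_j K(x_i, x_j)
    beta, b = np.zeros(n), 0.0
    for _ in range(n_iter):
        e = y - (K @ (beta * y) + b)              # e_i = y_i - (phi(x_i)^T w + b)
        ye = y * e                                # y_i e_i = 1 - y_i f(x_i)
        a = np.where(ye > 1e-8, C / np.maximum(ye, 1e-8), 0.0)   # weights (8)
        S1 = np.flatnonzero(a > 0)
        if S1.size == 0:
            break
        m = S1.size                               # system (3) restricted to S1
        A = np.zeros((m + 1, m + 1))
        A[:m, :m] = H[np.ix_(S1, S1)] + np.diag(1.0 / a[S1])
        A[:m, m] = y[S1]
        A[m, :m] = y[S1]
        sol = np.linalg.solve(A, np.append(np.ones(m), 0.0))
        new_beta = np.zeros(n)
        new_beta[S1] = np.clip(sol[:m], 0.0, C)
        beta, b = 0.5 * (beta + new_beta), 0.5 * (b + sol[m])    # damped update
    return beta, b
```

The resulting classifier is f(x) = Σ_i β_i y_i K(x_i, x) + b.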
The IRWLS-SVC procedure has to be slightly modified in order to be used inside a chunking scheme such as the one proposed in [8, 6], so that it can be directly applied to the one proposed in [1]. A chunking scheme is needed to solve the SVC whenever H is too large to fit into memory. In those cases, several SVCs with a reduced set of training samples are iteratively solved until the solution for the whole set is found. The samples are divided into a working set, Sw, which is solved as a full SVC problem, and an inactive set, Sin.

If there are support vectors in the inactive set, as there may well be, the inactive set modifies the IRWLS-SVC procedure, adding a contribution to the independent term of the linear equation system (3). Those support vectors in Sin can be seen as anchored samples in S3, because their β_i is not zero and cannot be modified by the IRWLS procedure. Such contributions (G_βin and G_bin) are calculated in the same way as G_β and G_b (Table 1, step 5), before calling the IRWLS-SVC algorithm. We have already modified the IRWLS-SVC in Table 1 to account for G_βin and G_bin, which must be set to zero if the Hessian matrix H fits into memory for the whole set of training samples.

The resolution of the SVC for large training data sets, employing the IRWLS procedure as the minimization engine, is summarized in the following steps:

1. Select the samples that will form the working set.
2. Construct G_βin = (H)_{Sw,Sin} (β)_{Sin} and G_bin = −(y)ᵀ_{Sin} (β)_{Sin}.
3. Solve the IRWLS-SVC procedure, following the steps in Table 1.
4. Compute the error of every training sample.
5. If the stopping conditions

     y_i e_i < ε    for all i with β_i = 0,   (9)
     y_i e_i > −ε   for all i with β_i = C,   (10)
     |y_i e_i| < ε  for all i with 0 < β_i < C,   (11)

   are fulfilled, the SVC solution has been reached.

The stopping conditions are the ones proposed in [6], and ε must be a small value, around 10⁻³; a full discussion of this topic can be found in [6].
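A direct transcription of the stopping test (our sketch; the name is hypothetical):

```python
import numpy as np

def stopping_conditions_hold(beta, y, e, C, eps=1e-3):
    """Check conditions (9)-(11) over all training samples."""
    ye = y * e
    ok_zero   = np.all(ye[beta <= 0] < eps)          # (9):  beta_i = 0
    ok_bound  = np.all(ye[beta >= C] > -eps)         # (10): beta_i = C
    inside = (beta > 0) & (beta < C)
    ok_margin = np.all(np.abs(ye[inside]) < eps)     # (11): 0 < beta_i < C
    return ok_zero and ok_bound and ok_margin
```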
3 SAMPLE SELECTION STRATEGY

The selection of the training samples that will constitute the working set in each iteration is the most critical decision in any chunking scheme, because this decision directly determines the number of IRWLS-SVC (or QP-SVC) procedures to be called and the number of reproducing kernel evaluations to be made, which are, by far, the two most time-consuming operations in any chunking scheme. In order to solve the SVC efficiently, we first need to define a candidate set of training samples from which the working set is formed in each iteration. The candidate set is made up, as it could not be otherwise, of all the training samples that violate the stopping conditions (9)-(11); we also add all those training samples that satisfy condition (11) but for which a small variation of their error would make them violate it.

The strategies to select the working set are as numerous as the problems to be solved, but one can think of three simple ones:

- Select those samples which do not fulfill the stopping criteria and present the largest |e_i| values.
- Select those samples which do not fulfill the stopping criteria and present the smallest |e_i| values.
- Select them randomly from the ones that do not fulfill the stopping conditions.

The first strategy seems the most natural one, and it was proposed in [6]. If the samples with largest |e_i| are selected, we guarantee that the attained solution gives the greatest step towards the solution of (1). But if the step is too large, which usually happens, it will cause the solution in each iteration, and the β_i values, to oscillate around their optimal values. The magnitude of this effect is directly proportional to the values of C and q (the size of the working set), so in the case of small C (C < 10) and low q (q < 20) it would be less noticeable.

The second strategy is the most conservative one, because we move towards the solution of (1) with small steps. Its drawback is readily discerned if the starting point is inappropriate: many iterations are then needed to reach the SVC solution.

The last strategy, which has been implemented together with the IRWLS-SVC procedure, is a mid-point between the other two. However, if the number of samples with 0 < β_i < C grows above q, there might be some iterations in which we make no progress (the working set being made up only of training samples that fulfill the stopping condition (11)). This situation is easily avoided by introducing, per class, one sample that violates each of the stopping conditions. Finally, if the cardinality of the candidate set is less than q, the working set is completed with samples that fulfill the stopping criteria and present the smallest |e_i|. In summary, the proposed sample selection strategy is², as sketched in the code below:

1. Construct the candidate set S_c with the samples that do not fulfill the stopping conditions (9) and (10), and the samples whose β obeys 0 < β_i < C.
2. If |S_c| < q, go to step 5.
3. Choose a sample per class that violates each of the stopping conditions and move them from S_c to the working set S_w.
4. Choose randomly q − |S_w| samples from S_c and move them to S_w. Go to step 6.
5. Move every sample from S_c to S_w, and complete S_w with the q − |S_w| samples that fulfill the stopping conditions (9) and (10) and present the smallest |e_i| values.
6. Go on, obtaining G_βin and G_bin.

² In what follows, |·| represents the absolute value for numbers and the cardinality for sets.
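A minimal sketch of this selection rule (ours; for brevity it folds the one-violator-per-class safeguard of step 3 into the general candidate set):

```python
import numpy as np

def select_working_set(beta, y, e, C, q, eps=1e-3, rng=None):
    """Randomly draw q indices for S_w from the candidate set S_c."""
    if rng is None:
        rng = np.random.default_rng()
    ye = y * e
    # candidates: violators of (9)/(10), plus every sample with 0 < beta_i < C
    cand = ((beta <= 0) & (ye >= eps)) | ((beta >= C) & (ye <= -eps)) \
           | ((beta > 0) & (beta < C))
    idx = np.flatnonzero(cand)
    if idx.size >= q:
        return rng.choice(idx, size=q, replace=False)
    rest = np.flatnonzero(~cand)                 # complete with the smallest |y_i e_i|
    extra = rest[np.argsort(np.abs(ye[rest]))[: q - idx.size]]
    return np.concatenate([idx, extra])
```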
4 BENCHMARK FOR THE IRWLS-SVC

We have prepared two different experiments to test both the IRWLS procedure and the sample selection strategy for solving the SVC. The first one compares the IRWLS against QP, and the second compares the sample selection strategy, together with the IRWLS, against a complete SVC solving procedure, SVMlight.

In the first trial, we have replaced the LOQO interior point optimizer used by SVMlight version 3.02 [5] with the IRWLS-SVC procedure of Table 1, in order to compare both optimizing engines under an identical sample selection strategy. The comparison has been made on a Pentium III 450 MHz with 128 MB running Windows 98, and the programs have been compiled using Microsoft Developer 6.0. In Table 2, we show the results for two data sets: the first one, containing 4781 training samples, spends most of its CPU resources computing the RKHS, and the second one, containing 2175 training samples, spends most of its CPU resources solving the SVC for each Sw; q indicates the size of the working set. The value of C has been set to 1 and 1000, respectively, and a Radial Basis Function (RBF) RKHS [2] has been employed, with its parameter σ set, respectively, to 10 and 70. As can be seen, SVMlight with IRWLS is significantly faster than with the LOQO procedure in all cases. The kernel cache size has been set to 64 MB for both data sets and both procedures. The results in Table 2 validate the IRWLS procedure as the fastest SVC solver.

              Adult 4781                        Splice 2175
        CPU time         Optimize time     CPU time         Optimize time
  q     LOQO    IRWLS    LOQO    IRWLS     LOQO    IRWLS    LOQO    IRWLS
  20    21.25   20.70    0.61    0.39      46.19   30.76    21.94   4.77
  40    20.60   19.22    1.01    0.17      71.34   24.93    46.26   8.07
  70    21.15   18.72    2.30    0.46      53.77   20.32    34.24   7.72

Table 2: CPU Time indicates the time consumed, in seconds, by the whole procedure. Optimize Time indicates the time consumed, in seconds, by the LOQO or IRWLS procedure.

For the second trial, we have compiled a computer program that uses the IRWLS-SVC procedure and the working set selection of Section 3; we will refer to it as svcradit from now on. We have borrowed the chunking and shrinking ideas from SVMlight [6] for our computer program. Several data sets have been used to test these two programs. The Adult and Web data sets have been obtained from J. Platt's web page (http://research.microsoft.com/~jplatt/smo.html); the Gauss-M data set is a two-dimensional classification problem proposed in [3] to test neural networks, comprising one Gaussian random variable per class, with high overlap between the classes. The Banana, Diabetes and Splice data sets have been obtained from Gunnar Ratsch's web page (http://svm.first.gmd.de/~raetsch). The selection of C and the RKHS has been done as indicated in [11] for the Adult and Web data sets, and as in the latter web page for Banana, Diabetes and Splice. In Table 3, we show the runtime complexity for each data set, where the value of q has been chosen as the one that minimizes the runtime.

  Database   Dim   N Sampl.   C       σ    SV      q radit   q light   CPU radit   CPU light
  Adult6     123   11221      1       10   4477    150       40        118.20      124.46
  Adult9     123   32562      1       10   12181   130       70        1093.29     1097.09
  Adult1     123   1605       1000    10   630     100       10        25.98       113.54
  Web1       300   2477       5       10   224     100       10        2.42        2.36
  Web7       300   24693      5       10   1444    150       10        158.13      124.57
  Gauss-M    2     4000       1       1    1736    70        10        12.69       48.28
  Gauss-M    2     4000       100     1    1516    100       10        61.68       3053.20
  Banana     2     400        316.2   1    80      40        70        0.33        0.77
  Banana     2     4900       316.2   1    1084    70        40        22.46       1786.56
  Diabetes   8     768        10      2    409     40        10        2.41        6.04
  Splice     69    2175       1000    70   525     150       20        14.06       49.19

Table 3: Runtime complexity for several data sets, when solved with svcradit (radit for short) and SVMlight (light for short).

One can appreciate that svcradit is faster than SVMlight for most data sets. For the Web data set, the only one on which SVMlight is slightly faster, the value of C is low and most training samples end up as support vectors with β_i < C. In such cases the best strategy is to take the largest step towards the solution in every iteration, as SVMlight does [6], because most β_i values will not be affected by the β_j values of the other training samples. But as the value of C increases, the svcradit sample selection strategy becomes much more appropriate than the one used in SVMlight.

5 CONCLUSIONS

In this communication a new algorithm for solving the SVC for large training data sets has been presented. Its two major contributions concern the optimizing engine and the sample selection strategy. An IRWLS procedure is used to solve the SVC in each step, which is much faster than the usual QP procedure and simpler to implement, because the most involved step is the solution of a linear equation system, easily obtained by means of LU decomposition [12]. The random working-set selection from the samples not fulfilling the KKT conditions is the best option if the working set is large, because it reduces the number of chunks to be solved. This strategy benefits from the IRWLS procedure, which allows working with large training data sets. All these modifications have been brought together in the svcradit solving procedure, publicly available at http://svm.tsc.uc3m.es/.

6 ACKNOWLEDGEMENTS

We are sincerely grateful to Thorsten Joachims, who has allowed and encouraged us to use his SVMlight to test our IRWLS procedure; the comparisons could not have been properly done otherwise.

References

[1] B. E. Boser, I. M. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In 5th Annual Workshop on Computational Learning Theory, Pittsburgh, U.S.A., 1992.
[2] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[3] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, 1994.
[4] P. W. Holland and R. E. Welch. Robust regression using iterative re-weighted least squares. Communications of Statistics Theory Methods, A6(9):813-827, 1977.
[5] T. Joachims. http://www-ai.informatik.uni-dortmund.de/forschung/verfahren/svmlight/svmlight.eng.html. Technical report, University of Dortmund, Informatik, AI-Unit Collaborative Research Center on 'Complexity Reduction in Multivariate Data', 1998.
[6] T. Joachims. Making large scale SVM learning practical. In B. Scholkopf, C. J. C. Burges and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169-184. M.I.T. Press, 1999.
[7] E. Osuna, R. Freund, and F. Girosi. An improved training algorithm for support vector machines. In Proc. of the 1997 IEEE Workshop on Neural Networks for Signal Processing, pages 276-285, Amelia Island, U.S.A., 1997.
[8] E. Osuna and F. Girosi. Reducing the run-time complexity of support vector machines. In ICPR'98, Brisbane, Australia, August 1998.
[9] F. Perez-Cruz, A. Navia-Vazquez, P. L. Alarcon-Diana, and A. Artes-Rodriguez. An IRWLS procedure for SVR. In Proceedings of the EUSIPCO'00, Tampere, Finland, September 2000.
[10] F. Perez-Cruz, A. Navia-Vazquez, J. L. Rojo-Alvarez, and A. Artes-Rodriguez. A new training algorithm for support vector machines. In Proceedings of the Fifth Bayona Workshop on Emerging Technologies in Telecommunications, volume 1, pages 116-120, Baiona, Spain, September 1999.
[11] J. C. Platt. Sequential minimal optimization: a fast algorithm for training support vector machines. In B. Scholkopf, C. J. C. Burges and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208. M.I.T. Press, 1999.
[12] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, Cambridge, UK, 2nd edition, 1994.
[13] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[14] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
Gaussianization

Scott Shaobing Chen
Renaissance Technologies
East Setauket, NY 11733
[email protected]

Ramesh A. Gopinath
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]

Abstract

High dimensional data modeling is difficult mainly because of the so-called "curse of dimensionality". We propose a technique called "Gaussianization" for high dimensional density estimation, which alleviates the curse of dimensionality by exploiting the independence structures in the data. Gaussianization is motivated by recent developments in the statistics literature: projection pursuit, independent component analysis and Gaussian mixture models with semi-tied covariances. We propose an iterative Gaussianization procedure which converges weakly: at each iteration, the data is first transformed to the least dependent coordinates and then each coordinate is marginally Gaussianized by univariate techniques. Gaussianization offers density estimation sharper than traditional kernel methods and radial basis function methods. Gaussianization can be viewed as an efficient solution of nonlinear independent component analysis and high dimensional projection pursuit.

1 Introduction

Density estimation is a fundamental problem in statistics. In the statistics literature, the univariate problem is well understood and well studied: techniques such as (variable) kernel methods, radial basis function methods, Gaussian mixture models, etc., can be applied successfully to obtain univariate density estimates. However, the high dimensional problem is very challenging, mainly due to the so-called "curse of dimensionality". In high dimensional space, data samples are often sparsely distributed: either very large neighborhoods are required to achieve sufficient counts, or the number of samples has to grow exponentially with the dimension in order to achieve sufficient coverage of the sampling space. As a result, direct extension of univariate techniques can be highly biased, because these techniques are neighborhood-based.

In this paper, we attempt to overcome the curse of dimensionality by exploiting independence structures in the data. We advocate the notion that

  independence lifts the curse of dimensionality!

Indeed, if the dimensions are independent, then there is no curse of dimensionality, since the high dimensional problem can be reduced to univariate problems along each dimension. For natural data sets which do not have independent dimensions, we would like to construct transforms such that, after the transformation, the dimensions become independent. We propose a technique called "Gaussianization" which finds and exploits independence structures in the data for high dimensional density estimation. For a random variable X ∈ R^D, we define its Gaussianization transform to be an invertible and differentiable transform T(X) such that the transformed variable follows the standard Gaussian distribution:

  T(X) ~ N(0, I).

It is clear that density estimates can be derived from Gaussianization transforms. We propose an iterative procedure which converges weakly in probability: at each iteration, the data is first transformed to the least dependent coordinates, and then each coordinate is marginally Gaussianized by univariate techniques based on univariate density estimation. At each iteration, the coordinates become less dependent in terms of the mutual information, and the transformed data samples become more Gaussian in terms of the Kullback-Leibler divergence.
In fact, the convergence result still holds as long as, at each iteration, the data is linearly transformed to less dependent coordinates. Our convergence proof of Gaussianization is closely related to Huber's convergence proof of projection pursuit [4]. Algorithmically, each Gaussianization iteration amounts to performing linear independent component analysis. Since the assumption of linear independent component analysis may not be valid, the resulting linear transform does not necessarily make the coordinates independent; however, it does make the coordinates as independent as possible. Therefore the engine of our algorithm is linear independent component analysis. We propose an efficient EM algorithm which jointly estimates the linear transform and the marginal univariate Gaussianization transform at each iteration. Our parametrization is identical to the independent factor analysis proposed by Attias (1999) [1]. However, we apply the variational method in the M-step, as in the semi-tied covariance algorithm proposed for Gaussian mixture models by Gales (1999) [3].

2 Existence of Gaussianization

We first show the existence of Gaussianization transforms. Denote by φ(·) the probability density function of the standard normal N(0, I); denote by φ(·; μ, Σ) the probability density function of N(μ, Σ); denote by Φ(·) the cumulative distribution function (CDF) of the standard normal.

2.1 Univariate Gaussianization

Univariate Gaussianization exists uniquely and can be derived from univariate density estimation. Let X ∈ R¹ be a univariate variable. We assume that the density function of X is strictly positive and differentiable. Let F(·) be the cumulative distribution function of X. Then T(·) is a Gaussianization transform if and only if it satisfies the following differential equation:

  p(x) = φ(T(x)) |∂T/∂x|.

It can easily be verified that this equation has only two solutions:

  T(X) = ±Φ⁻¹(F(X)) ~ N(0, 1).   (1)

In practice, the CDF F(·) is not available; it has to be estimated from the training data. We choose to approximate it by Gaussian mixture models: p(x) = Σ_{i=1}^I π_i φ(x; μ_i, σ_i²); equivalently, we assume the CDF F(x) = Σ_{i=1}^I π_i Φ((x − μ_i)/σ_i), where the parameters {π_i, μ_i, σ_i} can be estimated via maximum likelihood using the standard EM algorithm. Therefore we can parametrize the Gaussianization transform as

  T(x) = Φ⁻¹( Σ_{i=1}^I π_i Φ((x − μ_i)/σ_i) ).   (2)

In practice there is an issue of model selection: we suggest using model selection techniques such as the Bayesian information criterion [6] to determine the number of Gaussians I. Throughout the paper, we shall assume that univariate density estimation and univariate Gaussianization can be solved by univariate Gaussian mixture models.
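A direct transcription of transform (2) (our sketch; in practice the mixture parameters {π_i, μ_i, σ_i} would first be fitted by EM as described above):

```python
import numpy as np
from scipy.stats import norm

def univariate_gaussianization(x, pis, mus, sigmas):
    """Transform (2): T(x) = Phi^{-1}( sum_i pi_i * Phi((x - mu_i) / sigma_i) )."""
    x = np.asarray(x, dtype=float)
    F = sum(p * norm.cdf((x - m) / s) for p, m, s in zip(pis, mus, sigmas))
    F = np.clip(F, 1e-12, 1.0 - 1e-12)     # keep Phi^{-1} finite in the tails
    return norm.ppf(F)

# bimodal data is mapped to (approximately) N(0, 1):
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])
t = univariate_gaussianization(x, pis=[0.5, 0.5], mus=[-2.0, 3.0], sigmas=[0.5, 1.0])
print(t.mean(), t.std())                    # close to 0 and 1
```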
2.2 High Dimensional Gaussianization

The existence of high dimensional Gaussianization, however, is non-trivial. We present here a theoretical construction. For simplicity, we consider the two dimensional case. Let X = (X_1, X_2)ᵀ be the random variable. Gaussianization can be achieved in two steps. We first marginally Gaussianize the first coordinate X_1 and leave the second coordinate X_2 unchanged; the transformed variable will have density

  p(x_1, x_2) = p(x_1) p(x_2|x_1) = φ(x_1) p(x_2|x_1).

We then marginally Gaussianize each conditional density p(·|x_1) for each x_1. Notice that this marginal Gaussianization is different for each x_1:

  T_{x_1}(x_2) = Φ⁻¹( F(x_2|x_1) ).

Once all the conditional densities are marginally Gaussianized, we achieve joint Gaussianization:

  p(x_1, x_2) = p(x_1) p(x_2|x_1) = φ(x_1) φ(x_2).

The existence of high dimensional Gaussianization in general can be proved by a similar construction. The above construction, however, is not practical, since the marginal Gaussianization of the conditional densities p(X_2 = x_2 | X_1 = x_1) requires estimation of the conditional densities for all x_1, which is impossible with finite samples. In the following sections, we develop an iterative Gaussianization algorithm that is practical and can also be proved to converge weakly. High dimensional Gaussianization is unique up to invertible transforms which preserve the measure on R^D induced by the standard Gaussian distribution. Examples of such transforms are orthogonal linear transforms and certain nontrivial nonlinear transforms.

3 Gaussianization with Linear ICA Assumption

Let (x_1, ..., x_N) be i.i.d. samples of the random variable X ∈ R^D. We assume that there exists a linear transform A ∈ R^{D×D} such that the transformed variable Y = (Y_1, ..., Y_D)ᵀ = AX has independent components:

  p(y_1, ..., y_D) = p(y_1) ··· p(y_D).

In this case, Gaussianization reduces to linear ICA: we can first find the linear transform A by linear independent component analysis, and then Gaussianize each individual dimension of Y by univariate Gaussianization. We parametrize the marginal Gaussianization by univariate Gaussian mixtures (2). This amounts to modeling the coordinates of the transformed variable by univariate Gaussian mixtures:

  p(y_d) = Σ_{i=1}^{I_d} π_{d,i} φ(y_d; μ_{d,i}, σ²_{d,i}).

We would like to jointly optimize both the linear transform A and the marginal Gaussianization parameters (π, μ, σ) via maximum likelihood. In fact, this is the same parametrization as in Attias (1999) [1]. We point out that modeling the coordinates after the linear transform as non-Gaussian distributions, for which we assume univariate Gaussian mixtures are adequate, leads to ICA, while modeling them as single Gaussians leads to PCA.

The joint estimation of the parameters can be carried out via the EM algorithm. The auxiliary function which has to be maximized in the M-step has the following form:

  Q(A, π, μ, σ) = N log|det(A)| + Σ_{n=1}^N Σ_{d=1}^D Σ_{i=1}^{I_d} w_{n,d,i} [ log π_{d,i} − (1/2) log(2πσ²_{d,i}) − (y_{n,d} − μ_{d,i})² / (2σ²_{d,i}) ],

where the (w_{n,d,i}) are the posterior counts computed at the E-step. It can easily be shown that the priors π_{d,i} can be updated in closed form and that the means μ_{d,i} are entirely determined by the linear transform A. However, updating the linear transform A and the variances σ²_{d,i} has no closed-form solution and must be done iteratively by numerical methods. Attias (1999) [1] proposed to optimize Q via gradient descent: at each iteration, one fixes the linear transform and computes the Gaussian mixture parameters, then fixes the Gaussian mixture parameters and updates the linear transform via gradient descent using the so-called natural gradient. We propose an iterative algorithm, as in Gales (1999) [3], for the M-step which does not involve gradient descent and the nuisance and instability caused by the step size parameter. At each iteration, we fix the linear transform A and update the variances σ²_{d,i}; we then fix the variances and update each row of A with all the other rows of A fixed: updating each row amounts to solving a system of linear equations.
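For illustration, here is a sketch of the E-step and of the auxiliary function Q for fixed parameters (ours, with hypothetical names; the M-step row updates, which solve the linear systems mentioned above, are omitted):

```python
import numpy as np

def estep_and_auxiliary(Y, logdetA, pis, mus, sigmas):
    """Posterior counts w_{n,d,i} and the auxiliary Q.
    Y: (N, D) transformed samples; pis/mus/sigmas: length-D lists of (I_d,) arrays."""
    N, D = Y.shape
    Q = N * logdetA                              # the N log|det A| term
    posteriors = []
    for d in range(D):
        ll = (np.log(pis[d]) - 0.5 * np.log(2 * np.pi * sigmas[d] ** 2)
              - (Y[:, d:d + 1] - mus[d]) ** 2 / (2 * sigmas[d] ** 2))   # (N, I_d)
        w = np.exp(ll - ll.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)        # posterior counts w_{n,d,i}
        posteriors.append(w)
        Q += (w * ll).sum()
    return posteriors, Q
```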
Our iterative scheme guarantees that the auxiliary function Q increases at every iteration. Notice that each iteration of our M-step updates the rows of the linear matrix A by solving D systems of linear equations. Although our iterative scheme may be slightly more expensive per iteration than standard numerical optimization techniques such as Attias' algorithm, in practice it converges after very few iterations, as observed in Gales (1999) [3]. In contrast, the numerical optimization scheme may take an order of magnitude more iterations; indeed, in our experiments, our algorithm converges much faster than Attias' algorithm. Furthermore, our algorithm is stable, since each iteration is guaranteed to increase the likelihood. The M-step in both Attias' algorithm and ours can be implemented efficiently by storing and accessing sufficient statistics. Since most of the improvement in the likelihood typically comes in the first few iterations of our M-step, we can stop each M-step after, say, one parameter update: even though the auxiliary function is not fully optimized, it is guaranteed to improve. We thereby obtain a so-called generalized EM algorithm; Attias (1999) [1] reported faster convergence of the generalized EM algorithm than of the standard EM algorithm.

4 Iterative Gaussianization

In this section we develop an iterative algorithm which Gaussianizes arbitrary random variables. At each iteration, the data is first transformed to the least dependent coordinates and then each coordinate is marginally Gaussianized by univariate techniques based on univariate density estimation. We shall show that transforming the data into the least dependent coordinates can be achieved by linear independent component analysis. We also prove the weak convergence result.

We define the negentropy¹ of a random variable X = (X_1, ..., X_D)ᵀ as the Kullback-Leibler divergence between X and the standard Gaussian distribution:

  J(X) = KL( X || N(0, I) ).

We define the marginal negentropy to be J_M(X) = Σ_{d=1}^D J(X_d). One can show that the negentropy decomposes as the sum of the marginal negentropy and the mutual information:

  J(X) = J_M(X) + I(X).

Gaussianization is thus equivalent to finding an invertible transform T(·) such that the negentropy of the transformed variable vanishes: J(T(X)) = 0.

¹ We are abusing the terminology slightly: normally the negentropy of a random variable is defined as the Kullback-Leibler distance between itself and the Gaussian variable with the same mean and covariance.

For an arbitrary random variable X ∈ R^D, we propose the following iterative Gaussianization algorithm. Let X⁽⁰⁾ = X. At each iteration,

(A) Linearly transform the data: Y⁽ᵏ⁾ = A X⁽ᵏ⁾.
(B) Nonlinearly transform the data by marginal Gaussianization: X⁽ᵏ⁺¹⁾ = Ψ_{π,μ,σ}(Y⁽ᵏ⁾),
Second, since the data X(k) might not satisfy the linear ICA assumption, the optimal linear transform might not transform X(k) into independent coordinates. However, it does transform X (k) into the least dependent coordinates, since J(X(k+1)) = JM(w(AX(k))) + I(w(AX(k))) = I(AX(k)) . Further more, if the linear transform A is constrained to be orthogonal, then finding the least dependent coordinates is equivalent to finding the marginally most non-Gaussian coordinates, since J(X(k)) = J(AX(k)) = JM(AX(k)) + I(AX(k)) (notice that the negentropy is invariant under orthogonal transforms). Therefore our iterative algorithm can be viewed as follows. At each iteration, the data is linearly transformed to the least dependent coordinates and then each coordinate is marginally Gaussianized. In practice, after the first iteration, the algorithm finds linear transforms which are almost orthogonal. Therefore one can also view practically that at each iteration, the data is linearly transformed to the most marginally non-Gaussian coordinates and then each coordinate is marginally Gaussianized. For the sake of simplicity, we assume that we can achieve perfect marginal Gaussianization w(?) by w1T ,It,.,.O), which is derived from univariate Gaussian mixtures. In fact, when the number of Gaussians goes to infinity and the number of samples goes to infinity, one can show that lim W1T ,It,.,. = W. Thus it suffices to analyze the ideal iterative Gaussianization X(k) where A = W(AX(k)) = argmin J(W(AX(k))) = argmin I(AX(k)) . Following Huber's argument [4], we can show that X(k) -t N(O, I) in the sense of weak convergence, i.e. the density function of X(k) converges pointwise to the density function of standard normal. Original Data IteratIon 1 Iteration 2 :~ -2~ -1 -~2L---:O -----' -~4L-_-::-2----:0 -----:-2----'4 -4 - 4 -2 0 2 4 iteratIon 3 IteratIon 4 Iteration 5 :~:). -.: ... o -2 .. f :ltJ?? :,.?.~?!to.: . 0 :" 0" : ::-:... 0 ~..: -4 -4-2 iteratIon 6 0 -4 -4-2 24 IteratIon 7 -2 ?. ~:. :!'::-. 0.6 O.S ........ ?? ? " . -4 -4-2 Gausslanlzatlon DensIty EstImatIon 0. 4 0 24 4~ ~.:4~ 0".' .' 0 .2 .;..=: Iteration 8 . . : 0.0 2 ' "?? - 2 :? ? ; ? ?. :_: _4L-------' -4-2 0 24 '-; . 0 2 ..... -4 -4-2 24 l: .r,... 0 24 GaussIan MIxture DensIty EstImatIon 0. 2 0. 4 0.6 O.S Figure 1: Iterative Gaussianization on a synthetic circular data set We point out that out iterative algorithm can be relaxed as follows. At each iteration, the data can linearly transformed into coordinates which are less dependent, instead of into coordinates which are the least dependent: I(X(k) - I(AkX(k)) ~ E[I(X(k) - i~t I(AX(k))] where the constant E > O. We can show that this relaxed algorithm still converges weakly. 5 Examples We demonstrate the process of our iterative Gaussianization algorithm through a very difficult two dimensional synthetic data set. The true underlying variable is circularly distributed: in the polar coordinate system, the angle is uniformly distributed; the radius follows a mixture of four non-overlapping Gaussians. We drew 1000 i.i.d. samples from this distribution. We ran 8 iterations to Gaussianize the data set. Figure 4 displays the transformed data set at each iteration. Clearly we see the transformed data gradually becomes standard Gaussian. Let X(O) = X ; assume that the iterative Gaussianization procedure converges after K iterations, i.e. X(K) '" N( O, I). Since the transforms at each iteration are invertible, we can then compute Jacobian and obtain density estimation for X. 
Let $X^{(0)} = X$ and assume that the iterative Gaussianization procedure converges after $K$ iterations, i.e. $X^{(K)} \sim N(0, I)$. Since the transforms at each iteration are invertible, we can then compute the Jacobian and obtain a density estimate for $X$. The Jacobian can be computed rapidly using the chain rule. Figure 1 compares the Gaussianization density estimate (8 iterations) with a Gaussian mixture density estimate (40 Gaussians). Clearly, the Gaussianization density estimate recovers the four-ring circular structure, whereas the Gaussian mixture estimate lacks resolution.

6 Discussion

Gaussianization is closely connected with the exploratory projection pursuit algorithm proposed by Friedman (1987) [2]. In fact, we argue that our iterative Gaussianization procedure can easily be constrained to give an efficient parametric solution of high dimensional projection pursuit. Assume that we are interested in l-dimensional projections, where 1 ≤ l ≤ D. If we constrain the linear transform at each iteration to be orthogonal, and only the first l coordinates of the transformed variable are marginally Gaussianized, then the iterative Gaussianization algorithm achieves l-dimensional projection pursuit. The bottleneck of Friedman's high dimensional projection pursuit is to find the jointly most non-Gaussian projection and to jointly Gaussianize that projection. In contrast, our algorithm finds the most marginally non-Gaussian projection and marginally Gaussianizes that projection; it can be computed by an efficient EM algorithm.

We argue that Gaussianization density estimation indeed alleviates the curse of dimensionality. At each iteration, the curse of dimensionality enters only in finding a linear transform such that the transformed coordinates are less dependent, which is a much easier problem than the original problem of high dimensional density estimation itself; after the linear transform, the marginal Gaussianization can be derived from univariate density estimation, which does not suffer from the curse of dimensionality. Hwang et al. (1994) [5] performed an extensive comparative study of three popular density estimates: one-dimensional projection pursuit density estimates (a special case of our iterative Gaussianization algorithm), adaptive kernel density estimates, and radial basis function density estimates; they concluded that projection pursuit density estimates outperform the others on most data sets. We are currently experimenting with applications of Gaussianization density estimation in automatic speech and speaker recognition.

References

[1] H. Attias, "Independent factor analysis", Neural Computation, vol. 11, pp. 803-851, 1999.
[2] J. H. Friedman, "Exploratory projection pursuit", J. American Statistical Association, vol. 82, pp. 249-266, 1987.
[3] M. J. F. Gales, "Semi-tied covariance matrices for hidden Markov models", IEEE Transactions on Speech and Audio Processing, vol. 7, pp. 272-281, 1999.
[4] P. J. Huber, "Projection pursuit", Annals of Statistics, vol. 13, pp. 435-525, 1985.
[5] J. Hwang, S. Lay and A. Lippman, "Nonparametric multivariate density estimation: a comparative study", IEEE Transactions on Signal Processing, vol. 42, pp. 2795-2810, 1994.
[6] G. Schwarz, "Estimating the dimension of a model", Annals of Statistics, vol. 6, pp. 461-464, 1978.
Tree-Based Modeling and Estimation of Gaussian Processes on Graphs with Cycles

Martin J. Wainwright, Erik B. Sudderth, and Alan S. Willsky
Laboratory for Information and Decision Systems
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139
{mjwain, esuddert, willsky}@mit.edu

Abstract

We present the embedded trees algorithm, an iterative technique for estimation of Gaussian processes defined on arbitrary graphs. By exactly solving a series of modified problems on embedded spanning trees, it computes the conditional means with an efficiency comparable to or better than other techniques. Unlike other methods, the embedded trees algorithm also computes exact error covariances. The error covariance computation is most efficient for graphs in which removing a small number of edges reveals an embedded tree. In this context, we demonstrate that sparse loopy graphs can provide a significant increase in modeling power relative to trees, with only a minor increase in estimation complexity.

1 Introduction

Graphical models are an invaluable tool for defining and manipulating probability distributions. In modeling stochastic processes with graphical models, two basic problems arise: (i) specifying a class of graphs with which to model or approximate the process; and (ii) determining efficient techniques for statistical inference. In fact, there exists a fundamental tradeoff between the expressive power of a graph and the tractability of statistical inference. At one extreme are tree-structured graphs: although they lead to highly efficient algorithms for estimation [1, 2], their modeling power is often limited. The addition of edges to the graph tends to increase modeling power, but also introduces loops that necessitate the use of more sophisticated and costly techniques for estimation.

In areas like coding theory, artificial intelligence, and speech processing [3, 1], graphical models typically involve discrete-valued random variables. However, in domains such as image processing, control, and oceanography [2, 4, 5], it is often more appropriate to consider random variables with a continuous distribution. In this context, Gaussian processes on graphs are of great practical significance. Moreover, the Gaussian case provides a valuable setting for developing an understanding of estimation algorithms [6, 7].

The focus of this paper is the estimation and modeling of Gaussian processes defined on graphs with cycles. We first develop an estimation algorithm that is based on exploiting trees embedded within the loopy graph. Given a set of noisy measurements, this embedded trees (ET) algorithm computes the conditional means with an efficiency comparable to or better than other techniques. Unlike other methods, the ET algorithm also computes exact error covariances at each node. In many applications, these error statistics are as important as the conditional means. We then demonstrate by example that, relative to tree models, graphs with a small number of loops can lead to substantial improvements in modeling fidelity without a significant increase in estimation complexity.

2 Linear estimation fundamentals

2.1 Problem formulation

Consider a Gaussian stochastic process $x \sim N(0, P)$ that is Markov with respect to an undirected graph $G$. Each node in $G$ corresponds to a subvector $x_i$ of $x$. We will refer to $x_i$ as the state variable for the ith node, and to its length as the state dimension.
By the Hammersley-Clifford theorem [8], $P^{-1}$ inherits a sparse structure from $G$. If it is partitioned into blocks according to the state dimensions, the (i, j)th block can be nonzero only if there is an edge between nodes i and j. Let $y = Cx + v$, $v \sim N(0, R)$, be a set of noisy observations. Without loss of generality, we assume that the subvectors $y_i$ of the observations are conditionally independent given the state x. For estimation purposes, we are interested in $p(x_i \mid y)$, the marginal distribution of the state at each node conditioned on the noisy observations. Standard formulas exist for the computation of $p(x \mid y) \sim N(\hat{x}, \hat{P})$:

$\hat{x} = \hat{P} C^T R^{-1} y$,   $\hat{P} = \left[ P^{-1} + C^T R^{-1} C \right]^{-1}$.   (1)

The conditional error covariances $\hat{P}_i$ are the block diagonal elements of the full error covariance $\hat{P}$, where the block sizes are equal to the state dimensions.

2.2 Exploiting graph structure

When $G$ is tree-structured, both the conditional means and error covariances can be computed by a direct and very efficient $O(d^3 N)$ algorithm [2]. Here d is the maximal state dimension at any node, and N is the total number of nodes. This algorithm is a generalization of classic Kalman smoothing algorithms for time series, and involves passing means and covariances to and from a node chosen as the root.

For graphs with cycles, calculating the full error covariance $\hat{P}$ by brute-force matrix inversion would, in principle, provide the conditional means and error variances. Since the computational complexity of matrix inversion is $O((dN)^3)$, this proposal is not practically feasible in many applications, such as image processing, where N may be on the order of $10^5$. This motivates the development of iterative techniques for linear estimation on graphs with cycles. Recently, two groups [6, 7] have analyzed Pearl's belief propagation [1] in application to Gaussian processes defined on loopy graphs. For Gaussians on trees, belief propagation produces results equivalent to the Kalman smoother of Chou et al. [2]. For graphs with cycles, these groups showed that when belief propagation converges, it computes the correct conditional means, but that the error covariances are incorrect. The complexity per iteration of belief propagation on loopy graphs is $O(d^3 N)$, where one iteration corresponds to updating each message once.

Figure 1. Embedded trees produced by two different cutting matrices $K_t$ for a nearest-neighbor grid (observation nodes not shown): $P^{-1}_{tree(1)} = P^{-1} + K_1$ and $P^{-1}_{tree(2)} = P^{-1} + K_2$.

It is important to note that conditional means can be efficiently calculated using techniques from numerical linear algebra [9]. In particular, it can be seen from equation (1) that computing the conditional mean $\hat{x}$ is equivalent to computing the product of a matrix inverse and a vector. Given the sparsity of $P^{-1}$, iterative techniques like conjugate gradient [9] can be used to compute the mean with an associated cost of $O(dN)$ per iteration. However, like belief propagation, such techniques compute only the means and not the error covariances.

3 Embedded trees algorithm

3.1 Calculation of means

In this section, we present an iterative algorithm for computing both the conditional means and error covariances of a Gaussian process defined on any graph. Central to the algorithm is the operation of cutting edges from a loopy graph to reveal an embedded tree. Standard tree algorithms [2] can be used to exactly solve the modified problem, and the results are used in a subsequent iteration.
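Before developing the iterative solution, it is useful to have the brute-force evaluation of (1) on hand as a correctness check for what follows. The NumPy sketch below is ours, not the authors' code; it is exactly the $O((dN)^3)$ computation the paper is designed to avoid, but it is handy for validating approximate solvers on small problems.

```python
import numpy as np

def exact_posterior(P_inv, C, R, y):
    """Brute-force evaluation of (1):
    P_hat = (P^-1 + C^T R^-1 C)^-1 and x_hat = P_hat C^T R^-1 y."""
    R_inv = np.linalg.inv(R)
    P_hat = np.linalg.inv(P_inv + C.T @ R_inv @ C)
    x_hat = P_hat @ (C.T @ R_inv @ y)
    return x_hat, P_hat
```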
For a Gaussian process on a graph, the operation of removing edges corresponds to modifying the inverse covariance matrix. Specifically, we apply a matrix splitting

$P^{-1} + C^T R^{-1} C = \left( P^{-1}_{tree(t)} - K_t \right) + C^T R^{-1} C$,

where $K_t$ is a symmetric cutting matrix chosen to ensure that $P^{-1}_{tree(t)} = P^{-1} + K_t$ corresponds to a valid tree-structured inverse covariance matrix. This matrix splitting allows us to define a sequence of iterates $\{\hat{x}^n\}$ by the recursion

$\left[ P^{-1}_{tree(t(n))} + C^T R^{-1} C \right] \hat{x}^n = K_{t(n)} \hat{x}^{n-1} + C^T R^{-1} y$.

Here t(n) indexes the embedded tree used in the nth iteration. For example, Figure 1 shows two of the many spanning trees embedded in a nearest-neighbor grid. When the matrix $(P^{-1}_{tree(t(n))} + C^T R^{-1} C)$ is positive definite, it is possible to solve for the next iterate $\hat{x}^n$ in terms of the data y and the previous iterate. Thus, given some starting point $\hat{x}^0$, we can generate a sequence of iterates $\{\hat{x}^n\}$ by the recursion

$\hat{x}^n = M_{t(n)}^{-1} \left[ K_{t(n)} \hat{x}^{n-1} + C^T R^{-1} y \right]$,   (2)

where $M_{t(n)} \triangleq (P^{-1}_{tree(t(n))} + C^T R^{-1} C)$. By comparing equation (2) to equation (1), it can be seen that computing the nth iterate corresponds to a linear-Gaussian problem, which can be solved efficiently and directly with standard tree algorithms [2].

3.2 Convergence of means

Before stating some convergence results, recall that for any matrix A, the spectral radius is defined as $\rho(A) \triangleq \max_\lambda |\lambda|$, where $\lambda$ ranges over the eigenvalues of A.

Proposition 1. Let $\hat{x}$ be the conditional mean of the original problem on the loopy graph, and consider the sequence of iterates $\{\hat{x}^n\}$ generated by equation (2). Then for any starting point, $\hat{x}$ is the unique fixed point of the recursion, and the error $e^n \triangleq \hat{x}^n - \hat{x}$ obeys the dynamics

$e^n = \left[ \prod_{j=1}^{n} M_{t(j)}^{-1} K_{t(j)} \right] e^0$.   (3)

In a typical implementation of the algorithm, one cycles through the embedded trees in some fixed order, say t = 1, ..., T. In this case, the convergence of the algorithm can be analyzed in terms of the product matrix $A \triangleq \prod_{j=1}^{T} M_j^{-1} K_j$.

Proposition 2. Convergence of the ET algorithm is governed by the spectral radius of A. In particular, if $\rho(A) > 1$, then the algorithm will not converge, whereas if $\rho(A) < 1$, then $(\hat{x}^n - \hat{x}) \to 0$ geometrically at rate $\gamma \triangleq \rho(A)^{1/T}$.

Note that the cutting matrices K must be chosen to guarantee not only that $P^{-1}_{tree}$ is tree-structured but also that $M \triangleq (P^{-1}_{tree} + C^T R^{-1} C)$ is positive definite. The following theorem, adapted from results in [10], gives conditions guaranteeing the validity and convergence of the ET algorithm when cutting to a single tree.

Theorem 1. Define $Q \triangleq P^{-1} + C^T R^{-1} C$ and $M \triangleq Q + K$. Suppose the cutting matrix K is symmetric and positive semidefinite. Then we are guaranteed that $\rho(M^{-1} K) < 1$. In particular, we have the bounds

$\frac{\lambda_{max}(K)}{\lambda_{max}(K) + \lambda_{max}(Q)} \;\leq\; \rho(M^{-1} K) \;\leq\; \frac{\lambda_{max}(K)}{\lambda_{max}(K) + \lambda_{min}(Q)}$.   (4)

It should be noted that the conditions of this theorem are sufficient, but by no means necessary, to guarantee convergence of the ET algorithm. In particular, we find that indefinite cutting matrices often lead to faster convergence. Furthermore, Theorem 1 does not address the superior performance typically achieved by cycling through several embedded trees. Gaining a deeper theoretical understanding of these phenomena is an interesting open question.
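The recursion (2) is a few lines of linear algebra once the cutting matrices are chosen. In the sketch below (ours, in Python), dense solves stand in for the $O(d^3 N)$ tree smoother that makes the method fast in practice, and the cutting matrices $K_t$ are taken as given; on small problems the output can be checked against exact_posterior above.

```python
import numpy as np

def embedded_trees_means(P_inv, C, R, y, cutting_mats, n_iter=100):
    """Iterate (2): x^n = M_t^-1 (K_t x^{n-1} + C^T R^-1 y), cycling through
    the cutting matrices; M_t = Q + K_t with Q = P^-1 + C^T R^-1 C."""
    R_inv = np.linalg.inv(R)
    Q = P_inv + C.T @ R_inv @ C
    b = C.T @ R_inv @ y
    x = np.zeros(Q.shape[0])
    for n in range(n_iter):
        K = cutting_mats[n % len(cutting_mats)]
        x = np.linalg.solve(Q + K, K @ x + b)  # a tree solver would do this step
    return x
```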
3.3 Calculation of error covariances

Although there exist a variety of iterative algorithms for computing the conditional mean of a linear-Gaussian problem, none of these methods correctly computes the error covariances at each node. We show here that the ET algorithm can efficiently compute these covariances in an iterative fashion. For many applications (e.g., oceanography [5]), these error statistics are as important as the estimates. We assume for notational simplicity that $\hat{x}^0 = 0$, and then expand equation (2) to show that for any iteration $\hat{x}^n = [F^n + M_{t(n)}^{-1}] C^T R^{-1} y$, where the matrix $F^n$ satisfies the recursion

$F^n = M_{t(n)}^{-1} K_{t(n)} \left[ F^{n-1} + M_{t(n-1)}^{-1} \right]$   (5)

with the initial condition $F^1 = 0$. It is straightforward to show that whenever the recursion for the conditional means in equation (2) converges, the matrix sequence $\{F^n + M_{t(n)}^{-1}\}$ converges to the full error covariance $\hat{P}$. Moreover, the cutting matrices K are typically of low rank, say O(E), where E is the number of cut edges. On this basis, it can be shown that each $F^n$ can be decomposed as a sum of O(E) rank-1 matrices.

Figure 2. (a) Convergence rates of conjugate gradient, embedded trees, and belief propagation for computing conditional means (normalized $\ell_2$ error versus iteration). (b) Convergence rate of the ET algorithm for computing error variances.

Directly updating this low-rank decomposition of $F^n$ from that of $F^{n-1}$ requires $O(d^3 E^2 N)$ operations. However, an efficient restructuring of this update requires only $O(d^3 E N)$ operations [11]. The diagonal blocks of the low-rank representation may be easily extracted and added to the diagonal blocks of $M_{t(n)}^{-1}$, which are computed by standard tree smoothers. Altogether, we may obtain these error variances in $O(d^3 E N)$ operations per iteration. Thus, the computation of error variances will be particularly efficient for graphs where the number of edges E that must be cut is small compared to the total number of nodes N.

3.4 Results

We have applied the algorithm to a variety of graphs, ranging from graphs with single loops to densely connected MRFs on grids. Figure 2(a) compares the rates of convergence for three algorithms: conjugate gradient (CG), embedded trees (ET), and belief propagation (BP) on a 20 x 20 nearest-neighbor grid. The ET algorithm employed two embedded trees analogous to those shown in Figure 1. We find that CG is usually fastest, and can exhibit supergeometric convergence. In accordance with Proposition 2, the ET algorithm converges geometrically. Either BP or ET can be made to converge faster, depending on the choice of clique potentials. However, we have not experimented with optimizing the performance of ET by adaptively choosing edges to cut. Figure 2(b) shows that, in contrast to CG and BP, the ET algorithm can also be used to compute the conditional error variances, where the convergence rate is again geometric.

4 Modeling using graphs with cycles

4.1 Issues in model design

A variety of graphical structures may be used to approximate a given stochastic process. For example, perhaps the simplest model for a 1-D time series is a Markov chain, as shown in Figure 3(a). However, a high-order model may be required to adequately capture long-range correlations. The associated increase in state dimension leads to inefficient estimation. Figure 3(b) shows an alternative model structure. Here, additional "coarse scale" nodes have been added to the graph which are not directly linked to any measurements. These nodes are auxiliary variables created to explain the "fine scale" stochastic process of interest.
Figure 3. (a) Markov chain; (b) multiscale tree model; (c) tree augmented by an extra edge (in (a)-(c), auxiliary nodes, fine-scale nodes, and observations are marked with distinct symbols). (d) Desired covariance P. (e) Error $|P - P_{tree}|$ between the desired covariance and the realized tree covariance. (f) Error $|P - P_{loop}|$ between the desired covariance and the covariance realized with the loopy graph.

If properly designed, the resulting tree structure will capture long-range correlations without the increase in state dimension of a higher-order Markov model. In previous work, our group has developed efficient algorithms for estimation and stochastic realization using such multiscale tree models [2, 4, 5, 12]. The gains provided by multiscale models are especially impressive when quadtrees are used to approximate two-dimensional Markov random fields. While statistical inference on MRFs is notoriously difficult, estimation on quadtrees remains extremely efficient.

The most significant weakness of tree models is boundary artifacts. That is, leaf nodes that are adjacent in the original process may be widely separated in the tree structure (see Figure 3(b)). As a result, dependencies between these nodes may be inadequately modeled, causing blocky discontinuities. Increasing the state dimension d of the hidden nodes will reduce blockiness, but will also reduce estimation efficiency, which is $O(d^3 N)$ in total. One potential solution is to add edges between pairs of fine-scale nodes where tree artifacts are likely to arise, as shown in Figure 3(c). Such edges should be able to account for short-range dependencies neglected by a tree model. Furthermore, optimal inference for such "near-tree" models using the ET algorithm will still be extremely efficient.

4.2 Application to multiscale modeling

Consider a one-dimensional process of length 32 with exact covariance P shown in Figure 3(d). We approximate this process using two different graphical models: a multiscale tree, and a "near-tree" containing an additional edge between two fine-scale nodes across a tree boundary (see Figure 3(c)). In both models, the state dimension at each node is constrained to be 2; therefore, the finest scale contains 16 nodes to model all 32 process points. Figure 3(e) shows the absolute error $|P - P_{tree}|$ for the tree model, where realization was performed by the scale-recursive algorithm presented in [12]. The tree model matches the desired process statistics relatively well except at the center, where the tree structure causes a boundary artifact. Figure 3(f) shows the absolute error $|P - P_{loop}|$ for a graph obtained by adding a single edge across the largest fine-scale tree boundary. The addition reduces the peak error by 60%, a substantial gain in modeling fidelity. If the ET algorithm is implemented by cutting to two different embedded trees, it converges extremely rapidly, with rate $\gamma = 0.11$.

5 Discussion

This paper makes contributions to both the estimation and modeling of Gaussian processes on graphs. First, we developed the embedded trees algorithm for estimation of Gaussian processes on arbitrary graphs. In contrast to other techniques, our algorithm computes both means and error covariances. Even on densely connected graphs, our algorithm is comparable to or better than other techniques for computing means. The error covariance computation is especially efficient for graphs in which cutting a small number of edges reveals an embedded tree.
In this context, we have shown that modeling with sparsely connected loopy graphs can lead to substantial gains in modeling fidelity, with a minor increase in estimation complexity. From the results of this paper arise a number of fundamental questions about the trade-off between modeling fidelity and estimation complexity. In order to address these questions, we are currently working to develop tighter bounds on the convergence rate of the algorithm, and we are also considering techniques for optimally selecting edges to be removed. On the modeling side, we are expanding on previous work for trees [12] in order to develop a theory of stochastic realization for processes on graphs with cycles. Lastly, although the current paper has focused on Gaussian processes, similar concepts can be developed for discrete-valued processes.

Acknowledgments

This work was partially funded by ONR grant N00014-00-1-0089 and AFOSR grant F49620-98-1-0349; M.W. was supported by an NSERC 1967 fellowship, and E.S. by an NDSEG fellowship.

References

[1] J. Pearl. Probabilistic reasoning in intelligent systems. Morgan Kaufman, 1988.
[2] K. Chou, A. Willsky, and R. Nikoukhah. Multiscale systems, Kalman filters, and Riccati equations. IEEE Trans. AC, 39(3):479-492, March 1994.
[3] R. G. Gallager. Low-density parity check codes. MIT Press, Cambridge, MA, 1963.
[4] M. Luettgen, W. Karl, and A. Willsky. Efficient multiscale regularization with application to optical flow. IEEE Trans. Im. Proc., 3(1):41-64, Jan. 1994.
[5] P. Fieguth, W. Karl, A. Willsky, and C. Wunsch. Multiresolution optimal interpolation of satellite altimetry. IEEE Trans. Geo. Rem., 33(2):280-292, March 1995.
[6] P. Rusmevichientong and B. Van Roy. An analysis of turbo decoding with Gaussian densities. In NIPS 12, pages 575-581. MIT Press, 2000.
[7] Y. Weiss and W. T. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. In NIPS 12, pages 673-679. MIT Press, 2000.
[8] J. Besag. Spatial interaction and the statistical analysis of lattice systems. J. Roy. Stat. Soc. Series B, 36:192-236, 1974.
[9] J. W. Demmel. Applied numerical linear algebra. SIAM, Philadelphia, 1997.
[10] O. Axelsson. Bounds of eigenvalues of preconditioned matrices. SIAM J. Matrix Anal. Appl., 13:847-862, July 1992.
[11] E. Sudderth, M. Wainwright, and A. Willsky. Embedded trees for modeling and estimation of Gaussian processes on graphs with cycles. In preparation, Dec. 2000.
[12] A. Frakt and A. Willsky. Computationally efficient stochastic realization for internal multiscale autoregressive models. Mult. Sys. and Sig. Proc. To appear.
Stagewise processing in error-correcting codes and image restoration

K. Y. Michael Wong
Department of Physics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
[email protected]

Hidetoshi Nishimori
Department of Physics, Tokyo Institute of Technology, Oh-Okayama, Meguro-ku, Tokyo 152-8551, Japan
[email protected]

Abstract

We introduce stagewise processing in error-correcting codes and image restoration, by extracting information from the former stage and using it selectively to improve the performance of the latter one. Both mean-field analysis using the cavity method and simulations show that it has the advantage of being robust against uncertainties in hyperparameter estimation.

1 Introduction

In error-correcting codes [1] and image restoration [2], the choice of the so-called hyperparameters is an important factor in determining their performance. Hyperparameters refer to the coefficients weighing the biases and variances of the tasks. In error correction, they determine the statistical significance given to the parity-checking terms and the received bits. Similarly, in image restoration, they determine the statistical weights given to the prior knowledge and the received data. It was shown, by the use of inequalities, that the choice of the hyperparameters is optimal when there is a match between the source and model priors [3]. Furthermore, from the analytic solution of the infinite-range model and Monte Carlo simulations of finite-dimensional models, it was shown that an inappropriate choice of the hyperparameters can lead to a rapid degradation of the tasks. Hyperparameter estimation is the subject of many studies, such as the "evidence framework" [4]. However, if the prior models the source poorly, no hyperparameters can be reliable [5]. Even if they can be estimated accurately through steady-state statistical measurements, they may fluctuate when interfered with by bursty noise sources in communication channels. Hence it is equally important to devise decoding or restoration procedures which are robust against uncertainties in hyperparameter estimation.

Here we introduce selective freezing to increase the tolerance to uncertainties in hyperparameter estimation. The technique has been studied for pattern reconstruction in neural networks, where it led to an improvement in retrieval precision, a widening of the basin of attraction, and a boost in storage capacity [6]. The idea is best illustrated for bits or pixels with binary states ±1, though it can easily be generalized to other cases. In a finite-temperature thermodynamic process, the binary variables keep moving under thermal agitation. Some of them have smaller thermal fluctuations than others, implying that they are more certain to stay in one state than the other. This stability implies that they have a higher probability of staying in the correct state for error-correction or image restoration tasks, even when the hyperparameters are not optimally tuned. It may thus be interesting to separate the thermodynamic process into two stages. In the first stage we select those relatively stable bits or pixels whose time-averaged states have a magnitude exceeding a certain threshold. In the second stage we subsequently fix (or freeze) them in their most probable thermodynamic states. These selectively frozen bits or pixels can then provide more robust assistance to the less stable bits or pixels in their search for the most probable states.
The two-stage thermodynamic process can be studied analytically in the mean-field model using the cavity method. For the more realistic case of finite dimensions in image restoration, simulation results illustrate the relevance of the infinite-range model in providing qualitative guidance. A detailed theory of selective freezing is presented in [7].

2 Formulation

Consider an information source which generates data represented by a set of Ising spins $\{\xi_i\}$, where $\xi_i = \pm 1$ and $i = 1, \ldots, N$. The data is generated according to the source prior $P_s(\{\xi\})$. For error-correcting codes transmitting unbiased messages, all sequences are equally probable and $P_s(\{\xi\}) = 2^{-N}$. For images with smooth structures, the prior consists of ferromagnetic Boltzmann factors, which increase the tendency of neighboring spins to stay in the same spin state, that is,

$P_s(\{\xi\}) \propto \exp\left( \frac{\beta_s}{z} \sum_{(ij)} \xi_i \xi_j \right)$.   (1)

Here (ij) represents pairs of neighboring spins, and z is the valency of each site. The data is coded by constructing the codewords, which are products of p spins $J_{i_1 \cdots i_p} = \xi_{i_1} \cdots \xi_{i_p}$ for appropriately chosen sets of indices $\{i_1, \ldots, i_p\}$. Each spin may appear in a number of p-spin codewords; the number of times of appearance is called the valency $z_p$. For conventional image restoration, only codewords with p = 1 are transmitted, corresponding to the pixels of the image. When the signal is transmitted through a noisy channel, the output consists of the sets $\{J_{i_1 \cdots i_p}\}$ and $\{\tau_i\}$, which are corrupted versions of the transmitted codewords and pixels respectively, and is described by the output probability

$P_{out}(\{J\}, \{\tau\} \mid \{\xi\}) \propto \exp\left( \beta_J \sum J_{i_1 \cdots i_p} \xi_{i_1} \cdots \xi_{i_p} + \beta_\tau \sum_i \tau_i \xi_i \right)$.   (2)

According to Bayesian statistics, the posterior probability that the source sequence is $\{\sigma\}$, given the outputs $\{J\}$ and $\{\tau\}$, takes the form

$P(\{\sigma\} \mid \{J\}, \{\tau\}) \propto \exp\left( \beta_J \sum J_{i_1 \cdots i_p} \sigma_{i_1} \cdots \sigma_{i_p} + \beta_\tau \sum_i \tau_i \sigma_i + \frac{\beta_s}{z} \sum_{(ij)} \sigma_i \sigma_j \right)$.   (3)

If the receiver at the end of the noisy channel does not have precise information on $\beta_J$, $\beta_\tau$ or $\beta_s$, and estimates them as $\beta$, $h$ and $\beta_m$ respectively, then the ith bit of the decoded/restored information is given by $\mathrm{sgn}\langle\sigma_i\rangle$, where

$\langle\sigma_i\rangle = \frac{\mathrm{Tr}\, \sigma_i\, e^{-H\{\sigma\}}}{\mathrm{Tr}\, e^{-H\{\sigma\}}}$,   (4)

and the Hamiltonian is given by

$H\{\sigma\} = -\beta \sum J_{i_1 \cdots i_p} \sigma_{i_1} \cdots \sigma_{i_p} - h \sum_i \tau_i \sigma_i - \frac{\beta_m}{z} \sum_{(ij)} \sigma_i \sigma_j$.   (5)

For the two-stage process of selective freezing, the spins evolve thermodynamically as prescribed in Eq. (4) during the first stage, and the thermal averages $\langle\sigma_i\rangle$ of the spins are monitored. We then select those spins with $|\langle\sigma_i\rangle|$ exceeding a given threshold $\theta$, and freeze them in the second stage of the thermodynamics. The average of spin $\sigma_i$ in the second stage is then given by

$\widetilde{\langle\sigma_i\rangle} = \frac{\mathrm{Tr}\, \sigma_i \prod_j \left[ \Theta(\langle\sigma_j\rangle^2 - \theta^2)\, \delta_{\sigma_j, \mathrm{sgn}\langle\sigma_j\rangle} + \Theta(\theta^2 - \langle\sigma_j\rangle^2) \right] e^{-\widetilde{H}\{\sigma\}}}{\mathrm{Tr} \prod_j \left[ \Theta(\langle\sigma_j\rangle^2 - \theta^2)\, \delta_{\sigma_j, \mathrm{sgn}\langle\sigma_j\rangle} + \Theta(\theta^2 - \langle\sigma_j\rangle^2) \right] e^{-\widetilde{H}\{\sigma\}}}$,   (6)

where $\Theta$ is the step function and $\widetilde{H}\{\sigma\}$ is the Hamiltonian of the second stage, which has the same form as Eq. (5) in the first stage. One then regards $\mathrm{sgn}\widetilde{\langle\sigma_i\rangle}$ as the ith spin of the decoding/restoration process. The most important quantity in selective freezing is the overlap of the decoded/restored bit $\mathrm{sgn}\widetilde{\langle\sigma_i\rangle}$ and the original bit $\xi_i$, averaged over the output probability and the spin distribution:

$M_{sf} = \sum_{\{\xi\}} \int \prod dJ \int \prod d\tau\, P_s(\{\xi\})\, P_{out}(\{J\}, \{\tau\} \mid \{\xi\})\, \xi_i\, \mathrm{sgn}\widetilde{\langle\sigma_i\rangle}$.   (7)

Following [3], we can prove that selective freezing cannot outperform the single-stage process if the hyperparameters can be estimated precisely. However, the purpose of selective freezing is rather to provide a relatively stable performance when the hyperparameters cannot be estimated precisely.
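As a concrete illustration of the two-stage procedure, the following Python sketch treats the image-restoration case ($\beta = 0$ in Eq. (5)) on a periodic 2-D grid, using single-spin heat-bath updates to estimate the thermal averages. It is a minimal sketch under our own choices - the function names, sweep counts, and the heat-bath sampler are not from the paper - and a careful study would use much longer runs with proper equilibration.

```python
import numpy as np

def thermal_averages(tau, h, beta_m, frozen=None, n_sweep=200, rng=None):
    """Estimate <sigma_i> for H = -h sum tau_i sigma_i - (beta_m/z) sum sigma_i sigma_j
    on a periodic grid (z = 4) by heat-bath sweeps; nonzero entries of
    `frozen` are clamped, implementing the second-stage freezing."""
    rng = rng if rng is not None else np.random.default_rng(0)
    L = tau.shape[0]
    s = np.where(tau >= 0, 1.0, -1.0)          # initialize from the data
    if frozen is not None:
        s[frozen != 0] = frozen[frozen != 0]
    acc = np.zeros_like(s)
    for sweep in range(n_sweep):
        for i in range(L):
            for j in range(L):
                if frozen is not None and frozen[i, j] != 0:
                    continue
                nb = s[(i + 1) % L, j] + s[i - 1, j] + s[i, (j + 1) % L] + s[i, j - 1]
                field = h * tau[i, j] + (beta_m / 4.0) * nb
                s[i, j] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1.0
        if sweep >= n_sweep // 2:              # average after burn-in
            acc += s
    return acc / (n_sweep - n_sweep // 2)

def selective_freeze_restore(tau, h, beta_m, theta=0.9):
    """Stage 1: measure <sigma_i>; freeze spins with |<sigma_i>| > theta;
    Stage 2: rerun with those spins clamped and return the restored image."""
    m1 = thermal_averages(tau, h, beta_m)
    frozen = np.where(np.abs(m1) > theta, np.sign(m1), 0.0)
    m2 = thermal_averages(tau, h, beta_m, frozen=frozen)
    return np.where(frozen != 0, frozen, np.where(m2 >= 0, 1.0, -1.0))
```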
3 Modeling error-correcting codes

Let us now suppose that the output of the transmission channel consists only of the set of p-spin interactions $\{J_{i_1 \cdots i_p}\}$. Then h = 0 in the Hamiltonian (5), and we set $\beta_m = 0$ for the case that all messages are equally probable. Analytical solutions are available for the infinite-range model, in which the exchange interactions are present for all possible groups of sites. Consider the noise model in which $J_{i_1 \cdots i_p}$ is Gaussian with mean $p!\, j_0\, \xi_{i_1} \cdots \xi_{i_p} / N^{p-1}$ and variance $p!\, J^2 / 2N^{p-1}$. We can apply a gauge transformation $\sigma_i \to \sigma_i \xi_i$ and $J_{i_1 \cdots i_p} \to J_{i_1 \cdots i_p} \xi_{i_1} \cdots \xi_{i_p}$, and arrive at an equivalent p-spin model with a ferromagnetic bias (Eq. (8)).

The infinite-range model is exactly solvable using the cavity method [8]. The method uses a self-consistency argument to consider what happens when a spin is added to or removed from the system. The central quantity in this method is the cavity field, which is the local field of a spin when it is added to the system, assuming that the exchange couplings act only one way, from the system to the new spin (but not from the spin back to the system). Since the exchange couplings feeding the new spin have no correlations with the system, the cavity field becomes a Gaussian variable in the limit of large valency. The thermal average of a spin, say spin 1, is given by

$\langle\sigma_1\rangle = \tanh \beta h_1$,   (9)

where $h_1$ is the cavity field obeying a Gaussian distribution, whose mean and variance are $p j_0 m^{p-1}$ and $p J^2 q^{p-1}/2$ respectively, where m and q are the magnetization and Edwards-Anderson order parameter respectively, given by

$m = \frac{1}{N} \sum_i \langle\sigma_i\rangle$ and $q = \frac{1}{N} \sum_i \langle\sigma_i\rangle^2$.   (10)

Applying the cavity argument self-consistently to all terms in Eq. (10), we can obtain self-consistent equations for m and q.

Now we consider selective freezing. If we introduce a freezing threshold $\theta$ so that all spins with $\langle\sigma_i\rangle^2 > \theta^2$ are frozen, then the freezing fraction f is given by

$f = \frac{1}{N} \sum_i \Theta\left( \langle\sigma_i\rangle^2 - \theta^2 \right)$.   (11)

The thermal average of a dynamic spin in the second stage is related to the cavity fields in both stages; for spin 1, say,

$\widetilde{\langle\sigma_1\rangle} = \tanh \beta \left\{ \tilde{h}_1 + \frac{p(p-1)}{2} J^2 r^{p-2} \chi_{tr} \tanh \beta h_1 \right\}$,   (12)

where $\tilde{h}_1$ is the cavity field in the second stage, r is the order parameter describing the spin correlations between the two thermodynamic stages,

$r \equiv \frac{1}{N} \sum_i \widetilde{\langle\sigma_i\rangle} \left\{ \langle\sigma_i\rangle\, \Theta\left[ \theta^2 - \langle\sigma_i\rangle^2 \right] + \mathrm{sgn}\langle\sigma_i\rangle\, \Theta\left[ \langle\sigma_i\rangle^2 - \theta^2 \right] \right\}$,   (13)

and $\chi_{tr}$ is the trans-susceptibility, which describes the response of a spin in the second stage to variations of the cavity field in the first stage, namely

$\chi_{tr} = \frac{1}{N} \sum_i \frac{\partial \widetilde{\langle\sigma_i\rangle}}{\partial h_i}$.   (14)

The cavity field $\tilde{h}_1$ is a Gaussian variable. Its mean and variance are $p j_0 \tilde{m}^{p-1}$ and $p J^2 \tilde{q}^{p-1}/2$ respectively, where $\tilde{m}$ and $\tilde{q}$ are the magnetization and Edwards-Anderson order parameter of the second stage, given by

$\tilde{m} = \frac{1}{N} \sum_i \left[ \Theta(\theta^2 - \langle\sigma_i\rangle^2)\, \widetilde{\langle\sigma_i\rangle} + \Theta(\langle\sigma_i\rangle^2 - \theta^2)\, \mathrm{sgn}\langle\sigma_i\rangle \right]$,   (15)

$\tilde{q} = \frac{1}{N} \sum_i \left[ \Theta(\theta^2 - \langle\sigma_i\rangle^2)\, \widetilde{\langle\sigma_i\rangle}^2 + \Theta(\langle\sigma_i\rangle^2 - \theta^2) \right]$.   (16)

Furthermore, the covariance between $h_1$ and $\tilde{h}_1$ is $p J^2 r^{p-1}/2$, where r is given in Eq. (13). Applying the same cavity argument self-consistently to all terms in Eqs. (15), (16), (13) and (14), we arrive at the self-consistent equations for $\tilde{m}$, $\tilde{q}$, r and $\chi_{tr}$. The performance of selective freezing is measured by

$M_{sf} = \frac{1}{N} \sum_i \left[ \Theta(\theta^2 - \langle\sigma_i\rangle^2)\, \mathrm{sgn}\widetilde{\langle\sigma_i\rangle} + \Theta(\langle\sigma_i\rangle^2 - \theta^2)\, \mathrm{sgn}\langle\sigma_i\rangle \right]$.   (17)
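The first-stage order parameters follow from iterating the Gaussian cavity-field averages to a fixed point. Below is a minimal Python sketch (ours): Gauss-Hermite quadrature evaluates the Gaussian expectations implied by Eqs. (9)-(10), and only the first-stage equations are solved, not the full selective-freezing set (13)-(16).

```python
import numpy as np

def pspin_order_parameters(p=3, j0=0.8, J=1.0, T=0.5, n_iter=500):
    """Fixed point of m = E[tanh(beta*h)], q = E[tanh^2(beta*h)] with
    h ~ N(p*j0*m^(p-1), p*J^2*q^(p-1)/2), as in Eqs. (9)-(10)."""
    beta = 1.0 / T
    z, w = np.polynomial.hermite_e.hermegauss(120)  # nodes for weight exp(-z^2/2)
    w = w / w.sum()                                 # -> expectations under N(0, 1)
    m = q = 0.9                                     # ferromagnetic initialization
    for _ in range(n_iter):
        mu = p * j0 * m ** (p - 1)
        sd = np.sqrt(p * J ** 2 * q ** (p - 1) / 2.0)
        t = np.tanh(beta * (mu + sd * z))
        m, q = w @ t, w @ (t * t)
    return m, q
```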
Figure 1: The overlap $M_{sf}$ as a function of the decoding temperature T for various values of the freezing fraction f. In this and the following figure, f = 0 corresponds to one-stage decoding/restoration. (a) Theoretical results for p = 3, $j_0 = 0.8$ and J = 1; (b) results of Monte Carlo simulations for p = 2 and $j_0 = J = 1$.

In the example in Fig. 1(a), the overlap of the single-stage dynamics reaches its maximum at the Nishimori point $T_N = J^2/2j_0$, as expected. We observe that the tolerance against variations in T is enhanced by selective freezing both above and below the optimal temperature (see especially f = 0.8). This shows that the region of advantage for selective freezing is even broader than that discussed in [7], where improvement was only observed above the optimal temperature. The advantages of selective freezing are confirmed by the Monte Carlo simulations shown in Fig. 1(b). For one-stage dynamics, the overlap is maximal at the Nishimori point ($T_N = 0.5$), as expected. However, it deteriorates rather rapidly when the decoding temperature increases. In contrast, selective freezing maintains a more steady performance, especially when f = 0.9.

4 Modeling image restoration

In conventional image restoration problems, a given degraded image consists of the set of pixels $\{\tau_i\}$, but not the set of exchange interactions $\{J_{i_1 \cdots i_p}\}$. In this case, $\beta = 0$ in the Hamiltonian (5). The pixels $\tau_i$ are degraded versions of the source pixels $\xi_i$, corrupted by noise which, for convenience, is assumed to be Gaussian with mean $a\xi_i$ and variance $\tau^2$. In turn, the source pixels satisfy the prior distribution in Eq. (1) for smooth images.

Analysis of the mean-field model with extensive valency shows that selective freezing performs as well as one-stage dynamics, but cannot outperform it. Nevertheless, selective freezing provides a rather stable performance when the hyperparameters cannot be estimated precisely. Hence we model a situation common in modern communication channels carrying multimedia traffic, which are often bursty in nature. Since burstiness results in intermittent interference, we consider a distribution of the degraded pixels with two Gaussian components, each with its own characteristics.

Figure 2: (a) The performance of selective freezing with two components of Gaussian noise at $\beta_s = 1.05$, $l_1 = 4l_2 = 0.8$, $a_1 = 5a_2 = 1$ and $\tau_1 = \tau_2 = 0.3$. The restoration agent operates at the optimal ratio $\beta_m/h$, which assumes a single noise component with the overall mean 0.84 and variance 0.4024. (b) Results of Monte Carlo simulations for the overlaps of selective freezing compared with that of the one-stage dynamics, for two-dimensional images generated at the source prior temperature $T_s = 2.15$.

Suppose the restoration agent operates at the optimal ratio of $\beta_m/h$ which assumes a single noise component. Then there will be a degradation of the quality of the restored images. In the example in Fig. 2(a), the reduction of the overlap $M_{sf}$ for selective freezing is much more modest than for the one-stage process (f = 0).
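A two-component channel of this kind is straightforward to simulate. The Python sketch below (ours) degrades a ±1 source image accordingly; the default parameter values follow our reading of the Fig. 2(a) caption and are illustrative only.

```python
import numpy as np

def degrade_two_component(xi, fracs=(0.8, 0.2), amps=(1.0, 0.2),
                          stds=(0.3, 0.3), rng=None):
    """Each pixel independently picks noise component k with probability
    fracs[k] and receives tau_i = amps[k] * xi_i + N(0, stds[k]^2)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k = rng.choice(len(fracs), size=xi.shape, p=fracs)
    amps, stds = np.asarray(amps), np.asarray(stds)
    return amps[k] * xi + stds[k] * rng.standard_normal(xi.shape)
```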
Other cases of interest, in which the restoration agent operates with other imprecise estimates, are discussed in [7]. All confirm the robustness of selective freezing.

It is interesting to study the more realistic case of two-dimensional images, since we have so far presented analytical results for the mean-field model only. As confirmed by the results of the Monte Carlo simulations in Fig. 2(b), the overlaps of selective freezing are much steadier than that of the one-stage dynamics when the decoding temperature changes. This steadiness is most remarkable for a freezing fraction of f = 0.9.

5 Discussions

We have introduced a multistage technique for error-correcting codes and image restoration, in which the information extracted from the former stage can be used selectively to improve the performance of the latter one. While the overlap $M_{sf}$ of selective freezing is bounded by the optimal performance of the one-stage dynamics derived in [3], it has the advantage of being tolerant to uncertainties in hyperparameter estimation. This is confirmed by both analytical and simulational results for mean-field and finite-dimensional models. Improvement is observed both above and below the optimal decoding temperature, superseding the observations in [7]. As an example, we have illustrated its robustness when the noise distribution is composed of more than one Gaussian component, as in the case of modern communication channels supporting multimedia applications. Selective freezing can be generalized to more than two stages, in which spins that remain relatively stable in one stage are progressively frozen in the following one. It is expected that the performance can then be even more robust.

On the other hand, we have a remark about the basic assumption of the cavity method, namely that the addition or removal of a spin causes a small change in the system describable by a perturbative approach. In fact, adding or removing a spin may cause the thermal averages of other spins to cross the thresholds $\pm\theta$ from below to above (or vice versa). This change, though often small, induces a non-negligible change of the thermal averages from fractional values to the frozen values of $\pm 1$ (or vice versa) in the second stage. The perturbative analysis of these changes is only approximate. The situation is reminiscent of similar instabilities in other disordered systems such as the perceptron, which are equivalent to Almeida-Thouless instabilities in the replica method [9]. A full treatment of the problem would require the introduction of a rough energy landscape [9], or the replica symmetry breaking ansatz in the replica method [8]. Nevertheless, previous experience with disordered systems shows that the corrections made by a more complete treatment may not be large in the ordered phase. For example, the simulational results in Fig. 1(b) are close to the corresponding analytical results in [7].

In practical implementations of error-correcting codes, algorithms based on belief-propagation methods are often employed [10]. It has recently been shown that such decoded messages converge to the solutions of the TAP equations in the corresponding thermodynamic system [11]. Again, the performance of these algorithms is sensitive to the estimation of the hyperparameters. We propose that the selective freezing procedure has the potential to make these algorithms more robust.

Acknowledgments

This work was partially supported by the Research Grant Council of Hong Kong (HKUST6157/99P).

References
[1] R. J. McEliece, The Theory of Information and Coding, Encyclopedia of Mathematics and its Applications (Addison-Wesley, Reading, MA, 1977).
[2] S. Geman and D. Geman, IEEE Trans. PAMI 6, 721 (1984).
[3] H. Nishimori and K. Y. M. Wong, Phys. Rev. E 60, 132 (1999).
[4] D. J. C. Mackay, Neural Computation 4, 415 (1992).
[5] J. M. Pryce and A. D. Bruce, J. Phys. A 28, 511 (1995).
[6] K. Y. M. Wong, Europhys. Lett. 36, 631 (1996).
[7] K. Y. M. Wong and H. Nishimori, submitted to Phys. Rev. E (2000).
[8] M. Mezard, G. Parisi, and V. A. Virasoro, Spin Glass Theory and Beyond (World Scientific, Singapore, 1987).
[9] K. Y. M. Wong, Advances in Neural Information Processing Systems 9, 302 (1997).
[10] B. J. Frey, Graphical Models for Machine Learning and Digital Communication (MIT Press, 1998).
[11] Y. Kabashima and D. Saad, Europhys. Lett. 44, 668 (1998).
Accumulator networks: Suitors of local probability propagation

Brendan J. Frey and Anitha Kannan
Intelligent Algorithms Lab, University of Toronto, www.cs.toronto.edu/~frey

Abstract

One way to approximate inference in richly-connected graphical models is to apply the sum-product algorithm (a.k.a. probability propagation algorithm), while ignoring the fact that the graph has cycles. The sum-product algorithm can be directly applied in Gaussian networks and in graphs for coding, but for many conditional probability functions - including the sigmoid function - direct application of the sum-product algorithm is not possible. We introduce "accumulator networks" that have low local complexity (but exponential global complexity) so the sum-product algorithm can be directly applied. In an accumulator network, the probability of a child given its parents is computed by accumulating the inputs from the parents in a Markov chain or more generally a tree. After giving expressions for inference and learning in accumulator networks, we give results on the "bars problem" and on the problem of extracting translated, overlapping faces from an image.

1 Introduction

Graphical probability models with hidden variables are capable of representing complex dependencies between variables, filling in missing data and making Bayes-optimal decisions using probabilistic inferences (Hinton and Sejnowski 1986; Pearl 1988; Neal 1992). Large, richly-connected networks with many cycles can potentially be used to model complex sources of data, such as audio signals, images and video. However, when the number of cycles in the network is large (more precisely, when the cut set size is exponential), exact inference becomes intractable. Also, to learn a probability model with hidden variables, we need to fill in the missing data using probabilistic inference, so learning also becomes intractable.

To cope with the intractability of exact inference, a variety of approximate inference methods have been invented, including Monte Carlo (Hinton and Sejnowski 1986; Neal 1992), Helmholtz machines (Dayan et al. 1995; Hinton et al. 1995), and variational techniques (Jordan et al. 1998). Recently, the sum-product algorithm (a.k.a. probability propagation, belief propagation) (Pearl 1988) became a major contender when it was shown to produce astounding performance on the problem of error-correcting decoding in graphs with over 1,000,000 variables and cut set sizes exceeding $2^{100{,}000}$ (Frey and Kschischang 1996; Frey and MacKay 1998; McEliece et al. 1998).

The sum-product algorithm passes messages in both directions along the edges in a graphical model and fuses these messages at each vertex to compute an estimate of P(variable|obs), where obs is the assignment of the observed variables.

Figure 1: The sum-product algorithm passes messages in both directions along each edge in a Bayesian network. Each message is a function of the parent. (a) Incoming messages are fused to compute an estimate of P(y|observations). (b) Messages are combined to produce an outgoing message $\pi_k(y)$. (c) Messages are combined to produce an outgoing message $\lambda_j(x_j)$. Initially, all messages are set to 1. Observations are accounted for as described in the text.

In a directed graphical model (Bayesian belief network), the message on an edge is a function of the parent of the edge. The messages are initialized to 1 and then the variables are processed in some order or in parallel.
Each variable fuses incoming messages and produces outgoing messages, accounting for observations as described below. If x_1, ..., x_J are the parents of a variable y and z_1, ..., z_K are the children of y, messages are fused at y to produce the function F(y) as follows (see Fig. 1a):

F(y) = (∏_k λ_k(y)) (∑_{x_1} ... ∑_{x_J} P(y | x_1, ..., x_J) ∏_j π_j(x_j)) ≈ P(y, obs),   (1)

where P(y | x_1, ..., x_J) is the conditional probability function associated with y. If the graph is a tree and if messages are propagated from every variable in the network to y, as described below, the estimate is exact: F(y) = P(y, obs). Also, normalizing F(y) gives P(y | obs). If the graph has cycles, this inference is approximate.

The message π_k(y) passed from y to z_k is computed as follows (see Fig. 1b):

π_k(y) = F(y) / λ_k(y).   (2)

The message λ_j(x_j) passed from y to x_j is computed as follows (see Fig. 1c):

λ_j(x_j) = ∑_y ∑_{x_1} ... ∑_{x_{j-1}} ∑_{x_{j+1}} ... ∑_{x_J} (∏_k λ_k(y)) P(y | x_1, ..., x_J) (∏_{i≠j} π_i(x_i)).   (3)

Notice that x_j is not summed over and is excluded from the product of the π messages on the right. If y is observed to have the value y*, the fused result at y and the outgoing π messages are modified as follows:

F(y) ← { F(y) if y = y*; 0 otherwise },   π_k(y) ← { π_k(y) if y = y*; 0 otherwise }.   (4)

The outgoing λ messages are computed as follows:

λ_j(x_j) = ∑_{x_1} ... ∑_{x_{j-1}} ∑_{x_{j+1}} ... ∑_{x_J} (∏_k λ_k(y*)) P(y = y* | x_1, ..., x_J) (∏_{i≠j} π_i(x_i)).   (5)

If the graph is a tree, these formulas can be derived quite easily using the fact that summations distribute over products. If the graph is not a tree, a local independence assumption can be made to justify these formulas. In any case, the algorithm computes products and summations locally in the graph, so it is often called the "sum-product" algorithm.

Figure 2: The local complexity of a richly connected directed graphical model such as the one in (a) can be simplified by assuming that the effects of a child's parents are accumulated by a low-complexity Markov chain as shown in (b). (c) The general structure of the "accumulator network" considered in this paper.
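As a concrete illustration of the fusion rule (1) and the outgoing message (2), here is a minimal sketch for a single binary variable; the function names, array layout, and the restriction to binary variables are our own choices, not the paper's. The explicit loop over all parent configurations makes visible why the local cost is exponential in the number of parents.

```python
import numpy as np

# Minimal sketch of fusion (1) and the outgoing message (2) at one binary
# variable y with parents x_1..x_J and children z_1..z_K. All names here
# (fuse, cpt, pi_msgs, lam_msgs) are illustrative, not from the paper.

def fuse(cpt, pi_msgs, lam_msgs):
    """F(y) ~ P(y, obs).

    cpt:      array of shape (2,)*J + (2,), cpt[x_1, ..., x_J, y] = P(y | x's)
    pi_msgs:  list of J arrays of shape (2,), pi_j(x_j)
    lam_msgs: list of K arrays of shape (2,), lam_k(y)
    """
    J = len(pi_msgs)
    F = np.zeros(2)
    for y in (0, 1):
        # the sum over all 2^J parent configurations is what makes the
        # local complexity exponential in the number of parents
        for x in np.ndindex(*(2,) * J):
            p = cpt[x + (y,)]
            for j in range(J):
                p *= pi_msgs[j][x[j]]
            F[y] += p
    for lam in lam_msgs:
        F *= lam                      # multiply in the children's messages
    return F

def pi_out(F, lam_k):
    return F / lam_k                  # Eq. (2): pi_k(y) = F(y) / lam_k(y)
```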
2 Accumulator networks

The complexity of the local computations at a variable generally scales exponentially with the number of parents of the variable. For example, fusion (1) requires summing over all configurations of the parents. However, for certain types of conditional probability function P(y | x_1, ..., x_J), this exponential sum reduces to a linear-time computation. For example, if P(y | x_1, ..., x_J) is an indicator function for y = x_1 XOR x_2 XOR ... XOR x_J (a common function for error-correcting coding), the summation can be computed in linear time using a trellis (Frey and MacKay 1998). If the variables are real-valued and P(y | x_1, ..., x_J) is Gaussian with mean given by a linear function of x_1, ..., x_J, the integration can be computed using linear algebra (cf. Weiss and Freeman 2000; Frey 2000). In contrast, exact local computation for the sigmoid function, P(y | x_1, ..., x_J) = 1/(1 + exp[-θ_0 - ∑_j θ_j x_j]), requires the full exponential sum. Barber (2000) considers approximating this sum using a central limit theorem approximation.

In an "accumulator network", the probability of a child given its parents is computed by accumulating the inputs from the parents in a Markov chain or more generally a tree. (For simplicity, we use Markov chains in this paper.) Fig. 2a and b show how a layered Bayesian network can be redrawn as an accumulator network. Each accumulation variable (state variable in the accumulation chain) has just 2 parents, and the number of computations needed for the sum-product computations for each variable in the original network now scales with the number of parents and the maximum state size of the accumulation chain in the accumulator network.

Fig. 2c shows the general form of accumulator network considered in this paper, which corresponds to a fully connected Bayes net on variables x_1, ..., x_N. In this network, the variables are x_1, ..., x_N and the accumulation variables for x_i are s_{i,1}, ..., s_{i,i-1}. The effect of variable x_j on child x_i is accumulated by s_{i,j}. The joint distribution over the variables X = {x_i : i = 1, ..., N} and the accumulation variables S = {s_{i,j} : i = 1, ..., N, j = 1, ..., i-1} is

P(X, S) = ∏_{i=1}^{N} [ (∏_{j=1}^{i-1} P(s_{i,j} | x_j, s_{i,j-1})) P(x_i | s_{i,i-1}) ].   (6)

If x_j is not a parent of x_i in the original network, we set P(s_{i,j} | x_j, s_{i,j-1}) = 1 if s_{i,j} = s_{i,j-1} and P(s_{i,j} | x_j, s_{i,j-1}) = 0 if s_{i,j} ≠ s_{i,j-1}.

A well-known example of an accumulator network is the noisy-OR network (Pearl 1988; Neal 1992). In this case, all variables are binary and we set

P(s_{i,j} = 1 | x_j, s_{i,j-1}) = { 1 if s_{i,j-1} = 1; p_{i,j} if x_j = 1 and s_{i,j-1} = 0; 0 otherwise },   (7)

where p_{i,j} is the probability that x_j = 1 turns on the OR-chain.

Using an accumulation chain whose state space size equals the number of configurations of the parent variables, we can produce an accumulator network that can model the same joint distributions on x_1, ..., x_N as any Bayesian network. Inference in an accumulator network is performed by passing messages as described above, either in parallel, at random, or in a regular fashion, such as up the accumulation chains, left to the variables, right to the accumulation chains and down the accumulation chains, iteratively. Later, we give results for an accumulator network that extracts images of translated, overlapping faces from a visual scene. The accumulation variables represent intensities of light rays at different depths in a layered 3-D scene.

2.1 Learning accumulator networks

To learn the conditional probability functions in an accumulator network, we apply the sum-product algorithm for each training case to compute sufficient statistics. Following Russell and Norvig (1995), the sufficient statistic needed to update the conditional probability function P(s_{i,j} | x_j, s_{i,j-1}) for s_{i,j} in Fig. 2c is P(s_{i,j}, x_j, s_{i,j-1} | obs). In particular,

∂ log P(obs) / ∂ P(s_{i,j} | x_j, s_{i,j-1}) = P(s_{i,j}, x_j, s_{i,j-1} | obs) / P(s_{i,j} | x_j, s_{i,j-1}).   (8)

P(s_{i,j}, x_j, s_{i,j-1} | obs) is approximated by normalizing the product of P(s_{i,j} | x_j, s_{i,j-1}) and the λ and π messages arriving at s_{i,j}. (This approximation is exact if the graph is a tree.) The sufficient statistics can be used for online learning or batch learning. If batch learning is used, the sufficient statistics are averaged over the training set and then the conditional probability functions are modified. In fact, the conditional probability function P(s_{i,j} | x_j, s_{i,j-1}) can be set equal to the normalized form of the average sufficient statistic, in which case learning performs approximate EM, where the E-step is approximated by the sum-product algorithm.
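To make the linear-time accumulation concrete, the following is a toy sketch of the noisy-OR chain of Eq. (7): given each parent's probability of being on, a single forward pass along the chain computes the probability that the chain ends in the "on" state in time linear in the number of parents, replacing the exponential sum over parent configurations. The names below are ours, and treating the parents as independent mirrors the local assumption used by the sum-product computation.

```python
import numpy as np

# Toy forward pass along a noisy-OR accumulation chain, Eq. (7). q[j] is the
# probability that parent x_j is on, and p[j] the probability that an "on"
# parent turns the chain on; both arrays are invented for the example.

def forward_or(q, p):
    s_on = 0.0                              # P(s_0 = 1): the chain starts off
    for qj, pj in zip(q, p):
        # s_j = 1 if s_{j-1} = 1, or if x_j = 1 and its link fires
        s_on = s_on + (1.0 - s_on) * qj * pj
    return s_on                             # P(s_J = 1), computed in O(J)

print(forward_or(np.array([0.5, 0.5, 0.5]), np.array([0.9, 0.9, 0.9])))
```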
3 The bars problem

Fig. 3a shows the network structure for the binary bars problem and Fig. 3b shows 30 training examples. For an N x N binary image, the network has 3 layers of binary variables: 1 top-layer variable (meant to select orientation); 2N middle-layer variables (meant to select bars); and N^2 bottom-layer image variables. For large N, performing exact inference is computationally intractable, and hence the need for approximate inference. Accumulator networks enable efficient inference using probability propagation, since local computations are made feasible. The topology of the accumulator network can be easily tailored to the bars problem, as described above. Given an accumulator network with the proper conditional probability tables, inference computes the probability of each bar and the probability of vertical versus horizontal orientation for an input image.

Figure 3: (a) Bayesian network for the bars problem. (b) Examples of typical images. (c) KL divergence between approximate inference and exact inference after each iteration.

After each iteration of probability propagation, messages are fused to produce estimates of these probabilities. Fig. 3c shows the Kullback-Leibler divergence between these approximate probabilities and the exact probabilities after each iteration, for 5 input images. The figure also shows the most probable configuration found by approximate inference. In most cases, we found that probability propagation correctly infers the presence of appropriate bars and the overall orientation of the bars. In cases of multiple interpretations of the image (e.g., Fig. 3c, image 4), probability propagation tended to find appropriate interpretations, although the divergence between the approximate and exact inferences is larger.

Starting with an accumulator network with random parameters, we trained the network as described above. Fig. 4 shows the online learning curves corresponding to different learning rates. The log-likelihood oscillates and although the optimum (horizontal line) is not reached, the results are encouraging.

Figure 4: Learning curves for learning rates .05, .075 and .1.

4 Accumulating light rays for layered vision

We give results on an accumulator network that extracts image components from scenes constructed from different types of overlapping face at random positions. Suppose we divide up a 3-D scene into L layers and assume that one of O objects can sit in each layer in one of P positions. The total number of object-position combinations per layer is K = O x P. For notational convenience, we assume that each object-position pair is a different object modeled by an opaqueness map (probability that each pixel is opaque) and an appearance map (intensity of each pixel). We constrain the opaqueness and appearance maps of the same object in different positions to be the same, up to translation. Fig. 5a shows the appearance maps of 4 such objects (the first one is a wall). In our model, p_kn is the probability that the nth pixel of object k is opaque and w_kn is the intensity of the nth pixel for object k. The input images are modeled by randomly picking an object in each of L layers, choosing whether each pixel in each layer is transparent or opaque, accumulating light intensity by imaging the pixels through the layers, and then adding Gaussian noise.

Figure 5: (a) Learned appearance maps for a wall (all pixels dark and nearly opaque) and 3 faces. (b) An image produced by combining the maps in (a) and adding noise. (c) Object-specific segmentation maps. The brightness of a pixel in the kth map corresponds to the probability that the pixel is imaged by object k.
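Before formalizing the imaging operations, a toy generative sketch of the process just described may help fix ideas. All parameter values and array conventions below are our own: w holds the appearance maps, rho the opaqueness maps (called p_kn in the text), and object index 0 stands for an empty layer.

```python
import numpy as np

# Toy sampler for the layered imaging model: pick an object per layer,
# decide pixel opaqueness, accumulate the light ray from the deepest layer
# toward the camera, then add Gaussian noise. Parameters are invented.

def sample_image(w, rho, L, sigma, rng):
    K, N = w.shape                       # K object-position pairs, N pixels
    y = np.zeros(N)                      # ray intensity behind the last layer
    for _ in range(L):                   # from layer L (deep) to layer 1
        k = rng.integers(0, K + 1)       # 0 means an empty layer
        if k > 0:
            opaque = rng.random(N) < rho[k - 1]
            y = np.where(opaque, w[k - 1], y)  # opaque pixels reset the ray
    return y + sigma * rng.normal(size=N)      # camera noise on y_n^1

rng = np.random.default_rng(0)
w = rng.random((4, 64))                  # 4 toy appearance maps, 64 pixels
rho = np.full((4, 64), 0.9)              # mostly-opaque objects
image = sample_image(w, rho, L=4, sigma=0.05, rng=rng)
```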
Fig. 6 shows the accumulator network for this model. z^l ∈ {0, 1, ..., K} is the index of the object in the lth layer (z^l = 0 denotes an empty layer), where layer 1 is adjacent to the camera and layer L is farthest from the camera. y_n^l is the accumulated discrete intensity of the light ray for pixel n at layer l. y_n^l depends on the identity of the object in the current layer, z^l, and the intensity of pixel n in the previous layer, y_n^{l+1}. So,

P(y_n^l | z^l, y_n^{l+1}) =
  1               if z^l = 0 and y_n^l = y_n^{l+1},
  1               if z^l > 0 and y_n^l = w_{z^l n} = y_n^{l+1},
  p_{z^l n}       if z^l > 0 and y_n^l = w_{z^l n} ≠ y_n^{l+1},
  1 - p_{z^l n}   if z^l > 0 and y_n^l = y_n^{l+1} ≠ w_{z^l n},
  0               otherwise.   (9)

Each condition corresponds to a different imaging operation at layer l for the light ray corresponding to pixel n. x_n is the discretized intensity of pixel n, obtained from the light ray arriving at the camera, y_n^1. P(x_n | y_n^1) adds Gaussian noise to y_n^1.

Figure 6: An accumulator network for layered vision.

After training the network on 200 labeled images, we applied iterative inference to identify and locate image components. After each iteration, the message passed from y_n^l to z^l is an estimate of the probability that the light ray for pixel n is imaged by object z^l at layer l (i.e., not occluded by other objects). So, for each object at each layer, we have an n-pixel "probabilistic segmentation map". In Fig. 5c we show the 4 maps in layer 1 corresponding to the objects shown in Fig. 5a, obtained after 12 iterations of the sum-product algorithm.

One such set of segmentation maps can be drawn for each layer. For deeper layers, the maps hopefully segment the part of the scene that sits behind the objects in the shallower layers. Fig. 7a shows the sets of segmentation maps corresponding to different layers, after each iteration of probability propagation, for the input image shown on the far right. After 1 iteration, the segmentation in the first layer is quite poor, causing uncertain segmentation in deeper layers (except for the wall, which is mostly segmented properly in layer 2). As the number of iterations increases, the algorithm converges to the correct segmentation, where object 2 is in front, followed by objects 3, 4 and 1 (the wall).

It may appear from the input image in Fig. 7a that another possible depth ordering is object 2 in front, followed by objects 4, 3 and 1 - i.e., objects 3 and 4 may be reversed. However, it turns out that if this were the order, a small amount of dark hair from the top of the horizontal head would be showing. We added an extremely large amount of noise to the image used above, to see what the algorithm would do when the two depth orders really are equally likely. Fig. 7b shows the noisy image and the series of segmentation maps produced at each layer as the number of iterations increases. The segmentation maps for layer 1 show that object 2 is correctly identified as being in the front. Quite surprisingly, the segmentation maps in layer 2 oscillate between the two plausible interpretations of the scene - object 3 in front of object 4 and object 4 in front of object 3. Although we do not yet know how robust these oscillations are, or how accurately they reflect the probability masses in the different modes, this behavior is potentially very useful.

Figure 7: (a) Probabilistic segmentation maps for each layer (column) after each iteration (row) of probability propagation, for the image on the far right. (b) When a large amount of noise is added to the image, the network oscillates between the two interpretations.
References

D. Barber 2000. Tractable belief propagation. The Learning Workshop, Snowbird, UT.

B. J. Frey and F. R. Kschischang 1996. Probability propagation and iterative decoding. Proceedings of the 34th Allerton Conference on Communication, Control and Computing, University of Illinois at Urbana.

B. J. Frey and D. J. C. MacKay 1998. A revolution: Belief propagation in graphs with cycles. In M. I. Jordan, M. J. Kearns and S. A. Solla (eds), Advances in Neural Information Processing Systems 10, MIT Press, Cambridge, MA.

M. I. Jordan, Z. Ghahramani, T. S. Jaakkola and L. K. Saul 1999. An introduction to variational methods for graphical models. In M. I. Jordan (ed), Learning in Graphical Models, MIT Press, Cambridge, MA.

R. McEliece, D. J. C. MacKay and J. Cheng 1998. Turbo decoding as an instance of Pearl's belief propagation algorithm. IEEE Journal on Selected Areas in Communications 16:2.

K. P. Murphy, Y. Weiss and M. I. Jordan 1999. Loopy belief propagation for approximate inference: An empirical study. Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, CA.

J. Pearl 1988. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.

S. Russell and P. Norvig 1995. Artificial Intelligence: A Modern Approach. Prentice-Hall.

Y. Weiss and W. T. Freeman 2000. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. In S. A. Solla, T. K. Leen and K.-R. Müller (eds), Advances in Neural Information Processing Systems 12, MIT Press.
NEURAL NET RECEIVERS IN MULTIPLE-ACCESS COMMUNICATIONS

Bernd-Peter Paris, Geoffrey Orsak, Mahesh Varanasi, Behnaam Aazhang
Department of Electrical and Computer Engineering, Rice University, Houston, TX 77251-1892

ABSTRACT

The application of neural networks to the demodulation of spread-spectrum signals in a multiple-access environment is considered. This study is motivated in large part by the fact that, in a multiuser system, the conventional (matched filter) receiver suffers severe performance degradation as the relative powers of the interfering signals become large (the "near-far" problem). Furthermore, the optimum receiver, which alleviates the near-far problem, is too complex to be of practical use. Receivers based on multi-layer perceptrons are considered as a simple and robust alternative to the optimum solution. The optimum receiver is used to benchmark the performance of the neural net receiver; in particular, it is proven to be instrumental in identifying the decision regions of the neural networks. The back-propagation algorithm and a modified version of it are used to train the neural net. An importance sampling technique is introduced to reduce the number of simulations necessary to evaluate the performance of neural nets. In all examples considered the proposed neural net receiver significantly outperforms the conventional receiver.

INTRODUCTION

In this paper we consider the problem of demodulating signals in a code-division multiple-access (CDMA) Gaussian channel. Multiple accessing in the code domain is achieved by spreading the spectrum of the transmitted signals using preassigned code waveforms. The conventional method of demodulating a spread-spectrum signal in a multiuser environment employs one filter matched to the desired signal. Since the conventional receiver ignores the presence of interfering signals, it is reliable only when there are few simultaneous transmissions. Furthermore, when the relative received powers of the interfering signals become large (the "near-far" problem), severe performance degradation of the system is observed even in situations with relatively low bandwidth efficiencies (defined as the ratio of the number of channel subscribers to the spread of the bandwidth) [Aazhang 87]. For this reason there has been an interest in designing optimum receivers for multi-user communication systems [Verdu 86, Lupas 89, Poor 88]. The resulting optimum demodulators, however, have a variable decoding delay with computational and storage complexity that depend exponentially on the number of active users. Unfortunately, this computational intensity is unacceptable in many applications. There is hence a need for near-optimum receivers that are robust to near-far effects, with a reasonable computational complexity to ensure their practical implementation. In this study, we introduce a class of neural net receivers that are based on multi-layer perceptrons trained via the back-propagation algorithm. Neural net receivers are very attractive alternatives to the optimum and conventional receivers due to their highly parallel structures. As we will observe, the performance of the neural net receivers closely tracks that of the optimum receiver in all examples considered.

SYSTEM DESCRIPTION

In the multiple-access network of interest, transmitters are assumed to share a radio band in a combination of the time and code domain.
One way of multiple accessing in the code domain is spread spectrum, which is a signaling scheme that uses a much wider bandwidth than necessary for a given data rate. Let us assume that in a given time interval there are K active transmitters in the network. In a simple setting, the kth active user, in a symbol interval, transmits a signal from a binary signal set derived from the set of code waveforms assigned to the corresponding user. The signal is time limited to the interval [0, T], where T is the symbol duration. In this paper we will concentrate on symbol-synchronous CDMA systems. Synchronous systems find applications in time-slotted channels with the central (base) station transmitting to remote (mobile) terminals and also in relays between central stations. The synchronous problem will also be construed as providing us with a manageable setting to better understand the issues in the more difficult asynchronous situation. In a synchronous CDMA system, the users maintain time synchronism so that the relative time delays associated with all users are assumed to be zero.

To illustrate the potentials of the proposed multiuser detector, we present the application to binary PSK direct-sequence signals in coherent systems. Therefore, the signal at a given receiver is the superposition of the K transmitted signals in additive channel noise (see [Aazhang 87, Lupas 89] and references within):

r(t) = ∑_{i=1}^{P} ∑_{k=1}^{K} b_k^{(i)} A_k a_k(t - iT) cos(ω_c[t - iT] + θ_k) + n_t,  t ∈ ℝ,   (1)

where P is the packet length, A_k is the signal amplitude, ω_c is the carrier frequency, and θ_k is the phase angle. The symbol b_k^{(i)} ∈ {-1, +1} denotes the bit that the kth user is transmitting in the ith time interval. In this model, n_t is the additive channel noise, which is assumed to be a white Gaussian random process. The time-limited code waveform, denoted by a_k(t), is derived from the spreading sequence assigned to the kth user. That is, a_k(t) = ∑_{j=0}^{N-1} a_j^{(k)} p(t - j T_c), where p(t) is the unit rectangular pulse of duration T_c and N is the length of the spreading sequence. One code period a^(k) = [a_0^{(k)}, a_1^{(k)}, ..., a_{N-1}^{(k)}] is used for spreading the signal per symbol, so that T = N T_c. In this system, spectrum efficiency is measured as the ratio of the number of channel users to the spread factor, K/N.

In the next two sections, we first consider optimum synchronous demodulation of the multiuser spread-spectrum signal. Then, we introduce the application of neural networks to the multiuser detection problem.

OPTIMUM RECEIVER

Multiuser detection is an active research area with the objective of developing strategies for demodulation of information sent by several transmitters sharing a channel [Verdu 86, Poor 88, Varanasi 89, Lupas 89]. In these situations with two or more users of a multiple-access Gaussian channel, one filter matched to the desired signal is no longer optimum since the decision statistics are affected by the other signals (e.g., the statistics are disturbed by cross-correlations with the interfering signals). Employing conventional matched filters, because of their structural simplicity, may still be justified if the system is operating at a low bandwidth efficiency. However, as the number of users in the system with fixed bandwidth grows, or as the relative received powers of the interfering signals become large, severe performance degradation of the conventional matched filter is observed [Aazhang 87].
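As an illustration of the signal model in Eq. (1), the following toy sketch generates one symbol interval of the synchronous received signal at baseband, sampled at the chip rate (so the rectangular chip pulse reduces to one sample per chip). All numerical values, and the use of random ±1 codes, are our own choices for the example.

```python
import numpy as np

# Baseband, chip-rate sketch of Eq. (1) for one symbol interval: the received
# vector is a superposition of K users' amplitude- and phase-weighted
# spreading codes plus white Gaussian noise. All numbers are invented.

rng = np.random.default_rng(1)
K, N = 3, 31                                     # users, spreading length
codes = rng.choice([-1.0, 1.0], size=(K, N))     # chips a^(k)
A = np.array([1.0, 2.0, 1.0])                    # amplitudes A_k
theta = np.array([0.0, 0.4, 1.1])                # carrier phases theta_k
b = rng.choice([-1, 1], size=K)                  # bits b_k for this symbol

sigma = 0.5                                      # channel noise level
r = (b * A * np.cos(theta)) @ codes + sigma * rng.normal(size=N)
```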
For direct-sequence spread-spectrum systems, optimum receivers obtained by Verdu and Poor require an extremely high degree of software complexity and storage, which may be unacceptable for most multiple-access systems [Verdu 86, Lupas 89]. Despite implementation problems, studies on optimum demodulation illustrate that the effects of interfering signals in a CDMA system, in principle, can be neutralized.

A complete study of the suboptimum neural net receiver requires a review of the maximum likelihood sequence detection formulation. Assuming that all possible information sequences are independent and equally likely, and defining b(i) = [b_1^{(i)}, b_2^{(i)}, ..., b_K^{(i)}]', it is easy to see that an optimum decision on b(i) is a one-shot decision in that it requires the observation of the received signal only in the ith time interval. Without loss of generality, we will therefore focus our attention on i = 0, drop the time superscript, and consider the demodulation of the vector of bits b with the observation of the received signal in the interval [0, T]. In a K-user Gaussian channel, the most likely information vector is chosen as that which maximizes the log of the likelihood function (see [Lupas 89]), which for this model is equivalent to maximizing

2 ∫_0^T r(t) ∑_{k=1}^{K} b_k S_k(t) dt - ∫_0^T [∑_{k=1}^{K} b_k S_k(t)]^2 dt,   (2)

where S_k(t) = A_k a_k(t) cos(ω_c t + θ_k) is the modulating signal of the kth user. The optimum decision can also be written as

b_opt = arg max_{b ∈ {-1,+1}^K} {2 y' b - b' H b},   (3)

where H is the K x K matrix of signal cross-correlations such that the (k,l)th element is h_{k,l} = <S_k(t), S_l(t)>. The vector of sufficient statistics y consists of the outputs of a bank of K filters, each matched to one of the signals:

y_k = ∫_0^T r(t) S_k(t) dt,  for k = 1, 2, ..., K.   (4)

The maximization in (3) has been shown to be NP-complete [Lupas 89], i.e., no algorithm is known that can solve the maximization problem in time polynomial in K. This computational intensity is unacceptable in many applications. In the next section, we consider a suboptimum receiver that employs artificial neural networks for finding a solution to a maximization problem similar to (3).

NEURAL NETWORK

Until now the application of neural networks to multiple-access communications has not drawn much attention. In this study we employ neural networks for classifying different signals in synchronous additive Gaussian channels. We assume that the information bits of the first of the K signals are of interest; therefore, the phase angle of the desired signal is assumed to be zero (i.e., θ_1 = 0). Two configurations with multi-layer perceptrons and sigmoid nonlinearity are considered for multiuser detection of direct-sequence spread-spectrum signals. One structure is depicted in Figure 1.b, where a layered network of perceptrons processes the sufficient statistics (4) of the multi-user Gaussian channel. In this structure the first layer of the net (referred to as the hidden layer) processes [y_1, y_2, ..., y_K]. The output layer may only have one node since there is only one signal that is being demodulated. This feed-forward structure is then trained using the back-propagation algorithm [Rumelhart 86]. In an alternate configuration, the continuous-time received signal is converted to an N-dimensional vector by sampling the output of the front-end filter at the chip rate T_c^{-1}, as illustrated in Figure 1.a.
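For reference, here is a toy discrete-time sketch of the optimum receiver of Eqs. (3)-(4), against which the neural net receivers are benchmarked: the matched filter bank becomes a matrix product with the effective signal vectors, and the optimum decision is an exhaustive search over all 2^K bit vectors, which is exactly the exponential cost the paper objects to. The setup (codes, amplitudes, phases) is invented for the example.

```python
import itertools
import numpy as np

# Toy chip-rate version of the matched filter bank (4) and the exhaustive
# maximization (3). All values below are invented for the example.
rng = np.random.default_rng(2)
K, N = 3, 31
codes = rng.choice([-1.0, 1.0], size=(K, N))       # spreading sequences a^(k)
A = np.array([1.0, 2.0, 1.0])                      # amplitudes A_k
theta = np.array([0.0, 0.4, 1.1])                  # phases theta_k
b_true = rng.choice([-1, 1], size=K)               # transmitted bits

S = (A * np.cos(theta))[:, None] * codes           # effective signals S_k
r = b_true @ S + 0.5 * rng.normal(size=N)          # received vector

y = S @ r                                          # matched filter outputs, Eq. (4)
H = S @ S.T                                        # cross-correlation matrix H

best, best_val = None, -np.inf
for cand in itertools.product([-1, 1], repeat=K):  # 2^K candidate bit vectors
    cand = np.array(cand)
    val = 2 * y @ cand - cand @ H @ cand           # objective of Eq. (3)
    if val > best_val:
        best, best_val = cand, val
print("sent:", b_true, "ML decision:", best)
```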
The input vector to the net can be written so that the demodulation of the first signal is viewed as a classification problem:

r = b_1 Ā_1 a^(1) + I + η,   (5)

where a^(1) is the spreading code vector of the first user, η is a length-N vector of filtered Gaussian noise samples, and I = ∑_{k=2}^{K} b_k Ā_k cos(θ_k) a^(k) is the multiple-access interference vector, with Ā_k = A_k T_c/2 for all k = 1, 2, ..., K. The layered neural net is then trained to process the input vector for demodulation of the first user's information bits via the back-propagation algorithm. For this configuration we consider two training methods. First, the multi-layer receiver is trained, via the back-propagation algorithm, to classify the parity of the desired signal (referred to as the "trained" example) [Lippmann 87]. In another attempt (referred to as the "preset" example), the input layer of the net is preset as Gaussian classifiers and the other layers are trained using the back-propagation algorithm [Gschwendtner 88].

Since we are interested in understanding the internal representation of knowledge by the weights of the net, a signal space method is developed to illustrate decision regions. In a K-user system where the spreading sequences are not orthogonal, the signals can be represented by orthonormal bases using the Gram-Schmidt procedure. The optimum decision regions in the signal space for the demodulation of b_1 are known [Poor 88] and can be directly compared to the ones for the neural net. Figure 2 illustrates decision regions for the optimum receiver and for the "preset" and "trained" neural net receivers. In this example, two users are sharing a channel with N = 3, signal-to-noise ratio of user 1 (SNR_1) equal to 8 dB, and relative energies of the two users E_2/E_1 = 6 dB. As is seen in this figure, the decision region of the "preset" example is almost identical to the optimum boundary; however, the decision boundary for the "trained" example is quite conservative. Such comparisons are instrumental not only in identifying the pattern by which decisions are made by the neural networks but also in understanding the characteristics of the training algorithms.

PERFORMANCE ANALYSIS

In this paper, we motivate the application of neural nets to single-user detection in multiuser channels by comparing the performance of the receivers in Figure 1 to that of the conventional and the optimum [Poor 88]. Since exact analysis of the bit-error probabilities for the neural net receivers is analytically intractable, we consider Monte Carlo simulations. This method can produce very accurate estimates of bit-error probability if the number of simulations is sufficiently large to ensure the occurrence of several erroneous decisions. The fact that these multiuser receivers operate with near-optimum error rates puts a tremendous computational burden on the computer system. The new variance reduction scheme, developed by Orsak and Aazhang in [Orsak 89], first shifts the simulated channel noise to bias the simulations and then scales the error rate to obtain an unbiased estimate with a reduced variance. This importance sampling technique, which proved to be extremely effective in single-user detection [Orsak 89], is applied to the analysis of the multiuser systems. As discussed in [Orsak 89], the fundamental issue is to generate more errors by biasing the simulations in cases where the error rate is very small.
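As a minimal illustration of the biasing-and-reweighting idea (not the authors' exact estimator), the sketch below estimates the tail probability of scalar Gaussian noise crossing a decision boundary d: samples are drawn from a Gaussian whose mean has been shifted onto the boundary, and each trial is weighted by the likelihood ratio between the true and biased densities. The boundary value and trial count are illustrative.

```python
from math import erfc, sqrt
import numpy as np

# Importance sampling sketch: estimate P(n > d) for n ~ N(0, 1) by drawing
# from the shifted density N(d, 1) and reweighting with the likelihood ratio.
rng = np.random.default_rng(3)
d, trials = 4.0, 100                       # very few trials suffice

n = d + rng.normal(size=trials)            # biased samples from N(d, 1)
w = np.exp(-d * n + 0.5 * d ** 2)          # N(0,1) density / N(d,1) density
p_hat = np.mean((n > d) * w)               # unbiased estimate of P(n > d)

print(p_hat, "vs exact", 0.5 * erfc(d / sqrt(2)))
```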
This strategy is better described by the two-user Gaussian example in Figure 2. In this example the simulation is carried out by generating zero-mean Gaussian noise vectors η, a random phase θ_2, and random values of the interfering bit b_2. Considering b_1 = 1 (corresponding to signals +a_1 + a_2 or +a_1 - a_2, which are marked by "+" in Figure 2), an error occurs if the statistics fall on the left side of the decision boundary. It can be shown that the most efficient biasing scheme corresponds to a shift of the mean of the Gaussian noise and the multiple-access interference such that the mean of the statistics is placed on the decision boundary (the shifted signals are marked by "o" in Figure 2). Since this strategy generates many more errors than standard Monte Carlo, errors are weighted to obtain an unbiased estimate of the error rate.

The importance sampling technique substantially reduces the number of simulation trials compared to standard Monte Carlo for a given accuracy. In Figure 3 the gain, which is defined as the ratio of the number of trials required for a fixed variance using Monte Carlo to that using the importance sampling method, is plotted versus the bit-error probability. In this example, the spreading sequence length is N = 3 and the relative energies of the two users are E_2/E_1 = 6 dB. The gain in this example of a severe near-far problem is inversely proportional to the error rate. Furthermore, results from extensive analysis indicated that the proposed importance sampling technique is well suited for problems in multi-user communications, and fewer than 100 trials are sufficient for an accurate error probability estimate.

NUMERICAL RESULTS

The performance of the conventional, optimum [Poor 88] and neural net receivers is compared via Monte Carlo simulations employing the importance sampling method. Except for a difference in the length of the training periods, the two configurations in Figure 1 result in similar average bit-error probabilities. Results presented here correspond to the neural net receiver in Figure 1.a. A two-user Gaussian channel is considered with a severe near-far problem, where E_2/E_1 = 6 dB and the spreading sequence length N = 3. In Figure 4, the average bit-error probabilities of the four receivers (conventional, optimum, and neural nets for the "trained" and "preset" examples) are plotted versus the signal-to-noise ratio of the first user (SNR_1). It is clear from this figure that the two neural net receivers outperform the matched filter receiver over the range of SNR_1. Figure 5 depicts these average error probabilities versus the relative energies of the two users (i.e., E_2/E_1) for a fixed SNR_1 = 8 dB and N = 3. As expected, the conventional receiver becomes multiple-access limited as E_2 increases; however, the performance of the neural net receivers closely tracks that of the optimum receiver for all values of E_2.

We also considered a three-user Gaussian example with a high bandwidth efficiency and a severe near-far problem, where the spreading sequence length N = 3, the first and third users have equal energy, and the second user has four times more energy (i.e., E_2/E_1 = 6 dB). The average error probabilities of the four receivers versus SNR_1 are depicted in Figure 6. The neural net receivers maintained their near-optimum performance even in this three-user example with a spread factor of 3, corresponding to a bandwidth efficiency of 1.

CONCLUSIONS

In this paper, we consider the problem of demodulating a signal in a multiple-access Gaussian channel. The error probability of different neural net receivers was compared with the conventional and optimum receivers in a symbol-synchronous system. As expected, the performance of the conventional receiver (matched filter) is very sensitive to the strength of the interfering users. However, the error probability of the neural net receiver is independent of the strength of the other users and is at least one order of magnitude better than the conventional receiver. Except for a difference in the length of the training periods, the two configurations in Figure 1 result in similar average bit-error probabilities. However, the training strategies, "preset" and "trained", resulted in slightly different error rates and decision regions. The multi-layer perceptron was very successful in the classification problem in the presence of interfering signals. In all the examples that were considered, two layers
The error probability of different neural net receivers were compared with the conventional and optimum receivers in a symbol-synchronous system. As expected the performance of the conventional receiver (matched filter) is very sensitive to the strength of the interfering users. However, the error probability of the neural net receiver is independent of the strength of the other users and is at least one order of magnitude better than the conventional receiver. Except for a difference in the length of training periods, the two configurations in Figure 1 result in similar average bit-error probabilities. However, the training strategies, "preset" and "trained", resulted in slightly different error rates and decision regions. The multi-layer perceptron was very successful in the classification problem in the presence of interfering signals. In all the examples that were considered, two layers 277 278 Paris, Orsak, Varanasi and Aazhang of perceptrons proved to be sufficient to closely approximate the decision boundary of the optimum receiver. We anticipate that this application of neural networks will shed more light on the potentials of neural nets in digital communications. The issues facing the project were quite general in nature and are reported in many neural network studies. However, we were able to address these issues in multiple-access communications since the disturbances are structured and the optimum receiver (which is NP-hard) is well understood. References [Aazhang 87] B. Aazhang and H. V. Poor. Performance of DS/SSMA Communications in Impulsive Channels-Part I: Linear Correlation Receivers. IEEE Trans. Commun., COM-35(1l):1l79-1188, November 1987. [Gschwendtner 88] A. B. Gschwendtner. DARPA Neural Network Study. AFCEA International Press, 1988. [Lippmann 87] R. P. Lippmann and B. Gold. Neural-Net Classifiers Useful for Speech Recognition. In IEEE First Conference on Neural Networks, pages 417-425, San Diego, CA, June 21-24, 1987. [Lupas 89] R. Lupas and S. Verdu. Linear Multiuser Detectors for Synchronous Code-Division Multiple-Access Channels. IEEE Trans. Info. Theory, IT-34, 1989. [Orsak 89] G. Orsak and B. Aazhang. On the Theory of Importance Sampling Applied to the Analysis of Detection Systems. IEEE Trans. Commun., COM-37, April, 1989. [Poor 88] H. V. Poor and S. Verdu. Single-User Detectors for Multiuser Channels. IEEE Trans. Commun., COM-36(1):50-60, January, 1988. [Rumelhart 86] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning Internal Representation by Error Propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. I: Foundations, pages 318-362, MIT Press, 1986. [Varanasi 89] M. K. Varanasi and B. Aazhang. Multistage Detection in Asynchronous Code-Division Multiple-Access Communications. IEEE Trans. Commun., COM-37, 1989. [Verdu 86] S. Verdu. Optimum Multiuser Asymptotic Efficiency. IEEE Trans. Commun., COM-34(9):890-897, September, 1986. Neural Net Receivers in Multiple-Access Communications Sampler (n+l)Tc ~1 ret) (a) (b) Figure 1. Two Neural Net Receiver Structures. 4,---------------~.------rr------~ ? o 2 : ? Matched Filter .... Neural Net (preset) ~ ,l o r' ,.l " l Optimum Receiver A- Neural Net (trained) ~' -2 f I " I I :I : I .... - 4~----~~~~--~----~----~--~ -3 -1 -2 3 o 2 Figure 2. Decision Boundaries of the Various Receivers. ________________________________ 1012~ .~ ~ 10 10 Opt. Receiver 10 8 Neural Net (preset) 10 6 Neural Net (trained) C!' 
Figure 3. Importance Sampling Gain versus Error Rate for the 2-User Example (optimum receiver, neural net "preset", neural net "trained", and matched filter).

Figure 4. Prob. of Error as a Function of the SNR (E_2/E_1 = 4).

Figure 5. Influence of MA-Interference (SNR = 8 dB).

Figure 6. Error Curves for the 3-User Example.
Natural sound statistics and divisive normalization in the auditory system

Odelia Schwartz
Center for Neural Science, New York University
[email protected]

Eero P. Simoncelli
Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York University
[email protected]

Abstract

We explore the statistical properties of natural sound stimuli preprocessed with a bank of linear filters. The responses of such filters exhibit a striking form of statistical dependency, in which the response variance of each filter grows with the response amplitude of filters tuned for nearby frequencies. These dependencies may be substantially reduced using an operation known as divisive normalization, in which the response of each filter is divided by a weighted sum of the rectified responses of other filters. The weights may be chosen to maximize the independence of the normalized responses for an ensemble of natural sounds. We demonstrate that the resulting model accounts for nonlinearities in the response characteristics of the auditory nerve, by comparing model simulations to electrophysiological recordings. In previous work (NIPS, 1998) we demonstrated that an analogous model derived from the statistics of natural images accounts for non-linear properties of neurons in primary visual cortex. Thus, divisive normalization appears to be a generic mechanism for eliminating a type of statistical dependency that is prevalent in natural signals of different modalities.

Signals in the real world are highly structured. For example, natural sounds typically contain both harmonic and rhythmic structure. It is reasonable to assume that biological auditory systems are designed to represent these structures in an efficient manner [e.g., 1, 2]. Specifically, Barlow hypothesized that a role of early sensory processing is to remove redundancy in the sensory input, resulting in a set of neural responses that are statistically independent. Experimentally, one can test this hypothesis by examining the statistical properties of neural responses under natural stimulation conditions [e.g., 3, 4], or the statistical dependency of pairs (or groups) of neural responses. Due to their technical difficulty, such multi-cellular experiments are only recently becoming possible, and the earliest reports in vision appear consistent with the hypothesis [e.g., 5]. An alternative approach, which we follow here, is to develop a neural model from the statistics of natural signals and show that the response properties of this model are similar to those of biological sensory neurons.

A number of researchers have derived linear filter models using statistical criteria. For visual images, this results in linear filters localized in frequency, orientation and phase [6, 7]. Similar work in audition has yielded filters localized in frequency and phase [8]. Although these linear models provide an important starting point for neural modeling, sensory neurons are highly nonlinear. In addition, the statistical properties of natural signals are too complex to expect a linear transformation to result in an independent set of components. Recent results indicate that nonlinear gain control plays an important role in neural processing. Ruderman and Bialek [9] have shown that division by a local estimate of standard deviation can increase the entropy of responses of center-surround filters to natural images. Such a model is consistent with the properties of neurons in the retina and lateral geniculate nucleus.
Heeger and colleagues have shown that the nonlinear behaviors of neurons in primary visual cortex may be described using a form of gain control known as divisive normalization [10], in which the response of a linear kernel is rectified and divided by the sum of other rectified kernel responses and a constant. We have recently shown that the responses of oriented linear filters exhibit nonlinear statistical dependencies that may be substantially reduced using a variant of this model, in which the normalization signal is computed from a weighted sum of other rectified kernel responses [11, 12]. The resulting model, with weighting parameters determined from image statistics, accounts qualitatively for physiological nonlinearities observed in primary visual cortex.

In this paper, we demonstrate that the responses of bandpass linear filters to natural sounds exhibit striking statistical dependencies, analogous to those found in visual images. A divisive normalization procedure can substantially remove these dependencies. We show that this model, with parameters optimized for a collection of natural sounds, can account for nonlinear behaviors of neurons at the level of the auditory nerve. Specifically, we show that: 1) the shape of frequency tuning curves varies with sound pressure level, even though the underlying linear filters are fixed; and 2) superposition of a non-optimal tone suppresses the response of a linear filter in a divisive fashion, and the amount of suppression depends on the distance between the frequency of the tone and the preferred frequency of the filter.

1 Empirical observations of natural sound statistics

The basic statistical properties of natural sounds, as observed through a linear filter, have been previously documented by Attias [13]. In particular, he showed that, as with visual images, the spectral energy falls roughly according to a power law, and that the histograms of filter responses are more kurtotic than a Gaussian (i.e., they have a sharp peak at zero, and very long tails). Here we examine the joint statistical properties of a pair of linear filters tuned for nearby temporal frequencies. We choose a fixed set of filters that have been widely used in modeling the peripheral auditory system [14]. Figure 1 shows joint histograms of the instantaneous responses of a particular pair of linear filters to five different types of natural sound, and to white noise. First note that the responses are approximately decorrelated: the expected value of the y-axis value is roughly zero for all values of the x-axis variable. The responses are not, however, statistically independent: the width of the distribution of responses of one filter increases with the response amplitude of the other filter. If the two responses were statistically independent, then the response of the first filter should not provide any information about the distribution of responses of the other filter. We have found that this type of variance dependency (sometimes accompanied by linear correlation) occurs in a wide range of natural sounds, ranging from animal sounds to music. We emphasize that this dependency is a property of natural sounds, and is not due purely to our choice of linear filters. For example, no such dependency is observed when the input consists of white noise (see Fig. 1). The strength of this dependency varies for different pairs of linear filters.
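For readers who want to reproduce the measurement behind Figure 1, here is a sketch under assumed inputs: L1 and L2 hold the sampled responses of two bandpass filters to the same sound (the filtering itself is omitted; the paper uses a standard auditory filter bank). Raw responses are nearly decorrelated, while their squares are not, and the binned conditional spread traces the "bowtie" shape of the histograms.

```python
import numpy as np

# Sketch of the variance-dependency measurement. L1 and L2 are assumed to be
# 1-D arrays of two filters' responses to the same natural sound.

def dependency_summary(L1, L2, nbins=20):
    c_raw = np.corrcoef(L1, L2)[0, 1]            # near zero: decorrelated
    c_sq = np.corrcoef(L1**2, L2**2)[0, 1]       # positive: variance dependency
    # conditional spread: std of L2 within amplitude bins of L1
    edges = np.quantile(L1, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, L1) - 1, 0, nbins - 1)
    cond_std = np.array([L2[idx == b].std() for b in range(nbins)])
    return c_raw, c_sq, cond_std
```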
Monkey Cat White noise Nocturnal nature I~~; ~ ? Figure 1: Joint conditional histogram of instantaneous linear responses of two bandpass filters with center frequencies 2000 and 2840 Hz. Pixel intensity corresponds to frequency of occurrence of a given pair of values, except that each column has been independently rescaled to fill the full intensity range. For the natural sounds, responses are not independent: the standard deviation of the ordinate is roughly proportional to the magnitude of the abscissa. Natural sounds were recorded from CDs and converted to sampling frequency of 22050 Hz. nearby time instants. Since the dependency involves the variance of the responses, we can substantially reduce it by dividing. In particular, the response of each filter is divided by a weighted sum of responses of other rectified filters and an additive constant. Specifically: L2 Ri = 2: (1) 12 j WjiLj + 0'2 where Li is the instantaneous linear response of filter i, strength of suppression of filter i by filter j. 0' is a constant and Wji controls the We would like to choose the parameters of the model (the weights Wji, and the constant 0') to optimize the independence of the normalized response to an ensemble of natural sounds. Such an optimization is quite computationally expensive. We instead assume a Gaussian form for the underlying conditional distribution, as described in [15]: P (LiILj,j E Ni ) '" N(O; L wjiL; + 0'2) j where Ni is the neighborhood of linear filters that may affect filter i. We then maximize this expression over the sound data at each time t to obtain the parameters: (2) We solve for the optimal parameters numerically, using conjugate gradient descent. Note that the value of 0' depends on the somewhat arbitrary scaling of the input signal (i.e., doubling the input strength would lead to a doubling of 0') . 4 I I I : 0 \,-_ _ _ _----:--' I I I Other squared I filter resRonses \ I I , *' I Other squared filter responses Figure 2: Nonlinear whitening of a natural auditory signal with a divisive normalization model. The histogram on the left shows the statistical dependency of the responses of two linear bandpass filters. The joint histogram on the right shows the approximate independence of the normalized coefficients. Figure 2 depicts our statistically derived neural model. A natural sound is passed through a bank of linear filters (only 2 depicted for readability). The responses of the filters to a natural sound exhibit a strong statistical dependency. Normalization largely removes this dependency, such that vertical cross sections through the joint conditional histogram are all roughly the same. For the simulations in the next section, we use a set of Gammatone filters as the linear front end [14]. We choose a primary filter with center frequency 2000 Hz. We also choose a neighborhood of filters for the normalization signal: 16 filters with center frequencies 205 to 4768 Hz, and replicas of all filters temporally shifted by 100, 200, and 300 samples. We compute optimal values for u and the normalization weights Wj using equation (2), based on statistics of a natural sound ensemble containing 9 animal and speech sounds, each approximately 6 seconds long. 2 Model simulations vs. physiology We compare the normalized responses of the primary filter in our model (with all parameter values held fixed at the optimal values described above) to data recorded electrophysiologically from auditory nerve. 
2 Model simulations vs. physiology

We compare the normalized responses of the primary filter in our model (with all parameter values held fixed at the optimal values described above) to data recorded electrophysiologically from the auditory nerve.

Figure 3 shows data from a "two-tone suppression" experiment, in which the response to an optimal tone is suppressed by the presence of a second tone of non-optimal frequency. Two-tone suppression is often demonstrated by showing that the rate-level function of the optimal tone alone is shifted to the right in the presence of a non-optimal tone. In both cell and model, we obtain a larger rightward shift when the non-optimal tone is relatively close in frequency to the optimal tone, and almost no rightward shift when the non-optimal tone is more than two times the optimal frequency. In the model, this behavior is due to the fact that the strength of the statistical dependency (and thus the strength of the normalization weighting) falls with the frequency separation of a pair of filters.

[Figure 3 panels: cell data (Javel et al., 1978) and model; mean discharge rate vs. level in decibels, with and without a masker at 1.25, 1.55, and 2.00 times the characteristic frequency (CF).]

Figure 3: Two-tone suppression data. Each plot shows neural response as a function of SPL for a single tone (circles), and for a tone in the presence of a secondary suppressive tone at 80 dB SPL (squares). The maximum mean response rate in the model is scaled to fit the cell data. Cell data re-plotted from [16].

[Figure 4 panels: cell data (Rose et al., 1971) and model; mean discharge rate vs. frequency.]

Figure 4: Frequency tuning curves for cell and model for different sound pressure levels. Cell data are re-plotted from [17].

Figure 4 shows frequency tuning for different sound pressure levels. As the sound pressure level (SPL) increases, the frequency tuning becomes broader, developing a "shoulder" and a secondary mode. Both cell and model show similar behavior, despite the fact that we are not fitting the model to these data: all parameters in the model are chosen by optimizing the independence of the responses to the ensemble of natural sound statistics. This result is particularly interesting because the data have been in the literature for many years, and are generally interpreted to mean that the frequency tuning properties of these cells vary with SPL. Our model suggests an alternative interpretation: the fundamental frequency tuning is determined by a fixed linear kernel, and is modulated by a divisive nonlinearity.
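The divisive account of two-tone suppression can be caricatured in a few lines. The sketch below ignores the filter bank entirely and simply divides the probe energy by a weighted masker energy, in the spirit of Eq. (1); the weights and levels are made up for illustration and are not fit to any data.

```python
import numpy as np

def toy_rate(probe_db, w_mask, mask_db=80.0, sigma=1.0):
    # Normalized probe energy in the presence of a fixed masker tone.
    a_probe = 10.0 ** (probe_db / 20.0)
    a_mask = 10.0 ** (mask_db / 20.0)
    return a_probe ** 2 / (sigma ** 2 + w_mask * a_mask ** 2)

levels = np.arange(20, 85, 10)
for w in (0.0, 1e-8, 1e-6):   # larger w = masker closer to the preferred frequency
    print(w, np.round(toy_rate(levels, w), 2))
# Increasing w rescales the rate-level curve, i.e. shifts it rightward on a dB axis.
```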
3 Discussion

We have developed a weighted divisive normalization model for early auditory processing. Both the form and the parameters of the model are determined from natural sound statistics. We have shown that the model can account for some prominent nonlinearities occurring at the level of the auditory nerve. A number of authors have suggested forms of divisive gain control in auditory models. Wang et al. [18] suggest that gain control in early auditory processing is consistent with psychophysical data and might be advantageous for applications of noise removal. Auditory gain control is also a central concept in the work of Lyon (e.g., [19]). Our work may provide theoretical justification for such models of divisive gain control in the auditory system.

Our model is limited in a number of important ways. The current model lacks a detailed specification of a physiological implementation. In particular, normalization must presumably be implemented using lateral or feedback connections between neurons [e.g., 20]. The normalization signal of the model is computed and applied instantaneously, and thus lacks temporal dynamical properties [e.g., 19]. In addition, we have not made any distinction between nonlinearities that arise mechanically in the cochlea and nonlinearities that arise at the neural level. It is likely that normalization occurs at least partially in outer hair cells [21, 22]. On a more theoretical level, we have not addressed mechanisms by which the system optimizes itself. Our modeling uses parameters optimized for a fixed ensemble of natural sounds. Biologically, this optimization would presumably occur on multiple time scales through processes of evolution, development, learning, and adaptation.

The ultimate question regarding the independence hypothesis underlying our model is: how far can such a bottom-up criterion go toward explaining neural processing? It seems likely that the model can be extended to account for levels of processing beyond the auditory nerve. For example, Nelken et al. [23] suggest that co-modulation masking release in auditory cortex results from the statistical structure of natural sound. But ultimately, it seems likely that one must also consider the auditory tasks, such as localization and recognition, that the organism must perform.

References

[1] F Attneave. Some informational aspects of visual perception. Psych. Rev., 61:183-193, 1954.
[2] H B Barlow. Possible principles underlying the transformation of sensory messages. In W A Rosenblith, editor, Sensory Communications, page 217. MIT Press, Cambridge, MA, 1961.
[3] Y Dan, J J Atick, and R C Reid. Efficient coding of natural scenes in the lateral geniculate nucleus: Experimental test of a computational theory. J. Neuroscience, 16:3351-3362, 1996.
[4] H Attias and C E Schreiner. Coding of naturalistic stimuli by auditory midbrain neurons. Adv in Neural Info Processing Systems, 10:103-109, 1998.
[5] W E Vinje and J L Gallant. Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 287, Feb 2000.
[6] B A Olshausen and D J Field. Natural image statistics and efficient coding. Network: Computation in Neural Systems, 7:333-339, 1996.
[7] A J Bell and T J Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[8] A J Bell and T J Sejnowski. Learning the higher-order structure of a natural sound. Network: Computation in Neural Systems, 7:261-266, 1996.
[9] D L Ruderman and W Bialek. Statistics of natural images: Scaling in the woods. Phys. Rev. Letters, 73(6):814-817, 1994.
[10] D J Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181-198, 1992.
[11] E P Simoncelli and O Schwartz. Image statistics and cortical normalization models. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems, volume 11, pages 153-159, Cambridge, MA, 1999. MIT Press.
[12] M J Wainwright, O Schwartz, and E P Simoncelli. Natural image statistics and divisive normalization: Modeling nonlinearities and adaptation in cortical neurons. In R Rao, B Olshausen, and M Lewicki, editors, Statistical Theories of the Brain. MIT Press, 2001. To appear.
[13] H Attias and C E Schreiner. Temporal low-order statistics of natural sounds.
In M Jordan, M Kearns, and S Solla, editors, Adv in Neural Info Processing Systems, volume 9, pages 27-33. MIT Press, 1997.
[14] M Slaney. An efficient implementation of the Patterson and Holdsworth auditory filter bank. Apple Technical Report 35, 1993.
[15] E P Simoncelli. Modeling the joint statistics of images in the wavelet domain. In Proc SPIE, 44th Annual Meeting, volume 3813, Denver, July 1999. Invited presentation.
[16] E Javel, D Geisler, and A Ravindran. Two-tone suppression in auditory nerve of the cat: Rate-intensity and temporal analyses. J. Acoust. Soc. Am., 63(4):1093-1104, 1978.
[17] J E Rose, D J Anderson, and J F Brugge. Some effects of stimulus intensity on response of auditory nerve fibers in the squirrel monkey. J. Neurophys., 34:685-699, 1971.
[18] K Wang and S Shamma. Self-normalization and noise-robustness in early auditory representations. In IEEE Trans. Speech and Audio Proc., volume 2, pages 421-435, 1994.
[19] R F Lyon. Automatic gain control in cochlear mechanics. In P Dallos et al., editor, The Mechanics and Biophysics of Hearing, pages 395-420. Springer-Verlag, 1990.
[20] M Carandini, D J Heeger, and J A Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17:8621-8644, 1997.
[21] D Geisler. From Sound to Synapse: Physiology of the Mammalian Ear. Oxford University Press, New York, 1998.
[22] H B Zhao and J Santos-Sacchi. Auditory collusion and a coupled couple of outer hair cells. Nature, 399(6734):359-362, 1999.
[23] I Nelken, Y Rotman, and O Bar Yosef. Responses of auditory-cortex neurons to structural features of natural sounds. Nature, 397(6715):154-157, 1999.
Algorithms for Non-negative Matrix Factorization

Daniel D. Lee*
*Bell Laboratories, Lucent Technologies, Murray Hill, NJ 07974

H. Sebastian Seung*†
†Dept. of Brain and Cog. Sci., Massachusetts Institute of Technology, Cambridge, MA 02138

Abstract

Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.

1 Introduction

Unsupervised learning algorithms such as principal components analysis and vector quantization can be understood as factorizing a data matrix subject to different constraints. Depending upon the constraints utilized, the resulting factors can be shown to have very different representational properties. Principal components analysis enforces only a weak orthogonality constraint, resulting in a very distributed representation that uses cancellations to generate variability [1, 2]. On the other hand, vector quantization uses a hard winner-take-all constraint that results in clustering the data into mutually exclusive prototypes [3]. We have previously shown that nonnegativity is a useful constraint for matrix factorization that can learn a parts representation of the data [4, 5]. The nonnegative basis vectors that are learned are used in distributed, yet still sparse combinations to generate expressiveness in the reconstructions [6, 7]. In this submission, we analyze in detail two numerical algorithms for learning the optimal nonnegative factors from data.

2 Non-negative matrix factorization

We formally consider algorithms for solving the following problem:

Non-negative matrix factorization (NMF): Given a non-negative matrix V, find non-negative matrix factors W and H such that:

    V \approx W H    (1)

NMF can be applied to the statistical analysis of multivariate data in the following manner. Given a set of multivariate n-dimensional data vectors, the vectors are placed in the columns of an n x m matrix V, where m is the number of examples in the data set. This matrix is then approximately factorized into an n x r matrix W and an r x m matrix H. Usually r is chosen to be smaller than n or m, so that W and H are smaller than the original matrix V. This results in a compressed version of the original data matrix.

What is the significance of the approximation in Eq. (1)? It can be rewritten column by column as v \approx Wh, where v and h are the corresponding columns of V and H. In other words, each data vector v is approximated by a linear combination of the columns of W, weighted by the components of h. Therefore W can be regarded as containing a basis that is optimized for the linear approximation of the data in V. Since relatively few basis vectors are used to represent many data vectors, good approximation can only be achieved if the basis vectors discover structure that is latent in the data.

The present submission is not about applications of NMF, but focuses instead on the technical aspects of finding non-negative matrix factorizations.
Of course, other types of matrix factorizations have been extensively studied in numerical linear algebra, but the nonnegativity constraint makes much of this previous work inapplicable to the present case [8]. Here we discuss two algorithms for NMF based on iterative updates of W and H. Because these algorithms are easy to implement and their convergence properties are guaranteed, we have found them very useful in practical applications. Other algorithms may possibly be more efficient in overall computation time, but are more difficult to implement and may not generalize to different cost functions. Algorithms similar to ours, where only one of the factors is adapted, have previously been used for the deconvolution of emission tomography and astronomical images [9, 10, 11, 12].

At each iteration of our algorithms, the new value of W or H is found by multiplying the current value by some factor that depends on the quality of the approximation in Eq. (1). We prove that the quality of the approximation improves monotonically with the application of these multiplicative update rules. In practice, this means that repeated iteration of the update rules is guaranteed to converge to a locally optimal matrix factorization.

3 Cost functions

To find an approximate factorization V \approx WH, we first need to define cost functions that quantify the quality of the approximation. Such a cost function can be constructed using some measure of distance between two non-negative matrices A and B. One useful measure is simply the square of the Euclidean distance between A and B [13],

    ||A - B||^2 = \sum_{ij} (A_{ij} - B_{ij})^2    (2)

This is lower bounded by zero, and clearly vanishes if and only if A = B. Another useful measure is

    D(A||B) = \sum_{ij} ( A_{ij} \log (A_{ij}/B_{ij}) - A_{ij} + B_{ij} )    (3)

Like the Euclidean distance this is also lower bounded by zero, and vanishes if and only if A = B. But it cannot be called a "distance", because it is not symmetric in A and B, so we will refer to it as the "divergence" of A from B. It reduces to the Kullback-Leibler divergence, or relative entropy, when \sum_{ij} A_{ij} = \sum_{ij} B_{ij} = 1, so that A and B can be regarded as normalized probability distributions.

We now consider two alternative formulations of NMF as optimization problems:

Problem 1: Minimize ||V - WH||^2 with respect to W and H, subject to the constraints W, H >= 0.

Problem 2: Minimize D(V||WH) with respect to W and H, subject to the constraints W, H >= 0.

Although the functions ||V - WH||^2 and D(V||WH) are convex in W only or H only, they are not convex in both variables together. Therefore it is unrealistic to expect an algorithm to solve Problems 1 and 2 in the sense of finding global minima. However, there are many techniques from numerical optimization that can be applied to find local minima. Gradient descent is perhaps the simplest technique to implement, but convergence can be slow. Other methods such as conjugate gradient have faster convergence, at least in the vicinity of local minima, but are more complicated to implement than gradient descent [8]. Gradient-based methods also have the disadvantage of being very sensitive to the choice of step size, which can be very inconvenient for large applications.
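Both cost functions are easy to state in numpy. This is our own sketch; the small eps guard against log(0) and 0/0 is a numerical convenience, not part of the paper's definitions.

```python
import numpy as np

def euclidean_cost(V, W, H):
    # Eq. (2) applied to A = V, B = WH: squared Euclidean distance ||V - WH||^2.
    return float(np.sum((V - W @ H) ** 2))

def divergence_cost(V, W, H, eps=1e-12):
    # Eq. (3) applied to A = V, B = WH: generalized KL divergence D(V || WH).
    WH = W @ H
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))
```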
4 Multiplicative update rules

We have found that the following "multiplicative update rules" are a good compromise between speed and ease of implementation for solving Problems 1 and 2.

Theorem 1: The Euclidean distance ||V - WH|| is nonincreasing under the update rules

    H_{a\mu} \leftarrow H_{a\mu} (W^T V)_{a\mu} / (W^T W H)_{a\mu},    W_{ia} \leftarrow W_{ia} (V H^T)_{ia} / (W H H^T)_{ia}    (4)

The Euclidean distance is invariant under these updates if and only if W and H are at a stationary point of the distance.

Theorem 2: The divergence D(V||WH) is nonincreasing under the update rules

    H_{a\mu} \leftarrow H_{a\mu} [ \sum_i W_{ia} V_{i\mu}/(WH)_{i\mu} ] / \sum_k W_{ka},    W_{ia} \leftarrow W_{ia} [ \sum_\mu H_{a\mu} V_{i\mu}/(WH)_{i\mu} ] / \sum_\nu H_{a\nu}    (5)

The divergence is invariant under these updates if and only if W and H are at a stationary point of the divergence.

Proofs of these theorems are given in a later section. For now, we note that each update consists of multiplication by a factor. In particular, it is straightforward to see that this multiplicative factor is unity when V = WH, so that perfect reconstruction is necessarily a fixed point of the update rules.

5 Multiplicative versus additive update rules

It is useful to contrast these multiplicative updates with those arising from gradient descent [14]. In particular, a simple additive update for H that reduces the squared distance can be written as

    H_{a\mu} \leftarrow H_{a\mu} + \eta_{a\mu} [ (W^T V)_{a\mu} - (W^T W H)_{a\mu} ]    (6)

If the \eta_{a\mu} are all set equal to some small positive number, this is equivalent to conventional gradient descent. As long as this number is sufficiently small, the update should reduce ||V - WH||. Now if we diagonally rescale the variables and set

    \eta_{a\mu} = H_{a\mu} / (W^T W H)_{a\mu},    (7)

then we obtain the update rule for H that is given in Theorem 1. Note that this rescaling results in a multiplicative factor with the positive component of the gradient in the denominator and the absolute value of the negative component in the numerator of the factor.

For the divergence, diagonally rescaled gradient descent takes the form

    H_{a\mu} \leftarrow H_{a\mu} + \eta_{a\mu} [ \sum_i W_{ia} V_{i\mu}/(WH)_{i\mu} - \sum_i W_{ia} ].    (8)

Again, if the \eta_{a\mu} are small and positive, this update should reduce D(V||WH). If we now set

    \eta_{a\mu} = H_{a\mu} / \sum_i W_{ia},    (9)

then we obtain the update rule for H that is given in Theorem 2. This rescaling can also be interpreted as a multiplicative rule with the positive component of the gradient in the denominator and the negative component as the numerator of the multiplicative factor.

Since our choices for \eta_{a\mu} are not small, it may seem that there is no guarantee that such a rescaled gradient descent should cause the cost function to decrease. Surprisingly, this is indeed the case, as shown in the next section.
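A direct numpy transcription of the update rules (4) and (5). The eps terms guarding against division by zero, and the in-place alternating order (H then W), are implementation choices of ours.

```python
import numpy as np

def euclidean_step(V, W, H, eps=1e-12):
    # Eq. (4): multiplicative updates that decrease ||V - WH||^2.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def divergence_step(V, W, H, eps=1e-12):
    # Eq. (5): multiplicative updates that decrease D(V || WH).
    H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
    W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

rng = np.random.default_rng(0)
V = rng.random((100, 50))                           # nonnegative data matrix
W, H = rng.random((100, 10)), rng.random((10, 50))  # r = 10 basis vectors
for _ in range(200):
    W, H = euclidean_step(V, W, H)
```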
6 Proofs of convergence

To prove Theorems 1 and 2, we will make use of an auxiliary function similar to that used in the Expectation-Maximization algorithm [15, 16].

Definition 1: G(h, h') is an auxiliary function for F(h) if the conditions

    G(h, h') >= F(h),    G(h, h) = F(h)    (10)

are satisfied.

The auxiliary function is a useful concept because of the following lemma, which is also graphically illustrated in Fig. 1.

Lemma 1: If G is an auxiliary function, then F is nonincreasing under the update

    h^{t+1} = \arg\min_h G(h, h^t)    (11)

Proof: F(h^{t+1}) <= G(h^{t+1}, h^t) <= G(h^t, h^t) = F(h^t). ∎

Figure 1: Minimizing the auxiliary function G(h, h^t) >= F(h) guarantees that F(h^{t+1}) <= F(h^t) for h^{t+1} = \arg\min_h G(h, h^t).

Note that F(h^{t+1}) = F(h^t) only if h^t is a local minimum of G(h, h^t). If the derivatives of F exist and are continuous in a small neighborhood of h^t, this also implies that the derivatives \nabla F(h^t) = 0. Thus, by iterating the update in Eq. (11) we obtain a sequence of estimates that converges to a local minimum h_{min} = \arg\min_h F(h) of the objective function:

    F(h_{min}) <= ... <= F(h^{t+1}) <= F(h^t) <= ... <= F(h^1) <= F(h^0)    (12)

We will show that by defining the appropriate auxiliary functions G(h, h^t) for both ||V - WH|| and D(V||WH), the update rules in Theorems 1 and 2 easily follow from Eq. (11).

Lemma 2: If K(h^t) is the diagonal matrix

    K_{ab}(h^t) = \delta_{ab} (W^T W h^t)_a / h^t_a    (13)

then

    G(h, h^t) = F(h^t) + (h - h^t)^T \nabla F(h^t) + (1/2)(h - h^t)^T K(h^t)(h - h^t)    (14)

is an auxiliary function for

    F(h) = (1/2) \sum_i ( v_i - \sum_a W_{ia} h_a )^2    (15)

Proof: Since G(h, h) = F(h) is obvious, we need only show that G(h, h^t) >= F(h). To do this, we compare

    F(h) = F(h^t) + (h - h^t)^T \nabla F(h^t) + (1/2)(h - h^t)^T (W^T W)(h - h^t)    (16)

with Eq. (14) to find that G(h, h^t) >= F(h) is equivalent to

    0 <= (h - h^t)^T [ K(h^t) - W^T W ] (h - h^t)    (17)

To prove positive semidefiniteness, consider the matrix (footnote 1)

    M_{ab}(h^t) = h^t_a ( K(h^t) - W^T W )_{ab} h^t_b    (18)

which is just a rescaling of the components of K - W^T W. Then K - W^T W is positive semidefinite if and only if M is, and

    \nu^T M \nu = \sum_{ab} \nu_a M_{ab} \nu_b    (19)
    = \sum_{ab} [ h^t_a (W^T W)_{ab} h^t_b \nu_a^2 - \nu_a h^t_a (W^T W)_{ab} h^t_b \nu_b ]    (20)
    = \sum_{ab} (W^T W)_{ab} h^t_a h^t_b [ (1/2)\nu_a^2 + (1/2)\nu_b^2 - \nu_a \nu_b ]    (21)
    = (1/2) \sum_{ab} (W^T W)_{ab} h^t_a h^t_b ( \nu_a - \nu_b )^2    (22)
    >= 0    (23)

(Footnote 1: One can also show that K - W^T W is positive semidefinite by considering the matrix K^{-1/2} W^T W K^{-1/2}. Then the vector with components \sqrt{h^t_a (W^T W h^t)_a} is a positive eigenvector of K^{-1/2} W^T W K^{-1/2} with unity eigenvalue, and application of the Frobenius-Perron theorem shows that Eq. (17) holds.) ∎

We can now demonstrate the convergence of Theorem 1.

Proof of Theorem 1: Replacing G(h, h^t) in Eq. (11) by Eq. (14) results in the update rule

    h^{t+1} = h^t - K(h^t)^{-1} \nabla F(h^t)    (24)

Since Eq. (14) is an auxiliary function, F is nonincreasing under this update rule, according to Lemma 1. Writing the components of this equation explicitly, we obtain

    h^{t+1}_a = h^t_a (W^T v)_a / (W^T W h^t)_a.    (25)

By reversing the roles of W and H in Lemmas 1 and 2, F can similarly be shown to be nonincreasing under the update rules for W. ∎

We now consider the following auxiliary function for the divergence cost function:

Lemma 3: Define

    G(h, h^t) = \sum_i ( v_i \log v_i - v_i ) + \sum_{ia} W_{ia} h_a    (26)
        - \sum_{ia} v_i \frac{W_{ia} h^t_a}{\sum_b W_{ib} h^t_b} ( \log W_{ia} h_a - \log \frac{W_{ia} h^t_a}{\sum_b W_{ib} h^t_b} )    (27)

This is an auxiliary function for

    F(h) = \sum_i [ v_i \log \frac{v_i}{\sum_a W_{ia} h_a} - v_i + \sum_a W_{ia} h_a ]    (28)

Proof: It is straightforward to verify that G(h, h) = F(h). To show that G(h, h^t) >= F(h), we use convexity of the log function to derive the inequality

    -\log \sum_a W_{ia} h_a <= -\sum_a \alpha_a \log \frac{W_{ia} h_a}{\alpha_a}    (29)

which holds for all nonnegative \alpha_a that sum to unity. Setting

    \alpha_a = \frac{W_{ia} h^t_a}{\sum_b W_{ib} h^t_b}    (30)

we obtain

    -\log \sum_a W_{ia} h_a <= -\sum_a \frac{W_{ia} h^t_a}{\sum_b W_{ib} h^t_b} ( \log W_{ia} h_a - \log \frac{W_{ia} h^t_a}{\sum_b W_{ib} h^t_b} )    (31)

From this inequality it follows that F(h) <= G(h, h^t). ∎

Theorem 2 then follows from the application of Lemma 1.

Proof of Theorem 2: The minimum of G(h, h^t) with respect to h is determined by setting the gradient to zero:

    \frac{dG(h, h^t)}{dh_a} = -\sum_i v_i \frac{W_{ia} h^t_a}{\sum_b W_{ib} h^t_b} \frac{1}{h_a} + \sum_i W_{ia} = 0    (32)

Thus, the update rule of Eq. (11) takes the form

    h^{t+1}_a = \frac{h^t_a}{\sum_k W_{ka}} \sum_i \frac{v_i}{\sum_b W_{ib} h^t_b} W_{ia}.    (33)

Since G is an auxiliary function, F in Eq. (28) is nonincreasing under this update. Rewritten in matrix form, this is equivalent to the update rule in Eq. (5). By reversing the roles of H and W, the update rule for W can similarly be shown to be nonincreasing. ∎
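The guarantee of the auxiliary-function argument is easy to verify numerically. The check below reuses divergence_step, divergence_cost, rng, and V from the sketches above; the tolerance absorbs round-off introduced by the eps guards.

```python
W, H = rng.random((100, 10)), rng.random((10, 50))
costs = []
for _ in range(100):
    W, H = divergence_step(V, W, H)
    costs.append(divergence_cost(V, W, H))
# Theorem 2: the sequence of divergences never increases (up to round-off).
assert all(c0 >= c1 - 1e-9 for c0, c1 in zip(costs, costs[1:]))
```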
7 Discussion

We have shown that application of the update rules in Eqs. (4) and (5) is guaranteed to find at least locally optimal solutions of Problems 1 and 2, respectively. The convergence proofs rely upon defining an appropriate auxiliary function. We are currently working to generalize these theorems to more complex constraints. The update rules themselves are extremely easy to implement computationally, and will hopefully be utilized by others for a wide variety of applications.

We acknowledge the support of Bell Laboratories. We would also like to thank Carlos Brody, Ken Clarkson, Corinna Cortes, Roland Freund, Linda Kaufman, Yann Le Cun, Sam Roweis, Larry Saul, and Margaret Wright for helpful discussions.

References

[1] Jolliffe, IT (1986). Principal Component Analysis. New York: Springer-Verlag.
[2] Turk, M & Pentland, A (1991). Eigenfaces for recognition. J. Cogn. Neurosci. 3, 71-86.
[3] Gersho, A & Gray, RM (1992). Vector Quantization and Signal Compression. Kluwer Acad. Press.
[4] Lee, DD & Seung, HS (1997). Unsupervised learning by convex and conic coding. Proceedings of the Conference on Neural Information Processing Systems 9, 515-521.
[5] Lee, DD & Seung, HS (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401, 788-791.
[6] Field, DJ (1994). What is the goal of sensory coding? Neural Comput. 6, 559-601.
[7] Foldiak, P & Young, M (1995). Sparse coding in the primate cortex. The Handbook of Brain Theory and Neural Networks, 895-898. (MIT Press, Cambridge, MA).
[8] Press, WH, Teukolsky, SA, Vetterling, WT & Flannery, BP (1993). Numerical recipes: the art of scientific computing. (Cambridge University Press, Cambridge, England).
[9] Shepp, LA & Vardi, Y (1982). Maximum likelihood reconstruction for emission tomography. IEEE Trans. MI-2, 113-122.
[10] Richardson, WH (1972). Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55-59.
[11] Lucy, LB (1974). An iterative technique for the rectification of observed distributions. Astron. J. 74, 745-754.
[12] Bouman, CA & Sauer, K (1996). A unified approach to statistical tomography using coordinate descent optimization. IEEE Trans. Image Proc. 5, 480-492.
[13] Paatero, P & Tapper, U (1997). Least squares formulation of robust non-negative factor analysis. Chemometr. Intell. Lab. 37, 23-35.
[14] Kivinen, J & Warmuth, M (1997). Additive versus exponentiated gradient updates for linear prediction. Journal of Information and Computation 132, 1-64.
[15] Dempster, AP, Laird, NM & Rubin, DB (1977). Maximum likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc. 39, 1-38.
[16] Saul, L & Pereira, F (1997). Aggregate and mixed-order Markov models for statistical language processing. In C. Cardie and R. Weischedel (eds), Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, 81-89. ACL Press.
The Kernel Trick for Distances

Bernhard Schölkopf
Microsoft Research, 1 Guildhall Street, Cambridge, UK
[email protected]

Abstract

A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually really distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.

1 Introduction

One of the crucial ingredients of SVMs is the so-called kernel trick for the computation of dot products in high-dimensional feature spaces using simple functions defined on pairs of input patterns. This trick allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, SVMs being but the most prominent example [13, 8]. Although the mathematical result underlying the kernel trick is almost a century old [6], it was only much later [1, 3, 13] that it was made fruitful for the machine learning community. Kernel methods have since led to interesting generalizations of learning algorithms and to successful real-world applications. The present paper attempts to extend the utility of the kernel trick by looking at the problem of which kernels can be used to compute distances in feature spaces. Again, the underlying mathematical results, mainly due to Schoenberg, have been known for a while [7]; some of them have already attracted interest in the kernel methods community in various contexts [11, 5, 15].

Let us consider training data (x_1, y_1), ..., (x_m, y_m) \in X x Y. Here, Y is the set of possible outputs (e.g., in pattern recognition, {±1}), and X is some nonempty set (the domain) that the patterns are taken from. We are interested in predicting the outputs y for previously unseen patterns x. This is only possible if we have some measure that tells us how (x, y) is related to the training examples. For many problems, the following approach works: informally, we want similar inputs to lead to similar outputs. To formalize this, we have to state what we mean by similar. On the outputs, similarity is usually measured in terms of a loss function. For instance, in the case of pattern recognition, the situation is simple: two outputs can either be identical or different. On the inputs, the notion of similarity is more complex. It hinges on a representation of the patterns and a suitable similarity measure operating on that representation.

One particularly simple yet surprisingly useful notion of (dis)similarity -- the one we will use in this paper -- derives from embedding the data into a Euclidean space and utilizing geometrical concepts. For instance, in SVMs, similarity is measured by dot products (i.e. angles and lengths) in some high-dimensional feature space F. Formally, the patterns are first mapped into F using \Phi : X -> F, x \mapsto \Phi(x), and then compared using a dot product \langle \Phi(x), \Phi(x') \rangle. To avoid working in the potentially high-dimensional space F, one tries to pick a feature space in which the dot product can be evaluated directly using a nonlinear function in input space, i.e. by means of the kernel trick

    k(x, x') = \langle \Phi(x), \Phi(x') \rangle.    (1)
Often, one simply chooses a kernel k with the property that there exists some \Phi such that the above holds true, without necessarily worrying about the actual form of \Phi -- already the existence of the linear space F facilitates a number of algorithmic and theoretical issues. It is well established that (1) works out for Mercer kernels [3, 13], or, equivalently, positive definite kernels [2, 14]. Here and below, indices i and j by default run over 1, ..., m.

Definition 1 (Positive definite kernel): A symmetric function k : X x X -> R which for all m \in N, x_i \in X gives rise to a positive definite Gram matrix, i.e. for which for all c_i \in R we have

    \sum_{i,j=1}^m c_i c_j K_{ij} >= 0, where K_{ij} := k(x_i, x_j),    (2)

is called a positive definite (pd) kernel.

One particularly intuitive way to construct a feature map satisfying (1) for such a kernel k proceeds, in a nutshell, as follows (for details, see [2]):

1. Define a feature map

    \Phi : X -> R^X, x \mapsto k(., x).    (3)

Here, R^X denotes the space of functions mapping X into R.

2. Turn it into a linear space by forming linear combinations

    f(.) = \sum_{i=1}^m \alpha_i k(., x_i),    g(.) = \sum_{j=1}^{m'} \beta_j k(., x'_j).    (4)

3. Endow it with a dot product \langle f, g \rangle := \sum_{i=1}^m \sum_{j=1}^{m'} \alpha_i \beta_j k(x_i, x'_j), and turn it into a Hilbert space H_k by completing it in the corresponding norm.

Note that in particular, by definition of the dot product, \langle k(., x), k(., x') \rangle = k(x, x'), hence, in view of (3), we have k(x, x') = \langle \Phi(x), \Phi(x') \rangle, the kernel trick.

This shows that pd kernels can be thought of as (nonlinear) generalizations of one of the simplest similarity measures, the canonical dot product \langle x, x' \rangle, x, x' \in R^N. The question arises as to whether there also exist generalizations of the simplest dissimilarity measure, the distance ||x - x'||^2. Clearly, the distance ||\Phi(x) - \Phi(x')||^2 in the feature space associated with a pd kernel k can be computed using the kernel trick (1) as k(x, x) + k(x', x') - 2k(x, x'). Positive definite kernels are, however, not the full story: there exists a larger class of kernels that can be used as generalized distances, and the following section will describe why.
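Definition 1 can be probed empirically on a finite sample: build the Gram matrix and inspect its eigenvalue spectrum. Passing this test on one sample is only a necessary condition for positive definiteness; the kernel choice, sample, and tolerance below are illustrative assumptions of ours.

```python
import numpy as np

def min_gram_eigenvalue(k, X):
    # Smallest eigenvalue of the Gram matrix K_ij = k(x_i, x_j) on the sample X.
    K = np.array([[k(xi, xj) for xj in X] for xi in X])
    return float(np.linalg.eigvalsh(K).min())

X = np.random.randn(40, 3)
gauss = lambda x, y: np.exp(-np.sum((x - y) ** 2))
print(min_gram_eigenvalue(gauss, X) >= -1e-10)   # True: the Gaussian kernel is pd
```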
2 Kernels as Generalized Distance Measures

Let us start by considering how a dot product and the corresponding distance measure are affected by a translation of the data, x \mapsto x - x_0. Clearly, ||x - x'||^2 is translation invariant while \langle x, x' \rangle is not. A short calculation shows that the effect of the translation can be expressed in terms of ||. - .||^2 as

    \langle (x - x_0), (x' - x_0) \rangle = (1/2) ( -||x - x'||^2 + ||x - x_0||^2 + ||x_0 - x'||^2 ).    (5)

Note that this is, just like \langle x, x' \rangle, still a pd kernel: \sum_{i,j} c_i c_j \langle (x_i - x_0), (x_j - x_0) \rangle = || \sum_i c_i (x_i - x_0) ||^2 >= 0. For any choice of x_0 \in X, we thus get a similarity measure (5) associated with the dissimilarity measure ||x - x'||.

This naturally leads to the question whether (5) might suggest a connection that holds true also in more general cases: what kind of nonlinear dissimilarity measure do we have to substitute instead of ||. - .||^2 on the right hand side of (5) to ensure that the left hand side becomes positive definite? The answer is given by a known result. To state it, we first need to define the appropriate class of kernels.

Definition 2 (Conditionally positive definite kernel): A symmetric function k : X x X -> R which satisfies (2) for all m \in N, x_i \in X and for all c_i \in R with

    \sum_{i=1}^m c_i = 0,    (6)

is called a conditionally positive definite (cpd) kernel.

Proposition 3 (Connection pd -- cpd [2]): Let x_0 \in X, and let k be a symmetric kernel on X x X. Then

    \tilde{k}(x, x') := (1/2) ( k(x, x') - k(x, x_0) - k(x_0, x') + k(x_0, x_0) )    (7)

is positive definite if and only if k is conditionally positive definite.

The proof follows directly from the definitions and can be found in [2]. This result does generalize (5): the negative squared distance kernel is indeed cpd, for \sum_i c_i = 0 implies -\sum_{i,j} c_i c_j ||x_i - x_j||^2 = -\sum_i c_i \sum_j c_j ||x_j||^2 - \sum_j c_j \sum_i c_i ||x_i||^2 + 2\sum_{i,j} c_i c_j \langle x_i, x_j \rangle = 2\sum_{i,j} c_i c_j \langle x_i, x_j \rangle = 2 || \sum_i c_i x_i ||^2 >= 0. In fact, this implies that all kernels of the form

    k(x, x') = -||x - x'||^\beta,    0 < \beta <= 2    (8)

are cpd (they are not pd), by application of the following result:

Proposition 4 ([2]): If k : X x X -> (-\infty, 0] is cpd, then so are -(-k)^\alpha (0 < \alpha < 1) and -\log(1 - k).

To state another class of cpd kernels that are not pd, note first that, as trivial consequences of Definition 2, we know that (i) sums of cpd kernels are cpd, and (ii) any constant b \in R is cpd. Therefore, any kernel of the form k + b, where k is cpd and b \in R, is also cpd. In particular, since pd kernels are cpd, we can take any pd kernel and offset it by b and it will still be at least cpd. For further examples of cpd kernels, cf. [2, 14, 4, 11].

We now return to the main flow of the argument. Proposition 3 allows us to construct the feature map for k from that of the pd kernel \tilde{k}. To this end, fix x_0 \in X and define \tilde{k} according to (7). Due to Proposition 3, \tilde{k} is positive definite. Therefore, we may employ the Hilbert space representation \Phi : X -> H of \tilde{k} (cf. (1)), satisfying \langle \Phi(x), \Phi(x') \rangle = \tilde{k}(x, x'), hence

    ||\Phi(x) - \Phi(x')||^2 = \langle \Phi(x) - \Phi(x'), \Phi(x) - \Phi(x') \rangle = \tilde{k}(x, x) + \tilde{k}(x', x') - 2\tilde{k}(x, x').    (9)

Substituting (7) yields

    ||\Phi(x) - \Phi(x')||^2 = -k(x, x') + (1/2) ( k(x, x) + k(x', x') ).    (10)

We thus have proven the following result.

Proposition 5 (Hilbert space representation of cpd kernels [7, 2]): Let k be a real-valued conditionally positive definite kernel on X, satisfying k(x, x) = 0 for all x \in X. Then there exists a Hilbert space H of real-valued functions on X, and a mapping \Phi : X -> H, such that

    ||\Phi(x) - \Phi(x')||^2 = -k(x, x').    (11)

If we drop the assumption k(x, x) = 0, the Hilbert space representation reads

    ||\Phi(x) - \Phi(x')||^2 = -k(x, x') + (1/2) ( k(x, x) + k(x', x') ).    (12)

It can be shown that if k(x, x) = 0 for all x \in X, then d(x, x') := ||\Phi(x) - \Phi(x')|| = \sqrt{-k(x, x')} is a semi-metric; it is a metric if k(x, x') \ne 0 for x \ne x' [2].
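The same eigenvalue probe illustrates Propositions 3 and 5: the kernel of Eq. (8) fails the pd test, but becomes pd after fixing an origin x_0 via Eq. (7). This sketch reuses min_gram_eigenvalue and X from the block above.

```python
neg_dist = lambda x, y: -np.sqrt(np.sum((x - y) ** 2))   # Eq. (8) with beta = 1

def pd_from_cpd(k, x0):
    # Eq. (7): k_tilde(x, x') = (k(x,x') - k(x,x0) - k(x0,x') + k(x0,x0)) / 2.
    return lambda x, y: 0.5 * (k(x, y) - k(x, x0) - k(x0, y) + k(x0, x0))

print(min_gram_eigenvalue(neg_dist, X) >= -1e-10)                     # False: only cpd
print(min_gram_eigenvalue(pd_from_cpd(neg_dist, X[0]), X) >= -1e-10)  # True
```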
We next show how to represent general symmetric kernels (thus in particular cpd kernels) as symmetric bilinear forms Q in feature spaces. This generalization of the previously known feature space representation for pd kernels comes at a cost: Q will no longer be a dot product. For our purposes, we can get away with this. The result will give us an intuitive understanding of Proposition 3: we can then write \tilde{k} as \tilde{k}(x, x') := Q(\Phi(x) - \Phi(x_0), \Phi(x') - \Phi(x_0)). Proposition 3 thus essentially adds an origin in feature space which corresponds to the image \Phi(x_0) of one point x_0 under the feature map. For translation invariant algorithms, we are always allowed to do this, and thus turn a cpd kernel into a pd one -- in this sense, cpd kernels are "as good as" pd kernels.

Proposition 6 (Vector space representation of symmetric kernels): Let k be a real-valued symmetric kernel on X. Then there exists a linear space H of real-valued functions on X, endowed with a symmetric bilinear form Q(., .), and a mapping \Phi : X -> H, such that

    k(x, x') = Q(\Phi(x), \Phi(x')).    (13)

Proof: The proof is a direct modification of the pd case. We use the map (3) and linearly complete the image as in (4). Define Q(f, g) := \sum_{i=1}^m \sum_{j=1}^{m'} \alpha_i \beta_j k(x_i, x'_j). To see that it is well-defined, although it explicitly contains the expansion coefficients (which need not be unique), note that Q(f, g) = \sum_{j=1}^{m'} \beta_j f(x'_j), independent of the \alpha_i. Similarly, for g, note that Q(f, g) = \sum_i \alpha_i g(x_i), hence it is independent of \beta_j. The last two equations also show that Q is bilinear; clearly, it is symmetric. ∎

Note, moreover, that by definition of Q, k is a reproducing kernel for the feature space (which is not a Hilbert space): for all functions f of the form (4), we have Q(k(., x), f) = f(x); in particular, Q(k(., x), k(., x')) = k(x, x').

Rewriting \tilde{k} as \tilde{k}(x, x') := Q(\Phi(x) - \Phi(x_0), \Phi(x') - \Phi(x_0)) suggests an immediate generalization of Proposition 3: in practice, we might want to choose other points as origins in feature space -- points that do not have a preimage x_0 in input space, such as (usually) the mean of a set of points (cf. [12]). This will be useful when considering kernel PCA. Crucial is only that our reference point's behaviour under translations is identical to that of individual points. This is taken care of by the constraint on the sum of the c_i in the following proposition. The asterisk denotes the complex conjugated transpose.

Proposition 7 (Exercise 2.23, [2]): Let K be a symmetric matrix, e \in R^m be the vector of all ones, I the m x m identity matrix, and let c \in C^m satisfy e*c = 1. Then

    \tilde{K} := (I - ec*) K (I - ce*)    (14)

is positive definite if and only if K is conditionally positive definite.

Proof: "=>": suppose \tilde{K} is positive definite, i.e. for any a \in C^m, we have

    0 <= a* \tilde{K} a = a* K a + a* ec* K ce* a - a* K ce* a - a* ec* K a.    (15)

In the case a*e = e*a = 0 (cf. (6)), the three last terms vanish, i.e. 0 <= a* K a, proving that K is conditionally positive definite.

"<=": suppose K is conditionally positive definite. The map (I - ce*) has its range in the orthogonal complement of e, which can be seen by computing, for any a \in C^m,

    e*(I - ce*)a = e*a - e*ce*a = 0.    (16)

Moreover, being symmetric and satisfying (I - ce*)^2 = (I - ce*), the map (I - ce*) is a projection. Thus \tilde{K} is the restriction of K to the orthogonal complement of e, and by definition of conditional positive definiteness, that is precisely the space where K is positive definite. ∎
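Proposition 7 in numpy: project out the direction e and check that the centered Gram matrix of a cpd kernel is pd. The default c = e/m reproduces the centering used in kernel PCA; this reuses neg_dist and X from the blocks above.

```python
def center_gram(K, c=None):
    # Eq. (14): K_tilde = (I - e c*) K (I - c e*), with e*c = 1.
    m = K.shape[0]
    e = np.ones((m, 1))
    c = e / m if c is None else np.asarray(c, float).reshape(m, 1)
    P = np.eye(m) - e @ c.T
    return P @ K @ P.T

K = np.array([[neg_dist(xi, xj) for xj in X] for xi in X])
print(np.linalg.eigvalsh(center_gram(K)).min() >= -1e-10)   # True: centered cpd is pd
```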
(ii) Another example of a kernel algorithm that works with conditionally positive definite kernels is kernel peA [9}, where the data is centered, thus removing the dependence on the origin infeature .Ipace. Formally, this follows from Proposition 7 for Ci = 11m. Example 10 (Parzen windows) One of the simplest distance-based classification algorithms conceivable proceeds as follows. Given m+ points labelled with + 1, m_ points labelled with -1, and a test point ?( x), we compute the mean squared distances between the latter and the two classes, and assign it to the one where this mean is smaller, We use the distance kernel trick (Proposition 5) to express the decision function as a kernel expansion in input space: a short calculation shows that y = sgn (_1_ L m+ Yi=l k(X,Xi) - _1_ L m_ k(X,Xi) + c) , (19) Yi=-l with the constant offset c = (1/2m_) L:Yi=-l k(Xi, Xi) - (1/2m+) L:Yi=l k(Xi, Xi). Note thatfor some cpd kernels, such as (8), k(Xi, Xi) is always 0, thus c = O. For others, such as the commonly used Gaussian kernel, k(Xi, Xi) is a nonzero constant, in which case c also vanishes. For normalized Gaussians and other kernels that are valid density models, the resulting decision boundary can be interpreted as the Bayes decision based on two Parzen windows density estimates of the classes; for general cpd kernels, the analogy is a mere formal one. Example 11 (Toy experiment) In Fig. J, we illustrate the finding that kernel peA can be carried out using cpd kernels. We use the kernel (8). Due to the centering that is built into kernel peA (cf Example 9, (ii), and (5)), the case (3 = 2 actually is equivalent to linear peA. As we decrease (3, we obtain increasingly nonlinear feature extractors. Note, moreover, that as the kernel parameter (3 gets smaller, less weight is put on large distances, and we get more localizedfeature extractors (in the sense that the regions where they have large gradients, i.e. dense sets of contour lines in the plot, get more localized). Figure 1: Kernel PCA on a toy dataset using the cpd kernel (8); contour plots of the feature extractors corresponding to projections onto the first two principal axes in feature space. From left to right: (3 = 2,1.5,1,0.5. Notice how smaller values of (3 make the feature extractors increasingly nonlinear, which allows the identification of the cluster structure. 3 Conclusion We have described a kernel trick for distances in feature spaces. It can be used to generalize all distance based algorithms to a feature space setting by substituting a suitable kernel function for the squared distance. The class of kernels that can be used is larger than those commonly used in kernel methods (known as positive definite kernels) . We have argued that this reflects the translation invariance of distance based algorithms, as opposed to genuinely dot product based algorithms. SVMs and kernel PCA are translation invariant in feature space, hence they are really both distance rather than dot product based. We thus argued that they can both use conditionally positive definite kernels . In the case of the SVM, this drops out of the optimization problem automatically [11], in the case of kernel PCA, it corresponds to the introduction of a reference point in feature space. The contribution of the present work is that it identifies translation invariance as the underlying reason, thus enabling us to use cpd kernels in a much larger class of kernel algorithms, and that it draws the learning community's attention to the kernel trick for distances. 
Acknowledgments. Part of the work was done while the author was visiting the Australian National University. Thanks to Nello Cristianini, Ralf Herbrich, Sebastian Mika, Klaus Miiller, John Shawe-Taylor, Alex Smola, Mike Tipping, Chris Watkins, Bob Williamson, Chris Williams and a conscientious anonymous reviewer for valuable input. References [1] M. A. Aizerman, E. M. Braverman, and L. 1. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Autom. and Remote Contr. , 25:821- 837, 1964. [2] C. Berg, J.P.R. Christensen, and P. Ressel. Hannonic Analysis on Semigroups. Springer-Verlag, New York, 1984. [3] B. E. Boser, 1. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152, Pittsburgh, PA, July 1992. ACM Press. [4] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219- 269, 1995. [5] D. Haussler. Convolutional kernels on discrete structures. Technical Report UCSC-CRL-99-1O, Computer Science Department, University of California at Santa Cruz, 1999. [6] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London, A 209:415-446, 1909. [7] I. J. Schoenberg. Metric spaces and positive definite functions. 44:522- 536, 1938. Trans. Amer. Math. Soc., [8] B. Sch61kopf, C. J. C. Burges, and A. J. Smola. Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA, 1999. [9] B. SchDlkopf, A. Smola, and K-R. Miiller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299- 1319, 1998. [10] A. Smola, T. FrieB, and B. ScMlkopf. Semiparametric support vector and linear programming machines. In M.S. Keams, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Infonnation Processing Systems 11 , pages 585 - 591, Cambridge, MA, 1999. MIT Press. [11] A. Smola, B. SchDlkopf, and K-R. Miiller. The connection between regularization operators and support vector kernels. Neural Networks , 11:637- 649, 1998. [12] W.S . Torgerson. Theory and Methods of Scaling. Wiley, New York, 1958. [13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995. [14] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990. [15] C. Watkins, 2000. personal communication.
943
1,863
An Information Maximization Approach to Overcomplete and Recurrent Representations

Oren Shriki and Haim Sompolinsky
Racah Institute of Physics and Center for Neural Computation
Hebrew University, Jerusalem, 91904, Israel

Daniel D. Lee
Bell Laboratories, Lucent Technologies
Murray Hill, NJ 07974

Abstract

The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections as well as the recurrent interactions results in simple learning rules for both sets of parameters. The conventional independent components (ICA) learning algorithm can be recovered as a special case where there is an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.

1 Introduction

Many unsupervised learning algorithms such as principal component analysis, vector quantization, self-organizing feature maps, and others use the principle of minimizing reconstruction error to learn appropriate features from multivariate data [1, 2]. Independent components analysis (ICA) can similarly be understood as maximizing the likelihood of the data under a non-Gaussian generative model, and thus is related to minimizing a reconstruction cost [3, 4, 5]. On the other hand, the same ICA algorithm can also be derived without regard to a particular generative model by maximizing the mutual information between the data and a nonlinearly transformed version of the data [6]. This principle of information maximization has also been previously applied to explain optimal properties for single units, linear networks, and symplectic transformations [7, 8, 9]. In these proceedings, we show how the principle of maximizing mutual information can be generalized to overcomplete as well as recurrent representations. In the limit of zero noise, we derive gradient descent learning rules for both the feedforward and recurrent weights. Finally, we show the application of these learning rules to some simple illustrative examples.

[Figure 1: Network diagram of an overcomplete, recurrent representation. x are input data which influence the output signals s through feedforward connections W. The signals s also interact with each other through the recurrent interactions K. The network has N input variables and M output variables.]

2 Information Maximization

The "Infomax" formulation of ICA considers the problem of maximizing the mutual information between N-dimensional data observations {x} which are input to a network resulting in N-dimensional output signals {s} [6]. Here, we consider the general problem where the signals s are M-dimensional with M >= N. Thus, the representation is overcomplete because there are more signal components than data components. We also consider the situation where a signal component s_i can influence another component s_j through a recurrent interaction K_ji. As a network, this is diagrammed in Fig. 1 with the feedforward connections described by the M x N matrix W and the recurrent connections by the M x M matrix K. The network response s is a deterministic function of the input x:

    s = g(Wx + Ks),   (1)

where g is some nonlinear squashing function.
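To make Eq. (1) concrete, the following minimal sketch (ours, not from the paper) computes the network response by iterating the fixed point s = g(Wx + Ks). The tanh nonlinearity, the dimensions, and the tolerance are illustrative assumptions, and convergence of the iteration is assumed (e.g., for sufficiently small recurrent weights).

```python
import numpy as np

def network_response(W, K, x, g=np.tanh, n_iter=200, tol=1e-10):
    """Solve the fixed point s = g(W x + K s) by simple iteration.

    W: (M, N) feedforward weights; K: (M, M) recurrent weights; x: (N,) input.
    This is an illustrative sketch, not the authors' implementation.
    """
    s = np.zeros(W.shape[0])
    for _ in range(n_iter):
        s_new = g(W @ x + K @ s)
        if np.max(np.abs(s_new - s)) < tol:
            return s_new
        s = s_new
    return s

# Example: M = 3 outputs, N = 2 inputs
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
K = 0.1 * rng.normal(size=(3, 3))
s = network_response(W, K, rng.normal(size=2))
```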
In this case, the mutual information between the inputs x and outputs s is functionally only dependent on the entropy of the outputs:

    I(s, x) = H(s) - H(s|x) -> H(s).   (2)

The distribution of s is an N-dimensional manifold embedded in an M-dimensional vector space and nominally has a negatively divergent entropy. However, as shown in Appendix 1, the probability density of s can be related to the input distribution via the relation:

    P(s) ∝ P(x) / √det(χᵀχ),   (3)

where the susceptibility (or Jacobian) matrix χ is defined as:

    χ_ij = ∂s_i / ∂x_j.   (4)

This result can be understood in terms of the singular value decomposition (SVD) of the matrix χ. The transformation performed by χ can be decomposed into a series of three transformations: an orthogonal transformation that rotates the axes, a diagonal transformation that scales each axis, followed by another orthogonal transformation. A volume element in the input space is mapped onto a volume element in the output space, and its volume change is described by the diagonal scaling operation. This scale change is given by the product of the square roots of the eigenvalues of χᵀχ. Thus, the relationship between the probability distribution in the input and output spaces includes the proportionality factor √det(χᵀχ), as formally derived in Appendix 1. We now get the following expression for the entropy of the outputs:

    H(s) ≈ -∫ dx P(x) log[ P(x) / √det(χᵀχ) ] = (1/2) ⟨log det(χᵀχ)⟩ + H(x),   (5)

where the brackets indicate averaging over the input distribution.

3 Learning rules

From Eq. (5), we see that minimizing the following cost function:

    E = -(1/2) Tr(log(χᵀχ)),   (6)

is equivalent to maximizing the mutual information. We first note that the susceptibility χ satisfies the following recursion relation:

    χ_ij = g′_i (W_ij + Σ_k K_ik χ_kj) = (GW + GKχ)_ij,   (7)

where G_ij = δ_ij g′_i and g′_i ≡ g′(Σ_j W_ij x_j + Σ_k K_ik s_k). Solving for χ in Eq. (7) yields the result:

    χ = (G⁻¹ - K)⁻¹ W = Φ W,   (8)

where Φ⁻¹ ≡ G⁻¹ - K. Φ_ij can be interpreted as the sensitivity in the recurrent network of the ith unit's output to changes in the total input of the jth unit. We next derive the learning rules for the network parameters using gradient descent, as shown in detail in Appendix 2. The resulting expression for the learning rule for the feedforward weights is:

    ΔW = -η ∂E/∂W = η (Γᵀ + Φᵀ γ xᵀ),   (9)

where η is the learning rate, the matrix Γ is defined as

    Γ = (χᵀχ)⁻¹ χᵀ Φ,   (10)

and the vector γ is given by

    γ_i = (χΓ)_ii g″_i / (g′_i)³.   (11)

Multiplying the gradient in Eq. (9) by the matrix WWᵀ yields an expression analogous to the "natural" gradient learning rule [10]:

    ΔW = η W (I + ⟨χᵀ γ xᵀ⟩).   (12)

Similarly, the learning rule for the recurrent interactions is

    ΔK = -η ∂E/∂K = η ((χΓ)ᵀ + Φᵀ γ sᵀ).   (13)

In the case when there are equal numbers of input and output units, M = N, and there are no recurrent interactions, K = 0, most of the previous expressions simplify. The sensitivity matrix is diagonal (Φ = G) and Γ = W⁻¹. Substituting back into Eq. (9) for the learning rule for W results in the update rule:

    ΔW = η [(Wᵀ)⁻¹ + ⟨z xᵀ⟩],   (14)

where z_i = g″_i / g′_i. Thus, the well-known Infomax ICA learning rule is recovered as a special case of Eq. (9) [6].

[Figure 2(a)-(c): Results of fitting 3 filters to a 2-dimensional hexagon distribution with 10000 sample points.]
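The quantities in Eqs. (6)-(11) are straightforward to evaluate numerically for a given input x and its response s (e.g., from the forward-pass sketch above). Below is a minimal sketch of our own, assuming the tanh nonlinearity; it computes Φ, χ, the one-example cost E, and the per-example terms Γ and γ used in the updates.

```python
import numpy as np

def g1(u):   # g' for g = tanh
    return 1.0 - np.tanh(u) ** 2

def g2(u):   # g'' for g = tanh
    return -2.0 * np.tanh(u) * (1.0 - np.tanh(u) ** 2)

def update_terms(W, K, x, s):
    """Per-example quantities from Eqs. (6)-(11), for one input x with response s."""
    u = W @ x + K @ s                                   # total input to each unit
    G = np.diag(g1(u))                                  # G_ij = delta_ij g'_i
    Phi = np.linalg.inv(np.linalg.inv(G) - K)           # Phi = (G^-1 - K)^-1
    chi = Phi @ W                                       # Eq. (8)
    Gamma = np.linalg.solve(chi.T @ chi, chi.T @ Phi)   # Eq. (10)
    gamma = np.diag(chi @ Gamma) * g2(u) / g1(u) ** 3   # Eq. (11)
    E = -0.5 * np.linalg.slogdet(chi.T @ chi)[1]        # Eq. (6), one example
    return chi, Gamma, gamma, E
```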
4 Examples

We now apply the preceding learning algorithms to a simple two-dimensional (N = 2) input example. Each input point is generated by a linear combination of three (two-dimensional) unit vectors with angles of 0°, 120° and 240°. The coefficients are taken from a uniform distribution on the unit interval. The resulting distribution has the shape of a unit hexagon, which is slightly more dense close to the origin than at the boundaries. Samples of the input distribution are shown in Fig. 2. The second order cross correlations vanish, so that all the structure in the data is described only by higher order correlations. We fix the sigmoidal nonlinearity to be g(x) = tanh(x).

4.1 Feedforward weights

A set of M = 3 overcomplete filters for W is learned by applying the update rule in Eq. (9) to random normalized initial conditions while keeping the recurrent interactions fixed at K = 0. The lengths of the rows of W were constrained to be identical so that the filters are projections along certain directions in the two-dimensional space. The algorithm converged after about 20 iterations. Examples of the resulting learned filters are shown by plotting the rows of W as vectors in Fig. 2. As shown in the figure, there are several different local minimum solutions. If the lengths of the rows of W are left unconstrained, slight deviations from these solutions occur, but relative orientation differences of 60° or 120° between the various filters are preserved.

4.2 Recurrent interactions

To investigate the effect of recurrent interactions on the representation, we fixed the feedforward weights in W to point in the directions shown in Fig. 2(a), and learned the optimal recurrent interactions K using Eq. (13). Depending upon the length of the rows of W, which scaled the input patterns, different optimal values are seen for the recurrent connections. This is shown in Fig. 3 by plotting the value of the cost function against the strength of the uniform recurrent interaction. For small scaled inputs, the optimal recurrent strength is negative, which effectively amplifies the output signals since the 3 signals are negatively correlated. With large scaled inputs, the optimal recurrent strength is positive, which tends to decrease the outputs. Thus, in this example, optimizing the recurrent connections performs gain control on the inputs.

[Figure 3: Effect of adding recurrent interactions to the representation. The cost function is plotted as a function of the recurrent interaction strength k, for two different input scaling parameters (|W| = 1 and |W| = 5).]

5 Discussion

The learned feedforward weights are similar to the results of another ICA model that can learn overcomplete representations [11]. Our algorithm, however, does not need to perform approximate inference on a generative model. Instead, it directly maximizes the mutual information between the outputs and inputs of a nonlinear network. Our method also has the advantage of being able to learn recurrent connections that can enhance the representational power of the network. We also note that this approach can be easily generalized to undercomplete representations by simply changing the order of the matrix product in the cost function. However, more work still needs to be done in order to understand technical issues regarding speed of convergence and local minima in larger applications. Possible extensions of this work would be to optimize the nonlinearity that is used, or to adaptively change the number of output units to best match the input distribution.
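For reference, the hexagon input distribution of Section 4 is easy to reproduce. The sketch below (ours, not the authors') draws samples as uniform nonnegative combinations of three unit vectors at 0°, 120°, and 240°.

```python
import numpy as np

def hexagon_samples(n, seed=0):
    """Sample x = c1 v1 + c2 v2 + c3 v3 with c_i ~ Uniform[0, 1]."""
    rng = np.random.default_rng(seed)
    angles = np.deg2rad([0.0, 120.0, 240.0])
    V = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # three unit vectors
    C = rng.uniform(0.0, 1.0, size=(n, 3))                  # uniform coefficients
    return C @ V                                            # (n, 2) sample matrix

X = hexagon_samples(10000)
```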
We acknowledge the financial support of Bell Laboratories, Lucent Technologies, and the US-Israel Binational Science Foundation.

6 Appendix 1: Relationship between input and output distributions

In general, the relation between the input and output distributions is given by

    P(s) = ∫ dx P(x) P(s|x).   (15)

Since we use a deterministic mapping, the conditional distribution of the response given the input is given by P(s|x) = δ(s - g(Wx + Ks)). By adding independent Gaussian noise to the responses of the output units and considering the limit where the variance of the noise goes to zero, we can write this term as

    P(s|x) = lim_{Δ→0} (2πΔ²)^{-M/2} exp( -||s - g(Wx + Ks)||² / (2Δ²) ).   (16)

The output space can be partitioned into those points which belong to the image of the input space, and those which are not. For points outside the image of the input space, P(s) = 0. Consider a point s inside the image. This means that there exists x₀ such that s = g(Wx₀ + Ks). For small Δ, we can expand g(Wx + Ks) - s ≈ χ δx, where χ is evaluated at x₀ and δx = x - x₀, so that

    P(s|x) = lim_{Δ→0} (2πΔ²)^{-M/2} exp( -δxᵀ χᵀχ δx / (2Δ²) ).   (17)

The expression in Eq. (17) is a delta function in x around x₀. Using Eq. (15) we finally get

    P(s) = P(x) Θ(s) / √det(χᵀχ),   (18)

where the characteristic function Θ(s) is 1 if s belongs to the image of the input space and is zero otherwise. Note that for the case when χ is a square matrix (M = N), this expression reduces to the relation P(s) = P(x) / |det(χ)|.

7 Appendix 2: Derivation of the learning rules

To derive the appropriate learning rules, we need to calculate the derivatives of E with respect to some set of parameters A. In general, these derivatives are obtained from the expression:

    ∂E/∂A = -⟨ Tr[ (χᵀχ)⁻¹ χᵀ ∂χ/∂A ] ⟩.   (19)

7.1 Feedforward weights

In order to derive the learning rule for the weights W, we first calculate

    ∂χ_ab/∂W_lm = ∂/∂W_lm ( Σ_e Φ_ae W_eb ) = Φ_al δ_bm + Σ_e (∂Φ_ae/∂W_lm) W_eb.   (20)

From the definition of Φ, we see that:

    ∂Φ_ae/∂W_lm = -Σ_{i,j} Φ_ai (∂Φ⁻¹_ij/∂W_lm) Φ_je,   (21)

and

    ∂Φ⁻¹_ij/∂W_lm = -(δ_ij/(g′_i)²) ∂g′_i/∂W_lm = -δ_ij (g″_i/(g′_i)³) ∂s_i/∂W_lm,   (22)

where g″_i ≡ g″(Σ_j W_ij x_j + Σ_k K_ik s_k). The derivatives of s also satisfy a recursion relation similar to Eq. (7):

    ∂s_i/∂W_lm = g′_i ( δ_il x_m + Σ_j K_ij ∂s_j/∂W_lm ),   (23)

which has the solution:

    ∂s_i/∂W_lm = Φ_il x_m.   (24)

Putting all these results together in Eq. (19) and taking the trace, we get the gradient descent rule in Eq. (9).

7.2 Recurrent interactions

To derive the learning rules for the recurrent weights K, we first calculate the derivatives of χ_ab with respect to K_lm:

    ∂χ_ab/∂K_lm = Σ_e (∂Φ_ae/∂K_lm) W_eb = -Σ_{e,i,j} Φ_ai (∂Φ⁻¹_ij/∂K_lm) Φ_je W_eb.   (25)

From the definition of Φ, we obtain:

    ∂Φ⁻¹_ij/∂K_lm = -(δ_ij/(g′_i)²) ∂g′_i/∂K_lm - δ_il δ_jm.   (26)

The derivatives of g′ are obtained from the following relations:

    ∂g′_i/∂K_lm = (g″_i/g′_i) ∂s_i/∂K_lm,   (27)

and

    ∂s_i/∂K_lm = Φ_il s_m,   (28)

which results from a recursion relation similar to Eq. (23). Finally, after combining these results and calculating the trace, we get the gradient descent learning rule in Eq. (13).

References

[1] Jolliffe, I.T. (1986). Principal Component Analysis. New York: Springer-Verlag.
[2] Haykin, S. (1999). Neural networks: a comprehensive foundation. 2nd ed., Prentice-Hall, Upper Saddle River, NJ.
[3] Jutten, C. & Herault, J. (1991). Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing 24, 1-10.
[4] Hinton, G. & Ghahramani, Z. (1997). Generative models for discovering sparse distributed representations. Philosophical Transactions Royal Society B 352, 1177-1190.
[5] Pearlmutter, B. & Parra, L. (1996). A context-sensitive generalization of ICA. In ICONIP'96, 151-157.
[6] Bell, A.J. & Sejnowski, T.J. (1995). An information maximization approach to blind separation and blind deconvolution. Neural Comput. 7, 1129-1159.
[7] Barlow, H.B. (1989). Unsupervised learning. Neural Comput. 1, 295-311.
[8] Linsker, R. (1992). Local synaptic learning rules suffice to maximize mutual information in a linear network. Neural Comput. 4, 691-702.
[9] Parra, L., Deco, G., & Miesbach, S. (1996). Statistical independence and novelty detection with information preserving nonlinear maps. Neural Comput. 8, 260-269.
[10] Amari, S., Cichocki, A. & Yang, H. (1996). A new learning algorithm for blind signal separation. Advances in Neural Information Processing Systems 8, 757-763.
[11] Lewicki, M.S. & Sejnowski, T.J. (2000). Learning overcomplete representations. Neural Computation 12, 337-365.
A New Approximate Maximal Margin Classification Algorithm

Claudio Gentile
DSI, Università di Milano, Via Comelico 39, 20135 Milano, Italy
gentile@dsi.unimi.it

Abstract

A new incremental learning algorithm is described which approximates the maximal margin hyperplane w.r.t. norm p ≥ 2 for a set of linearly separable data. Our algorithm, called ALMA_p (Approximate Large Margin algorithm w.r.t. norm p), takes O((p - 1)/(α² γ²)) corrections to separate the data with p-norm margin larger than (1 - α) γ, where γ is the p-norm margin of the data and X is a bound on the p-norm of the instances. ALMA_p avoids quadratic (or higher-order) programming methods. It is very easy to implement and is as fast as on-line algorithms, such as Rosenblatt's perceptron. We report on some experiments comparing ALMA_p to two incremental algorithms: Perceptron and Li and Long's ROMMA. Our algorithm seems to perform quite a bit better than both. The accuracy levels achieved by ALMA_p are slightly inferior to those obtained by Support Vector Machines (SVMs). On the other hand, ALMA_p is considerably faster and easier to implement than standard SVMs training algorithms.

1 Introduction

A great deal of effort has been devoted in recent years to the study of maximal margin classifiers. This interest is largely due to their remarkable generalization ability. In this paper we focus on special maximal margin classifiers, i.e., on maximal margin hyperplanes. Briefly, given a set of linearly separable data, the maximal margin hyperplane classifies all the data correctly and maximizes the minimal distance between the data and the hyperplane. If the Euclidean norm is used to measure the distance then computing the maximal margin hyperplane corresponds to the, by now classical, Support Vector Machines (SVMs) training problem [3]. This task is naturally formulated as a quadratic programming problem. If an arbitrary norm p is used then such a task turns into a more general mathematical programming problem (see, e.g., [15, 16]) to be solved by general purpose (and computationally intensive) optimization methods. This more general task arises in feature selection problems when the target to be learned is sparse. A major theme of this paper is to devise simple and efficient algorithms to solve the maximal margin hyperplane problem. The paper has two main contributions. The first contribution is a new efficient algorithm which approximates the maximal margin hyperplane w.r.t. norm p to any given accuracy. We call this algorithm ALMA_p (Approximate Large Margin algorithm w.r.t. norm p). ALMA_p is naturally viewed as an on-line algorithm, i.e., as an algorithm which processes the examples one at a time. A distinguishing feature of ALMA_p is that its relevant parameters (such as the learning rate) are dynamically adjusted over time. In this sense, ALMA_p is a refinement of the on-line algorithms recently introduced in [2]. Moreover, ALMA_2 (i.e., ALMA_p with p = 2) is a perceptron-like algorithm; the operations it performs can be expressed as dot products, so that we can replace them by kernel function evaluations. ALMA_2 approximately solves the SVMs training problem, avoiding quadratic programming. As far as theoretical performance is concerned, ALMA_2 achieves essentially the same bound on the number of corrections as the one obtained by a version of Li and Long's ROMMA algorithm [12], though the two algorithms are different (see footnote 1).
In the case that p is logarithmic in the dimension of the instance space (as in [6]) ALMA_p yields results which are similar to those obtained by estimators based on linear programming (see [1, Chapter 14]). The second contribution of this paper is an experimental investigation of ALMA_2 on the problem of handwritten digit recognition. For the sake of comparison, we followed the experimental setting described in [3, 4, 12]. We ran ALMA_2 with polynomial kernels, using both the last and the voted hypotheses (as in [4]), and we compared our results to those described in [3, 4, 12]. We found that voted ALMA_2 generalizes quite a bit better than both ROMMA and the voted Perceptron algorithm, but slightly worse than standard SVMs. On the other hand, ALMA_2 is much faster and easier to implement than standard SVMs training algorithms. For related work on SVMs (with p = 2), see Friess et al. [5], Platt [17] and references therein. The next section defines our major notation and recalls some basic preliminaries. In Section 3 we describe ALMA_p and claim its theoretical properties. Section 4 describes our experimental comparison. Concluding remarks are given in the last section.

2 Preliminaries and notation

An example is a pair (x, y), where x is an instance belonging to ℝⁿ and y ∈ {-1, +1} is the binary label associated with x. A weight vector w = (w₁, ..., w_n) ∈ ℝⁿ represents an n-dimensional hyperplane passing through the origin. We associate with w a linear threshold classifier with threshold zero: w : x → sign(w · x) = 1 if w · x ≥ 0 and = -1 otherwise. When p ≥ 1 we denote by ||w||_p the p-norm of w, i.e., ||w||_p = (Σ_{i=1}^n |w_i|^p)^{1/p} (also, ||w||_∞ = lim_{p→∞} (Σ_{i=1}^n |w_i|^p)^{1/p} = max_i |w_i|). We say that q is dual to p if 1/p + 1/q = 1 holds. For instance, the 1-norm is dual to the ∞-norm and the 2-norm is self-dual. In this paper we assume that p and q are some pair of dual values, with p ≥ 2. We use p-norms for instances and q-norms for weight vectors. The (normalized) p-norm margin (or just the margin, if p is clear from the surrounding context) of a hyperplane w with ||w||_q ≤ 1 on example (x, y) is defined as y (w · x)/||x||_p. If this margin is positive (see footnote 2), then w classifies (x, y) correctly. Notice that from Hölder's inequality we have y w · x ≤ |w · x| ≤ ||w||_q ||x||_p ≤ ||x||_p. Hence y (w · x)/||x||_p ∈ [-1, 1]. Our goal is to approximate the maximal p-norm margin hyperplane for a set of examples (the training set). For this purpose, we use terminology and analytical tools from the on-line learning literature.

Footnote 1: In fact, algorithms such as ROMMA and the one contained in Kowalczyk [10] have been specifically designed for the Euclidean norm. Any straightforward extension of these algorithms to a general norm p seems to require numerical methods.
Footnote 2: We assume that w · x = 0 yields a wrong classification, independent of y.
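The dual-norm bookkeeping above is easy to get wrong in practice; the following small sketch (ours) computes q from p and the normalized p-norm margin of Section 2, using NumPy's ord argument for vector norms.

```python
import numpy as np

def dual_norm(p):
    """q such that 1/p + 1/q = 1 (q = 2 when p = 2)."""
    return p / (p - 1.0)

def p_norm_margin(w, x, y, p):
    """Normalized p-norm margin y (w . x) / ||x||_p of hyperplane w on example (x, y).

    The bound |margin| <= 1 from Hoelder's inequality holds when ||w||_q <= 1.
    """
    return y * np.dot(w, x) / np.linalg.norm(x, ord=p)
```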
Then the algorithm updates its internal state. In this paper the prediction f) can be seen as the linear function f) = w . x and the loss is a margin-based 0-1 Loss: the loss of W on example (x, y) is 1 if ~I~I'I~ ~ (1 - a) "y and o otherwise, for suitably chosen a, "y E [0,1]. Therefore, if Ilwll q ~ 1 then the algorithm incurs positive loss if and only if w classifies (x, y) with (p-norm) margin not larger than (1- a) "y. The on-line algorithms are typically loss driven, i.e., they do update their internal state only in those trials where they suffer a positive loss. We call a correction a trial where this occurs. In the special case when a = 1 a correction is a mistaken trial and a loss driven algorithm turns to a mistake driven [14] algorithm. Throughout the paper we use the subscript t for x and y to denote the instance and the label processed in trial t. We use the subscript k for those variables, such as the algorithm's weight vectorw, which are updated only within a correction. In particular, Wk denotes the algorithm's weight vector after k-l corrections (so that WI is the initial weight vector). The goal of the on-line algorithm is to bound the cumulative loss (i.e., the total number of corrections or mistakes) it suffers on an arbitrary sequence of examples S = (Xl, yd, ... , (XT, YT). If S is linearly separable with margin "y and we pick a < 1 then a bounded loss clearly implies convergence in a finite number of steps to (an approximation of) the maximal margin hyperplane for S. 3 The approximate large margin algorithm ALMAp ALMAp is a large margin variant of the p-norm Perceptron algorithm 3 [8, 6], and is similar in spirit to the variable learning rate algorithms introduced in [2]. We analyze ALMAp by giving upper bounds on the number of corrections. The main theoretical result of this paper is Theorem 1 below. This theorem has two parts. Part 1 bounds the number of corrections in the linearly separable case only. In the special case when p = 2 this bound is very similar to the one proven by Li and Long for a version of ROMMA [12]. Part 2 holds for an arbitrary sequence of examples. A bound which is very close to the one proven in [8, 6] for the (constant learning rate) p-norm Perceptron algorithm is obtained as a special case. Despite this theoretical similarity, the experiments we report in Section 4 show that using our margin-sensitive variable learning rate algorithm yields a clear increase in performance. In order to define our algorithm, we need to recall the following mapping f from [6] (a p-indexing for f is understood): f : nn -t nn, f = (h, ... , fn), where h(w) = sign(wi) IWilq-1 / Ilwllr2, w = (WI, ... , Wn) E nn. Observe that p = 2 yields the identify function. The (unique) inverse f- 1 of f is [6] f- 1 : nn -t nn, f- 1 = U11, ... , f;;l), where f i- 1((}) = sign(Bi) IBi lp-1 / 11(}11~-2, () = (B1, ... ,Bn) E nn, namely, f- 1 is obtained from f by replacing q with p. 3The p-norm Perceptron algorithm is a generalization of the classical Perceptron algorithm [18]: p-norm Perceptron is actually Perceptron when p = 2. Algorithm ALMAp(aj B, C) with a E (0,1], B, C > O. Initialization: Initial weight vector WI = 0; k = 1. Fort = 1, ... ,Tdo: Get example (Xt, Yt) E nn x {-I, +1} and update weights as follows: Set 'Yk = B JP=l ~j If Yt Wk?Xt <_ (1 - a) ""k then: I Ilxtll p '11 - 'Ik - C I v'P=I IIXtll p Vii' = f-l(f(Wk) + 'T/k Yt Xt), Wk+1 = w~/max{l, Ilw~llq}, W~ k+-k+1. Figure 1: The approximate large margin algorithm ALMA p . is described in Figure 1. 
The algorithm is parameterized by a E (0,1], B > 0 and C > O. The parameter a measures the degree of approximation to the optimal margin hyperplane, while Band C might be considered as tuning parameters. Their use will be made clear in Theorem 1 below. Let W = {w E nn : Ilwllq :::; I}. ALMAp maintains a vector Wk of n weights in W. It starts from WI = O. At time t the algorithm processes the example (Xt, Yt). If the current weight vector Wk classifies (Xt, Yt) with margin not larger than (1 - a) 'Yk then a correction occurs. The update rule4 has two main steps. The first step gives w~ through the classical update of a (p-norm) perceptron-like algorithm (notice, however, that the learning rate 'T/k scales with k, the number of corrections occurred so far). The second step gives Wk+1 by projecting w~ onto W : Wk+1 = wklllw~llq if Ilw~llq > 1 and Wk+1 = w~ otherwise. The projection step makes the new weight vector Wk+1 belong to W. ALMAp The following theorem, whose proof is omitted due to space limitations, has two parts. In part 1 we treat the separable case. Here we claim that a special choice of the parameters Band C gives rise to an algorithm which approximates the maximal margin hyperplane to any given accurary a. In part 2 we claim that if a suitable relationship between the parameters Band C is satisfied then a bound on the number of corrections can be proven in the general (nonseparable) case. The bound of part 2 is in terms of the margin-based quantity V-y(Uj (x,y)) = max{O,'Y - rl~i~}' 'Y > O. (Here a p-indexing for V-y is understood). V-y is called deviation in [4] and linear hinge loss in [7]. Notice that Band C in part 1 do not meet the requirements given in part 2. On the other hand, in the separable case Band C chosen in part 2 do not yield a hyperplane which is arbitrarily (up to a small a) close to the maximal margin hyperplane. Theorem 1 Let W = {w E nn : Ilwllq :::; I}, S = ((xI,yd, .'" (XT,YT)) E (nn x { -1, + 1}) T, and M be the set of corrections of ALM Ap( aj B, C) running on S (i. e., the set of trials t such that Ytll~t\l~t :::; (1 - a hk). 1. Let 'Y* = maxWEW mint=I, ... ,T Y{I~itt thefollowing bouncP on IMI: 2 (p - 1) (2 IMI:::; (,,(*)2 ~ -1 ) 2 > O. Then 8 ALMAp(aj v'8/a, + ~ - 4 =0 41n the degenerate case that Xt = 0 no update takes place. 5We did not optimize the constants here. ( p- 1 ) a2(,,(*)2 . y'2) achieves (1) Furthermore, throughout the run of ALMAp(a; VS/a, v'2) we have 'Yk ~ 'Y*. Hence (1) is also an upper bound on the number of trials t such that Vil~,\I~' (1 - a) 'Y*. :-: ; 2. Let the parameters Band C in Figure 1 satisfy the equation 6 C 2 + 2 (1- a) B C = l. Then for any u E W, ALMAp( a; B, C) achieves the followin g bound on any'Y> 0, where p2 = J2~2: 1M I, holding for Observe that when a = 1 the above inequality turns to a bound on the number of mistaken trials. In such a case the value of 'Yk (in particular; the value of B) is immaterial, while C is forced to be 1. 0 When p = 2 the computations performed by ALMAp essentially involve only dot products (recall that p = 2 yields q = 2 and [ = [-1 = identity). Thus the generalization of ALMA2 to the kernel case is quite standard. In fact, the linear combination W k+1 . X can be computed recursively, since Wk+1 . x = W k ?Xt'1 k V,X,.X. 
Here the denominator Nk+1 k+l equals max{1, Ilw~112} and the norm Ilw~J12 is again computed recursively by Ilw~ll~ = Ilw~_lIIVN~ + 2'T/k YtWk' Xt + 'T/~ Ilxtl12' where the dot product Wk' Xt is taken from the k-th correction (the trial where the k-th weight update did occur). 4 Experimental results We did some experiments running ALMA2 on the well-known MNIST OCR database.? Each example in this database consists of a 28x28 matrix representing a digitalized image of a handwritten digit, along with a {0,1, ... ,9}-valued label. Each entry in this matrix is a value in {O, 1, ... ,255}, representing a grey level. The database has 60000 training examples and 10000 test examples. The best accuracy results for this dataset are those obtained by LeCun et al. [11] through boosting on top of the neural net LeNet4. They reported a test error rate of 0.7%. A soft margin SVM achieved an error rate of 1.1 % [3]. In our experiments we used ALMA2(a; -i-, v'2) with different values of a. In the following ALMA2(a) is shorthand for ALMA2(a; -i-, v'2). We compared to SVMs, the Perceptron algorithm and the Perceptron-like algorithm ROMMA [12]. We followed closely the experimental setting described in [3, 4, 12]. We used a polynomial kernel K of the form K(x, y) = (1 + x . y)d, with d = 4. (This choice was best in [4] and was also made in [3, 12].) However, we did not investigate any careful tuning of scaling factors. In particular, we did not determine the best instance scaling factor s for our algorithm (this corresponds to using the kernel K (x, y) = (1 + x . y / S )d). In our experiments we set s = 255. This was actually the best choice in [12] for the Perceptron algorithm. We reduced the lO-class problem to 10 binary problems . Classification is made according to the maximum output of the 10 binary classifiers. The results are summarized in Table 1. As in [4], the output of a binary classifier is based on either the last hypothesis produced by the algorithms (denoted by "last" in Table 1) or Helmbold and Warmuth's [9] leave-one-out voted hypothesis (denoted by "voted"). We refer the reader to [4] for details. We trained the algorithms by cycling up to 3 times ("epochs") over the training set. All the results shown in Table 1 are averaged over 10 random permutations of the training sequence. The columns marked 6Notice that Band C in part 1 do not satisfy this relationship. 7 Available on Y. LeCun's home page: http://www.research.att.com/... yann/ocr/mnisti. "Corr's" give the total number of corrections made in the training phase for the 10 labels. The first three rows of Table 1 are taken from [4, 12, 13]. The first two rows refer to the Perceptron algorithm, 8 while the third one refers to the best 9 noise-controlled (NC) version of ROMMA, called "aggressive ROMMA". Our own experimental results are given in the last six rows. Among these Perceptron-like algorithms, ALMA2 "voted" seems to be the most accurate. The standard deviations about our averages are reasonably small. Those concerning test errors range in (0.03%,0.09%). These results also show how accuracy and running time (as well as sparsity) can be traded-off against each other in a transparent way. The accuracy of our algorithm is slightly worse than SVMs'. On the other hand, our algorithm is quite faster and easier to implement than previous implementations of SVMs, such as those given in [17,5] . An interesting features of ALMA2 is that its approximate solution relies on fewer support vectors than the SVM solution. 
We found the accuracy of 1.77 for ALMA2(1.0) fairly remarkable, considering that it has been obtained by sweeping through the examples just once for each of the ten classes. In fact, the algorithm is rather fast: training for one epoch the ten binary classifiers of ALMA2(1.0) takes on average 2.3 hours and the corresponding testing time is on average about 40 minutes. (All our experiments have been performed on a PC with a single Pentium? III MMX processor running at 447 Mhz.) 5 Concluding Remarks In the full paper we will give more extensive experimental results for ALMA2 and ALMAp with p > 2. One drawback of ALMAp'S approximate solution is the absence of a bias term (i.e., a nonzero threshold). This seems to make little difference for MNIST dataset, but there are cases when a biased maximal margin hyperplane generalizes quite better than an unbiased one. It is not clear to us how to incorporate the SVMs' bias term in our algorithm. We leave this as an open problem. Table 1: Experimental results on MNIST database. "TestErr" denotes the fraction of misclassified patterns in the test set, while "Corr's" gives the total number of training corrections for the 10 labels. Recall that voting takes place during the testing phase. Thus the number of corrections of "last" is the same as the number of corrections of "voted". "last" "voted" agg-ROMMA(NC) ("last") "last" ALMA2(1.0) "voted" "last" ALMA2(0.9) "voted" ALMA2(0.8) "last" "voted" Perceptron 1 Epoch TestErr Corr's 2.71% 7901 2.23% 7901 2.05% 30088 2.52% 7454 1.77% 7454 2.10% 9911 1.69% 9911 1.98% 12810 1.68% 12810 2 Epochs TestErr Corr's 2.14% 10421 1.86% 10421 1.76% 44495 2.01% 9658 1.52% 9658 1.74% 12711 1.49% 12711 1.72% 16464 1.44% 16464 3 Epochs TestErr Corr's 2.03% 11787 1.76% 11787 1.67% 58583 1.86% 10934 1.47% 10934 1.64% 14244 1.40% 14244 1.60% 18528 1.35% 18528 8These results have been obtained with no noise control. It is not clear to us how to incorporate any noise control mechanism into the classical Perceptron algorithm. The method employed in [10, 12] does not seem helpful in this case, at least for the first epoch. 9 According to [12] , ROMMA's last hypothesis seems to perform better than ROMMA's voted hypothesis. Acknowledgments Thanks to Nicolo Cesa-Bianchi, Nigel Duffy, Dave Helmbold, Adam Kowalczyk, Yi Li, Nick Littlestone and Dale Schuurmans for valuable conversations and email exchange. We would also like to thank the NIPS2000 anonymous reviewers for their useful comments and suggestions. The author is supported by a post-doctoral fellowship from Universita degli Studi di Milano. References [1] M. Anthony, P. Bartlett, Neural Network Learning: Theoretical Foundations, CMU, 1999. [2] P. Auer and C. Gentile Adaptive and self-confident on-line learning algorithms. In 13th COLT, 107- 117, 2000. [3] C. Cortes, V. Vapnik. Support-vector networks. Machine Learning, 20, 3: 273- 297, 1995. [4] Y. Freund and R. Schapire. Large margin classification using the perceptron algorithm. Journal of Machine Learning, 37, 3: 277- 296, 1999. [5] T.-T. Friess, N. Cristianini, and C. Campbell. The kernel adatron algorithm: a fast and simple leaming procedure for support vector machines. In 15th ICML, 1998. [6] C. Gentile and N. Littlestone. The robustness of the p-norm algorithms. In 12th COLT, 1- 11 , 1999. [7] c. Gentile, and M. K. Warmuth. Linear hinge loss and average margin. In 11th NIPS, 225- 231 , 1999. [8] A. I . Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. 
[9] D. Helmbold and M. K. Warmuth. On weak learning. JCSS, 50, 3:551-573, 1995.
[10] A. Kowalczyk. Maximal margin perceptron. In Smola, Bartlett, Schölkopf, and Schuurmans, editors, Advances in Large Margin Classifiers, MIT Press, 1999.
[11] Y. Le Cun, L. J. Jackel, L. Bottou, A. Brunot, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. Müller, E. Säckinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In ICANN, 53-60, 1995.
[12] Y. Li and P. Long. The relaxed online maximum margin algorithm. In 12th NIPS, 498-504, 2000.
[13] Y. Li. From support vector machines to large margin classifiers. PhD Thesis, School of Computing, the National University of Singapore, 2000.
[14] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[15] O. Mangasarian. Mathematical programming in data mining. Data Mining and Knowledge Discovery, 42, 1:183-201, 1997.
[16] P. Nachbar, J. A. Nossek, and J. Strobl. The generalized adatron algorithm. In Proc. 1993 IEEE ISCAS, 2152-2155, 1993.
[17] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In Schölkopf, Burges, and Smola, editors, Advances in Kernel Methods: Support Vector Learning, MIT Press, 1998.
[18] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington, D.C., 1962.
Model Complexity, Goodness of Fit and Diminishing Returns

Igor V. Cadez
Information and Computer Science
University of California, Irvine, CA 92697-3425, U.S.A.

Padhraic Smyth
Information and Computer Science
University of California, Irvine, CA 92697-3425, U.S.A.

Abstract

We investigate a general characteristic of the trade-off in learning problems between goodness-of-fit and model complexity. Specifically we characterize a general class of learning problems where the goodness-of-fit function can be shown to be convex within first-order as a function of model complexity. This general property of "diminishing returns" is illustrated on a number of real data sets and learning problems, including finite mixture modeling and multivariate linear regression.

1 Introduction, Motivation, and Related Work

Assume we have a data set D = {x₁, x₂, ..., x_n}, where the x_i could be vectors, sequences, etc. We consider modeling the data set D using models indexed by a complexity index k, 1 ≤ k ≤ k_max. For example, the models could be finite mixture probability density functions (PDFs) for vectors x_i, where model complexity is indexed by the number of components k in the mixture. Alternatively, the modeling task could be to fit a conditional regression model y = g(z_k) + e, where now y is one of the variables in the vector x and z_k is some subset of size k of the remaining components in the x vector. Such learning tasks can typically be characterized by the existence of a model and a loss function. A fitted model of complexity k is a function of the data points D and depends on a specific set of fitted parameters θ. The loss function (goodness-of-fit) is a functional of the model and maps each specific model to a scalar used to evaluate the model, e.g., likelihood for density estimation or sum-of-squares for regression.

Figure 1 illustrates a typical empirical curve for loss function versus complexity, for mixtures of Markov models fitted to a large data set of 900,000 sequences. The complexity k is the number of Markov models being used in the mixture (see Cadez et al. (2000) for further details on the model and the data set). The empirical curve has a distinctly concave appearance, with large relative gains in fit for low complexity models and much more modest relative gains for high complexity models.

[Figure 1: Log-likelihood scores for a Markov mixtures data set, plotted against the number of mixture components k.]

A natural question is whether this concavity characteristic can be viewed as a general phenomenon in learning and under what assumptions on model classes and
Li and Barron (1999) have shown that for finite mixture models the expected value of the log-likelihood for any k is bounded below by a function of the form -C /k where C is a constant which is independent of k. The results presented here are complementary in the sense that we show that the actual maximizing log-likelihood itself is concave to first-order as a function of k. Furthermore, we obtain a more general principle of "diminishing returns," including both finite mixtures and subset selection in regression. 2 Notation We define y = y(x) as a scalar function of x, namely a prediction at x. In linear regression y = y(x ) is a linear function of the components in x while in density estimation y = y(x) is the value of the density function at x. Although the goals of regression and density estimation are quite different, we can view them both as simply techniques for approximating an unknown true function for different values of x. We denote the prediction of a model of complexity k as Yk (xIB) where the subscript indicates the model complexity and B is the associated set of fitted parameters. Since different choices of parameters in general yield different models, we will typically abbreviate the notation somewhat and use different letters for different parameterizations of the same functional form (i.e., the same complexity), e.g., we may use Yk(X),gk(X), hk(X) to refer to models of complexity k instead of specifying Yk(xIBd,Yk(xIB 2 ),Yk(xIB3 ), etc. Furthermore, since all models under discussion are functions of x, we sometimes omit the explicit dependence on x and use a compact notation Yk, 9k, hk? We focus on classes of models that can be characterized by more complex models having a linear dependence on simpler models within the class. More formally, any model of complexity k can be decomposed as: (1) Yk = a191 + a2h1 + ... + ak W 1? In PDF mixture modeling we have Yk = p( x) and each model 91, hI, .. . ,Zl is a basis PDF (e.g., a single Gaussian) but with different parameters. In multivariate linear regression each model 91, hI, ... ,WI represents a regression on a single variable, e.g., 91(X) above is 91(X) = 'Ypxp where xp is the p-th variable in the set and 'Yp is the corresponding coefficient one would obtain if regressing on xp alone. One of the 91, hI, ... ,WI can be a dummy constant variable to account for the intercept term. Note that the total parameters for the model Yk in both cases can be viewed as consisting of both the mixing proportions (the a's) and the parameters for each individual component model. The loss function is a functional on models and we write it as E(Yk). For simplicity, we use the notation EZ to specify the value of the loss function for the best kcomponent model. This way, EZ :S E(Yk) for any model Yk1. For example, the loss function in PDF mixture modeling is the negative log likelihood. In linear regression we use empirical mean squared error (MSE) as the loss function. The loss functions of general interest in this context are those that decompose into a sum of functions over data points in the data set D (equivalently an independence assumption in a likelihood framework), i.e., n (2) i=l For example, in PDF mixture modeling !(Yk) = -In Yk, while in regression modeling !(Yk) = (y - Yk)2 where Y is a known target value. 
3 Necessary Conditions on Models and Loss Functions We consider models that satisfy several conditions that are commonly met in real data analysis applications and are satisfied by both PDF mixture models and linear regression models: 1. As k increases we have a nested model class, i.e., each model of complexity k contains each model of complexity k' < k as a special case (i.e., it reduces to a simpler model for a special choice of the parameters). 2. Any two models of complexities k1 and k2 can be combined as a weighted sum in any proportion to yield a valid model of complexity k = k1 + k 2. 3. Each model of complexity k = k1 + k2 can be decomposed into a weighted sum of two valid models of complexities k1 and k2 respectively for each valid choice of k1 and k 2. The first condition guarantees that the loss function is a non-increasing function of k for optimal models of complexity k (in sense of minimizing the loss function E), the second condition prevents artificial correlation between the component models, while the third condition guarantees that all components are of equal expressive power. As an example, the standard Gaussian mixture model satisfies all three properties whether the covariance matrices are unconstrained or individually constrained. As a counter-example, a Gaussian mixture model where the covariance matrices are constrained to be equal across all components does not satisfy the second property. lWe assume the learning task consists of minimization of the loss function. If maximization is more appropriate , we can just consider minimization of the negative of the loss function. 4 Theoretical Results on Loss Function Convexity We formulate and prove the following theorem: Theorem 1: In a learning problem that satisfies the properties from Section 3, the loss function is first order convex in model complexity k, meaning that EZ+1 - 2EZ + EZ_ 1 ~ 0 within first order (as defined in the proof). The quantities EZ and EZ?l are the values of the loss function for the best k and k ? I-component models. Proal: In the first part of the proof we analyze a general difference of loss functions and write it in a convenient form. Consider two arbitrary models, 9 and hand the corresponding loss functions E(g) and E(h) (g and h need not have the same complexity). The difference in loss functions can be expressed as: n E(g) - E(h) L {I [g(Xi)] - I [h(Xi)]} i=l n L {I [h(xi)(1 + Jg ,h(Xi))]- I [h(Xi)]} i=l n = a L h(Xi)!' (h(Xi)) Jg ,h(Xi). (3) i=l where the last equation comes from a first order Taylor series expansion around each Jg ,h(Xi) = 0, a is an unknown constant of proportionality (to make the equation exact) and J g,h () -=- g(x) - h(x) X - (4) h(x) represents the relative difference in models 9 and h at point x. For example, Equation 3 reduces to a first order Taylor series approximation for a = 1. If I (y) is a convex function we also have: n E(g) - E(h) ~ L h(Xi)!'(h(Xi))Jg,h(Xi). (5) i=l since the remainder in the Taylor series expansion R2 = I/2f"(h(I + 8J))J 2 ~ O. In the second part of the proof we use Equation 5 to derive an appropriate condition on loss functions. Consider the best k and k ? 
Consider the best k- and $k \pm 1$-component models and the corresponding difference of loss functions $E_{k+1}^* - 2E_k^* + E_{k-1}^*$, which we can write using Equations 3 and 5 (since we consider convex functions: $f(y) = -\ln y$ for PDF modeling and $f(y) = (y - y_i)^2$ for best-subset regression) as

$$E_{k+1}^* - 2E_k^* + E_{k-1}^* = \big[E(y_{k+1}^*) - E(y_k^*)\big] + \big[E(y_{k-1}^*) - E(y_k^*)\big] \ge \sum_{i=1}^{n} y_k^*(x_i)\, f'(y_k^*(x_i))\, \delta_{y_{k+1}^*,\,y_k^*}(x_i) + \sum_{i=1}^{n} y_k^*(x_i)\, f'(y_k^*(x_i))\, \delta_{y_{k-1}^*,\,y_k^*}(x_i) = \sum_{i=1}^{n} y_k^*(x_i)\, f'(y_k^*(x_i)) \big[\delta_{y_{k+1}^*,\,y_k^*}(x_i) + \delta_{y_{k-1}^*,\,y_k^*}(x_i)\big]. \qquad (6)$$

According to the requirements on models in Section 3, the best (k+1)-component model can be decomposed as $y_{k+1}^* = (1-\epsilon)\, g_k + \epsilon\, g_1$, where $g_k$ is a k-component model and $g_1$ is a 1-component model. Similarly, an artificial model can be constructed from the best (k−1)-component model: $e_k = (1-\epsilon)\, y_{k-1}^* + \epsilon\, g_1$. Upon subtracting $y_k^*$ from each of these equations and dividing by $y_k^*$, using the notation of Equation 4, we get

$$\delta_{y_{k+1}^*,\,y_k^*} = (1-\epsilon)\,\delta_{g_k,\,y_k^*} + \epsilon\,\delta_{g_1,\,y_k^*}, \qquad \delta_{e_k,\,y_k^*} = (1-\epsilon)\,\delta_{y_{k-1}^*,\,y_k^*} + \epsilon\,\delta_{g_1,\,y_k^*},$$

which upon subtraction and rearrangement of terms yields

$$\delta_{y_{k+1}^*,\,y_k^*} + \delta_{y_{k-1}^*,\,y_k^*} = (1-\epsilon)\,\delta_{g_k,\,y_k^*} + \delta_{e_k,\,y_k^*} + \epsilon\,\delta_{y_{k-1}^*,\,y_k^*}. \qquad (7)$$

If we evaluate this equation at each of the data points $x_i$ and substitute the result back into Equation 6, we get

$$E_{k+1}^* - 2E_k^* + E_{k-1}^* \ge \sum_{i=1}^{n} y_k^*(x_i)\, f'(y_k^*(x_i)) \big[(1-\epsilon)\,\delta_{g_k,\,y_k^*}(x_i) + \delta_{e_k,\,y_k^*}(x_i) + \epsilon\,\delta_{y_{k-1}^*,\,y_k^*}(x_i)\big]. \qquad (8)$$

In the third part of the proof we analyze each of the terms in Equation 8 using Equation 3. Consider the first term,

$$\Delta_{g_k,\,y_k^*} = \sum_{i=1}^{n} y_k^*(x_i)\, f'(y_k^*(x_i))\, \delta_{g_k,\,y_k^*}(x_i), \qquad (9)$$

which depends on the relative difference of the models $g_k$ and $y_k^*$ at each of the data points $x_i$. According to Equation 3, for small $\delta_{g_k,\,y_k^*}(x_i)$ (which is presumably the case) we can set $\alpha \approx 1$ to get a first-order Taylor expansion. Since $y_k^*$ is the best k-component model, we have $E(g_k) \ge E(y_k^*) = E_k^*$, and consequently

$$E(g_k) - E(y_k^*) = \alpha\, \Delta_{g_k,\,y_k^*} \approx \Delta_{g_k,\,y_k^*} \ge 0. \qquad (10)$$

Note that in order for the last inequality to hold, we do not require $\alpha \approx 1$, but only that

$$\alpha \ge 0, \qquad (11)$$

which is a weaker condition that we refer to as the first-order approximation. In other words, we only require that the sign is preserved when making the Taylor expansion; the actual value need not be very accurate. Similarly, each of the three terms on the right-hand side of Equation 8 is first-order positive, since $E(y_k^*) \le E(g_k),\ E(e_k),\ E(y_{k-1}^*)$. This shows that $E_{k+1}^* - 2E_k^* + E_{k-1}^* \ge 0$ within first order, concluding the proof.

5 Convexity in Common Learning Problems

In this section we specialize Theorem 1 to several well-known learning situations. Each proof consists of merely selecting the appropriate loss function $E(y)$ and model family $y$.

5.1 Concavity of Mixture Model Log-Likelihoods

Theorem 2: In mixture model learning, using the log-likelihood as the loss function and using unconstrained mixture components, the in-sample log-likelihood is a first-order concave function of the complexity k.

Proof: By using $f(y) = -\ln y$ in Theorem 1, the loss function $E(y)$ becomes the negative of the in-sample log-likelihood; hence it is a first-order convex function of complexity k, i.e., the log-likelihood is first-order concave.

Corollary 1: If a linear or convex penalty term in k is subtracted from the in-sample log-likelihood in Theorem 2, using the mixture models as defined in Theorem 2, then the penalized likelihood can have at most one maximum to within first order. The BIC criterion, for example, satisfies this condition.
5.2 Convexity of Mean-Squared-Error for Subset Selection in Linear Regression

Theorem 3: In linear regression learning, where $y_k$ represents the best linear regression defined over all possible subsets of k regression variables, the mean squared error (MSE) is first-order convex as a function of the complexity k.

Proof: We use $f(y_k(x_i)) = (y_i - y_k(x_i))^2$, which is a convex function of $y_k$. The corresponding loss function $E(y_k)$ becomes the mean squared error and is first-order convex as a function of the complexity k by the proof of Theorem 1.

Corollary 2: If a concave or linear penalty term in k is added to the mean squared error as defined in Theorem 3, then the resulting penalized mean squared error can have at most one minimum to within first order. Such penalty terms include Mallows' $C_p$ criterion, AIC, BIC, predicted squared error, etc. (see, e.g., Bishop (1995)).

6 Experimental Results

In this section we demonstrate empirical evidence of the approximate concavity property on three different data sets, with model families and loss functions which satisfy the assumptions stated earlier:

1. Mixtures of Gaussians: 3962 data points in 2 dimensions, representing the first two principal components of historical geopotential data from upper-atmosphere data records, were fit with a mixture of k Gaussian components, k varying from 1 to 20 (see Smyth, Ide, and Ghil (1999) for more discussion of this data). Figure 2(a) illustrates that the log-likelihood is approximately concave as a function of k. Note that it is not completely concave. This could be a result of local maxima in the fitting process (the maximum-likelihood solutions in the interior of parameter space were selected as the best obtained by EM from 10 different randomly chosen initial conditions), or it may indicate that concavity cannot be proven beyond a first-order characterization in the general case.

2. Mixtures of Markov Chains: Page-request sequences logged at the msnbc.com Web site over a 24-hour period from over 900,000 individuals were fit with mixtures of first-order Markov chains (see Cadez et al. (2000) for further details). Figure 1 again clearly shows a concave characteristic for the log-likelihood as a function of k, the number of Markov components in the model.

3. Subset Selection in Linear Regression: Autoregressive (AR) linear models were fit (with closed-form solutions for the optimal model parameters) to a monthly financial time series with 307 observations, for all possible combinations of lags (all possible subsets) from order k = 1 to order k = 12. For example, the k = 1 model represents the best model with a single predictor from the previous 12 months, not necessarily the AR(1) model. Again the goodness-of-fit curve is almost convex in k (Figure 2(b)), except at k = 9 where there is a slight non-convexity: this could again be either a numerical estimation effect or a fundamental characteristic indicating that convexity holds only to first order.

Figure 2: (a) In-sample log-likelihood for mixture modeling of the atmospheric data set (horizontal axis: number of mixture components k); (b) mean squared error for regression using the financial data set (horizontal axis: number of regression variables k).
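Experiments of this kind are easy to reproduce in outline. The sketch below (ours, not the authors' code; it assumes scikit-learn is available and uses synthetic data standing in for the atmospheric set) fits Gaussian mixtures for a range of k with several EM restarts, then checks the first-order concavity of Theorem 2 via second differences of the in-sample log-likelihood:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic 2-D stand-in for the atmospheric data: a 3-component mixture
X = np.vstack([rng.normal(loc, 1.0, size=(500, 2))
               for loc in ([0.0, 0.0], [4.0, 0.0], [2.0, 3.0])])

ks = range(1, 13)
loglik = []
for k in ks:
    # several EM restarts guard against the local maxima discussed above
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
    loglik.append(gm.score(X) * len(X))       # score() is the per-sample mean

loglik = np.array(loglik)
second_diff = loglik[2:] - 2.0 * loglik[1:-1] + loglik[:-2]
print("in-sample log-likelihood:", np.round(loglik, 1))
print("second differences (<= 0 up to first order):", np.round(second_diff, 1))
```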
7 Discussion and Conclusions

Space does not permit a full discussion of the implications of the results derived here. The main implication is that for at least two common learning scenarios the maximizing/minimizing value of the loss function is strongly constrained as model complexity is varied. Thus, for example, when performing model selection using a penalized goodness-of-fit criterion (as in the corollaries above), variants of binary search may be quite useful in problems where k is very large (for the mixtures of Markov chains above it is not necessary to fit the model for all values of k, i.e., we can simply interpolate to within first order); a sketch of this idea follows the references. Extensions to model selection using loss functions defined on out-of-sample test data sets can also be derived, and carry over under appropriate assumptions to cross-validation. Note that the results described here do not have an obvious extension to non-linear models (such as feed-forward neural networks) or to loss functions such as the 0/1 loss for classification.

References

Bishop, C., Neural Networks for Pattern Recognition, Oxford University Press, 1995, pp. 376-377.

Cadez, I., D. Heckerman, C. Meek, P. Smyth, and S. White, 'Visualization of navigation patterns on a Web site using model-based clustering,' Technical Report MS-TR-00-18, Microsoft Research, Redmond, WA, 2000.

Li, Jonathan Q., and Barron, Andrew R., 'Mixture density estimation,' presented at NIPS 99.

Smyth, P., K. Ide, and M. Ghil, 'Multiple regimes in Northern Hemisphere height fields via mixture model clustering,' Journal of the Atmospheric Sciences, vol. 56, no. 21, pp. 3704-3723, 1999.
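As promised in the Discussion, here is a sketch of the search idea: if the penalized score is unimodal in k (Corollaries 1 and 2), its maximizer can be located without fitting every k. This is our illustration, not the authors' code; it assumes a strictly unimodal score, which the theory guarantees only to within first order:

```python
def select_k_unimodal(score, k_min, k_max):
    """Ternary search for the maximizer of a unimodal score(k) over integers.

    Relies on Corollary 1: if the penalized log-likelihood (e.g., BIC) is
    unimodal in k, the best k can be found without evaluating every k.
    Because unimodality holds only to within first order, checking the
    neighborhood of the returned k is prudent in practice.
    """
    cache = {}
    def s(k):
        if k not in cache:
            cache[k] = score(k)
        return cache[k]

    lo, hi = k_min, k_max
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if s(m1) < s(m2):
            lo = m1 + 1
        else:
            hi = m2 - 1
    best = max(range(lo, hi + 1), key=s)
    return best, cache
```

Here score(k) might be, for example, lambda k: -GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X).bic(X); with the cache, only on the order of log(k_max) fits are needed instead of k_max.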
What can a single neuron compute?

Blaise Agüera y Arcas,¹ Adrienne L. Fairhall,² and William Bialek²
¹ Rare Books Library, Princeton University, Princeton, New Jersey 08544
² NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540
[email protected], {adrienne, bialek}@research.nj.nec.com

Abstract

In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the Hodgkin-Huxley model, identify the most relevant dimensions and find the nonlinearity. A two-dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrate-and-fire or perceptron model.

1 Introduction

Classical neural network models approximate neurons as devices that sum their inputs and generate a nonzero output if the sum exceeds a threshold. From our current state of knowledge in neurobiology it is easy to criticize these models as oversimplified: where is the complex geometry of neurons, or the many different kinds of ion channel, each with its own intricate multistate kinetics? Indeed, progress at this more microscopic level of description has led us to the point where we can write (almost) exact models for the electrical dynamics of neurons, at least on short time scales. These nearly exact models are complicated by any measure, including tens if not hundreds of differential equations to describe the states of different channels in different spatial compartments of the cell. Faced with this detailed microscopic description, we need to answer a question which goes well beyond the biological context: given a continuous dynamical system, what does it compute? Our goal in this paper is to make this question about what a neuron computes somewhat more precise, and then to explore what we take to be the simplest example, namely the Hodgkin-Huxley model [1],[2] (and refs therein).

2 What do we mean by the question?

Real neurons take as inputs signals at their synapses and give as outputs sequences of discrete, identical pulses: action potentials or 'spikes'. The inputs themselves are spikes from other neurons, so the neuron is a device which takes $N \sim 10^3$ pulse trains as inputs and generates one pulse train as output. If the system operates at 2 msec resolution and the window of relevant inputs is 20 msec, then we can think of a single neuron as having an input described by a $\sim 10^4$-bit word (the presence or absence of a spike in each 2 msec bin for each presynaptic cell), which is then mapped to a one (spike) or zero (no spike). More realistically, if the average spike rates are $\sim 10\ \mathrm{sec}^{-1}$, the input words can be compressed by a factor of ten. Thus we might be able to think about neurons as evaluating a Boolean function of roughly 1000 Boolean variables, and then characterizing the computational function of the cell amounts to specifying this Boolean function. The above estimate, though crude, makes clear that there will be no direct empirical attack on the question of what a neuron computes: there are too many possibilities to learn the function by brute force from any reasonable set of experiments. Progress requires the hypothesis that the function computed by a neuron is not arbitrary, but belongs to a simple class.
Our suggestion is that this simple class involves functions that vary only over a low-dimensional subspace of the inputs, and in fact we will start by searching for linear subspaces. Specifically, we begin by simplifying away the spatial structure of neurons and take the input to be just a current injected into a point-like neuron. While this misses some of the richness in real cells, it allows us to focus on developing our computational methods. Further, it turns out that even this simple problem is not at all trivial.

If the input is an injected current, then the neuron maps the history of this current, $I(t < t_0)$, into the presence or absence of a spike at time $t_0$. More generally we might imagine that the cell (or our description) is noisy, so that there is a probability of spiking $P[\mathrm{spike@}t_0 \mid I(t < t_0)]$ which depends on the current history. We emphasize that the dependence on the history of the current means that there still are many dimensions to the input signal even though we have collapsed any spatial variations. If we work at time resolution $\Delta t$ and assume that currents in a window of size $T$ are relevant to the decision to spike, then the inputs live in a space of dimension $D = T/\Delta t$, of order 100 in many interesting cases. If the neuron is sensitive only to a low-dimensional linear subspace, we can define a set of signals $s_1, s_2, \ldots, s_K$ by filtering the current,

$$s_\mu = \int_0^\infty dt\, f_\mu(t)\, I(t_0 - t), \qquad (1)$$

so that the probability of spiking depends only on this finite set of signals,

$$P[\mathrm{spike@}t_0 \mid I(t < t_0)] = P[\mathrm{spike@}t_0]\; g(s_1, s_2, \ldots, s_K), \qquad (2)$$

where we include the average probability of spiking so that $g$ is dimensionless. If we think of the current $I(t < t_0)$ as a vector, with one dimension for each time sample, then these filtered signals are linear projections of this vector. In this formulation, characterizing the computation done by a neuron means estimating the number of relevant stimulus dimensions ($K$, hopefully much less than $D$), identifying the filters which project into this relevant subspace,¹ and then characterizing the nonlinear function $g(\mathbf{s})$. The classical perceptron-like cell of neural network theory has only one relevant dimension and a simple form for $g$.

3 Identifying low-dimensional structure

The idea that neurons might be sensitive only to low-dimensional projections of their inputs was developed explicitly in work on a motion-sensitive neuron of the fly visual system [3]. Rather than looking at the distribution $P[\mathrm{spike@}t_0 \mid s(t < t_0)]$, with $s(t)$ the input signal (velocity of motion across the visual field in [3]), that work considered the distribution of signals conditional on the response, $P[s(t < t_0) \mid \mathrm{spike@}t_0]$; these are related by Bayes' rule,

$$P[\mathrm{spike@}t_0 \mid s(t < t_0)] = P[s(t < t_0) \mid \mathrm{spike@}t_0]\; \frac{P[\mathrm{spike@}t_0]}{P[s(t < t_0)]}. \qquad (3)$$

¹Note that the individual filters don't really have any meaning; what is meaningful is the projection operator that is formed by the whole set of these filters. Put another way, the individual filters specify both a K-dimensional subspace and a coordinate system on this subspace, but there is no reason to prefer one coordinate system over another.

Within the response-conditional ensemble $P[s(t < t_0) \mid \mathrm{spike@}t_0]$ we can compute various moments. Thus the spike-triggered average stimulus, or reverse correlation function [4], is the first moment

$$\mathrm{STA}(\tau) = \int [ds]\; P[s(t < t_0) \mid \mathrm{spike@}t_0]\; s(t_0 - \tau). \qquad (4)$$

We can also compute the covariance matrix of fluctuations around this average,

$$C_{\mathrm{spike}}(\tau, \tau') = \int [ds]\; P[s(t < t_0) \mid \mathrm{spike@}t_0]\; s(t_0 - \tau)\, s(t_0 - \tau') - \mathrm{STA}(\tau)\,\mathrm{STA}(\tau'). \qquad (5)$$

In the same way that we compare the spike-triggered average to some constant average level of the signal (which we can define to be zero) in the whole experiment, we want to compare the covariance matrix $C_{\mathrm{spike}}$ with the covariance of the signal averaged over the whole experiment,

$$C_{\mathrm{prior}}(\tau, \tau') = \int [ds]\; P[s(t < t_0)]\; s(t_0 - \tau)\, s(t_0 - \tau'). \qquad (6)$$

Notice that all of these covariance matrices are $D \times D$ in size. The surprising finding of [3] was that the change in the covariance matrix, $\Delta C = C_{\mathrm{spike}} - C_{\mathrm{prior}}$, had only a very small number of nonzero eigenvalues. In fact it can be shown that if the probability of spiking depends on K linear projections of the stimulus as in Eq. (2), and if the inputs $s(t)$ are chosen from a Gaussian distribution, then the rank of the matrix $\Delta C$ is exactly K. Further, the eigenvectors associated with nonzero eigenvalues span the relevant subspace (up to a rotation associated with the autocorrelations in the inputs). Thus eigenvalue analysis of the spike-triggered covariance matrix gives us a direct way to search for a low-dimensional linear subspace that captures the relevant stimulus features.
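In discrete time, Eqs. (4)-(6) reduce to averages over D-sample stimulus history vectors, and the eigenanalysis of $\Delta C$ is a few lines of linear algebra. The sketch below is ours, not the authors' code (the function name is hypothetical), and assumes NumPy >= 1.20 for sliding_window_view:

```python
import numpy as np

def spike_triggered_stats(stimulus, spike_idx, D):
    """STA, plus eigenanalysis of dC = C_spike - C_prior (Eqs. 4-6).

    stimulus : 1-D array, the signal (e.g., injected current) per time bin
    spike_idx: array of indices of the time bins that contain a spike
    D        : number of history bins kept, D = T / dt
    """
    spike_idx = np.asarray(spike_idx)
    # all D-sample history windows; windows[i] = stimulus[i : i + D]
    windows = np.lib.stride_tricks.sliding_window_view(stimulus, D)
    prior = windows - windows.mean(axis=0)
    C_prior = prior.T @ prior / len(prior)

    # the history vector ending at spike bin t is windows[t - D + 1]
    usable = spike_idx[spike_idx >= D - 1]
    trig = windows[usable - D + 1]
    sta = trig.mean(axis=0)
    centered = trig - sta
    C_spike = centered.T @ centered / len(trig)

    # nonzero eigenvalues of dC mark the relevant stimulus dimensions
    eigvals, eigvecs = np.linalg.eigh(C_spike - C_prior)
    return sta, eigvals, eigvecs
```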
4 The Hodgkin-Huxley model

We recall the details of the Hodgkin-Huxley model and note some special features that guide our analysis. Hodgkin and Huxley [1] modeled the dynamics of the current through a patch of membrane by flow through ion-specific conductances:

$$I(t) = C\,\frac{dV}{dt} + \bar g_K\, n^4\, (V - V_K) + \bar g_{Na}\, m^3 h\, (V - V_{Na}) + \bar g_l\, (V - V_l), \qquad (7)$$

where the subscripts K and Na denote potassium- and sodium-related variables, respectively, and the l (for 'leakage') terms are a catch-all for other ion species with slower dynamics. $C$ is the membrane capacitance. The subscripted voltages $V_K$, $V_{Na}$ and $V_l$ are ion-specific reversal potentials. $\bar g_K$, $\bar g_{Na}$ and $\bar g_l$ are empirically determined maximal conductances for the different ions,² and the gating variables $n$, $m$ and $h$ (on the interval [0,1]) have their own voltage-dependent dynamics,

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \qquad x \in \{n, m, h\}, \qquad (8)$$

with the standard rate functions, written in the sign-flipped voltage convention of footnote 2:

$$\alpha_n = \frac{0.1 - 0.01V}{e^{\,1 - 0.1V} - 1}, \quad \beta_n = 0.125\, e^{-V/80}, \quad \alpha_m = \frac{2.5 - 0.1V}{e^{\,2.5 - 0.1V} - 1}, \quad \beta_m = 4\, e^{-V/18}, \quad \alpha_h = 0.07\, e^{-V/20}, \quad \beta_h = \frac{1}{e^{\,3 - 0.1V} + 1},$$

with $V$ in mV and $t$ in msec.

Here we are interested in dynamic inputs $I(t)$, but it is important to remember that for constant inputs the Hodgkin-Huxley model undergoes a Hopf bifurcation to spike at a constant frequency; further, this frequency is rather insensitive to the precise value of the input above onset. This 'rigidity' of the system is felt also in many regimes of dynamic stimulation, and can be thought of as a strong interaction among successive spikes. These interactions lead to long memory times, reflecting the infinite phase memory of the periodic orbit which exists for constant input. While spike interactions are interesting, we want to focus on the way that input current modulates the probability of spiking. To separate these effects we consider only 'isolated' spikes. These are defined by accumulating the interspike interval distribution and noticing that for intervals $t > t_c$ the distribution decays exponentially, which means that the system has lost memory of the previous spike; thus spikes which occur more than $t_c$ after the previous spike are isolated. In what follows we consider the response of the Hodgkin-Huxley model to currents $I(t)$ with zero mean, 0.275 nA standard deviation, and 0.5 msec correlation time.

²We have used the original parameters, with a sign change for voltages: $C = 1\,\mu\mathrm{F/cm}^2$, $\bar g_K = 36\,\mathrm{mS/cm}^2$, $\bar g_{Na} = 120\,\mathrm{mS/cm}^2$, $\bar g_l = 0.3\,\mathrm{mS/cm}^2$, $V_K = -12\,\mathrm{mV}$, $V_{Na} = +115\,\mathrm{mV}$, $V_l = +10.613\,\mathrm{mV}$. We have taken our system to be a $\pi \times 30^2\,\mu\mathrm{m}^2$ patch of membrane.
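For readers who want to reproduce the qualitative behaviour, a simple forward-Euler integration of Eqs. (7)-(8) with the parameters of footnote 2 looks roughly as follows. This is our sketch, not the authors' simulation code; a smaller time step or a stiff-aware integrator may be needed for quantitative work:

```python
import numpy as np

# Forward-Euler integration of Eqs. (7)-(8) with the parameters of
# footnote 2 (sign-flipped voltages; V in mV, t in msec, rest near V = 0).

C, gK, gNa, gL = 1.0, 36.0, 120.0, 0.3            # uF/cm^2 and mS/cm^2
VK, VNa, VL = -12.0, 115.0, 10.613                # mV
AREA = np.pi * 30.0**2 * 1e-8                     # pi x 30^2 um^2 in cm^2

def rates(V):
    # Standard HH rate functions in the sign-flipped convention.  Note the
    # removable singularities at V = 10 and V = 25 mV; a robust version
    # should use a series expansion there.
    an = (0.1 - 0.01 * V) / (np.exp(1.0 - 0.1 * V) - 1.0)
    bn = 0.125 * np.exp(-V / 80.0)
    am = (2.5 - 0.1 * V) / (np.exp(2.5 - 0.1 * V) - 1.0)
    bm = 4.0 * np.exp(-V / 18.0)
    ah = 0.07 * np.exp(-V / 20.0)
    bh = 1.0 / (np.exp(3.0 - 0.1 * V) + 1.0)
    return an, bn, am, bm, ah, bh

def run_hh(I_nA, dt=0.01):
    """Integrate the membrane equation; I_nA gives the injected current (nA)
    in each time step of length dt (msec).  Returns the voltage trace."""
    V, n, m, h = 0.0, 0.32, 0.05, 0.60            # approximate resting state
    Vs = np.empty(len(I_nA))
    for i, I in enumerate(I_nA):
        an, bn, am, bm, ah, bh = rates(V)
        n += dt * (an * (1.0 - n) - bn * n)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        J = I * 1e-3 / AREA                       # nA -> uA/cm^2
        V += dt * (J - gK * n**4 * (V - VK)
                     - gNa * m**3 * h * (V - VNa) - gL * (V - VL)) / C
        Vs[i] = V
    return Vs
```

Isolated spikes can then be detected, for example, as upward crossings of a threshold such as V = 50 mV separated from the previous crossing by more than $t_c$, and the stimulus can be drawn as an Ornstein-Uhlenbeck process with a 0.5 msec correlation time to match the statistics quoted above.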
5 How many dimensions?

Fig. 1 shows the change in covariance matrix $\Delta C(\tau, \tau')$ for isolated spikes in our HH simulation, and Fig. 2(a) shows the resulting spectrum of eigenvalues as a function of sample size. The result strongly suggests that there are many fewer than $D$ relevant dimensions. In particular, there seem to be two outstanding modes; the STA itself lies largely in the subspace of these modes, as shown in Fig. 2(b).

Figure 1: The isolated-spike-triggered covariance matrix $\Delta C(\tau, \tau')$.

The filters themselves, shown in Fig. 3, have simple forms; in particular the second mode is almost exactly the derivative of the first. If the neuron filtered its inputs and generated a spike when the output of the filter crosses threshold, we would find that there are two significant dimensions, corresponding to the filter and its derivative. It is tempting to suggest, then, that this is a good approximation to the HH model, but we will see that this is not correct. Notice also that both filters have significant differentiating components: the cell is not simply integrating its inputs.

Although Fig. 2(a) suggests that two modes dominate, it also demonstrates that the smaller nonzero eigenvalues of the other modes are not just noise. The width of any spectral band of eigenvalues near zero due to finite sampling should decline with increasing sample size. However, the smaller eigenvalues seen in Fig. 2(a) are stable. Thus while the system is primarily sensitive to two dimensions, there is something missing in this picture.

Figure 2: (a) Convergence of the largest 32 eigenvalues of the isolated-spike-triggered covariance with increasing sample size. (b) Projections of the isolated STA onto the covariance modes.

Figure 3: The most significant two modes of the spike-triggered covariance (eigenmodes 1 and 2, with the normalized derivative of mode 1 shown for comparison).
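One practical way to read Fig. 2(a) is to recompute the spectrum of $\Delta C$ while accumulating spikes and watch which eigenvalues stay put. A sketch (ours; it reuses the hypothetical spike_triggered_stats function from the earlier snippet):

```python
import numpy as np

def spectrum_vs_sample_size(stimulus, spike_idx, D,
                            fractions=(0.1, 0.3, 1.0), top=8):
    """Largest-|eigenvalue| modes of dC as more spikes are accumulated.

    Modes whose eigenvalues stay fixed as the sample grows are treated as
    real stimulus dimensions; a band that shrinks toward zero is finite-
    sampling noise.  Reuses spike_triggered_stats() from the sketch above.
    """
    spike_idx = np.asarray(spike_idx)
    out = {}
    for frac in fractions:
        n = max(int(frac * len(spike_idx)), D + 1)
        _, eigvals, _ = spike_triggered_stats(stimulus, spike_idx[:n], D)
        order = np.argsort(-np.abs(eigvals))
        out[n] = eigvals[order][:top]
    return out
```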
4 shows that this input/output relation has sharp edges, but also some fuzziness. The HH model is deterministic, so in principle the input/output relation should be a c5 function: spikes occur only when certain exact conditions are met. Of course we have blurred things a bit by working at finite time -wo 2 ~ .~ "0 "E a '~" "0 :?.. N en -2 -4 ~ 4 a 2 s, (standard deviations) Figure 4: 104 spike-conditional stimuli projected along the first 2 covariance modes. The circles represent the cumulative radial integral of the prior distribution from 00; the ring marked 10-4, for example, encloses 1 - 10- 4 of the prior. resolution. Given that we work at finite llt, spikes carry only a finite amount of information, and the quality of our 2D approximation can be judged by asking how much of this information is captured by this description. As explained in [5], the arrival time of a single spike provides an information lonespike = ( r~) log2 [r~)] ), (10) where r(t) is the time dependent spike rate, f is the average spike rate, and (. . .) denotes an average over time. With a deterministic model like HH, the rate r(t) either is zero or corresponds to one spike occurring in one bin of size llt, that is r = l/11t. The result is that lonespike = -log2(fllt). On the other hand, if the probability of spiking really depends only on the stimulus dimensions 81 and 82, we can substitute r(t) - f -+ P(81,82Ispike@t) ---'--=::::-:-=--'--=----:---"- P(81,82)' (11) and use the ergodicity of the stimulus to replace time averages in Eq. (10). Then we find [3, 5] (12) If our two dimensional approximation were exact we would find l~~~s:pike = lone spike; more generally we will find 1~~~ss2pike ~ lone spike, and the fraction of the information we capture measures the quality of the approximation. This fraction is plotted in fig. 5 as a function of time resolution. For comparison, we also show the information captured by considering only the stimulus projection along the STA. -+- Covariance modes 1 and 2 (2D) 02 ~~----~~----~----~--~ 6 B 10 time discretization (msec) Figure 5: Fraction of spike timing information captured by STA (lower curve) and projection onto covariance modes 1 and 2 (upper curve). 7 Discussion The simple, low-dimensional model described captures a substantial amount of information about spike timing for a HH neuron. The fraction is maximal near bot = 5.5msec, reaching nearly 70%. However, the absolute information captured saturates for both the 1D and 2D cases, at RJ 3.5 and 5 bits respectively, for smaller bot. Hence the information fraction captured plummets; recovering precise spike timing requires a more complex, higher dimensional representation of the stimulus. Is this effect important, or is timing at this resolution too noisy for this extra complexity to matter in a real neuron? Stochastic HH simulations have suggested that, when realistic noise sources are taken into account, the timing of spikes in response to dynamic stimuli is reproducible to within 1- 2 msec [6]. This suggests that such timing details may indeed be important. Even in 2D, one can observe that the spike conditional distribution is curved (fig. 4); it is likely to curve along other dimensions as well. It may be possible to improve our approximation by considering the computation to take place on a low-dimensional but curved manifold, instead of a linear subspace. The curvature in Fig. 
4 also implies that the computation in the HH model is not well approximated by an integrate and fire model, or a perceptron model limited to linear separations. Characterizing the complexity of the computation is an important step toward understanding neural systems. How to quantify this complexity theoretically is an area for future work; here, we have made progress toward this goal by describing such computations in a compact way and then evaluating the completeness of the description using information. The techniques presented are applicable to more complex models, and of course to real neurons. How does the addition of more channels increase the complexity of the computation? Will this add more relevant dimensions or does the non-linearity change? References [1] [2] [3] [4] A. Hodgkin and A. Huxley. J. Physiol., 117, 1952. C. Koch . Biophysics of computation. New York: Oxford University Press, 1999. W. Bialek and R. de Ruyter van Steveninck. Proc. R. Soc. Lond. B, 234, 1988. F. Rieke, D. Warland, R. de Ruyter van Steveninck and W. Bialek. Spikes: exploring the neural code. Cambridge, MA: MIT Press, 1997. [5] N. Brenner, S. Strong, R. Koberle, W. Bialek and R. de Ruyter van Steveninck. Neural Comp., 12, 2000. [6] E. Schneidman, R. Freedman and I. Segev. Neural Comp., 10, 1998.
Sparsity of data representation of optimal kernel machine and leave-one-out estimator

A. Kowalczyk
Chief Technology Office, Telstra
770 Blackburn Road, Clayton, Vic. 3168, Australia
([email protected])

Abstract

Vapnik's result that the expectation of the generalisation error of the optimal hyperplane is bounded by the expectation of the ratio of the number of support vectors to the number of training examples is extended to a broad class of kernel machines. The class includes Support Vector Machines for soft margin classification and regression, and Regularization Networks with a variety of kernels and cost functions. We show that key inequalities in Vapnik's result become equalities once "the classification error" is replaced by "the margin error", with the latter defined as an instance with positive cost. In particular we show that the expectations of the true margin error and the empirical margin error are equal, and that sparse solutions for kernel machines are possible only if the cost function is "partially" insensitive.

1 Introduction

Minimization of regularized risk is a backbone of several recent advances in machine learning, including Support Vector Machines (SVM) [13], Regularization Networks (RN) [5] and Gaussian Processes [15]. Such a machine is typically implemented as a weighted sum of a kernel function evaluated for pairs composed of the data vector in question and a number of selected training vectors, the so-called support vectors. For practical machines it is desirable to have as few support vectors as possible. It has been observed empirically that SVM solutions often have very few support vectors, i.e. that they are sparse, while RN machines are not. The paper shows that this behaviour is determined by the properties of the cost function used (its partial insensitivity, to be precise).

Another motivation for interest in sparsity of solutions comes from the celebrated result of Vapnik [13] which links the number of support vectors to the generalization error of the SVM via a bound on the leave-one-out estimator [9]. This result was originally shown for the special case of classification with the hard margin cost function (optimal hyperplane). The papers by Opper and Winther [10], Jaakkola and Haussler [6], and Joachims [7] extend Vapnik's result in the direction of bounds for the classification error of SVMs. The first of those papers deals with the hard margin case, while the other two derive tighter bounds on the classification error of soft margin SVMs with $\epsilon$-insensitive linear cost. In this paper we extend Vapnik's result in another direction. Firstly, we show that it holds for a wide range of kernel machines optimized for a variety of cost functions, for both classification and regression tasks. Secondly, we find that Vapnik's key inequalities become equalities once "the misclassification error" is replaced by "the margin error" (defined as the rate of data instances incurring positive costs). In particular, we find that for margin errors the following three expectations, (i) of the empirical risk, (ii) of the true risk and (iii) of the leave-one-out risk estimator, are equal to each other. Moreover, we show that they are equal to the expectation of the ratio of the number of support vectors to the number of training examples. The main results are given in Section 2. A brief discussion of the results is given in Section 3.

2 Main results

Given an $l$-sample $\{(x_1, y_1), \ldots, (x_l, y_l)\}$ of patterns $x_i \in X \subset \mathbb{R}^n$ and target values $y_i \in Y \subset \mathbb{R}$.
The learning algorithms used by SVMs [13], RNs [5] or Gaussian Processes [15] minimise the regularized risk functional of the form:

$$\min_{(f,b) \in \mathcal{H} \times \mathbb{R}} R_{\mathrm{reg}}[f,b] = \sum_{i=1}^{l} c(x_i, y_i, \xi_i[f,b]) + \frac{\lambda}{2}\,\|f\|_{\mathcal{H}}^2. \qquad (1)$$

Here $\mathcal{H}$ denotes a reproducing kernel Hilbert space (RKHS) [1], $\|\cdot\|_{\mathcal{H}}$ is the corresponding norm, $\lambda > 0$ is a regularization constant, $c : X \times Y \times \mathbb{R} \to \mathbb{R}_+$ is a non-negative cost function penalising for the deviation $\xi_i[f,b] = y_i - \hat{y}_i$ of the estimator $\hat{y}_i := f(x_i) + \beta b$ from the target $y_i$ at location $x_i$, $b \in \mathbb{R}$ is a constant (bias) and $\beta \in \{0,1\}$ is another constant ($\beta = 0$ is used to switch the bias off). The important Representer Theorem [8, 4] states that the minimizer of (1) has the expansion

$$f(x) = \sum_{i=1}^{l} \alpha_i\, k(x_i, x), \qquad (2)$$

where $k : X \times X \to \mathbb{R}$ is the kernel corresponding to the RKHS $\mathcal{H}$. In the following section we shall show that under general assumptions this expansion is unique. If $\alpha_i \neq 0$, then $x_i$ is called a support vector of $f(\cdot)$.

2.1 Unique Representer Theorem

We recall that a function is called a real analytic function on a domain $\subset \mathbb{R}^q$ if for every point of this domain the Taylor series for the function converges to this function in some neighborhood of that point.¹ A proof of the following crucial Lemma is omitted due to lack of space.

Lemma 2.1. If $\varphi : X \to \mathbb{R}$ is an analytic function on an open connected subset $X \subset \mathbb{R}^n$, then the subset $\varphi^{-1}(0) \subset X$ is either equal to $X$ or has Lebesgue measure 0.

Analyticity is essential for the above result, and the result does not hold even for infinitely differentiable functions in general. Indeed, for every closed subset $V \subset \mathbb{R}^n$ there exists an infinitely differentiable ($C^{\infty}$) function $\phi$ on $\mathbb{R}^n$ such that $\phi^{-1}(0) = V$, and there exist closed subsets with positive Lebesgue measure and empty interior. Hence the Lemma, and consequently the subsequent results, do not hold for the broader class of all $C^{\infty}$ functions.

¹Examples of analytic functions are polynomials. The ordinary functions such as $\sin(x)$, $\cos(x)$ and $\exp(x)$ are examples of non-polynomial analytic functions. The function $\psi(x) := \exp(-1/x^2)$ for $x > 0$, and $0$ otherwise, is an example of an infinitely differentiable function of the real line which is not analytic (locally it is not equal to its Taylor series expansion at zero).

Standing assumptions. The following is assumed.

1. The set $X \subset \mathbb{R}^n$ is open and connected, and either $Y = \{\pm 1\}$ (the case of classification) or $Y \subset \mathbb{R}$ is an open segment (the case of regression).
2. The kernel $k : X \times X \to \mathbb{R}$ is a real analytic function on its domain.
3. The cost function $\xi \mapsto c(x, y, \xi)$ is convex, differentiable on $\mathbb{R}$, and $c(x, y, 0) = 0$ for every $(x,y) \in X \times Y$. It can be shown that
$$c(x, y, \xi) > 0 \iff \frac{\partial c}{\partial \xi}(x, y, \xi) \neq 0. \qquad (3)$$
4. $l$ is a fixed integer, $1 < l \le \dim(\mathcal{H})$, and the training sample $(x_1, y_1), \ldots, (x_l, y_l)$ is iid drawn from a continuous probability density $p(x, y)$ on $X \times Y$.
5. The phrase "with probability 1" will mean with probability 1 with respect to the selection of the training sample.

Note that the standard polynomial kernel $k(x, x') = (1 + x \cdot x')^d$, $x, x' \in \mathbb{R}^n$, satisfies the above assumptions with $\dim(\mathcal{H}) = \binom{n+d}{d}$. Similarly, the Gaussian kernel $k(x, x') = \exp(-\|x - x'\|^2/\sigma)$ satisfies them with $\dim(\mathcal{H}) = \infty$.

Typical cost functions such as the super-linear loss functions $c_p(x, y, \xi) = (y\xi)_+^p := (\max(0, y\xi))^p$ used for SVM classification, or $c_{p,\epsilon}(x, y, \xi) = (|\xi| - \epsilon)_+^p$ used for SVM regression, or the super-linear loss $c_p(x, y, \xi) = |\xi|^p$ for $p > 1$ for RN regression, satisfy the above assumptions.² Similarly, variations of the Huber robust loss [11, 14] satisfy those assumptions.
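To make (1) and (2) concrete, here is a short Python sketch (ours, not the paper's) that fits a Regularization Network with the squared loss $c(x,y,\xi) = \xi^2$, no bias ($\beta = 0$) and the Gaussian kernel. Substituting the expansion (2) into (1) and setting the gradient with respect to $\alpha$ to zero reduces the problem to the linear system $(K + \frac{\lambda}{2}I)\alpha = y$; the factor $\frac{\lambda}{2}$ follows from the $\frac{\lambda}{2}\|f\|^2$ convention in (1). The kernel width, $\lambda$ and the synthetic data are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """k(x, x') = exp(-||x - x'||^2 / sigma), evaluated for all pairs."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma)

def fit_rn(X, y, lam=0.1, sigma=1.0):
    """Minimise sum_i (y_i - f(x_i))^2 + (lam/2)||f||_H^2.
    By the Representer Theorem f(x) = sum_j alpha_j k(x_j, x), and the
    first-order condition reduces to (K + (lam/2) I) alpha = y."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + 0.5 * lam * np.eye(len(X)), y)

def predict(alpha, X_train, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy usage on synthetic data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = fit_rn(X, y)
print(predict(alpha, X, X[:3]))
```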
The following result strengthens the Representer Theorem [8, 4].

Theorem 2.2. If $l \le \dim \mathcal{H}$, then both the minimizer of the regularized risk (1) and its expansion (2) are unique with probability 1.

Proof outline. Convexity of the functional $(f,b) \mapsto R_{\mathrm{reg}}[f,b]$ and its strict convexity with respect to $f \in \mathcal{H}$ imply the uniqueness of the $f \in \mathcal{H}$ minimizing the regularized risk (1); cf. [3]. From the assumption that $l \le \dim \mathcal{H}$ we derive the existence of $x_1, \ldots, x_l \in X$ such that the functions $k(x_i, \cdot)$, $i = 1, \ldots, l$, are linearly independent. Equivalently, the following Gram determinant is $\neq 0$:
$$\phi(x_1, \ldots, x_l) := \det\big[\langle k(x_i, \cdot),\, k(x_j, \cdot)\rangle_{\mathcal{H}}\big]_{1 \le i,j \le l} = \det\big[k(x_i, x_j)\big]_{1 \le i,j \le l} \neq 0.$$
Now Lemma 2.1 implies that $\phi(x_1, \ldots, x_l) \neq 0$ with probability 1, since $\phi : X^l \to \mathbb{R}$ is an analytic function. Hence the functions $k(x_i, \cdot)$ are linearly independent and the expansion (2) is unique with probability 1. Q.E.D.

2.2 Leave-one-out estimator

In this section the minimizer of (1) for the whole data sequence of $l$ training instances, and some other objects related to it, will be additionally marked with superscript "$(l)$". The superscript "$(l \setminus i)$" will be used analogously to mark objects corresponding to the minimizer of (1) for the reduced training sequence, with the $i$th instance removed.

Lemma 2.3. With probability 1, for every $i \in \{1, \ldots, l\}$:
$$\alpha^{(l)}_i \neq 0 \iff c\big(x_i, y_i, \xi_i[f^{(l)}, b^{(l)}]\big) > 0, \qquad (4)$$
$$\alpha^{(l)}_i \neq 0 \iff c\big(x_i, y_i, \xi_i[f^{(l \setminus i)}, b^{(l \setminus i)}]\big) > 0. \qquad (5)$$

²Note that in general, if a function $\phi : \mathbb{R} \to \mathbb{R}$ is convex, differentiable and such that $d\phi/d\xi(0) = 0$, then the cost function $c(x, y, \xi) := \phi((\xi)_+)$ is convex and differentiable.

Proof outline. With probability 1, the functions $k(x_j, \cdot)$, $j = 1, \ldots, l$, are linearly independent (cf. the proof of Theorem 2.2) and there exists a feature map $\Phi : X \to \mathbb{R}^l$ such that the vectors $z_j := \Phi(x_j)$, $j = 1, \ldots, l$, are linearly independent, $k(x_j, x) = z_j \cdot \Phi(x)$ and $f^{(l)}(x) = z^{(l)} \cdot \Phi(x) + \beta b^{(l)}$ for every $x \in X$, where $z^{(l)} := \sum_{j=1}^{l} \alpha^{(l)}_j z_j$. The pair $(z^{(l)}, b^{(l)})$ minimizes the function
$$\tilde{R}_{\mathrm{reg}}(z, b) := \sum_{j=1}^{l} c\big(x_j, y_j, \xi_j(z, b)\big) + \frac{\lambda}{2}\|z\|^2, \qquad (6)$$
where $\xi_j(z, b) := y_j - z \cdot z_j - \beta b$. This function is differentiable due to the standing assumptions on the cost $c$. Hence, necessarily $\mathrm{grad}\,\tilde{R}_{\mathrm{reg}} = 0$ at the minimum $(z^{(l)}, b^{(l)})$, which, due to the linear independence of the vectors $z_j$, gives
$$\alpha^{(l)}_j = -\frac{1}{\lambda}\,\frac{\partial c}{\partial \xi}\big(x_j, y_j, \xi_j(z^{(l)}, b^{(l)})\big) \qquad (7)$$
for every $j = 1, \ldots, l$. This equality combined with equivalence (3) proves (4).

Now we proceed to the proof of (5). Note that the pair $(z^{(l \setminus i)}, b^{(l \setminus i)})$, where $z^{(l \setminus i)} := \sum_{j \neq i} \alpha^{(l \setminus i)}_j z_j$, corresponds in the feature space to the minimizer $(f^{(l \setminus i)}, b^{(l \setminus i)})$ of the reduced regularized risk
$$\tilde{R}^{(l \setminus i)}_{\mathrm{reg}}(z, b) := \sum_{j \neq i} c\big(x_j, y_j, \xi_j(z, b)\big) + \frac{\lambda}{2}\|z\|^2.$$

Sufficiency in (5). From (4) and the characterization (7) of the critical point it follows immediately that if $\alpha^{(l)}_i = 0$, then the minimizers for the full and reduced data sets are identical.

Necessity in (5). A supposition of $\alpha^{(l)}_i \neq 0$ and $c(x_i, y_i, \xi_i[f^{(l \setminus i)}, b^{(l \setminus i)}]) = 0$ leads to a contradiction. Indeed, from (4), $c(x_i, y_i, \xi_i(z^{(l)}, b^{(l)})) > 0$, hence:
$$\tilde{R}^{(l)}_{\mathrm{reg}}(z^{(l \setminus i)}, b^{(l \setminus i)}) = \tilde{R}^{(l \setminus i)}_{\mathrm{reg}}(z^{(l \setminus i)}, b^{(l \setminus i)}) \le \tilde{R}^{(l \setminus i)}_{\mathrm{reg}}(z^{(l)}, b^{(l)}) = \tilde{R}^{(l)}_{\mathrm{reg}}(z^{(l)}, b^{(l)}) - c\big(x_i, y_i, \xi_i(z^{(l)}, b^{(l)})\big) < \tilde{R}^{(l)}_{\mathrm{reg}}(z^{(l)}, b^{(l)}) = \min_{(z,b) \in \mathbb{R}^l \times \mathbb{R}} \tilde{R}^{(l)}_{\mathrm{reg}}(z, b).$$
This contradiction completes the proof. Q.E.D.

We say that $x_i$ is a sensitive support vector if $\alpha^{(l)}_i \neq 0$ and $f^{(l)} \neq f^{(l \setminus i)}$, i.e., if its removal from the training set changes the solution.

Corollary 2.4. Every support vector is sensitive with probability 1.

Proof. If $\alpha_i \neq 0$, then $z^{(l)} \notin \mathrm{Lin}_{\mathbb{R}}(z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_l)$, since $z^{(l)}$ has a non-trivial component $\alpha_i z_i$ in the direction of the $i$th feature vector $z_i$, while $z^{(l \setminus i)} \in \mathrm{Lin}_{\mathbb{R}}(z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_l)$. Thus $z^{(l)}$ and $z^{(l \setminus i)}$ have different directions in $\mathrm{Lin}_{\mathbb{R}}(z_1, \ldots, z_l) \subset Z$ and there exists $j' \in \{1, \ldots, l\}$ such that $f^{(l)}(x_{j'}) \neq f^{(l \setminus i)}(x_{j'})$. Q.E.D.
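Equivalence (4) already explains the empirical observation from the Introduction: under an $\epsilon$-insensitive SVM cost, every training point inside the $\epsilon$-tube incurs zero cost and therefore has $\alpha_i = 0$, whereas under the (sensitive) squared loss essentially every $\alpha_i$ is non-zero. A small numerical illustration of this contrast is sketched below (ours, assuming scikit-learn is available; SVR uses a $C$-parametrization rather than the $\lambda$ of (1), so the comparison is qualitative only, and all parameter values are illustrative):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Partially insensitive cost (epsilon-insensitive loss): sparse solution.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("SVR:", len(svr.support_), "support vectors out of", len(X))

# Sensitive cost (squared loss, an RN): alpha generically has no zeros.
K = np.exp(-(X - X.T) ** 2)          # Gaussian kernel on the 1-D inputs
alpha = np.linalg.solve(K + 0.05 * np.eye(len(X)), y)
print("RN: ", np.sum(np.abs(alpha) > 1e-8), "non-zero coefficients out of", len(X))
```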
We define the empirical risk and the expected (true) risk of margin error as
$$R_{\mathrm{emp}}[f,b] := \frac{1}{l}\sum_{i=1}^{l} I_{\{c(x_i, y_i, \xi_i[f,b]) > 0\}} = \frac{\#\{i;\ c(x_i, y_i, \xi_i[f,b]) > 0\}}{l},$$
$$R_{\mathrm{exp}}[f,b] := \mathrm{Prob}\big[c(x, y, y - f(x) - \beta b) > 0\big],$$
where $(f,b) \in \mathcal{H} \times \mathbb{R}$, $I_{\{\cdot\}}$ denotes the indicator function and $\#$ denotes the cardinality (number of elements) of a set.

From the above Lemma we immediately obtain the following result.

Corollary 2.5. With probability 1:
$$\frac{\#\{i;\ c(x_i, y_i, y_i - f^{(l \setminus i)}(x_i) - \beta b^{(l \setminus i)}) > 0\}}{l} = \frac{\#\{i;\ \alpha^{(l)}_i \neq 0\}}{l} = R_{\mathrm{emp}}[f^{(l)}, b^{(l)}].$$

There exist counter-examples showing that the phrase "with probability 1" above cannot be omitted. The quantity on the L.H.S. above is the leave-one-out estimator of the risk of margin error [14] for the minimizer of the regularized risk (1). The above corollary shows that this estimator is uniquely determined by the number of support vectors, as well as by the number of training margin errors. Now, from the Lunts-Brailovsky Theorem [14, Theorem 10.8] applied to the risk $Q(x, y; f, b) := I_{\{c(x, y, y - f(x) - \beta b) > 0\}}$, the following result is obtained.

Theorem 2.6.
$$E\big[R_{\mathrm{exp}}(f^{(l-1)}, b^{(l-1)})\big] = E\big[R_{\mathrm{emp}}(f^{(l)}, b^{(l)})\big] = E\left[\frac{\#\{i;\ \alpha^{(l)}_i \neq 0\}}{l}\right], \qquad (8)$$
where the first expectation is over the selection of the training $(l-1)$-sample and the remaining two are with respect to the selection of the training $l$-sample.

A cost function is called partially insensitive if there exist $(x, y) \in X \times Y$ and $\xi_1 \neq \xi_2$ such that $c(x, y, \xi_1) = c(x, y, \xi_2) = 0$. Otherwise, the cost $c$ is called sensitive. Typical SVM cost functions are partially insensitive, while typical RN cost functions are sensitive. The following result can be derived from Theorem 2.6 and Lemma 2.3.

Corollary 2.7. If the number of support vectors is $< l$ with a probability $> 0$, then the cost function has to be partially insensitive.

Typical cost functions penalize for an allocation of the wrong sign, i.e.
$$\forall (x, y, \hat{y}) \in X \times Y \times \mathbb{R}: \quad y\hat{y} < 0 \ \Rightarrow\ c(x, y, y - \hat{y}) > 0. \qquad (9)$$
Let us define the risk of misclassification of the kernel machine $\hat{y}(x) = f(x) + \beta b$, for $(f,b) \in \mathcal{H} \times \mathbb{R}$, as $R_{\mathrm{clas}}[f,b] := \mathrm{Prob}[y\hat{y}(x) < 0]$. Assuming (9), we have $R_{\mathrm{clas}}[f,b] \le R_{\mathrm{exp}}[f,b]$. Combining this observation with (8), we obtain an extension of Vapnik's result [14, Theorem 10.5]:

Corollary 2.8. If condition (9) holds, then
$$E\big[R_{\mathrm{clas}}(f^{(l-1)}, b^{(l-1)})\big] \le E\left[\frac{\#\{i;\ \alpha^{(l)}_i \neq 0\}}{l}\right] = E\big[R_{\mathrm{emp}}(f^{(l)}, b^{(l)})\big]. \qquad (10)$$

Note that the original Vapnik's result consists in an inequality analogous to the inequality in the above condition for the specific case of classification by optimal hyperplanes (hard margin support vector machines).

3 Brief Discussion of Results

Essentiality of assumptions. For every formal result in this paper and any of the standing assumptions there exists an example of a minimizer of (1) which violates the conclusions of the result. In this sense all those assumptions are essential.

Linear combinations of admissible cost functions. Any weighted sum of cost functions satisfying our Standing Assumption 3 will satisfy this assumption as well, hence our formalism will apply to it. An illustrative example is the following cost function for classification: $c(x, y, \xi) = \sum_j c_j (\max(0, y(\xi - \epsilon_j)))^{p_j}$, where $c_j > 0$, $\epsilon_j \ge 0$ and $p_j > 1$ are constants and $y \in Y = \{\pm 1\}$.
Non-differentiable cost functions. Our formal results can be extended with minor modifications to the case of typical, non-differentiable linear cost functions such as $c = (y\xi)_+ = \max(0, y\xi)$ for SVM classification, $c = (|\xi| - \epsilon)_+$ for SVM regression, and to classification with hard margin SVMs (optimal hyperplanes). Details are beyond the scope of this paper. Note that the above linear cost functions can be uniformly approximated by differentiable cost functions, e.g. by the Huber cost function [11, 14], to which our formalism applies. This implies that our formalism "applies approximately" to the linear loss case, and some partial extension of it can be obtained directly using limit arguments. However, using a direct algebraic approach based on an evaluation of Kuhn-Tucker conditions one can come to stronger conclusions. Details will be presented elsewhere.

Theory of generalization. The equality of expectations of the empirical and the expected risk provided by Theorem 2.6 implies that minimizers of the regularized risk (1) are on average consistent. We should emphasize that this result holds for small training samples, of size $l$ smaller than the VC dimension of the function class, which is $\dim(\mathcal{H}) + 1$ in our case. This should be contrasted with uniform convergence bounds [2, 13, 14], which are vacuous unless $l$ is much larger than the VC dimension.

Significance of approximate solutions for RNs. Corollary 2.7 shows that sparsity of solutions is practically not achievable for optimal RN solutions, since they use sensitive cost functions. This emphasizes the significance of research into approximately optimal solution algorithms in such a case, cf. [12].

Application to selection of the regularization constant. The bound provided by Corollary 2.8 and the equivalence given by Theorem 2.6 can be used as a justification of the heuristic that the optimal value of the regularization constant $\lambda$ is the one which minimizes the number of margin errors (cf. [14]). This is especially appealing in the case of regression with $\epsilon$-insensitive cost, where a margin error has the straightforward interpretation of a sample lying outside of the $\epsilon$-tube.

Application to modelling of additive noise. Let us suppose that data is iid drawn from a distribution of the form $y = f(x) + \epsilon_{\mathrm{noise}}$, where $\epsilon_{\mathrm{noise}}$ is a random noise independent of $x$, with mean 0. Theorem 2.6 implies the following heuristic for approximation of the noise distribution in the regression model $y = f(x) + \epsilon_{\mathrm{noise}}$:
$$\mathrm{Prob}\big[|\epsilon_{\mathrm{noise}}| > \epsilon\big] \approx \frac{\#\{i;\ \alpha^{(l)}_i \neq 0\}}{l}.$$
Here $(f^{(l)}, b^{(l)})$ is a minimizer of the regularized risk (1) with an $\epsilon$-insensitive cost function, i.e. one such that $c(x, y, \xi) > 0$ iff $|\xi| > \epsilon$.
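The noise-modelling heuristic above is easy to try numerically. The sketch below is ours, not the paper's: scikit-learn's SVR stands in for the $\epsilon$-insensitive kernel machine (only an approximation to the minimizer of (1), with an assumed $C$ and kernel), and the support-vector fraction is compared with the empirical tail probability of the injected noise:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
noise = rng.normal(0.0, 0.2, size=400)
y = np.sin(X[:, 0]) + noise

for eps in (0.1, 0.2, 0.4):
    frac_sv = len(SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X, y).support_) / len(X)
    tail = np.mean(np.abs(noise) > eps)   # empirical Prob[|noise| > eps]
    print(f"eps={eps:.1f}: #SV/l = {frac_sv:.2f}, Prob[|noise|>eps] = {tail:.2f}")
```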
Acknowledgement. The permission of the Chief Technology Officer, Telstra, to publish this paper is gratefully acknowledged.

References

[1] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337-404, 1950.
[2] P. Bartlett and J. Shawe-Taylor. Generalization performance of support vector machines and other pattern classifiers. In B. Scholkopf et al., eds., Advances in Kernel Methods, pages 43-54, MIT Press, 1998.
[3] C. Burges and D. J. Crisp. Uniqueness of the SVM solution. In S. Solla et al., eds., Advances in Neural Information Processing Systems 12, pages 144-152, MIT Press, 2000.
[4] D. Cox and F. O'Sullivan. Asymptotic analysis of penalized likelihood and related estimators. Ann. Statist., 18:1676-1695, 1990.
[5] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219-269, 1995.
[6] T. Jaakkola and D. Haussler. Probabilistic kernel regression models. In Proc. Seventh Workshop on AI and Statistics, San Francisco, 1999. Morgan Kaufmann.
[7] T. Joachims. Estimating the generalization performance of an SVM efficiently. In Proc. of the International Conference on Machine Learning, 2000. Morgan Kaufmann.
[8] G. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation of stochastic processes and smoothing by splines. Ann. Math. Statist., 41:495-502, 1970.
[9] A. Lunts and V. Brailovsky. Evaluation of attributes obtained in statistical decision rules. Engineering Cybernetics, 3:98-109, 1967.
[10] M. Opper and O. Winther. Gaussian process classification and SVM: Mean field results and leave-one-out estimator. In P. Bartlett et al., eds., Advances in Large Margin Classifiers, pages 301-316, MIT Press, 2000.
[11] A. Smola and B. Scholkopf. A tutorial on support vector regression. Statistics and Computing, 1998. In press.
[12] A. J. Smola and B. Scholkopf. Sparse greedy matrix approximation for machine learning. Typescript, March 2000.
[13] V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
[14] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[15] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models. Kluwer, 1998.
Text Classification using String Kernels

Huma Lodhi, John Shawe-Taylor, Nello Cristianini, Chris Watkins
Department of Computer Science
Royal Holloway, University of London
Egham, Surrey TW20 0EX, UK
{huma, john, nello, chrisw}@dcs.rhbnc.ac.uk

Abstract

We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text, though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how, despite this fact, the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the kernel with a standard word feature space kernel [6] is made, showing encouraging results.

1 Introduction

Standard learning systems (like neural networks or decision trees) operate on input data after they have been transformed into feature vectors $x_1, \ldots, x_l \in X$ from an $n$-dimensional space. There are cases, however, where the input data cannot be readily described by explicit feature vectors: for example biosequences, images, graphs and text documents. For such datasets, the construction of a feature extraction module can be as complex and expensive as solving the entire problem. An effective alternative to explicit feature extraction is provided by kernel methods. Kernel-based learning methods use an implicit mapping of the input data into a high dimensional feature space defined by a kernel function, i.e. a function returning the inner product between the images of two data points in the feature space. The learning then takes place in the feature space, provided the learning algorithm can be entirely rewritten so that the data points only appear inside dot products with other data points. Several linear algorithms can be formulated in this way, for clustering, classification and regression. The most typical example of kernel-based systems is the Support Vector Machine (SVM) [10, 3], which implements linear classification. One interesting property of kernel-based systems is that, once a valid kernel function has been selected, one can practically work in spaces of any dimensionality without paying any computational cost, since the feature mapping is never effectively performed. In fact, one does not even need to know what features are being used.

In this paper we examine the use of a kernel method based on string alignment for text categorization problems. A standard approach [5] to text categorisation makes use of the so-called bag of words (BOW) representation, mapping a document to a bag (i.e. a set that counts repeated elements), hence losing all the word order information and only retaining the frequency of the terms in the document. This is usually accompanied by the removal of non-informative words (stop words) and by the replacing of words by their stems, so losing inflection information. This simple technique has recently been used very successfully in supervised learning tasks with Support Vector Machines (SVM) [5].
In this paper we propose a radically different approach that considers documents simply as symbol sequences, and makes use of specific kernels. The approach is entirely subsymbolic, in the sense that it treats the document just as one long sequence, and yet it is capable of capturing topic information. We build on recent advances [11, 4] that demonstrated how to build kernels over general structures like sequences. The most remarkable property of such methods is that they map documents to vectors without explicitly representing them, by means of sequence alignment techniques. A dynamic programming technique makes the computation of the kernels very efficient (linear in the documents' length). It is surprising that such a radical strategy, extracting only alignment information, delivers positive results in topic classification, comparable with the performance of problem-specific strategies: it seems that in some sense the semantics of the document can be at least partly captured by the presence of certain substrings of symbols.

Support Vector Machines [3] are linear classifiers in a kernel defined feature space. The kernel is a function which returns the dot product of the feature vectors $\phi(x)$ and $\phi(x')$ of two inputs $x$ and $x'$: $K(x, x') = \phi(x)^\top \phi(x')$. Choosing very high dimensional feature spaces ensures that the required functionality can be obtained using linear classifiers. The computational difficulties of working in such feature spaces are avoided by using a dual representation of the linear functions in terms of the training set $S = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$:
$$f(x) = \sum_{i=1}^{m} \alpha_i\, y_i\, K(x, x_i) - b.$$
The danger of overfitting by resorting to such a high dimensional space is averted by maximising the margin, or a related soft version of this criterion, a strategy that has been shown to ensure good generalisation despite the high dimensionality [9, 8].

2 A Kernel for Text Sequences

In this section we describe a kernel between two text documents. The idea is to compare them by means of the substrings they contain: the more substrings in common, the more similar they are. An important point is that such substrings do not need to be contiguous, and the degree of contiguity of a substring in a document determines how much weight it will have in the comparison. For example, the substring 'c-a-r' is present both in the word 'card' and in the word 'custard', but with different weighting. For each such substring there is a dimension of the feature space, and the value of that coordinate depends on how frequently and how compactly the string is embedded in the text. In order to deal with non-contiguous substrings, it is necessary to introduce a decay factor $\lambda \in (0,1)$ that can be used to weight the presence of a certain feature in a text (see Definition 1 for more details).

Example. Consider the words cat, car, bat, bar. If we consider only k = 2, we obtain an 8-dimensional feature space, where the words are mapped as follows:

               c-a    c-t    a-t    b-a    b-t    c-r    a-r    b-r
  phi(cat)     λ^2    λ^3    λ^2    0      0      0      0      0
  phi(car)     λ^2    0      0      0      0      λ^3    λ^2    0
  phi(bat)     0      0      λ^2    λ^2    λ^3    0      0      0
  phi(bar)     0      0      0      λ^2    0      0      λ^2    λ^3

Hence, the unnormalized kernel between car and cat is $K(\text{car}, \text{cat}) = \lambda^4$, whereas the normalized version is obtained as follows: $K(\text{car}, \text{car}) = K(\text{cat}, \text{cat}) = 2\lambda^4 + \lambda^6$, and hence $\hat{K}(\text{car}, \text{cat}) = \lambda^4/(2\lambda^4 + \lambda^6) = 1/(2 + \lambda^2)$. Note that in general a document will contain more than one word, but the mapping for the whole document is into one feature space. Punctuation is ignored, but spaces are retained.
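As a sanity check on the worked example above, the explicit feature map (exponentially large in general, so only usable for toy strings) can be enumerated by brute force. The short Python sketch below is ours, not part of the paper:

```python
from itertools import combinations
from math import sqrt

def features(s, k, lam):
    """Explicit feature map: weight lam**(span) for every (possibly
    non-contiguous) length-k subsequence of s, keyed by the subsequence."""
    phi = {}
    for idx in combinations(range(len(s)), k):
        u = "".join(s[i] for i in idx)
        phi[u] = phi.get(u, 0.0) + lam ** (idx[-1] - idx[0] + 1)
    return phi

def kernel(s, t, k, lam):
    ps, pt = features(s, k, lam), features(t, k, lam)
    return sum(v * pt.get(u, 0.0) for u, v in ps.items())

def kernel_normalized(s, t, k, lam):
    return kernel(s, t, k, lam) / sqrt(kernel(s, s, k, lam) * kernel(t, t, k, lam))

lam = 0.5
print(kernel("car", "cat", 2, lam))              # lam**4 = 0.0625
print(kernel_normalized("car", "cat", 2, lam))   # 1/(2 + lam**2) ~ 0.4444
```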
However, for interesting substring sizes (e.g. k > 4) direct computation of all the relevant features would be impractical even for moderately sized texts, and hence explicit use of such a representation would be impossible. But it turns out that a kernel using such features can be defined and calculated in a very efficient way by using dynamic programming techniques. We derive the kernel by starting from the features and working out their inner product. In this case there is no need to prove that it satisfies Mercer's conditions (symmetry and positive semi-definiteness), since they follow automatically from its definition as an inner product. This kernel is based on work [11, 4] mostly motivated by bioinformatics applications. It maps strings to a feature vector indexed by all k-tuples of characters. A k-tuple will have a non-zero entry if it occurs as a subsequence anywhere (not necessarily contiguously) in the string. The weighting of the feature will be the sum, over the occurrences of the k-tuple, of a decaying factor of the length of the occurrence.

Definition 1 (String subsequence kernel). Let $\Sigma$ be a finite alphabet. A string is a finite sequence of characters from $\Sigma$, including the empty sequence. For strings $s, t$, we denote by $|s|$ the length of the string $s = s_1 \ldots s_{|s|}$, and by $st$ the string obtained by concatenating the strings $s$ and $t$. The string $s[i:j]$ is the substring $s_i \ldots s_j$ of $s$. We say that $u$ is a subsequence of $s$ if there exist indices $\mathbf{i} = (i_1, \ldots, i_{|u|})$, with $1 \le i_1 < \cdots < i_{|u|} \le |s|$, such that $u_j = s_{i_j}$ for $j = 1, \ldots, |u|$, or $u = s[\mathbf{i}]$ for short. The length $l(\mathbf{i})$ of the subsequence in $s$ is $i_{|u|} - i_1 + 1$. We denote by $\Sigma^n$ the set of all finite strings of length $n$, and by $\Sigma^*$ the set of all strings:
$$\Sigma^* = \bigcup_{n=0}^{\infty} \Sigma^n. \qquad (1)$$
We now define feature spaces $F_n = \mathbb{R}^{\Sigma^n}$. The feature mapping $\phi$ for a string $s$ is given by defining the $u$ coordinate $\phi_u(s)$ for each $u \in \Sigma^n$. We define
$$\phi_u(s) = \sum_{\mathbf{i} : u = s[\mathbf{i}]} \lambda^{l(\mathbf{i})}, \qquad (2)$$
for some $\lambda < 1$. These features measure the number of occurrences of subsequences in the string $s$, weighting them according to their lengths. Hence, the inner product of the feature vectors for two strings $s$ and $t$ gives a sum over all common subsequences, weighted according to their frequency of occurrence and lengths:
$$K_n(s, t) = \sum_{u \in \Sigma^n} \phi_u(s)\,\phi_u(t) = \sum_{u \in \Sigma^n} \sum_{\mathbf{i} : u = s[\mathbf{i}]} \lambda^{l(\mathbf{i})} \sum_{\mathbf{j} : u = t[\mathbf{j}]} \lambda^{l(\mathbf{j})} = \sum_{u \in \Sigma^n} \sum_{\mathbf{i} : u = s[\mathbf{i}]} \sum_{\mathbf{j} : u = t[\mathbf{j}]} \lambda^{l(\mathbf{i}) + l(\mathbf{j})}.$$
In order to derive an effective procedure for computing such a kernel, we introduce an additional function which will aid in defining a recursive computation. Let
$$K'_i(s, t) = \sum_{u \in \Sigma^i} \sum_{\mathbf{i} : u = s[\mathbf{i}]} \sum_{\mathbf{j} : u = t[\mathbf{j}]} \lambda^{|s| + |t| - i_1 - j_1 + 2}, \qquad i = 1, \ldots, n-1,$$
that is, counting the length to the end of the strings $s$ and $t$ instead of just $l(\mathbf{i})$ and $l(\mathbf{j})$. We can now define a recursive computation for $K'$ and hence compute $K_n$.

Definition 2 (Recursive computation of the subsequence kernel).
$$K'_0(s, t) = 1, \quad \text{for all } s, t,$$
$$K'_i(s, t) = 0, \quad \text{if } \min(|s|, |t|) < i,$$
$$K_i(s, t) = 0, \quad \text{if } \min(|s|, |t|) < i,$$
$$K'_i(sx, t) = \lambda K'_i(s, t) + \sum_{j : t_j = x} K'_{i-1}\big(s, t[1 : j-1]\big)\,\lambda^{|t| - j + 2}, \qquad i = 1, \ldots, n-1,$$
$$K_n(sx, t) = K_n(s, t) + \sum_{j : t_j = x} K'_{n-1}\big(s, t[1 : j-1]\big)\,\lambda^2.$$
The correctness of this recursion follows from observing how the length of the strings has increased, incurring a factor of $\lambda$ for each extra character, until the full length of $n$ characters has been attained.
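Definition 2 transcribes almost line-for-line into code. The following Python sketch is ours (it memoizes the naive recursion and is therefore only suitable for short strings; the speedups of the next section are not applied). It uses 0-based indices internally, so the weight $\lambda^{|t|-j+2}$ becomes `lam ** (len(b) - j + 1)` for a 0-based match position `j`:

```python
import math
from functools import lru_cache

def ssk(s, t, n, lam):
    """K_n(s, t) of Definition 2, computed by direct memoized recursion."""

    @lru_cache(maxsize=None)
    def kprime(i, a, b):                    # K'_i(a, b)
        if i == 0:
            return 1.0
        if min(len(a), len(b)) < i:
            return 0.0
        x = a[-1]
        total = lam * kprime(i, a[:-1], b)
        for j in range(len(b)):             # 0-based positions where t_j = x
            if b[j] == x:
                total += kprime(i - 1, a[:-1], b[:j]) * lam ** (len(b) - j + 1)
        return total

    @lru_cache(maxsize=None)
    def kn(a, b):                           # K_n(a, b)
        if min(len(a), len(b)) < n:
            return 0.0
        x = a[-1]
        total = kn(a[:-1], b)
        for j in range(len(b)):
            if b[j] == x:
                total += kprime(n - 1, a[:-1], b[:j]) * lam ** 2
        return total

    return kn(s, t)

def ssk_normalized(s, t, n, lam):
    return ssk(s, t, n, lam) / math.sqrt(ssk(s, s, n, lam) * ssk(t, t, n, lam))

print(ssk("car", "cat", 2, 0.5))             # 0.5**4 = 0.0625
print(ssk_normalized("car", "cat", 2, 0.5))  # 1/(2 + 0.25) ~ 0.4444
```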
If we wished to compute $K_n(s, t)$ for a range of values of $n$, we would simply perform the computation of $K'_i(s, t)$ up to one less than the largest $n$ required, and then apply the last recursion for each $K_n(s, t)$ that is needed, using the stored values of $K'_i(s, t)$. We can also create a kernel $K(s, t)$ that combines the different $K_n(s, t)$, giving different (positive) weightings to each $n$. Once we have created such a kernel it is natural to normalise, to remove any bias introduced by document length. We can produce this effect by normalising the feature vectors in the feature space. Hence, we create a new embedding $\hat{\phi}(s) = \phi(s)/\|\phi(s)\|$, which gives rise to the kernel
$$\hat{K}(s, t) = \hat{\phi}(s) \cdot \hat{\phi}(t) = \frac{\phi(s)}{\|\phi(s)\|} \cdot \frac{\phi(t)}{\|\phi(t)\|} = \frac{\phi(s) \cdot \phi(t)}{\|\phi(s)\|\,\|\phi(t)\|} = \frac{K(s, t)}{\sqrt{K(s, s)\,K(t, t)}}.$$
The normalised kernel introduced above was implemented using the recursive formulas described above. The next section gives some more details of the algorithmics, and this is followed by a section describing the results of applying the kernel in a Support Vector Machine for text classification.

3 Algorithmics

In this section we describe how special design techniques provide a significant speedup of the procedure, by both accelerating the kernel evaluations and reducing their number. We used a simple gradient based implementation of SVMs (see [3]) with a fixed threshold. In order to deal with large datasets, we used a form of chunking: beginning with a very small subset of the data and gradually building up the size of the training set, while ensuring that only points which failed to meet margin 1 on the current hypothesis were included in the next chunk. Since each evaluation of the kernel function requires non-negligible computational resources, we designed the system so as to only calculate those entries of the kernel matrix that are actually required by the training algorithm. This can significantly reduce the training time, since only a relatively small part of the kernel matrix is actually used by our implementation of the SVM.

Special care in the implementation of the kernel described in Definition 1 can significantly speed up its evaluation. As can be seen from the description of the recursion in Definition 2, its computation takes time proportional to $n|s||t|^2$, as the outermost recursion is over the sequence length, and for each length and each additional character in $s$ and $t$ a sum over the sequence $t$ must be evaluated. The complexity of the computation can be reduced to $O(n|s||t|)$ by first evaluating
$$K''_i(sx, t) = \sum_{j : t_j = x} K'_{i-1}\big(s, t[1 : j-1]\big)\,\lambda^{|t| - j + 2}$$
and observing that we can then evaluate $K'_i(s, t)$ with the $O(|s||t|)$ recursion
$$K'_i(sx, t) = \lambda K'_i(s, t) + K''_i(sx, t).$$
Now observe that $K''_i(sx, tu) = \lambda^{|u|} K''_i(sx, t)$, provided $x$ does not occur in $u$, while
$$K''_i(sx, tx) = \lambda\big(K''_i(sx, t) + \lambda K'_{i-1}(s, t)\big).$$
These observations together give an $O(|s||t|)$ recursion for computing $K''_i(s, t)$. Hence, we can evaluate the overall kernel in $O(n|s||t|)$ time.
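A direct rendering of these $O(n|s||t|)$ recursions (our sketch, not the authors' implementation): for each $i$, the quantity $K''_i$ is accumulated in a single scalar while sweeping over $t$, so that each table entry costs $O(1)$:

```python
def ssk_fast(s, t, n, lam):
    """K_n(s, t) in O(n |s| |t|) time using the K'' accumulation trick.
    Kp[i][p][q] holds K'_i(s[:p], t[:q]); K'_0 = 1 everywhere, and the
    out-of-range entries are left at 0, matching Definition 2's base cases."""
    S, T = len(s), len(t)
    Kp = [[[0.0] * (T + 1) for _ in range(S + 1)] for _ in range(n)]
    for p in range(S + 1):
        for q in range(T + 1):
            Kp[0][p][q] = 1.0
    for i in range(1, n):
        for p in range(1, S + 1):
            kpp = 0.0                       # K''_i(s[:p], t[:q]), swept over q
            for q in range(1, T + 1):
                kpp *= lam                  # K''_i(sx, tu) = lam^|u| K''_i(sx, t)
                if t[q - 1] == s[p - 1]:    # K''_i(sx, tx) adds lam^2 K'_{i-1}
                    kpp += lam * lam * Kp[i - 1][p - 1][q - 1]
                Kp[i][p][q] = lam * Kp[i][p - 1][q] + kpp
    # Final step: K_n(sx, t) = K_n(s, t) + lam^2 * (sum over matches).
    result = 0.0
    for p in range(1, S + 1):
        for q in range(1, T + 1):
            if t[q - 1] == s[p - 1]:
                result += lam * lam * Kp[n - 1][p - 1][q - 1]
    return result

assert abs(ssk_fast("car", "cat", 2, 0.5) - 0.5 ** 4) < 1e-12
```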
4 Experimental Results

Our aim was to test the efficacy of this new approach to feature extraction for text categorization, and to compare it with a state-of-the-art system such as the one used in [6]. Especially, we wanted to see how the performance is affected by the tunable parameter k (we have used the values 3, 5 and 6). As expected, using longer substrings in the comparison of two documents gives an improved performance.

We used the same dataset as that reported in [6], namely Reuters-21578 [7], as well as the Medline document collection of 1033 document abstracts from the National Library of Medicine. We performed all of our experiments on a subset of four categories: 'earn', 'acq', 'crude', and 'corn'. A confusion matrix can be used to summarize the performance of the classifier (numbers of true/false positives/negatives):

                 predicted P    predicted N
  actual P           TP             FN
  actual N           FP             TN

We define precision $P = \frac{TP}{TP + FP}$ and recall $R = \frac{TP}{TP + FN}$. We then define the quantity $F1 = \frac{2PR}{P + R}$ to measure the performance of the classifier.

We applied the two different kernels to a subset of Reuters of 380 training examples and 90 test examples. The only difference between the experiments was the kernel used. The splits of the data had the following numbers of positive examples in the training (test) set out of 370 (90): earn 152 (40); acq 114 (25); crude 76 (15); corn 38 (10). The preliminary experiment used different values of k, in order to identify the optimal one, on the category 'earn'. The following experiments all used a sequence length of 5 for the string subsequences kernel. We set $\lambda = 0.5$. The results obtained are shown below, where the precision, recall and F1 values are given for both kernels.

           F1      Precision   Recall   # SV
  3 S-K    0.925   0.981       0.878    138
  5 S-K    0.936   0.992       0.888    237
  6 S-K    0.936   0.992       0.888    268
  W-K      0.925   0.989       0.867    250

Table 1: F1, precision, recall and number of support vectors for the top Reuters category 'earn', averaged over 10 splits (n S-K = string kernel of length n, W-K = word kernel).

              5 S-K kernel                     W-K kernel
          F1     Prec.   Recall  #SV      F1     Prec.   Recall  #SV
  earn    0.936  0.992   0.888   237      0.925  0.989   0.867   250
  acq     0.867  0.914   0.828   269      0.802  0.843   0.768   276
  crude   0.936  0.979   0.90    262      0.904  0.91    0.907   262
  corn    0.779  0.886   0.7     231      0.762  0.833   0.71    264

Table 2: Precision, recall and F1 for 4 categories and the two kernels: word kernel (W-K) and subsequences kernel (5 S-K).

The results are better in one category, and similar or slightly better for the other categories. They certainly indicate that the new kernel can outperform the more classical approach, but equally the performance is not reliably better. The last table shows the results obtained for two categories in the Medline data, numbers 20 and 23.

  Query   Train/Test   3 S-K (#SV)   5 S-K (#SV)   6 S-K (#SV)   W-K (#SV)
  #20     24/15        0.20 (101)    0.637 (295)   0.75 (386)    0.235 (598)
  #23     22/15        0.534 (107)   0.409 (302)   0.75 (382)    0.636 (618)

Table 3: F1 and number of support vectors for the top two Medline queries.
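For reference, the three metrics in the tables above follow mechanically from the confusion-matrix counts; a few lines of Python (the counts in the usage line are made up, purely illustrative, and not taken from the paper's experiments):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from confusion-matrix counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Hypothetical counts for one category.
p, r, f1 = precision_recall_f1(tp=36, fp=1, fn=4)
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")
```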
5 Conclusions

The paper has presented a novel kernel for text analysis and tested it on a categorization task; the kernel relies on evaluating an inner product in a very high dimensional feature space. For a given sequence length k (k = 5 was used in the experiments reported) the features are indexed by all strings of length k. Direct computation of all the relevant features would be impractical even for moderately sized texts. The paper has presented a dynamic programming style computation for computing the kernel directly from the input sequences, without explicitly calculating the feature vectors. Further refinements of the algorithm have resulted in a practical alternative to the more standard word-feature-based kernel used in previous SVM applications to text classification [6]. We have presented an experimental comparison of the word feature kernel with our subsequences kernel on a benchmark dataset, with encouraging results.

The results reported here are very preliminary and many questions remain to be resolved. First, more extensive experiments are required to gain a more reliable picture of the performance of the new kernel, including the effect of varying the subsequence length and the parameter $\lambda$. The evaluation of the new kernel is still relatively time consuming, and more research is needed to investigate ways of expediting this phase of the computation.

References

[1] M. Aizerman, E. Braverman, and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
[2] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152. ACM Press, 1992.
[3] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000. www.support-vector.net.
[4] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, University of California at Santa Cruz, Computer Science Department, July 1999.
[5] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. Technical Report 23, LS VIII, University of Dortmund, 1997.
[6] T. Joachims. Text categorization with support vector machines. In Proceedings of the European Conference on Machine Learning (ECML), 1998.
[7] David Lewis. Reuters-21578 collection. Technical report, 1987. Available at: http://www.research.att.com/~lewis/reuters21578.html.
[8] J. Shawe-Taylor and N. Cristianini. Margin distribution and soft margin. In Advances in Large Margin Classifiers, MIT Press, 2000.
[9] J. Shawe-Taylor, P. Bartlett, R. Williamson and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 1998.
[10] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[11] C. Watkins. Dynamic alignment kernels. Technical Report CSD-TR-98-11, Royal Holloway, University of London, Computer Science Department, January 1999.
PROGRAMMABLE ANALOG PULSE-FIRING NEURAL NETWORKS

Alan F. Murray, Dept. of Elec. Eng., University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JL, United Kingdom.
Alister Hamilton, Dept. of Elec. Eng., University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JL, United Kingdom.
Lionel Tarassenko, Dept. of Eng. Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, United Kingdom.

ABSTRACT

We describe pulse-stream firing integrated circuits that implement asynchronous analog neural networks. Synaptic weights are stored dynamically, and weighting uses time-division of the neural pulses from a signalling neuron to a receiving neuron. MOS transistors in their "ON" state act as variable resistors to control a capacitive discharge, and time-division is thus achieved by a small synapse circuit cell. The VLSI chip set design uses 2.5 micron CMOS technology.

INTRODUCTION

Neural network implementations fall into two broad classes - digital [1,2] and analog (e.g. [3,4]). The strengths of a digital approach include the ability to use well-proven design techniques, high noise immunity, and the ability to implement programmable networks. However, digital circuits are synchronous, while biological neural networks are asynchronous. Furthermore, digital multipliers occupy large areas of silicon. Analog networks offer asynchronous behaviour, smooth neural activation and (potentially) small circuit elements. On the debit side, however, noise immunity is low, arbitrary high precision is not possible, and no reliable "mainstream" analog nonvolatile memory technology exists. Many analog VLSI implementations are non-programmable, and therefore have fixed functionality. For instance, subthreshold MOS devices have been used to mimic the nonlinearities of neural behaviour, in implementing Hopfield style nets [3], associative memory [5], visual processing functions [6], and auditory processing [7]. Electron-beam programmable resistive interconnects have been used to represent synaptic weights between more conventional operational-amplifier neurons [8,4]. We describe programmable analog pulse-firing neural networks that use on-chip dynamic analog storage capacitors to store synaptic weights, currently refreshed from an external RAM via a digital-to-analog converter.
Neuron Circuit NEURON CIRCUIT Figure 1 shows a CMOS implementation of the pulse-firing neuron function in a system where excitatory and inhibitory pulses are accumulated on separate channels. The output stage of the neuron consists of a "ring oscillator" - a feedback circuit containing an odd number of logic inversions, with the loop broken by a NAND gate, controlled by a smoothly varying voltage representing the neuron's total activity, j=" -1 Xj = L j=O TjjV, Programmable Analog Pulse-Firing Neural Networks This activity is increased or decreased by the dumping or removal of charge packets from the "integrator" circuit. The arrival of an excitatory pulse dumps charge, while an inhibitory pulse removes it. Figure 2 shows a device level (SPICE) simulation of the neuron circuit. A strong excitatory input causes the neural potential to rise in steps and the neuron turns ON. Subsequent inhibitory pulses remove charge packets from the integrating capacitor at a higher rate, driving the neuron potential down and switching the neuron OFF. 5 Ol------J Neuro n Output 5 Neural Potential (V4) '0 > O---------J Inhibitory input 5 o Excitatory input o 9 Figure 2. SPICE Simulation of Neuron SYNAPSE CIRCUIT - USING CHOPPING CLOCKS In an earlier implementation, "chopping clocks" were introduced - synchronous to one another, but asynchronous to the neural firing. One bit of the (digitally stored) weight T .. indicates its sign, and each other bit of precision is represented by a chopping clock. The clocks are non-overlapping, the MSB clock is high for lh of the time, the next for % of the time, etc. These clocks are used to gate bursts of pulses such that a fraction T .. of the pulses are passed from the input of the synapse to either the excita?ory or inhibitory output channel. 673 674 Hamilton, Murray and Tarassenko CHOPPING CLOCK SYSTEM - PROBLEMS A custom VLSI synaptic array has been constructed [9] with the neural function realised in discrete SSI to allow flexibility in the choice of time constants. The technique has proven successful, but suffers from a number of problems:- Digital gating ("using chopping clocks") is clumsy - Excitation and Inhibition on separate lines - bulky - Synapse complicated and of large area - < 100 synapses per chip - < 10 neurons per chip In order to overcome these problems we have devised an alternative arithmetic technique that modulates individual pulse widths and uses analog dynamic weight storage. This results in a much smaller synapse. < w ? WxTij L xTij -----,I L Increment Activity Figure 3. Pulse Multiplication SYNAPSE CIRCUIT - PULSE MULTIPLICATION The principle of operation of the new synapse is illustrated in Figure 3. Each presynaptic pulse of width W is modulated by the synaptic weight T .. such that the resulting postsynaptic pulse width is lJ W.Tij This is achieved by using an analog voltage to modulate a capacitive discharge as illustrated in Figure 4. The presynaptic pulse enters a CMOS inverter whose positive supply voltage (V dd) is controlled by T ... The capacitor is nominally charged to Vdd, but begins to discharge at a gonstant rate when the input pulse arrives. When the voltage on the capacitor falls below the threshold of the following inverter, the synapse output goes high. At the end of the presynaptic pulse the capacitor recharges rapidly and the synapse output goes low, having output a pulse of length W.T ". The circuit is now lJ Programmable Analog Pulse-Firing Neural Networks ready for the next presynaptic pulse. 
This mechanism gives a linear relationship between the pulse-width multiplier and the inverter supply voltage, $V_{dd}$.

[Figure 4: $T_{ik}$ determines $V_{dd}$ for the inverter driven by the presynaptic pulse $V_k$; a reference current $I_{ref}$ sets the constant discharge rate.]

Figure 4. Improved Synapse Circuit

FULL SYNAPSE

Synaptic weight storage is achieved using dynamic analog storage capacitors refreshed from off-chip RAM via a digital-analog converter. A CMOS active-resistor inverter is used as a buffer to isolate the storage capacitor from the multiplier circuit as shown in the circuit diagram of a full synapse in Figure 5.

[Figure 5 labels: synaptic weight $T_{ik}$, presynaptic state $V_k$, bias voltage, $V_{dd}$.]

Figure 5. Full Synapse Circuit

A capacitor distributed over a column of synaptic outputs stores neural activity, $x_i$, as an analog voltage. The range over which the synapse voltage - pulse width multiplier relationship is linear is shown in Figure 6. This wide (approximately 2V) range may be used to implement inhibition and excitation in a single synapse, by "splitting" the range such that the lower volt (1-2V) represents inhibition, and the upper volt (2-3V) excitation. Each presynaptic pulse removes a packet of charge from the activity capacitor while each postsynaptic pulse adds charge at twice the rate. In this way, a synaptic weight voltage of 2V, giving a pulse length multiplier of 1/2, gives no net change in neuron activity $x_i$. The synaptic weight voltage range 1-2V therefore gives a net reduction in neuron activity and is used to represent inhibition; the range 2-3V gives a net increase in neuron activity and is used to represent excitation.

[Figure 6: pulse-width multiplier against synapse voltage $T_{ij}$ (V), linear over roughly 1-3V.]

Figure 6. Multiplier Linearity

The resulting synapse circuit implements excitation and inhibition in 11 transistors per synapse. It is estimated that this technique will yield more than 100 fully programmable neurons per chip.

FURTHER WORK

There is still much work to be done to refine the circuit of Figure 5 to optimise (for instance) the mark-space ratio of the pulse firing and the effect of pulse overlap, and to minimise the power consumption. This will involve the creation of a custom pulse-stream simulator, implemented directly as code, to allow these parameters to be studied in detail in a way that probing an actual chip does not allow. Finally, as Hebbian (and modified Hebbian - for instance [10]) learning schemes only require a synapse to "know" the presynaptic and postsynaptic states, we are able to implement learning on-chip at little cost, as the chip topology makes both of these signals available to the synapse locally. This work introduces as many exciting possibilities for truly autonomous systems as it does potential problems!

Acknowledgements

The authors acknowledge the support of the Science and Engineering Research Council (UK) in the execution of this work.

References

1. A. F. Murray, A. V. W. Smith, and Z. F. Butler, "Bit-Serial Neural Networks," Neural Information Processing Systems (Proc. 1987 NIPS Conference), p. 573, 1987.
2. S. C. J. Garth, "A Chipset for High Speed Simulation of Neural Network Systems," IEEE Conference on Neural Networks, San Diego, vol. 3, pp. 443-452, 1987.
3. M. Sivilotti, M. R. Emerling, and C. A. Mead, "VLSI Architectures for Implementation of Neural Networks," Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 408-413, 1986.
4. H. P. Graf, L. D. Jackel, R. E. Howard, B. Straughn, J. S. Denker, W. Hubbard, D. M. Tennant, and D. Schwartz, "VLSI Implementation of a Neural Network Memory with Several Hundreds of Neurons," Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp.
182-187, 1986.
5. M. Sivilotti, M. R. Emerling, and C. A. Mead, "A Novel Associative Memory Implemented Using Collective Computation," Chapel Hill Conf. on VLSI, pp. 329-342, 1985.
6. M. A. Sivilotti, M. A. Mahowald, and C. A. Mead, "Real-Time Visual Computations Using Analog CMOS Processing Arrays," Stanford VLSI Conference, pp. 295-312, 1987.
7. C. A. Mead, in Analog VLSI and Neural Systems, Addison-Wesley, 1988.
8. W. Hubbard, D. Schwartz, J. S. Denker, H. P. Graf, R. E. Howard, L. D. Jackel, B. Straughn, and D. M. Tennant, "Electronic Neural Networks," Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 227-234, 1986.
9. A. F. Murray, A. V. W. Smith, and L. Tarassenko, "Fully-Programmable Analogue VLSI Devices for the Implementation of Neural Networks," Int. Workshop on VLSI for Artificial Intelligence, 1988.
10. S. Grossberg, "Some Physiological and Biochemical Consequences of Psychological Postulates," Proc. Natl. Acad. Sci. USA, vol. 60, pp. 758-765, 1968.
From Margin To Sparsity

Thore Graepel, Ralf Herbrich
Computer Science Department
Technical University of Berlin
Berlin, Germany
{guru, ralfh}@cs.tu-berlin.de

Robert C. Williamson
Department of Engineering
Australian National University
Canberra, Australia
[email protected]

Abstract

We present an improvement of Novikoff's perceptron convergence theorem. Reinterpreting this mistake bound as a margin dependent sparsity guarantee allows us to give a PAC-style generalisation error bound for the classifier learned by the perceptron learning algorithm. The bound value crucially depends on the margin a support vector machine would achieve on the same data set using the same kernel. Ironically, the bound yields better guarantees than are currently available for the support vector solution itself.

1 Introduction

In the last few years there has been a large controversy about the significance of the attained margin, i.e. the smallest real valued output of a classifier before thresholding, as an indicator of generalisation performance. Results in the VC, PAC and luckiness frameworks seem to indicate that a large margin is a prerequisite for small generalisation error bounds (see [14, 12]). These results caused many researchers to focus on large margin methods such as the well known support vector machine (SVM). On the other hand, the notion of sparsity is deemed important for generalisation as can be seen from the popularity of Occam's razor like arguments as well as compression considerations (see [8]). In this paper we reconcile the two notions by reinterpreting an improved version of Novikoff's well known perceptron convergence theorem as a sparsity guarantee in dual space: the existence of large margin classifiers implies the existence of sparse consistent classifiers in dual space. Even better, this solution is easily found by the perceptron algorithm. By combining the perceptron mistake bound with a compression bound that originated from the work of Littlestone and Warmuth [8] we are able to provide a PAC like generalisation error bound for the classifier found by the perceptron algorithm whose size is determined by the magnitude of the maximally achievable margin on the dataset. The paper is structured as follows: after introducing the perceptron in dual variables in Section 2 we improve on Novikoff's perceptron convergence bound in Section 3. Our main result is presented in the subsequent section and its consequences for the theoretical foundation of SVMs are discussed in Section 5.

2 (Dual) Kernel Perceptrons

We consider learning given m objects $X = \{x_1, \ldots, x_m\} \in \mathcal{X}^m$ and a set $Y = \{y_1, \ldots, y_m\} \in \mathcal{Y}^m$ drawn iid from a fixed distribution $P_{XY} = P_Z$ over the space $\mathcal{X} \times \{-1, +1\} = \mathcal{Z}$ of input-output pairs. Our hypotheses are linear classifiers $x \mapsto \operatorname{sign}(\langle w, \phi(x) \rangle)$ in some fixed feature space $\mathcal{K} \subseteq \ell_2^n$, where we assume that a mapping $\phi : \mathcal{X} \to \mathcal{K}$ is chosen a priori¹. Given the features $\phi_i : \mathcal{X} \to \mathbb{R}$ the classical (primal) perceptron algorithm aims at finding a weight vector $w \in \mathcal{K}$ consistent with the training data. Recently, Vapnik [14] and others - in their work on SVMs - have rediscovered that it may be advantageous to learn in the dual representation (see [1]), i.e. expanding the weight vector in terms of the training data

$$w_\alpha = \sum_{i=1}^{m} \alpha_i \phi(x_i) = \sum_{i=1}^{m} \alpha_i x_i \,, \qquad (1)$$

and learn the m expansion coefficients $\alpha \in \mathbb{R}^m$ rather than the components of $w \in \mathcal{K}$.
This is particularly useful if the dimensionality $n = \dim(\mathcal{K})$ of the feature space $\mathcal{K}$ is much greater (or possibly infinite) than the number m of training points. This dual representation can be used for a rather wide class of learning algorithms (see [15]) - in particular if all we need for learning is the real valued output $\langle w, x_i \rangle_\mathcal{K}$ of the classifier at the m training points $x_1, \ldots, x_m$. Thus it suffices to choose a symmetric function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ called kernel and to ensure that there exists a mapping $\phi_k : \mathcal{X} \to \mathcal{K}$ such that

$$\forall x, x' \in \mathcal{X} : \quad k(x, x') = \langle \phi_k(x), \phi_k(x') \rangle_\mathcal{K} \,. \qquad (2)$$

A sufficient condition is given by Mercer's theorem.

Theorem 1 (Mercer Kernel [9, 7]). Any symmetric function $k \in L_\infty(\mathcal{X} \times \mathcal{X})$ that is positive semidefinite, i.e.

$$\forall f \in L_2(\mathcal{X}) : \quad \int_\mathcal{X} \int_\mathcal{X} k(x, x') f(x) f(x') \, dx \, dx' \geq 0 \,,$$

is called a Mercer kernel and has the following property: if $\psi_i \in L_2(\mathcal{X})$ solve the eigenvalue problem $\int_\mathcal{X} k(x, x') \psi_i(x') \, dx' = \lambda_i \psi_i(x)$ with $\int_\mathcal{X} \psi_i^2(x) \, dx = 1$ and $\forall i \neq j : \int_\mathcal{X} \psi_i(x) \psi_j(x) \, dx = 0$, then k can be expanded in a uniformly convergent series, i.e.

$$k(x, x') = \sum_{i=1}^{\infty} \lambda_i \psi_i(x) \psi_i(x') \,.$$

In order to see that a Mercer kernel fulfils equation (2) consider the mapping

$$\phi_k(x) = \left( \sqrt{\lambda_1}\, \psi_1(x), \sqrt{\lambda_2}\, \psi_2(x), \ldots \right) \qquad (3)$$

whose existence is ensured by the third property. Finally, the perceptron learning algorithm we are going to consider is described in the following definition.

Definition 1 (Perceptron Learning). The perceptron learning procedure with the fixed learning rate $\eta \in \mathbb{R}^+$ is as follows:

1. Start in step zero, i.e. t = 0, with the vector $\alpha_t = \mathbf{0}$.
2. If there exists an index $i \in \{1, \ldots, m\}$ such that $y_i \langle w_{\alpha_t}, x_i \rangle_\mathcal{K} \leq 0$ then

$$(\alpha_{t+1})_i = (\alpha_t)_i + \eta y_i \quad \Leftrightarrow \quad w_{\alpha_{t+1}} = w_{\alpha_t} + \eta y_i x_i \,, \qquad (4)$$

and $t \leftarrow t + 1$.
3. Stop, if there is no $i \in \{1, \ldots, m\}$ such that $y_i \langle w_{\alpha_t}, x_i \rangle_\mathcal{K} \leq 0$.

¹Sometimes we abbreviate $\phi(x)$ by x, always assuming $\phi$ is fixed.
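For concreteness, the following Python fragment is a minimal sketch of Definition 1 in its dual form with $\eta = 1$; the toy data and the RBF kernel are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def kernel_perceptron(K, y, max_epochs=100):
    """K: (m, m) Gram matrix K[i, j] = k(x_i, x_j); y: labels in {-1, +1}.

    Returns the expansion coefficients alpha with w = sum_i alpha_i phi(x_i),
    or raises if no consistent classifier is found within max_epochs."""
    m = len(y)
    alpha = np.zeros(m)
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(m):
            output = alpha @ K[:, i]          # <w_alpha, phi(x_i)>
            if y[i] * output <= 0:            # margin violated: update (eq. 4)
                alpha[i] += y[i]
                mistakes += 1
        if mistakes == 0:
            return alpha                      # consistent classifier found
    raise RuntimeError("no consistent classifier within max_epochs")

# Toy usage: XOR-like data with an RBF kernel (illustrative choice).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 0.5)
alpha = kernel_perceptron(K, y)
print("non-zero coefficients:", np.count_nonzero(alpha))
```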
Other variants of this algorithm have been presented elsewhere (see [2, 3]).

3 An Improvement of Novikoff's Theorem

In the early 60's Novikoff [10] was able to give an upper bound on the number of mistakes made by the classical perceptron learning procedure. Two years later, this bound was generalised to feature spaces using Mercer kernels by Aizerman et al. [1]. The quantity determining the upper bound is the maximally achievable unnormalised margin $\max_{\alpha \in \mathbb{R}^m} \gamma_Z(\alpha)$ normalised by the total extent R(X) of the data in feature space, i.e. $R(X) = \max_{x_i \in X} \|x_i\|_\mathcal{K}$.

Definition 2 (Unnormalised Margin). Given a training set Z = (X, Y) and a vector $\alpha \in \mathbb{R}^m$ the unnormalised margin $\gamma_Z(\alpha)$ is given by

$$\gamma_Z(\alpha) = \min_{(x_i, y_i) \in Z} \frac{y_i \langle w_\alpha, x_i \rangle_\mathcal{K}}{\|w_\alpha\|_\mathcal{K}} \,.$$

Theorem 2 (Novikoff's Perceptron Convergence Theorem [10, 1]). Let Z = (X, Y) be a training set of size m. Suppose that there exists a vector $\alpha^* \in \mathbb{R}^m$ such that $\gamma_Z(\alpha^*) > 0$. Then the number of mistakes made by the perceptron algorithm in Definition 1 on Z is at most

$$\left( \frac{R(X)}{\gamma_Z(\alpha^*)} \right)^2 \,.$$

Surprisingly, this bound is highly influenced by the data point $x_i \in X$ with the largest norm $\|x_i\|_\mathcal{K}$, albeit rescaling of a data point would not change its classification. Let us consider rescaling of the training set X before applying the perceptron algorithm. Then for the normalised training set we would have $R(X_{\text{norm}}) = 1$ and $\gamma_Z(\alpha)$ would change into the normalised margin $\Gamma_Z(\alpha)$ first advocated in [6].

Definition 3 (Normalised Margin). Given a training set Z = (X, Y) and a vector $\alpha \in \mathbb{R}^m$ the normalised margin $\Gamma_Z(\alpha)$ is given by

$$\Gamma_Z(\alpha) = \min_{(x_i, y_i) \in Z} \frac{y_i \langle w_\alpha, x_i \rangle_\mathcal{K}}{\|w_\alpha\|_\mathcal{K} \, \|x_i\|_\mathcal{K}} \,.$$

By definition, for all $x_i \in X$ we have $R(X) \geq \|x_i\|_\mathcal{K}$. Hence for any $\alpha \in \mathbb{R}^m$ and all $(x_i, y_i) \in Z$ such that $y_i \langle w_\alpha, x_i \rangle_\mathcal{K} > 0$,

$$R(X) \, \frac{\|w_\alpha\|_\mathcal{K}}{y_i \langle w_\alpha, x_i \rangle_\mathcal{K}} \;\geq\; \|x_i\|_\mathcal{K} \, \frac{\|w_\alpha\|_\mathcal{K}}{y_i \langle w_\alpha, x_i \rangle_\mathcal{K}} \,,$$

which immediately implies, for all $Z = (X, Y) \in \mathcal{Z}^m$ such that $\gamma_Z(\alpha) > 0$,

$$\frac{R(X)}{\gamma_Z(\alpha)} \;\geq\; \frac{1}{\Gamma_Z(\alpha)} \,. \qquad (5)$$

Thus when normalising the data in feature space, i.e.

$$k_{\text{norm}}(x, x') = \frac{k(x, x')}{\sqrt{k(x, x) \cdot k(x', x')}} \,,$$

the upper bound on the number of steps until convergence of the classical perceptron learning procedure of Rosenblatt [11] is provably decreasing and is given by the squared r.h.s. of (5).

Considering the form of the update rule (4) we observe that this result not only bounds the number of mistakes made during learning but also the number $\|\alpha\|_0$ of non-zero coefficients in the $\alpha$ vector. To be precise, for $\eta = 1$ it bounds the $\ell_1$ norm $\|\alpha\|_1$ of the coefficient vector $\alpha$ which, in turn, bounds the zero norm $\|\alpha\|_0$ from above for all vectors with integer components. Theorem 2 thus establishes a relation between the existence of a large margin classifier $w^*$ and the sparseness of any solution found by the perceptron algorithm.

4 Main Result

In order to exploit the guaranteed sparseness of the solution of a kernel perceptron we make use of the following lemma to be found in [8, 4].

Lemma 1 (Compression Lemma). Fix $d \in \{1, \ldots, m\}$. For any measure $P_Z$, the probability that m examples Z drawn iid according to $P_Z$ will yield a classifier $\alpha(Z)$ learned by the perceptron algorithm with $\|\alpha(Z)\|_0 = d$ whose generalisation error $P_{XY}[Y \langle w_{\alpha(Z)}, \phi(X) \rangle_\mathcal{K} \leq 0]$ is greater than $\varepsilon$ is at most

$$\binom{m}{d} (1 - \varepsilon)^{m - d} \,. \qquad (6)$$

Proof. Since we restrict the solution $\alpha(Z)$ with generalisation error greater than $\varepsilon$ only to use d points $Z_d \subseteq Z$ but still to be consistent with the remaining set $Z \setminus Z_d$, this probability is at most $(1 - \varepsilon)^{m - d}$ for a fixed subset $Z_d$. The result follows by the union bound over all $\binom{m}{d}$ subsets $Z_d$. Intuitively, the consistency on the $m - d$ unused training points witnesses the small generalisation error with high probability. □

If we set (6) to $\frac{\delta}{m}$ and solve for $\varepsilon$ we have that with probability at most $\frac{\delta}{m}$ over the random draw of the training set Z the perceptron learning algorithm finds a vector $\alpha$ such that $\|\alpha\|_0 = d$ and whose generalisation error is greater than

$$\varepsilon(m, d) = \frac{1}{m - d} \left( \ln \binom{m}{d} + \ln(m) + \ln\left(\frac{1}{\delta}\right) \right) \,.$$

Thus by the union bound, if the perceptron algorithm converges, the probability that the generalisation error of its solution is greater than $\varepsilon(m, \|\alpha\|_0)$ is at most $\delta$. We have shown the following sparsity bound, also to be found in [4].

Theorem 3 (Generalisation Error Bound for Perceptrons). For any measure $P_Z$, with probability at least $1 - \delta$ over the random draw of the training set Z of size m, if the perceptron learning algorithm converges to the vector $\alpha$ of coefficients then its generalisation error $P_{XY}[Y \langle w_{\alpha(Z)}, \phi(X) \rangle_\mathcal{K} \leq 0]$ is less than

$$\frac{1}{m - \|\alpha\|_0} \left( \ln \binom{m}{\|\alpha\|_0} + \ln(m) + \ln\left(\frac{1}{\delta}\right) \right) \,. \qquad (7)$$

This theorem in itself constitutes a powerful result and can easily be adapted to hold for a large class of learning algorithms including SVMs [4]. This bound often outperforms margin bounds for practically relevant training set sizes, e.g. m < 100 000.
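The bound of Theorem 3 is straightforward to evaluate numerically. The following sketch computes $\varepsilon(m, d)$ for given m, d and $\delta$; the example values are those of the digit "0" row of Table 1 in the next section (m = 60000, $\|\alpha\|_0 = 740$, $\delta = 0.05$).

```python
from math import comb, log

# eps(m, d) = (ln C(m, d) + ln m + ln(1/delta)) / (m - d),
# where d = ||alpha||_0 is the number of non-zero coefficients.
def sparsity_bound(m, d, delta=0.05):
    return (log(comb(m, d)) + log(m) + log(1.0 / delta)) / (m - d)

print(f"bound: {100 * sparsity_bound(60000, 740):.1f}%")   # digit "0" of Table 1
```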
Combining Theorem 2 and Theorem 3 thus gives our main result.

Theorem 4 (Margin Bound). For any measure $P_Z$, with probability at least $1 - \delta$ over the random draw of the training set Z of size m, if there exists a vector $\alpha^*$ such that

$$\kappa^* = \left\lceil \left( \frac{R(X)}{\gamma_Z(\alpha^*)} \right)^2 \right\rceil \leq m \,,$$

then the generalisation error $P_{XY}[Y \langle w_{\alpha(Z)}, \phi(X) \rangle_\mathcal{K} \leq 0]$ of the classifier $\alpha$ found by the perceptron algorithm is less than

$$\frac{1}{m - \kappa^*} \left( \ln \binom{m}{\kappa^*} + \ln(m) + \ln\left(\frac{1}{\delta}\right) \right) \,. \qquad (8)$$

The most intriguing feature of this result is that the mere existence of a large margin classifier $\alpha^*$ is sufficient to guarantee a small generalisation error for the solution $\alpha$ of the perceptron, although its attained margin $\gamma_Z(\alpha)$ is likely to be much smaller than $\gamma_Z(\alpha^*)$. It has long been argued that the attained margin $\gamma_Z(\alpha)$ itself is the crucial quantity controlling the generalisation error of $\alpha$. In light of our new result, if there exists a consistent classifier $\alpha^*$ with large margin we know that there also exists at least one classifier $\alpha$ with high sparsity that can efficiently be found using the perceptron algorithm. In fact, whenever the SVM appears to be theoretically justified by a large observed margin, every solution found by the perceptron algorithm has a small guaranteed generalisation error - mostly even smaller than current bounds on the generalisation error of SVMs. Note that for a given training sample Z it is not unlikely that by permutation of Z there exist $O\!\left(\binom{m}{\kappa^*}\right)$ many different consistent sparse classifiers $\alpha$.

5 Impact on the Foundations of Support Vector Machines

Support vector machines owe their popularity mainly to their theoretical justification in learning theory. In particular, two arguments have been put forward to single out the solutions found by SVMs [14, p. 139]: SVMs (optimal hyperplanes) can generalise because

1. the expectation of the data compression is large.
2. the expectation of the margin is large.

The second reason is often justified by margin results (see [14, 12]) which bound the generalisation of a classifier $\alpha$ in terms of its own attained margin $\Gamma_Z(\alpha)$. If we require the slightly stronger condition that $\kappa^* \leq \frac{m}{n}$, $n \geq 4$, then our bound (8) for solutions of perceptron learning can be upper bounded by

$$\frac{n}{(n-1)\,m} \left( \kappa^* \ln\left(\frac{em}{\kappa^*}\right) + \ln\left(\frac{mn}{n-1}\right) + \ln\left(\frac{n}{\delta (n-1)}\right) \right) \,,$$

which has to be compared with the PAC margin bound (see [12, 5])

$$\frac{2}{m} \left( 64 \kappa^* \log_2\left(\frac{em}{\kappa^*}\right) \log_2(32m) + \log_2(2m) + \log_2\left(\frac{1}{\delta}\right) \right) \,.$$

Table 1: Results of kernel perceptrons and SVMs on NIST (taken from [2, Table 3]). The kernel used was $k(x, x') = (\langle x, x' \rangle_\mathcal{X} + 1)^4$ and m = 60000. For both algorithms we give the measured generalisation error (in %), the attained sparsity $\|\alpha\|_0$, and the bound value (in %, $\delta = 0.05$) of (7).

                 perceptron                          SVM
  digit   error  ||a||_0  mistakes  bound    error  ||a||_0  bound
    0      0.2     740      844      6.7      0.2    1379    11.2
    1      0.2     643      843      6.0      0.1     989     8.6
    2      0.4    1168     1345      9.8      0.4    1958    14.9
    3      0.4    1512     1811     12.0      0.4    1900    14.5
    4      0.4    1078     1222      9.2      0.4    1224    10.2
    5      0.4    1277     1497     10.5      0.5    2024    15.3
    6      0.4     823      960      7.4      0.3    1527    12.2
    7      0.5    1103     1323      9.4      0.4    2064    15.5
    8      0.6    1856     2326     14.3      0.5    2332    17.1
    9      0.7    1920     2367     14.6      0.6    2765    19.6

Despite the fact that the former result also holds true for the margin $\Gamma_Z(\alpha^*)$ (which could loosely be upper bounded by (5)):

- the PAC margin bound's decay (as a function of m) is slower by a $\log_2(32m)$ factor;
- for any m and almost any $\delta$ the margin bound given in Theorem 4 guarantees a smaller generalisation error;
- for example, using the empirical value $\kappa^* \approx 600$ (see [14, p. 153]) in the NIST handwritten digit recognition task and inserting this value into the PAC margin bound, it would need the astronomically large number of m > 410 743 386 to obtain a bound value of 0.112 as obtained by (7) for the digit "0" (see Table 1).
With regard to the first reason, it has been confirmed experimentally that SVMs find solutions which are sparse in the expansion coefficients $\alpha$. However, there cannot exist any distribution-free guarantee that the number of support vectors will in fact be small². In contrast, Theorem 2 gives an explicit bound on the sparsity in terms of the achievable margin $\gamma_Z(\alpha^*)$. Furthermore, experimental results on the NIST datasets show that the sparsity of the solution found by the perceptron algorithm is consistently (and often by a factor of two) greater than that of the SVM solution (see [2, Table 3] and Table 1).

6 Conclusion

We have shown that the generalisation error of a very simple and efficient learning algorithm for linear classifiers - the perceptron algorithm - can be bounded by a quantity involving the margin of the classifier the SVM would have found on the same training data using the same kernel. This result implies that the SVM solution is not at all singled out as being superior in terms of provable generalisation error. Also, the result indicates that sparsity of the solution may be a more fundamental property than the size of the attained margin (since a large value of the latter implies a large value of the former). Our analysis raises an interesting question: having chosen a good kernel, corresponding to a metric in which inter-class distances are great and intra-class distances are short, how far does it matter which consistent classifier we use? Experimental results seem to indicate that a vast variety of heuristics for finding consistent classifiers, e.g. kernel Fisher discriminant, linear programming machines, Bayes point machines, kernel PCA & linear SVM, sparse greedy matrix approximation, perform comparably (see http://www.kernel-machines.org/).

²Consider a distribution $P_{XY}$ on two parallel lines with support in the unit ball. Suppose that their mutual distance is $\sqrt{2}$. Then the number of support vectors equals the training set size whereas the perceptron algorithm never uses more than two points by Theorem 2. One could argue that it is the number of essential support vectors [13] that characterises the data compression of an SVM (which would also have been two in our example). Their determination, however, involves a combinatorial optimisation problem and can thus never be performed in practical applications.

Acknowledgements

This work was done while TG and RH were visiting the ANU Canberra. They would like to thank Peter Bartlett and Jon Baxter for many interesting discussions. Furthermore, we would like to thank the anonymous reviewer, Olivier Bousquet and Matthias Seeger for very useful remarks on the paper.

References

[1] M. Aizerman, E. Braverman, and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
[2] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 1999.
[3] T. Friess, N. Cristianini, and C. Campbell. The Kernel-Adatron: A fast and simple learning procedure for Support Vector Machines. In Proceedings of the 15th International Conference on Machine Learning, pages 188-196, 1998.
[4] T. Graepel, R. Herbrich, and J. Shawe-Taylor.
Generalisation error bounds for sparse linear classifiers. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, pages 298-303, 2000. In press.
[5] R. Herbrich. Learning Linear Classifiers - Theory and Algorithms. PhD thesis, Technische Universität Berlin, 2000. Accepted for publication by MIT Press.
[6] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers: Why SVMs work. In Advances in Neural Information Processing Systems 13, 2001.
[7] H. König. Eigenvalue Distribution of Compact Operators. Birkhäuser, Basel, 1986.
[8] N. Littlestone and M. Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, 1986.
[9] T. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London (A), 209:415-446, 1909.
[10] A. Novikoff. On convergence proofs for perceptrons. In Report at the Symposium on Mathematical Theory of Automata, pages 24-26, Polytechnic Institute of Brooklyn, 1962.
[11] M. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington D.C., 1962.
[12] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[13] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[14] V. Vapnik. The Nature of Statistical Learning Theory. Springer, second edition, 1999.
[15] G. Wahba. Support Vector Machines, Reproducing Kernel Hilbert Spaces and the randomized GACV. Technical Report 984, Department of Statistics, University of Wisconsin, Madison, 1997.
Factored Semi-Tied Covariance Matrices

M.J.F. Gales
Cambridge University Engineering Department
Trumpington Street, Cambridge, CB2 1PZ, United Kingdom
[email protected]

Abstract

A new form of covariance modelling for Gaussian mixture models and hidden Markov models is presented. This is an extension to an efficient form of covariance modelling used in speech recognition, semi-tied covariance matrices. In the standard form of semi-tied covariance matrices the covariance matrix is decomposed into a highly shared decorrelating transform and a component-specific diagonal covariance matrix. The use of a factored decorrelating transform is presented in this paper. This factoring effectively increases the number of possible transforms without increasing the number of free parameters. Maximum likelihood estimation schemes for all the model parameters are presented including the component/transform assignment, transform and component parameters. This new model form is evaluated on a large vocabulary speech recognition task. It is shown that using this factored form of covariance modelling reduces the word error rate.

1 Introduction

A standard problem in machine learning is how to efficiently model correlations in multidimensional data. Solutions should be efficient both in terms of number of model parameters and cost of the likelihood calculation. For speech recognition this is particularly important due to the large number of Gaussian components used, typically in the tens of thousands, and the relatively large dimensionality of the data, typically 30-60. The following generative model has been used in speech recognition¹

$$x(\tau) = w \qquad (1)$$

$$o(\tau) = F \begin{bmatrix} x(\tau) \\ v \end{bmatrix} \qquad (2)$$

where $x(\tau)$ is the underlying speech signal, F is the observation transformation matrix, w is generated by a hidden Markov model (HMM) with a diagonal covariance matrix Gaussian mixture model (GMM) to model each state², and v is usually assumed to be generated by a GMM, which is common to all HMMs. This differs from the static linear Gaussian models presented in [7] in two important ways. First w is generated by either an HMM or GMM, rather than a simple Gaussian distribution. The second difference is that the "noise" is now restricted to the null space of the signal $x(\tau)$. This type of system can be considered to have two streams. The first stream, the $n_1$ dimensions associated with $x(\tau)$, is the set of discriminating, useful, dimensions. The second stream, the $n_2$ dimensions associated with v, is the set of non-discriminating, nuisance, dimensions. Linear discriminant analysis (LDA) and heteroscedastic LDA (HLDA) [5] are both based on this form of generative model. When the dimensionality of the nuisance dimensions is reduced to zero this generative model becomes equivalent to a semi-tied covariance matrix system [3] with a single, global, semi-tied class. This generative model has a clear advantage during recognition compared to the standard linear Gaussian models [2] in the reduction in the computational cost of the likelihood calculation. The likelihood for component m may be computed as³

$$p\!\left( o(\tau); \mu^{(m)}, \Sigma_{\text{diag}}^{(m)}, F \right) = \frac{l(\tau)}{|\det(F)|} \, \mathcal{N}\!\left( (F^{-1})_{[1]}\, o(\tau); \mu^{(m)}, \Sigma_{\text{diag}}^{(m)} \right) \qquad (3)$$

where $\mu^{(m)}$ is the $n_1$-dimensional mean and $\Sigma_{\text{diag}}^{(m)}$ the diagonal covariance matrix of Gaussian component m, and $(F^{-1})_{[1]}$ denotes the first $n_1$ rows of $F^{-1}$. $l(\tau)$ is the nuisance dimension likelihood, which is independent of the component being considered and only needs to be computed once for each time instance. The initial normalisation term is only required during recognition when multiple transforms are used. The dominant cost is a diagonal Gaussian computation for each component, $O(n_1)$ per component. In contrast a scheme such as factor analysis (a covariance modelling scheme from the linear Gaussian model in [7]) has a cost of $O(n_1^2)$ per component (assuming there are $n_1$ factors).

¹This describes the static version of the generative model. The more general version is described by replacing equation 1 by $x(\tau) = Cx(\tau - 1) + w$.
²Although it is not strictly necessary to use diagonal covariance matrices, these currently dominate applications in speech recognition. w could also be generated by a simple GMM.
³This paper uses the following convention: capital bold letters refer to matrices, e.g. A, bold letters refer to vectors, e.g. b, and scalars are not bold, e.g. c. When referring to elements of a matrix or vector, subscripts are used, e.g. $a_i$ is the i-th row of matrix A, $a_{ij}$ is the element of row i, column j of matrix A, and $b_i$ is element i of vector b. Diagonal matrices are indicated by $A_{\text{diag}}$. Where multiple streams are used this is indicated, for example, by $A_{[s]}$; this is an $n_s \times n$ matrix (n is the dimensionality of the feature vector and $n_s$ is the size of stream s). Where subsets of the diagonal matrices are specified the matrices are square, e.g. $A_{\text{diag}[s]}$ is an $n_s \times n_s$ square diagonal matrix. $A^\mathsf{T}$ is the transpose of the matrix and det(A) is the determinant of the matrix.
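As an illustration of why equation (3) is cheap, the following sketch evaluates the per-component log-likelihood with a single shared projection; the shapes and random example values are assumptions for illustration only, not part of the paper's recipe.

```python
import numpy as np

def semi_tied_loglik(o, F_inv_top, log_det_F, mu, var_diag, log_l_nuisance):
    """log p(o) for one component, following equation (3).

    o            : (n,) observation
    F_inv_top    : (n1, n) first n1 rows of F^{-1} (shared across components)
    mu, var_diag : (n1,) component mean and diagonal variances
    """
    x = F_inv_top @ o           # shared projection, computed once per frame
    diff = x - mu
    log_gauss = -0.5 * (np.sum(np.log(2 * np.pi * var_diag))
                        + np.sum(diff ** 2 / var_diag))
    return log_gauss + log_l_nuisance - log_det_F

rng = np.random.default_rng(0)
n, n1 = 6, 4
F = rng.standard_normal((n, n))
print(semi_tied_loglik(rng.standard_normal(n), np.linalg.inv(F)[:n1],
                       np.log(abs(np.linalg.det(F))), np.zeros(n1),
                       np.ones(n1), log_l_nuisance=0.0))
```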
The disadvantage of this form of generative model is that there is no simple expectation-maximisation (EM) [1] scheme for estimating the model parameters. However, a simple iterative scheme is available [3]. For some tasks, such as speech recognition where there are many different "sounds" to be recognised, it is unlikely that a single transform is sufficient to model the data well. To reflect this there has been some work on using multiple feature-spaces [3, 2]. The standard approach for using multiple transforms is to assign each component, m, to a particular transform, $F^{(r_m)}$. To simplify the description of the new scheme only modifications to the semi-tied covariance matrix scheme, where the nuisance dimension is zero, are considered. The generative model is modified to be $o(\tau) = F^{(r_m)} x(\tau)$, where $r_m$ is the transform class associated with the generating component, m, at time instance $\tau$. The assignment variable, $r_m$, may either be determined by an "expert", for example using phonetic context information, or it may be assigned in a maximum likelihood (ML) fashion [3]. Simply increasing the number of transforms increases the number of model parameters to be estimated, hence reducing the robustness of the estimates. There is a corresponding increase in the computational cost during recognition. In the limit there is a single transform per component, the standard full-covariance matrix case. The approach adopted in this paper is to factor the transform into multiple streams. Each component can then use a different transform for each stream. Hence instead of using an assignment variable an assignment vector is used. In order to maintain the efficient likelihood computation of equation 3, $F^{(r)-1}$, rather than $F^{(r)}$, must be factored into rows. This is a partitioning of the feature space into a set of observation streams. In common with other factoring schemes this dramatically increases the effective number of transforms from which each component may select without increasing the number of transform parameters. Though this paper only considers factoring semi-tied covariance matrices the extension to the "projection" schemes presented in [2] is straightforward. This paper describes how to estimate the set of transforms and determine which subspaces a particular component should use. The next section describes how to assign components to transforms and, given this assignment, how to estimate the appropriate transforms. Some initial experiments on a large vocabulary speech recognition task are presented in the following section.

2 Factored Semi-Tied Covariance Matrices

In order to factor semi-tied covariance matrices the inverse of the observation transformation for a component is broken into multiple streams. The feature space of each stream is then determined by selecting from an inventory of possible transforms. Consider the case where there are S streams. The effective full covariance matrix of component m, $\Sigma^{(m)}$, may be written as $\Sigma^{(m)} = F^{(z^{(m)})} \Sigma_{\text{diag}}^{(m)} F^{(z^{(m)})\mathsf{T}}$, where the form of $F^{(z^{(m)})}$ is restricted so that⁴

$$F^{(z^{(m)})-1} = \begin{bmatrix} A_{[1]}^{(z_1^{(m)})} \\ \vdots \\ A_{[S]}^{(z_S^{(m)})} \end{bmatrix} \qquad (4)$$

and $z^{(m)}$ is the S-dimensional assignment vector for component m. The complete set of model parameters, $\mathcal{M}$, consists of the standard model parameters - the component means, variances and weights - and, additionally, the set of transforms $\{A_{[s]}^{(1)}, \ldots, A_{[s]}^{(R_s)}\}$ for each stream s ($R_s$ is the number of transforms associated with stream s) and the assignment vector $z^{(m)}$ for each component. Note that the semi-tied covariance matrix scheme is the case when S = 1. The likelihood is efficiently estimated by storing transformed observations for each stream transform, i.e. $A_{[s]}^{(r)} o(\tau)$.

The model parameters are estimated using ML training on a labelled set of training data $O = \{o(1), \ldots, o(T)\}$. The likelihood of the training data may be written as

$$p(O|\mathcal{M}) = \sum_{\theta \in \Theta} \prod_\tau \left( P(q(\tau)|q(\tau - 1)) \sum_{m \in \Omega(q(\tau))} w^{(m)} \, p\!\left( o(\tau); \mu^{(m)}, \Sigma_{\text{diag}}^{(m)}, A^{(z^{(m)})} \right) \right) \qquad (5)$$

where $\Theta$ is the set of all valid state sequences according to the transcription for the data, $q(\tau)$ is the state at time $\tau$ of the current path, $\Omega(q(\tau))$ is the set of Gaussian components belonging to state $q(\tau)$, and $w^{(m)}$ is the prior of component m. Directly optimising equation 5 is a very large optimisation task, as there are typically millions of model parameters. Alternatively, as is common with standard HMM training, an EM-based approach is used. The posterior probability of a particular component, m, generating the observation at a given time instance is denoted as $\gamma_m(\tau)$. This may be simply found using the forward-backward algorithm [6] and the old set of model parameters $\mathcal{M}$. The new set of model parameters will be denoted as $\hat{\mathcal{M}}$. The component priors and HMM transition matrices are estimated in the standard fashion [6]. Directly optimising the auxiliary function for the model parameters is computationally expensive [3] and does not allow the embedding of the assignment process. Instead a simple iterative optimisation scheme is used as follows:

1. Estimate the within class covariance matrix for each Gaussian component in the system, $W^{(m)}$, using the values of $\gamma_m(\tau)$. Initialise the set of assignment vectors, $\{z\} = \{z^{(1)}, \ldots, z^{(M)}\}$, and the set of transforms for each stream, $\{A\} = \{A_{[1]}^{(1)}, \ldots, A_{[1]}^{(R_1)}, \ldots, A_{[S]}^{(1)}, \ldots, A_{[S]}^{(R_S)}\}$.

2. Using the current estimates of the transforms and assignment vectors obtain the ML estimate of the set of component specific diagonal covariance matrices, incorporating the appropriate parameter tying as required. This set of parameters will be denoted as $\{\Sigma\} = \{\Sigma_{\text{diag}}^{(1)}, \ldots, \Sigma_{\text{diag}}^{(M)}\}$.

⁴A similar factorisation has also been proposed in [4].
Though this paper only considers factoring semi-tied covariance matrices the extension to the "projection" schemes presented in [2] is straightforward. This paper describes how to estimate the set of transforms and determine which subspaces a particular component should use. The next section describes how to assign components to transforms and, given this assignment, how to estimate the appropriate transforms . Some initial experiments on a large vocabulary speech recognition task are presented in the following section. 2 Factored Semi-Tied Covariance Matrices In order to factor semi-tied covariance matrices the inverse of the observation transformation for a component is broken into multiple streams. The feature space of each stream is then determined by selecting from an inventory of possible transforms. Consider the case where there are S streams. The effective full covariance matrix of component m, ~(m), may be written as ~(m) = F(z(~)) ~(':') F(Z(~))T where the form of F(z(~)) is restricted dlag , so that 4 (4) and z(m) is the S-dimensional assignment vector for component m. The complete set of model parameters, M, consists of the standard model parameters, the component means, '... , Af~')} for each variances, weights and, additionally, the set of transforms { Af~l stream s (Rs is the number of transforms associated with stream s) and the assignment vector z(m) for each component. Note that the semi-tied covariance matrix scheme is the case when S = 1. The likelihood is efficiently estimated by storing transformed observations for each stream transform, i.e. O(T). Af;! The model parameters are estimated using ML training on a labelled set of training data o = {0(1), . .. , o(T)}. The likelihood of the training data may be written as p(OIM) = LIT (P(q(T)lq(T -1)) L w(m)p(O(T);IL(m),~g;lg'A(Z(~)))) E> r mE(}(r) 4A similar factorisation has also been proposed in [4]. (5) where e is the set of all valid state sequences according to the transcription for the data, q(T) is the state at time T of the current path, O(T) is the set of Gaussian components belonging to state q(T), and w(m) is the prior of componentm. Directly optimising equation 5 is a very large optimisation task, as there are typically millions of model parameters. Alternatively, as is common with standard HMM training, an EM-based approach is used. The posterior probability of a particular component, m, generating the observation at a given time instance is denoted as 'Ym (T). This may be simply found using the forward backward algorithm [6] and the old set of model parameters M. The new set of model parameters will be denoted as M. The estimation of the component priors and HMM transition matrices are estimated in the standard fashion [6]. Directly optimising the auxiliary function for the model parameters is computationally expensive [3] and does not allow the embedding of the assignment process. Instead a simple iterative optimisation scheme is used as follows : 1. Estimate the within class covariance matrix for each Gaussian component in the system, W(m), using the values of 'Ym (T). Initialise the set of assignment vectors, {z} = {Z(1), ... , Z(M)} and the set of transforms for each stream {A} = A(Rt) A(1) A(RS)} {A (1) [1)"'" [1) , ... , [8)"'" [8) . 2. Using the current estimates of the transforms and assignment vectors obtain the ML estimate of the set of component specific diagonal covariance matrices incorporating the appropriate parameter tying as required. 
This set of parameters will be denoted as {t} = {~~~g"'" ~~~}. 3. Estimate the new set of transforms, { A }, using the current set of component covariance matrices { t } and assignment vectors { Z }. The new auxiliary function at this stage will be written as Q(M, M; {t } , { z} ). 4. Update the set of assignment variables for each component { Z }, given the current set of model transforms, { A } . 5. Goto (2) until convergence, or an appropriate stopping criterion is satisfied. Oth- {t} erwise update and the component means using the latest transforms and assignment variables. There are three distinct optimisation problems within this task. First the ML estimate of the set of component specific diagonal covariance matrices is required. Second, the new set of transforms must be estimated. Finally the new set of assignment vectors is required. The ML estimates of the component specific variances (and means) under a transformation is a standard problem, e.g. for the semi-tied case see [3] and is not described further. The ML estimation of the transforms and assignment variables are described below. The transforms are estimated in an iterative fashion. The proposed scheme is derived by modifying the standard semi-tied covariance optimisation equation in [3]. A row by row optimisation is used. Consider row i of stream p of transform r, a[;fi' the auxiliary function may be written as (ignoring constant scalings and elements independent of a[;fi) Q(M M' {t} ", {z}) = "" (3(m) log ((c(z(m?a(Z~~?T)2) _ L...J [pj. [pj. m (z(m? L...J [sj} [sj} 8,r,j L w(m) (m)2 m:{z~m)=r} U diag[sjj K(sr j ) = and c[pji " " a(r) .K(srj)a(r)T L 'Ym(r) (6) T is the cofactor of row i of stream p of transform A (z(m? (r) . The gradient j [pji' differentiating the auxiliary function with respect to a[;fi' is given by5 j(r). = [pj. (m)c(z~m?} "" L...J m:{z~m)=r} [pj. { 2 (3 (z(m? (r)T C[pji _ 2a(r).K(pri) [pj. (8) a[pji The main cost for computing the gradient is calculating the cofactors for each component. Having computed the gradient the Hessian may also be simply calculated as H(r) . [pj. = (m) (z(m?T (z(m?} "" L...J { _2(3 m:{z~m)=r} c [pji c[pji ( (z(m? c[pji _ 2K(pri) (r)T)2 (9) a[pji The Hessian is guaranteed to be negative definite so the Newton direction must head towards a maximum. At the t + 1th iteration (r) ( a[pji t+ 1) _ - (r) () t - a[pji where the gradient and Hessian are based on the estimation scheme was highly stable. j(r) H(r)-l [pji tth (10) [Pji parameter estimates. In practice this The assignment for stream s of component m is found using a greedy search technique based on ML estimation. Stream s of component m is assigned using (A (u(,rm?) 12 ) Idet ( diag (A[;i A[;t) ) I Idet z(m) s - arg max { ( } (11) W(m) rER, where the hypothesised assignment of factor stream s, u(srm), is given by (srm) _ { uj - r, z~m), j =s (otherwise) ------------------------- 5When the standard semi-tied system is used (i.e. S form solution (r) _ (r) K(lri)-l a[l]i - C[l ]i (12) = 1) the estimation of row, i has the closed (Lm:{zim)=r} f3(m)) (r) K(lri)-l (r)T C[l]i C[l]i (7) As the assignment is dependent on the cofactors, which themselves are dependent on the other stream assignments for that component, an iterative scheme is required. In practice this was found to converge rapidly. 3 Results and Discussion An initial investigation of the use of factored semi-tied covariance matrices was carried out on a large-vocabulary speaker-independent continuous-speech recognition task. 
The recognition experiments were performed on the 1994 ARPA Hub 1 data (the H1 task), an unlimited vocabulary task. The results were averaged over the development and evaluation data. Note that no tuning on the "development" data was performed. The baseline system used for the recognition task was a gender-independent cross-word-triphone mixture-Gaussian tied-state HMM system. For details of the system see [8]. The total number of phones (counting silence as a separate phone) was 46, from which 6399 distinct context states were formed. The speech was parameterised into a 39-dimensional feature vector.

The set of baseline experiments with semi-tied covariance matrices (S = 1) used "expert" knowledge to determine the transform classes. Two sets were used. The first was based on phone level transforms where all components of all states from the same phone shared the same class (phone classes). The second used an individual transform per state (state classes). In addition a global transform (global class) and a full-covariance matrix system (comp class) were tested. Two systems were examined, a four Gaussian components per state system and a twelve Gaussian component system. The twelve component system is the standard system described in [8]. In both cases a diagonal covariance matrix system (labelled none) was generated in the standard HTK fashion [9]. These systems were then used to generate the initial alignments to build the semi-tied systems. An additional iteration of Baum-Welch estimation was then performed. Three forms of assignment training were compared: the previously described expert system and two ML-based schemes, standard and factored. The standard scheme used a single stream (S = 1), which is similar to the scheme described in [3]. The factored scheme used the new approach described in this paper with a separate stream for each of the elements of the feature vector (S = 39).

Table 1: System performance (word error rate, %) on the 1994 ARPA H1 task

  Classes:        none   global   phone   state   comp   phone     phone
  Assignment:      -       -      expert  expert   -     standard  factored
  4 components:  10.34     -      10.04    9.20    9.22    9.73      9.48
  12 components:  9.71    8.87     8.86    8.84    9.98    8.62      8.42

The results of the baseline semi-tied covariance matrix systems are shown in Table 1. For the four component system the full covariance matrix system achieved approximately the same performance as that of the expert state semi-tied system. Both systems significantly (at the 95% level) outperformed the standard 12-component system (9.71%). The expert phone system shows around a 9% degradation in performance compared to the state system, but used less than a hundredth of the number of transforms (46 versus 6399). Using the standard ML assignment scheme with initial phone classes, S = 1, reduced the error rate of the phone system by around 3% over the expert system. The factored scheme, S = 39, achieved further reductions in error rate. A 5% reduction in word error rate was achieved over the expert system, which is significant at the 95% level.

Table 1 also shows the performance of the twelve component system. The use of a global semi-tied transform significantly reduced the error rate by around 9% relative. Increasing the number of transforms using the expert assignment showed no reduction in error rate. Again using the phone level system and training the component transform assignments, either the standard or the factored schemes, reduced the word error rate. Using the factored semi-tied transforms (S = 39) significantly reduced the error rate, by around 5%, compared to the expert systems.
4 Conclusions

This paper has presented a new form of semi-tied covariance matrix, the factored semi-tied covariance matrix. The theory for estimating these transforms has been developed and implemented on a large vocabulary speech recognition task. On this task the use of these factored transforms was found to decrease the word error rate by around 5% over using a single transform, or multiple transforms, where the assignments are expertly determined. The improvement was significant at the 95% level. In future work the problems of determining the required number of transforms for each of the streams and how to determine the appropriate dimensions will be investigated.

References

[1] A P Dempster, N M Laird, and D B Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1-38, 1977.
[2] M J F Gales. Maximum likelihood multiple projection schemes for hidden Markov models. Technical Report CUED/F-INFENG/TR365, Cambridge University, 1999. Available via anonymous ftp from: svr-ftp.eng.cam.ac.uk.
[3] M J F Gales. Semi-tied covariance matrices for hidden Markov models. IEEE Transactions on Speech and Audio Processing, 7:272-281, 1999.
[4] N K Goel and R Gopinath. Multiple linear transforms. In Proceedings ICASSP, 2001. To appear.
[5] N Kumar. Investigation of Silicon Auditory Models and Generalization of Linear Discriminant Analysis for Improved Speech Recognition. PhD thesis, Johns Hopkins University, 1997.
[6] L R Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, February 1989.
[7] S Roweis and Z Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11:305-345, 1999.
[8] P C Woodland, J J Odell, V Valtchev, and S J Young. The development of the 1994 HTK large vocabulary speech recognition system. In Proceedings ARPA Workshop on Spoken Language Systems Technology, pages 104-109, 1995.
[9] S J Young, J Jansen, J Odell, D Ollason, and P Woodland. The HTK Book (for HTK Version 2.0). Cambridge University, 1996.
Dopamine Bonuses

Sham Kakade, Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR
[email protected], [email protected]

Abstract

Substantial data support a temporal difference (TD) model of dopamine (DA) neuron activity in which the cells provide a global error signal for reinforcement learning. However, in certain circumstances, DA activity seems anomalous under the TD model, responding to non-rewarding stimuli. We address these anomalies by suggesting that DA cells multiplex information about reward bonuses, including Sutton's exploration bonuses and Ng et al's non-distorting shaping bonuses. We interpret this additional role for DA in terms of the unconditional attentional and psychomotor effects of dopamine, having the computational role of guiding exploration.

1 Introduction

Much evidence suggests that dopamine cells in the primate midbrain play an important role in reward and action learning. Electrophysiological studies support a theory that DA cells signal a global prediction error for summed future reward in appetitive conditioning tasks (Montague et al, 1996; Schultz et al, 1997), in the form of a temporal difference prediction error term. This term can simultaneously be used to train predictions (in the model, the projections of the DA cells in the ventral tegmental area to the limbic system and the ventral striatum) and to train actions (the projections of DA cells in the substantia nigra to the dorsal striatum and motor and premotor cortex). Appetitive prediction learning is associated with classical conditioning, the task of learning which stimuli are associated with reward; appetitive action learning is associated with instrumental conditioning, the task of learning actions that result in reward delivery.

The computational role of dopamine in reward learning is controversial for two main reasons (Ikemoto & Panksepp, 1999; Redgrave et al, 1999). First, stimuli that are not associated with reward prediction are known to activate the dopamine system persistently, including in particular stimuli that are novel and salient, or that physically resemble other stimuli that do predict reward (Schultz, 1998). Second, dopamine release is associated with a set of motor effects, such as species- and stimulus-specific approach behaviors, that seem either irrelevant or detrimental to the delivery of reward. We call these unconditional effects. In this paper, we study this apparently anomalous activation of the DA system, suggesting that it multiplexes information about bonuses, potentially including exploration bonuses (Sutton, 1990; Dayan & Sejnowski, 1996) and shaping bonuses (Ng et al, 1999), on top of reward prediction errors. These responses are associated with the unconditional effects of DA, and are part of an attentional system.

[Figure 1, panels A-D: peri-stimulus rasters and histograms of DA cell activity around light, reward, and door+ / door- stimuli; time scales of roughly 300-700 ms, 300 ms bins.]

Figure 1: Activity of individual DA neurons - though substantial data suggest the homogeneous character of these responses (Schultz, 1998). See text for description. The latency and duration of the DA activation is about 100 ms. The depression has a duration of about 200 ms. The baseline spike rate is about 2-4 Hz.
Adapted from Schultz et al (1990, 1992, & 1993) and Jacobs et al (1997).

2 DA Activity Figure 1 shows three different types of dopamine responses that have been observed by Schultz et al and Jacobs et al. Figures 1A;B show the response to a conditioned stimulus that becomes predictive of reward (CS+). For this, in early trials (figure 1A), there is no, or only a weak, response to the CS+, but a strong response just after the time of delivery of the reward. In later trials (figure 1B), after learning is complete (but before overtraining), the DA cells are activated in response to the stimulus, and fire at background rates to the reward. Indeed, if the reward is omitted, there is depression of DA activity at just the time during early trials that it used to excite the cells. These are the key data for which the temporal difference model accounts. Under the model, the cells report the temporal difference (TD) error for reward, ie the difference in the amount of reward that is delivered and the amount that is expected. Let r(t) be the amount of reward received at time t and v(t) be the prediction of the sum total (undiscounted) reward to be delivered in a trial after time t, or: v(t) ≈ Σ_{τ≥0} r(t + τ). (1) The TD component of the dopamine activity is the prediction error: δ(t) = r(t) + v(t + 1) − v(t), (2) which uses r(t) + v(t + 1) as an estimate of Σ_{τ≥0} r(t + τ), so that the TD error is an estimate of Σ_{τ≥0} r(t + τ) − v(t). Provided that the information about state includes information about how much time has elapsed since the CS+ was presented (which must be available because of the precisely timed nature of the inhibition at the time of reward, if the expected reward is not presented), this model accounts well for the results in figure 1A. The general framework of reinforcement learning methods for Markov decision problems (MDPs) extends these results to the case of control. An MDP consists of states, actions, transition probabilities between states under the chosen action, and the rewards associated with these transitions. The goal of the subject solving an MDP is to find a policy (a choice of actions in each state) so as to optimize the sum total reward it receives. The TD error δ(t) can be used to learn optimal policies by implementing a form of policy iteration, which is an optimal control technique that is standard in engineering (Sutton & Barto, 1998; Bertsekas & Tsitsiklis, 1996). Figures 1C;D show that reporting a prediction error for reward does not exhaust the behavioral repertoire of the DA cells. Figure 1C shows responses to salient, novel stimuli. The dominant effect is that there is a phasic activation of dopamine cells followed by a phasic inhibition, both locked to the stimulus. These novelty responses decrease over trials, but quite slowly for very salient stimuli (Schultz, 1998). In some cases, particularly in early trials of appetitive learning (figure 1A top), there seems to be little or no phasic inhibition of the cells following the activation. Figure 1D shows what happens when a stimulus (door−) that resembles a reward-predicting stimulus (door+) is presented without reinforcement. Again a phasic increase over baseline followed by a depression is seen (lower 1D). However, unlike the case in figure 1B, there is no persistent reward prediction, since if a reward is subsequently delivered (unexpectedly), the cells become active (not shown) (Schultz, 1998).
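To make the TD account concrete, here is a minimal sketch (ours, not the paper's) of tabular TD(0) prediction learning within a trial, using time-since-stimulus as the state; all names and parameters are illustrative.

```python
import numpy as np

def run_trial(v, r, eps=0.3):
    """One trial of tabular TD(0). v[t] predicts summed future reward
    from time t within the trial; r[t] is the reward delivered at t."""
    T = len(r)
    delta = np.zeros(T)
    for t in range(T):
        v_next = v[t + 1] if t + 1 < T else 0.0   # v = 0 beyond the trial
        delta[t] = r[t] + v_next - v[t]           # equation (2)
        v[t] += eps * delta[t]                    # learn the predictions
    return delta

T = 20
r = np.zeros(T); r[15] = 1.0   # stimulus at t = 0, reward at t = 15
v = np.zeros(T)
for trial in range(200):
    run_trial(v, r)
# after learning, delta ~ 0 at the reward time; omitting the reward
# produces a negative delta (depression) at t = 15, as in figure 1;
# the burst to the CS itself corresponds to the unpredicted transition
# into the trial, which this within-trial representation leaves implicit
print(run_trial(v.copy(), np.zeros(T))[15])  # ~ -1: omitted reward
```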
3 Multiplexing and reward distortion The most critical issue is whether it is possible to reconcile the behavior of the DA cells seen in figures 1C;D with the putative computational role of DA in terms of reporting prediction error for reward. Intuitively, these apparently anomalous responses are benign, that is they do not interfere with the end point of normal reward learning, provided that they sum to zero over a trial. To see this, consider what happens once learning is complete. If we sum the prediction error terms from equation 2, starting from the time of the stimulus onset at t = 1, we get Σ_{t≥1} δ(t) = v(t_end) − v(1) + Σ_{t≥1} r(t), where t_end is the time at the end of the trial. Assuming that v(t_end) = 0 and v(1) = 0, ie that the monkey confines its reward predictions to within a trial, we can see that any additional influences on δ(t) that sum to 0 preserve predicted sum future rewards. From figure 1, this seems true of the majority of the extra responses, ie anomalous activation is canceled by anomalous inhibition, though it is not true of the uncancelled DA responses shown in figure 1A (upper). Altogether, DA activity can still be used to learn predictions and choose actions - although it should not strictly be referred to solely in terms of prediction error for reward. Apart from the issue of anomalous activation that is not canceled (upper figure 1A), this leaves open two key questions: what drives the extra DA responses; and what effects do they have. We offer a set of possible interpretations (mostly associated with bonuses) that it is hard to decide between on the basis of current data.

4 Novelty and Bonuses Three very different sorts of bonuses have been considered in reinforcement learning: novelty, shaping and exploration bonuses. The presence of the first two of these is suggested by the responses in figure 1. Bonuses modify the reward signals and so change the course of learning. They are mostly used to guide exploration of the world, and are typically heuristic ways of addressing the computationally intractable exploration-exploitation dilemma.

[The plot panels are not recoverable from the extraction; only the caption survives.] Figure 2: Activity of the DA system given novelty bonuses. The plots show different aspects of the TD error δ as a function of time t within a trial (first three plots in each row) or as a function of the number T of trials (last two). Upper) A novelty signal was applied for just the first timestep of the stimulus and decayed hyperbolically with trial number as 1/T. Lower) A novelty signal was applied for the first two timesteps of the stimulus and now decayed exponentially as e^{−0.3T} to demonstrate that the precise form of decay is irrelevant. Trial numbers and times are shown in the plots. The learning rate was ε = 0.3.

We first consider a novelty bonus, which we take as a model for uncancelled anomalous activity. A novelty bonus is a value that is added to states or state-action pairs associated with their unfamiliarity - novelty is made intrinsically rewarding. This is computationally reasonable, at least in moderation, and indeed it has become standard practice in reinforcement learning to use optimistic initial values for states to encourage systems to plan to get to novel or unfamiliar states.
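The cancellation argument can be checked directly; this small sketch (ours) verifies that extra terms summing to zero over a trial leave Σ_t δ(t), and hence the learned predictions, unchanged for any value function.

```python
import numpy as np

T, t_rew = 20, 15
r = np.zeros(T); r[t_rew] = 1.0
v = np.random.randn(T + 1)      # arbitrary predictions; v[T] ends the trial

extra = np.zeros(T)
extra[2], extra[3] = 0.5, -0.5  # anomalous activation then inhibition

delta = r + v[1:] - v[:-1] + extra
# telescoping: sum(delta) = v(t_end) - v(1) + sum(r) + sum(extra);
# because the extra terms cancel, sum(delta) is the same as without them
print(np.isclose(delta.sum(), v[-1] - v[0] + r.sum()))  # True
```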
In TD terms, this is like replacing the true environmental reward r(t) at time t with r(t) → r(t) + n(x(t), T), where x(t) is the state at time t and n(x(t), T) is the novelty of this state in trial T (an index we generally suppress). The effect on the TD error is then δ(t) = r(t) + n(x(t), T) + v(t + 1) − v(t). (3) The upper plots in figure 2 show the effect of including such an exploration bonus, in a case in which just the first timestep of a new stimulus in any given trial is awarded a novelty signal which decays hyperbolically to 0 as the stimulus becomes more familiar. Here, a novel stimulus is presented for 25 trials without there being any reward consequences. The effect is just a positive signal which decreases over time. Learning has no effect on this, since the stimulus cannot predict away a novelty signal that lasts only a single timestep. The lower plots in figure 2 show that it is possible to get partial apparent cancellation through learning, if the novelty signal is applied for the first two timesteps of a stimulus (for instance if the novelty signal is calculated relatively slowly). In this case, the initial effect is just a positive signal (leftmost graph), the effect of TD learning gives it a negative transient after a few trials (second plot), and then, as the novelty signal decays to 0, the effect goes away (third plot). The righthand plots show how δ(t) behaves across trials. If there was no learning, then there would be no negative transient. The depression of the DA signal comes from the decay of the novelty bonuses. Novelty bonuses are true bonuses in the sense that they actually distort the reward function. In particular, this means that we would not expect the sum of the extra TD error terms to be 0 across a trial. This property makes them useful, for instance, in actually distorting the optimal policy in Markov decision problems to ensure that exploration is planned and executed in favor of exploitation. However, they can be dangerous for exactly the same reason - and there are reports of them leading to incorrect behavior, making agents search too much.

[The plot panels are not recoverable from the extraction; only the caption survives.] Figure 3: Activity of the DA system given shaping bonuses (in the same format as figure 2). Upper) The plots show different aspects of the TD error δ as a function of time t within a trial (first three plots) or as a function of the number T of trials (last two). Here, the shaping bonus comes from a potential φ(x(t)) = 1 for the first two timesteps a stimulus is presented within a trial (t = 1, 2), and 0 thereafter, irrespective of trial number. The learning rate was ε = 0.3. Lower) The same plots for ε = 0.

In answer to this concern, Ng et al (1999) invented the idea of non-distorting shaping bonuses. Ng et al's shaping bonuses are guaranteed not to distort optimal policies, although they can still change the exploratory behavior of agents. This guarantee comes because a shaping bonus is derived from a potential function φ(x) of a state, distorting the TD error to δ(t) = r(t) + φ(x(t + 1)) − φ(x(t)) + v(t + 1) − v(t). (4) The difference from the novelty bonus of equation 3 is that the bonus comes from the difference between the potential functions for one state and the previous state, and they thus cancel themselves out when summed over a trial. Shaping bonuses must remain constant for the guarantee about the policies to hold. The upper plots in figure 3 show the effect of shaping bonuses on the TD error.
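Before turning to the details of figure 3, equations (3) and (4) can be simulated side by side; a sketch with our own trial structure and parameters, in which the pre-stimulus state is pinned at zero because the stimulus itself is unpredictable.

```python
import numpy as np

def td_trial(v, r, bonus, eps=0.3):
    """One TD trial with a bonus added to the reward. States run
    t = 0..T; t = 0 is the pre-stimulus state, which cannot predict
    the (surprising) stimulus, so v[0] is pinned at 0."""
    T = len(r)
    delta = np.zeros(T)
    for t in range(T):
        delta[t] = r[t] + bonus[t] + v[t + 1] - v[t]   # eqs (3)/(4)
        v[t] += eps * delta[t]
    v[0] = 0.0
    return delta

T, trials = 10, 25
r = np.zeros(T)                        # no real reward in these trials
v_nov, v_shp = np.zeros(T + 1), np.zeros(T + 1)
phi = np.zeros(T + 1); phi[1:3] = 1.0  # potential: stimulus at t = 1, 2
shaping = phi[1:] - phi[:-1]           # phi(x(t+1)) - phi(x(t)); sums to 0
for tr in range(1, trials + 1):
    novelty = np.zeros(T); novelty[0] = 1.0 / tr   # hyperbolic decay
    d_nov = td_trial(v_nov, r, novelty)
    d_shp = td_trial(v_shp, r, shaping)

print(d_nov[0])   # ~1/25: the novelty bonus persists, merely decaying
print(d_shp[:4])  # ~0: the constant shaping bonus is predicted away
```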
Here, the potential function is set to the value 1 for the first two time steps of a stimulus in a trial, and 0 otherwise. The most significant difference between shaping and novelty bonuses is that the former exhibits a negative transient even in the very first trial, whereas, for the latter, it is a learned effect. If the learning rate is non-zero, then shaping bonuses are exactly predicted away over the course of normal learning. Thus, even though the same bonus is provided on trial 25 as trial 1, the TD error becomes 0 since the shaping bonus is predicted away. The dynamics of the decay shown in the last two plots is controlled by the learning rate for TD. The lower plots show what happens if learning is switched off at the time the shaping bonus is provided - this would be the case if the system responsible for computing the bonus takes its effect before the inputs associated with the stimulus are plastic. In this case, the shaping bonus is preserved. The final category of bonus is an ongoing exploration bonus (Sutton, 1990; Dayan & Sejnowski, 1996) which is used to ensure continued exploration. Sutton (1990) suggested adding to the estimated value of each state (or each state-action pair) a number proportional to the length of time since it was last visited. This ultimately makes it irresistible to go and visit states that have not been visited for a long time. Dayan & Sejnowski (1996) derived a bonus of this form from a model of environmental change that justifies the bonus. There is no evidence for this sort of continuing exploration bonus in the dopamine data, perhaps not surprisingly, since the tasks undertaken by the monkey offer little possibility for any trade-off between exploration and exploitation.

[The plot panels are not recoverable from the extraction; only the caption survives.] Figure 4: Activity δ(t) of the dopamine system given partial predictability. del = delivered, pred = predicted. A;B) CS+ is presented with (A) or, surprisingly, without (B) reward. C;D) CS− is presented without (C) or, surprisingly, with (D) reward. On each trial, an initial stimulus (presented at t = 3) is ambiguous as to whether CS+ or CS− is presented (each occurs equally often), and the ambiguity is perfectly resolved at t = 4. E;F) The model shows the same behavior. Since the CS± comes at a random interval after the cue, the traces are stimulus-locked to the relevant events.

5 Generalization Responses and Partial Observability Generalization responses (figure 1D) show a persistent effect of stimuli that merely resemble a rewarded stimulus. However, animals do not terminally confuse normally rewarded and normally non-rewarded stimuli, since if a reward is provided in the latter case, then it engenders DA activity (as an unexpected reward should), and if it is not provided, then there is no depression (as would be the case if an expected reward was not delivered) (Schultz, 1998). One possibility is that this activity comes from a shaping bonus that is not learned away, as in the lower plots of figure 3. An alternative interpretation comes from partial observability. If the initial information from the world is ambiguous as to whether the stimulus is actually rewarding (door+, called CS+ trials) or non-rewarding (door−, called CS− trials), because of the similarity, then the animal should develop an initial expectation that there could be a reward (whose mean value is related to the degree of confusion). This should lead to a partial activation of the DA system.
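This partial-activation account can be put in numbers before continuing the argument; a sketch with our own toy values: the ambiguous state takes the mean of the two resolved values, and δ at resolution is the difference between resolved and ambiguous values.

```python
# Values after learning, with a unit reward on CS+ trials (toy numbers).
v_plus, v_minus = 1.0, 0.0           # resolved CS+ / CS- values
v_ambig = 0.5 * (v_plus + v_minus)   # 50/50 ambiguous first observation

delta_onset = v_ambig - 0.0              # partial activation at onset
delta_resolve_plus = v_plus - v_ambig    # +0.5: expectation confirmed
delta_resolve_minus = v_minus - v_ambig  # -0.5: inhibition below baseline
print(delta_onset, delta_resolve_plus, delta_resolve_minus)
```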
If the expectation is canceled by subsequent information about the stimulus (available, for instance, following a saccade), then the DA system will be inhibited below baseline exactly to nullify the earlier positive prediction. If the expectation is confirmed, then there will be continued activity representing the difference between the value of the reward and the expected value given the ambiguous stimulus. Figure 4 shows an example of this in a simplified case for which the animal receives information about the true stimulus over two timesteps; the first time step is ambiguous to the tune of 50%; the second perfectly resolves the ambiguity. Figures 4A;B show CS+ trials, with and without the delivery of reward; figures 4C;D CS− trials, without and with the delivery of reward. The similarity of 4A;C to figure 1D is clear. Another instance of this generalization response is shown in figure 1E. Here, a cue light (c±) is provided indicating whether a CS+ or a CS− (d±) is to appear at a random later time, which in turn is followed (or not) after a fixed interval by a reward (r±). DA cells show a generalization response to the cue light; and then fire to the CS+ or are unaffected by the CS−; and finally do not respond to the appropriate presence or absence of the reward. Figures 4E;F show that this is exactly the behavior of the model. The DA response stimulus-locked to the CS+ arises because of the variability in the interval between the cue light and the CS+; if this interval were fixed, then the cells would only respond to the cue (c+), as in Schultz (1993).

6 Discussion We have suggested a set of interpretations for the activity of the DA system to add to that of reporting prediction error for reward. The two theoretically most interesting features are novelty and shaping bonuses. The former distort the reward function in such a way as to encourage exploration of new stimuli and new places. The latter are non-distorting, and can be seen as being multiplexed by the DA system together with the prediction error signal. Since shaping bonuses are not distorting, they have no ultimate effect on action choice. However, the signal provided by the activation (and then cancellation) of DA can nevertheless have a significant neural effect. We suggest that DA release has unconditional effects in the ventral striatum (perhaps allowing stimuli to be read into pre-frontal working memory, Cohen et al, 1998) and the dorsal striatum (perhaps engaging stimulus-directed approach and exploratory orienting behaviors, see Ikemoto & Panksepp (1999) for review). For stimuli that actually predict rewards (and so cause an initial activation of the DA system), these behaviors are often called appetitive; for novel, salient, and potentially important stimuli that are not known to predict rewards, they allow the system to pay appropriate attention. These effects of DA are unconditional, since they are hard-wired and not learned. In the case of partial observability, DA release due to the uncertain prediction of reward directly causes further investigation, and therefore resolution of the uncertainty. When unconditional and conditioned behaviors conflict, the former seem to dominate, as in the inability of animals to learn to run away from a stimulus in order to get food from it. The most major lacuna in the model is its lack of one or more opponent processes to DA that might report on punishments and the absence of predicted rewards.
There is substantial circumstantial evidence that this might be one role for serotonin (which itself has unconditional effects associated with fear, fight, and flight responses that are opposite to those of DA), but there is not the physiological evidence to support or refute this possibility. Understanding the interaction of dopamine and serotonin in terms of their conditioned and unconditioned effects is a major task for future work.

Acknowledgements Funding is from the NSF and the Gatsby Charitable Foundation.

References
[1] Bertsekas, DP & Tsitsiklis, JN (1996). Neuro-dynamic Programming. Cambridge, MA: Athena Scientific.
[2] Cohen, JD, Braver, TS & O'Reilly, RC (1998). In AC Roberts, TW Robbins, editors, The Prefrontal Cortex: Executive and Cognitive Functions. Oxford: OUP.
[3] Dayan, P & Sejnowski, TJ (1996). Machine Learning, 25:5-22.
[4] Horvitz, JC, Stewart, T & Jacobs, B (1997). Brain Research, 759:251-258.
[5] Ikemoto, S & Panksepp, J (1999). Brain Research Reviews, 31:6-41.
[6] Montague, PR, Dayan, P & Sejnowski, TJ (1996). Journal of Neuroscience, 16:1936-1947.
[7] Ng, AY, Harada, D & Russell, S (1999). Proceedings of the Sixteenth International Conference on Machine Learning.
[8] Redgrave, P, Prescott, T & Gurney, K (1999). Trends in Neurosciences, 22:146-151.
[9] Schultz, W (1992). Seminars in the Neurosciences, 4:129-138.
[10] Schultz, W (1998). Journal of Neurophysiology, 80:1-27.
[11] Schultz, W, Apicella, P & Ljungberg, T (1993). Journal of Neuroscience, 13:900-913.
[12] Schultz, W, Dayan, P & Montague, PR (1997). Science, 275:1593-1599.
[13] Schultz, W & Romo, R (1990). Journal of Neuroscience, 63:607-624.
[14] Sutton, RS (1990). Machine Learning: Proceedings of the Seventh International Conference, 216-224.
[15] Sutton, RS & Barto, AG (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
954
1,873
On a Connection between Kernel PCA and Metric Multidimensional Scaling Christopher K. I. Williams Division of Informatics The University of Edinburgh 5 Forrest Hill, Edinburgh EH1 2QL, UK [email protected] http://anc.ed.ac.uk

Abstract In this paper we show that the kernel PCA algorithm of Schölkopf et al (1998) can be interpreted as a form of metric multidimensional scaling (MDS) when the kernel function k(x, y) is isotropic, i.e. it depends only on ‖x − y‖. This leads to a metric MDS algorithm where the desired configuration of points is found via the solution of an eigenproblem rather than through the iterative optimization of the stress objective function. The question of kernel choice is also discussed.

1 Introduction Suppose we are given n objects, and for each pair (i, j) we have a measurement of the "dissimilarity" δ_ij between the two objects. In multidimensional scaling (MDS) the aim is to place n points in a low dimensional space (usually Euclidean) so that the interpoint distances d_ij have a particular relationship to the original dissimilarities. In classical scaling we would like the interpoint distances to be equal to the dissimilarities. For example, classical scaling can be used to reconstruct a map of the locations of some cities given the distances between them. In metric MDS the relationship is of the form d_ij ≈ f(δ_ij) where f is a specific function. In this paper we show that the kernel PCA algorithm of Schölkopf et al [7] can be interpreted as performing metric MDS if the kernel function is isotropic. This is achieved by performing classical scaling in the feature space defined by the kernel. The structure of the remainder of this paper is as follows: In section 2 classical and metric MDS are reviewed, and in section 3 the kernel PCA algorithm is described. The link between the two methods is made in section 4. Section 5 describes approaches to choosing the kernel function, and we finish with a brief discussion in section 6.

2 Classical and metric MDS 2.1 Classical scaling Given n objects and the corresponding dissimilarity matrix, classical scaling is an algebraic method for finding a set of points in space so that the dissimilarities are well-approximated by the interpoint distances. The classical scaling algorithm is introduced below by starting with the locations of n points, constructing a dissimilarity matrix based on their Euclidean distances, and then showing how the configuration of the points can be reconstructed (as far as possible) from the dissimilarity matrix. Let the coordinates of n points in p dimensions be denoted by x_i, i = 1, ..., n. These can be collected together in an n × p matrix X. The dissimilarities are calculated by δ²_ij = (x_i − x_j)ᵀ(x_i − x_j). Given these dissimilarities, we construct the matrix A such that a_ij = −½δ²_ij, and then set B = HAH, where H is the centering matrix H = I_n − (1/n)11ᵀ. With δ²_ij = (x_i − x_j)ᵀ(x_i − x_j), the construction of B leads to b_ij = (x_i − x̄)ᵀ(x_j − x̄), where x̄ = (1/n)Σ_{i=1}^n x_i. In matrix form we have B = (HX)(HX)ᵀ, and B is real, symmetric and positive semi-definite. Let the eigendecomposition of B be B = VΛVᵀ, where Λ is a diagonal matrix and V is a matrix whose columns are the eigenvectors of B. If p < n, there will be n − p zero eigenvalues¹. If the eigenvalues are ordered λ₁ ≥ λ₂ ≥ ... ≥ λ_n ≥ 0, then B = V_p Λ_p V_pᵀ, where Λ_p = diag(λ₁, ..., λ_p) and V_p is the n × p matrix whose columns correspond to the first p eigenvectors of B, with the usual normalization so that the eigenvectors have unit length.
The matrix X̂ of the reconstructed coordinates of the points can be obtained as X̂ = V_p Λ_p^{1/2}, with B = X̂X̂ᵀ. Clearly from the information in the dissimilarities one can only recover the original coordinates up to a translation, a rotation and reflections of the axes; the solution obtained for X̂ is such that the origin is at the mean of the n points, and that the axes chosen by the procedure are the principal axes of the X configuration. It may not be necessary to use all p dimensions to obtain a reasonable approximation; a configuration X̂ in k dimensions can be obtained by using the largest k eigenvalues so that X̂ = V_k Λ_k^{1/2}. These are known as the principal coordinates of X in k dimensions. The fraction of the variance explained by the first k eigenvalues is Σ_{i=1}^k λ_i / Σ_{i=1}^p λ_i. Classical scaling as explained above works on Euclidean distances as the dissimilarities. However, one can run the same algorithm with a non-Euclidean dissimilarity matrix, although in this case there is no guarantee that the eigenvalues will be non-negative. Classical scaling derives from the work of Schoenberg and of Young and Householder in the 1930's. Expositions of the theory can be found in [5] and [2].

2.1.1 Optimality properties of classical scaling Mardia et al [5] (section 14.4) give the following optimality property of the classical scaling solution. ¹In fact if the points are not in "general position" the number of zero eigenvalues will be greater than n − p. Below we assume that the points are in general position, although the arguments can easily be carried through with minor modifications if this is not the case. Theorem 1 Let X denote a configuration of points in ℝ^p, with interpoint distances δ²_ij = (x_i − x_j)ᵀ(x_i − x_j). Let L be a p × p rotation matrix and set L = (L₁, L₂), where L₁ is p × k for k < p. Let X̂ = XL₁, the projection of X onto a k-dimensional subspace of ℝ^p, and let d²_ij = (x̂_i − x̂_j)ᵀ(x̂_i − x̂_j). Amongst all projections X̂ = XL₁, the quantity φ = Σ_{i,j} (δ²_ij − d²_ij) is minimized when X is projected onto its principal coordinates in k dimensions. For all i, j we have d_ij ≤ δ_ij. The value of φ for the principal coordinate projection is φ = 2n(λ_{k+1} + ... + λ_p).

2.2 Relationships between classical scaling and PCA There is a well-known relationship between PCA and classical scaling; see e.g. Cox and Cox (1994) section 2.2.7. Principal components analysis (PCA) is concerned with the eigendecomposition of the sample covariance matrix S = (1/n)XᵀHX. It is easy to show that the eigenvalues of nS are the p non-zero eigenvalues of B. To see this note that H² = H and thus that nS = (HX)ᵀ(HX). Let v_i be a unit-length eigenvector of B so that Bv_i = λ_i v_i. Premultiplying by (HX)ᵀ yields (HX)ᵀ(HX)(HX)ᵀv_i = λ_i (HX)ᵀv_i, (1) so we see that λ_i is an eigenvalue of nS. y_i = (HX)ᵀv_i is the corresponding eigenvector; note that y_iᵀy_i = λ_i. Centering X and projecting onto the unit vector ŷ_i = λ_i^{−1/2} y_i we obtain HX ŷ_i = λ_i^{−1/2} HX(HX)ᵀv_i = λ_i^{1/2} v_i. (2) Thus we see that the projection of X onto the eigenvectors of nS returns the classical scaling solution.

2.3 Metric MDS The aim of classical scaling is to find a configuration of points X̂ so that the interpoint distances d_ij well approximate the dissimilarities δ_ij. In metric MDS this criterion is relaxed, so that instead we require d_ij ≈ f(δ_ij), (3) where f is a specified (analytic) function. For this definition see, e.g., Kruskal and Wish [4] (page 22), where polynomial transformations are suggested.
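A minimal sketch of the classical scaling algorithm of section 2.1 (our code; all names are illustrative):

```python
import numpy as np

def classical_scaling(D, k):
    """D: n x n matrix of dissimilarities delta_ij; returns the n x k
    principal coordinates X_hat = V_k Lambda_k^{1/2}."""
    n = D.shape[0]
    A = -0.5 * D**2                      # a_ij = -1/2 delta_ij^2
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    B = H @ A @ H
    lam, V = np.linalg.eigh(B)           # ascending eigenvalues
    idx = np.argsort(lam)[::-1][:k]      # keep the largest k
    return V[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

# sanity check: recover a random configuration up to rotation/translation
X = np.random.randn(10, 3)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Xhat = classical_scaling(D, 3)
Dhat = np.linalg.norm(Xhat[:, None, :] - Xhat[None, :, :], axis=-1)
print(np.allclose(D, Dhat))  # True: interpoint distances are reproduced
```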
A straightforward way to carry out metric MDS is to define an error function (or stress) S = Σ_{i<j} w_ij (d_ij − f(δ_ij))², (4) where the {w_ij} are appropriately chosen weights. One can then obtain derivatives of S with respect to the coordinates of the points that define the d_ij's and use gradient-based (or more sophisticated) methods to minimize the stress. This method is known as least-squares scaling. An early reference to this kind of method is Sammon (1969) [6], where w_ij = 1/δ_ij and f is the identity function. Note that if f(δ_ij) has some adjustable parameters θ and is linear with respect to θ,² then the function f can also be adapted and the optimal value for those parameters given the current d_ij's can be obtained by (weighted) least-squares regression. ²f can still be a non-linear function of its argument. Critchley (1978) [3] (also mentioned in section 2.4.2 of Cox and Cox) carried out metric MDS by running the classical scaling algorithm on the transformed dissimilarities. Critchley suggested the power transformation f(δ_ij) = δ_ij^μ (for μ > 0). If the dissimilarities are derived from Euclidean distances, we note that the kernel k(x, y) = −‖x − y‖^β is conditionally positive definite (CPD) if β ≤ 2 [1]. When the kernel is CPD, the centered matrix will be positive definite. Critchley's use of the classical scaling algorithm is similar to the algorithm discussed below, but crucially the kernel PCA method ensures that the matrix B derived from the transformed dissimilarities is non-negative definite, while this is not guaranteed by Critchley's transformation for arbitrary μ. A further member of the MDS family is nonmetric MDS (NMDS), also known as ordinal scaling. Here it is only the relative rank ordering between the d's and the δ's that is taken to be important; this constraint can be imposed by demanding that the function f in equation 3 is monotonic. This constraint makes sense for some kinds of dissimilarity data (e.g. from psychology) where only the rank orderings have real meaning.

3 Kernel PCA In recent years there has been an explosion of work on kernel methods. For supervised learning these include support vector machines [8], Gaussian process prediction (see, e.g., [10]) and spline methods [9]. The basic idea of these methods is to use the "kernel trick". A point x in the original space is re-represented as a point φ(x) in an N_F-dimensional feature space³ F, where φ(x) = (φ₁(x), φ₂(x), ..., φ_{N_F}(x)). We can think of each function φ_j(·) as a non-linear mapping. The key to the kernel trick is to realize that for many algorithms, the only quantities required are of the form⁴ φ(x_i)·φ(x_j), and thus if these can be easily computed by a non-linear function k(x_i, x_j) = φ(x_i)·φ(x_j) we can save much time and effort. Schölkopf, Smola and Müller [7] used this trick to define kernel PCA. One could compute the covariance matrix in the feature space and then calculate its eigenvectors/eigenvalues. However, using the relationship between B and the sample covariance matrix S described above, we can instead consider the n × n matrix K with entries K_ij = k(x_i, x_j) for i, j = 1, ..., n. If N_F > n, using K will be more efficient than working with the covariance matrix in feature space, and anyway the latter would be singular. The data should be centered in the feature space so that Σ_{i=1}^n φ(x_i) = 0. This is achieved by carrying out the eigendecomposition of K̃ = HKH, which gives the coordinates of the approximating points as described in section 2.2.
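Correspondingly, kernel PCA coordinates can be obtained by classical scaling of the centered kernel matrix; a sketch in the same style as the code above (our names):

```python
import numpy as np

def kernel_pca_coords(K, k):
    """Coordinates of the first k kernel principal components, obtained
    by classical scaling of the centered kernel matrix K~ = HKH."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    lam, V = np.linalg.eigh(Kc)
    idx = np.argsort(lam)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

X = np.random.randn(50, 4)
sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K = np.exp(-0.5 * sq)          # isotropic RBF kernel, r(0) = 1
Y = kernel_pca_coords(K, 2)    # 2-D embedding of the 50 points
```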
Thus we see that the visualization of data by projecting it onto the first k eigenvectors is exactly classical scaling in feature space.

4 A relationship between kernel PCA and metric MDS We consider two cases. In section 4.1 we deal with the case that the kernel is isotropic and obtain a close relationship between kernel PCA and metric MDS. If the kernel is non-stationary, a rather less close relationship is derived in section 4.2. ³For some kernels N_F = ∞. ⁴We denote the inner product of two vectors as either a·b or aᵀb.

4.1 Isotropic kernels A kernel function is stationary if k(x_i, x_j) depends only on the vector τ = x_i − x_j. A stationary covariance function is isotropic if k(x_i, x_j) depends only on the distance δ_ij, with δ²_ij = τᵀτ, so that we write k(x_i, x_j) = r(δ_ij). Assume that the kernel is scaled so that r(0) = 1. An example of an isotropic kernel is the squared exponential or RBF (radial basis function) kernel k(x_i, x_j) = exp{−θ(x_i − x_j)ᵀ(x_i − x_j)}, for some parameter θ > 0. Consider the Euclidean distance in feature space δ̃²_ij = (φ(x_i) − φ(x_j))ᵀ(φ(x_i) − φ(x_j)). With an isotropic kernel this can be re-expressed as δ̃²_ij = 2(1 − r(δ_ij)). Thus the matrix A has elements a_ij = r(δ_ij) − 1, which can be written as A = K − 11ᵀ. It can be easily verified that the centering matrix H annihilates 11ᵀ, so that HAH = HKH. We see that the configuration of points derived from performing classical scaling on K actually aims to approximate the feature-space distances computed as δ̃_ij = √(2(1 − r(δ_ij))). As the δ̃_ij's are a non-linear function of the δ_ij's, this procedure (kernel MDS) is an example of metric MDS. Remark 1 Kernel functions are usually chosen to be conditionally positive definite, so that the eigenvalues of the matrix K̃ will be non-negative. Choosing arbitrary functions to transform the dissimilarities will not give this guarantee. Remark 2 In nonmetric MDS we require that d_ij ≈ f(δ_ij) for some monotonic function f. If the kernel function r is monotonically decreasing, then clearly 1 − r is monotonically increasing. However, there are valid isotropic kernel (covariance) functions which are non-monotonic (e.g. the exponentially damped cosine r(δ) = e^{−αδ} cos(ωδ); see [11] for details) and thus we see that f need not be monotonic in kernel MDS. Remark 3 One advantage of PCA is that it defines a mapping from the original space to the principal coordinates, and hence that if a new point x arrives, its projection onto the principal coordinates defined by the original n data points can be computed⁵. The same property holds in kernel PCA, so that the projection of φ(x) onto the rth principal direction in feature space can be computed using the kernel trick as Σ_{i=1}^n αᵢʳ k(x, x_i), where αʳ is the rth eigenvector of K̃ (see equation 4.1 in [7]). This projection property does not hold for algorithms that simply minimize the stress objective function; for example the Sammon "mapping" algorithm [6] does not in fact define a mapping.

4.2 Non-stationary kernels Sometimes non-stationary kernels (e.g. k(x_i, x_j) = (1 + x_i·x_j)^m for integer m) are used. For non-stationary kernels we proceed as before and construct δ̃²_ij = (φ(x_i) − φ(x_j))ᵀ(φ(x_i) − φ(x_j)). We can again show that the kernel MDS procedure operates on the matrix HKH. However, the distance δ̃_ij in feature space is not a function of δ_ij, and so the relationship of equation 3 does not hold.
The situation can be saved somewhat if we follow Mardia et al (section 14.2.3) and relate similarities to dissimilarities through δ̃²_ij = c_ii + c_jj − 2c_ij, where c_ij denotes the similarity between items i and j in feature space. Then we see that the similarity in feature space is given by c_ij = φ(x_i)·φ(x_j) = k(x_i, x_j). For kernels (such as polynomial kernels) that are functions of x_i·x_j (the similarity in input space), we see then that the similarity in feature space is a non-linear function of the similarity measured in input space. ⁵Note that this will be, in general, different to the solution found by doing PCA on the full data set of n + 1 points.

[The plot itself (curves for several values of β over k = 0 to 2500) is not recoverable from the extraction; only the caption survives.] Figure 1: The plot shows γ as a function of k for various values of β = θ/256 for the USPS test set.

5 Choice of kernel Having performed kernel MDS one can plot the scatter diagram (or Shepard diagram) of the dissimilarities against the fitted distances. We know that for each pair the fitted distance d_ij ≤ δ̃_ij because of the projection property in feature space. The sum of the residuals is given by 2n Σ_{i=k+1}^n λ_i, where the {λ_i} are the eigenvalues of K̃ = HKH. (See Theorem 1 above and recall that at most n of the eigenvalues of the covariance matrix in feature space will be non-zero.) Hence the fraction of the sum-squared distance explained by the first k dimensions is γ = Σ_{i=1}^k λ_i / Σ_{i=1}^n λ_i. One idea for choosing the kernel would be to fix the dimensionality k and choose r(·) so that γ is maximized. Consider the effect of varying θ in the RBF kernel k(x_i, x_j) = exp{−θ(x_i − x_j)ᵀ(x_i − x_j)}. (5) As θ → ∞ we have δ̃²_ij = 2(1 − δ(i, j)) (where δ(i, j) is the Kronecker delta), which are the distances corresponding to a regular simplex. Thus K → I_n, HKH = H and γ = k/(n − 1). Letting θ → 0 and using e^{−θz} ≈ 1 − θz for small θ, we can show that K_ij = 1 − θδ²_ij as θ → 0, and thus that the classical scaling solution is obtained in this limit. Experiments have been run on the US Postal Service database of handwritten digits, as used in [7]. The test set of 2007 images was used. The size of each image is 16 × 16 pixels, with the intensity of the pixels scaled so that the average variance over all 256 dimensions is 0.5. In Figure 1, γ is plotted against k for various values of β = θ/256. By choosing an index k one can observe from Figure 1 what fraction of the variance is explained by the first k eigenvalues. The trend is that as θ decreases, more and more variance is explained by fewer components, which fits in with the idea above that the θ → ∞ limit gives rise to the regular simplex case. Thus there does not seem to be a non-trivial value of θ which minimizes the residuals.

6 Discussion The results above show that kernel PCA using an isotropic kernel function can be interpreted as performing a kind of metric MDS. The main difference between the kernel MDS algorithm and other metric MDS algorithms is that kernel MDS uses the classical scaling solution in feature space. The advantage of the classical scaling solution is that it is computed from an eigenproblem, and avoids the iterative optimization of the stress objective function that is used for most other MDS solutions. The classical scaling solution is unique up to the unavoidable translation, rotation and reflection symmetries (assuming that there are no repeated eigenvalues).
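Returning to the kernel-choice experiment of section 5, the γ curves of Figure 1 can be computed along these lines (a sketch; random data stands in for the USPS digits, and names are ours):

```python
import numpy as np

def gamma_curve(X, theta):
    """Fraction of feature-space variance captured by the first k
    eigenvalues of HKH, for the RBF kernel with parameter theta."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-theta * sq)
    H = np.eye(n) - np.ones((n, n)) / n
    lam = np.linalg.eigvalsh(H @ K @ H)[::-1]  # descending eigenvalues
    lam = np.maximum(lam, 0.0)
    return np.cumsum(lam) / lam.sum()          # gamma as a function of k

X = np.random.randn(200, 256) * np.sqrt(0.5)   # matched average variance
for beta in [0.5, 4.0, 10.0, 20.0]:
    g = gamma_curve(X, beta / 256.0)
    print(beta, g[9])   # variance explained by the first 10 components
```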
Critchley's work (1978) is somewhat similar to kernel MDS, but it lacks the notion of a projection into feature space and does not always ensure that the matrix B is non-negative definite. We have also looked at the question of adapting the kernel so as to minimize the sum of the residuals. However, for the case investigated this leads to a trivial solution.

Acknowledgements I thank David Willshaw, Matthias Seeger and Amos Storkey for helpful conversations, and the anonymous referees whose comments have helped improve the paper.

References
[1] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer-Verlag, New York, 1984.
[2] T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Chapman and Hall, London, 1994.
[3] F. Critchley. Multidimensional scaling: a short critique and a new method. In L. C. A. Corsten and J. Hermans, editors, COMPSTAT 1978. Physica-Verlag, Vienna, 1978.
[4] J. B. Kruskal and M. Wish. Multidimensional Scaling. Sage Publications, Beverly Hills, 1978.
[5] K. V. Mardia, J. T. Kent, and J. M. Bibby. Multivariate Analysis. Academic Press, 1979.
[6] J. W. Sammon. A nonlinear mapping for data structure analysis. IEEE Trans. on Computers, 18:401-409, 1969.
[7] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[8] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
[9] G. Wahba. Spline Models for Observational Data. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1990. CBMS-NSF Regional Conference Series in Applied Mathematics.
[10] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342-1351, 1998.
[11] A. M. Yaglom. Correlation Theory of Stationary and Related Random Functions, Volume I: Basic Results. Springer Verlag, 1987.
955
1,874
Machine Learning for Video-Based Rendering Arno Schödl [email protected] Irfan Essa [email protected] Georgia Institute of Technology GVU Center / College of Computing Atlanta, GA 30332-0280, USA.

Abstract We present techniques for rendering and animation of realistic scenes by analyzing and training on short video sequences. This work extends the new paradigm for computer animation, video textures, which uses recorded video to generate novel animations by replaying the video samples in a new order. Here we concentrate on video sprites, which are a special type of video texture. In video sprites, instead of storing whole images, the object of interest is separated from the background and the video samples are stored as a sequence of alpha-matted sprites with associated velocity information. They can be rendered anywhere on the screen to create a novel animation of the object. We present methods to create such animations by finding a sequence of sprite samples that is both visually smooth and follows a desired path. To estimate visual smoothness, we train a linear classifier to estimate visual similarity between video samples. If the motion path is known in advance, we use beam search to find a good sample sequence. We can specify the motion interactively by precomputing the sequence cost function using Q-learning.

1 Introduction Computer animation of realistic characters requires an explicitly defined model with control parameters. The animator defines keyframes for these parameters, which are interpolated to generate the animation. Both the model generation and the motion parameter adjustment are often manual, costly tasks. Recently, researchers in computer graphics and computer vision have proposed efficient methods to generate novel views by analyzing captured images. These techniques, called image-based rendering, require minimal user interaction and allow photorealistic synthesis of still scenes [3]. In [7] we introduced a new paradigm for image synthesis, which we call video textures. In that paper, we extended the paradigm of image-based rendering into video-based rendering, generating novel animations from video.

Figure 1: An animation is created from reordered video sprite samples. Transitions between samples that are played out of the original order must be visually smooth.

A video texture turns a finite duration video into a continuous infinitely varying stream of images. We treat the video sequence as a collection of image samples, from which we automatically select suitable sequences to form the new animation. Instead of using the image as a whole, we can also record an object against a bluescreen and separate it from the background using background subtraction. We store the created opacity image (alpha channel) and the motion of the object for every sample. We can then render the object at arbitrary image locations to generate animations, as shown in Figure 1. We call this special type of video texture a video sprite. A complete description of the video textures paradigm and techniques to generate video textures is presented in [7]. In this paper, we address the controlled animation of video sprites. To generate video textures or video sprites, we have to optimize the sequence of samples so that the resulting animation looks continuous and smooth, even if the samples are not played in their original order. This optimization requires a visual similarity metric between sprite images, which has to be as close as possible to the human perception of similarity. The simple L2 image distance used in [7] gives poor results for our example video sprite, a fish swimming in a tank. In Section 2 we describe how to improve the similarity metric by training a classifier on manually labeled data [1]. Video sprites usually require some form of motion control. We present two techniques to control the sprite motion while preserving the visual smoothness of the sequence. In Section 3 we compute a good sequence of samples for a motion path scripted in advance. Since the number of possible sequences is too large to explore exhaustively, we use beam search to make the optimization manageable. For applications like computer games, we would like to control the motion of the sprite interactively. We achieve this goal using a technique similar to Q-learning, as described in Section 4.

1.1 Previous work Before the advent of 3D graphics, the idea of creating animations by sequencing 2D sprites showing different poses and actions was widely used in computer games. Almost all characters in fighting and jump-and-run games are animated in this fashion. Game artists had to generate all these animations manually.
The simple L2 image distance used in [7] gives poor results for our example video sprite, a fish swimming in a tank. In Section 2 we describe how to improve the similarity metric by training a classifier on manually labeled data [1]. Video sprites usually require some form of motion control. We present two t echniques to control the sprite motion while preserving the visual smoothness of the sequence. In Section 3 we compute a good sequence of samples for a motion path scripted in advance. Since the number of possible sequences is too large to explore exhaustively, we use beam search to make the optimization manageable. For applications like computer games, we would like to control the motion of the sprite interactively. We achieve this goal using a t echnique similar to Q-learning, as described in Section 4. 1.1 Previous work Before the advent of 3D graphics, the idea of creating animations by sequencing 2D sprites showing different poses and actions was widely used in computer games. Almost all characters in fighting and jump-and-run games are animated in this fashion. Game artists had to generate all these animations manually. Figure 2: Relationship between image similarities and transitions. There is very little earlier work in research on automatically sequencing 2D views for animation. Video Rewrite [2] is the work most closely related to video textures. It creates lip motion for a new audio track from a training video of the subject speaking by replaying short subsequences of the training video fitting best to the sequence of phonemes. To our knowledge, nobody has automatically generated an object animation from video thus far. Of course, we are not the first applying learning techniques to animation. The NeuroAnimator [4], for example, uses a neural network to simulate a physics-based model. Neural networks have also been used to improve visual similarity classification [6]. 2 Training the similarity metric Video textures reorder the original video samples into a new sequence. If the sequence of samples is not the original order, we have to insure that transitions between samples that are out of order are visually smooth. More precisely, in a transition from sample i to j, we substitute the successor of sample i by sample j and the predecessor of sample j by sample i. So sample i should be similar to sample j - 1 and sample i + 1 should be similar to sample j (Figure 2). The distance function Dij between two samples i and j should be small if we can substitute one image for the other without a noticeable discontinuity or "jump". The simple L2 image distance used in [7] gives poor results for the fish sprite, because it fails to capture important information like the orientation of the fish. Instead of trying to code this information into our system, we train a linear classifier from manually labeled training data. The classifier is based on six features extracted from a sprite image pair: ? difference in velocity magnitude, ? difference in velocity direction, measured in angle, ? sum of color L2 differences, weighted by the minimum of the two pixel alpha values, ? sum of absolute differences in the alpha channel, ? difference in average color, ? difference in blob area, computed as the sum of all alpha values. The manual labels for a sprite pair are binary: visually acceptable or unacceptable. To create the labels, we guess a rough estimator and then manually correct the classification of this estimator. 
Since it is more important to avoid visual glitches than to exploit every possible transition, we penalize false positives 10 times higher than false negatives in our training.

[The diagram itself is not recoverable from the extraction; its labels included the segment boundary and the current line segment l_k.] Figure 3: The components of the path cost function.

All sprite pairs that the classifier rejected are no longer considered for transitions. If the pair of samples i and j is kept, we use the value of the linear classifying function as a measure of visual difference D_ij. The pairs i, j with i = j are treated just as any other pair, but of course they have minimal visual difference. The cost for a transition T_ij from sample i to sample j is then T_ij = ½ D_{i,j−1} + ½ D_{i+1,j}.

3 Motion path scripting A common approach in animation is to specify all constraints before rendering the animation [8]. In this section we describe how to generate a good sequence of sprites from a specified motion path, given as a series of line segments. We specify a cost function for a given path, and starting at the beginning of the first segment, we explore the tree of possible transitions and find the path of least cost.

3.1 Sequence cost function The total cost function is a sum of per-frame costs. For every new sequence frame, in addition to the transition cost, as discussed in the previous section, we penalize any deviation from the defined path and movement direction. We only constrain the motion path, not the velocity magnitude or the motion timing, because the fewer constraints we impose, the better the chance of finding a smooth sequence using the limited number of available video samples. The path is composed of line segments and we keep track of the line segment that the sprite is currently expected to follow. We compute the error function only with respect to this line segment. As soon as the orthogonal projection of the sprite position onto the segment passes the end of the current segment, we switch to the next segment. This avoids the ambiguity of which line segment to follow when paths are self-intersecting. We define an animation sequence (i₁, p₁, l₁), (i₂, p₂, l₂), ..., (i_N, p_N, l_N), where i_k, 1 ≤ k ≤ N, is the sample shown in frame k, p_k is the position at which it is shown, and l_k is the line segment that it has to follow. Let d(p_k, l_k) be the distance from point p_k to line l_k, v(i_k) the estimated velocity of the sprite at sample i_k, and ∠(v(i_k), l_k) the angle between the velocity vector and the line segment. The cost function C_k for frame k of this sequence is then C_k = T_{i_{k−1}, i_k} + w₁ d(p_k, l_k) + w₂ ∠(v(i_k), l_k), (1) where w₁ and w₂ are user-defined weights that trade off visual smoothness against the motion constraints.

3.2 Sequence tree search We seed our search with all possible starting samples and set the sprite position to the starting position of the first line segment. For every sequence, we store the total cost up to the current end of the path, the current position of the sprite, the current sample and the current line segment. Since from any given video sample there can be many possible transitions and it is impossible to explore the whole tree, we employ beam search to prune the set of sequences after advancing the tree depth by one transition. At every depth we keep the 50000 sequences with least accumulated cost. When the sprite reaches the end of the last segment, the sequence with lowest total cost is chosen. Section 5 describes the running time of the algorithm.
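A sketch of the sequence tree search under simplifying assumptions (ours): transition lists and per-sample velocities are precomputed, frame_cost stands in for the path-deviation and direction terms of equation (1), and the per-sequence line-segment bookkeeping is omitted for brevity.

```python
import heapq

def beam_search(start_samples, start_pos, successors, velocity,
                frame_cost, n_frames, beam=50000):
    """successors[i]: list of (j, T_ij) for transitions kept by the
    classifier; velocity[j]: 2-D motion of sample j; frame_cost(j, p):
    path terms of equation (1) evaluated at position p."""
    # each beam entry: (accumulated cost, current sample, position)
    beams = [(0.0, i, start_pos) for i in start_samples]
    for _ in range(n_frames):
        expanded = []
        for cost, i, p in beams:
            for j, t_ij in successors[i]:
                q = (p[0] + velocity[j][0], p[1] + velocity[j][1])
                expanded.append((cost + t_ij + frame_cost(j, q), j, q))
        # prune: keep only the best sequences at every depth
        beams = heapq.nsmallest(beam, expanded, key=lambda e: e[0])
    return min(beams, key=lambda e: e[0])
```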
4 Interactive motion control

For interactive applications like computer games, video sprites allow us to generate high-quality graphics without the computational burden of high-end modeling and rendering. In this section we show how to control video sprite motion interactively, without time-consuming optimization over a planned path. The following observation allows us to compute the path tree in a much more efficient manner: if $w_2$ in equation (1) is set to zero, the sprite does not adhere to a particular path but still moves in the desired general direction. If we further assume that the line segment is infinitely long, or in other words indicates only a general motion direction $l$, equation (1) is independent of the position $p_k$ of the sprite and depends only on the sample that is currently shown. We now have to find the lowest-cost path through this set of states, a problem which is solved using Q-learning [5]: the cost $F_{ij}$ of a path starting at sample i and transitioning to sample j is

$$F_{ij} = T_{ij} + w_1\,\angle(v(j), l) + \alpha \min_k F_{jk}. \qquad (2)$$

In other words, the least possible cost, starting from sample i and going to sample j, is the cost of the transition from i to j plus the least possible cost over all paths starting from j. Since this recursion is infinite, we have to introduce a decay term $0 \le \alpha \le 1$ to assure convergence. To solve equation (2), we initialize with $F_{ij} = T_{ij}$ for all i and j and then iterate over the equation until convergence.

4.1 Interactive switching between cost functions

We described above how to compute a good path for a given motion direction $l$. To interactively control the sprite, we precompute $F_{ij}$ for multiple motion directions, for example for the eight compass directions. The user can then interactively specify the motion direction by choosing one of the precomputed cost functions. Unfortunately, each cost function is precomputed to be optimal only for its own motion direction and does not take into account any switching between cost functions, which can cause discontinuous motion when the user changes direction. Note that switching to a motion path without any motion constraint (equation (2) with $w_1 = 0$) never causes any additional discontinuities, because the smoothness constraint is the only one left. Thus, we solve our problem by precomputing a cost function that leaves the motion unconstrained for a couple of transitions and only then starts to constrain the motion with the new motion direction. The response delay allows us to gracefully adjust to the new cost function. For every precomputed
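The iteration used to solve equation (2) can be sketched as simple value iteration. This is our own minimal implementation, assuming a transition-cost matrix T and a per-sample direction penalty; it is not code from the paper.

```python
import numpy as np

def precompute_costs(T, dir_penalty, alpha=0.9, tol=1e-6):
    """Value iteration for equation (2).
    T           -- (n, n) transition-cost matrix T_ij
    dir_penalty -- length-n array, w1 * angle(v(j), l) for direction l
    Returns F with F[i, j] = T_ij + w1*angle(j) + alpha * min_k F[j, k]."""
    base = T + dir_penalty[None, :]   # cost incurred when arriving at sample j
    F = base.copy()                   # initialization F_ij = T_ij (+ penalty)
    while True:
        F_new = base + alpha * F.min(axis=1)[None, :]
        if np.max(np.abs(F_new - F)) < tol:
            return F_new
        F = F_new
```

At run time, the controller can then greedily choose the next sample j that minimizes F[i, j] in the table precomputed for the currently selected direction.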
An Adaptive Metric Machine for Pattern Classification

Carlotta Domeniconi, Jing Peng+, Dimitrios Gunopulos
Dept. of Computer Science, University of California, Riverside, CA 92521
+ Dept. of Computer Science, Oklahoma State University, Stillwater, OK 74078
{carlotta, dg}@cs.ucr.edu, [email protected]

Abstract

Nearest neighbor classification assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with finite samples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose a locally adaptive nearest neighbor classification method to try to minimize bias. We use a Chi-squared distance analysis to compute a flexible metric for producing neighborhoods that are elongated along less relevant feature dimensions and constricted along most influential ones. As a result, the class conditional probabilities tend to be smoother in the modified neighborhoods, whereby better classification performance can be achieved. The efficacy of our method is validated and compared against other techniques using a variety of real-world data.

1 Introduction

In classification, a feature vector $\mathbf{x} = (x_1, \ldots, x_q)^T \in \mathbb{R}^q$, representing an object, is assumed to be in one of $J$ classes, and the objective is to build classifier machines that assign $\mathbf{x}$ to the correct class from a given set of $N$ training samples. The K nearest neighbor (NN) classification method [3, 5, 7, 8, 9] is a simple and appealing approach to this problem. Such a method produces continuous and overlapping, rather than fixed, neighborhoods and uses a different neighborhood for each individual query, so that all points in the neighborhood are close to the query, to the extent possible. In addition, it has been shown [4, 6] that the one-NN rule has an asymptotic error rate that is at most twice the Bayes error rate, independent of the distance metric used. The NN rule becomes less appealing with finite training samples, however. This is due to the curse of dimensionality [2]. Severe bias can be introduced in the NN rule in a high-dimensional input feature space with finite samples. As such, the choice of a distance measure becomes crucial in determining the outcome of nearest neighbor classification. The commonly used Euclidean distance measure, while computationally simple, implies that the input space is isotropic or homogeneous. However, the assumption of isotropy is often invalid and generally undesirable in many practical applications. In general, distance computation does not vary with equal strength or in the same proportion in all directions in the feature space emanating from the input query. Capturing such information, therefore, is of great importance to any classification procedure in high-dimensional settings. In this paper we propose an adaptive metric classification method to try to minimize bias in high dimensions. We estimate a flexible metric for computing neighborhoods based on Chi-squared distance analysis. The resulting neighborhoods are highly adaptive to query locations. Moreover, the neighborhoods are elongated along less relevant feature dimensions and constricted along most influential ones. As a result, the class conditional probabilities tend to be constant in the modified neighborhoods, whereby better classification performance can be obtained.

2 Local Feature Relevance Measure

Our technique is motivated as follows. Let $\mathbf{x}_0$ be the test point whose class membership we are predicting.
In the one-NN classification rule, a single nearest neighbor $\mathbf{x}$ is found according to a distance metric $D(\mathbf{x}, \mathbf{x}_0)$. Let $\Pr(j|\mathbf{x})$ be the class conditional probability at point $\mathbf{x}$. Consider the weighted Chi-squared distance [8, 11]

$$D(\mathbf{x}, \mathbf{x}_0) = \sum_{j=1}^{J} \frac{\left[\Pr(j|\mathbf{x}) - \Pr(j|\mathbf{x}_0)\right]^2}{\Pr(j|\mathbf{x}_0)}, \qquad (1)$$

which measures the distance between $\mathbf{x}_0$ and the point $\mathbf{x}$ in terms of the difference between the class posterior probabilities at the two points. Small $D(\mathbf{x}, \mathbf{x}_0)$ indicates that the classification error rate will be close to the asymptotic error rate for one nearest neighbor. In general, this can be achieved when $\Pr(j|\mathbf{x}) = \Pr(j|\mathbf{x}_0)$, which states that if $\Pr(j|\mathbf{x})$ can be sufficiently well approximated at $\mathbf{x}_0$, the asymptotic 1-NN error rate might result in finite sample settings. Equation (1) computes the distance between the true and estimated posteriors. Now, imagine we replace $\Pr(j|\mathbf{x}_0)$ with a quantity that attempts to predict $\Pr(j|\mathbf{x})$ under the constraint that the quantity is conditioned at a location along a particular feature dimension. Then the Chi-squared distance (1) tells us the extent to which that dimension can be relied on to predict $\Pr(j|\mathbf{x})$. Thus, Equation (1) provides us with a foundation upon which to develop a theory of feature relevance in the context of pattern classification. Based on the above discussion, our proposal is the following. We first notice that $\Pr(j|\mathbf{x})$ is a function of $\mathbf{x}$. Therefore, we can compute the conditional expectation of $\Pr(j|\mathbf{x})$, denoted by $\Pr(j|x_i = z)$, given that $x_i$ assumes value $z$, where $x_i$ represents the $i$th component of $\mathbf{x}$. That is,

$$\Pr(j|x_i = z) = E[\Pr(j|\mathbf{x}) \mid x_i = z] = \int \Pr(j|\mathbf{x})\, p(\mathbf{x}|x_i = z)\, d\mathbf{x}.$$

Here $p(\mathbf{x}|x_i = z)$ is the conditional density of the other input variables. Let

$$r_i(\mathbf{x}) = \sum_{j=1}^{J} \frac{\left[\Pr(j|\mathbf{x}) - \Pr(j|x_i = z_i)\right]^2}{\Pr(j|x_i = z_i)}. \qquad (2)$$

$r_i(\mathbf{x})$ represents the ability of feature $i$ to predict the $\Pr(j|\mathbf{x})$'s at $x_i = z_i$. The closer $\Pr(j|x_i = z_i)$ is to $\Pr(j|\mathbf{x})$, the more information feature $i$ carries for predicting the class posterior probabilities locally at $\mathbf{x}$. We can now define a measure of feature relevance for $\mathbf{x}_0$ as

$$\bar{r}_i(\mathbf{x}_0) = \frac{1}{K} \sum_{\mathbf{z} \in N(\mathbf{x}_0)} r_i(\mathbf{z}), \qquad (3)$$

where $N(\mathbf{x}_0)$ denotes the neighborhood of $\mathbf{x}_0$ containing the $K$ nearest training points according to a given metric. $\bar{r}_i$ measures how well on average the class posterior probabilities can be approximated along input feature $i$ within a local neighborhood of $\mathbf{x}_0$. Small $\bar{r}_i$ implies that the class posterior probabilities will be well captured along dimension $i$ in the vicinity of $\mathbf{x}_0$. Note that $\bar{r}_i(\mathbf{x}_0)$ is a function of both the test point $\mathbf{x}_0$ and the dimension $i$, thereby making $\bar{r}_i(\mathbf{x}_0)$ a local relevance measure. The relative relevance, as a weighting scheme, can then be given by the following exponential weighting scheme:

$$w_i(\mathbf{x}_0) = \exp(c\, R_i(\mathbf{x}_0)) \Big/ \sum_{l=1}^{q} \exp(c\, R_l(\mathbf{x}_0)), \qquad (4)$$

where $c$ is a parameter that can be chosen to maximize (minimize) the influence of $\bar{r}_i$ on $w_i$, and $R_i(\mathbf{x}) = \max_j \bar{r}_j(\mathbf{x}) - \bar{r}_i(\mathbf{x})$. When $c = 0$ we have $w_i = 1/q$, thereby ignoring any difference between the $\bar{r}_i$'s. On the other hand, when $c$ is large a change in $\bar{r}_i$ will be exponentially reflected in $w_i$. In this case, $w_i$ is said to follow the Boltzmann distribution. The exponential weighting is more sensitive to changes in local feature relevance (3) and gives rise to better performance improvement. Thus, (4) can be used as weights associated with features for weighted distance computation:

$$D(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{q} w_i (x_i - y_i)^2}.$$

These weights enable the neighborhood to elongate along less important feature dimensions and, at the same time, to constrict along the most influential ones. Note that the technique is query-based, because the weightings depend on the query [1].
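A minimal sketch of equations (2)-(4) in Python/NumPy, assuming the posterior estimates of Section 3 are supplied as callables post_full and post_cond; these names and the neighborhood bookkeeping are our own illustration, not the authors' code.

```python
import numpy as np

def relevance_weights(x0, X, post_full, post_cond, K=20, c=5.0):
    """Local feature weights w_i(x0) from equations (2)-(4).
    X                -- (N, q) training points
    post_full(z)     -- array of Pr(j|z) over the J classes (equation (5))
    post_cond(i, z)  -- array of Pr(j | x_i = z_i) estimates (equation (6))"""
    # K nearest training points around the query (Euclidean initial metric)
    nn = np.argsort(((X - x0) ** 2).sum(axis=1))[:K]
    q = X.shape[1]
    r_bar = np.zeros(q)
    for z in X[nn]:
        p = post_full(z)
        for i in range(q):
            p_i = post_cond(i, z)
            r_bar[i] += np.sum((p - p_i) ** 2 / np.maximum(p_i, 1e-12))
    r_bar /= K                               # equation (3)
    R = r_bar.max() - r_bar                  # relative relevance R_i
    w = np.exp(c * R)
    return w / w.sum()                       # equation (4)

def weighted_distance(x, y, w):
    return np.sqrt(np.sum(w * (x - y) ** 2))
```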
3 Estimation

Since both $\Pr(j|\mathbf{x})$ and $\Pr(j|x_i = z_i)$ in (3) are unknown, we must estimate them using the training data $\{\mathbf{x}_n, y_n\}_{n=1}^{N}$ in order for the relevance measure (3) to be useful in practice. Here $y_n \in \{1, \ldots, J\}$. The quantity $\Pr(j|\mathbf{x})$ is estimated by considering a neighborhood $N_1(\mathbf{x})$ centered at $\mathbf{x}$:

$$\hat{\Pr}(j|\mathbf{x}) = \frac{\sum_{n=1}^{N} 1(\mathbf{x}_n \in N_1(\mathbf{x}))\, 1(y_n = j)}{\sum_{n=1}^{N} 1(\mathbf{x}_n \in N_1(\mathbf{x}))}, \qquad (5)$$

where $1(\cdot)$ is an indicator function that returns 1 when its argument is true, and 0 otherwise. To compute $\Pr(j|x_i = z) = E[\Pr(j|\mathbf{x}) \mid x_i = z]$, we introduce a dummy variable $g_j$ such that $g_j = 1$ if $y = j$ and $g_j = 0$ otherwise, for $j = 1, \ldots, J$. We then have $\Pr(j|\mathbf{x}) = E[g_j|\mathbf{x}]$, from which it is not hard to show that $\Pr(j|x_i = z) = E[g_j|x_i = z]$. However, since there may not be any data at $x_i = z$, the data from the neighborhood of $\mathbf{x}$ along dimension $i$ are used to estimate $E[g_j|x_i = z]$, a strategy suggested in [7]. In detail, noticing that $g_j = 1(y = j)$, the estimate can be computed from

$$\hat{\Pr}(j|x_i = z_i) = \frac{\sum_{\mathbf{x}_n \in N_2(\mathbf{x})} 1(|x_{ni} - x_i| \le b_i)\, 1(y_n = j)}{\sum_{\mathbf{x}_n \in N_2(\mathbf{x})} 1(|x_{ni} - x_i| \le b_i)}, \qquad (6)$$

where $N_2(\mathbf{x})$ is a neighborhood centered at $\mathbf{x}$ (larger than $N_1(\mathbf{x})$), and the value of $b_i$ is chosen so that the interval contains a fixed number $L$ of points: $\sum_{n=1}^{N} 1(|x_{ni} - x_i| \le b_i)\, 1(\mathbf{x}_n \in N_2(\mathbf{x})) = L$. Using the estimates in (5) and (6), we obtain an empirical measure of the relevance (3) for each input variable $i$.

4 Empirical Results

In the following we compare several classification methods using real data: (1) the adaptive metric nearest neighbor (ADAMENN) method (one iteration) described above, coupled with the exponential weighting scheme (4); (2) i-ADAMENN - ADAMENN with five iterations; (3) the simple K-NN method using the Euclidean distance measure; (4) the C4.5 decision tree method [12]; (5) Machete [7] - an adaptive NN procedure in which the input variable used for splitting at each step is the one that maximizes the estimated local relevance (7); (6) Scythe [7] - a generalization of the Machete algorithm in which the input variables influence each split in proportion to their estimated local relevance, rather than via the winner-take-all strategy of Machete; (7) DANN - discriminant adaptive nearest neighbor classification [8]; and (8) i-DANN - DANN with five iterations [8]. In all the experiments, the features are first normalized over the training data to have zero mean and unit variance, and the test data are normalized using the corresponding training mean and variance. Procedural parameters for each method were determined empirically through cross-validation.

Table 1: Average classification error rates (%).

              Iris   Sonar  Vowel  Glass  Image  Seg   Letter  Liver  Lung
ADAMENN        9.1   24.8    5.2    2.4    5.1    3.0   10.7   30.7   40.6
i-ADAMENN     24.8    5.2    2.5    5.0    9.6   10.9    5.3   30.4   40.6
K-NN           6.0   12.5   11.8   28.0    6.1    3.6    6.9   32.5   50.0
C4.5           8.0   23.1   36.7   31.8   21.6    3.7   16.4   38.3   59.4
Machete        5.0   21.2   20.2   28.0   12.3    3.2    9.1   27.5   50.0
Scythe         4.0   16.3   15.5   27.1    5.0    3.3    7.2   27.5   50.0
DANN          12.5   27.1   12.9    2.5    3.1    6.0    7.7   30.1   46.9
i-DANN         6.0    9.1   21.8   26.6   18.1    3.7    6.1   27.8   40.6

Classification Data Sets. The data sets used were taken from the UCI Machine Learning Database Repository [10], except for the unreleased image data set. They are: 1. Iris data. This data set consists of q = 4 measurements made on each of N = 100 iris plants of J = 2 species; 2. Sonar data. This data set consists of q = 60 frequency measurements made on each of N = 208 data of J = 2 classes ("mines" and "rocks"); 3. Vowel data.
This example has q = 10 measurements and 11 classes. There are a total of N = 528 samples in this example; 4. Glass data. This data set consists of q = 9 chemical attributes measured for each of N = 214 data of J = 6 classes; 5. Image data. This data set consists of 40 texture images that are manually classified into 15 classes. The number of images in each class varies from 16 to 80. The images in this database are represented by q = 16 dimensional feature vectors; 6. Seg data. This data set consists of images that were drawn randomly from a database of 7 outdoor images. There are J = 7 classes, each of which has 330 instances. Thus, there are N = 2,310 images in the database. These images are represented by q = 19 real-valued attributes; 7. Letter data. This data set consists of q = 16 numerical attributes and J = 26 classes; 8. Liver data. This data set consists of 345 instances, represented by q = 6 numerical attributes, and J = 2 classes; and 9. Lung data. This example has 32 instances having q = 56 numerical features and J = 3 classes.

Results: Table 1 shows the (cross-validated) error rates for the eight methods under consideration on the nine real data sets.

Figure 1: Performance distributions.

Note that the average error rates for the Iris, Sonar, Glass, Liver and Lung data sets were based on leave-one-out cross-validation, whereas the error rates for the Vowel and Image data were based on ten two-fold cross-validations, and two ten-fold cross-validations for the Seg and Letter data, since larger data sets are available in these four cases. Table 1 shows clearly that ADAMENN achieved the best or near-best performance over the nine real data sets, followed by i-ADAMENN. It seems natural to ask the question of robustness, that is, how well a particular method m performs on average in situations that are most favorable to other procedures. Following Friedman [7], we capture robustness by computing the ratio $b_m$ of its error rate $e_m$ to the smallest error rate over all methods being compared in a particular example:

$$b_m = e_m \Big/ \min_{1 \le k \le 8} e_k.$$

Thus, the best method $m^*$ for that example has $b_{m^*} = 1$, and all other methods have larger values $b_m \ge 1$, for $m \ne m^*$. The larger the value of $b_m$, the worse the performance of the $m$th method in relation to the best one for that example, among the methods being compared. The distribution of the $b_m$ values for each method $m$ over all the examples therefore seems to be a good indicator of robustness. Fig. 1 plots the distribution of $b_m$ for each method over the nine data sets. The dark area represents the lower and upper quartiles of the distribution, which are separated by the median. The outer vertical lines show the entire range of values for the distribution. It is clear that the most robust method over the data sets is ADAMENN. In 5/9 of the data its error rate was the best (median = 1.0). In 8/9 of them it was no worse than 18% higher than the best error rate. In the worst case it was 65%. In contrast, C4.5 has the worst distribution, where the corresponding numbers are 267%, 432% and 529%.
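As a small illustration (our own, with hypothetical names), the robustness ratios can be computed directly from the matrix of error rates:

```python
import numpy as np

def robustness_ratios(error_rates):
    """error_rates: (n_methods, n_datasets) array of error rates e_m.
    Returns b with b[m, d] = e[m, d] / min_k e[k, d]."""
    e = np.asarray(error_rates, dtype=float)
    return e / e.min(axis=0, keepdims=True)
```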
Bias and Variance Calculations: For a two-class problem with $\Pr(y = 1|\mathbf{x}) = p(\mathbf{x})$, we compute a nearest neighborhood at a query $\mathbf{x}_0$ and find the nearest neighbor $X$ having class label $Y(X)$ (a random variable). The estimate of $p(\mathbf{x}_0)$ is $Y(X)$. The bias and variance of $Y(X)$ are

$$\mathrm{Bias} = E\,p(X) - p(\mathbf{x}_0) \quad \text{and} \quad \mathrm{Var} = E\,p(X)\,\big(1 - E\,p(X)\big),$$

where the expectation is computed over the distribution of the nearest neighbor $X$ [8]. We performed simulations to estimate the bias and variance of ADAMENN, K-NN, DANN and Machete on the following two-class problem. There are q = 2 input features and 180 training data. Each class contains three spherical bivariate normal subclasses, having standard deviation 0.75. The means of the 6 subclasses are chosen at random without replacement from the integers $[1, 2, \ldots, 8] \times [1, 2, \ldots, 8]$. For each class, data are evenly drawn from each of the normal subclasses. Fig. 2 shows the bias and variance estimates from each method at the locations $(5, 5, 0, \ldots, 0)$ and $(2.3, 7, 0, \ldots, 0)$, as a function of the number of noise variables, over five independently generated training sets. Here the noise variables have independent standard Gaussian distributions. The true probabilities of class 1 at $(5, 5, 0, \ldots, 0)$ and $(2.3, 7, 0, \ldots, 0)$ are 0.943 and 0.747, respectively. The four methods have similar variance, since they all use three neighbors for classification. While the bias of K-NN and DANN increases with an increasing number of noise variables, ADAMENN retains a low bias by averaging out noise.

5 Related Work

Friedman [7] describes an approach to learning local feature relevance that recursively homes in on a query along the most (locally) relevant dimension, where local relevance is computed from a reduction in prediction error given the query's value along that dimension. This method performs well on a number of classification tasks. In our notation, this local relevance can be described by

$$I_i(\mathbf{x}) = \sum_{j=1}^{J} \left[\Pr(j) - \Pr(j|x_i = z_i)\right]^2, \qquad (7)$$

where $\Pr(j)$ represents the expected value of $\Pr(j|\mathbf{x})$. In this case, the most informative dimension is the one that deviates the most from $\Pr(j)$.

Figure 2: Bias and variance estimates as a function of the number of noise variables (curves for ADAMENN, K-NN, DANN and Machete; panels (a)-(c) for test point (5,5), panels (d)-(f) for test point (2.3,7)).

The main difference, however, between our relevance measure (3) and Friedman's (7) is the first term in the squared difference. While the class conditional probability is used in our relevance measure, its expectation is used in Friedman's. As a result, a feature dimension is more relevant than others when it minimizes (2) in the case of our relevance measure, whereas it maximizes (7) in the case of Friedman's. Furthermore, we take into account not only the test point $\mathbf{x}_0$ itself, but also its K nearest neighbors, resulting in a relevance measure (3) that is often more robust. In [8], Hastie and Tibshirani propose an adaptive nearest neighbor classification method based on linear discriminant analysis. The method computes a distance metric as a product of properly weighted within- and between-class sum-of-squares matrices. They show that the resulting metric approximates the Chi-squared distance (1) by a Taylor series expansion. While sound in theory, the method has limitations.
The main concern is that in high dimensions we may never have sufficient data to fill in $q \times q$ matrices. It is interesting to note that our work can serve as a potential bridge between Friedman's and that of Hastie and Tibshirani.

6 Summary and Conclusions

This paper presents an adaptive metric method for effective pattern classification. The method estimates a flexible metric for producing neighborhoods that are elongated along less relevant feature dimensions and constricted along most influential ones. As a result, the class conditional probabilities tend to be more homogeneous in the modified neighborhoods. The experimental results show clearly that the ADAMENN algorithm can potentially improve the performance of K-NN and recursive partitioning methods in some classification problems, especially when the relative influence of input features changes with the location of the query to be classified in the input feature space. The results are also in favor of ADAMENN over similar competing methods such as Machete and DANN.

References

[1] Atkeson, C., Moore, A. W., and Schaal, S. (1997). "Locally Weighted Learning," AI Review, 11:11-73.
[2] Bellman, R. E. (1961). Adaptive Control Processes. Princeton Univ. Press.
[3] Cleveland, W. S. and Devlin, S. J. (1988). "Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting," J. Amer. Statist. Assoc., 83:596-610.
[4] Cover, T. M. and Hart, P. E. (1967). "Nearest Neighbor Pattern Classification," IEEE Trans. on Information Theory, pp. 21-27.
[5] Domeniconi, C., Peng, J., and Gunopulos, D. (2000). "Adaptive Metric Nearest Neighbor Classification," Proc. of IEEE Conf. on CVPR, pp. 517-522, Hilton Head Island, South Carolina.
[6] Duda, R. O. and Hart, P. E. (1973). Pattern Classification and Scene Analysis. John Wiley & Sons, Inc.
[7] Friedman, J. H. (1994). "Flexible Metric Nearest Neighbor Classification," Tech. Report, Dept. of Statistics, Stanford University.
[8] Hastie, T. and Tibshirani, R. (1996). "Discriminant Adaptive Nearest Neighbor Classification," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 18, No. 6, pp. 607-615.
[9] Lowe, D. G. (1995). "Similarity Metric Learning for a Variable-Kernel Classifier," Neural Computation, 7(1):72-85.
[10] Merz, C. and Murphy, P. (1996). UCI Repository of Machine Learning Databases. http://www.ics.uci.edu/mlearn/MLRepository.html
[11] Myles, J. P. and Hand, D. J. (1990). "The Multi-Class Metric Problem in Nearest Neighbor Discrimination Rules," Pattern Recognition, Vol. 23, pp. 1291-1297.
[12] Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, Inc.
Vicinal Risk Minimization

Olivier Chapelle, Jason Weston*, Leon Bottou and Vladimir Vapnik
AT&T Research Labs, 100 Schultz Drive, Red Bank, NJ, USA
* Barnhill BioInformatics.com, Savannah, GA, USA
{chapelle, weston, leonb, vlad}@research.att.com

Abstract

The Vicinal Risk Minimization principle establishes a bridge between generative models and methods derived from the Structural Risk Minimization principle, such as Support Vector Machines or Statistical Regularization. We explain how VRM provides a framework which integrates a number of existing algorithms, such as Parzen windows, Support Vector Machines, Ridge Regression, Constrained Logistic Classifiers and Tangent-Prop. We then show how the approach implies new algorithms for solving problems usually associated with generative models. New algorithms are described for dealing with pattern recognition problems with very different pattern distributions and for dealing with unlabeled data. Preliminary empirical results are presented.

1 Introduction

Structural Risk Minimization (SRM) in a learning system can be achieved using constraints on the parameter vectors, using regularization terms in the cost function, or using Support Vector Machines (SVM). All these principles have led to well-established learning algorithms. It is often said, however, that some problems are best addressed by generative models. The first problem is that of missing data. We may for instance have a few labeled patterns and a large number of unlabeled patterns. Intuition suggests that these unlabeled patterns carry useful information. The second problem is that of discriminating classes with very different pattern distributions. This situation arises naturally in anomaly detection systems. It also occurs often in recognition systems that reject invalid patterns by defining a garbage class for grouping all ambiguous or unrecognizable cases. Although there are successful non-generative approaches (Schuurmans and Southey, 2000; Drucker, Wu and Vapnik, 1999), the generative framework is undeniably appealing. Recent results (Jaakkola, Meila and Jebara, 2000) even define generative models that contain SVMs as special cases. This paper discusses the Vicinal Risk Minimization (VRM) principle, summarily introduced in (Vapnik, 1999). This principle was independently hinted at by Tong and Koller (Tong and Koller, 2000) with a useful generative interpretation. In particular, they proved that SVMs are a limiting case of their Restricted Bayesian Classifiers. We extend Tong's and Koller's result by showing that VRM subsumes several well-known techniques such as Ridge Regression (Hoerl and Kennard, 1970), Constrained Logistic Classifiers, and Tangent-Prop (Simard et al., 1992). We then go on to show how VRM naturally leads to simple algorithms that can deal with problems for which one would have formerly considered purely generative models. We provide algorithms and preliminary empirical results for dealing with unlabeled data and for recognizing classes with very different pattern distributions.

2 Vicinal Risk Minimization

The learning problem can be formulated as the search for the function $f \in \mathcal{F}$ that minimizes the expectation of a given loss $\ell(f(x), y)$:

$$R(f) = \int \ell(f(x), y)\, dP(x, y). \qquad (1)$$

In the classification framework, $y$ takes values $\pm 1$ and $\ell(f(x), y)$ is a step function such as $1 - \mathrm{sign}(y f(x))$, whereas in the regression framework, $y$ is a real number and commonly $\ell(f(x), y)$ is the mean squared error $(f(x) - y)^2$. The expectation (1) cannot be computed since the distribution $P(x, y)$ is unknown.
However, given a training set $\{(x_i, y_i)\}_{1 \le i \le n}$, it is common to minimize instead the empirical risk:

$$R_{emp}(f) = \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i).$$

Empirical Risk Minimization (ERM) is therefore equivalent to minimizing the expectation of the loss function with respect to an empirical distribution $P_{emp}(x, y)$ formed by assembling delta functions located on each example:

$$dP_{emp}(x, y) = \frac{1}{n} \sum_{i=1}^{n} \delta_{x_i}(x)\, \delta_{y_i}(y). \qquad (2)$$

It is quite natural to consider improved density estimates by replacing the delta functions $\delta_{x_i}(x)$ by some estimate of the density in the vicinity of the point $x_i$, $dP_{x_i}(x)$:

$$dP_{est}(x, y) = \frac{1}{n} \sum_{i=1}^{n} dP_{x_i}(x)\, \delta_{y_i}(y). \qquad (3)$$

We can define in this way the vicinal risk of a function as

$$R_{vic}(f) = \int \ell(f(x), y)\, dP_{est}(x, y) = \frac{1}{n} \sum_{i=1}^{n} \int \ell(f(x), y_i)\, dP_{x_i}(x). \qquad (4)$$

The Vicinal Risk Minimization principle consists of estimating $\arg\min_{f \in \mathcal{F}} R(f)$ by the function which minimizes the vicinal risk (4). In general, one can construct the VRM functional using any estimate $dP_{est}(x, y)$ of the density $dP(x, y)$, instead of restricting our choices to pointwise kernel estimates. Spherical Gaussian kernel functions $N_\sigma(x - x_i)$ are otherwise an obvious choice for the local density estimate $dP_{x_i}(x)$. The corresponding density estimate $dP_{est}$ is a Parzen windows estimate. The parameter $\sigma$ controls the scale of the density estimate. The extreme case $\sigma = 0$ leads to the estimation of the density by delta functions and therefore leads to ERM. This must be distinguished from the case $\sigma \to 0$, because the limit is taken after the minimization of the integral, leading to different results, as shown in the next section. The theoretical analysis of ERM (Vapnik, 1999) shows that the crucial factor is the capacity of the class $\mathcal{F}$ of functions. Large classes entail the risk of overfitting, whereas small classes entail the risk of underfitting. Two factors, however, are responsible for the generalization of VRM, namely the quality of the estimate $dP_{est}$ and the size of the class $\mathcal{F}$ of functions. If $dP_{est}$ is a poor approximation to $P$, then VRM can still perform well if $\mathcal{F}$ has suitably small capacity. ERM indeed uses a very naive estimate of $dP$ and yet can provide good results. On the other hand, if $\mathcal{F}$ is not chosen with suitably small capacity, then VRM can still perform well if the estimate $dP_{est}$ is a good approximation to $dP$. One can even take the set of all possible functions (whose capacity is obviously infinite) and still find a good solution if the estimate $dP_{est}$ is close enough to $dP$ in an adequate metric. For example, if $dP_{est}$ is a Parzen window density estimate, then the vicinal risk minimizer is the Parzen window classifier. This latter property contrasts nicely with the ERM principle, whose results strongly depend on the choice of the class of functions. Although we do not have a full theoretical understanding of VRM at this time, we expect considerable differences in the theoretical analysis of ERM and VRM.

3 Special Cases

We now discuss the relationship of VRM to existing methods. There are obvious links between VRM and Parzen windows or nearest neighbors when the set of functions $\mathcal{F}$ is unconstrained. Furthermore, many existing algorithms can be viewed as special cases of VRM for different choices of $\mathcal{F}$ and $dP_{est}$.
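As a minimal illustration of the functional (4), and not the authors' code, the vicinal risk with Gaussian vicinities can be approximated by Monte Carlo sampling; the callables f and loss, and the option of one width per example, are our own assumptions:

```python
import numpy as np

def vicinal_risk(f, loss, X, y, sigma, n_mc=100, rng=None):
    """Monte Carlo estimate of the vicinal risk (4) with spherical Gaussian
    vicinities N_sigma(x - x_i). `f` maps a batch of points to predictions
    and `loss` is an elementwise loss (e.g. squared error). `sigma` may be
    a scalar or one width per example (as in Section 4.1)."""
    rng = np.random.default_rng() if rng is None else rng
    n, q = X.shape
    sig = np.broadcast_to(np.asarray(sigma, dtype=float), (n,))
    total = 0.0
    for i in range(n):
        # draw virtual examples from the vicinity of x_i, all labeled y_i
        eps = rng.standard_normal((n_mc, q)) * sig[i]
        total += loss(f(X[i] + eps), y[i]).mean()
    return total / n

# Note: sigma = 0 recovers the empirical risk of ERM.
```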
a) VRM Regression and Ridge Regression - Consider the case of VRM for regression with spherical Parzen windows (Gaussian kernels) with standard deviation $\sigma$ and with a family $\mathcal{F}$ of linear functions $f_{w,b}(x) = w \cdot x + b$. We can write the vicinal risk as

$$R_{vic}(f) = \frac{1}{n} \sum_{i=1}^{n} \int \big(w \cdot (x_i + \epsilon) + b - y_i\big)^2\, dN_\sigma(\epsilon) = \frac{1}{n} \sum_{i=1}^{n} (w \cdot x_i + b - y_i)^2 + \sigma^2 \|w\|^2.$$

The resulting expression is the empirical risk augmented by a regularization term. This particular cost function is known as the Ridge Regression cost function (Hoerl and Kennard, 1970). This result can be extended to the case of nonlinear functions $f$ by performing a Taylor expansion of $f(x_i + \epsilon)$. The corresponding regularization term then combines successive derivatives of the function $f$. Useful mathematical arguments can be found in (Leen, 1995).

b) VRM and Invariant Learning - Generating synthetic examples is a simple way to incorporate selected invariances in a learning system. For instance, we can augment an optical character recognition database by applying translations or rotations to the initial examples. In the limit, this is equivalent to replacing each initial example by a distribution whose shape represents the desired invariances. This formulation naturally leads to a special case of VRM in which the local density estimates $dP_{x_i}(x)$ are elongated in the directions of invariance. Tangent-Prop (Simard et al., 1992) is a more sophisticated way to incorporate invariances, by adding an adequate regularization term to the cost function. Tangent-Prop has been formally proved to be equivalent to generating synthetic examples with infinitesimal deformations (Leen, 1995). This analysis makes Tangent-Prop a special case of VRM. The local density estimate $dP_{x_i}$ is simply formed by Gaussian kernels with a covariance matrix whose eigenvectors describe the tangent directions to the invariant manifold. The eigenvalues then represent the respective strengths of the selected invariances. The tangent covariance matrix used in the SVM context by (Scholkopf et al., 1998) specifies invariances globally. It can also be seen as a special case of VRM.

c) VRM Classifier and Constrained Logistic Classifier - Consider the case of VRM for classification with spherical Parzen windows with standard deviation $\sigma$ and with a family $\mathcal{F}$ of linear functions $f_{w,b}(x) = w \cdot x + b$. We can assume without loss of generality that $\|w\| = 1$. We can write the vicinal risk as

$$R_{vic}(w, b) = \frac{1}{n} \sum_{i=1}^{n} \int -y_i\, \mathrm{sign}(b + w \cdot x)\, dP_{x_i}(x) = \frac{1}{n} \sum_{i=1}^{n} \int -y_i\, \mathrm{sign}(b + w \cdot x_i + w \cdot \epsilon)\, dN_\sigma(\epsilon).$$

We can decompose $\epsilon = \epsilon_w w + \epsilon_\perp$, where $\epsilon_w w$ represents the component parallel to $w$ and $\epsilon_\perp$ represents the orthogonal component. Since $\|w\| = 1$, we have $w \cdot \epsilon = \epsilon_w$. After integrating over $\epsilon_\perp$ we are left with the following expression:

$$R_{vic}(w, b) = \frac{1}{n} \sum_{i=1}^{n} \int -y_i\, \mathrm{sign}(w \cdot x_i + b + \epsilon_w)\, dN_\sigma(\epsilon_w).$$

The latter integral can be seen as the convolution of the Gaussian $N_\sigma$ with the step function $\mathrm{sign}(x)$, which is a sigmoid-shaped function with asymptotes at $\pm 1$. Using the notation $\varphi(x) = 2\,\mathrm{erf}(x) - 1$, we can write

$$R_{vic}(w, b) = \frac{1}{n} \sum_{i=1}^{n} -y_i\, \varphi\!\left(\frac{w \cdot x_i + b}{\sigma}\right).$$

By rescaling $w$ and $b$ by a factor $1/\sigma$, we can write the following equivalent formulation of the VRM:

$$\left\{ \begin{array}{l} \arg\min\ -\frac{1}{n} \sum_{i=1}^{n} y_i\, \varphi(w \cdot x_i + b) \\ \text{with constraint } \|w\| = 1/\sigma. \end{array} \right. \qquad (5)$$

Except for the minor difference in the shape of the sigmoid functions, the above formulation describes a Logistic Classifier with a constraint on the weights. This formulation is also very close to using a single artificial neuron with a sigmoid transfer function and weight decay. The above proof illustrates a general identity: transforming the empirical probability estimate (2) by convolving it with a kernel function is equivalent to transforming the loss function $\ell(f(x), y)$ by convolving it with the same kernel function. This is summarized in the following equality, where $*$ represents the convolution operator:

$$\int \ell(f(x), y)\, \big[N_\sigma * dP_{emp}(\cdot, y)\big](x)\, dx = \int \big[\ell(f(\cdot), y) * N_\sigma\big](x)\, dP_{emp}(x, y).$$
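A sketch of the constrained formulation (5), assuming a simple projected-gradient loop and a tanh sigmoid standing in for the erf-based one; this construction is ours, the paper gives no implementation:

```python
import numpy as np

def train_constrained_logistic(X, y, sigma, lr=0.1, n_iter=1000, rng=None):
    """Minimize -(1/n) sum_i y_i * phi(w.x_i + b) subject to ||w|| = 1/sigma,
    with phi(z) = tanh(z) used as a stand-in sigmoid."""
    rng = np.random.default_rng() if rng is None else rng
    n, q = X.shape
    w = rng.standard_normal(q) * 1e-3   # small initial weights
    b = 0.0
    for _ in range(n_iter):
        z = X @ w + b
        g = -y * (1.0 - np.tanh(z) ** 2) / n      # dL/dz, since tanh'(z) = 1 - tanh(z)^2
        w -= lr * (X.T @ g)
        b -= lr * g.sum()
        w *= (1.0 / sigma) / (np.linalg.norm(w) + 1e-12)   # project onto ||w|| = 1/sigma
    return w, b
```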
d) VRM Classifier and SVM (Tong and Koller, 2000) - Consider again the case of VRM for classification with spherical Parzen windows with standard deviation $\sigma$ and with a family $\mathcal{F}$ of linear functions $f_{w,b}(x) = w \cdot x + b$. The resulting algorithm is in fact a Restricted Bayesian Classifier (Tong and Koller, 2000). Assuming that the examples are separable, Tong and Koller have shown that the resulting decision boundary tends towards the hard-margin SVM decision boundary when $\sigma$ tends towards zero. The proof is based on the following observation: when $\sigma \to 0$, the vicinal risk (4) is dominated by the terms corresponding to the examples whose distance to the decision boundary is minimal. These examples are in fact the support vectors. On the other hand, choosing $\sigma > 0$ generates a decision boundary which depends on all the examples. The contribution of each example decreases exponentially as its distance to the decision boundary increases. This is only slightly different from a soft-margin SVM, whose boundary relies on support vectors that can be more distant than those selected by a hard-margin SVM. The difference here is just in the cost functions (sigmoid compared to linear loss).

e) SVM and Constrained Logistic Classifiers - The two previous paragraphs show that the same particular case of VRM is (a) equivalent to a Logistic Classifier with a constraint on the weights, and (b) tends towards the SVM classifier when $\sigma \to 0$ and the examples are separable. As a consequence, we can state that the Logistic Classifier decision boundary tends towards the SVM decision boundary when we relax the constraint on the weights. In practice we can find the SVM solution with a Logistic Classifier by simply using an iterative weight update algorithm such as gradient descent, choosing small initial weights, and letting the norm of the weights grow slowly while the iterative algorithm is running. Although this algorithm is not exact, it is fast and efficient. This is in fact similar to what is usually done with back-propagation neural networks (LeCun et al., 1998). The same algorithm can be used for VRM. In that context, early stopping is similar to choosing the optimal $\sigma$ using cross-validation.

4 New Algorithms and Results

4.1 Adaptive Kernel Widths

It is known in density estimation theory that the quality of the density estimate can be improved using variable kernel widths (Breiman, Meisel and Purcell, 1977). In regions of the space where there is little data, it is safer to have a smooth estimate of the density, whereas in regions of the space where there is more data one wants to be as accurate as possible via sharper kernel estimates. The VRM principle can take advantage of these improved density estimates for other problem domains. We consider here the following density estimate:

$$dP_{est}(x, y) = \frac{1}{n} \sum_{i=1}^{n} \delta_{y_i}(y)\, N_{\sigma_i}(x - x_i)\, dx,$$

where the specific kernel width $\sigma_i$ for each training example $x_i$ is computed from the training set.

a) Wisconsin Breast Cancer - We made a first test of the method on the Wisconsin breast cancer dataset(1), which contains 589 examples in 30 dimensions. We compared VRM using the set of linear classifiers with various underlying density estimates. The minimization was achieved using gradient descent on the vicinal risk. All hyperparameters were determined using cross-validation. The following table reports results averaged over 100 runs.

(1) http://horn.first.gmd.de/~raetsch/data/breast-cancer
Training Set   HardSVM   SoftSVM (best C)   VRM (best fixed sigma)   VRM (adaptive sigma_i)
10             11.3%     11.1%              10.8%                     9.6%
20              8.3%      7.5%               6.9%                     6.6%
40              6.3%      5.5%               5.2%                     4.8%
80              5.4%      4.0%               3.9%                     3.7%

The adaptive kernel widths $\sigma_i$ were computed by multiplying a global factor by the average distance of the five closest training examples. The best global factor is determined by cross-validation. These results suggest that VRM with adaptive kernel widths can outperform state-of-the-art classifiers on small training sets.

b) MNIST "1" versus other digits - A second test was performed using the MNIST handwritten digits(2). We considered the sub-problem of recognizing the ones versus all other digits. The testing set contains 10000 digits (5000 ones and 5000 non-ones). Two training set sizes were considered, with 250 or 500 ones and an equal number of non-ones. Computations were carried out using the algorithm suggested in Section 3(e). We simply trained a single linear unit with a sigmoid transfer function using stochastic gradient updates. This is appropriate for implementing an approximate VRM with a single kernel width. Adaptive kernel widths are implemented by simply changing the slope of the sigmoid for each example. For each example $x_i$, the kernel width $\sigma_i$ is computed from the training set using the 5/1000th quantile of the distances of all other examples to example $x_i$. The sigmoid slopes are then computed by renormalizing the $\sigma_i$ in order to make their mean equal to 1. Early stopping was achieved with cross-validation.

Training Set   HardSVM   VRM (fixed slope)   VRM (adaptive slope)
250+250        3.34%     2.79%               2.54%
500+500        3.11%     2.47%               2.27%
1000+1000      2.94%     2.08%               1.96%

The statistical significance of these results can be asserted with very high probability by comparing the lists of errors made by each system (Bottou and Vapnik, 1992). Again these results suggest that VRM with adaptive kernel widths can be very useful with small training sets.

(2) http://www.research.att.com/~yann/ocr/index.html

4.2 Unlabeled Data

In some applications unlabeled data is abundant whereas labeled data is not. The use of unlabeled data falls into the framework of VRM by simply applying the same vicinal loss to the unlabeled points. Given $m$ unlabeled points $x_1^*, \ldots, x_m^*$, one obtains the following formulation:

$$R_{vic}(f) = \frac{1}{n} \sum_{i=1}^{n} \int \ell(f(x), y_i)\, dP_{x_i}(x) + \frac{1}{m} \sum_{i=1}^{m} \int \ell\big(f(x), f(x_i^*)\big)\, dP_{x_i^*}(x).$$

To give an example of the usefulness of our approach, consider the following problem. Two normal distributions on the real line, $N(-1.6, 1)$ and $N(1.6, 1)$, model the patterns of two classes with equal probability; 20 labeled points and 100 unlabeled points are drawn. The following table compares the true generalization error of VRM with Gaussian kernels and linear functions. Results are averaged over 100 runs. Two different kernel widths $\sigma_L$ and $\sigma_U$ were used for the kernels associated with labeled and unlabeled examples. The best kernel widths were obtained by cross-validation. We also studied the case $\sigma_L \to 0$ in order to provide a result equivalent to a plain SVM.

                      sigma_L -> 0, best sigma_U   best sigma_L, best sigma_U
Labeled               6.5%                         5.0%
Labeled + Unlabeled   5.6%                         4.3%

Note that when both $\sigma_L$ and $\sigma_U$ tend to zero, this algorithm reverts to a transduction algorithm due to Vapnik, which was previously solved by the more difficult optimization procedure of integer programming (Bennett and Demiriz, 1999).
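A Monte Carlo sketch of this semi-supervised objective, under the same assumptions as the earlier vicinal-risk sketch (a batch-callable f and an elementwise loss); the function and argument names are hypothetical:

```python
import numpy as np

def semi_supervised_vicinal_risk(f, loss, X, y, X_unlab, sigma_l, sigma_u,
                                 n_mc=100, rng=None):
    """Monte Carlo version of the unlabeled-data objective above. For the
    unlabeled points, the current predictions f(x_i*) play the role of
    labels; sigma_l / sigma_u are the vicinity widths for labeled and
    unlabeled examples."""
    rng = np.random.default_rng() if rng is None else rng

    def term(points, targets, sigma):
        total = 0.0
        for p, t in zip(points, targets):
            eps = rng.standard_normal((n_mc, points.shape[1])) * sigma
            total += loss(f(p + eps), t).mean()
        return total / len(points)

    return term(X, y, sigma_l) + term(X_unlab, f(X_unlab), sigma_u)
```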
5 Conclusion

In conclusion, the Vicinal Risk Minimization (VRM) principle provides a useful bridge between generative models and SRM methods such as SVM or Statistical Regularization. Several well-known algorithms are in fact special cases of VRM. The VRM principle also suggests new algorithms. In this paper we proposed algorithms for dealing with unlabeled data and for recognizing classes with very different pattern distributions, obtaining promising initial results. We hope that this approach can lead to further understanding of existing methods and also suggest new ones.

References

Bennett, K. and Demiriz, A. (1999). Semi-supervised support vector machines. In Advances in Neural Information Processing Systems 11, pages 368-374. MIT Press.
Bottou, L. and Vapnik, V. N. (1992). Local learning algorithms, appendix on confidence intervals. Neural Computation, 4(6):888-900.
Breiman, L., Meisel, W., and Purcell, E. (1977). Variable kernel estimates of multivariate densities. Technometrics, 19:135-144.
Drucker, H., Wu, D., and Vapnik, V. (1999). Support vector machines for spam categorization. Neural Networks, 10:1048-1054.
Hoerl, A. and Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55-67.
Jaakkola, T., Meila, M., and Jebara, T. (2000). Maximum entropy discrimination. In Advances in Neural Information Processing Systems 12. MIT Press.
LeCun, Y., Bottou, L., Orr, G., and Muller, K.-R. (1998). Efficient backprop. In Orr, G. and Muller, K.-R., editors, Neural Networks: Tricks of the Trade. Springer.
Leen, T. K. (1995). Invariance and regularization in learning. In Advances in Neural Information Processing Systems 7. MIT Press.
Scholkopf, B., Simard, P., Smola, A., and Vapnik, V. (1998). Prior knowledge in support vector kernels. In Advances in Neural Information Processing Systems 10. MIT Press.
Schuurmans, D. and Southey, F. (2000). An adaptive regularization criterion for supervised learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000).
Simard, P., Victorri, B., Le Cun, Y., and Denker, J. (1992). Tangent prop: a formalism for specifying selected invariances in adaptive networks. In Advances in Neural Information Processing Systems 4, Denver, CO. Morgan Kaufmann.
Tong, S. and Koller, D. (2000). Restricted Bayes optimal classifiers. In Proceedings of the 17th National Conference on Artificial Intelligence (AAAI).
Vapnik, V. (1999). The Nature of Statistical Learning Theory (Second Edition). Springer Verlag, New York.
A comparison of Image Processing Techniques for Visual Speech Recognition Applications

Michael S. Gray
Computational Neurobiology Laboratory, The Salk Institute, San Diego, CA 92186-5800

Terrence J. Sejnowski
Javier R. Movellan*
Computational Neurobiology Laboratory, The Salk Institute, San Diego, CA 92186-5800
Department of Cognitive Science, Institute for Neural Computation, University of California San Diego

* To whom correspondence should be addressed.

Abstract

We examine eight different techniques for developing visual representations in machine vision tasks. In particular we compare different versions of principal component and independent component analysis in combination with stepwise regression methods for variable selection. We found that local methods, based on the statistics of image patches, consistently outperformed global methods based on the statistics of entire images. This result is consistent with previous work on emotion and facial expression recognition. In addition, the use of a stepwise regression technique for selecting variables and regions of interest substantially boosted performance.

1 Introduction

We study the performance of eight different methods for developing image representations based on the statistical properties of the images at hand. These methods are compared on their performance on a visual speech recognition task. While the representations developed are specific to visual speech recognition, the methods themselves are general-purpose and applicable to other tasks. Our focus is on low-level, data-driven methods based on the statistical properties of relatively untouched images, as opposed to approaches that work with contours or highly processed versions of the image. Padgett [8] and Bartlett [1] systematically studied statistical methods for developing representations for expression recognition tasks. They found that local, wavelet-like representations consistently outperformed global representations, like eigenfaces. In this paper we also compare local versus global representations.

Figure 1: The normalization procedure. In each panel, the "+" indicates the center of the lips, and the "o" indicates the center of the image. The location of the lips was automatically determined using Luettin et al.'s point distribution model for lip tracking: (1) original image; (2) the center of the lips was translated to the center of the image; (3) the image was rotated in the plane to horizontal; (4) the lips were scaled to a constant reference width; (5) the image was symmetrized relative to the vertical midline; (6) the intensity was normalized using a logistic gain control procedure.

The main differences between our work and that in [8] and [1] are: (1) we use image sequences while they used static images; (2) our work involves images of the mouth region while their work involves images of the entire face; (3) our recognition engine is a bank of hidden Markov models while theirs is a backpropagation network [8] and a nearest neighbor classifier [1]. In addition to the comparison of local and global representations, we propose an unsupervised method for automatically selecting regions and variables of interest.

2 Preprocessing and Recognition Engine

The task was recognition of the words "one", "two", "three" and "four" from the Tulips1 [7] database. The database consists of movies of 12 subjects each uttering the digits in English twice.
While the number of words is limited, the database is challenging due to differences in illumination conditions, ethnicity and gender of the subjects. Image preprocessing consisted of the following steps: First the contour of the outer lips was tracked using point distribution models, a data-driven technique based on analysis of the gray-level statistics around lip contours [5]. The lip images were then normalized for translation and rotation. This was accomplished by first padding the image on all sides with 25 rows or columns of zeros, and modulating the images in the spatial frequency domain. The images were symmetrized with respect to the vertical axis going through the center of the lips. This makes the final representation more robust to horizontal changes in illumination. The images were cropped to 65 pixels vertically x 87 pixels horizontally (see Figure 1) and their intensity was normalized using logistic gain control [7]. Eight different techniques were used on the normalized database, each of which developed a different image basis. For each of these techniques the following steps were followed: (1) Projection: For each image in the database we compute the coordinates x(t) of the image with respect to the image bases developed using each of the eight techniques; (2) Temporal differentiation: For each time step we compute the vectors δ(t) = x(t) − x(t − 1), where x(t) represents the coordinate vector of the image presented at time t; (3) Gain control: Each component of x(t) and δ(t) is independently scaled using a logistic gain control function matched to the mean and variance of each component across an entire movie [7]. This results in a form of soft histogram equalization; (4) Recognition: The scaled x(t) and δ(t) coefficients are fed to the HMM recognition engine. Figure 2: Global decompositions for the normalized image dataset. Row 1: Global kernels of principal component analysis ordered with the first eigenimage on the left. Row 2: Log magnitude spectrum of the eigenimages. Row 3: Global pixel-space independent component kernels ordered according to projected variance. Row 4: Log magnitude spectrum of the global independent components. 3 Global Methods We first evaluated the performance of techniques based on the statistics of the entire lip images as opposed to portions of them. This global approach has been shown to provide good performance on face recognition [9], expression recognition [2], and gender recognition tasks [4]. In particular we compared the performance of principal component analysis (PCA) and two different versions of independent component analysis (ICA). 3.1 Global PCA: We tried image bases that consisted of the first 50, 100 and 150 eigenvectors of the pixelwise covariance matrix. Best results were obtained with the first 50 principal components (which accounted for 94.6% of the variance) and are the only ones reported here. The top row of Figure 2 shows the first 5 eigenvectors displayed as images; their magnitude spectrum is shown in the second row. These eigenimages have most of their energy localized in low and horizontal spatial frequencies and are typically non-local in the spatial domain (i.e., they have non-zero energy distributed over the whole image). 3.2 Global ICA: The goal of Infomax ICA is to transform an input random vector such that the entropy of the output vector is maximized [3].
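As a concrete illustration of that objective, here is a minimal Infomax-style update sketch in the spirit of Bell and Sejnowski [3]; the natural-gradient batch form, the logistic nonlinearity, and the learning rate and iteration defaults are our own assumed choices rather than details taken from this paper.

    import numpy as np

    # Minimal Infomax ICA sketch (assumes X is whitened, e.g. by a prior PCA step).
    def infomax_ica(X, n_iter=200, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = np.eye(d) + 0.01 * rng.standard_normal((d, d))
        for _ in range(n_iter):
            U = X @ W.T                       # estimated sources, one row per sample
            Y = 1.0 / (1.0 + np.exp(-U))      # logistic squashing of the outputs
            # natural-gradient Infomax update: dW ~ (I + (1 - 2Y)' U / n) W
            W += lr * (np.eye(d) + (1 - 2 * Y).T @ U / n) @ W
        return W                              # rows act as unmixing filters

Maximizing the output entropy in this way drives the rows of W toward statistically independent directions, which is the property contrasted with PCA below.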
The main differences between ICA and PCA are: (1) ICA maximizes the joint entropy of the outputs, while PCA maximizes the sum of their variances; (2) PCA provides orthogonal basis vectors, while ICA basis vectors need not be orthogonal; (3) PCA outputs are always uncorrelated, but may not be statistically independent. ICA attempts to extract independent outputs, not just uncorrelated ones. We tried two different ICA approaches: ICA I: This method results in a non-orthogonal transformation of the bases developed via PCA. While such transformations do not change the underlying space of the representation, they may facilitate the job of the recognition engine by decreasing the statistical dependency amongst the coordinates. First each image in the database was projected onto the space spanned by the first 50 eigenvectors of the pixelwise covariance matrix. Then ICA was performed on the 50 PCA coordinate variables to obtain a new 50-dimensional non-orthogonal basis. ICA II: A different approach to ICA was explored in [1] for face recognition tasks and by [6] for fMRI images. While in ICA-I the goal is to develop independent image coordinates, in ICA-II the goal is for the image bases themselves to be independent. Here independence of images is defined with respect to a probability space in which pixels are seen as outcomes and images as random vectors of such outcomes. The approach, which is described in detail in [6], resulted in a set of 50 images which were a non-orthogonal linear transformation of the first 50 eigenvectors of the pixelwise covariance matrix. The first 5 images (accounting for the largest amounts of projected variance) obtained via this approach to ICA are shown in the third row of Figure 2. The fourth row shows their magnitude spectrum. As reported in [1], the images obtained using this method are more local than those obtained via PCA. 4 Local Methods Padgett et al. [8] reported surprisingly good results on an emotion recognition task using PCA on random patches of the face instead of the entire face. Recent theoretical work also places emphasis on spatially localized, wavelet-like image bases. One potential advantage of spatially localized image bases is that they provide explicit information about where things are happening, not just about what is happening. This facilitates the work of recognition engines on some tasks, but the theoretical reasons for this are unclear at this point. Local PCA and ICA kernels were developed based on a database of 18680 small patches (12 pixels x 12 pixels) chosen from random locations in the Tulips1 database. A sample of these random patches (superimposed on a lip image) is shown in the top panel of Figure 3. Figure 3: Upper left: Lip patches (12 pixels x 12 pixels) from randomly chosen locations used to develop local PCA and local ICA kernels. Lower left: Four orthogonal images generated from a single local PCA kernel. Right: Top 10 local PCA and ICA kernels ordered according to projected variance (highest at top left). Note how the ICA vectors tend to be more local and consistent with the receptive fields found in V1. Hereafter we refer to the 12 pixel x 12 pixel images obtained via PCA or ICA as "kernels".
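To make the kernel construction concrete, the following sketch draws random 12 x 12 patches and takes the leading eigenvectors of their covariance as local PCA kernels; the function name, the default counts, and the SVD-based eigendecomposition are our own assumptions about a straightforward implementation.

    import numpy as np

    # Sketch: build local PCA kernels from random image patches.
    def local_pca_kernels(images, patch=12, n_patches=18680, n_kernels=10, seed=0):
        rng = np.random.default_rng(seed)
        rows = []
        for _ in range(n_patches):
            img = images[rng.integers(len(images))]
            r = rng.integers(img.shape[0] - patch)
            c = rng.integers(img.shape[1] - patch)
            rows.append(img[r:r + patch, c:c + patch].ravel())
        X = np.asarray(rows)
        X -= X.mean(axis=0)                   # center the patch ensemble
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        # rows of Vt are eigenvectors of the patch covariance, ordered by variance
        return Vt[:n_kernels].reshape(n_kernels, patch, patch)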
Figure 4: Kernel-location combinations chosen using unblocked variable selection. Top of each quadrant: local PCA or ICA kernel (PCA Kernel 1, PCA Kernel 2, ICA Kernel 1, ICA Kernel 9). Bottom of each quadrant: lip image convolved with the corresponding local kernel, then downsampled. The numbers on the lip image indicate the order in which variables were chosen for the multiple regression procedure. There are no numbers on the right side of the lip images because only half of each lip image was used for the representation (since the images are symmetrized). Image bases were generated by centering a local PCA or ICA kernel onto different locations and padding the rest of the matrix with zeros, as displayed in Figure 3 (lower left panel). This results in basis images which are local in space (the energy is localized about a single patch) and shifted versions of each other. The process of obtaining image coordinates can be seen as a filtering operation followed by subsampling: First the images are filtered using a bank of filters whose impulse responses are the kernels obtained via PCA (or ICA). The relevant coordinates are obtained by subsampling at 300 uniformly distributed locations (15 locations vertically by 20 locations horizontally). We explored four different filtering approaches: (1) Single linear shift-invariant filter (LSI); (2) Single linear shift-variant filter (LSV); (3) Bank of LSI filters with blocked selection; (4) Bank of LSI filters combined with unblocked selection. For the single-filter LSI approach, the images were convolved with a single local ICA kernel or a local PCA kernel. The top 5 local PCA and ICA kernels were each tested separately and the results obtained with the best of the 5 kernels were reported. For the single LSV-filtering approach, different local PCA kernels were derived for a total of 117 non-overlapping regions, each of which occupied 5 x 5 pixels. Each region of the 934 images was projected onto the first principal component corresponding to that location. This effectively resulted in an LSV filtering operation. 4.1 Automatic Selection of Focal Points Padgett's [8] most successful method was based on outputs of local filters at manually selected focal regions. Their task was emotion recognition and the focal regions were the eyes and mouth. In visual speech recognition, once the lips are chosen it is unclear which regions would be most informative.

Table 1: Best generalization performance (% correct) ± standard error of the mean for all image representations.

    Image Processing             Performance ± s.e.m.
    Global PCA                   79.2 ± 4.7
    Global ICA I                 61.5 ± 4.5
    Global ICA II                74.0 ± 5.4
    Single-Filter LSI PCA        90.6 ± 3.1
    Single-Filter LSI ICA        89.6 ± 3.0
    Blocked Filter Bank PCA      85.4 ± 3.7
    Blocked Filter Bank ICA      85.4 ± 3.0
    Unblocked Filter Bank PCA    91.7 ± 2.8
    Unblocked Filter Bank ICA    91.7 ± 3.2

Thus we developed a method for automatic selection of focal regions. First 10 filters were developed via local ICA (or PCA). Each image was filtered using the 10-filter bank and the outputs were subsampled at 150 locations for a 1500-dimensional representation (10 filters x 150 locations) of each of the images in the dataset. Regions and variables of interest were then selected using a stepwise forward multiple regression procedure. First we choose the variable that, when averaging across the entire database, best reconstructs the original images. Here best reconstruction is defined in terms of least squares using a multiple regression model. Once a variable is selected, it is "tenured" and we search for the variable which in combination with the tenured ones best reconstructs the image database. The procedure is stopped when the number of tenured variables reaches a criterion point.
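A minimal sketch of this greedy loop follows; the array names (F for the 1500-dimensional filter outputs, T for the pixel reconstruction targets) and the brute-force least-squares scoring are our own assumptions, and a practical implementation would update the regression incrementally rather than refit from scratch at every step.

    import numpy as np

    # Greedy forward selection sketch: pick variables that best reconstruct T.
    def forward_select(F, T, n_select=50):
        chosen = []
        for _ in range(n_select):
            best, best_err = None, np.inf
            for j in range(F.shape[1]):
                if j in chosen:
                    continue
                Xj = F[:, chosen + [j]]
                # least-squares reconstruction from the tenured variables plus j
                W, *_ = np.linalg.lstsq(Xj, T, rcond=None)
                err = np.sum((T - Xj @ W) ** 2)
                if err < best_err:
                    best, best_err = j, err
            chosen.append(best)     # tenure the winning variable
        return chosen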
We compared performance using 50, 100, and 150 tenured variables and report results with the best of those three numbers. We tested two different selection procedures, one blocked by location and one in which location was not blocked. In the first method the selection was done in blocks of 10 variables, where each block contained the outputs of all the filters at a specific location. If a location was chosen, the outputs of the 10 filters at that location were automatically included in the final image representation. In the second method selection of variables was not blocked by location. Figure 4 shows, for 2 local PCA and 2 local ICA kernels, the first 10 variables chosen for each particular kernel using the forward selection multiple regression procedure. The numbers on the lip images in this figure indicate the order in which particular kernel/location variables were chosen using the sequential regression procedure: "1" indicates the first variable chosen, "2" the second, etc. 5 Results and Conclusions Table 1 shows the best generalization performance (out of the 9 HMM architectures tested) for each of the eight image representation methods. The local decompositions significantly outperformed the global ones (t(106) = 4.10, p < 0.001). The improved performance of local representations is consistent with current ideas on the importance of localized wavelet-like representations. However, it is unclear why local decompositions work better. One possibility is that these results apply only to this particular recognition engine and the problem at hand (i.e., hidden Markov models for speechreading). Yet similar results with local representations were reported in [8] on an emotion classification task with a 3-layer backpropagation network and in [1] on an expression classification task with a nearest neighbor classifier. Another possible explanation for the advantage of local representations is that global unsupervised decompositions emphasize subject identity while local decompositions tend to hide it. We found some evidence consistent with this idea by testing global and local representations on a subject identification task (i.e., recognizing which person the lip images belong to). For this task the global representations outperformed the local ones. However this result is inconsistent with [8], which found local representations were better on both emotion classification and subject identification tasks. Another possibility is that local representations make more explicit information about where things are happening, not just what is happening, and such information turns out to be important for the task at hand. The image representations obtained using the bank-of-filters methods with unblocked selection yielded the best results. The stepwise regression technique used to select kernels and regions of interest led to substantial gains in recognition performance. In fact the highest generalization performance reported here (91.7% with the bank of filters using unblocked variable selection) surpassed the best published performance on this dataset [5]. References [1] M.S. Bartlett. Face Image Analysis by Unsupervised Learning and Redundancy Reduction. PhD thesis, University of California, San Diego, 1998. [2] M.S. Bartlett, P.A. Viola, T.J. Sejnowski, J. Larsen, J. Hager, and P. Ekman. Classifying facial action. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 823-829. Morgan Kaufmann, San Mateo, CA, 1996. [3] A.J.
Bell and T.J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995. [4] G. Cottrell and J. Metcalfe. Face, gender and emotion recognition using holons. In D. Touretzky, editor, Advances in Neural Information Processing Systems, volume 3, pages 564-571, San Mateo, CA, 1991. Morgan Kaufmann. [5] Juergen Luettin. Visual Speech and Speaker Recognition. PhD thesis, University of Sheffield, 1997. [6] M.J. McKeown, S. Makeig, G.G. Brown, T-P. Jung, S.S. Kindermann, A.J. Bell, and T.J. Sejnowski. Analysis of fMRI data by decomposition into independent components. Proc. Nat. Acad. Sci., in press. [7] J.R. Movellan. Visual speech recognition with stochastic networks. In G. Tesauro, D.S. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 851-858. MIT Press, Cambridge, MA, 1995. [8] C. Padgett and G. Cottrell. Representing face images for emotion classification. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9, Cambridge, MA, 1997. MIT Press. [9] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
Learning Sparse Image Codes using a Wavelet Pyramid Architecture Bruno A. Olshausen Department of Psychology and Center for Neuroscience, UC Davis 1544 Newton Ct. Davis, CA 95616 [email protected] Phil Sallee Department of Computer Science UC Davis Davis, CA 95616 [email protected] Michael S. Lewicki Department of Computer Science and Center for the Neural Basis of Cognition Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Abstract We show how a wavelet basis may be adapted to best represent natural images in terms of sparse coefficients. The wavelet basis, which may be either complete or overcomplete, is specified by a small number of spatial functions which are repeated across space and combined in a recursive fashion so as to be self-similar across scale. These functions are adapted to minimize the estimated code length under a model that assumes images are composed of a linear superposition of sparse, independent components. When adapted to natural images, the wavelet bases take on different orientations and they evenly tile the orientation domain, in stark contrast to the standard, non-oriented wavelet bases used in image compression. When the basis set is allowed to be overcomplete, it also yields higher coding efficiency than standard wavelet bases. 1 Introduction The general problem we address here is that of learning efficient codes for representing natural images. Our previous work in this area has focussed on learning basis functions that represent images in terms of sparse, independent components [1, 2]. This is done within the context of a linear generative model for images, in which an image I(x,y) is described in terms of a linear superposition of basis functions b_i(x,y) with amplitudes a_i, plus noise ν(x,y):

I(x,y) = Σ_i a_i b_i(x,y) + ν(x,y)   (1)

A sparse, factorial prior is imposed upon the coefficients a_i, and the basis functions are adapted so as to maximize the average log-probability of images under the model (which is equivalent to minimizing the model's estimate of the code length of the images). When the model is trained on an ensemble of whitened natural images, the basis functions converge to a set of spatially localized, oriented, and bandpass functions that tile the joint space of position and spatial frequency in a manner similar to a wavelet basis. Similar results have been achieved using other forms of independent components analysis [3, 4]. One of the disadvantages of this approach, from an image coding perspective, is that it may only be applied to small sub-images (e.g., 12 x 12 pixels) extracted from a larger image. Thus, if an image were to be coded using this method, it would need to be blocked and would thus likely introduce blocking artifacts as the result of quantization or sparsification of the coefficients. In addition, the model is unable to capture spatial structure in the images that is larger than the image block, and scaling up the algorithm to significantly larger blocks is computationally intractable. The solution to these problems that we propose here is to assume translation- and scale-invariance among the basis functions, as in a wavelet pyramid architecture. That is, if a basis function is learned at one position and scale, then it is assumed to be repeated at all positions (spaced apart by two positions horizontally and vertically) and scales (in octave increments).
Thus, the entire set of basis functions for tiling a large image may be learned by adapting only a handful of parameters, i.e., the wavelet filters and the scaling function that is used to expand them across scale. We show here that when a wavelet image model is adapted to natural images to yield coefficients that are sparse and as statistically independent as possible, the wavelet functions converge to a set of oriented functions, and the scaling function converges to a circularly symmetric lowpass filter appropriate for generating self-similarity across scale. Moreover, the resulting coefficients achieve higher coding efficiency (higher SNR for a fixed bit rate) than traditional wavelet bases, which are typically designed "by hand" according to certain mathematical desiderata [5]. 2 Wavelet image model The wavelet image model is specified by a relatively small number of parameters, consisting of a set of wavelet functions ψ_i(x,y), i = 1..M, and a scaling function φ(x,y). An image is generated by upsampling and convolving the coefficients at a given band i with ψ_i (or with φ at the lowest-resolution level of the pyramid), followed by successive upsampling and convolution with φ, depending on their level within the pyramid. The wavelet image model for an L-level pyramid is specified mathematically as

I(x,y) = g(x,y,0) + ν(x,y)   (2)

g(x,y,l) = a_{L-1}(x,y) for l = L-1, and I_l(x,y) for l < L-1   (3)

I_l(x,y) = [g(x,y,l+1) ↑2] * φ(x,y) + Σ_{i=1}^{M} [a_i^l(x,y) ↑2] * ψ_i(x,y)   (4)

where the coefficients a are indexed by their position x,y, band i, and level of resolution l within the pyramid (l = 0 is the highest-resolution level). The symbol ↑2 denotes upsampling by two and is defined as

f(x,y) ↑2 = f(x/2, y/2) if x and y are both even, and 0 otherwise.   (5)

Figure 1: Wavelet image model. Shown are the coefficients of the first three levels of a pyramid (l = 0, 1, 2), with each level split into a number of different bands (i = 1...M). The highest level (l = 3) is not shown and contains only one lowpass band. The wavelet pyramid model is schematically illustrated in Figure 1. Traditional wavelet bases typically utilize three bands (M = 3), in which case the representation is critically sampled (same number of coefficients as image pixels). Here, we shall also examine the cases of M = 4 and 6, in which the representation is overcomplete (more coefficients than image pixels).
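To make the recursion in Eqs. 2-5 concrete, here is a minimal sketch of one synthesis level; the zero-insertion upsampling follows Eq. 5, while the function names and the "same"-mode boundary handling are our own assumed choices rather than details from the paper.

    import numpy as np
    from scipy.signal import convolve2d

    # Sketch of one level of the generative synthesis in Eqs. 2-4.
    def upsample2(f):
        g = np.zeros((2 * f.shape[0], 2 * f.shape[1]))
        g[::2, ::2] = f      # Eq. 5: samples at even coordinates, zeros elsewhere
        return g

    def synthesize_level(g_next, bands, phi, psis):
        # g_next: coarser image g(x,y,l+1); bands[i]: coefficients a_i at level l
        out = convolve2d(upsample2(g_next), phi, mode="same")
        for a, psi in zip(bands, psis):
            out += convolve2d(upsample2(a), psi, mode="same")
        return out           # I_l(x,y); recursing down to l = 0 yields the image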
Because the image model is linear, it may be expressed compactly in vector/matrix notation as

I = Ga + ν   (6)

where the vector a is the entire list of coefficient values at all positions, bands, and levels of the pyramid, and the columns of G are the basis functions corresponding to each coefficient, which are parameterized by ψ and φ. The probability of generating an image I given a specific state of the coefficients a, and assuming Gaussian i.i.d. noise ν, is then

P(I|a,θ) = (1/Z_λN) e^{−(λN/2) |I − Ga|²}   (7)

where θ denotes the parameters of the model and includes the wavelet pyramid functions ψ_i and φ, as well as the noise variance σ_N² = 1/λN. The prior probability distribution over the coefficients is assumed to be factorial and sparse:

P(a) = Π_i P(a_i)   (8)

P(a_i) = (1/Z_S) e^{−S(a_i)}   (9)

where S is a non-convex function that shapes P(a_i) to have the requisite "sparse" form, i.e., peaked at zero with heavy tails, or positive kurtosis. We choose here S(x) = β log(1 + (x/σ)²), which corresponds to a Cauchy-like prior over the coefficients (an exact Cauchy distribution would be obtained for β = 1).[1]
[1] A more optimal choice for the prior would be to use a mixture-of-Gaussians distribution, which better captures the sharp peak at zero characteristic of a sparse representation. But properly maximizing the posterior with such a prior presents formidable challenges [6].
3 Inferring the coefficients The coefficients for a particular image are determined by finding the maximum of the posterior distribution (MAP estimate):

â = argmax_a P(a|I,θ) = argmax_a P(I|a,θ) P(a|θ)   (10)
  = argmin_a [ (λN/2) |I − Ga|² + Σ_i S(a_i) ]   (11)

A local minimum may be found via gradient descent, yielding the differential equation

ȧ ∝ λN Gᵀe − S′(a)   (12)

e = I − Ga   (13)

The computations involving Gᵀe and Ga in equations 12 and 13 may be performed quickly and efficiently using fast algorithms for building pyramids and reconstructing from pyramids [7]. 4 Learning Our goal in adapting the wavelet model to natural images is to find the functions ψ_i and φ that minimize the description length ℒ of images under the model:

ℒ = −⟨log P(I|θ)⟩   (14)

P(I|θ) = ∫ P(I|a,θ) P(a|θ) da   (15)

A learning rule for the basis functions may be derived by gradient descent on ℒ:

∂ℒ/∂θ_i = λN ⟨ eᵀ (∂G/∂θ_i) a ⟩_{P(a|I,θ)}   (16)

Instead of sampling from the full posterior distribution, however, we utilize a simpler approximation in which a single sample is taken at the posterior maximum, and so we have

Δθ_i ∝ λN ( êᵀ (∂G/∂θ_i) â )   (17)

where ê = I − Gâ. The price we pay for this approximation, though, is that the basis functions will grow without bound, since the greater their norm |G_k|, the smaller each a_k will become, thus decreasing the sparseness penalty in (11). This trivial solution is avoided by adaptively rescaling the basis functions after each learning step so that a target variance on the coefficients is met, as described in an earlier paper [1]. The update rules for ψ_i and φ are then derived from (17), and may be expressed in terms of the following recursive formulas:

Δψ_i(m,n) = F_ψ(ê(x,y), m, n, 0), where F_ψ(f, m, n, l) = Σ_{x,y} f(2x+m, 2y+n) a_i^l(x,y) + F_ψ([f * φ] ↓2, m, n, l+1)   (18)

Δφ(m,n) = F_φ(ê(x,y), m, n, 0), where F_φ(f, m, n, l) = Σ_{x,y} f(2x+m, 2y+n) g(x,y,l+1) + F_φ([f * φ] ↓2, m, n, l+1)   (19)

where * denotes cross-correlation and ↓2 denotes downsampling by two. These computations may also be performed efficiently using fast algorithms for building and reconstructing from pyramids [7]. 5 Results The image model was trained on a set of 10 pre-whitened 512 x 512 natural images that were used in previous studies [1]. The basis function parameters ψ_i and φ were represented as 5 x 5 pixel masks, and were initialized to random numbers. For each update, an 80 x 80 subimage was randomly extracted from one of the images, and the coefficients were computed iteratively via (12, 13) until the decrease in the energy function was less than 0.1%. The resulting residual ê was then used for updating the functions ψ_i and φ according to (18) and (19). The noise parameter λN was set to 400, corresponding to a noise variance that is 2.5% of the image variance (σ_I² = 0.1). At this level of noise, the image reconstructions are visually indistinguishable from the original. The parameters of the prior used were β = 2.5, σ = 0.3. A stable solution began to emerge after about one hour of training for M = 3, and after several hours for M = 6 (Pentium II, 450 MHz).
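The coefficient inference used in these experiments (Eqs. 12-13 with the Cauchy-like prior above) can be sketched as a plain gradient loop; for clarity the sketch materializes G as an explicit matrix and assumes a fixed learning rate, whereas the paper relies on fast pyramid routines [7].

    import numpy as np

    # Sketch of MAP coefficient inference via Eqs. 12-13.
    def infer_coefficients(I, G, lam=400.0, beta=2.5, sigma=0.3, lr=1e-3, n_iter=500):
        a = np.zeros(G.shape[1])
        for _ in range(n_iter):
            e = I - G @ a                               # residual image, Eq. 13
            s_prime = 2 * beta * a / (sigma**2 + a**2)  # d/da of beta*log(1 + (a/sigma)^2)
            a += lr * (lam * G.T @ e - s_prime)         # gradient step, Eq. 12
        return a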
Shown in Figure 2 are the basis functions learned for the cases M = 3, 4 and 6, along with a standard biorthogonal 9/7 wavelet (FBI fingerprint standard [8]) for comparison. The difference between the learned wavelets and the standard wavelet is striking, in that the learned wavelets tile the orientation domain more evenly. They also exhibit self-similarity in orientation, i.e., they appear to be rotated versions of one another. Increasing the number of bands M from three to four produces narrower orientation tuning, but increasing overcompleteness beyond that point does not, as shown in the tiling diagram of Figure 3. All the learned basis function spectra lie well within the Nyquist bounding box in the 2D Fourier plane, matching the power spectrum of the images in the training set. Coding efficiency was evaluated by compressing the sparsified coefficients â using the embedded wavelet zerotree encoder [9] and measuring the signal-to-noise ratio for a fixed bit rate (SNR = 10 log₁₀(σ_I²/mse)). The results, shown in Table 1, demonstrate that the overcomplete basis (M = 4) achieves higher SNR than either of two standard wavelet bases for the same bit rate. Note however that at these levels of SNR the reconstructions are visually identical to the original. At higher compression ratios the learned bases lose their advantage, most likely due to the fact that they are non-orthogonal and hence produce more errors in the reconstruction when the coefficients are quantized.

Table 1: Coding efficiency.

    basis set          SNR
    M = 3 (learned)    11.2
    M = 4 (learned)    11.9
    Daubechies 6       11.2
    FBI 9/7            11.4

Figure 2: Basis functions and corresponding power spectra for M = 3, 4 and 6 (learned), along with a standard 9/7 biorthogonal wavelet (M = 3, standard). Each column shows a different band, while each row shows a different level. The lone basis function in the last row is the scaling function (twice convolved with itself). The power spectra are plotted in the 2D Fourier plane (vertical vs. horizontal spatial frequency) with the maximum spatial frequency at the Nyquist rate. Figure 3: Frequency-domain tiling properties for M = 3 (standard), M = 3 (learned), M = 4 (learned) and M = 6 (learned). Shown are iso-power contours at 50% of the maximum for each band and level. 6 Conclusion We have shown in this work how a wavelet basis may be adapted so as to represent the structures in natural images in terms of sparse, independent components. Importantly, the algorithm has the capacity to learn overcomplete basis sets, which are capable of tiling the joint space of position, orientation, and spatial frequency in a more continuous fashion than traditional, critically sampled basis sets [10]. The overcomplete bases exhibit superior coding efficiency, in the sense of achieving higher SNR for a fixed bit rate. Although the improvements in coding efficiency are modest, we believe the method described here has the potential to yield even greater improvements when adapted to more specific image ensembles such as textures. Acknowledgments This work benefited from extensive use of Eero Simoncelli's Matlab pyramid toolbox. Supported by NIMH R29-MH057921. References [1] Olshausen BA, Field DJ (1997) Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37: 3311-3325. [2] Lewicki MS, Olshausen BA (1999) A probabilistic framework for the adaptation and comparison of image codes, J. Opt. Soc. of Am. A, 16(7): 1587-1601.
[3] Bell AJ, Sejnowski TJ (1997) The independent components of natural images are edge filters, Vision Research, 37: 3327-3338. [4] van Hateren JH, van der Schaaf A (1997) Independent component filters of natural images compared with simple cells in primary visual cortex, Proc. Royal Soc. Lond. B, 265: 359-366. [5] Mallat S (1999) A wavelet tour of signal processing. Academic Press. [6] Olshausen BA, Millman KJ (2000) Learning sparse codes with a mixture-of-Gaussians prior. In: Advances in Neural Information Processing Systems, 12, S.A. Solla, T.K. Leen, K.R. Muller, eds. MIT Press, pp. 841-847. [7] Simoncelli EP, Matlab pyramid toolbox. ftp://ftp.cis.upenn.edu/pub/eero/matlabPyrTools.tar.gz [8] The Bath Wavelet Warehouse, http://dmsun4.bath.ac.uk/wavelets/warehouse.html [9] Shapiro JM (1993) Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12): 3445-3462. [10] Simoncelli EP, Freeman WT, Adelson EH, Heeger DJ (1992) Shiftable multiscale transforms, IEEE Transactions on Information Theory, 38(2): 587-607.
On Reversing Jensen's Inequality Tony Jebara MIT Media Lab Cambridge, MA 02139 [email protected] Alex Pentland MIT Media Lab Cambridge, MA 02139 [email protected] Abstract Jensen's inequality is a powerful mathematical tool and one of the workhorses in statistical learning. Its applications therein include the EM algorithm, Bayesian estimation and Bayesian inference. Jensen computes simple lower bounds on otherwise intractable quantities such as products of sums and latent log-likelihoods. This simplification then permits operations like integration and maximization. Quite often (i.e. in discriminative learning) upper bounds are needed as well. We derive and prove an efficient analytic inequality that provides such variational upper bounds. This inequality holds for latent variable mixtures of exponential family distributions and thus spans a wide range of contemporary statistical models. We also discuss applications of the upper bounds including maximum conditional likelihood, large margin discriminative models and conditional Bayesian inference. Convergence, efficiency and prediction results are shown.[1] 1 Introduction Statistical model estimation and inference often require the maximization, evaluation, and integration of complicated mathematical expressions. One approach for simplifying the computations is to find and manipulate variational upper and lower bounds instead of the expressions themselves. A prominent tool for computing such bounds is Jensen's inequality, which subsumes many information-theoretic bounds (cf. Cover and Thomas 1996). In maximum likelihood (ML) estimation under incomplete data, Jensen is used to derive an iterative EM algorithm [2]. For graphical models, intractable inference and estimation is performed via variational bounds [7]. Bayesian integration also uses Jensen and EM-like bounds to compute integrals that are otherwise intractable [9]. Recently, however, the learning community has seen the proliferation of conditional or discriminative criteria. These include support vector machines, maximum entropy discrimination distributions [4], and discriminative HMMs [3]. These criteria allocate resources with the given task (classification or regression) in mind, yielding improved performance. In contrast, under canonical ML each density is trained separately to describe observations rather than optimize classification or regression. Therefore performance is compromised.
[1] This is the short version of the paper. Please download the long version with tighter bounds, detailed proofs, more results, important extensions and sample Matlab code from: http://www.media.mit.edu/~jebara/bounds
Computationally, what differentiates these criteria from ML is that they not only require Jensen-type lower bounds but may also utilize the corresponding upper bounds. The Jensen bounds only partially simplify their expressions and some intractabilities remain. For instance, latent distributions need to be bounded above and below in a discriminative setting [4] [3]. Metaphorically, discriminative learning requires lower bounds to cluster positive examples and upper bounds to repel away from negative ones. We derive these complementary upper bounds,[2] which are useful for discriminative classification and regression. These bounds are structurally similar to Jensen bounds, allowing easy migration of ML techniques to discriminative settings. This paper is organized as follows: We introduce the probabilistic models we will use: mixtures of the exponential family.
We then describe some estimation criteria on these models which are intractable. One simplification is to lower bound via Jensen's inequality or EM. The reverse upper bound is then derived. We show implementation and results of the bounds in applications (i.e. conditional maximum likelihood (CML)). Finally, a strict algebraic proof is given to validate the reverse-bound. 2 The Exponential Family We restrict the reverse-Jensen bounds to mixtures of the exponential family (e-family). In practice this class of densities covers a very large portion of contemporary statistical models. Mixtures of the e-family include Gaussian Mixture Models, Multinomials, Poisson, Hidden Markov Models, Sigmoidal Belief Networks, Discrete Bayesian Networks, etc. [1] The e-family has the following form:

p(X|Θ) = exp(A(X) + XᵀΘ − K(Θ))

    E-Distribution    A(X)                          K(Θ)
    Gaussian          −(1/2)XᵀX − (D/2) log(2π)     (1/2)ΘᵀΘ
    Multinomial       0                             log(1 + Σ_d exp(θ_d))

Here, K(Θ) is convex in Θ, a multi-dimensional parameter vector. Typically the data vector X is constrained to live in the gradient space of K, i.e. X ∈ ∂K(Θ)/∂Θ. The e-family has special properties (i.e. conjugates, convexity, linearity, etc.) [1]. The reverse-Jensen bound also exploits these intrinsic properties. The table above lists example A and K functions for Gaussian and multinomial distributions. More generally, though, we will deal with mixtures of the e-family (where m represents the incomplete data[2]), i.e.:

p(X|Θ) = Σ_m p(m, X|Θ) = Σ_m α_m exp(A_m(X_m) + X_mᵀΘ_m − K_m(Θ_m))[3]

These latent probability distributions need to get maximized, integrated, marginalized, conditioned, etc. to solve various inference, prediction, and parameter estimation tasks. However, such manipulations can be difficult or intractable. 3 Conditional and Discriminative Criteria The combination of ML with EM and Jensen has indeed produced straightforward and monotonically convergent estimation procedures for mixtures of the e-family [2] [1] [7]. However, ML criteria are non-discriminative modeling techniques for estimating generative models. Consequently, they suffer when model assumptions are inaccurate.
[2] A weaker bound for Gaussian mixture regression appears in [6]. Other reverse-bounds are in [8].
[3] Note we use Θ to denote an aggregate model encompassing all individual Θ_m ∀m.
Figure 1: ML vs. CML (thick Gaussians represent circles, thin ones represent x's). ML classifier: l = −8.0, l_c = −1.7; CML classifier: l = −54.7, l_c = 0.4.
For visualization, observe the binary classification[4] problem above. Here, our model incorrectly has 2 Gaussians (identity covariances) per class but the true data is generated from 8 Gaussians. Two solutions are shown, ML and CML. Note the values of joint log-likelihood l and conditional log-likelihood l_c. The ML solution performs as well as random chance guessing while CML classifies the data very well. Thus, CML, in estimating a conditional density, propagates the classification task into the estimation criterion. In such examples, we are given training examples X_i and corresponding binary labels c_i to classify with a latent variable e-family model (mixture of Gaussians). We use m to represent the latent missing variables.
[4] These derivations extend to multi-class classification and regression as well.
The corresponding objective functions, log-likelihood l and conditional log-likelihood l_c, are:

l = Σ_i log Σ_m p(m, c_i, X_i|Θ)

l_c = Σ_i log Σ_m p(m, c_i|X_i, Θ) = Σ_i [ log Σ_m p(m, c_i, X_i|Θ) − log Σ_m Σ_c p(m, c, X_i|Θ) ]

The classification and regression task can be even more powerfully exploited in the case of discriminative (or large-margin) estimation [4] [5]. Here, hard constraints are posed on a discriminant function L(X|Θ), the ratio of each class' latent likelihoods. Prediction of class labels is done via the sign of the function, c = sign L(X|Θ):

L(X|Θ) = log [ p(X|Θ₊) / p(X|Θ₋) ] = log Σ_m p(m, X|Θ₊) − log Σ_m p(m, X|Θ₋)   (1)

In the above log-likelihoods and discriminant functions we note logarithms of sums (latent likelihood is basically a product of sums) which cause intractabilities. For instance, it is difficult to maximize or integrate the above log-sum quantities. Thus, we need to invoke simplifying bounds. 4 Jensen and EM Bounds Recall the definition of Jensen's inequality: f(E{X}) ≥ E{f(X)} for concave f. The log-summations in l, l_c, and L(X|Θ) all involve a concave f = log around an expectation, i.e. a log-sum or probabilistic mixture over latent variables. We apply Jensen as follows:

log Σ_m p(m, X|Θ) = log Σ_m α_m exp(A_m(X_m) + X_mᵀΘ_m − K_m(Θ_m))
  ≥ Σ_m [ p(m, X|Θ̃) / Σ_n p(n, X|Θ̃) ] log [ p(m, X|Θ) / ( p(m, X|Θ̃) / Σ_n p(n, X|Θ̃) ) ]
  = Σ_m h_m (X_mᵀΘ_m − K_m(Θ_m)) + C

Above, we have also expanded the bound in the e-family notation. This forms a variational lower bound on the log-sum which makes tangential contact with it at Θ̃ and is much easier
a w ~ 80m. I (8K (0 m l 88 m 9:m -X)+ m. I 8K(0 m l 80m. 0 7n E 8K(0 m l 88 m This bound effectively reweights (w m ) and translates (Ym ) incomplete data to obtain complete data. Tighter bounds are possible (i.e. smaller w m ) which also depend on the h m terms (see web page). The first condition requires that the W;" generate a valid Ym that lives in the gradient space of the K functions (a typical e-family constraint). Thus, from local computations of the log-sum's values, gradients and Hessians at the current we can compute global upper bounds. e, 6 Applications and Results In Fig. 2 we plot the bounds for a two-component unidimensional Gaussian mixture model case and a two component binomial (unidimensional multinomial) mixture model. The Jensen-type bounds as well as the reverse-Jensen bounds are shown at various configurations of and X. Jensen bounds are usually tighter but this is inevitable due to the intrinsic shape of the log-sum. In addition to viewing many such 2D visualizations, we computed higher dimensional bounds and sampled them extensively, empirically verifying that the reverse-Jensen bound remained above the log-sum. Below we describe practical uses of this new reverse-bound. e 5We can also find multinomial bounds on a-priors jointly with the E> parameters. 10 ." e, e, 10 (a) Gaussian Case (b) Multinomial Case Figure 2: Jensen (black) and reverse-Jensen (white) bounds on the log-sum (gray). 6.1 Conditional Maximum Likelihood The inequalities above were use to fully lower bound IC and maximizing the bound iteratively. This is like the CEM algorithm [6] except the new bounds handle the whole e-family (i.e. generalized CEM). The synthetic Gaussian mixture model problem problem portrayed in Fig. 1 was implemented. Both ML and CML estimators (with reverse-bounds) were initialized in the same random configuration and maximized. The Gaussians converged as in Fig. 1. CML classification accuracy was 93 % while ML obtained 59%. Figure (A) depicts the convergence of ICper iteration under CML (top line) and ML (bottom-line). Similarly, we computed multinomial models for 3-class data as 60 base-pair protein chains in Figure (B). E_ 1 -_1 (A) -1) ~= ~ 2 40"IC 5 10 220 Computationally, utilizing both Jensen and reverse-Jensen bounds (B) 20~ 10 20 for optimizing CML needs double the processing as ML using EM. For example, we estimated 2 classes of mixtures of multinomials (5-way mixture) from 40 lO-dimensional data points. In non-optimized Matlab code, ML took 0.57 seconds per epoch while CML took 1.27 seconds due to extra bound computations. Thus, efficiency is close to EM for practical problems. Complexity per epoch roughly scales linearly with sample size, dimensions and number of latent variables. 6.2 I 15 Conditional Variational Bayesian Inference In [9], Bayesian integration methods were demonstrated on latent-variable models by invoking Jensen type lower bounds on the integrals of interest. A similar technique can be used to approximate conditional Bayesian integration. Traditionally, we compute the joint Bayesian integral from (X,Y) data as p(X , Y) = f p(X, Y I8)p(8 IX ,Y)d8 and condition it to obtain p(Y IX )i (the superscript indicates we initially estimated a joint density). Alternatively, we can compute the conditional Bayesian integral directly. The 30 corresponding dependency graphs (Fig. 3(b) and (c? depict the differences between j oint and conditional estimation. The conditional Bayesian integral exploits the graph's factorization, to solve p(Y IX) c. 
p (YIX )c = f p (YIX ,ElC)[p (El clx ,Y )]dElc= f p (YIX ,ElC) [ P( YI ;~yjC1) (0") l dElc Jensen and reverse-Jensen bound the terms to permit analytic integration. Iterating this process efficiently converges to an approximation of the true integral. We also exhaustively solved both Bayesian integrals exactly for a 2 Gaussian mixture model on 4 data points. Fig. 3 shows the data and densities. In Fig. 3(d) joint and conditional estimates are inconsistent under Bayesian integration (i.e. P(Y IX )C-j. P(Y IX )j). ~pIYlx/ 7YY . ~ ~ In~ral. fP1;'x( ~gral. IX' Y~YIX} Condition (a) Data (b) Conditioned Joint (c) Direct Conditional (d) Inconsistency Figure 3: Conditioned Joint and Conditional Bayesian Estimates 6.3 Maximum Entropy Discrimination Recently, Maximum Entropy Discrimination (MED) was proposed as an alternative criterion for estimating discriminative exponential densities [4] [5] and was shown to subsume SVMs. The technique integrates over discriminant functions like Eq. 1 but this is intractable under latent variable situations. However, if Jensen and reverse-Jensen bounds are used, the required computations can be done. This permits iterative MED solutions to obtain large margin mixture models and mixtures of SVMs (see web page). 7 Discussion We derived and proved an upper bound on the log-sum of e-farnily distributions that acts as the reverse of the Jensen lower bound. This tool has applications in conditional and discriminative learning for latent variable models. For further results, extensions, etc. see: http://www.media.mit.edu/ ~jebara/bounds. 8 Proof Starting from Eq. 2, we directly compute k and Ym by ensuring the variational bound makes tangential contact with the log-sum at (i.e. making their value and gradients equal). Substituting k and Yminto Eq. 2, we get constraints on W m via Bregman distances: e Define F m(El m) =IC(El m )-1C(8 m) -(El m -8 m )TIC' (8 m) . The F functions are convex and have a minimum (which is zero) at 8 m ? Replace the IC functions with F : Here, D= are constants and z=:=X=-K' (0=). Next, define a mapping from these bowlshaped functions to quadratics: F=(0=) = 9=(<1>=) = !(<I>=-0=f(<I>=-0=) This permits us to rewrite Eq. 2 in terms of <1>: L :m w=9(<I>=) ~ log L 'tcxp{ D=+0=(",=)T z =-!)("'=)} T -T L m h=(0=(<1>=)-0=) z= m cxP{Dm+0:mZm-OCE>m)} Let us find properties of the mapping F =9. (3) Take 2nd derivatives over <1>=: K"(0=)~ ~ T +(KI(0=)_KI(0=?)~2~= = 1 above, we get the following for a family of such mappings: ~ 8 = = In an e-farnily, we can always find a O;" such that X==K ' (0;"). By convexity we create a linear lower bound at 0;": Setting 0==0= 1 [K"(0=)]- 1/ 2. of F F(0;")+(0=-0;") Take 2nd derivatives over <1>=: a ~~~) F ' (0;") 10;" ~2:t:: ~ ~ F(0=) 1 = 9(<1>=) which is rewritten as: Z 20 m m. 811>in, a 20,m In Eq. 3, D=+0=(<I>=)T Z=-9(<I>=) is always concave since its Hessian is: Z= aa",= is negative. So, we upper bound these terms by a variational linear bound at 0=: L m w=9(<I>=) > - log L t cXP{D~+4>~[KII(0m)]-1/ 2 Zm} m. -T - CXP{Dm+07JLZm-O(E>m)} < - 1 -1 which - L m h=(0=(<1>=)-0=)TZ= Take 2nd derivatives of both sides with respect to each <1>= to obtain (after simplifications): wm 1> Z K"(0 m )- I ZT - h mmacl>~ Z a20m _m rn. If we invoke the constraint on w;", we can replace -h=Z= ~2:,m ~ w;"1. Manipulating, we get the constraint on w = (as a Loewner ordering here), guaranteeing a global upper bound: o 9 Acknowledgments The authors thank T. Minka, T. Jaakkola and K. Pop at for valuable discussions. References [1] Buntine, W. 
(1994). Operations for learning with graphical models. JAIR 2, 1994. [2] Dempster, AP. and Laird, N.M. and Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal o/the Royal Statistical Society, B39. [3] Gopalakrishnan, P.S. and Kanevsky, D. and Nadas, A and Nahamoo, D. (1991). An inequality for rational functions with applications to some statistical estimation problems, IEEE Trans. Information Theory, pp. 107-113, Jan. 1991. [4] Jaakkola, T. and Meila, M. and Jebara, T. (1999). Maximum entropy discrimination. NIPS 12. [5] Jebara, T. and Jaakkola, T. (2000). Feature selection and dualities in maximum entropy discrimination. DAI 2000. [6] Jebara, T. and Pentland, A (1998). Maximum conditional likelihood via bound maximization and the CEM algorithm. NIPS 11. [7] Jordan, M. Gharamani, Z. Jaakkola, T. and Saul, L. (1997). An introduction to variational methods for graphical models. Learning in Graphical Models , Kluwer Academic. [8] Pecaric, J.E. and Proschan, F. and Tong, Y.L. (1992). Convex Functions, Partial Orderings, and Statistical Applications. Academic Press. [9] Gharamani, Z. and Beal, M. (1999). Variational Inference for Bayesian Mixture of Factor Analysers, NIPS 12.
LEARNING SEQUENTIAL STRUCTURE IN SIMPLE RECURRENT NETWORKS

David Servan-Schreiber, Axel Cleeremans, and James L. McClelland
Departments of Computer Science and Psychology
Carnegie Mellon University
Pittsburgh, PA 15213

ABSTRACT

We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t−1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. Cluster analyses of the hidden-layer patterns of activation showed that they encode prediction-relevant information about the entire path traversed through the network. We illustrate the phases of learning with cluster analyses performed at different points during training.

Several connectionist architectures that are explicitly constrained to capture sequential information have been developed. Examples are Time Delay Networks (e.g., Sejnowski & Rosenberg, 1986) -- also called 'moving window' paradigms -- or algorithms such as back-propagation in time (Rumelhart, Hinton & Williams, 1986). Such architectures use explicit representations of several consecutive events, if not of the entire history of past inputs. Recently, Elman (1988) has introduced a simple recurrent network (SRN) that has the potential to master an infinite corpus of sequences with the limited means of a learning procedure that is completely local in time (see Figure 1).

Figure 1. The simple recurrent network (Elman, 1988)

In the SRN, the pattern of activation on the hidden units at time t−1, together with the new input pattern, is allowed to influence the pattern of activation at time t. This is achieved by copying the pattern of activation on the hidden layer at time t−1 to a set of input units -- called the 'context units' -- at time t. The forward connections in the network are subject to training via back-propagation, but there is no backpropagation through time. In this paper, we show that the SRN can learn to mimic closely a finite-state automaton, both in its behavior and in its state representations. In particular, we show that it can learn to process an infinite corpus of strings based on experience with a finite set of training exemplars. We then describe the phases through which the appropriate internal representations are discovered during training.

MASTERING A FINITE STATE GRAMMAR

In our first experiment, we asked whether the network could learn the contingencies implied by a small finite-state grammar (see Figure 2). The network was presented with strings derived from this grammar, and was required to try to predict the next letter at every step. These predictions are context dependent since each letter appears twice in the grammar and is followed in each case by different successors. A single unit on the input layer represented a given letter (six input units in total; five for the letters and one for a begin symbol 'B'). Similar local representations were used on the output layer (with the 'begin' symbol being replaced by an end symbol 'E'). There were three hidden units.

Figure 2. The small finite-state grammar (Reber, 1967)

Training. On each of 60,000 training trials, a string was generated from the grammar, starting with 'B'.
Successive arcs were selected randomly from the 2 possible continuations with a probability of 0.5. Each letter was then presented sequentially to the network. The activations of the context units were reset to 0.5 at the beginning of each string. After each letter, the error between the network's prediction and the actual successor specified by the string was computed and back-propagated. The 60,000 randomly generated strings ranged from 3 to 30 letters (mean: 7; sd: 3.3).

Performance. Three tests were conducted. First, we examined the network's predictions on a set of 70,000 random strings. During this test, the network is first presented with the start signal, and one of the five letters or E is then selected at random as a successor. If that letter is predicted by the network as a legal successor (i.e., activation is above 0.3 for the corresponding unit), it is then presented to the input layer on the next time step, and another letter is drawn at random as its successor. This procedure is repeated as long as each letter is predicted as a legal successor, until the end signal is selected as the next letter. The procedure is interrupted as soon as the actual successor generated by the random procedure is not predicted by the network, and the string of letters is then considered 'rejected'. A string is considered 'accepted' if all its letters have been predicted as possible continuations up to, and including, the end signal. Of the 70,000 random strings, 0.3% were grammatical and 99.7% were ungrammatical. The network performed flawlessly, accepting all the grammatical strings and rejecting all the others. In a second test, we presented the network with 20,000 strings generated at random from the grammar, i.e., all of these strings were grammatical. Using the same criterion as above, all of these strings were correctly 'accepted'. Finally, we constructed a set of very long grammatical strings -- more than 100 letters long -- and verified that at each step the network correctly predicted all the possible successors (activations above 0.3) and none of the other letters in the grammar.

Analysis of internal representations. What kind of internal representations have developed over the set of hidden units that allow the network to associate the proper predictions to intrinsically ambiguous letters? One way to answer this question is to record the hidden units' activation patterns generated in response to the presentation of individual letters in different contexts. These activation vectors can then be used as input to a cluster analysis program. Figure 3.A. shows the results of such an analysis conducted on a small random set of grammatical strings. The patterns of activation are grouped according to the nodes of the grammar: all the patterns that are used to predict the successors of a given node are grouped together, independently of the current letter. This observation sheds some light on the behavior of the network: at each point in a sequence, the pattern of activation stored over the context units provides information about the current node in the grammar. Together with information about the current letter (represented on the input layer), this contextual information is used to produce a new pattern of activation over the hidden layer that uniquely specifies the next node. In that sense, the network closely approximates the finite-state automaton that would encode the grammar from which the training exemplars were derived.
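The training environment just described is small enough to reconstruct exactly. The following Python sketch (ours, not from the paper; the transition table is transcribed from Figure 2, and the function names are invented for illustration) generates grammatical strings and enumerates the legal successors against which the 0.3 activation criterion is checked:

```python
import random

# Transition table of the Reber grammar in Figure 2: node -> [(letter, next node)].
# Node 0 is entered after the begin symbol 'B'; node 5 emits the end symbol 'E'.
REBER_ARCS = {
    0: [('T', 1), ('P', 2)],
    1: [('S', 1), ('X', 3)],
    2: [('T', 2), ('V', 4)],
    3: [('X', 2), ('S', 5)],
    4: [('P', 3), ('V', 5)],
}

def generate_string(rng=random):
    """Generate one grammatical string such as 'BTSSXXVVE'."""
    node, letters = 0, ['B']
    while node != 5:
        letter, node = rng.choice(REBER_ARCS[node])  # each arc with probability 0.5
        letters.append(letter)
    return ''.join(letters) + 'E'

def legal_successors(prefix):
    """Letters that may legally follow a prefix like 'BTSS'; None if ungrammatical."""
    node = 0
    for letter in prefix[1:]:                  # skip the begin symbol
        arcs = dict(REBER_ARCS.get(node, []))
        if letter not in arcs:
            return None
        node = arcs[letter]
    return ['E'] if node == 5 else [l for l, _ in REBER_ARCS[node]]
```

Under this sketch, a string is 'accepted' in the sense used above when every letter, including the final 'E', appears in legal_successors() of the preceding prefix; in the network, the 0.3 activation threshold plays that role.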
However, a closer look at the cluster analysis reveals that, within a cluster corresponding to a particular node, patterns are further divided according to the path traversed before the node is reached. For example, looking at the bottom cluster -- node #5 -- patterns produced by a 'VV', 'PS', 'XS' or 'SXS' ending are grouped separately:

Figure 3. A. Hierarchical cluster analysis of the hidden unit activation patterns after 60,000 presentations of strings generated at random from the finite-state grammar. B. Cluster analysis of the H.U. activation patterns following 2,000 epochs of training on a set of 22 strings with a maximum length of eight letters.

they are more similar to each other than to the abstract prototype of node #5. This tendency to preserve information about the path is not a characteristic of traditional finite-state automata.

ENCODING PATH INFORMATION

In a different set of experiments, we asked whether the SRN could learn to use the information about the path that is encoded in the hidden units' patterns of activation. In one of these experiments, we tested whether the network could master length constraints. When strings generated from the small finite-state grammar may only have a maximum of 8 letters, the prediction following the presentation of the same letter in position number six or seven may be different. For example, following the sequence 'TSSSXXV', 'V' is the seventh letter and only another 'V' would be a legal successor. In contrast, following the sequence 'TSSXXV', both 'V' and 'P' are legal successors. A network with 15 hidden units was trained on a small set of length-limited (max. 8 letters) grammatical strings. It was able to use the small activation differences present over the context units -- due to the slightly different sequences presented -- to master contingencies such as those illustrated above (see Table 1).

Table 1. Activation of each output unit following the presentation of 'V' as the 6th or 7th letter in the string

              T      S      P      X      V      E
  tssxxV     0.0    0.0    0.54   0.0    0.48   0.0
  tsssxxV    0.0    0.0    0.02   0.0    0.97   0.0

A cluster analysis of all the patterns of activation on the hidden layer generated by each letter in each sequence demonstrates how the influence of the path is reflected in these patterns (see Figure 3.B.)*. We labeled the arcs according to the letter being presented (the 'current letter') and its position in the grammar defined by Reber. Thus 'V1' refers to the first 'V' in the grammar and 'V2' to the second 'V', which immediately precedes the end of the string. 'Early' and 'Late' refer to whether the letter occurred early or late in the sequence (for example, in 'PT..' 'T2' occurs early; in 'PVPXT..' it occurs late). Finally, in the left margin we indicated what predictions the corresponding patterns yield on the output layer (e.g., the hidden unit pattern generated by 'BEGIN' predicts 'T' or 'P'). From the figure, it can be seen that the patterns are grouped according to three distinct principles: (1) according to similar predictions, (2) according to similar letters presented on the input units, and (3) according to similar paths. These factors do not necessarily overlap, since several occurrences of the same letter in a sequence usually imply different predictions, and since similar paths also lead to different predictions depending on the current letter.
For example, the top cluster in the figure corresponds to all occurrences of the letter 'V' and is further subdivided among 'V1' and 'V2'.

* Information about the leaves of the cluster analyses in this and the remaining figures is available in Servan-Schreiber, Cleeremans and McClelland (1988).

The 'V1' cluster is itself further divided between groups where 'V1' occurs early in the sequence (e.g., 'pV...') and groups where it occurs later (e.g., 'tssxxV...' and 'pvpxV...'). Note that the division according to the path does not necessarily correspond to different predictions. For example, 'V2' always predicts 'END', and always with maximum certainty. Nevertheless, sequences up to 'V2' are divided according to the path traversed.

PHASES OF LEARNING

How can information about the path be progressively encoded in the hidden layer patterns of activation? To clarify how the network learns to use the context of preceding letters in a sequence, we will illustrate the different phases of learning with cluster analyses of the hidden layer patterns generated at each phase. To make the analyses simpler, we used a smaller training set than the training set mentioned previously. The corresponding finite-state grammar is shown in Figure 4. In this simpler grammar, the main difference -- besides the reduced number of patterns -- is that the letters 'P' and 'T' appear only once.

Figure 4. The reduced finite-state grammar from which 12 strings were generated for training

Discovering letters. At epoch 0, before the network has received any training, the hidden unit patterns clearly show an organization by letter: to each letter corresponds an individual cluster. These clusters are already subdivided according to preceding sequences -- the 'path'. This fact illustrates how a pattern of activation on the context units naturally tends to encode the path traversed so far, independently of any error correcting procedure. The average distance between the different patterns -- the 'contrast' as it were -- is nonetheless rather small; the scale only goes up to 0.6 (see Figure 5.A.)**. But this is due to the very small initial random values of the weights from the input and context layers to the hidden layer. Larger initial values would enhance the network's tendency to capture path information in the hidden unit patterns before training is even started.

** In all the following figures, the scale was automatically determined by the cluster analysis program. It is important to keep this in mind when comparing the figures to each other.

Figure 5. Cluster analyses of the H.U. activation patterns obtained with the reduced set of strings: A. Before training. B. After 100 epochs of training. C. After 700 epochs of training.

After 100 epochs of training, an organization by letters is still prevalent; however, letters have been regrouped according to similar predictions. 'START', 'P' and 'S' all make the common prediction of 'X or S' (although 'S' also predicts 'END'); 'T' and 'V' make the common prediction of 'V' (although 'V' also predicts 'END' and 'P'). The path information has been almost eliminated: there is very little difference between the patterns generated by two different occurrences of the same letter (see Figure 5.B.).
For example, the hidden layer pattern generated by 'S1' and the corresponding output pattern are almost identical to the patterns generated by 'S2' (see Table 2).

Table 2. Activation of each output unit following the presentation of the first S in the grammar (S1) or the second S (S2) after 100 epochs of training

          T      S      P      X      V      E
  S1     0.0    0.36   0.0    0.33   0.16   0.17
  S2     0.0    0.37   0.0    0.33   0.16   0.17

In this phase, the network is learning to ignore the pattern of activation on the context units and to produce an output pattern appropriate to the letter 'S' in any context. This is a direct consequence of the fact that the patterns of activation on the hidden layer -- and hence the context layer -- are continuously changing from one epoch to the next as the weights from the input units (the letters) to the hidden layer are modified. Consequently, adjustments made to the weights from the context layer to the hidden layer are inconsistent from epoch to epoch and cancel each other. In contrast, the network is able to pick up the stable association between each letter and all of its possible successors.

Discovering arcs. At the end of this phase, individual letters consistently generate a unique pattern of activation on the hidden layer. This is a crucial step in developing a sensitivity to context: patterns copied onto the context layer have become a unique code designating which letter immediately preceded the current letter. The learning procedure can now exploit the regular association between the pattern on the context layer and the desired output. Around epoch 700, the cluster analysis shows that the network has used this information to differentiate clearly between the first and second occurrence of the same letter (Figure 5.C.). The pattern generated by 'S2' -- which predicts 'END' -- clusters with the pattern generated by 'V2', which also predicts 'END'. The overall difference between all the hidden layer patterns has also more than roughly doubled, as indicated by the change in scale.

Encoding the path. During the last phase of learning, the network learns to make different predictions to the same occurrence of a letter (e.g., 'V1')
However, it is important to note that information about the path that is not relevant locally (Le, that does not contribute to predicting successors of the current letter) tends not to be encoded in the next hidden layer pattern. It may then be lost for subsequent processing. This tendency is lessened when the network has extra degrees of freedom -- i.e, extra hidden units -- so as to allow small and locally useless differences to survive for several processing steps. CONCLUSION We have shown that the network architecture first proposed by Elman (1988) is capable of mastering an infinite coIpus of strings generated from a finite-state grammar after training on a finite set of exemplars with a learning algorithm that is local in time. The network develops internal representations that correspond to the nodes of the grammar and closely approximates the corresponding minimal finite-state recognizer. We have also shown that the simple recurrent network is able to encode information about contingencies that are not local to a given letter and its immediate predecessor, such as those implied by a length constraint on the strings. Encoding of sequential structure in the patterns of activation over the hidden layers proceeds in stages. The network first develops stable hidden-layer representations for individual letters, and then for individual arcs in the grammar. Finally, the network is able to exploit slight differences in the patterns of activation which denote a specific path through the grammar. Our current work is exploring the relevance of this architecture to the processing of embedded sequences typical of natural language. The results of some preliminary experiments are available in Servan-Schreiber, Cleeremans and McClelland (1988). 651 652 Servan-Schreiber, Cleeremans and McClelland References Elman. J.L. (1988). Finding structure in time. CRL Technical report 9901. Center for Research in Language. University of California. San Diego. Reber. A.S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior. S. 855-863. Rumelhart. D.E .? Hinton. G.E .? and Williams. R.I. (1986). Learning internal representations by backpropagating errors. Nature 323:533-536. Sejnowski. T J. and Rosenberg C. (1986). NETta1k: A parallel network that learns to read aloud. Technical Report.lohns Hopkins University lHU-EECS-86-01. Servan-Schreiber D. Cleeremans A. and McClelland JL (1988) Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183. Computer Science Department. Carnegie Mellon University. Pittsburgh. PA 15213. Williams. R.J. and Zipser. D. (1988). A learning algorithm for continually running fully recurrent neural networks. ICS Technical report 8805. Institute for Cognitive Science. UCSD. La lolla. CA 92093.
Sparse Greedy Gaussian Process Regression

Alex J. Smola*
RSISE and Department of Engineering
Australian National University
Canberra, ACT, 0200
Alex.Smola@anu.edu.au

Peter Bartlett
RSISE
Australian National University
Canberra, ACT, 0200
Peter.Bartlett@anu.edu.au

Abstract

We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n²m), storage is O(nm), the cost for prediction is O(n) and the cost to compute confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large scale problems.

1 Introduction

Gaussian processes have become popular because they allow exact Bayesian analysis with simple matrix manipulations, yet provide good performance. They share with Support Vector machines and Regularization Networks the concept of regularization via Reproducing Kernel Hilbert spaces [3], that is, they allow the direct specification of the smoothness properties of the class of functions under consideration. However, Gaussian processes are not always the method of choice for large datasets, since they involve evaluations of the covariance function at m points (where m denotes the sample size) in order to carry out inference at a single additional point. This may be rather costly to implement -- practitioners prefer to use only a small number of basis functions (i.e., covariance function evaluations). Furthermore, the Maximum a Posteriori (MAP) estimate requires computation, storage, and inversion of the full m × m covariance matrix K_ij = k(x_i, x_j), where x_1, ..., x_m are training patterns. While there exist techniques [2, 8] to reduce the computational cost of finding an estimate to O(km²) rather than O(m³) when the covariance matrix contains a significant number of small eigenvalues, all these methods still require computation and storage of the full covariance matrix. None of these methods addresses the problem of speeding up the prediction stage (except for the rare case when the integral operator corresponding to the kernel can be diagonalized analytically [8]).

We devise a sparse greedy method, similar to those proposed in the context of wavelets [4], solutions of linear systems [5], or matrix approximation [7], that finds

*Supported by the DFG (Sm 62-1) and the Australian Research Council.
Conditioned on the data (X,y), the output y(x) is normally distributed. It follows that the mean of this distribution is the maximum a posteriori probability (MAP) estimate of y. We are interested in estimating this mean, and also the variance. e e e It is possible to give an equivalent parametric representation of y that is more convenient for our purposes. We may assume that the vector y = (y(Xl)"" ,y(xm))T of outputs is generated by e y=Ka+e, (2) where a rv N(O, K- 1 ) and rv N(O, ( 2 1). Consequently the posterior probability p(aly, X) over the latent variables a is proportional to exp(-2;21Iy - Ka11 2) exp(-!a TKa) (3) and the conditional expectation of y(x) for a (new) location X is E[y(x)ly,X] = k T aopt, where k T denotes the vector (k( Xl. x), ... , k (x m , x)) and aopt is the value of a that maximizes (3). Thus, it suffices to compute aopt before any predictions are required. The problem of choosing the MAP estimate of a is equivalent to the problem of minimizing the negative log-posterior, (4) minimize [-y T Ka + !a T (a 2 K + KT K) a] aEW" (ignoring constant terms and rescaling by ( 2 ). It is easy to show that the mean of the conditional distribution of y(x) is k T (K +( 21)-ly, and its variance is k(x, x) + a 2 - k T (K + ( 21)-lk (see, for example, [2]). 3 Approximate Minimization of Quadratic Forms For Gaussian process regression, searching for an approximate solution to (4) relies on the assumption that a set of variables whose posterior probability is close to that of the mode of the distribution will be a good approximation for the MAP estimate. The following theorem suggests a simple approach to estimating the accuracy of an approximate solution to (4). It uses an idea from [2] in a modified, slightly more general form. Theorem 1 (Approximation Bounds for Quadratic Forms) Denote by K E lRmxm a positive semidefinite matrix, y, a E lRm and define the two quadratic forms Q(a) := -y T Ka 1 + _aT (a 2 K + KT K)a, 2 (5) Q*(a) := -y Ta 1 + _aT (a 2 1 + K)a. (6) 2 Suppose Q and Q* have minima Qmin and Q:nn. Then for all a, a* E IRffl we have _~IIYI12 - Qmin ::::: Q*(a*)::::: Q;',.in :::::a-2(_~IIYI12_Q(a)), with equalities throughout when Q(a) (7) a 2 Q*(a*), Q(a)::::: (8) = Qmin and Q*(a*) = Q;',.in. Hence, by minimizing Q* in addition to Q we can bound Q's closeness to the optimum and vice versa. Proof The minimum of Q(a) is obtained for aopt minimizes Q*), hence Qmin = 1 T -2"Y K(K +a 2 1) -1 y * and Qmin = = (K 1 + a 21)-1y T (which also 2-1 -2"Y (K +a 1) y. (9) This allows us to combine Qmin and Q;',.in to Qmin + a 2 Q;',.in = _~llyI12. Since by definition Q (a) ::::: Qmin for all a (and likewise Q* (a*) ::::: Q;',.in for all a*) we may solve Qmin + a 2 Q;',.in for either Q or Q* to obtain lower bounds for each of the two quantities. This proves (7) and (8). ? Equation (7) is useful for computing an approximation to the MAP solution, whereas (8) can be used to obtain error bars on the estimate. To see this, note that in calculating the variance, the expensive quantity to compute is -kT (K +a 21)-1k. However, this can be found by solving (10) minimize [-k Ta + ~a T (a 2 1 + K) a] , aEIRm and the expression inside the parentheses is Q*(a) with y = k (see (6)). Hence, an approximate minimizer of (10) gives an upper bound on the error bars, and lower bounds can be obtained from (8) . 2 1 2 I . ?11 h . ( *) .- 2(Q(a)+u Q*(a*)+2liYli) ? h n practice we W1 use t e quantly gap a, a .- -Q(a)+u2Q * (a*)+~liYli2 ,I.e. 
t e relative size of the difference between upper and lower bound as stopping criterion. 4 A Sparse Greedy Algorithm The central idea is that in order to obtain a faster algorithm, one has to reduce the number of free variables. Denote by P E IRffl xn with m ::::: nand m,n E N an extension matrix (Le. p T is a projection) with p T P = 1. We will make the ansatz ap := P[3 where [3 E IRn (11) and find solutions [3 such that Q(ap) (or Q*(ap)) is minimized. The solution is [3opt = (pT (a 2 K + K T K) p) -1 p T K T y. (12) Clearly if Pis ofrank m, this will also be the solution of (4) (the minimum negative log posterior for all a E IRffl ). In all other cases, however, it is an approximation. Computational Cost of Greedy Decompositions For a given P E IRffl xn let us analyze the computational cost involved in the estimation procedures. To compute (12) we need to evaluate pT Ky which is O(nm), (KP)T(KP) which is O(n 2 m) and invert an n x n matrix, which is O(n 3 ). Hence the total cost is O(n 2 m). Predictions then cost only k T a which is O(n). Using P also to minimize Q*(P[3*) costs no more than O(n 3 ), which is needed to upper-bound the log posterior. For error bars, we have to approximately minimize (10) which can done for a = P(3 at O(n 3 ) cost. If we compute (PKpT)-l beforehand, this can be done by at O(n 2 ) and likewise for upper bounds. We have to minimize -k T K P(3 + !(3T pT ((72 K + KT K)P(3 which costs O(n 2 m) (once the inverse matrices have been computed, one may, however, use them to compute error bars at different locations, too, thus costing only O(n 2 )). The lower bounds on the error bars may not be so crucial, since a bad estimate will only lead to overly conservative confidence intervals and not have any other negative effect. Finally note that all we ever have to compute and store is K P, i.e. the m x n submatrix of K rather than K itself. Table 1 summarizes the scaling behaviour of several optimization algorithms. Exact Solution Memory Initialization Pred. Mean Error Bars Conjugate Gradient [2] O(m~) O(m~) O(m;j) O(nm:l) g~:~) g~~~2) Optimal Sparse Decomposition O(nm) O(n:lm) O(n2 O(n 2 m) or O(n 2 ) Sparse Greedy Approximation O(nm) o (K.n:lm) O(n) O(K.n 2 m) or O(n 2 ) Table 1: Computational Cost of Optimization Methods. Note that n <t:: m and also note that the n used in Conjugate Gradient, Sparse Decomposition, and Sparse Greedy Approximation methods will differ, with neG ::; nSD ::; nSGA since the search spaces are more restricted. K. = 60 gives near-optimal results. Sparse Greedy Approxhnation Several choices for P are possible, including choosing the principal components of K [8], using conjugate gradient descent to minimize Q [2], symmetric Cholesky factorization [1], or using a sparse greedy approximation of K [7]. Yet these methods have the disadvantage that they either do not take the specific form of y into account [8, 7] or lead to expansions that cost O(m) for prediction and require computation and storage of the full matrix [8, 2]. If we require a sparse expansion of y (x) in terms of k( Xi, x) (i.e. many ai in y = k T a will be 0) we must consider matrices P that are a collection of unit vectors ei (here (ei)j = Oij). We use a greedy approach to find a good approximation. First, for n = 1, we choose P = ei such that Q(P(3) is minimal. In this case we could permit ourselves to consider all possible indices i E {I, ... m} and find the best one by trying out all of them. Next assume that we have found a good solution P(3 where P contains n columns. 
In order to improve this solution, we may expand P into the matrix P_new := [P_old, e_i] ∈ ℝ^{m×(n+1)} and seek the best e_i such that P_new minimizes min_β Q(P_new β). (Performing a full search over all possible n + 1 out of m indices would be too costly.) This greedy approach to finding a sparse approximate solution is described in Algorithm 1. The algorithm also maintains an approximate minimum of Q*, and exploits the bounds of Theorem 1 to determine when the approximation is sufficiently accurate. (Note that we leave unspecified how the subsets M ⊆ I, M* ⊆ I* are chosen. Assume for now that we choose M = I, M* = I*, the full set of indices that have not yet been selected.) This method is very similar to Matching Pursuit [4] or iterative reduced set Support Vector algorithms [6], with the difference that the target to be approximated (the full solution α) is only given implicitly via Q(α).

Approximation Quality. Natarajan [5] studies the following Sparse Linear Approximation problem: given A ∈ ℝ^{m×n}, b ∈ ℝᵐ, ε > 0, find x ∈ ℝⁿ with a minimal number of nonzero entries such that ‖Ax − b‖² ≤ ε. If we define A := (σ²K + KᵀK)^{1/2} and b := A⁻¹Ky, then we may write Q(α) = ½‖b − Aα‖² + c, where c is a constant independent of α. Thus the problem of sparse approximate minimization of Q(α) is a special case of Natarajan's problem (where the matrix A is square, symmetric, and positive definite). In addition, the algorithm considered by Natarajan in [5] involves sequentially choosing columns of A to maximally decrease ‖Ax − b‖. This is clearly equivalent to the sparse greedy algorithm described above. Hence, it is straightforward to obtain the following result from Theorem 2 in [5].

Theorem 2 (Approximation Rate) Algorithm 1 achieves Q(α) ≤ Q(α_opt) + ε when α has

n ≤ (18 n*(ε/4) / λ_1) ln(‖A⁻¹Ky‖ / ε)

non-zero components, where n*(ε/4) is the minimal number of nonzero components in vectors α for which Q(α) ≤ Q(α_opt) + ε/4, A = (σ²K + KᵀK)^{1/2}, and λ_1 is the minimum of the magnitudes of the singular values of Â, the matrix obtained by normalizing the columns of A.

Randomized Algorithms for Subset Selection. Unfortunately, the approximation algorithm considered above is still too expensive for large m, since each search operation involves O(m) indices. Yet, if we are satisfied with finding a relatively good index rather than the best, we may resort to selecting a random subset of size κ ≪ m. In Algorithm 1, this corresponds to choosing M ⊆ I, M* ⊆ I* as random subsets of size κ. In fact, a constant value of κ will typically suffice. To see why, we recall a simple lemma from [7]: the cumulative distribution function of the maximum of m i.i.d. random variables ξ_1, ..., ξ_m is F(ξ)ᵐ, where F(ξ) is the cdf of ξ. Thus, in order to find a column to add to P that is with probability 0.95 among the best 0.05 of all such columns, a random subsample of size ⌈log 0.05 / log 0.95⌉ = 59 will suffice.

Algorithm 1: Sparse Greedy Quadratic Minimization.
  Require: Training data X = {x_1, ..., x_m}, targets y, noise σ², precision ε
  Initialize index sets I, I* = {1, ..., m}; S, S* = ∅.
  repeat
    Choose M ⊆ I, M* ⊆ I*.
    Find argmin_{i∈M} Q([P, e_i] β_opt), argmin_{i*∈M*} Q*([P*, e_{i*}] β*_opt).
    Move i from I to S, i* from I* to S*.
    Set P := [P, e_i], P* := [P*, e_{i*}].
  until Q(Pβ_opt) + σ²Q*(P*β*_opt) + ½‖y‖² ≤ (ε/2)(|Q(Pβ_opt)| + |σ²Q*(P*β*_opt) + ½‖y‖²|)
  Output: Set of indices S, β_opt, (PᵀKP)⁻¹, and (Pᵀ(KᵀK + σ²K)P)⁻¹.
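Algorithm 1 is compact enough to prototype directly. The NumPy sketch below is our reading of the Q-minimizing half only (the Q* branch needed for the stopping rule is symmetric and omitted); it also recomputes each candidate inverse from scratch instead of applying the rank-one update discussed in the next section, so it trades the stated efficiency for brevity:

```python
import numpy as np

def sparse_greedy_gp(K, y, sigma2, n_max, kappa=59, seed=0):
    """Greedy basis selection minimizing Q (the Q-half of Algorithm 1, sketched).

    Returns selected indices S and coefficients beta; the approximate
    posterior mean at a new point x is then k_S(x)^T beta.
    """
    rng = np.random.default_rng(seed)
    Ky = K @ y
    S, remaining = [], list(range(len(y)))

    def q_min(idx):
        KP = K[:, idx]                                # the m x n submatrix KP
        A = sigma2 * K[np.ix_(idx, idx)] + KP.T @ KP  # P^T (sigma^2 K + K^T K) P
        beta = np.linalg.solve(A, Ky[idx])            # beta_opt, Eq. (12)
        return -0.5 * Ky[idx] @ beta, beta            # Q(P beta_opt) at the optimum

    for _ in range(n_max):
        cand = rng.choice(remaining, size=min(kappa, len(remaining)), replace=False)
        best = min((q_min(S + [int(i)])[0], int(i)) for i in cand)[1]
        S.append(best)
        remaining.remove(best)
    return S, q_min(S)[1]
```

Note that only the columns K[:, S] of the covariance matrix enter, matching the observation above that KP is all that ever needs to be computed and stored.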
Numerical Considerations. The crucial part is to obtain the values of Q(Pβ_opt) cheaply (with P = [P_old, e_i]), provided we solved the problem for P_old. From (12) one can see that all that needs to be done is a rank-one update on the inverse. In the following we will show that this can be obtained in O(mn) operations, provided the inverse of the smaller subsystem is known. Expressing the relevant terms using P_old and k_i, we obtain

PᵀKᵀy = [P_old, e_i]ᵀKᵀy = (P_oldᵀKᵀy, k_iᵀy)

Pᵀ(KᵀK + σ²K)P = [ P_oldᵀ(KᵀK + σ²K)P_old    P_oldᵀ(Kᵀ + σ²I)k_i ;
                   k_iᵀ(K + σ²I)P_old          k_iᵀk_i + σ²K_ii ]

Thus, computation of the terms costs only O(nm), given the values for P_old. Furthermore, it is easy to verify that we can write the inverse of a symmetric positive semidefinite matrix as

[ A  B ; Bᵀ C ]⁻¹ = [ A⁻¹ + A⁻¹B γ BᵀA⁻¹    −A⁻¹B γ ; −γ BᵀA⁻¹    γ ],    (13)

where γ := (C − BᵀA⁻¹B)⁻¹. Hence, inversion of Pᵀ(KᵀK + σ²K)P costs only O(n²). Thus, to find P of size m × n takes O(κn²m) time. For the error bars, (PᵀKP)⁻¹ will generally be a good starting value for the minimization of (10), so the typical cost for (10) will be O(τmn) for some τ < n, rather than O(mn²). Finally, for added numerical stability one may want to use an incremental Cholesky factorization in (13) instead of the inverse of a matrix.

5 Experiments and Discussion

We used the Abalone dataset from the UCI Repository to investigate the properties of the algorithm. The dataset is of size 4177, split into 4000 training and 177 testing samples to analyze the numerical performance, and a (3000, 1177) split to assess the generalization error (the latter was needed in order to be able to invert, and keep in memory, the full matrix K + σ²I for a comparison). The data was rescaled to zero mean and unit variance coordinate-wise. Finally, the gender encoding in Abalone (male/female/infant) was mapped into {(1,0,0), (0,1,0), (0,0,1)}.

In all our experiments we used Gaussian kernels k(x, x′) = exp(−‖x − x′‖²/(2ω²)) as covariance kernels. Figure 1 analyzes the speed of convergence for different κ.

Figure 1: Speed of convergence. We plot the size of the gap between upper and lower bound of the log posterior (gap(α, α*)) for the first 4000 samples from the Abalone dataset (σ² = 0.1 and 2ω² = 10). From top to bottom: subsets of size 1, 2, 5, 10, 20, 50, 100, 200. The results were averaged over 10 runs. The relative variance of the gap size was less than 10%.

One can see that subsets of size 50 and above ensure rapid convergence. For the optimal parameters (2σ² = 0.1 and 2ω² = 10, chosen after [7]), the average test error of the sparse greedy approximation trained until gap(α, α*) < 0.025 on a (3000, 1177) split was 1.785 ± 0.32 (the results were averaged over ten independent choices of training sets), slightly worse than for the GP estimate (1.782 ± 0.33). The log posterior was −1.572·10⁵ (1 ± 0.005), the optimal value −1.571·10⁵ (1 ± 0.005). Hence, for all practical purposes, full inversion of the covariance matrix and the sparse greedy approximation have statistically indistinguishable generalization performance. In a third experiment (Table 2) we analyzed the number of basis functions needed to minimize the log posterior to gap(α, α*) < 0.025, depending on different choices of the kernel width ω. In all cases, less than 10% of the kernel functions suffice to find a good minimizer of the log posterior; for the error bars, even less than 2% are sufficient.
This is a dramatic improvement over previous techniques.

Table 2: Number of basis functions needed to minimize the log posterior on the Abalone dataset (4000 training samples), depending on the width ω of the kernel. Also, the number of basis functions required to approximate kᵀ(K + σ²I)⁻¹k, which is needed to compute the error bars. We averaged over the remaining 177 test samples.

  Kernel width 2ω²              1       2       5       10      20      50
  Kernels for log-posterior     373     287     255     257     251     270
  Kernels for error bars        79±61   49±43   26±27   17±16   12±9    8±5

To ensure that our results were not dataset specific and that the algorithm scales well, we tested it on a larger synthetic dataset of size 10,000 in 20 dimensions, distributed according to N(0, 1). The data was generated by adding normal noise with variance σ² = 0.1 to a function consisting of 200 randomly chosen Gaussians of width 2ω² = 40 and normally distributed coefficients and centers. We purposely chose an inadequate Gaussian process prior (but correct noise level) of Gaussians with width 2ω² = 10 in order to avoid trivial sparse expansions. After 500 iterations (i.e., after using 5% of all basis functions) the size of gap(α, α*) was less than 0.023 (note that this problem is too large to be solved exactly). We believe that sparse greedy approximation methods are a key technique to scale up Gaussian Process regression to sample sizes of 10,000 and beyond. The techniques presented in the paper, however, are by no means limited to regression. Work on the solutions of dense quadratic programs and classification problems is in progress.

The authors thank Bob Williamson and Bernhard Schölkopf.

References

[1] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representation. Technical report, IBM Watson Research Center, New York, 2000.
[2] M. Gibbs and D.J.C. MacKay. Efficient implementation of Gaussian processes. Technical report, Cavendish Laboratory, Cambridge, UK, 1997.
[3] F. Girosi. An equivalence between sparse approximation and support vector machines. Neural Computation, 10(6):1455-1480, 1998.
[4] S. Mallat and Z. Zhang. Matching pursuit in a time-frequency dictionary. IEEE Transactions on Signal Processing, 41:3397-3415, 1993.
[5] B.K. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal of Computing, 25(2):227-234, 1995.
[6] B. Schölkopf, S. Mika, C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000-1017, 1999.
[7] A.J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In P. Langley, editor, Proceedings of the 17th International Conference on Machine Learning, pages 911-918, San Francisco, 2000. Morgan Kaufmann.
[8] C.K.I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In P. Langley, editor, Proceedings of the Seventeenth International Conference on Machine Learning, pages 1159-1166, San Francisco, California, 2000. Morgan Kaufmann.
Generalizable Singular Value Decomposition for Ill-posed Datasets

Ulrik Kjems, Lars K. Hansen
Department of Mathematical Modelling
Technical University of Denmark
DK-2800 Kgs. Lyngby, Denmark
{uk, lkhansen}@imm.dtu.dk

Stephen C. Strother
PET Imaging Service
VA Medical Center, Minneapolis
steve@pet.med.va.gov

Abstract

We demonstrate that statistical analysis of ill-posed data sets is subject to a bias, which can be observed when projecting independent test set examples onto a basis defined by the training examples. Because the training examples in an ill-posed data set do not fully span the signal space, the observed training set variances in each basis vector will be too high compared to the average variance of the test set projections onto the same basis vectors. On the basis of this understanding we introduce the Generalizable Singular Value Decomposition (GenSVD) as a means to reduce this bias by re-estimation of the singular values obtained in a conventional Singular Value Decomposition, allowing for a generalization performance increase of a subsequent statistical model. We demonstrate that the algorithm successfully corrects bias in a data set from a functional PET activation study of the human brain.

1 Ill-posed Data Sets

An ill-posed data set has more dimensions in each example than there are examples. Such data sets occur in many fields of research, typically in connection with image measurements. The associated statistical problem is that of extracting structure from the observed high-dimensional vectors in the presence of noise. The statistical analysis can be done either supervised (i.e., modelling with target values: classification, regression) or unsupervised (modelling with no target values: clustering, PCA, ICA). In both types of analysis the ill-posedness may lead to immediate problems if one tries to apply conventional statistical methods of analysis; for example, the empirical covariance matrix is prohibitively large and will be rank-deficient.

A common approach is to use Singular Value Decomposition (SVD) or the analogue Principal Component Analysis (PCA) to reduce the dimensionality of the data. Let the N observed I-dimensional samples x_j, j = 1..N, be collected in the data matrix X = [x_1 ... x_N] of size I × N, I > N. The SVD theorem states that such a matrix can be decomposed as

X = UΛVᵀ,    (1)

where U is a matrix of the same size as X with orthogonal basis vectors spanning the space of X, so that UᵀU = I_{N×N}. The square matrix Λ contains the singular values in the diagonal, Λ = diag(λ_1, ..., λ_N), which are ordered and positive, λ_1 ≥ λ_2 ≥ ... ≥ λ_N ≥ 0, and V is N × N and orthogonal, VᵀV = I_N. If there is a mean value significantly different from zero, it may at times be advantageous to perform the above analysis on mean-subtracted data, i.e., X − X̄ = UΛVᵀ, where the columns of X̄ all contain the mean vector x̄ = Σ_j x_j / N.

Each observation x_j can be expressed in coordinates in the basis defined by the vectors of U with no loss of information [Lautrup et al., 1995]. A change of basis is obtained by q_j = Uᵀx_j as the orthogonal basis rotation

Q = [q_1 ... q_N] = UᵀX = UᵀUΛVᵀ = ΛVᵀ.    (2)

Since Q is only N × N and N ≪ I, Q is a compact representation of the data. Having now N examples of N dimensions, we have reduced the problem to a marginally ill-posed one. To further reduce the dimensionality, it is common to retain only a subset of the coordinates, e.g., the top P coordinates (P < N), and the supervised or unsupervised model can be formed in this smaller but now well-posed space.
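Eqs. (1)-(2) correspond to a few lines of linear algebra. As a minimal sketch (ours; sizes are arbitrary, and NumPy's thin SVD plays the role of the decomposition above):

```python
import numpy as np

rng = np.random.default_rng(0)
I, N = 10_000, 40                    # ill-posed: far more dimensions than examples
X = rng.standard_normal((I, N))      # data matrix, one example per column

U, lam, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(lam) V^T, Eq. (1)
Q = np.diag(lam) @ Vt                # N x N coordinates, Q = U^T X = Lambda V^T, Eq. (2)

assert np.allclose(U @ Q, X)         # the change of basis is lossless
P = 5
Q_top = Q[:P, :]                     # retain only the top P coordinates for modelling
```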
So far we have considered the procedure for modelling from a training set. Our hope is that the statistical description generalizes well to new examples, proving that it is a good description of the generating process. The model should, in other words, perform well on a new example, $x^*$, and in the above framework this means the predictions based on $q^* = U^T x^*$ should generalize well. We will show in the following that, in general, the distribution of the test set projection $q^*$ is quite different from the statistics of the projections of the training examples $q_j$.

It has been noted in previous work [Hansen and Larsen, 1996, Roweis, 1998, Hansen et al., 1999] that PCA/SVD of ill-posed data does not by itself represent a probabilistic model where we can assign a likelihood to a new test data point, and procedures have been proposed which make this possible. In [Bishop, 1999] PCA has been considered in a Bayesian framework, but this does not address the significant bias of the variance in training set projections in ill-posed data sets. In [Jackson, 1991] an asymptotic expression is given for the bias of eigenvalues in a sample covariance matrix, but this expression is valid only in the well-posed case and is not applicable for ill-posed data.

1.1 Example

Let the signal source be an $I$-dimensional multivariate Gaussian distribution $\mathcal{N}(0, \Sigma)$ with a covariance matrix where the first $K$ eigenvalues equal $\sigma^2$ and the last $I - K$ are zero, so that the covariance matrix has the decomposition

$$\Sigma = \sigma^2 Y D Y^T, \quad D = \mathrm{diag}(1, \ldots, 1, 0, \ldots, 0), \quad Y^T Y = I \tag{3}$$

Our $N$ samples of the distribution are collected in the matrix $X = [x_{ij}]$ with the SVD

$$X = U \Lambda V^T \tag{4}$$

$\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$, and the representation of the $N$ examples in the $N$ basis vector coordinates defined by $U$ is $Q = [q_{ij}] = U^T X = \Lambda V^T$. The total variance per training example is

$$\frac{1}{N} \sum_{i,j} x_{ij}^2 = \frac{1}{N} \mathrm{Tr}(X^T X) = \frac{1}{N} \mathrm{Tr}(V \Lambda U^T U \Lambda V^T) = \frac{1}{N} \mathrm{Tr}(V^T V \Lambda^2) = \frac{1}{N} \mathrm{Tr}(\Lambda^2) = \frac{1}{N} \sum_i \lambda_i^2 \tag{5}$$

Note that this variance is the same in the $U$-basis coordinates:

$$\frac{1}{N} \sum_{i,j} q_{ij}^2 = \frac{1}{N} \mathrm{Tr}(Q^T Q) = \frac{1}{N} \mathrm{Tr}(V \Lambda^2 V^T) = \frac{1}{N} \sum_i \lambda_i^2 \tag{6}$$

We can derive the expected value of this variance:

$$\left\langle \frac{1}{N} \sum_{i,j} x_{ij}^2 \right\rangle = \left\langle x_1^T x_1 \right\rangle = \mathrm{Tr}\,\Sigma = \sigma^2 K \tag{7}$$

Now, consider a test example $x^* \sim \mathcal{N}(0, \Sigma)$; the projection $q^* = U^T x^*$ will have the average total variance

$$\left\langle \mathrm{Tr}[(U^T x^*)^T (U^T x^*)] \right\rangle = \mathrm{Tr}[\langle x^* x^{*T} \rangle U U^T] = \mathrm{Tr}[\Sigma U U^T] = \sigma^2 \min(N, K) \tag{8}$$

In summary, this means that the orthogonal basis $U$ computed from the training set spans all the variance in the training set but fails to do so on the test examples when $N < K$, i.e. for ill-posed data. The training set variance is $(K/N)\sigma^2$ on average per coordinate, compared to $\sigma^2$ for the test examples. So which of the two variances is "correct"? From a modelling point of view, the variance from the test example tells us the true story, so the training set variance should be regarded as biased. This suggests that the training set singular values should be corrected for this bias, in the above example by re-estimating the training set projections using $\hat{Q} = \sqrt{N/K}\, Q$.

In the more general case we do not know $K$, and the true covariance may have an arbitrary eigen-spectrum. The GenSVD algorithm below is a more general algorithm for correcting the training set bias.

2 The GenSVD Algorithm

The data matrix consists of $N$ statistically independent samples $X = [x_1 \ldots x_N]$, so $X$ is of size $I \times N$, and each column of $X$ is assumed multivariate Gaussian, $x_j \sim \mathcal{N}(0, \Sigma)$, and is ill-posed with $\mathrm{rank}\,\Sigma > N$.
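Before developing the algorithm, the bias in the example of Section 1.1 is easy to check numerically; this sketch (our own, with arbitrarily chosen sizes) simulates the rank-$K$ Gaussian source and compares training and test projection variances:

```python
# Numerical check of the bias: with N < K, training projections carry
# (K/N) sigma^2 variance per coordinate, test projections only sigma^2.
import numpy as np

rng = np.random.default_rng(1)
I, N, K, sigma = 500, 20, 100, 1.0

# Covariance sigma^2 Y D Y^T with K unit eigenvalues: sample via K latent dims
Y = np.linalg.qr(rng.normal(size=(I, K)))[0]          # orthonormal columns
sample = lambda n: sigma * Y @ rng.normal(size=(K, n))

X = sample(N)                                         # training set
U, lam, Vt = np.linalg.svd(X, full_matrices=False)

train_var = np.mean((U.T @ X) ** 2)                   # ~ (K/N) * sigma^2
test_var = np.mean((U.T @ sample(5000)) ** 2)         # ~ sigma^2
print(train_var, (K / N) * sigma**2)                  # roughly 5.0 vs 5.0
print(test_var, sigma**2)                             # roughly 1.0 vs 1.0
```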
With the SVD $X = U_0 \Lambda_0 V_0^T$, we now make the approximation that $U_0$ contains an actual subset of the true eigenvectors of $\Sigma$:

$$\Sigma = U_0 \Lambda^2 U_0^T + U_\perp \Lambda_\perp^2 U_\perp^T \tag{9}$$

where we have collected the remaining (unspanned by $X$) eigenvectors and values in $U_\perp$ and $\Lambda_\perp$, satisfying $U_\perp^T U_\perp = I$ and $U_0^T U_\perp = 0$. The unknown "true" eigenvalues corresponding to the observed eigenvectors are collected in $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$, which are the values we try to estimate in the following. It should be noted that a direct estimation of $\Sigma$ using $\hat{\Sigma} = \frac{1}{N} X X^T$ yields $\hat{\Sigma} = \frac{1}{N} U_0 \Lambda_0 V_0^T V_0 \Lambda_0 U_0^T = \frac{1}{N} U_0 \Lambda_0^2 U_0^T$, i.e., the non-zero eigenvectors and values of $\hat{\Sigma}$ are $U_0$ and $\Lambda_0$. The distribution of test samples $x^*$ inside the space spanned by $U_0$ is

$$U_0^T x^* \sim \mathcal{N}(0, \Lambda^2) \tag{10}$$

The problem is that $U_0$ and the examples $x_j$ are not independent, so $U_0^T x_j$ is biased; e.g. the SVD estimate $\frac{1}{N}\Lambda_0^2$ of $\Lambda^2$ assigns all variance to lie within $U_0$.

The GenSVD algorithm bypasses this problem by, for each example, computing a basis on all other examples, estimating the variances in $\Lambda^2$ in a leave-one-out manner. Consider

$$z_j = U_0^T B_{-j} B_{-j}^T x_j \tag{11}$$

where we introduce the notation $X_{-j}$ for the matrix of all examples except the $j$'th, and this matrix is decomposed as $X_{-j} = B_{-j} \Lambda_{-j} C_{-j}^T$. The operation $B_{-j} B_{-j}^T x_j$ projects the example onto the basis defined by the remaining examples, and back again, so it "strips" off the part of signal space which is special for $x_j$, which could be signal that does not generalize across examples. Since $B_{-j}$ and $x_j$ are independent, $B_{-j}^T x_j$ has the same distribution as the projection of a test example $x^*$, $B_{-j}^T x^*$. Thus, $B_{-j} B_{-j}^T x_j$ and $B_{-j} B_{-j}^T x^*$ have the same distribution as well. Now, since $\mathrm{span}\, B_{-j} = \mathrm{span}\, X_{-j}$ and $\mathrm{span}\, U_0 = \mathrm{span}\, [X_{-j}\ x_j]$, we have that $\mathrm{span}\, B_{-j} \subseteq \mathrm{span}\, U_0$, so we see that $z_j$ and $U_0^T B_{-j} B_{-j}^T x^*$ are identically distributed. This means that $z_j$ has the covariance $U_0^T B_{-j} B_{-j}^T \Sigma B_{-j} B_{-j}^T U_0$, and using Eq. (9) and the fact that $U_\perp^T B_{-j} = 0$ (since $U_\perp^T U_0 = 0$) we get

$$z_j \sim \mathcal{N}\!\left(0,\ (U_0^T B_{-j} B_{-j}^T U_0)\, \Lambda^2\, (U_0^T B_{-j} B_{-j}^T U_0)\right) \tag{12}$$

We note that this distribution is degenerate because the covariance is of rank $N - 1$. For a sample $z_j$ from the above distribution we have that

$$(U_0^T B_{-j} B_{-j}^T U_0)\, z_j = U_0^T B_{-j} B_{-j}^T U_0 U_0^T B_{-j} B_{-j}^T x_j = U_0^T B_{-j} B_{-j}^T x_j = z_j \tag{13}$$

As a second approximation, assume that the observed $z_j$ are independent, so that we can write the negative log-likelihood of $\Lambda$ as

$$\mathcal{L}(\Lambda) = \sum_j \log\left[(2\pi)^{N/2} \left| (U_0^T B_{-j} B_{-j}^T U_0)\, \Lambda^2\, (U_0^T B_{-j} B_{-j}^T U_0) \right|^{1/2}\right] + \frac{1}{2} \sum_j z_j^T (U_0^T B_{-j} B_{-j}^T U_0)\, \Lambda^{-2}\, (U_0^T B_{-j} B_{-j}^T U_0)\, z_j = C + \frac{N}{2} \sum_i \log \lambda_i^2 + \frac{1}{2} \sum_j z_j^T \Lambda^{-2} z_j \tag{14}$$

where we have used Eq. (13) and the determinant¹ is approximated by $|\Lambda^2|$. The above expression is minimized when

$$\hat{\lambda}_i^2 = \frac{1}{N} \sum_j z_{ij}^2 \tag{15}$$

The GenSVD of $X$ is then $\hat{X} = U_0 \hat{\Lambda} V_0^T$, $\hat{\Lambda} = \mathrm{diag}(\hat{\lambda}_1, \ldots, \hat{\lambda}_N)$.

In practice, using Eq. (11) directly to compute an SVD of the matrix $X_{-j}$ for each example is computationally demanding. It is possible to compute $z_j$ in a more efficient two-level procedure with the following algorithm:

  Compute $U_0 \Lambda_0 V_0^T = \mathrm{svd}(X)$ and $Q_0 = [q_j] = \Lambda_0 V_0^T$
  foreach $j = 1..N$:
    Compute $B_{-j} \Lambda_{-j} V_{-j}^T = \mathrm{svd}(Q_{-j})$
    $z_j = B_{-j} B_{-j}^T\, q_j$
  $\hat{\lambda}_i^2 = \frac{1}{N} \sum_j z_{ij}^2$

¹Since $z_j$ is degenerate, we define the likelihood over the space where $z_j$ occurs, i.e. the determinant in Eq. (14) should be read as "the product of non-zero eigenvalues".

If the data has a mean value that we wish to remove prior to the SVD, it is important that this is done within the GenSVD algorithm. Consider a centered matrix $X_c = X - \bar{X}$, where $\bar{X}$ contains the mean $\bar{x}$ replicated in all $N$ columns. The signal space in $X_c$ is now corrupted because each centered example will contain a component of all examples, which means the "stripping" of signal components not spanned by other examples no longer works: $x_j$ is no longer distributed like $x^*$.
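A sketch of the efficient two-level procedure (without mean removal) follows directly from the pseudo-code above; the function and variable names are our own, and the input is assumed to be an $I \times N$ matrix with independent columns:

```python
# Sketch of the two-level GenSVD procedure (leave-one-out in the Q space).
import numpy as np

def gensvd(X):
    U0, lam0, V0t = np.linalg.svd(X, full_matrices=False)
    Q0 = np.diag(lam0) @ V0t                  # N x N compact representation
    N = Q0.shape[1]
    Z = np.empty_like(Q0)
    for j in range(N):
        Q_minus_j = np.delete(Q0, j, axis=1)  # leave example j out
        B, _, _ = np.linalg.svd(Q_minus_j, full_matrices=False)
        Z[:, j] = B @ (B.T @ Q0[:, j])        # z_j = B_-j B_-j^T q_j
    lam_hat = np.sqrt(np.mean(Z**2, axis=1))  # lambda_i^2 = (1/N) sum_j z_ij^2
    return U0, lam_hat, V0t                   # X_hat = U0 diag(lam_hat) V0^T

# Example: re-estimated singular values for an ill-posed random data set
X = np.random.default_rng(2).normal(size=(1000, 30))
U0, lam_hat, V0t = gensvd(X)
```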
This suggests the alternative algorithm for data with removal of the mean component:

  Compute $U_0 \Lambda_0 V_0^T = \mathrm{svd}(X)$ and $Q_0 = [q_j] = \Lambda_0 V_0^T$
  foreach $j = 1..N$:
    $\bar{q}_{-j} = \frac{1}{N-1} \sum_{j' \neq j} q_{j'}$
    Compute $B_{-j} \Lambda_{-j} V_{-j}^T = \mathrm{svd}(Q_{-j} - \bar{Q}_{-j})$
    $z_j = B_{-j} B_{-j}^T (q_j - \bar{q}_{-j})$
  $\hat{\lambda}_i^2 = \frac{1}{N-1} \sum_j z_{ij}^2$

Finally, note that it is possible to leave out more than one example at a time if the data is independent only in blocks, i.e. let $Q_{-k}$ be $Q_0$ with the $k$'th block left out.

Example With PET Scans

We compared the performance of GenSVD to conventional SVD on a functional [¹⁵O]water PET activation study of the human brain. The study consisted of 18 subjects, each scanned four times while tracing a star-shaped maze with a joystick with visual feedback, in total 72 scans of dimension ~25000 spatial voxels. After the second scan, the visual feedback was mirrored, and the subject accommodated to and learned the new control environment during the last two scans. Scans were normalized by 1) dividing each scan by the average voxel value measured inside a brain mask, 2) subtracting, for each scan, the average scan for that subject, thereby removing subject effects, and 3) intra- and inter-subject normalization and transformation using rigid-body reorientation and affine linear transformations, respectively. Voxels inside the aforementioned brain mask were arranged in the data matrix with one scan per column.

Figure 1 shows the results of an SVD decomposition compared to GenSVD. Each marker represents one scan and the glyphs indicate scan number out of the four (circle-square-star-triangle). The ellipses indicate the mean and covariances of the projections in each scan number. The 32 scans from eight subjects were used as a training set and 40 scans from the remaining 10 subjects for testing. The training set projections are filled markers; test-set projections onto the basis defined by the training set are open markers (i.e. we plot the first two columns of $U_0 \Lambda_0$ for SVD and of $U_0 \hat{\Lambda}$ for GenSVD). We see that there is a clear difference in variance between the train and test examples, which is corrected quite well by GenSVD. The lower plot in Figure 1 shows the singular values for the PET data set. We see that the GenSVD estimates are much closer to the actual test projection standard deviations than the SVD singular values.

3 Conclusion

We have demonstrated that projection of ill-posed data sets onto a basis defined by the same examples introduces a significant bias on the observed variance when compared to projections of test examples onto the same basis. The GenSVD algorithm has been presented as a tool for correcting this bias using a leave-one-out re-estimation scheme, and a computationally efficient implementation has been proposed. We have demonstrated that the method works well on an ill-posed real-world data set, where the distribution of the GenSVD-corrected training set projections matched the distribution of the observed test set projections far better than the uncorrected training examples. This allows a generalization performance increase of a subsequent statistical model, in the case of both supervised and unsupervised models.

Acknowledgments

This work was supported partly by the Human Brain Project grant P20 MH57180, the Danish Research Councils for the Natural and Technical Sciences through the Danish Computational Neural Network Center (CONNECT) and the Technology Center Through Highly Oriented Research (THOR).

References

[Bishop, 1999] Bishop, C. (1999). Bayesian PCA. In Kearns, M. S., Solla, S. A., and Cohn, D. A., editors, Advances in Neural Information Processing Systems, volume 11. The MIT Press.

[Hansen et al., 1999] Hansen, L., Larsen, J., Nielsen, F., Strother, S., Rostrup, E., Savoy, R., Lange, N., Sidtis, J., Svarer, C., and Paulson, O. (1999). Generalizable patterns in neuroimaging: How many principal components? NeuroImage, 9:534-544.

[Hansen and Larsen, 1996] Hansen, L. K. and Larsen, J. (1996). Unsupervised learning and generalization. In Proceedings of the IEEE International Conference on Neural Networks, pages 25-30.

[Jackson, 1991] Jackson, J. E. (1991). A User's Guide to Principal Components. Wiley Series on Probability and Statistics, John Wiley and Sons.

[Lautrup et al., 1995] Lautrup, B., Hansen, L. K., Law, I., Mørch, N., Svarer, C., and Strother, S. (1995). Massive weight sharing: A cure for extremely ill-posed problems. In Hermann, H. J., Wolf, D. E., and Pöppel, E. P., editors, Proceedings of Workshop on Supercomputing in Brain Research: From Tomography to Neural Networks, HLRZ, KFA Jülich, Germany, pages 137-148. World Scientific.

[Roweis, 1998] Roweis, S. (1998). EM algorithms for PCA and SPCA. In Jordan, M. I., Kearns, M. J., and Solla, S. A., editors, Advances in Neural Information Processing Systems, volume 10. The MIT Press.

[Figure 1 appears here: scatter plots of the first two SVD components (top) and the first two GenSVD components (middle) for the PET scans, train vs. test, and the per-component standard-deviation curves (bottom).]

Figure 1: Projections of PET data in SVD and GenSVD. Each subject's four scans are indicated by: circle, square, star, triangle. Training set scans are marked with filled glyphs and test set with open glyphs. Solid and dotted ellipses indicate test/train covariance per scan number. The third plot shows the standard deviations for the training and test set for SVD and GenSVD projections.
964
1,882
Temporally Dependent Plasticity: An Information Theoretic Account

Gal Chechik and Naftali Tishby*
School of Computer Science and Engineering and the Interdisciplinary Center for Neural Computation
The Hebrew University, Jerusalem, Israel
{ggal,tishby}@cs.huji.ac.il

Abstract

The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre- and postsynaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing a similar structure with the experimentally observed plasticity increases mutual information to a stable, near-optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental predictions as to the dependency of the learning rule on neuronal biophysical parameters.

1 Introduction

Hebbian plasticity, the major paradigm for learning in computational neuroscience, was until a few years ago interpreted as learning by correlated neuronal activity. A series of studies have recently shown that changes in synaptic efficacies highly depend on the relative timing of the pre- and postsynaptic spikes, as the efficacy of a synapse between two excitatory neurons increases when the presynaptic spike precedes the postsynaptic one, but decreases otherwise [1-6]. The magnitude of these synaptic changes decays roughly exponentially as a function of the time difference between pre- and postsynaptic spikes, with a time constant of a few tens of milliseconds (results vary between studies, especially with regard to the synaptic depression component; compare e.g. [4] and [6]).

What could be the computational role of this delicate type of plasticity, sometimes termed spike-timing dependent plasticity (STDP)? Several authors suggested answers to this question by modeling STDP and studying its effects on synaptic, neural and network dynamics. Importantly, STDP embodies an inherent competition between incoming inputs, and was shown to result in normalization of total incoming synaptic strength [7], maintain the irregularity of neuronal firing [8, 9], and lead to the emergence of synchronous subpopulation firing in recurrent networks [10]. It may also play an important role in sequence learning [11, 12]. The dynamics of synaptic efficacies under the operation of STDP strongly depend on whether STDP is implemented additively (independent of the baseline synaptic value) or multiplicatively (where the change is proportional to the synaptic efficacy) [13].

This paper takes a different approach to the study of spike-dependent learning rules: while the above studies model STDP and study the model properties, we start by deriving a spike-dependent learning rule from first principles within a simple rate model and then compare it with the experimentally observed STDP. To derive our learning rule, we consider the principle of mutual information maximization. This idea, known as the Infomax principle [14], states that the goal of a neural network's learning procedure is to maximize the mutual information between its output and input.

*Work supported in part by a Human Frontier Science Project (HFSP) grant RG 0133/1998.
The current paper applies Infomax to a leaky integrator neuron with spiking inputs. The derivation suggests computational insights into the dependence of the temporal characteristics of STDP on biophysical parameters and shows that STDP may serve to maximize mutual information in a network of spiking neurons.

2 The Model

We study a network with $N$ input neurons $S_1 .. S_N$ firing spike trains, and a single output (target) neuron $Y$. At any point in time, the target neuron accumulates its inputs with some temporal filter $F$, due to voltage attenuation or the synaptic transfer function:

$$Y(t) = \sum_{i=1}^{N} w_i X_i(t), \qquad X_i(t) = \int^t F(t - t')\, S_i(t')\, dt' \tag{1}$$

where $w_i$ is the synaptic efficacy between the $i$th input neuron and the target neuron, $S_i(t) = \sum_{\mathrm{spikes}} \delta(t - t_{\mathrm{spike}})$ is the $i$-th spike train, and $\tau$ is the membrane time constant. The filter $F$ may be used to capture general synaptic transfer functions and voltage decay effects, but is set here, as an example, to an exponential filter $F_\tau(x) \equiv \exp(-x/\tau)$.

The learning goal is to set the synaptic weights $W$ such that $M + 1$ uncorrelated patterns of input activity $\xi^\eta$ ($\eta = 0..M$) may be discriminated using the output. Each pattern determines the firing rates of the input neurons; thus $S$ is a noisy realization of $\xi$ due to the stochasticity of the point process. The input patterns are presented for periods of length $T$ (on the order of tens of milliseconds). In each period, a pattern $\xi^\eta$ is randomly chosen for presentation with probability $q_\eta$, where most of the patterns are rare ($\sum_{\eta=1}^{M} q_\eta \ll 1$) but $\xi^0$ is abundant and may be thought of as a background noisy pattern. It should be stressed that in our model information is coded in the non-stationary rates that underlie the input spike trains. As these rates are not observable, any learning must depend on the observable input spikes that realize those underlying rates.

3 Mutual Information Maximization

Let us focus on a single presentation period (omitting the notation of $t$), and look at the value of $Y$ at the end of this period, $Y = \sum_{i=1}^{N} w_i X_i$, with $X_i \equiv \int_{-T}^{0} e^{t'/\tau} S_i(t')\, dt'$. Denoting by $f(Y)$ the p.d.f. of $Y$, the input-output mutual information [15] in this network is defined by

$$I(Y; \eta) = h(Y) - h(Y|\eta), \qquad h(Y) = -\int f(y) \log(f(y))\, dy \tag{2}$$

where $h(Y)$ is the differential entropy of the $Y$ distribution, and $h(Y|\eta)$ is the differential entropy given that the network is presented with a known input pattern. This mutual information measures how easy it is to decide which input pattern $\eta$ was presented to the network by observing the network's output $Y$.

To calculate the conditional entropy $h(Y|\eta)$ we use the assumption that input neurons fire independently and their number is large; thus the input of the target neuron when the network is presented with the pattern $\xi^\eta$ is normally distributed, $f(Y|\eta) = \mathcal{N}(\mu_\eta, \sigma_\eta^2)$, with mean $\mu_\eta = \langle W X^\eta \rangle$ and variance $\sigma_\eta^2 = \langle (W X^\eta)(W X^\eta)^T \rangle - \langle W X^\eta \rangle^2$. The brackets denote averaging over the possible realizations of the inputs $X^\eta$ when the network is presented with the pattern $\xi^\eta$. To calculate the entropy of $Y$ we note that $f(Y)$ is a mixture of Gaussians, each resulting from the presentation of an input pattern, and use the assumption $\sum_{\eta=1}^{M} q_\eta \ll 1$ to approximate the entropy. The details of this derivation are omitted due to space considerations and will be presented elsewhere. Differentiating the mutual information with respect to $w_i$ we obtain

$$\frac{\partial I(Y;\eta)}{\partial w_i} = \sum_{\eta=1}^{M} q_\eta \left( \mathrm{Cov}(Y, X_i^\eta)\, K^1_\eta + E(X_i^\eta)\, K^2_\eta \right) - \sum_{\eta=1}^{M} q_\eta \left( \mathrm{Cov}(Y, X_i^0)\, K^3_\eta + E(X_i^0)\, K^4_\eta \right) \tag{3}$$

with coefficients $K_\eta$ that are combinations of the distribution moments, built from the quantities $\frac{(\mu_\eta - \mu_0)^2 + \sigma_\eta^2 - \sigma_0^2}{\sigma_0^4}$, $\frac{1}{\sigma_0^2} - \frac{1}{\sigma_\eta^2}$ and $\frac{\mu_\eta - \mu_0}{\sigma_0^2}$,
where $E(X_i^\eta)$ is the expected value of $X_i^\eta$, averaged over presentations of the $\xi^\eta$ pattern. The general form of this complex gradient is simplified in the following sections, together with a discussion of its use for biological learning. The derived gradient may be used for a gradient-ascent learning rule by repeatedly calculating the distribution moments $\mu_\eta, \sigma_\eta$, which depend on $W$, and updating the weights according to $\Delta w_i = \lambda\, \frac{\partial}{\partial w_i} I(Y; \eta)$. This learning rule climbs along the gradient and is bound to converge to a local maximum of the mutual information. Figure 1A plots the mutual information during the operation of the learning rule, showing that the network indeed reaches a (possibly local) mutual information maximum. Figure 1B depicts the changes in the output distribution during learning, showing that it splits into two segregated bumps: one that corresponds to the $\xi^0$ pattern and another that corresponds to the rest of the patterns.

[Figure 1 appears here: (A) input-output mutual information $I(Y;\eta)$ vs. time steps; (B) the output distribution $P(Y)$.]

Figure 1: Mutual information and output distribution along learning with the gradient ascent learning rule (Eq. 3). All patterns were constructed by setting 10% of the input neurons to fire Poisson spike trains at 40Hz, while the rest fire at 10Hz. Poisson spike trains were simulated by discretizing time into 1 millisecond bins. Simulation parameters: $\lambda = 1$, $M = 100$, $N = 1000$, $q_0 = 0.9$, $q_\eta = 0.001$, $T = 20$ msec. A. Input-output mutual information. B. Output distribution after 100, 150, 200 and 300 learning steps. Outputs segregate into two distinct bumps: one corresponds to the presentation of the $\xi^0$ pattern and the other corresponds to the rest of the patterns.

4 Learning In A Biological System

Aiming to obtain a spike-dependent, biologically feasible learning rule that maximizes mutual information, we now turn to approximating the analytical rule derived above by a rule that can be implemented in biology. To this end, four steps are taken, where each step corresponds to a biological constraint and its solution.

First, biological synapses are limited either to excitatory or inhibitory regimes. Since information is believed to be coded in the activity of excitatory neurons, we limit the weights $W$ to positive values. Secondly, the $K$ terms are global functions of the weights and input distributions, since they depend on the distribution moments $\mu_\eta, \sigma_\eta$. To avoid this problem we approximate the learning rule by replacing the $K$ coefficients with constants $\{\lambda_\eta, \lambda_0, \lambda'\}$. These constants are set to optimal values, but remain fixed once they are set. We have found numerically that high performance (to be demonstrated in section 5) may be obtained over a wide regime of these constants. Thirdly, summation over patterns embodies a 'batch' mode of learning, requiring very large memory to average over multiple presentations. To implement an online learning rule, we replace summation over patterns by pattern-triggered learning. One should note that the analytical derivation yielded that summation is performed over the rare patterns only (Eq. 3); thus pattern-triggered learning is naturally implemented by restricting learning to presentations of rare patterns¹. Fourthly, the learning rule explicitly depends on $E(X)$ and $\mathrm{Cov}(Y, X)$, which are not observables of the model.
We thus replace them by performing stochastic weighted averaging over spikes, to yield a spike-dependent learning rule. In the case of inhomogeneous Poisson spike trains where input neurons fire independently, the covariance terms obey $\mathrm{Cov}(Y, X_i) = w_i E_{\tau/2}(X_i)$, where $E_\tau(X) = \int^t e^{-(t - t')/\tau}\, E(S(t'))\, dt'$. The expectations $E(X_i^\eta)$ may be simply estimated by weighted averaging of the observed spikes $X_i$ that precede the learning moment. Estimating $E(X_i^0)$ is more difficult because, as stated above, learning should be triggered by the rare patterns only. Thus, $\xi^0$ spikes should have an effect only when a rare pattern $\xi^\eta$ is presented. A possible solution is to use the fact that $\xi^0$ is highly frequent (and therefore spikes in the vicinity of a $\xi^\eta$ presentation are with high probability $\xi^0$ spikes) to average over spikes following a $\xi^\eta$ presentation for background activity estimation. These spikes can be temporally weighted in many ways: from the computational point of view it is beneficial to weigh spikes uniformly along time, but this may require a long "memory" and is biologically improbable. We thus refrain from suggesting a specific weighting for background spikes, and obtain the following rule, which is activated only when one of the rare patterns $\xi^\eta$ ($\eta = 1..M$) is presented:

$$\Delta w_i = \lambda_\eta\, w_i \int^t F_{\tau/2}(t - t')\, S_i(t')\, dt' + \lambda' \int^t F_\tau(t - t')\, S_i(t')\, dt' - \lambda_0 \int f_{1,2}(S_i(t'))\, dt' \tag{4}$$

where $f_{1,2}(S(t'))$ denote the temporal weighting of $\xi^0$ spikes. It should be noted that this learning rule uses rare pattern presentations as an external ("supervised") learning signal. The general form of this learning rule and its performance are discussed in the next section.

¹In fact, learning rules where learning is also triggered by the presentation of the background pattern explicitly depend on the prior probabilities $q_\eta$, and thus are not robust to fluctuations in $q_\eta$. Since such fluctuations strongly reduce the mutual information obtained by these rules, we conclude that pattern-triggered learning should be triggered by the rare patterns only.

5 Analyzing The Biologically Feasible Rule

5.1 Comparing performance

We have obtained a new spike-dependent learning rule that may be implemented in a biological system and that approximates an information maximization learning rule. But how good are these approximations? Does learning with the biologically feasible learning rule increase mutual information, and to what level? The curves in figure 2A compare the mutual information of the learning rule of Eq. 3 with that of Eq. 4, as traced in a simulation of the learning process. Apparently, the approximated learning rule achieves fairly good performance compared to the optimal rule, and most of the reduction in performance is due to limiting the weights to positive values.

5.2 Interpreting the learning rule structure

The general form of the learning rule of Eq. 4 is pictorially presented in figure 2B, allowing us to inspect the main features of its structure. First, synaptic potentiation is temporally weighted in a manner that is determined by the same filter $F$ that the neuron applies over its inputs, but learning should apply an average of $F$ and $F^2$ ($\int^t F(t - t')\, S(t')\, dt'$ and $\int^t F^2(t - t')\, S(t')\, dt'$). The relative weighting of these two components was numerically estimated by simulating the optimal rule of Eq. 3 and was found to be of the same order of magnitude. Second, in our model synaptic depression is targeted at learning the underlying structure of the background activity. Our analysis does not restrict the temporal weighting of the depression curve.
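The two potentiation integrals in Eq. (4) are simply exponentially weighted sums over the spikes preceding the learning signal; the following sketch (our own illustration, not code from the paper, with assumed values for $\tau$ and the spike times) computes them for a discrete spike train:

```python
# Sketch of the potentiation terms of Eq. (4): spikes preceding the learning
# moment weighted by F_tau and by F_tau^2 = F_{tau/2}.
import numpy as np

def filtered_sums(spike_times, t_learn, tau=0.020):
    """Return (int F_tau(t-t')S(t')dt', int F_tau^2(t-t')S(t')dt') for a
    spike train given as an array of spike times, in seconds."""
    dt = t_learn - np.asarray(spike_times, dtype=float)
    dt = dt[dt >= 0.0]               # only spikes before the learning signal
    F = np.exp(-dt / tau)
    return F.sum(), (F ** 2).sum()   # F^2 decays with time constant tau/2

# Example: three presynaptic spikes, learning signal at t = 0.1 s
f1, f2 = filtered_sums([0.020, 0.080, 0.095], t_learn=0.100)
print(f1, f2)
```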
A major difference between the obtained rule and the experimentally observed learning rule is that in our rule learning is triggered by an external learning signal that corresponds to the presentation of rare patterns, while in the experimentally observed rule learning is triggered by the postsynaptic spike. The possible role of the postsynaptic spike is discussed in the following section.

6 Unsupervised Learning

By now we have considered a learning scenario that used external information, telling whether the presented pattern is the background pattern or not, to decide whether learning should take place. When such a learning signal is missing, it is tempting to use the postsynaptic spike (signaling the presence of an interesting input pattern) as a learning signal. This yields a learning procedure as in Eq. 4, except that this time learning is triggered by postsynaptic spikes instead of an external signal. The resulting learning rule is similar to previous models of the experimentally observed STDP, e.g. [9, 13, 16]. However, this mechanism will effectively serve learning only if the postsynaptic spikes co-occur with the presentation of a rare pattern. Such co-occurrence may be achieved by supplying short learning signals in the presence of the interesting patterns (e.g. by attentional mechanisms increasing neuronal excitability). This will induce learning such that later postsynaptic spikes will be triggered by the rare pattern presentation. These issues await further investigation.

[Figure 2 appears here: (A) mutual information vs. time steps for the spike-dependent rule (Eq. 4), with the positive-weights limitation indicated; (B) $\Delta W$ vs. $t(\mathrm{pre}) - t(\mathrm{learning})$, with possible spike weightings.]

Figure 2: A. Comparing optimal (Eq. 3) and approximated (Eq. 4) learning rules. 10% of the input neurons of $\xi^\eta$ ($\eta > 0$) were set to fire at 40Hz, while the rest fire at 5Hz. $\xi^0$ neurons fire at 8Hz, yielding a similar average input as the $\xi^\eta$ patterns. The learning-rate ratios for Eq. 4 were numerically searched for their optimal value, yielding $\lambda_\eta = 0.15$, $\lambda_0 = 0.05$ for the arbitrary choice $\lambda' = 0.1$. The rest of the parameters are as in Fig. 1, except $M = 20$, $N = 2000$. B. A pictorial representation of Eq. 4, plotting $\Delta W$ as a function of the time difference between the learning signal time $t$ and the input spike time $t_{\mathrm{spike}}$. The potentiation curve (solid line) is the sum of two exponents with constants $\tau$ and $\tau/2$ (dashed lines). The depression curve is not constrained by our derivation, thus several examples are shown (dot-dashed lines).

7 Discussion

In the framework of information maximization, we have derived a spike-dependent learning rule for a leaky integrator neuron. This learning rule achieves near-optimal mutual information and can in principle be implemented in biological neurons. The analytical derivation of this rule allows us to obtain insight into the learning rules observed experimentally in various preparations.

The most fundamental result is that time-dependent learning stems from the time-dependency of the neuronal output on its inputs. In our model this is embodied in the filter $F$ which a neuron applies over its input spike trains. This filter is determined by the biophysical parameters of the neuron, namely its membrane leak, synaptic transfer functions and dendritic arbor structure. Our model thus yields direct experimental predictions for the way the temporal characteristics of the potentiation learning curve are determined by the neuronal biophysical parameters.
Namely, cells with larger membrane constants should exhibit longer synaptic potentiation time windows. Interestingly, the time window observed for STDP potentiation indeed fits the time window of an AMPA channel and is also in agreement with cortical membrane time constants, as predicted by the current analysis [4, 6].

Several features of the theoretically derived rule may have similar functions in the experimentally observed rule. In our model, synaptic weakening is targeted to learn the structure of the background activity. Both synaptic depression and potentiation in our model should be triggered by rare pattern presentations to allow near-optimal mutual information. In addition, synaptic changes should depend on the synaptic baseline value in a sub-linear manner. The experimental results in this regard are still unclear, but theoretical investigations show that this weight dependency has a large effect on network dynamics [13].

While the learning rule presented in Equation 4 assumes independent firing of the input neurons, our derivation actually holds for a wider class of inputs. In the case of correlated inputs, however, the learning rule involves cross-synaptic terms, which may be difficult to compute by biological neurons. As STDP is highly sensitive to synchronous inputs, it remains a most interesting question to investigate biologically feasible approximations to an Infomax rule for time-structured and synchronous inputs.

References

[1] W.B. Levy and D. Steward. Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience, 8:791-797, 1983.
[2] D. Debanne, B.H. Gahwiler, and S.M. Thompson. Asynchronous pre- and postsynaptic activity induces associative long-term depression in area CA1 of the rat hippocampus in vitro. Proc. Natl. Acad. Sci., 91:1148-1152, 1994.
[3] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297):213-215, 1997.
[4] L. Zhang, H.W. Tao, C.E. Holt, W.A. Harris, and M-m. Poo. A critical window for cooperation and competition among developing retinotectal synapses. Nature, 395(3):37-44, 1998.
[5] Q. Bi and M-m. Poo. Precise spike timing determines the direction and extent of synaptic modifications in cultured hippocampal neurons. J. Neurosci., 18:10464-10472, 1999.
[6] D.E. Feldman. Timing-based LTP and LTD at vertical inputs to layer II/III pyramidal cells in rat barrel cortex. Neuron, 27:45-56, 2000.
[7] R. Kempter, W. Gerstner, and J.L. van Hemmen. Hebbian learning and spiking neurons. Phys. Rev. E, 59(4):4498-4514, 1999.
[8] L.F. Abbot and S. Song. Temporally asymmetric hebbian learning, spike timing and neural response variability. In S.A. Solla and D.A. Cohen, editors, Advances in Neural Information Processing Systems 11, pages 69-75. MIT Press, 1999.
[9] S. Song, K.D. Miller, and L.F. Abbot. Competitive Hebbian learning through spike-timing dependent synaptic plasticity. Nature Neuroscience, pages 919-926, 2000.
[10] D. Horn, N. Levy, I. Meilijson, and E. Ruppin. Distributed synchrony of spiking neurons in a hebbian cell assembly. In S.A. Solla, T.K. Leen, and K.R. Muller, editors, Advances in Neural Information Processing Systems 12, pages 129-135, 2000.
[11] M.R. Mehta, M. Quirk, and M. Wilson. From hippocampus to V1: Effect of LTP on spatio-temporal dynamics of receptive fields. In J.M. Bower, editor, Computational Neuroscience: Trends in Research 1999. Elsevier, 1999.
[12] R. Rao and T. Sejnowski.
Predictive sequence learning in recurrent neocortical circuits. In S.A. Solla, T.K. Leen, and K.R. Muller, editors, Advances in Neural Information Processing Systems 12, pages 164-170. MIT Press, 2000.
[13] J. Rubin, D. Lee, and H. Sompolinsky. Equilibrium properties of temporally asymmetric hebbian plasticity. Phys. Rev. D, in press, 2000.
[14] R. Linsker. Self-organization in a perceptual network. Computer, 21(3):105-117, 1988.
[15] C.E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27:379-423, 1948.
[16] R. Kempter, W. Gerstner, and J.L. van Hemmen. Intrinsic stabilization of output rates by spike-time dependent hebbian learning. Submitted, 2000.
965
1,883
Position Variance, Recurrence and Perceptual Learning

Zhaoping Li, Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR.
[email protected], [email protected]

Abstract

Stimulus arrays are inevitably presented at different positions on the retina in visual tasks, even those that nominally require fixation. In particular, this applies to many perceptual learning tasks. We show that perceptual inference or discrimination in the face of positional variance has a structurally different quality from inference about fixed position stimuli, involving a particular, quadratic, non-linearity rather than a purely linear discrimination. We show the advantage taking this non-linearity into account has for discrimination, and suggest it as a role for recurrent connections in area V1, by demonstrating the superior discrimination performance of a recurrent network. We propose that learning the feedforward and recurrent neural connections for these tasks corresponds to the fast and slow components of learning observed in perceptual learning tasks.

1 Introduction

The field of perceptual learning in simple, but high precision, visual tasks (such as vernier acuity tasks) has produced many surprising results whose import for models has yet to be fully felt. A core of results is that there are two stages of learning, one fast, which happens over the first few trials, and another slow, which happens over multiple sessions, may involve REM sleep, and can last for months or even years (Fahle, 1994; Karni & Sagi, 1993; Fahle, Edelman, & Poggio 1995). Learning is surprisingly specific, in some cases being tied to the eye of origin of the input, and rarely admitting generalisation across wide areas of space or between tasks that appear extremely similar, even involving the same early-stage detectors (e.g. Fahle, Edelman, & Poggio 1995; Fahle, 1994). For instance, improvement through learning on an orientation discrimination task does not lead to improvement on a vernier acuity task (Fahle 1997), even though both tasks presumably use the same orientation-selective striate cortical cells to process inputs.

Of course, learning in human psychophysics is likely to involve plasticity in a large number of different parts of the brain over various timescales. Previous studies (Poggio, Fahle, & Edelman 1992; Weiss, Edelman, & Fahle 1993) proposed phenomenological models of learning in a feedforward network architecture. In these models, the first stage units in the network receive the sensory inputs through the medium of basis functions relevant for the perceptual task. Over learning, a set of feedforward weights is acquired such that the weighted sum of the activities from the input units can be used to make an appropriate binary decision, e.g. using a threshold. These models can account for some, but not all, observations on perceptual learning (Fahle et al 1995). Since the activity of V1 units seems not to relate directly to behavioral decisions on these visual tasks, the feedforward connections must model processing beyond V1.

[Figure 1 appears here: (A) the three-bar stimulus at positions $-1+y$, $y+\epsilon$, $1+y$; (B) the evoked population activity.]

Figure 1: Mid-point discrimination. A) Three bars are presented at $x_-$, $x_0$ and $x_+$. The task is to report which of the outer bars is closer to the central bar. $y$ represents the variable placement of the stimulus array. B) Population activities in cortical cells evoked by the stimulus bars; the activities $a_i$ are plotted against the preferred locations $x_i$ of the cells. These come from Gaussian tuning curves ($k = 20$; $\tau = 0.1$) and Poisson noise. There are 81 units whose preferred values are placed at regular intervals of $\Delta x = 0.05$ between $x = -2$ and $x = 2$.
The lack of generalisation between tasks that involve the same visual feature samplers suggests that the basis functions, e.g. the orientation-selective primary cortical cells that sample the inputs, do not change their sensitivities and shapes, e.g. their orientation selectivity or tuning widths. However, evidence such as the specificity of learning to the eye of origin and spatial location strongly suggests that lower visual areas such as V1 are directly involved in learning. Indeed, V1 is a visual processor of quite some computational power (performing tasks such as segmentation, contour integration, pop-out, and noise removal) rather than being just a feedforward, linear processing stage (e.g. Li, 1999; Pouget et al 1998).

Here, we study a paradigmatic perceptual task from a statistical perspective. Rather than suggest particular learning rules, we seek to understand what it is about the structure of the task that might lead to two phases of learning (fast and slow), and thus what computational job might be ascribed to V1 processing, in particular the role of lateral recurrent connections. We agree with the general consensus that fast learning involves the feedforward connections. However, by considering positional invariance for discrimination, we show that there is an inherently non-linear component to the overall task, which defeats feedforward algorithms.

2 The bisection task

Figure 1A shows the bisection task. Three bars are presented at horizontal positions $x_0 = y + \epsilon$, $x_- = -1 + y$ and $x_+ = 1 + y$, where $|\epsilon| \ll 1$. Here $y$ is a nuisance random number with zero mean, reflecting the variability in the position of the stimulus array due to eye movements or other uncontrolled factors. The task for the subject is to report which of the outer bars is closer to the central bar, i.e. to report whether $\epsilon$ is greater than or less than 0.

The bars create a population-coded representation in V1 cells preferring vertical orientation. In figure 1B, we show the activity of cells $a_i$ as a function of the preferred topographic location $x_i$ of the cell; for simplicity, we ignore activities from other V1 cells which prefer orientations other than vertical. We assume that the cortical response to the bars is additive, with mean

$$\bar{a}_i(\epsilon, y) = f(x_i - x_0) + f(x_i - x_-) + f(x_i - x_+) \tag{1}$$

(we often drop the dependence on $\epsilon, y$ and write $\bar{a}_i$, or, for all the components, $\bar{\mathbf{a}}$), where $f$ is, say, a Gaussian tuning curve with height $k$ and tuning width $\tau$, $f(x) = k e^{-x^2/2\tau^2}$, usually with $\tau \ll 1$. The net activity is $a_i = \bar{a}_i + n_i$, where $n_i$ is a noise term. We assume that $n_i$ comes from a Poisson distribution and is independent across the units, and that $\epsilon$ and $y$ have mean zero and are uniformly distributed in their respective ranges.

The subject must report whether $\epsilon$ is greater or less than 0 on the basis of the activities $\mathbf{a}$. A normative way to do this is to calculate the probability $P[\epsilon|\mathbf{a}]$ of $\epsilon$ given $\mathbf{a}$, and report by maximum likelihood (ML) that $\epsilon > 0$ if $\int_{\epsilon > 0} d\epsilon\, P[\epsilon|\mathbf{a}] > 0.5$. Without prior information about $\epsilon$ and $y$, and with Poisson noise $n_i = a_i - \bar{a}_i$, we have

$$P[\epsilon|\mathbf{a}] \propto \int dy\, \prod_i P[a_i\, |\, \bar{a}_i(\epsilon, y)] \tag{2}$$

3 Fixed position stimulus array

When the stimulus array is in a fixed position $y = 0$, the analysis is easy, and is very similar to that carried out by Seung & Sompolinsky (1993).
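Before following the derivation, the encoding model just described can be written out directly; this NumPy sketch (our own illustration, using the parameter values from Figure 1's caption; the function names are assumptions) draws a noisy population response:

```python
# Sketch of the population encoding model: three Gaussian bumps plus
# independent Poisson noise per unit (Eq. 1 and the model's noise assumption).
import numpy as np

k, tau = 20.0, 0.1
x = np.arange(-2, 2 + 1e-9, 0.05)          # 81 preferred locations
f = lambda d: k * np.exp(-d**2 / (2 * tau**2))

def mean_activity(eps, y):
    """a_bar_i(eps, y) = f(x_i - x0) + f(x_i - x_minus) + f(x_i - x_plus)."""
    x0, xm, xp = y + eps, -1 + y, 1 + y
    return f(x - x0) + f(x - xm) + f(x - xp)

rng = np.random.default_rng(3)
a = rng.poisson(mean_activity(eps=0.05, y=0.0))   # noisy population response
```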
Dropping $y$, we calculate $\log P[\mathbf{a}|\epsilon]$ and approximate it by a Taylor expansion about $\epsilon = 0$, to second order in $\epsilon$:

$$\log P[\mathbf{a}|\epsilon] \approx \mathrm{constant} + \epsilon\, \frac{\partial}{\partial \epsilon} \log P[\mathbf{a}|\epsilon]\Big|_{\epsilon=0} + \frac{\epsilon^2}{2}\, \frac{\partial^2}{\partial \epsilon^2} \log P[\mathbf{a}|\epsilon]\Big|_{\epsilon=0} \tag{3}$$

ignoring higher order terms. Provided that the last term is negative (which it indeed is, almost surely), we derive an approximately Gaussian distribution

$$P[\epsilon|\mathbf{a}] \approx \mathcal{N}(\mu_\epsilon, \sigma_\epsilon^2) \tag{4}$$

with variance $\sigma_\epsilon^2 = \left[-\frac{\partial^2}{\partial \epsilon^2} \log P[\mathbf{a}|\epsilon]\big|_{\epsilon=0}\right]^{-1}$ and mean $\mu_\epsilon = \sigma_\epsilon^2\, \frac{\partial}{\partial \epsilon} \log P[\mathbf{a}|\epsilon]\big|_{\epsilon=0}$. Thus the subject should report that $\epsilon > 0$ or $\epsilon < 0$ according to whether the test $t(\mathbf{a}) = \frac{\partial}{\partial \epsilon} \log P[\mathbf{a}|\epsilon]\big|_{\epsilon=0}$ is greater or less than zero, respectively. For the Poisson noise case we consider, $\log P[\mathbf{a}|\epsilon] = \mathrm{constant} + \sum_i a_i \log \bar{a}_i(\epsilon)$, since $\sum_i \bar{a}_i(\epsilon)$ is a constant, independent of $\epsilon$. Thus,

$$t(\mathbf{a}) = \sum_i a_i\, \frac{\partial}{\partial \epsilon} \log \bar{a}_i \Big|_{\epsilon=0} \tag{5}$$

Therefore, maximum likelihood discrimination can be implemented by a linear feedforward network mapping the inputs $a_i$ through feedforward weights $w_i = \frac{\partial}{\partial \epsilon} \log \bar{a}_i\big|_{\epsilon=0}$ to calculate as the output $t(\mathbf{a}) = \sum_i w_i a_i$. A threshold of 0 on $t(\mathbf{a})$ provides the discrimination: $\epsilon > 0$ if $t(\mathbf{a}) > 0$ and $\epsilon < 0$ for $t(\mathbf{a}) < 0$. The task therefore has an essentially linear character. Note that if the noise corrupting the activities is Gaussian, the weights should instead be $w_i = \frac{\partial \bar{a}_i}{\partial \epsilon}\big|_{\epsilon=0}$.

Figure 2A shows the optimal discrimination weights for the case of independent Poisson noise. The lower solid line in figure 2C shows optimal performance as a function of $\epsilon$. The error rate drops precipitately from 50% for very small (and thus difficult) $\epsilon$ to almost 0, long before $\epsilon$ approaches the tuning width $\tau$. It is also possible to learn weights in a variety of ways (e.g. Poggio, Fahle & Edelman, 1992; Weiss, Edelman & Fahle, 1993; Fahle, Edelman & Poggio 1995). Figure 2B shows discrimination weights learned using a simple error-correcting learning procedure, which are almost the same as the optimal weights and lead to performance that is essentially optimal (the lower dashed line in figure 2C). We use error-correcting learning as a comparison technique below.

4 Moveable stimulus array

If the stimulus array can move around, i.e. if $y$ is not necessarily 0, then the discrimination task gets considerably harder. The upper dotted line in figure 2C shows the (rather unfair) test of using the learned weights of figure 2B when $y \in [-0.2, 0.2]$ varies uniformly. Clearly this has a highly detrimental effect on the quality of discrimination. Looking at the weight structure in figures 2A;B suggests an obvious reason for this: the weights associated with the outer bars are zero, since they provide no information about $\epsilon$ when $y = 0$, and the weights are finely balanced about 0, the mid-point of the outer bars, giving an unbiased or balanced discrimination on $\epsilon$.

[Figure 2 appears here: (A) the ML weights, (B) the learned weights, each plotted against $x$; (C) percentage errors vs. $\epsilon$.]

Figure 2: A) The ML optimal discrimination weights $w_i = \frac{\partial}{\partial \epsilon} \log \bar{a}_i\big|_{\epsilon=0}$ (plotted as $w_i$ vs. $x_i$) for deciding if $\epsilon > 0$ when $y = 0$. B) The learned discrimination weights $\mathbf{w}$ for the same decision. During online learning, random examples were selected with $\epsilon \in [-r, r]$ uniformly, $r = 0.1$, and the weights were adjusted online to maximise the log probability of generating the correct discrimination under a model in which the probability of declaring that $\epsilon > 0$ is $\sigma(\sum_i w_i a_i) = 1/(1 + \exp(-\sum_i w_i a_i))$. C) Performance of the networks with ML (lower solid line) and learned (lower dashed line) weights as a function of $\epsilon$. Performance is measured by drawing a randomly given $\epsilon$ and $y$, and assessing the percentage of trials on which the answer is incorrect. The upper dotted line shows the effect of drawing $y \in [-0.2, 0.2]$ uniformly, yet using the learned weights in (B) that assume $y = 0$.
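Continuing the sketch above (and reusing its mean_activity and imports), the fixed-position ML discriminant of Eq. (5) can be approximated with a central finite difference in place of the analytic derivative (our choice, for brevity):

```python
# Sketch of the y = 0 ML linear discriminant: w_i = d log a_bar_i / d eps
# at eps = 0, approximated here by a finite difference.
d_eps = 1e-4
w = (np.log(mean_activity(d_eps, 0.0)) -
     np.log(mean_activity(-d_eps, 0.0))) / (2.0 * d_eps)

def decide(a):
    """Report eps > 0 iff t(a) = sum_i w_i a_i > 0 (Eq. 5)."""
    return float(a @ w) > 0.0

# Empirical error rate at eps = 0.05 (compare the lower curves of Fig. 2C)
rng = np.random.default_rng(4)
hits = [decide(rng.poisson(mean_activity(0.05, 0.0))) for _ in range(2000)]
print(1.0 - np.mean(hits))
```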
If the whole array can move, this balance will be destroyed, and all the above conclusions change. The equivalent of equation (3) when $y \neq 0$ is the second-order expansion in both $\epsilon$ and $y$:

$$\log P[\mathbf{a}|\epsilon, y] \approx \mathrm{constant} + \epsilon\, \partial_\epsilon \log P\big|_0 + y\, \partial_y \log P\big|_0 + \frac{\epsilon^2}{2}\, \partial^2_\epsilon \log P\big|_0 + \epsilon y\, \partial^2_{y\epsilon} \log P\big|_0 + \frac{y^2}{2}\, \partial^2_y \log P\big|_0$$

Thus, to second order, a Gaussian distribution can approximate $P[\epsilon, y|\mathbf{a}]$. Figure 3A shows the high quality of this approximation. Here, $\epsilon$ and $y$ are anti-correlated given the activities $\mathbf{a}$, because the information from the center stimulus bar only constrains their sum $\epsilon + y$. Of interest is the probability $P[\epsilon|\mathbf{a}] = \int dy\, P[\epsilon, y|\mathbf{a}]$, which is approximately Gaussian with mean $\beta$ and variance $\rho_\epsilon^2$, where, under Poisson noise $n_i = a_i - \bar{a}_i$,

$$\beta = \rho_\epsilon^2 \left[ \mathbf{a} \cdot \frac{\partial \log \bar{\mathbf{a}}}{\partial \epsilon} - \Big(\mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial y\, \partial \epsilon}\Big)\Big(\mathbf{a} \cdot \frac{\partial \log \bar{\mathbf{a}}}{\partial y}\Big) \Big/ \Big(\mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial y^2}\Big) \right]_{\epsilon, y = 0}$$

$$\rho_\epsilon^{-2} = \left[ \Big(\mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial y\, \partial \epsilon}\Big)^2 \Big/ \Big(\mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial y^2}\Big) - \mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial \epsilon^2} \right]_{\epsilon, y = 0}$$

Since $-\mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial y^2}$ (which is the inverse variance of the Gaussian distribution of $y$ that we integrated out) is positive, the appropriate test for the sign of $\epsilon$ is

$$t(\mathbf{a}) = \left[ \Big(\mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial y\, \partial \epsilon}\Big)\Big(\mathbf{a} \cdot \frac{\partial \log \bar{\mathbf{a}}}{\partial y}\Big) - \Big(\mathbf{a} \cdot \frac{\partial \log \bar{\mathbf{a}}}{\partial \epsilon}\Big)\Big(\mathbf{a} \cdot \frac{\partial^2 \log \bar{\mathbf{a}}}{\partial y^2}\Big) \right]_{\epsilon, y = 0} \tag{6}$$

If $t(\mathbf{a}) > 0$ then we should report $\epsilon > 0$, and conversely. Interestingly, $t(\mathbf{a})$ is a very simple quadratic form

$$t(\mathbf{a}) = \mathbf{a} \cdot Q \cdot \mathbf{a} \equiv \sum_{ij} a_i a_j \left[ \Big(\frac{\partial^2 \log \bar{a}_i}{\partial y\, \partial \epsilon}\Big)\Big(\frac{\partial \log \bar{a}_j}{\partial y}\Big) - \Big(\frac{\partial \log \bar{a}_i}{\partial \epsilon}\Big)\Big(\frac{\partial^2 \log \bar{a}_j}{\partial y^2}\Big) \right]_{\epsilon, y = 0} \tag{7}$$

Therefore, the discrimination problem in the face of positional variance has a precisely quantifiable non-linear character. The quadratic test $t(\mathbf{a})$ cannot be implemented by a linear feedforward architecture alone, since the optimal boundary $t(\mathbf{a}) = 0$ separating the state space $\mathbf{a}$ for a decision is now curved.

[Figure 3 appears here: (A) the exact and Gaussian-approximated posteriors over $(\epsilon, y)$; (B) the quadratic form $Q$; (C) its four eigenvectors.]

Figure 3: Varying $y$. A) Posterior distribution $P[\epsilon, y|\mathbf{a}]$. Exact (left) $P[\epsilon, y|\mathbf{a}]$ for a particular $\mathbf{a}$ with true values $\epsilon = 0.27$, $y = 1.57$ (with $\tau = 0.1$) and its bivariate Gaussian approximation (right). Only the relevant region of $(\epsilon, y)$ space is shown; outside this, the probability mass is essentially 0 (and the contour values are the same). B) The quadratic form $Q$, $Q_{ij}$ vs. $x_i$ and $x_j$. C) The four eigenvectors of $Q$ with non-zero eigenvalues (shown above). The eigenvalues come in $\pm$ pairs; the associated eigenvectors come in antisymmetric pairs. The absolute scale of $Q$ and its eigenvalues is arbitrary.

Writing $t(\mathbf{a}) = \mathbf{a} \cdot \bar{Q} \cdot \mathbf{a}$, where the symmetric
By comparison, a feedforward network, with weights shown in figure 4B and learned using the same error-correcting learning procedure as above, gives substantially worse performance, even though it is better than the feedforward net of Figure 2A;B. Figure 4C shows the ratio of the error rates for the linear to the quadratic decisions. The linear network is often dramatically worse, because it fails to take proper account of y.

We originally suggested that recurrent interactions, in the form of horizontal intra-cortical connections within V1, might be the site of the longer term improvement in behaviour. Figure 5 demonstrates the plausibility of this idea. Input activity (as in figure 1B) is used to initialise the state u at time t = 0 of a recurrent network. The recurrent weights are symmetric and shown in figure 5B. The network activities evolve according to

    du_i/dt = −u_i + Σ_j J_ij g(u_j) + a_i    (8)

where J_ij is the recurrent weight from unit j to unit i, g(u) = u if u > 0 and g(u) = 0 for u ≤ 0. The network activities finally settle to an equilibrium u(t → ∞) (note that u_i(t → ∞) = a_i when J = 0). The activity values u(t → ∞) of this equilibrium are fed through feedforward weights w, trained for this recurrent network just as for the pure feedforward case, to reach a decision Σ_i w_i u_i(t → ∞). Figure 5C shows that using this network gives results that are almost invariant to y (as for the quadratic discriminator); and figure 5D shows that it generally outperforms the optimal linear discriminator by a large margin, albeit performing slightly worse than the quadratic form.

Figure 5: Threshold linear recurrent network, its weights, and performance (panels: A, the network with input a and decision output; B, recurrent weights; C, recurrent error; D, linear/recurrent error ratio). See text.

The recurrent weight matrix is subject to three influences: (1) a short range interaction J_ij for |x_i − x_j| ≲ τ, to stabilize activities a_i induced by a single bar in the input; (2) a longer range interaction J_ij for |x_i − x_j| ~ 1, to mediate interaction between neighbouring stimulus bars, amplifying the effects of the displacement signal ε; and (3) a slight local interaction J_ij for |x_i|, |x_j| ≲ τ. The first two interaction components are translation invariant in the spatial range x_i, x_j ∈ [−2, 2] where the stimulus array appears, in order to accommodate the positional variance in y. The last component is not translation invariant and counters variations in y.
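A minimal sketch of the recurrent dynamics of equation (8), again on the assumed bisection model. The recurrent weight matrix here is only a placeholder built from the three influences just described (short-range stabilisation, a longer-range bar-to-bar interaction near |x_i − x_j| ≈ 1, and a local term near the array centre); the amplitudes and widths are illustrative assumptions, and the paper's figure 5B weights are not reproduced here.

```python
import numpy as np

x = np.linspace(-2, 2, 200)
dx = x[:, None] - x[None, :]
tau = 0.1

# Placeholder recurrent weights J combining the three described influences;
# amplitudes are kept small so the threshold-linear dynamics stay stable.
J = (0.02 * np.exp(-dx**2 / (2 * tau**2))                        # short-range stabilisation
     + 0.01 * np.exp(-(np.abs(dx) - 1.0)**2 / (2 * tau**2))      # bar-to-bar interaction
     + 0.01 * np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * tau**2)))  # local, not translation invariant

def g(u):
    return np.maximum(u, 0.0)  # threshold-linear nonlinearity

def settle(a, dt=0.05, steps=400):
    """Euler-integrate du/dt = -u + J g(u) + a (equation 8), from u(0) = a."""
    u = a.astype(float).copy()
    for _ in range(steps):
        u += dt * (-u + J @ g(u) + a)
    return u

# Example: feed a noisy bisection input through the network
rng = np.random.default_rng(2)
bars = np.array([-1.0, 0.05, 1.0])   # eps = 0.05, y = 0
abar = 50.0 * np.exp(-(x[:, None] - bars)**2 / (2 * tau**2)).sum(axis=1)
a = rng.poisson(abar).astype(float)
u_inf = settle(a)
# u_inf would then be read out through trained feedforward weights w: t = w @ u_inf
```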
5 Discussion

The problem of position-invariant discrimination is common to many perceptual learning tasks, including hyperacuity tasks such as the standard line vernier, three-dot vernier, curvature vernier, and orientation vernier tasks (Fahle et al 1995; Fahle 1997). Hence, the issues we address and analyze here are of general relevance. In particular, our mathematical formulation, derivations, and thus conclusions, are general and do not depend on any particular aspect of the bisection task. One essential problem in many of these tasks is to discriminate a stimulus variable ε that depends only on the relative positions between the stimulus features, while the absolute position y of the whole stimulus array can vary between trials by an amount that is much larger than the discrimination threshold (or acuity) on ε. The positional variable y may not have to correspond to the absolute position of the stimulus; it may merely reflect the error in the estimation of the absolute position of the stimulus by other neural areas.

Our study suggests that although when y = 0 is fixed, the discrimination is easy and soluble by a linear feedforward network whose weights can be learnt in a straightforward manner, when y is not fixed, optimal discrimination of ε is based on an approximately quadratic function of the input activities, which cannot be implemented using a linear feedforward net. We also showed that a non-linear recurrent network, which is a close relative of a line attractor network, can perform much better than a pure feedforward network on the bisection task in the face of position variance. There is experimental evidence that lateral connections within V1 change after learning the bisection task (Gilbert 2000), although we have yet to construct an appropriate learning rule. We suggest that learning the recurrent weights for the nonlinear transform corresponds to the slow component in perceptual learning, while learning the feedforward weights corresponds to the fast component. The desired recurrent weights are expected to be much more difficult to learn, in the face of nonlinear transforms and (the easily unstable) recurrent dynamics. Further, the feedforward weights need to be adjusted further as the recurrent weights change the activities on which they work.

The precise recurrent interactions in our network are very specific to the task and its parameters. In particular, the range of the interactions is completely determined by the scale of spacing between stimulus bars; and the distance-dependent excitation and inhibition in the recurrent weights is determined by the nature of the bisection task. This may be why there is little transfer of learning between tasks when the nature and the spatial scale of the task change, even if the same input units are involved. However, our recurrent interaction model does predict that transfer is likely when the spacing between the two outer bars (here Δx = 2) changes by a small fraction. Further, since the signs of the recurrent synapses change drastically with the distance between the interacting cells, negative transfer is likely between two bisection tasks of slightly different spatial scales. We are planning to test this prediction.

Achieving selectivity at the same time as translation invariance is a very basic requirement for position-invariant object recognition (see Riesenhuber & Poggio 1999 for a recent discussion), and arises in a pure form in this bisection task. Note, for instance, that trying to cope with different values of y by averaging spatially shifted versions of the optimal weights for y = 0 (figure 2A) would be hopeless, since this would erase (or at the very least blur) the precise spatial positioning of the peaks and troughs which underlies the discrimination power. It would be possible to scan the input for the value of y that fits best and then apply the discriminator centred about that value; indeed, this is conceptually what the neocognitron (Fukushima 1980) and the MAX model (Riesenhuber & Poggio 1999) do, using layers of linear and non-linear combination. In our case, we have shown, at least for fairly small y, that the optimal non-linearity for the task is a simple quadratic.

Acknowledgements

Funding is from the Gatsby Charitable Foundation. We are very grateful to Shimon Edelman, Manfred Fahle and Maneesh Sahani for discussions.

References
[1] Karni A and Sagi D. Nature 365:250-252, 1993.
[2] Fahle M, Edelman S and Poggio T. Vision Res. 35:3003-3013, 1995.
[3] Fahle M. Perception 23:411-427, 1994; and Fahle M. Vision Res. 37(14):1885-1895, 1997.
[4] Poggio T, Fahle M and Edelman S. Science 256:1018-1021, 1992.
[5] Weiss Y, Edelman S and Fahle M. Neural Computation 5:695-718, 1993.
[6] Li, Zhaoping. Network: Computation in Neural Systems 10(2):187-212, 1999.
[7] Pouget A, Zhang K, Deneve S and Latham PE. Neural Comput. 10(2):373-401, 1998.
[8] Seung HS and Sompolinsky H. Proc Natl Acad Sci USA 90(22):10749-53, 1993.
[9] Koch C. Biophysics of Computation. Oxford University Press, 1999.
[10] Gilbert C. Presentation at the Neural Dynamics Workshop, Gatsby Unit, 2/2000.
[11] Riesenhuber M and Poggio T. Nat Neurosci. 2(11):1019-25, 1999.
[12] Fukushima K. Biol. Cybern. 36:193-202, 1980.
Whence Sparseness?

C. van Vreeswijk
Gatsby Computational Neuroscience Unit, University College London
17 Queen Square, London WC1N 3AR, United Kingdom

Abstract

It has been shown that the receptive fields of simple cells in V1 can be explained by assuming optimal encoding, provided that an extra constraint of sparseness is added. This finding suggests that there is a reason, independent of optimal representation, for sparseness. However, this work used an ad hoc model for the noise. Here I show that, if a biologically more plausible noise model, describing neurons as Poisson processes, is used, sparseness does not have to be added as a constraint. Thus I conclude that sparseness is not a feature that evolution has striven for, but is simply the result of the evolutionary pressure towards an optimal representation.

1 Introduction

Recently there has been a resurgence of interest in using optimal coding strategies to 'explain' the response properties of neurons in the primary sensory areas [1]. Notably, this approach was used by Olshausen and Field [2] to infer the receptive fields of simple cells in the primary visual cortex. To arrive at the correct results, however, they had to add sparseness of activity as an extra constraint. Others have shown that similar results are obtained if one assumes that the neurons represent independent components of natural stimuli [3]. The fact that these studies need to impose an extra constraint suggests strongly that the subsequent processing of the stimuli uses either sparseness or independence of the neuronal activity. It is therefore highly important to determine whether these constraints are really necessary. Here it will be argued that the necessity of the sparseness constraint in these models is due to modeling the noise in the system incorrectly. Modeling the noise in a biologically more plausible way leads to a representation of the input in which the sparseness of the activity naturally follows from the optimality of the representation.

2 Gaussian Noise

Several approaches have been used to find an output that represents the input optimally, for example minimizing the square difference between the input and its reconstruction. In this paper I concentrate on a different definition of optimality: I require that the mutual information between the input and output is maximized. If the number of output units is at least equal to the dimensionality of the input space, a perfect reconstruction of the input is possible unless there is noise in the system. So for an (over-)complete representation, optimal encoding only makes sense in the presence of noise. It is important to note that the optimal solution depends on the model of noise that is taken, even if one takes the limit where the noise goes to zero. Thus it is important to have an adequate noise model. Most optimal coding schemes describe the neuronal output by an input-dependent mean to which Gaussian noise is added. This is, roughly speaking, also the implicit assumption in an optimization procedure in which the mean square reconstruction error is minimized, but it is also often used explicitly when the mutual information is maximized. It is instructive to see, in the latter case, why one needs to impose extra constraints to obtain unambiguous results. Assume the input s has dimension N_i and is drawn from a distribution p(s).
There are N_o ≥ N_i output neurons whose rates r satisfy

    r = W s + σ ξ,    (1)

where ξ is an N_o-dimensional univariate Gaussian with zero mean, p_ξ(ξ) = (2π)^{−N_o/2} exp(−ξᵀξ/2) (the superscript T denotes the transpose). The task is to find the N_o × N_i matrix W_m that maximizes the mutual information I_M between r and s, defined by [4]

    I_M(r, s) = ∫ dr ∫ ds p(r, s) { log[p(r|s)] − log[∫ ds' p(r, s')] }.    (2)

Here p(r, s) is the joint probability distribution of r and s, and p(r|s) is the conditional probability of r given s. It is immediately clear that replacing W by cW with c > 1 increases the mutual information by effectively reducing the noise by a factor 1/c. Thus maximal mutual information is obtained as the rates become infinite. Thus, to get sensible results, a constraint has to be placed on the rates. A natural constraint in this framework is a constraint on the average square rates, ⟨rᵀr⟩ = N_o R_m². Here I have used ⟨...⟩ to denote the average over the noise and inputs, and R_m² > σ² is the mean square rate. Under this constraint, however, the optimal solution is still vastly degenerate. Namely, if W_m is a matrix that gives the maximum mutual information, then for any unitary matrix U (UᵀU = 1), UW_m will also maximize I_M. This is straightforward to show. For r = W_m s + σξ the mutual information is given by

    I_M(r, s; W_m) = ∫ dr ∫ ds p(s) p_ξ((r − W_m s)/σ) { log[p_ξ((r − W_m s)/σ)] − log[∫ ds' p(s') p_ξ((r − W_m s')/σ)] },    (3)

where I have used I_M(r, s; W) to denote the mutual information when the matrix W is used. In the case where r satisfies r = UW_m s + σξ, the mutual information is given by equation 3, with W_m replaced by UW_m. Changing variables from r to r' = Uᵀr and using |det(U)| = 1, this can be rewritten as

    ∫ dr' ∫ ds p(s) p_ξ(U(r' − W_m s)/σ) { log[p_ξ(U(r' − W_m s)/σ)] − log[∫ ds' p(s') p_ξ(U(r' − W_m s')/σ)] }.    (4)

Because p_ξ(ξ) is a function of ξᵀξ only, p_ξ(Uξ) = p_ξ(ξ), and therefore I_M(r, s; UW_m) = I_M(r, s; W_m). In other words, because we have assumed a model in which the noise is described by independent Gaussians, or more generally one in which the distribution of the noise is a function of ξᵀξ only, the mutual information is invariant to unitary transformations of the output. Clearly, this degeneracy is a result of the particular choice of the noise statistics and is unlikely to survive when we try to account for biologically observed noise more accurately. In the latter case it may well happen that the degeneracy is broken in such a way that maximizing the mutual information with a constraint on the average rates is itself sufficient to obtain a sparse representation.
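The unitary degeneracy just described is easy to verify numerically. A minimal sketch, under the additional assumption that the input s is itself Gaussian, in which case the linear channel of equation (1) has the standard closed form I_M = ½ log det(I + W Σ_s Wᵀ / σ²):

```python
import numpy as np

rng = np.random.default_rng(0)
Ni, No, sigma = 4, 6, 0.5
Sigma_s = np.eye(Ni)                      # assumed Gaussian input covariance
W = rng.normal(size=(No, Ni))

def mutual_info(W):
    # I_M = 0.5 * log det(I + W Sigma_s W^T / sigma^2), for a Gaussian input
    M = np.eye(No) + W @ Sigma_s @ W.T / sigma**2
    return 0.5 * np.linalg.slogdet(M)[1]

# Random unitary (orthogonal) U via QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(No, No)))

print(mutual_info(W), mutual_info(U @ W))   # identical up to rounding
```

Since det(I + U A Uᵀ) = det(I + A) for orthogonal U, the two printed values agree, illustrating the degenerate family of optima.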
3 Poisson Noise

To obtain a robust insight into this issue, it is important that the system can be treated analytically. The desire for biological plausibility of the system should therefore be balanced by the desire to keep it simple. Ubiquitous features found in electrophysiological experiments likely to be of importance are (see for example [5]): i) Neurons transmit information through spikes. ii) Consecutive inter-spike intervals are at best weakly correlated. iii) With repeated presentation of the stimulus, the variance in the number of spikes a neuron emits over a given period varies nearly linearly with the mean number of emitted spikes. A simple model that captures all these features of the biological system is the Poisson process [6]. I will thus consider a system in which the neurons are described by such a process.

The general model is as follows. The inputs are given by an N_i-dimensional vector s drawn from a distribution p(s). These give rise to N_o inputs u into the cells, which satisfy u = W s, where W is the coupling matrix. The inputs u are transformed into rates through a transfer function g, r_i = g(u_i). The output of the network is observed for a time T. Optimal encoding of the input is defined by maximal mutual information between the spikes the neurons emit and the stimulus. Let n_i be the total number of spikes for neuron i and n the N_o-dimensional array of spike counts; then p(n|r) = ∏_i (r_i T)^{n_i} exp(−r_i T)/n_i!. Optimal coding is achieved by determining W_m such that

    W_m = argmax_W ( I_M(n, s; W) ).    (5)

As before there is need for a constraint on W so that solutions with infinite rates are excluded. Whereas for Gaussian channels fixing the mean square rates is the most natural choice for the constraint, for Poissonian neurons it is more natural to fix the mean number of emitted spikes, Σ_i ⟨r_i⟩ = N_o R_o. By rescaling time we can, without loss of generality, assume that R_o = 1.

4 A Simple Example

The simplest example in which we can consider whether such a system will lead to a sparse representation is a system with a single neuron and a 1-dimensional input, which is uniformly distributed between 0 and 1. Assume that the unit has output rate r = 0 when the input satisfies s < 1 − p, and rate 1/p if s > 1 − p. Because the neuron is either 'on' or 'off', maximal information about its state can be obtained by checking whether it fired one or more spikes, or did not fire at all, in the time window over which the neuron was observed. If the neuron is 'on', the probability that it does not spike in a time T is exp(−T/p); otherwise it is 1. Thus the joint probability distribution is

    p(0, s) = 1 − (1 − e^{−T/p}) Θ(s − 1 + p),    p(1+, s) = (1 − e^{−T/p}) Θ(s − 1 + p),    (6)

where I have used p(1+, s) to denote the probability of 1 or more spikes and an input s, and Θ(·) is the step function. The mutual information satisfies

    I_M(n, s; p) = −p(1 − e^{−T/p}) log p − T e^{−T/p} − (1 − p(1 − e^{−T/p})) log(1 − p(1 − e^{−T/p})).    (7)

Figure 1 shows p_m, the value of p that maximizes the mutual information, as a function of the time T over which the neuron is observed. For small T, p_m is small; this reflects the fact that the reliability of the neuron is increased if the rate in the 'on' state (1/p) is made maximal. For large T, p_m approaches 1/2, the value for which the entropy of the output is maximized. We thus see a trade-off between reliability, which wants to make p as small as possible, and capacity, which pushes p to 1/2. For time intervals that are smaller than or on the order of the mean inter-spike interval, the former dominates and leads to an optimal solution in which the neuron is, with high probability, quiet, or, with low probability, fires vigorously. Thus in this system the neuron fires sparsely if the measuring time is sufficiently short.

Figure 1: p_m, the value of p that maximizes the mutual information, as a function of the measuring time window T.
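The trade-off captured by equation (7) can be reproduced numerically without the closed form, by treating the cell as a binary channel: the input is 'on' with probability p, the 'on' state produces at least one spike with probability 1 − e^{−T/p}, and the 'off' state never spikes. A minimal sketch that grid-searches p_m(T):

```python
import numpy as np

def h(q):
    """Contribution -q log q to an entropy, in nats, with 0 log 0 = 0."""
    q = np.clip(q, 1e-300, 1.0)
    return -q * np.log(q)

def mutual_info(p, T):
    q_on = 1.0 - np.exp(-T / p)     # P(at least one spike | 'on')
    a = p * q_on                    # marginal P(at least one spike)
    # I = H(output) - H(output | state); the 'off' state is deterministic
    H_out = h(a) + h(1.0 - a)
    H_cond = p * (h(q_on) + h(1.0 - q_on))
    return H_out - H_cond

ps = np.linspace(0.001, 0.999, 2000)
for T in [0.5, 1.0, 2.0, 5.0]:
    pm = ps[np.argmax(mutual_info(ps, T))]
    print(f"T={T:.1f}  p_m={pm:.3f}")   # p_m grows towards 1/2 as T grows
```

The printed p_m(T) reproduces the behaviour of figure 1: small (sparse firing) for short observation windows, approaching 1/2 for long ones.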
5 A More Interesting Example

Somewhat closer to the problem of optimal encoding in V1, but still tractable, is the following example. A two-dimensional input s is drawn from a distribution p(s) given by

    p(s₁, s₂) = ½ ( δ(s₁) e^{−|s₂|}/2 + e^{−|s₁|}/2 δ(s₂) ).    (8)

This input is encoded by four neurons; the inputs into these neurons are given by

    (u₁, u₂)ᵀ = [1/(|cos φ| + |sin φ|)] ( cos φ  −sin φ ; sin φ  cos φ ) (s₁, s₂)ᵀ,    (9)

with u₃ = −u₁ and u₄ = −u₂. The rates r_i satisfy r_i = (u_i)₊ ≡ (u_i + |u_i|)/2, the threshold-linear function. Due to the symmetry of the problem, rotation by a multiple of π/2 leads to the same rates, up to a permutation. Thus we can restrict ourselves to 0 ≤ φ < π/2. It is straightforward to show that Σ_i ⟨n_i⟩ = 4, and that the sparseness of the activity, here defined by Σ_i (⟨n_i²⟩ − ⟨n_i⟩²)/⟨n_i⟩², has its minimum for φ = π/4 and its maximum for φ = 0. Some straightforward algebra shows that the mutual information is

    I_M(n, s; φ) = −γT + (1 + T) log(1 + T) − (1/(1+T)) Σ_n (T/(1+T))^n [ log n! + c̃^n log(1 + (s̃/c̃)^n) + s̃^n log(1 + (c̃/s̃)^n) ],    (10)

where γ is Euler's constant, c̃ = |cos φ|/(|cos φ| + |sin φ|) and s̃ = |sin φ|/(|cos φ| + |sin φ|).

Figure 2: Mutual information I_M as a function of φ, for T = 1 (solid line), T = 2 (dashed line), and T = 3 (dotted line).

Figure 2 shows I_M as a function of φ for different values of T. For all values of T the mutual information is maximal when φ = 0 and minimal for φ = π/4. For large T the angular-dependent part of I_M scales as 1/T, so this dependence becomes negligible if the output is observed for a long time. Yet, as in the previous example, for relatively short time windows optimal coding automatically leads to sparse coding.

6 Optimal Coding in V1

Finally we turn to optimal encoding of images in the striate cortex. To study this I consider a system with a large number, K, of natural images. The intensity of pixel j (1 ≤ j ≤ N_i) of image κ is given by s_j(κ). These images induce an input u_i into neuron i (1 ≤ i ≤ N_o) given by

    u_i(κ) = Σ_j W_ij s_j(κ),    (11)

which leads to firing rates r_i(κ) satisfying r_i(κ) = β^{−1} log(1 + e^{βu_i(κ)}). Here I have used a smoothed version of the threshold-linear function (which is recovered in the limit β → ∞) to ensure that its derivative with respect to W_ij is continuous. The neurons fire in a Poissonian manner, so that for image κ the probability p(n, κ) of observing n_i spikes for neuron i is given by

    p(n, κ) = K^{−1} ∏_i (r_i(κ) T)^{n_i} exp(−r_i(κ) T)/n_i!.    (12)

We want to choose the matrix W such that the mutual information I_M between the image κ and the spike counts of the different neurons, given by

    I_M(n, κ; W) = K^{−1} Σ_κ Σ_n p(n|κ; W) [ log p(n|κ; W) − log p(n; W) ],    (13)

is maximized. Obviously an analytic solution is out of the question, but one may want to approach the optimal solution by gradient ascent, using the derivative of the mutual information, ∂I_M/∂W_ij. Here the derivative of p(n, κ) is given by

    ∂p(n, κ)/∂W_ij = s_j(κ) (1 − e^{−βr_i(κ)}) (n_i/r_i(κ) − T) p(n, κ).    (14)

The constraint on the rates, K^{−1} Σ_κ Σ_i r_i(κ) = N_o, is incorporated by adding this function with a Lagrange multiplier to the objective function.

Unfortunately the gradient ascent approach is impractical, since the summation over n scales exponentially with N_o. In any case, one may want to use stochastic gradient ascent to avoid getting trapped in local minima. But to do stochastic gradient ascent it is sufficient to obtain an unbiased estimate of ∂I_M/∂W_ij. Writing this derivative as ∂I_M/∂W_ij = Σ_n F_ij(n), where F_ij has the obvious meaning, one can rewrite the derivative of the mutual information as

    ∂I_M/∂W_ij = Σ_n p̄(n) F_ij(n)/p̄(n),    (15)

provided that p̄(n) is non-zero for every n for which F_ij(n) ≠ 0. An unbiased estimate of ∂I_M/∂W_ij (denoted ∂Î_M/∂W_ij) is obtained by taking

    ∂Î_M/∂W_ij = (1/L) Σ_{ℓ=1}^{L} F_ij(n^{(ℓ)})/p̄(n^{(ℓ)}),    (16)

where the L vectors n^{(ℓ)} are drawn independently from the distribution p̄(n).
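A minimal sketch of the unbiased estimator of equations (15)-(16), shown on a small toy problem rather than the full image model; the target quantity Σ_n F(n), the proposal p̄, and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: n ranges over {0,...,9}; F(n) is some summand whose total we want.
F = rng.normal(size=10)
exact = F.sum()                       # the "intractable" sum, computable here

# Proposal distribution p_bar(n): must be non-zero wherever F(n) != 0
p_bar = rng.random(10)
p_bar /= p_bar.sum()

def estimate(L):
    # Equation (16): average of F(n_l) / p_bar(n_l) over samples n_l ~ p_bar
    n = rng.choice(10, size=L, p=p_bar)
    return np.mean(F[n] / p_bar[n])

for L in [10, 1000, 100000]:
    print(f"L={L:<6d} estimate={estimate(L):+.4f}  exact={exact:+.4f}")
```

The estimate converges to the exact sum as L grows, for any valid proposal; the variance, however, depends strongly on how well p̄ matches the summand, which is the motivation for the choice made next.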
Conjecturing that F_ij(n) is roughly proportional to p(n; W), I set p̄(n) = p(n; W) to obtain the best estimate of ∂I_M/∂W_ij for fixed L. Drawing from p(n; W) can be done in a computationally cheap way by first randomly picking κ and then drawing from p(n|κ; W), which factorizes.

In the simulation of the system I have used the natural image collection from [7]. From each of the approximately 4000 images a 10 × 10 patch was taken. These were preprocessed by subtracting the mean and whitening the image. To reduce the effect of the corners, a circular region was then extracted; this resulted in an input of 80 pixel intensities per image. These pixel intensities were encoded by 160 neurons. The coupling matrix was initialized by drawing its components independently from a Gaussian distribution and rescaling the matrix to normalize ⟨r_i⟩. The time window was chosen to be T = 0.5, and β was gradually increased from its initial value of β = 1 to β = 10. The coupling matrix was updated using a learning rate of 10⁻⁴ and L = 10.

Figure 3 shows some of the receptive fields obtained after the system had approached the fixed point, i.e. when the running average of the mutual information no longer increased. These receptive fields look rather similar to those obtained from simple cells in the striate cortex. However, a more thorough analysis of these receptive fields and of the sparseness of the rate distribution still has to be undertaken.

Figure 3: Forty examples of receptive fields that show clear Gabor-like structure.

7 Discussion

I have shown why optimal encoding using Gaussian channels naturally leads to highly degenerate solutions necessitating extra constraints. Using the biologically more plausible noise model which describes the neurons as Poisson processes naturally leads to a sparse representation when optimal encoding is enforced, for two analytically tractable models. For a model of the striate cortex, Poisson statistics also leads to a network in which the receptive fields of the neurons mimic those of V1 simple cells, without the need to impose sparseness. This leads to the conclusion that sparseness is not an independent constraint imposed by evolutionary pressure, but rather is a consequence of optimal encoding.

References
[1] Baddeley, R., Hancock, P., and Földiák, P. (eds.) (2000) Information Theory and the Brain. Cambridge University Press, Cambridge.
[2] Olshausen, B.A. and Field, D.J. (1996) Nature 381:607; (1998) Vision Research 37:3311.
[3] Bell, A.J. and Sejnowski, T.J. (1997) Vision Res. 37:3327; van Hateren, J.H. and van der Schaaf, A. (1998) Proc. R. Soc. Lond. B 265:359.
[4] Cover, T.M. and Thomas, J.A. (1991) Information Theory. Wiley and Sons, New York.
[5] Richmond, B.J., Optican, L.M., and Spitzer, H. (1990) J. Neurophysiol. 64:351; Rolls, E.T., Critchley, H.D., and Treves, A. (1996) J. Neurophysiol. 75:1982; Dean, A.F. (1981) Exp. Brain Res. 44:437.
[6] Smith, W.L. (1951) Biometrika 46:1.
[7] van Hateren, J.H. and van der Schaaf, A. (1998) Proc. R. Soc. Lond. B 265:359-366.
One Microphone Source Separation

Sam T. Roweis
Gatsby Unit, University College London
sam@gatsby.ucl.ac.uk

Abstract

Source separation, or computational auditory scene analysis, attempts to extract individual acoustic objects from input which contains a mixture of sounds from different sources, altered by the acoustic environment. Unmixing algorithms such as ICA and its extensions recover sources by reweighting multiple observation sequences, and thus cannot operate when only a single observation signal is available. I present a technique called refiltering which recovers sources by a nonstationary reweighting ("masking") of frequency sub-bands from a single recording, and argue for the application of statistical algorithms to learning this masking function. I present results of a simple factorial HMM system which learns on recordings of single speakers and can then separate mixtures using only one observation signal by computing the masking function and then refiltering.

1 Learning from data in computational auditory scene analysis

Imagine listening to many pianos being played simultaneously. If each pianist were striking keys randomly it would be very difficult to tell which note came from which piano. But if each were playing a coherent song, separation would be much easier because of the structure of music. Now imagine teaching a computer to do the separation by showing it many musical scores as "training data".

Typical auditory perceptual input contains a mixture of sounds from different sources, altered by the acoustic environment. Any biological or artificial hearing system must extract individual acoustic objects or streams in order to do successful localization, denoising and recognition. Bregman [1] called this process auditory scene analysis in analogy to vision. Source separation, or computational auditory scene analysis (CASA), is the practical realization of this problem via computer analysis of microphone recordings and is very similar to the musical task described above. It has been investigated by research groups with different emphases. The CASA community have focused on both multiple and single microphone source separation problems under highly realistic acoustic conditions, but have used almost exclusively hand-designed systems which include substantial knowledge of the human auditory system and its psychophysical characteristics (e.g. [2,3]). Unfortunately, it is difficult to incorporate large amounts of detailed statistical knowledge about the problem into such an approach. On the other hand, machine learning researchers, especially those working on independent components analysis (ICA) and related algorithms, have focused on the case of multiple microphones in simplified mixing environments and have used powerful "blind" statistical techniques. These "unmixing" algorithms (even those which attempt to recover more sources than signals) cannot operate on single recordings. Furthermore, since they often depend only on the joint amplitude histogram of the observations, they can be very sensitive to the details of filtering and reverberation in the environment. The goal of this paper is to bring together the robust representations of CASA and methods which learn from data to solve a restricted version of the source separation problem: isolating acoustic objects from only a single microphone recording.
2 Refiltering vs. unmixing

Unmixing algorithms reweight multiple simultaneous recordings m_k(t) (generically called microphones) to form a new source object s(t):

    ŝ(t) = α₁ m₁(t) + α₂ m₂(t) + ... + α_K m_K(t)    (1)

(the estimated source on the left; microphones 1 through K on the right). The unmixing coefficients α_i are constant over time and are chosen to optimize some property of the set of recovered sources, which often translates into a kurtosis measure on the joint amplitude histogram of the microphones. The intuition is that unmixing algorithms are finding spikes (or dents, for low-kurtosis sources) in the marginal amplitude histogram. The time ordering of the datapoints is often irrelevant. Unmixing depends on a fine-timescale, sample-by-sample comparison of several observation signals. Humans, on the other hand, cannot hear histogram spikes¹ and perform well on many monaural separation tasks. We are doing structural analysis, or a kind of perceptual grouping, on the incoming sound. But what is being grouped? There is substantial evidence that the energy across time in different frequency bands can carry relatively independent information. This suggests that the appropriate subparts of an audio signal may be narrow frequency bands over short times. To generate these parts, one can perform multiband analysis: break the original signal y(t) into many subband signals b_i(t), each filtered to contain only energy from a small portion of the spectrum. The results of such an analysis are often displayed as a spectrogram, which shows energy (using colour or grayscale) as a function of time (abscissa) and frequency (ordinate). (For example, one is shown at the top left of figure 5.) In the musical analogy, a spectrogram is like a musical score in which the colour or grey level of each note tells you how hard to hit the piano key.

The basic idea of refiltering is to construct new sources by selectively reweighting the multiband signals b_i(t). Crucially, however, the mixing coefficients are no longer constant over time; they are now called masking signals. Given a set of masking signals, denoted α_i(t), a source s(t) can be recovered by modulating the corresponding subband signals from the original input and summing:

    ŝ(t) = α₁(t) b₁(t) + α₂(t) b₂(t) + ... + α_K(t) b_K(t)    (2)

(masks 1 through K applied to sub-bands 1 through K). The α_i(t) are gain knobs on each subband that we can twist over time to bring bands in and out of the source as needed. This performs masking on the original spectrogram. (An equivalent operation can be performed in the frequency domain.²) This approach, illustrated in figure 1, forms the basis of many CASA approaches (e.g. [2,3,4]). For any specific choice of masking signals α_i(t), refiltering attempts to isolate a single source from the input signal and suppress all other sources and background noises. Different sources can be isolated by choosing different masking signals. Henceforth, I will make a strong simplifying assumption that the α_i(t) are binary and constant over a timescale τ of roughly 30ms. This is physically unrealistic, because the energy in each small region of time-frequency never comes entirely from a single source. However, in practice, for small numbers of sources this approximation works quite well (figure 3). (Think of ignoring collisions by assuming separate piano players do not often hit the same note at the same time.)

¹Try randomly permuting the time order of samples in a stereo mixture containing several sources and see if you still hear distinct streams when you play it back.
²Make a conventional spectrogram of the original signal y(t) and modulate the magnitude of each short-time DFT while preserving its phase: s^w(t) = F⁻¹{ α^w |F{y^w(t)}| ∠F{y^w(t)} }, where s^w(t) and y^w(t) are the w-th windows (blocks) of the recovered and original signals, α_i^w is the masking signal for subband i in window w, and F[·] is the DFT.
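A minimal sketch of the frequency-domain refiltering described in footnote 2: mask the magnitude of each short-time DFT, keep the phase, and resynthesize by overlap-add. The mask here is an arbitrary binary example, and the window length, hop size and all names are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stft(y, nfft=1024, hop=256):
    win = np.hanning(nfft)
    frames = [y[k:k + nfft] * win for k in range(0, len(y) - nfft, hop)]
    return np.array([np.fft.rfft(f) for f in frames])      # (windows, bins)

def istft(S, nfft=1024, hop=256):
    win = np.hanning(nfft)
    out = np.zeros(hop * (len(S) - 1) + nfft)
    norm = np.zeros_like(out)
    for w, spec in enumerate(S):                            # overlap-add
        out[w * hop:w * hop + nfft] += np.fft.irfft(spec, nfft) * win
        norm[w * hop:w * hop + nfft] += win**2
    return out / np.maximum(norm, 1e-8)

def refilter(y, alpha):
    """Equation (2) in the frequency domain: alpha is a mask with one value
    per (window, subband); the phase of y is preserved (footnote 2)."""
    S = stft(y)
    masked = alpha * np.abs(S) * np.exp(1j * np.angle(S))
    return istft(masked)

# Example with a toy mixture of two tones; keep only bins below 800 Hz
fs = 16000
t = np.arange(fs) / fs
y = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1320 * t)
S = stft(y)
alpha = (np.abs(S) * 0 + (np.fft.rfftfreq(1024, 1 / fs) < 800)).astype(float)
s_hat = refilter(y, alpha)   # recovers (approximately) the 440 Hz tone
```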
Figure 1: The refiltering approach to one microphone source separation. Multiband analysis of the original signal y(t) gives sub-band signals b_i(t) which are modulated by masking signals α_i(t) (binary or real-valued between 0 and 1) and recombined to give the estimated source or object ŝ(t).

Refiltering can also be thought of as a highly nonstationary Wiener filter in which both the signal and noise spectra are re-estimated at a rate 1/τ; the binary assumption is equivalent to assuming that over a timescale τ the signal and noise spectra are non-overlapping. It is a fortunate empirical fact that refiltering, even with binary masking signals, can cleanly separate sources from a single mixed recording. This can be demonstrated by taking several isolated sources or noises and mixing them in a controlled way. Since the original components are known, an "optimal" set of masking signals can be computed. For example, we might set α_i(t) equal to the ratio of energy from one source in band i around times t ± τ to the sum of energies from all sources in the same band at that time (as recommended by the Wiener filter), or to a binary version which thresholds this ratio. Constructing masks in this way is also useful for generating labeled training data, as discussed below.

3 Multiband grouping as a statistical pattern recognition problem

Since one-microphone source separation using refiltering is possible if the masking signals are well chosen, the essential problem becomes: how can the α_i(t) be computed automatically from a single mixed recording? The goal is to group or "tag" together regions of the spectrogram that belong to the same auditory object. Fortunately, in audition (as in vision), natural signals (especially speech) exhibit a lot of regularity in the way energy is distributed across the time-frequency plane. Grouping cues based on these regularities have been studied for many years by psychophysicists and are hand-built into many CASA systems. Cues are based on the idea of suspicious coincidences: roughly, "things that move together likely belong together". Thus, frequencies which exhibit common onsets, offsets, or upward/downward sweeps are more likely to be grouped into the same stream (figure 2). Also, many real-world sounds have harmonic spectra, so frequencies which lie exactly on a harmonic "stack" are often perceptually grouped together. (Musically, piano players do not hit keys randomly, but instead use chords and repeated melodies.)

Figure 2: Examples of three common grouping cues for energy which often comes from a single source. (left) Harmonic stacking: frequencies which lie exactly on harmonic multiples of a single base frequency. (middle) Common onset: frequencies which suddenly increase or decrease their energy together. (right) Frequency co-modulation: energy which moves up or down in frequency at the same time.

There are several ways that statistical pattern recognition might be applied to take advantage of these cues.
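A minimal sketch of the "optimal" masks described above, computable when the isolated sources are known before mixing. The Wiener-style ratio and its binary threshold are taken per time-frequency cell; the stft helper from the previous sketch is reused, and all settings are illustrative:

```python
import numpy as np

def oracle_masks(s1, s2, stft_fn):
    """Given the two isolated sources s1(t), s2(t) that make up a mixture,
    return the Wiener-style ratio mask and its binary version for source 1."""
    E1 = np.abs(stft_fn(s1))**2            # energy of source 1 per (window, band)
    E2 = np.abs(stft_fn(s2))**2            # energy of source 2
    ratio = E1 / (E1 + E2 + 1e-12)         # fraction of energy from source 1
    binary = (ratio > 0.5).astype(float)   # thresholded version, as in the text
    return ratio, binary

# Usage with the toy tones and stft()/refilter() from the refiltering sketch:
# ratio_mask, binary_mask = oracle_masks(tone1, tone2, stft)
# s1_hat = refilter(tone1 + tone2, binary_mask)
```

Masks built this way also supply the labels for the supervised variant discussed next.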
Methods may be roughly grouped into unsupervised ones, which learn models of isolated sources and then try to explain mixed input as being caused by the interaction of individual source models, and supervised methods, which explicitly model grouping in mixed acoustic input but require labeled data consisting of mixed input as well as masking signals. Luckily, it is very easy to generate such data by mixing isolated sources in a controlled way, although the subsequent supervised learning can be difficult.³

Figure 3: Each point represents the energy from one source versus another in a narrow frequency band over a 32ms window. The plot shows all frequencies over a 2 second period from a speech mixture. Typically when one source has large energy the other does not. The binary assumption on the masking signals α_i(t) is equivalent to projecting the points shown onto either the horizontal or vertical axis.

4 Results using factorial-max HMMs

Here, I will describe one (purely unsupervised) method I have pursued for automatically generating masking signals from a single microphone. The approach first trains speaker-dependent hidden Markov models (HMMs) on isolated data from single talkers. These pre-trained models are then combined in a particular way to build a separation system.

First, for each speaker, a simple HMM is fit using patches of narrowband spectrograms as the pattern vectors.⁴ The emission densities model the typical spectral patterns produced by each talker, while the transition probabilities encourage spectral continuity. HMM training was initialized by first training a mixture of Gaussians on each speaker's data (with a single shared covariance matrix), independent of time order. Each mixture had 8192 components of dimension 1026 = 513 × 2; thus each HMM had 8192 states. To avoid overfitting, the transition matrices were regularized after training so that each transition (even those unobserved in the training set) had a small finite probability.

Next, to separate a new single recording which is a mixture of known speakers, these pretrained models are combined into a factorial hidden Markov model (FHMM) architecture [5]. A FHMM consists of two or more underlying Markov chains (the hidden states) which evolve independently. The observation y_t at any time depends on the states of all the chains. A simple way to model this dependence is to have each chain c independently propose an output y^c and then combine the proposals to generate the observation according to some rule y_t = Q(y¹_t, y²_t, ..., y^C_t). Below, I use a model with only two chains, whose states are denoted x_t and z_t. At each time, one chain proposes an output a_{x_t} and the other proposes b_{z_t}. The key part of the model is the function Q: observations are generated by taking the elementwise maximum of the proposals and adding noise. This maximum operation reflects the observation that the log magnitude spectrogram of a mixture of sources is very nearly the elementwise maximum of the individual spectrograms. The full generative model for this "factorial-max HMM" can be written simply as:

    p(x_t = j | x_{t−1} = i) = T_ij    (3)
    p(z_t = j | z_{t−1} = i) = U_ij    (4)
    p(y_t | x_t, z_t) = N(max[a_{x_t}, b_{z_t}], R)    (5)

where N(μ, Σ) denotes a Gaussian distribution with mean μ and covariance Σ, and max[·] is the elementwise maximum operation on two vectors. (There are also densities on the initial states x₁ and z₁.) This model is illustrated in figure 4. It ignores two aspects of the spectrogram data: first, Gaussian noise is used although the observations are nonnegative; second, the probability factor requiring the non-maximum output proposal to be less than the maximum proposal is missing. However, in practice these approximations are not too severe, and making them allows an efficient inference procedure (see below).

³Recall that refiltering can only isolate one auditory stream at a time from the scene (we are always separating "a source" from "the background"). This makes learning the masking signals an unusual problem because for any input (spectrogram) there are as many correct answers as objects in the scene. Such a highly multimodal distribution on outputs given inputs means that the mapping from auditory input to masking signals cannot be learned using backprop or other single-valued function approximators, which take the average of the possible maskings present in the training data.

⁴The observations are created by concatenating the values of 2 adjacent columns of the log magnitude periodogram into a single vector. The original waveforms were sampled at 16kHz. Periodogram windows of 32ms at a frame rate of 16ms were analyzed using a Hamming-tapered DFT zero-padded to length 1024. This gave 513 frequency samples from DC to Nyquist. Average signal energy was normalized across the most recent 8 frames before computing each DFT.
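A minimal sketch of sampling from the factorial-max generative model of equations (3)-(5); the state-space size, the random parameters and the isotropic output noise are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
S, D, T_len = 8, 16, 50      # states per chain, output dimension, sequence length

def random_transitions(S):
    M = rng.random((S, S))
    return M / M.sum(axis=1, keepdims=True)

T_mat, U_mat = random_transitions(S), random_transitions(S)
a = rng.normal(size=(S, D))  # emission means proposed by chain x
b = rng.normal(size=(S, D))  # emission means proposed by chain z
R = 0.1                      # assumed isotropic output noise variance

x = z = 0
Y = np.zeros((T_len, D))
for t in range(T_len):
    # the two chains evolve independently (equations 3 and 4)
    x = rng.choice(S, p=T_mat[x])
    z = rng.choice(S, p=U_mat[z])
    # observation: elementwise max of the two proposals plus noise (equation 5)
    Y[t] = np.maximum(a[x], b[z]) + np.sqrt(R) * rng.normal(size=D)
```

In the real system a and b would be the pretrained speakers' log-spectral emission means rather than random vectors.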
Figure 4: Factorial HMM with max output semantics. Two Markov chains x_t and z_t evolve independently. Observations y_t are the elementwise max of the individual emission vectors, max[a_{x_t}, b_{z_t}], plus Gaussian noise.

In the experiment presented below, each chain represents a speaker-dependent HMM (one male and one female). The emission and transition probabilities from each speaker's pretrained HMM were used as the parameters for the combined FHMM. (The output noise covariance R is shared between the two HMMs.) Given an input waveform, the observation sequence Y = y₁, ..., y_T is created from the spectrogram as before.⁴ Separation is done by first inferring a joint underlying state sequence {x_t, z_t} of the two Markov chains in the model and then using the difference of their individual output predictions to compute a binary masking signal:

    α_t(i) = 1 if a_{x_t}(i) > b_{z_t}(i), and α_t(i) = 0 if a_{x_t}(i) ≤ b_{z_t}(i).    (6)

Ideally, the inferred state sequences {x_t, z_t} should be the mode of the posterior distribution p(x_t, z_t | Y). Since the hidden chains share a single visible output variable, naive inference in the FHMM graphical model yields an intractable amount of work, exponential in the size of the state space of each submodel. However, because all of the observations are nonnegative and the max operation is used to combine output proposals, there is an efficient trick for computing the best joint state trajectory. At each time, we can upper bound the log-probability of generating the observation vector if one chain is in state i, no matter what state the other chain is in. Computing these bounds for each state setting of each chain requires only a linear amount of work in the size of the state spaces. With these bounds in hand, each time we evaluate the probability of a specific pair of states we can eliminate from consideration all state settings of either chain whose bounds are worse than the achieved probability. If pairs of states are evaluated in a sensible heuristic order (for example by ranking the bounds), this results in practice in almost all possible configurations being quickly eliminated. (This trick turns out to be equivalent to αβ search in game trees.)
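Once state sequences have been inferred, the mask of equation (6) is a direct comparison of the two chains' predicted spectra. A minimal sketch, reusing the names from the sampling sketch above; the inferred sequences xs and zs are assumed given, and hooking the mask into resynthesis would follow footnote 2:

```python
import numpy as np

def masks_from_states(xs, zs, a, b):
    """Equation (6): alpha_t(i) = 1 where chain x's predicted spectrum
    exceeds chain z's, per frame t and subband i."""
    A = a[np.asarray(xs)]          # (T, D) predicted log-spectra, chain x
    B = b[np.asarray(zs)]          # (T, D) predicted log-spectra, chain z
    alpha = (A > B).astype(float)  # mask for the speaker modelled by chain x
    return alpha, 1.0 - alpha      # complementary mask for the other speaker

# Example (with a, b from the previous sketch and some inferred sequences):
# alpha_x, alpha_z = masks_from_states(xs, zs, a, b)
# each mask would then modulate the mixture's spectrogram magnitudes
```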
The training data for the model consists only of spectrograms of isolated examples of each speaker, but inference can be done on test data which is a spectrogram of a single mixture of known speakers. The results of separating a simple two-speaker mixture are shown below. The test utterance was formed by linearly mixing two out-of-sample utterances (one male and one female) from the same speakers as the models were trained on. Figure 5 shows the original mixed spectrogram (top left) as well as the sequence of outputs a_{x_t} (bottom left) and b_{z_t} (bottom right) from each chain. The chain with the maximum output in any sub-band at any time has α_i(t) = 1; otherwise α_i(t) = 0 (top right). The FHMM system achieves good separation from only a single microphone (see figure 6).

Figure 5: (top left) Original spectrogram of mixed utterance. (bottom) Male and female spectrograms predicted by the factorial HMM and used to compute refiltering masks. (top right) Masking signals α_i(t), computed by comparing the magnitudes of each model's predictions.

5 Conclusions

In this paper I have argued for the marriage of learning algorithms with the refiltering approach to CASA. I have presented results from a simple factorial HMM system on a speaker-dependent separation problem which indicate that automatically learned one-microphone separation systems may be possible. In the machine learning community, the one-microphone separation problem has received much less attention than unmixing problems, while CASA researchers have not employed automatic learning techniques to full effect. Scene analysis is an interesting and challenging learning problem with exciting and practical applications, and the refiltering setup has many nice properties. First, it can work if the masking signals are chosen properly. Second, it is easy to generate lots of training data, both supervised and unsupervised. Third, a good learning algorithm, when presented with enough data, should automatically discover the sorts of grouping cues which have been built into existing systems by hand. Furthermore, in the refiltering paradigm there is no need to make a hard decision about the number of sources present in an input. Each proposed masking has an associated score or probability; groupings with high scores can be considered "sources", while ones with low scores might be parts of the background or mixtures of other faint sources. CASA returns a collection of candidate maskings and their associated scores, and then it is up to the user to decide, based on the range of scores, the number of sources in the scene.

Many existing approaches to speech and audio processing have the potential to be applied to the monaural source separation problem. The unsupervised factorial HMM system presented in this paper is very similar to the work in the speech recognition community on parallel model combination [6,7]; however, rather than using the combined models to evaluate the likelihood of speech in noise, the efficiently inferred states are being used to generate a masking signal for refiltering. Wan and Nelson have developed dual EKF methods [8] and applied them to speech denoising, but have also informally demonstrated their potential application to monaural source separation. Attias and colleagues [9] developed a fully probabilistic model of speech in noise and used variational Bayesian techniques to perform inference and learning, allowing denoising and dereverberation; their approach clearly has the potential to be applied to the separation problem as well.
Cauwenberghs [10] has a very promising approach to the problem for purely harmonic signals that takes advantage of powerful phase constraints which are ignored by other algorithms. Unsupervised and supervised approaches can be combined to various degrees. Learning models of isolated sounds may be useful for developing feature detectors; conjunctions of such feature detectors can then be trained in a supervised fashion using labeled data.

Figure 6: Test separation results, using a 2-chain speaker-dependent factorial-max HMM, followed by refiltering. (See figure 4 and text for details.) (A) Original waveform of mixed utterance. (B) Original isolated male & female waveforms. (C) Estimated male and female waveforms.

The oscillatory correlation algorithm of Brown and Wang [4] has a low-level module to detect features in the correlogram and a high-level module to do grouping. Related ideas in machine vision, such as Markov networks [11] and minimum normalized cut [12], use low-level operations to define weights between pixels and then higher-level computations to group pixels together.

Acknowledgements

Thanks to Hagai Attias, Guy Brown, Geoff Hinton and Lawrence Saul for many insightful discussions about the CASA problem, and to three anonymous referees and many visitors to my poster for helpful comments, criticisms and references to work I had overlooked.

References
[1] A.S. Bregman. (1994) Auditory Scene Analysis. MIT Press.
[2] G. Brown & M. Cooke. (1994) Computational auditory scene analysis. Computer Speech and Language 8.
[3] D. Ellis. (1994) A computer implementation of psychoacoustic grouping rules. Proc. 12th Intl. Conf. on Pattern Recognition, Jerusalem.
[4] G. Brown & D.L. Wang. (2000) An oscillatory correlation framework for computational auditory scene analysis. NIPS 12.
[5] Z. Ghahramani & M.I. Jordan. (1997) Factorial hidden Markov models. Machine Learning 29.
[6] A.P. Varga & R.K. Moore. (1990) Hidden Markov model decomposition of speech and noise. IEEE Conf. Acoustics, Speech & Signal Processing (ICASSP'90).
[7] M.J.F. Gales & S.J. Young. (1996) Robust continuous speech recognition using parallel model combination. IEEE Trans. Speech & Audio Processing 4.
[8] E.A. Wan & A.T. Nelson. (1998) Removal of noise from speech using the dual EKF algorithm. IEEE Conf. Acoustics, Speech & Signal Processing (ICASSP'98).
[9] H. Attias, J.C. Platt & A. Acero. (2001) Speech denoising and dereverberation using probabilistic models. This volume.
[10] G. Cauwenberghs. (1999) Monaural separation of independent acoustical components. IEEE Symp. Circuits & Systems (ISCAS'99).
[11] W. Freeman & E. Pasztor. (1999) Markov networks for low-level vision. Mitsubishi Electric Research Laboratory Technical Report TR99-08.
[12] J. Shi & J. Malik. (1997) Normalized cuts and image segmentation. IEEE Conf. Computer Vision and Pattern Recognition (CVPR'97), Puerto Rico.
Rate-coded Restricted Boltzmann Machines for Face Recognition

Yee Whye Teh, Department of Computer Science, University of Toronto, Toronto M5S 2Z9, Canada, [email protected]

Geoffrey E. Hinton, Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, U.K., [email protected]

Abstract

We describe a neurally-inspired, unsupervised learning algorithm that builds a non-linear generative model for pairs of face images from the same individual. Individuals are then recognized by finding the highest relative probability pair among all pairs that consist of a test image and an image whose identity is known. Our method compares favorably with other methods in the literature. The generative model consists of a single layer of rate-coded, non-linear feature detectors and it has the property that, given a data vector, the true posterior probability distribution over the feature detector activities can be inferred rapidly without iteration or approximation. The weights of the feature detectors are learned by comparing the correlations of pixel intensities and feature activations in two phases: when the network is observing real data and when it is observing reconstructions of real data generated from the feature activations.

1 Introduction

Face recognition is difficult when the number of individuals is large and the test and training images of an individual differ in expression, pose, lighting or the date on which they were taken. In addition to being an important application, face recognition allows us to evaluate different kinds of algorithm for learning to recognize or compare objects, since it requires accurate representation of fine discriminative features in the presence of relatively large within-individual variations. This is made even more difficult when there are very few exemplars of each individual. We start by describing a new unsupervised learning algorithm for a restricted form of Boltzmann machine [1]. We then show how to generalize the generative model and the learning algorithm to deal with real-valued pixel intensities and rate-coded feature detectors. We then apply the model to face recognition and compare it to other methods.

2 Inference and learning in Restricted Boltzmann Machines

A Restricted Boltzmann machine (RBM) [2] is a Boltzmann machine with a layer of visible units and a single layer of hidden units, with no hidden-to-hidden or visible-to-visible connections.

Figure 1: Alternating Gibbs sampling and the terms in the learning rules of an RBM.

Because there is no explaining away [3], inference in an RBM is much easier than in a general Boltzmann machine or in a causal belief network with one hidden layer. There is no need to perform any iteration to determine the activities of the hidden units, as the hidden states, s_j, are conditionally independent given the visible states, s_i. The distribution of s_j is given by the standard logistic function:

p(s_j = 1 | s) = 1 / (1 + exp(−Σ_i w_ij s_i))   (1)

Conversely, the hidden states of an RBM are marginally dependent, so it is easy for an RBM to learn population codes in which units may be highly correlated. It is hard to do this in causal belief networks with one hidden layer because the generative model of a causal belief net assumes marginal independence. An RBM can be trained using the standard Boltzmann machine learning algorithm, which follows a noisy but unbiased estimate of the gradient of the log likelihood of the data.
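As a concrete reading of Eq. (1), the conditional inference step can be sketched as follows (hypothetical NumPy code; the weight-matrix layout is an assumption, not the authors' implementation):

    import numpy as np

    def sample_hidden(v, W, rng):
        """Sample binary hidden states given binary visible states v.

        v: (num_visible,) vector; W: (num_visible, num_hidden) weights.
        Implements p(s_j = 1 | v) = 1 / (1 + exp(-sum_i w_ij v_i)).
        """
        p = 1.0 / (1.0 + np.exp(-(v @ W)))
        return (rng.random(p.shape) < p).astype(float), p

Because the hidden units are conditionally independent given the visible states, this single vectorized step replaces the iterative settling required in a general Boltzmann machine.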
One way to implement this algorithm is to start the network with a data vector on the visible units and then to alternate between updating all of the hidden units in parallel and updating all of the visible units in parallel with Gibbs sampling. Figure 1 illustrates this process. If this alternating Gibbs sampling is run to equilibrium, there is a very simple way to update the weights so as to minimize the Kullback-Leibler divergence, KL(Q⁰ ‖ Q^∞), between the data distribution, Q⁰, and the equilibrium distribution of fantasies over the visible units, Q^∞, produced by the RBM [4]:

Δw_ij ∝ ⟨s_i s_j⟩_{Q⁰} − ⟨s_i s_j⟩_{Q^∞}   (2)

where ⟨s_i s_j⟩_{Q⁰} is the expected value of s_i s_j when data is clamped on the visible units and the hidden states are sampled from their conditional distribution given the data, and ⟨s_i s_j⟩_{Q^∞} is the expected value of s_i s_j after prolonged Gibbs sampling. This learning rule does not work well because it can take a long time to approach equilibrium and the sampling noise in the estimate of ⟨s_i s_j⟩_{Q^∞} can swamp the gradient. Hinton [1] shows that it is far more effective to minimize the difference between KL(Q⁰ ‖ Q^∞) and KL(Q¹ ‖ Q^∞), where Q¹ is the distribution of the one-step reconstructions of the data that are produced by first picking binary hidden states from their conditional distribution given the data and then picking binary visible states from their conditional distribution given the hidden states. The exact gradient of this "contrastive divergence" is complicated because the distribution Q¹ depends on the weights, but this dependence can safely be ignored to yield a simple and effective learning rule for following the approximate gradient of the contrastive divergence:

Δw_ij ∝ ⟨s_i s_j⟩_{Q⁰} − ⟨s_i s_j⟩_{Q¹}   (3)

3 Applying RBMs to face recognition

For images of faces, binary pixels are far from ideal. A simple way to increase the representational power without changing the inference and learning procedures is to imagine that each visible unit, i, has 10 replicas which all have identical weights to the hidden units. So far as the hidden units are concerned, it makes no difference which particular replicas are turned on: it is only the number of active replicas that counts. So a pixel can now have 11 different intensities. During reconstruction of the image from the hidden activities, all the replicas can share the computation of the probability, p_i, of turning on, and then we can select n replicas to be on with the binomial probability (10 choose n) p_i^n (1 − p_i)^(10−n). We actually approximated this binomial distribution by just adding a little Gaussian noise to 10 p_i and rounding. The same trick can be used for the hidden units. Eq. 3 is unaffected except that s_i and s_j are now the numbers of active replicas. The replica trick can be seen as a way of simulating a single neuron over a time interval in which it may produce multiple spikes that constitute a rate-code. For this reason we call the model "RBMrate". We assumed that the visible units can produce up to 10 spikes and the hidden units can produce up to 100 spikes. We also made two further approximations: we replaced s_i and s_j in Eq. 3 by their expected values, and we used the expected value of s_i when computing the probability of activation of the hidden units. However, we continued to use the stochastically chosen integer firing rates of the hidden units when computing the one-step reconstructions of the data, so the hidden activities cannot transmit an unbounded amount of information from the data to the reconstruction.
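A minimal sketch of one contrastive-divergence update for RBMrate follows (hypothetical NumPy code; the Gaussian noisy-rounding approximation to the binomial is the one described above, but variable names and the learning rate are assumptions):

    import numpy as np

    def cd1_update_rbmrate(v0, W, lr, rng, v_max=10, h_max=100):
        """One CD-1 step (Eq. 3) with rate-coded units in [0, cap]."""
        def rates(total_input, cap):
            p = 1.0 / (1.0 + np.exp(-total_input))           # per-replica probability
            noisy = cap * p + rng.normal(0.0, 1.0, p.shape)  # Gaussian approximation
            return np.clip(np.round(noisy), 0, cap), cap * p

        h0, h0_mean = rates(v0 @ W, h_max)      # hidden rates given the data
        v1, v1_mean = rates(h0 @ W.T, v_max)    # reconstruction uses stochastic h0
        _, h1_mean = rates(v1_mean @ W, h_max)  # hidden means from expected recon
        # <s_i s_j>_{Q0} - <s_i s_j>_{Q1}, using expected values as in the text
        W += lr * (np.outer(v0, h0_mean) - np.outer(v1_mean, h1_mean))
        return W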
A simple way to use RBMrate for face recognition is to train a single model on the training set, and to identify a face by finding the gallery image that produces a hidden activity vector that is most similar to the one produced by the face. This is how eigenfaces are used for recognition, but it does not work well because it does not take into account the fact that some variations across faces are important for recognition, while some variations are not. To correct this, we instead trained an RBMrate model on pairs of different images of the same individual, and then we used this model of pairs to decide which gallery image is best paired with the test image. To account for the fact that the model likes some individual face images more than others, we define the fit between two faces I₁ and I₂ as G(I₁, I₂) + G(I₂, I₁) − G(I₁, I₁) − G(I₂, I₂), where the goodness score G(v₁, v₂) is the negative free energy of the image pair v₁, v₂ under the model. Weight-sharing is not used, hence G(v₁, v₂) ≠ G(v₂, v₁). However, to preserve symmetry, each pair of images of the same individual v₁, v₂ in the training set has a reversed pair v₂, v₁ in the set. We trained the model with 100 hidden units on 1000 image pairs (500 distinct pairs) for 2000 iterations in batches of 100, with a learning rate of 2.5 × 10⁻⁶ for the weights, a learning rate of 5 × 10⁻⁶ for the biases, and a momentum of 0.95. One advantage of eigenfaces over correlation is that once the test image has been converted into a vector of eigenface activations, comparisons of test and gallery images can be made in the low-dimensional space of eigenface activations rather than the high-dimensional space of pixel intensities. The same applies to our face-pair network, as the goodness score of an image pair is a simple function of the total input received by each hidden unit from each image. The total inputs from each gallery image can be precomputed and stored, while the total inputs from a test image only need to be computed once for comparisons with all gallery images.

4 The FERET database

Our version of the FERET database contained 1002 frontal face images of 429 individuals taken over a period of a few years under varying lighting conditions. Of these images, 818 are used as both the gallery and the training set and the remaining 184 are divided into four disjoint test sets.

The Δexpression test set contains 110 images of different individuals. These individuals all have another image in the training set that was taken with the same lighting conditions at the same time but with a different expression. The training set also includes a further 244 pairs of images that differ only in expression.

Figure 2: Images are normalized in five stages: a) original image; b) locate centers of eyes by hand; c) rotate image; d) crop image and subsample at 56 × 56 pixels; e) mask out all of the background and some of the face, leaving 1768 pixels in an oval shape; f) equalize the intensity histogram; g) some examples of processed images.

The Δdays test set contains 40 images that come from 20 individuals. Each of these individuals has two images from the same session in the training set and two images taken in a session 4 days later or earlier in the test set. A further 28 individuals were photographed 4 days apart and all 112 of these images are in the training set.

The Δmonths test set is just like the Δdays test set except that the time between sessions was at least three months and different lighting conditions were present in the two sessions.
This set contains 20 images of 10 individuals. A further 36 images of 9 more individuals were included in the training set.

The Δglasses test set contains 14 images of 7 different individuals. Each of these individuals has two images in the training set that were taken in another session on the same day. The training and test pairs for an individual differ in that one pair has glasses and the other does not. The training set includes a further 24 images, half with glasses and half without, from 6 more individuals.

The images include the whole head, parts of the shoulder, and background. Instead of working with whole images, which contain much irrelevant information, we worked with face images that were normalized as shown in figure 2. Masking out all of the background inevitably loses the contour of the face, which contains much discriminative information. The histogram equalization step removes most lighting effects, but it also removes some relevant information like the skin tone. For the best performance, the contour shape and skin tone would have to be used as additional sources of discriminative information.

5 Comparative results

We compared RBMrate with four popular face recognition methods. The first and simplest is correlation, which returns the similarity score as the angle between two images represented as vectors of pixel intensities. This performed better than using the Euclidean distance as a score. The second method is eigenfaces [5], which first projects the images onto the principal component subspaces, then returns the similarity score as the angle between the projected images. The third method is fisherfaces [6]. Instead of projecting the images onto the subspace of the principal components, which maximizes the variance among the projected images, fisherfaces projects the images onto a subspace which, at the same time, maximizes the between-individual variances and minimizes the within-individual variances in the training set. The final method, which we shall call δppca, is proposed by Moghaddam et al. [7]. This method models differences between images of the same individual as a PPCA [8, 9], and differences between images of different individuals as another PPCA. Then, given a difference of two images, it returns as the similarity score the likelihood ratio of the difference image under the two PPCA models. It was the best performing algorithm in the September 1996 FERET test [10].

Figure 3: Error rates of all methods on all test sets. The bars in each group correspond, from left to right, to the rank-1, rank-2, rank-4, rank-8 and rank-16 error rates. The rank-n error rate is the percentage of test images where the n most similar gallery images are all incorrect.

For eigenfaces, we used 199 principal components, omitting the first principal component, as we determined manually that it encodes simply for lighting conditions. This improved the recognition performance on all the test sets except for Δexpression. We used a subspace of dimension 200 for fisherfaces, while we used 10 and 30 dimensional PPCAs for the within-class and between-class models of δppca respectively. These are the same numbers used by Moghaddam et al. and give the best results in our simulations. The number of dimensions or hidden units used by each method was optimized for that particular method for best performance. Figure 3 shows the error rates of all five methods on the test sets. The results were averaged over 10 random partitions of the dataset to improve statistical significance. Correlation and eigenfaces perform poorly on Δexpression, probably because they do not attempt to ignore the within-individual variations, whereas the other methods do. All the models did very poorly on the Δmonths test set, which is unfortunate as this is the test set that is most like real applications. RBMrate performed best on Δexpression, fisherfaces is best on Δdays and Δglasses, while eigenfaces is best on Δmonths. These results show that RBMrate is competitive with, but does not perform better than, other methods. Figure 4 shows that after our preprocessing, human observers also have great difficulty with the Δmonths test set, probably because the task is intrinsically difficult and is made even harder by the loss of contour and skin tone information combined with the misleading oval contour produced by masking out all of the background.

Figure 4: On the left is a test image from Δmonths and on the right are the 8 most similar images returned by RBMrate. Most human observers cannot find the correct match within these 8.

Figure 5: Example features learned by RBMrate. Each pair of RFs constitutes a feature. Top half: with unconstrained weights; bottom half: with non-negative weight constraints.

6 Receptive fields learned by RBMrate

The top half of figure 5 shows the weights of a few of the hidden units after training. All the units encode global features, probably because the image normalization ensures that there are strong long-range correlations in pixel intensities. The maximum size of the weights is 0.01765, with most weights having magnitudes smaller than 0.005. Note, however, that the hidden unit activations range from 0 to 100. On the left are 4 units exhibiting interesting features and on the right are 4 units chosen at random. The top unit of the first column seems to be encoding the presence of a mustache in both faces. The bottom unit seems to be coding for prominent right eyebrows in both faces. Note that these are facial features which often remain constant across images of the same individual. In the second column are two features which seem to encode for different facial expressions in the two faces. The right side of the top unit encodes a smile while the left side is expressionless. This is reversed in the bottom unit. So the network has discovered some features which are fairly constant across images in the same class, and some features which can differ substantially within a class. Inspired by [11], we tried to enforce local features by restricting the weights to be non-negative. This is achieved by resetting negative weights to zero after each weight update. The bottom half of figure 5 shows some of the hidden receptive fields learned. Except for the 4 features on the left, all other features are local and code for features like mouth shape changes (third column) and eyes and cheeks (fourth column). The 4 features on the left are much more global and clearly capture the fact that the direction of the lighting can differ for two images of the same person. Unfortunately, constraining the weights to be non-negative strongly limits the representational power of RBMrate and makes it worse than all the other methods on all the test sets.
7 Conclusions

We have introduced a new method for face recognition based on a non-linear generative model. The generative model can be very complex, yet retains the efficiency required for applications. Performance on the FERET database is comparable to popular methods. However, unlike other methods based on linear models, there is plenty of room for further development, using prior knowledge to constrain the weights or additional layers of hidden units to model the correlations of feature detector activities. These improvements should translate into improvements in the rate of recognition.

Acknowledgements

We thank Jonathon Phillips for graciously providing us with the FERET database, the referees for useful comments, and the Gatsby Charitable Foundation for funding.

References

[1] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, University College London, 2000.
[2] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[3] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA, 1988.
[4] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[5] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[6] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces versus fisherfaces: recognition using class specific linear projection. In European Conference on Computer Vision, 1996.
[7] B. Moghaddam, W. Wahid, and A. Pentland. Beyond eigenfaces: probabilistic matching for face recognition. In IEEE International Conference on Automatic Face and Gesture Recognition, 1998.
[8] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):696-710, 1997.
[9] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Technical Report NCRG/97/010, Neural Computing Research Group, Aston University, 1997.
[10] P. J. Phillips, H. Moon, P. Rauss, and S. A. Rizvi. The FERET September 1996 database and evaluation procedure. In International Conference on Audio and Video-based Biometric Person Authentication, 1997.
[11] D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401, October 1999.
Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra

Paul Hayton, Department of Engineering Science, University of Oxford, UK, [email protected]

Bernhard Schölkopf, Microsoft Research, 1 Guildhall Street, Cambridge, UK, [email protected]

Lionel Tarassenko, Department of Engineering Science, University of Oxford, UK, [email protected]

Paul Anuzis, Rolls-Royce Civil Aero-Engines, Derby, UK

Abstract

A system has been developed to extract diagnostic information from jet engine carcass vibration data. Support Vector Machines applied to novelty detection provide a measure of how unusual the shape of a vibration signature is, by learning a representation of normality. We describe a novel method for including information from a second class in Support Vector Machine novelty detection, and give results from the application to jet engine vibration analysis.

1 Introduction

Jet engines have a number of rigorous pass-off tests before they can be delivered to the customer. The main test is a vibration test over the full range of operating speeds. Vibration gauges are attached to the casing of the engine and the speed of each shaft is measured using a tachometer. The engine on the test bed is slowly accelerated from idle to full speed and then gradually decelerated back to idle. As the engine accelerates, the rotation frequency of the two (or three) shafts increases and so does the frequency of the vibrations caused by the shafts. A tracked order is the amplitude of the vibration signal in a narrow frequency band centered on a harmonic of the rotation frequency of a shaft, measured as a function of engine speed. It tracks the frequency response of the engine to the energy injected by the rotating shaft. Although there are usually some harmonics present, most of the energy in the vibration spectrum is concentrated in the fundamental tracked orders. These therefore constitute the "vibration signature" of the jet engine under test. It is very important to detect departures from the normal or expected shapes of these tracked orders, as this provides very useful diagnostic information (for example, for the identification of out-of-balance conditions). The detection of such abnormalities is ideally suited to the novelty detection paradigm for several reasons. Usually, there are far fewer examples of abnormal shapes than normal ones, and often there may only be a single example of a particular type of abnormality in the available database. More importantly, the engine under test may show up a type of abnormality which has never been seen before but which should not be missed. This is especially important in our current work, where we are adapting the techniques developed for pass-off tests to in-flight monitoring. With novelty detection, we first of all learn a description of normal vibration shapes by including only examples of normal tracked orders in the training data. Abnormal shapes in test engines are subsequently identified by testing for novelty against the description of normality. In our previous work [2], we investigated the vibration spectra of a two-shaft jet engine, the Rolls-Royce Pegasus. In the available database, there were vibration spectra recorded from 52 normal engines (the training data) and from 33 engines with one or more unusual vibration features (the test data). The shape of the tracked orders was encoded as a low-dimensional vector by calculating a weighted average of the vibration amplitude over six different speed ranges, giving an 18-D vector for three tracked orders (a sketch of this encoding appears below).
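A hypothetical sketch of that encoding step follows; the speed-range boundaries and the triangular weighting are illustrative assumptions, since the paper does not specify them:

    import numpy as np

    def encode_tracked_orders(speeds, amplitudes, n_ranges=6):
        """Encode tracked-order shapes as a low-dimensional vector.

        speeds: (T,) engine speeds; amplitudes: (T, n_orders) tracked-order
        amplitudes at those speeds. Returns a vector of length
        n_ranges * n_orders (18-D for three tracked orders).
        """
        edges = np.linspace(speeds.min(), speeds.max(), n_ranges + 1)
        features = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (speeds >= lo) & (speeds <= hi)
            centre, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
            w = 1.0 - np.abs(speeds[sel] - centre) / half   # triangular weights
            features.append((w[:, None] * amplitudes[sel]).sum(0)
                            / max(w.sum(), 1e-12))
        return np.concatenate(features)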
With so few engines available, the K-means clustering algorithm (with K = 4) was used to construct a very simple model of normality, following component-wise normalisation of the 18-D vectors. The novelty of the vibration signature for a test engine was assessed as the shortest distance to one of the kernel centres in the clustering model of normality (each distance being normalised by the width associated with that kernel). When cumulative distributions of novelty scores were plotted both for normal (training) engines and test engines, there was little overlap found between the two distributions [2]. A significant shortcoming of the method, however, is the inability to rank engines according to novelty, since the shortest normalised distance is evaluated with respect to different cluster centres for different engines. In this paper, we re-visit the problem but for a new engine, the RB211-535. We argue that the SVM paradigm is ideal for novelty detection, as it provides an elegant distribution of normality, a direct indication of the patterns on the boundary of normality (the support vectors) and, perhaps most importantly, a ranking of "abnormality" according to distance to the separating hyperplane in feature space.

2 Support Vector Machines for Novelty Detection

Suppose we are given a set of "normal" data points X = {x_1, ..., x_ℓ}. In most novelty detection problems, this is all we have; however, in the following we shall develop an algorithm that is slightly more general in that it can also take into account some examples of abnormality, Z = {z_1, ..., z_t}. Our goal is to construct a real-valued function which, given a previously unseen test point x, characterizes the "X-ness" of the point x, i.e. which takes large values for points similar to those in X. The algorithm that we shall present below will return such a function, along with a threshold value, such that a prespecified fraction of X will lead to function values above threshold. In this sense we are estimating a region which captures a certain probability mass. The present approach employs two ideas from support vector machines [6] which are crucial for their fine generalization performance even in high-dimensional tasks: maximizing a margin, and nonlinearly mapping the data into some feature space F endowed with a dot product. The latter need not be the case for the input domain X, which may be a general set. The connection between the input domain and the feature space is established by a feature map Φ : X → F, i.e. a map such that some simple kernel [1, 6]

k(x, y) = (Φ(x) · Φ(y)),   (1)

such as the Gaussian

k(x, y) = e^(−‖x−y‖²/c),   (2)

provides a dot product in the image of Φ. In practice, we need not necessarily worry about Φ, as long as a given k satisfies certain positivity conditions [6]. As F is a dot product space, we can use tools of linear algebra and geometry to construct algorithms in F, even if the input domain X is discrete. Below, we derive our results in F, using the shorthands

x_i := Φ(x_i), z_n := Φ(z_n),   (3)
X = {x_1, ..., x_ℓ}, Z = {z_1, ..., z_t}   (4)

for the feature-space images of the training data. Indices i and j are understood to range over 1, ..., ℓ (in compact notation: i, j ∈ [ℓ]); similarly, n, p ∈ [t]. Boldface Greek letters denote ℓ-dimensional vectors whose components are labelled using normal typeface. In analogy to an algorithm recently proposed for the estimation of a distribution's support [5], we seek to separate X from the centroid of Z with a large-margin hyperplane committing few training errors.
Projections on the normal vector of the hyperplane then characterize the "X-ness" of test points, and the area where the decision function takes the value 1 can serve as an approximation of the support of X. While X is the set of normal examples, the (possibly empty) set Z thus only plays the role of, in some weak and possibly imprecise sense, modeling what the unknown "other" examples might look like. The decision function is found by minimizing a weighted sum of a support vector type regularizer and an empirical error term depending on an overall margin variable ρ and individual errors ξ_i:

min_{w ∈ F, ξ ∈ R^ℓ, ρ ∈ R}  (1/2)‖w‖² + (1/(νℓ)) Σ_i ξ_i − ρ   (5)

subject to  (w · (x_i − (1/t) Σ_n z_n)) ≥ ρ − ξ_i,  ξ_i ≥ 0.   (6)

The precise meaning of the parameter ν governing the trade-off between the regularizer and the training error will become clear later. Since nonzero slack variables ξ_i are penalized in the objective function, we can expect that if w and ρ solve this problem, then the decision function

f(x) = sgn((w · (x − (1/t) Σ_n z_n)) − ρ)   (7)

will be positive for many examples x_i contained in X, while the SV type regularization term ‖w‖ will still be small. This can be shown to correspond to a large margin of separation from (1/t) Σ_n z_n. We next compute a dual form of this optimization problem. The details of the calculation, which uses standard techniques of constrained optimization, can be found in [4]. We introduce a Lagrangian and set the derivatives with respect to w equal to zero, yielding in particular

w = Σ_i α_i (x_i − (1/t) Σ_n z_n).   (8)

All patterns {x_i : i ∈ [ℓ], α_i > 0} are called Support Vectors. The expansion (8) turns the decision function (7) into a form which only depends on dot products, f(x) = sgn((Σ_i α_i (x_i − (1/t) Σ_n z_n)) · (x − (1/t) Σ_n z_n) − ρ). By multiplying out the dot products, we obtain a form that can be written as a nonlinear decision function on the input domain X in terms of a kernel (1) (cf. (3)). A short calculation yields

f(x) = sgn(Σ_i α_i k(x_i, x) − (1/t) Σ_n k(z_n, x) + (1/t²) Σ_{n,p} k(z_n, z_p) − (1/t) Σ_{i,n} α_i k(z_n, x_i) − ρ).

In the argument of the sgn, only the first two terms depend on x; therefore we may absorb the remaining terms into the constant ρ, which we have not fixed yet. To compute ρ in the final form of the decision function

f(x) = sgn(Σ_i α_i k(x_i, x) − (1/t) Σ_n k(z_n, x) − ρ),   (9)

we employ the Karush-Kuhn-Tucker (KKT) conditions of the optimization problem [6, e.g.]. They state that for points x_i where 0 < α_i < 1/(νℓ), the inequality constraints (6) become equalities (note that in general, α_i ∈ [0, 1/(νℓ)]), and the argument of the sgn in the decision function should equal 0, i.e. the corresponding x_i sits exactly on the hyperplane of separation. The KKT conditions also imply that only those points x_i can have a nonzero α_i for which the first inequality constraint in (6) is precisely met; therefore the support vectors x_i with α_i > 0 will often form but a small subset of X.

Substituting (8) (the derivative of the Lagrangian by w) and the corresponding conditions for ξ and ρ into the Lagrangian, we can eliminate the primal variables to get the dual problem. A short calculation shows that it consists of minimizing the quadratic form

W(α) = (1/2) Σ_{ij} α_i α_j (k(x_i, x_j) + q − q_j − q_i),   (10)

where q = (1/t²) Σ_{n,p} k(z_n, z_p) and q_j = (1/t) Σ_n k(x_j, z_n), subject to the constraints

0 ≤ α_i ≤ 1/(νℓ),  Σ_i α_i = 1.   (11)

This convex quadratic program can be solved with standard quadratic programming tools. Alternatively, one can employ the SMO algorithm described in [3], which was found to approximately scale quadratically with the training set size.
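Once the dual coefficients α and the offset ρ are known, evaluating the decision function (9) reduces to kernel sums; a hypothetical NumPy sketch (the QP (10)-(11) is assumed solved elsewhere):

    import numpy as np

    def gaussian_kernel(A, B, c):
        """k(x, y) = exp(-||x - y||^2 / c), evaluated between rows of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / c)

    def decision_value(x, X, Z, alpha, rho, c):
        """Argument of the sgn in Eq. (9); positive means 'normal' (X-like)."""
        x = np.atleast_2d(x)
        s = gaussian_kernel(x, X, c) @ alpha         # sum_i alpha_i k(x_i, x)
        s -= gaussian_kernel(x, Z, c).mean(axis=1)   # (1/t) sum_n k(z_n, x)
        return s - rho

Engines can then be ranked by the negative of this value, which grows with distance on the abnormal side of the hyperplane.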
To illustrate the idea presented in this section, figure 1 shows a 2D example of separating the data from the mean of another data set in feature space.

Figure 1: Separating one class of data from the mean of a second data set. The first class is a mixture of three Gaussians; the SVM algorithm is used to find the hyperplane in feature space that separates the data from the second set (another Gaussian, shown as black dots). The image intensity represents the SVM output value, which is the measure of novelty.

We next state a few theoretical results, beginning with a characterization of the influence of ν. To this end, first note that the constraints (11) rule out solutions where ν > 1, as in that case the α_i cannot sum up to 1. Negative values of ν are ruled out, too, since they would amount to encouraging (rather than penalizing) training errors in (5). Therefore, in the primal problem (5) only ν ∈ (0, 1] makes sense. We shall now explain that ν actually characterizes how many points of X are allowed to lie outside the region where the decision function is positive. To this end, we introduce the term outlier to denote points x_i that have a nonzero slack variable ξ_i, i.e. points that lie outside of the estimated region. By the KKT conditions, all outliers are also support vectors; however, there can be support vectors (sitting exactly on the margin) that are not outliers.

Proposition 1 (ν-property) Assume the solution of (5) satisfies ρ ≠ 0. The following statements hold: (i) ν is an upper bound on the fraction of outliers. (ii) ν is a lower bound on the fraction of SVs. (iii) Suppose the data (4) were generated independently from a distribution P(x) which does not contain discrete components. Suppose, moreover, that the kernel is analytic and non-constant. With probability 1, asymptotically, ν equals both the fraction of SVs and the fraction of outliers.

The proof can be found in [4]. We next state another desirable theoretical result:

Proposition 2 (Resistance [3]) Local movements of outliers parallel to w do not change the hyperplane.

Essentially, this result is due to the fact that the errors ξ_i enter the objective function only linearly. To determine the hyperplane, we need to find the (constrained) extremum of the objective function, and in finding the extremum, the derivatives are what counts. For the linear error term, however, those are constant, so they do not depend on how far away from the hyperplane an error point lies. We conclude this section by noting that if Z is empty, the algorithm is trying to separate the data from the origin in F, and both the decision function and the optimization problem reduce to what is described in [5].

3 Application of SVM to Jet Engine Pass-off Tests

The Support Vector Machine algorithm for novelty detection is applied to the pass-off data from a set of 162 Rolls-Royce jet engines. The shape of the tracked order of interest is encoded by calculating a weighted average of the vibration amplitude over ten speed ranges, thereby generating a 10-D shape vector. The available data was split into the following three sets:

- 99 normal engines to be used as training data;
- 40 normal engines to be used as validation data;
- 23 engines labelled as having at least one abnormal aspect in their vibration signature (the "test" data).

Using the training dataset, the SVM algorithm finds the hyperplane that separates the normal data from the origin in feature space with the largest margin.
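As an aside, when Z is empty this is exactly the one-class SVM of [5], which is available in standard libraries; a hypothetical sketch of training on the 10-D shape vectors (the file name and the parameter values are assumptions, with gamma = 1/c corresponding to the kernel of equation (2)):

    import numpy as np
    from sklearn.svm import OneClassSVM

    train = np.load("normal_engine_shapes.npy")   # hypothetical (99, 10) array
    model = OneClassSVM(kernel="rbf", gamma=1.0 / 40.0, nu=0.1).fit(train)
    novelty = -model.decision_function(train)     # higher = more unusual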
The number of support vectors gives an indication of how well the algorithm is generalising (if all data points were support vectors, the algorithm would have memorized the data). A Gaussian kernel was used with a width c = 40.0 in equation (2), which was chosen by starting with a small kernel width (so that the algorithm memorizes the data), increasing the width, and stopping when similar results are obtained on the training and validation data. Cumulative novelty distributions are plotted for two different values of ν and these are shown in figure 2. The curves show a slight overlap between the normal and test engines. Although it is not given here, a ranking of the engines according to their novelty is also provided to the Rolls-Royce test engineers.

Figure 2: Cumulative novelty distributions for two different values of ν: (a) ν = 0.1; (b) ν = 0.2. The curves show that there is a slight overlap in the data; for ν = 0.1, there are 11 validation engines over the SVM decision boundary and 2 test engines inside the boundary.

Separating the normal engines from the test engines. In a retrospective analysis such as described in this paper (for which the test engines with unusual vibration signatures have already been identified as such by the Rolls-Royce experts), the SVM algorithm can be re-run to find the hyperplane that separates the normal data from the mean of the test data in feature space with the largest margin (instead of separating from the origin). The algorithm is trained on the 99 training engines and 22 of the 23 test engines. Each test engine is left out in turn and the algorithm re-trained to compute its novelty. Cumulative distributions are again plotted (see figure 3) and these show an improved separation between the two sets of engines. It should be noted, however, that the improvement is less for the validation engines than for the training engines. Nevertheless, there is an improvement for the validation engines, seen from the higher intersection of the distribution with the axis.

Figure 3: Cumulative novelty distributions showing the variation of novelty with the number of engines for (a) the training data versus the test data (each test engine omitted from the training phase in turn to compute its novelty) and (b) the validation data versus the test data.

4 Discussion

This paper has presented a novel application of Support Vector Machines and introduced a method for including information from a second data set when considering novelty detection. The results on the jet engine data show very good separation between normal and test engines. We believe Support Vector Machines are an ideal framework for novelty detection and indeed, we have obtained better results than with our previous clustering-based algorithms for detecting novel jet engine signatures. The present work builds on a previous algorithm for estimating a distribution's support [5]. That algorithm, separating the data from the origin in feature space, suffered from the drawback that the origin played a special role. One way to think of it is as a prior on where, in a novelty detection context, the unknown "other" class lies. The present work alleviates this problem by allowing for the possibility to separate from a point inferred from the data, either from the same class, or from some other data.
There is a concern that one could put forward about one of the variants of the presently proposed approach, namely about the case where X and Z are disjoint, and we are separating X from Z's centroid: why not actually train a full binary classifier separating X from all examples from Z, rather than just from its mean? Indeed, there might be situations where this is appropriate. More specifically, whenever Z is representative of the instances of the other class that we expect to see in the future, then a binary classification is certainly preferable. However, there can be situations where Z is not representative of the other class, for instance due to nonstationarity. Z may even consist only of artificial examples. In this situation, the only real training examples are the positive ones. In this case, separating the data from the mean of some artificial, or non-representative, examples provides a way of taking into account some information from the other class which might work better than simply separating the positive data from the origin.

The philosophy behind our approach is the one advocated by [6]. If you are trying to solve a learning problem, do it directly, rather than solving a more general problem along the way. Applied to the estimation of a distribution's support, this means: do not first estimate a density and then threshold it to get an estimate of the support.

Acknowledgments. Thanks to John Platt, John Shawe-Taylor, Alex Smola and Bob Williamson for helpful discussions.

References

[1] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152, Pittsburgh, PA, July 1992. ACM Press.
[2] A. Nairac, N. Townsend, R. Carr, S. King, P. Cowley, and L. Tarassenko. A system for the analysis of jet engine vibration data. Integrated Computer-Aided Engineering, 6:53-65, 1999.
[3] B. Schölkopf, J. Platt, J. Shawe-Taylor, A.J. Smola, and R.C. Williamson. Estimating the support of a high-dimensional distribution. TR MSR 99-87, Microsoft Research, Redmond, WA, 1999.
[4] B. Schölkopf, J. Platt, and A.J. Smola. Kernel method for percentile feature extraction. TR MSR 2000-22, Microsoft Research, Redmond, WA, 2000.
[5] B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, and J. C. Platt. Support vector method for novelty detection. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 582-588. MIT Press, 2000.
[6] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995.
Using Free Energies to Represent Q-values in a Multiagent Reinforcement Learning Task

Brian Sallans
Department of Computer Science
University of Toronto
Toronto M5S 2Z9 Canada
[email protected]

Geoffrey E. Hinton
Gatsby Computational Neuroscience Unit
University College London
London WC1N 3AR, U.K.
[email protected]

Abstract

The problem of reinforcement learning in large factored Markov decision processes is explored. The Q-value of a state-action pair is approximated by the free energy of a product of experts network. Network parameters are learned on-line using a modified SARSA algorithm which minimizes the inconsistency of the Q-values of consecutive state-action pairs. Actions are chosen based on the current value estimates by fixing the current state and sampling actions from the network using Gibbs sampling. The algorithm is tested on a co-operative multi-agent task. The product of experts model is found to perform comparably to table-based Q-learning for small instances of the task, and continues to perform well when the problem becomes too large for a table-based representation.

1 Introduction

Online Reinforcement Learning (RL) algorithms try to find a policy which maximizes the expected time-discounted reward provided by the environment. They do this by performing sample backups to learn a value function over states or state-action pairs [1]. If the decision problem is Markov in the observed states, then the optimal value function over state-action pairs (the Q-function) yields all of the information required to find the optimal policy for the decision problem. For example, when the Q-function is represented as a table, the optimal action for a given state can be found simply by searching the row of the table corresponding to that state.

1.1 Factored Markov Decision Processes

In many cases the dimensionality of the problem makes a table representation impractical, so a more compact representation that makes use of the structure inherent in the problem is required. In a co-operative multi-agent system, for example, it is natural to represent both the state and action as sets of variables (one for each agent). We expect that the mapping from the combined states of all the agents to the combined actions of all the agents is not arbitrary: Given an individual agent's state, that agent's action might be largely independent of the other agents' exact states and actions, at least for some regions of the combined state space. We expect that a factored representation of the Q-value function will be appropriate for two reasons: The original representation of the combined states and combined actions is factored, and the ways in which the optimal actions of one agent are dependent on the states and actions of other agents might be well captured by a small number of "hidden" factors rather than the exponential number required to express arbitrary mappings.

1.2 Actor-Critic Architectures

If a non-linear function approximator is used to model the Q-function, then it is difficult and time consuming to extract the policy directly from the Q-function because a non-linear optimization must be solved for each action choice. One solution, called an actor-critic architecture, is to use a separate function approximator to model the policy (i.e. to approximate the non-linear optimization) [2, 3]. This has the advantage of being fast, and allows us to explicitly learn a stochastic policy, which can be advantageous if the underlying problem is not strictly Markov [4].
However, a specific parameterized family of policies must be chosen a priori. Instead we present a method where the Q-value of a state-action pair is represented (up to an additive constant) by the negative free-energy, -F, of the state-action pair under a non-causal graphical model. The graphical model is a product of experts [5] which has two very useful properties: Given a state-action pair, the exact free energy is easily computed, and the derivative of this free energy w.r.t. each parameter of the network is also very simple. The model is trained to minimize the inconsistency between the free-energy of a state-action pair and the discounted free energy of the next state-action pair, taking into account the immediate reinforcement. After training, a good action for a given state can be found by clamping the state and drawing a sample of the action variables using Gibbs sampling [6]. Although finding optimal actions would still be difficult for large problems, selecting an action with a probability that is approximately proportional to exp(-F) can be done with a modest number of iterations of Gibbs sampling.

1.3 Markov Decision Processes

We will concentrate on finite, factored, Markov decision processes (factored MDPs), in which each state and action is represented as a set of discrete variables. Formally, a factored MDP consists of the set { {S_α}_{α=1..M}, {A_β}_{β=1..N}, {s⁰_α}_{α=1..M}, P, P_r }, where: S_α is the set of possible values for state variable α; A_β is the set of possible values for action variable β; s⁰_α is the initial value for state variable α; P is a transition distribution P(s^{t+1} | s^t, a^t); and P_r is a reward distribution P(r^t | s^t, a^t, s^{t+1}). A state is an M-tuple and an action is an N-tuple. The goal of solving an MDP is to find a policy, which is a sequence of (possibly stochastic) mappings π_t : S_1 × S_2 × ... × S_M → A_1 × A_2 × ... × A_N which maximize the total expected reward received over the course of the task:

⟨R_t⟩_π = ⟨r_t + γ r_{t+1} + ... + γ^{T-t} r_T⟩_π    (1)

where γ is a discount factor and ⟨·⟩_π denotes the expectation taken with respect to policy π_t. We will focus on the case when the policy is stationary: π_t is identical for all t.

2 Approximating Q-values with a Product of Experts

As the number of state and action variables increases, a table representation quickly becomes intractable. We represent the value of a state and action as the negative free-energy (up to a constant) under a product of experts model (see Figure 1(a)). With a product of experts, the probability assigned to a state-action pair, (s, a), is just the (normalized) product of the probabilities assigned to (s, a) under each of the individual experts:

P(s, a | θ_1, ..., θ_K) = Π_{k=1}^{K} p_k(s, a | θ_k) / Σ_{(s',a')} Π_k p_k(s', a' | θ_k)    (2)

where {θ_1, ..., θ_K} are the parameters of the K experts and (s', a') indexes all possible state-action pairs.

Figure 1: a) The Boltzmann product of experts. The estimated Q-value (up to an additive constant) of a setting of the state and action units is found by holding these units fixed and computing the free energy of the network. Actions are selected by alternating between updating all of the hidden units in parallel and updating all of the action units in parallel, with the state units held constant. b) A multinomial state or action variable is represented by a set of "one-of-n" binary units in which exactly one is on.
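The normalization in Eq. (2) runs over every joint state-action pair, so it can only be evaluated by brute force on very small problems. The following sketch makes that concrete; the representation of each expert as a plain scoring function is an illustrative assumption, not the paper's implementation.

```python
import itertools
import numpy as np

def poe_probability(s, a, experts, all_states, all_actions):
    """Normalized product-of-experts probability of a state-action pair (Eq. 2).

    experts: list of functions p_k(s, a) returning unnormalized scores.
    The partition function is computed by brute force, which is only
    feasible when the joint state-action space is tiny.
    """
    def score(s_, a_):
        return np.prod([p_k(s_, a_) for p_k in experts])

    Z = sum(score(s_, a_)
            for s_, a_ in itertools.product(all_states, all_actions))
    return score(s, a) / Z
```

The free-energy formulation developed next sidesteps this intractable sum: the free energy (and hence the Q-value, up to a constant) can be computed without ever evaluating the partition function.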
In the following, we will assume that there are an equal number of state and action variables (i.e. M = N), and that each state or action variable has the same arity (∀α, β: |S_α| = |S_β| and |A_α| = |A_β|). These assumptions are appropriate, for example, when there is one state and action variable for each agent in a multi-agent task. Extension to the general case is straightforward. In the following, β will index agents. Many kinds of "experts" could be used while still retaining the useful properties of the PoE. We will focus on the case where each expert is a single binary sigmoid unit because it is particularly suited to the discrete tasks we consider here. Each agent's (multinomial) state or action is represented using a "one-of-N" set of binary units which are constrained so that exactly one of them is on. The product of experts is then a bipartite "Restricted Boltzmann Machine" [5]. We use s_βi to denote agent β's ith state and a_βj to denote its jth action. We will denote the binary latent variables of the "experts" by h_k (see Figure 1(b)). For a state s = {s_βi} and an action a = {a_βj}, the free energy is given by the expected energy given the posterior distribution of the hidden units minus the entropy of this posterior distribution. This is simple to compute because the hidden units are independent in the posterior distribution:

F(s, a) = - Σ_{k=1}^{K} Σ_{β=1}^{N} ( Σ_{i=1}^{|S|} w_{βik} s_{βi} h_k + Σ_{j=1}^{|A|} u_{βjk} a_{βj} h_k )
          - Σ_{β=1}^{N} ( Σ_{i=1}^{|S|} b_{βi} s_{βi} + Σ_{j=1}^{|A|} b_{βj} a_{βj} )
          - Σ_{k=1}^{K} b_k h_k
          + Σ_{k=1}^{K} [ h_k log h_k + (1 - h_k) log(1 - h_k) ] - C_F    (3)

where w_{βik} is the weight from the kth expert to binary state variable s_βi; u_{βjk} is the weight from the kth expert to binary action variable a_βj; b_k, b_βi and b_βj are biases; and

h_k = σ( Σ_{β=1}^{N} ( Σ_{i=1}^{|S|} w_{βik} s_{βi} + Σ_{j=1}^{|A|} u_{βjk} a_{βj} ) + b_k )    (4)

is the expected value of each expert given the data, where σ(x) = 1/(1 + e^{-x}) denotes the logistic function. C_F is an additive constant equal to the log of the partition function. The first two terms of (3) correspond to an unnormalized negative log-likelihood, and the third to the negative entropy of the distribution over the hidden units given the data. The free energy can be computed tractably because inference is tractable in a product of experts: under the product model each expert is independent of the others given the data. We can efficiently compute the exact free energy of a state and action under the product model, up to an additive constant. The Q-function will be approximated by the negative free-energy (or goodness), without the constant:

Q(s, a) ≈ -F(s, a) + C_F    (5)

2.1 Learning the Parameters

The parameters of the model must be adjusted so that the goodness of a state-action pair under the product model approximates its actual Q-value. This is done with a modified SARSA learning rule designed to minimize the Bellman error [7, 8]. If we consider a delta-rule update where the target for input (s^t, a^t) is

r_t + γ Q(s^{t+1}, a^{t+1})    (6)

then the update for w_{βik} (for example) is given by:

Δw_{βik} ∝ ( r_t + γ Q(s^{t+1}, a^{t+1}) - Q(s^t, a^t) ) ∂Q(s^t, a^t)/∂w_{βik}    (7)

The other weights and biases are updated similarly. Although there is no proof of convergence for this learning rule, it works well in practice even though it ignores the effect of changes in w_{βik} on Q(s^{t+1}, a^{t+1}).
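Because Eq. (3) only requires a single upward pass through the network, both the goodness -F and the gradients in Eq. (7) are cheap to compute. The sketch below implements the free energy and the SARSA-style delta-rule update for a Restricted Boltzmann Machine over one-of-N coded state and action units; the flat indexing (state and action units concatenated into one visible vector) and the learning-rate handling are implementation assumptions, not details from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def free_energy(v, W, b_v, b_h):
    """Free energy of Eq. (3), up to the constant C_F.

    v   : visible vector = one-of-N coded state units, then action units
    W   : (K, D) weights from the D visible units to the K hidden experts
    b_v : visible biases; b_h : hidden biases
    """
    h = sigmoid(W @ v + b_h)                    # expected experts, Eq. (4)
    eps = 1e-12                                 # guards log(0)
    expected_energy = -(h @ (W @ v)) - b_v @ v - b_h @ h
    neg_entropy = np.sum(h * np.log(h + eps) + (1 - h) * np.log(1 - h + eps))
    return expected_energy + neg_entropy

def q_value(v, W, b_v, b_h):
    return -free_energy(v, W, b_v, b_h)         # Eq. (5), up to C_F

def sarsa_update(v_t, v_tp1, r, W, b_v, b_h, gamma=0.9, lr=0.01):
    """Delta-rule update of Eq. (7), applied to all weights and biases.

    The gradient of -F w.r.t. W[k, i] is h_k * v_i, and similarly v_i and
    h_k for the visible and hidden biases.
    """
    td = r + gamma * q_value(v_tp1, W, b_v, b_h) - q_value(v_t, W, b_v, b_h)
    h = sigmoid(W @ v_t + b_h)
    W += lr * td * np.outer(h, v_t)
    b_v += lr * td * v_t
    b_h += lr * td * h
    return td
```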
2.2 Sampling Actions

Given a trained network and the current state s^t, we need to generate actions according to their goodness. We would like to select actions according to a Boltzmann exploration scheme in which the probability of selecting an action is proportional to e^{Q/T}. This selection scheme has the desirable property that it optimizes the trade-off between the expected payoff, Q, and the entropy of the selection distribution, where T is the relative importance of exploration versus exploitation. Fortunately, the additive constant, C_F, does not need to be known in order to select actions in this way. It is sufficient to do alternating Gibbs sampling. We start with an arbitrary initial action represented on the action units. Holding the state units fixed, we update all of the hidden units in parallel so that we get a sample from the posterior distribution over the hidden units given the state and the action. Then we update all of the action units in parallel so that we get a sample from the posterior distribution over actions given the states of the hidden units. When updating the states of the action units, we use a "softmax" to enforce the one-of-N constraint within a set of binary units that represent mutually exclusive actions of the same agent. When the alternating Gibbs sampling reaches equilibrium it draws unbiased samples of actions according to their Q-value. For the networks we used, 50 Gibbs iterations appeared to be sufficient to come close to the equilibrium distribution.
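A minimal version of this alternating Gibbs sampler is sketched below. The layout of the visible vector (state units followed by action units), the use of slice objects to mark each agent's one-of-N action group, and the way the temperature is applied to the conditional distributions are assumptions of the sketch rather than details from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_action(state_v, W, b_v, b_h, action_slices, T=1.0, n_steps=50):
    """Alternating Gibbs sampling of an action with the state units clamped.

    state_v       : one-of-N coded state units (held fixed)
    action_slices : one slice per agent, indexing that agent's one-of-N
                    group within the action part of the visible vector
    Returns a binary action vector sampled roughly in proportion to exp(Q/T).
    """
    rng = np.random.default_rng()
    n_state = len(state_v)
    n_action = W.shape[1] - n_state
    a = np.zeros(n_action)
    for sl in action_slices:                    # arbitrary initial action
        a[rng.integers(sl.start, sl.stop)] = 1.0
    for _ in range(n_steps):
        v = np.concatenate([state_v, a])
        h_prob = sigmoid((W @ v + b_h) / T)     # sample the hidden units
        h = (rng.random(len(h_prob)) < h_prob).astype(float)
        top_down = (h @ W + b_v)[n_state:]      # input to the action units
        a = np.zeros(n_action)
        for sl in action_slices:                # softmax keeps one-of-N
            p = softmax(top_down[sl] / T)
            a[sl.start + rng.choice(len(p), p=p)] = 1.0
    return a
```

For example, with two agents each choosing among four moves, `action_slices = [slice(0, 4), slice(4, 8)]`.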
3 Experimental Results

To test the algorithm we introduce a co-operative multi-agent task in which there are offensive players trying to reach an end-zone, and defensive players trying to block them (see Figure 2).

Figure 2: An example of the "blocker" task. Agents must get past the blockers to the end-zone. The blockers are preprogrammed with a strategy to stop them, but if they co-operate the blockers cannot stop them all simultaneously.

The task is co-operative: As long as one agent reaches the end-zone, the "team" is rewarded. The team receives a reward of +1 when an agent reaches the end-zone, and a reward of -1 otherwise. The blockers are pre-programmed with a fixed blocking strategy. Each agent occupies one square on the grid, and each blocker occupies three horizontally adjacent squares. An agent cannot move into a square occupied by a blocker or another agent. The task has non-wrap-around edge conditions on the east, west and south sides of the field, and the blockers and agents can move north, south, east or west.

A product of experts (PoE) network with 4 hidden units was trained on a 5 x 4 blocker task with two agents and one blocker. The combined state consisted of three position variables (two agents and one blocker) which could take on integer values {1, ..., 20}. The combined action consisted of two action variables taking on values from {1, ..., 4}. The network was run twice, once for 60,000 combined actions and once for 400,000 combined actions, with a learning rate going from 0.1 to 0.01 linearly and a temperature going from 1.0 to 0.01 exponentially over the course of training. Each trial was terminated after either the end-zone was reached or 20 combined actions were taken, whichever occurred first. Each trial was initialized with the blocker placed randomly in the top row and the agents placed randomly in the bottom row. The same learning rate and temperature schedule were used to train a Q-learner with a table containing 128,000 elements (20^3 x 4^2), except that the Q-learner was allowed to train for 1 million combined actions. After training, each policy was run for 10,000 steps, and all rewards were totaled.

The two algorithms were also compared to a hand-coded policy, where the agents first move to opposite sides of the field and then move to the end-zone. In this case, all of the algorithms performed comparably, with the PoE network performing well even for a short training time.

A PoE network with 16 hidden units was trained on a 4 x 7 blockers task with three agents and two blockers. Again, the input consisted of position variables for each blocker and agent, and action variables for each agent. The network was trained for 400,000 combined actions, with a learning rate from 0.01 to 0.001 and the same temperature schedule as the previous task. Each trial was terminated after either the end-zone was reached or 40 steps were taken, whichever occurred first. After training, the resultant policy was run for 10,000 steps and the rewards received were totaled. As the table representation would have over a billion elements (28^5 x 4^3), a table-based Q-learner could not be trained for comparison. The hand-coded policy moved agents 1, 2 and 3 to the left, middle and right columns respectively, and then moved all agents towards the end-zone. The PoE performed comparably to this hand-coded policy. The results for all experiments are summarized in Table 1.

Table 1: Experimental Results

  Algorithm                                                Reward
  Random policy (5 x 4, 2 agents, 1 blocker)               -9986
  Hand-coded (5 x 4, 2 agents, 1 blocker)                  -6782
  Q-learning (5 x 4, 2 agents, 1 blocker, 1000K steps)     -6904
  PoE (5 x 4, 2 agents, 1 blocker, 60K steps)              -7303
  PoE (5 x 4, 2 agents, 1 blocker, 400K steps)             -6738
  Random policy (4 x 7, 3 agents, 2 blockers)              -9486
  Hand-coded (4 x 7, 3 agents, 2 blockers)                 -7074
  PoE (4 x 7, 3 agents, 2 blockers, 400K steps)            -7631

4 Discussion

Each hidden unit in the product model implements a probabilistic constraint that captures one aspect of the relationship between combined states and combined actions in a good policy. In practice the hidden units tend to represent particular strategies that are relevant in particular parts of the combined state space. This suggests that the hidden units could be used for hierarchical or temporal learning. A reinforcement learner could, for example, learn the dynamics between hidden unit values (useful for POMDPs) and the rewards associated with hidden unit activations.

Because the PoE network implicitly represents a joint probability distribution over state-action pairs, it can be queried in ways that are not normally possible for an actor network. Given any subset of state and action variables, the remainder can be sampled from the network using Gibbs sampling. This makes it easy to answer questions of the form: "How should agent 3 behave given fixed actions for agents 1 and 2?" or "I can see some of the state variables but not others. What values would I most like to see for the others?". Further, because there is an efficient unsupervised learning algorithm for PoE networks, an agent could improve its policy by watching another agent's actions and making them more probable under its own model.

There are a number of related works, both in the fields of reinforcement learning and unsupervised learning. The SARSA algorithm is from [7, 8]. A delta-rule update similar to ours was explored by [9] for POMDPs and Q-learning. Factored MDPs and function approximators have a long history in the adaptive control and RL literature (see for example [10]). Our method is also closely related to actor-critic methods [2, 3].
Normally with an actor-critic method, the actor network can be viewed as a biased scheme for selecting actions according to the value assigned by the critic. The selection is biased by the choice of parameterization. Our method of action selection is unbiased (if the Markov chain is allowed to converge). Further, the resultant policy can potentially be much more complicated than a typical parameterized actor network would allow. This is exactly the trade-off explored in the graphical models literature between the use of Monte Carlo inference [11] and variational approximations [12]. Our algorithm is also related to probability matching [13], in which good actions are made more probable under the model, and the temperature at which the probability is computed is slowly reduced over time in order to move from exploration to exploitation and avoid local minima. Unlike our algorithm, the probability matching algorithm used a parameterized distribution which was maximized using gradient descent, and it did not address temporal credit assignment.

5 Conclusions

We have shown that a product of experts network can be used to learn the values of state-action pairs (including temporal credit assignment) when both the states and actions have a factored representation. An unbiased sample of actions can then be recovered with Gibbs sampling, and 50 iterations appear to be sufficient. The network performs as well as a table-based Q-learner for small tasks, and continues to perform well when the task becomes too large for a table-based representation.

Acknowledgments

We thank Peter Dayan, Zoubin Ghahramani and Andy Brown for helpful discussions. This research was funded by NSERC Canada and the Gatsby Charitable Foundation.

References

[1] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[2] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics, 13:835-846, 1983.
[3] R. S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proc. International Conference on Machine Learning, 1990.
[4] Tommi Jaakkola, Satinder P. Singh, and Michael I. Jordan. Reinforcement learning algorithm for partially observable Markov decision problems. In Gerald Tesauro, David S. Touretzky, and Todd K. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 345-352. The MIT Press, Cambridge, 1995.
[5] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, UCL, 2000.
[6] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[7] G.A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Engineering Department, Cambridge University, 1994.
[8] R.S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Touretzky et al. [14], pages 1038-1044.
[9] M.L. Littman, A.R. Cassandra, and L.P. Kaelbling. Learning policies for partially observable environments: Scaling up. In Proc. International Conference on Machine Learning, 1995.
[10] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
[11] R. M. Neal.
Connectionist learning of belief networks. Artificial Intelligence, 56:71-113, 1992.
[12] T. S. Jaakkola. Variational Methods for Inference and Estimation in Graphical Models. Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 1997. Ph.D. thesis.
[13] Philip N. Sabes and Michael I. Jordan. Reinforcement learning by probability matching. In Touretzky et al. [14], pages 1080-1086.
[14] David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors. Advances in Neural Information Processing Systems, volume 8. The MIT Press, Cambridge, 1996.
Adaptive Object Representation with Hierarchically-Distributed Memory Sites

Bosco S. Tjan
Department of Psychology
University of Southern California
[email protected]

Abstract

Theories of object recognition often assume that only one representation scheme is used within one visual-processing pathway. Versatility of the visual system comes from having multiple visual-processing pathways, each specialized in a different category of objects. We propose a theoretically simpler alternative, capable of explaining the same set of data and more. A single primary visual-processing pathway, loosely modular, is assumed. Memory modules are attached to sites along this pathway. Object-identity decision is made independently at each site. A site's response time is a monotonic-decreasing function of its confidence regarding its decision. An observer's response is the first-arriving response from any site. The effective representation(s) of such a system, determined empirically, can appear to be specialized for different tasks and stimuli, consistent with recent clinical and functional-imaging findings. This, however, merely reflects a decision being made at its appropriate level of abstraction. The system itself is intrinsically flexible and adaptive.

1 Introduction

How does the visual system represent its knowledge about objects so as to identify them? A largely unquestioned assumption in the study of object recognition has been that the visual system builds up a representation for an object by sequentially transforming an input image into progressively more abstract representations. The final representation is taken to be the representation of an object and is entered into memory. Recognition of an object occurs when the representation of the object currently in view matches an item in memory. Highly influential proposals for a common representation of objects [1, 2] have failed to show promise of either producing a working artificial system or explaining a gamut of behavioral data. This insistence on having a common representation for all objects is also a major cause of the debate on whether the perceptual representation of objects is 2-D appearance-based or 3-D structure-based [3, 4]. Recently, a convergence of data [5-9], including those from the viewpoint debate itself [10, 11], have been used to suggest that the brain may use multiple mechanisms or processing pathways to recognize a multitude of objects. While insisting on a common representation for all objects seems too restrictive in light of the varying complexity across objects [12], asserting a new pathway for every idiosyncratic data cluster seems unnecessary. We propose a parsimonious alternative, which is consistent with existing data but explains them with novel insights. Our framework relies on a single processing pathway. Flexibility and self-adaptivity are achieved by having multiple memory and decision sites distributed along the pathway.

2 Theory and Methods

If the visual system needs to construct an abstract representation of objects for a certain task (e.g. object categorization), it will have to do so via multiple stages. The intermediate result at each stage is itself a representation. The entire processing pathway thus provides a hierarchy of representations, ranging from the most image-specific at the earliest stage to the most abstract at the latest stage. The central idea of our proposal is that the visual system can tap this hierarchical collection of representations by attaching memory modules along the processing pathway.
We further speculate that each memory site makes independent decisions about the identity of an incoming image. Each announces its decision after a delay, determined by an amount related to the site's confidence about its own decision and the amount of memory it needs to consult before reaching the decision. The homunculus does nothing but take the first-arriving response as the system's response. Figure 1a depicts this framework, which we shall call the Hierarchically Distributed Decision Theory for object recognition.

Figure 1: An illustration of the Hierarchically Distributed Decision Theory of object recognition (a) and its implementation in a toy visual system (b).

2.1 A toy visual system

We constructed a toy visual system to illustrate various properties of the Hierarchically Distributed Decision Theory. The task for this toy system is to identify letters presented at arbitrary position and orientation and corrupted by Gaussian luminance noise. This system is not meant to be a model of human vision, but rather a demonstration of the theory. Given a stimulus (letter + noise), the position of the target letter is first estimated and the letter centered in the image (position normalization) by computing the centroid of the stimulus' luminance profile. Once centered, the principal axis of the luminance profile is determined and the entire image is rotated so that this axis is vertical (orientation normalization). The representation at this final stage is both position- and orientation-invariant. Traditionally, one would commit only this final representation to memory. In contrast, the Hierarchically Distributed Decision Theory states that the intermediate results are also committed to some form of sensory memory (Figure 1b). A memory item is a feature vector. For this toy system, a feature vector is a sub-sampled image at the output of each stage.

To recognize a letter, each site s independently decides the letter's identity L_s based on the intermediate representation I_s available to the site. It does so by maximizing the posterior probability Pr(L_s | I_s), assuming 1) independent feature noise of known distribution (in this case, independent Gaussian luminance noise of zero mean and standard deviation σ) and 2) that its memory content completely captures all other sources of correlated noise and signal uncertainties (deviation from which is assessed by Eq. 3). Specifically,

L_s = argmax_{Γ ∈ Letters} Pr(Γ | I_s)    (1)

where Letters is the set of letter identities. A letter identity Γ is in turn a set of letter images V at a given luminance, which may be shifted or rotated. So we have,

Pr(Γ | I_s) = Σ_{V ∈ Γ} Pr(V | I_s) = Σ_{V ∈ Γ} Pr(I_s | V) Pr(V) / Pr(I_s)
            = [ Σ_{V ∈ Γ} exp( -||I_s - V||² / 2σ² ) Pr(V) ] / [ Σ_{Γ' ∈ Letters} Σ_{V ∈ Γ'} exp( -||I_s - V||² / 2σ² ) Pr(V) ]    (2)

In addition to choosing a response, each site delays sending out its response by an amount t_s. t_s is related to each site's own assessment of its confidence about its decision and the size of the memory it needed to consult to make the decision. t_s is a monotonically decreasing function of confidence (it grows with one minus the maximum posterior probability) and a monotonically increasing function of memory size:

t_s = h_1 ( 1 - max_{Γ ∈ Letters} Pr(Γ | I_s) ) + h_2 log(M_s) + h_0    (3)

h_0, h_1, and h_2 are constants common to all sites. M_s is the effective number of items in memory at site s, equal to the number of distinct training views the site saw (or the limit of its memory size, whichever is less). In our toy system, M_1 is the number of distinct training views presented to the system. M_2 is approximately the number of training views with distinct orientations (because I_2 is normalized for position), and M_3 is effectively one view per letter. In general, M_1 > M_2 > M_3.
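A compact sketch of one site's decision (Eqs. 1-2), its delay (Eq. 3), and the homunculus' first-arriving-response rule is given below. The log-sum-exp trick, the uniform prior over stored views, and the particular constants h0, h1 and h2 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def _logsumexp(x):
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))   # avoids numerical underflow

def site_decision(I, memory, sigma, M, h0=0.1, h1=1.0, h2=0.05):
    """One site's letter decision and response delay (Eqs. 1-3).

    I      : feature vector available at this site
    memory : dict mapping each letter identity to a list of stored views V
    M      : effective number of items in this site's memory (M_s)
    Returns (chosen letter, posterior probability, delay t_s).
    """
    letters = list(memory)
    # log of the numerator of Eq. (2), with a uniform prior over views
    logp = np.array([
        _logsumexp(np.array([-np.sum((I - V) ** 2) / (2 * sigma ** 2)
                             for V in memory[L]]))
        for L in letters
    ])
    post = np.exp(logp - _logsumexp(logp))     # normalize across letters
    best = int(np.argmax(post))
    delay = h1 * (1.0 - post[best]) + h2 * np.log(M) + h0   # Eq. (3)
    return letters[best], post[best], delay

def system_response(site_inputs, site_params):
    """The homunculus takes the first-arriving (minimum-delay) response."""
    answers = [site_decision(I, **p) for I, p in zip(site_inputs, site_params)]
    return min(answers, key=lambda a: a[2])[0]
```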
Relative to the decision time t_s, the processing time required to perform the normalizations is assumed to be negligible. (This assumption can be removed by letting h_0 depend on the site s.)

2.2 Learning and testing

The learning component of the theory has yet to be determined. For our toy system, we assumed that the items kept in memory are free of luminance noise but subjected to normalization errors caused by the luminance noise (e.g. the position of a letter may not be perfectly determined). We measured performance of the toy system by first exposing it to 5 orientations and 20 positions of each letter at high signal-to-noise ratio (SNR). Ten letters from the Times Roman font were used in the simulation (bcdeghnopw). The system keeps in memory those studied views (Site 1) and their normalized versions (Sites 2 & 3). Therefore, M_1 = 5 x 20 x 10 = 1000. Since the normalization processes are reliable at high SNR, M_2 ≈ 50, and M_3 ≈ 10. We tested the system by presenting it with letters from either the studied views, or views it had not seen before. In the latter case, a novel view could be either with novel position alone, or with both novel position and orientation. The test stimuli were presented at SNR ranging from 210 to 1800 (Weber contrast of 10-30% at a mean luminance of 48 cd/m² and a noise standard deviation of 10 cd/m²).

3 Results and Discussions

Figure 2a shows the performance of our toy visual system under different stimulus conditions. The numbered thin curves indicate recognition accuracy achieved by each site. As expected, Site 1, which kept raw images in memory, achieved the best accuracy when tested with studied views, but it could not generalize to novel views. In contrast, Site 3 maintained essentially the same level of performance regardless of view condition - its representation was invariant to position and orientation.

Figure 2: (a) Accuracy of the system (solid symbols) versus accuracy of each site (numbered curves) under different contrast and view conditions. (b) Relative frequency of a site issuing the first-arriving response.

The thick curves with solid symbols indicate the system's performance based on first-arriving responses. Clearly, it tracked the performance of the best-performing site under all conditions. Under the novel-position condition, the system's performance was even better than that of the best-performing sites. This is because although Sites 2 and 3 performed equally well, they made different errors. The simple delay rule effectively picked out the most reliable response at each trial. Figure 2b shows the source distribution of the first-arriving responses. When familiar (i.e. studied) views were presented at low contrast (low SNR), Site 1, which used the raw image as its representation, was responsible for issuing about 60% of the first-arriving responses.
This is because normalization processes tend to be less reliable at low SNR. Whenever an input to Site 2 or 3 cannot be properly normalized, it will match poorly to the normalized views in memory, resulting in lower confidence and longer delay. As contrast increased, normalization processes became more accurate, and the first-arriving responses shifted to the higher sites. Higher sites encode more invariance, and thus need to consult fewer memory items. Lastly, when novel views were presented, Site 3 tended to be the most active, since it was the only site that fully captured all the invariance necessary for this condition.

The delay mechanism specified by Eq. 3 allows the system as a whole to be self-adaptive. Its effective representation, if we can speak of such, is flexible. No site is exclusively responsible for any particular kind of stimuli. Instead, the decision is always distributed across sites in a trial-by-trial basis. What do existing human data on object recognition have to say about this simple framework? Wouldn't those data supporting functional specialization or object-category-specific representations argue against this framework? Not at all!

3.1 Viewpoint effects

Entry-level object recognition [13] often shows less viewpoint dependence than subordinate-level object recognition. This has been taken to suggest that two different mechanisms or forms of representation may be subserving these two types of object recognition tasks [4]. Figure 3a shows our system's overall performance in response time (RT) and error rate when tested with the studied (thus "familiar") and the novel (new positions and orientations) views. The difference in RT and error rate between these two conditions (Figure 3b) is a rough measure of the viewpoint effect. Even though the system includes a site (Site 3) with viewpoint-invariant representation, the system's overall performance still depends on viewpoint, particularly at low contrast.

Figure 3: (a) RT and error rate of the toy system when tested with either the studied or novel views. (b) Difference between the two conditions.

Because the representation space of this toy system is the image space, contrast is a direct measure of "perceptual" distinctiveness. Figure 3b shows that when objects were sufficiently distinct (as in entry-level recognition), there was little or no viewpoint effect. When objects were highly similar, performances were equally poor for studied and novel views, so there was little viewpoint effect to speak of. Viewpoint effect was localized to a mid-range of distinctiveness. Within this range, increasing similarity increased viewpoint dependence. The fact that viewpoint effect was present only within a bounded range of distinctiveness agrees with the general experience that sizable viewpoint effect is uncommon unless artificially created objects or objects chosen from the same category (subordinate-level recognition) are used.

3.2 Functionally specialized brain regions

Various fMRI studies have observed what appears to be functionally specialized brain regions involved in object perception [7-9]. To identify and localize such areas, a typical approach is to subtract the observed hemodynamic signals under one stimulus condition from that under a different condition.
An area is said to be "selective" to a stimulus type X if its signal strength is higher whenever X, as opposed to some other type of stimuli, is displayed. We performed a simulated "imaging" study on our toy visual system. Consider Figure 3b. If we assume that one unit of metabolic energy is needed to send a response, and no more responses will be sent after the first-arriving response has been received, we can re-label the x-axis of the histograms as "hemodynamic signal", or "activation level". Furthermore, as mentioned before, we can label stimuli in high contrast as "distinct objects" and those in low contrast as "similar objects." When we did "similar minus distinct", we obtained the result shown in the lower right-hand panel of Figure 4a. Site 1 was more active than all other sites when recognition was between similar objects, while Site 3 was more active when recognition was between distinct objects. The standard practice of interpreting such a result would label Site 1 as an area for processing similar (perhaps subordinate-level) objects, and Site 3 as an area for processing distinct (perhaps entry-level) objects. Knowing how the decisions are actually made, however, such labeling is clearly misguided. When instead we did "familiar minus novel", we obtained a similar pattern of results (Figure 4a, upper right). However, this time we would have to label Site 1 as an area for processing familiar objects (or an area for expertise), and Site 3 for novel objects. Analogous to an on-going debate about expertise vs. object-specificity [14], whether Site 1 is for familiar objects or similar objects cannot be resolved based on the subtraction method alone. According to the standard interpretation of the subtraction method, our toy visual system appeared to contain functionally specialized sites; yet, none of the sites were designed to specialize in any kind of stimuli. Even in the most extreme cases, no site was responsible for more than 70% of the decisions. One last point is worth mentioning. The primary visual pathway was equally active under all conditions, so its activity became invisible after subtraction. The observed signal change revealed only the difference in memory activities.

Figure 4: The toy visual system gives the appearance of containing functionally specialized modules in simulated functional imaging (a) and lesion studies (b).

3.3 Category-specific deficits

Patients with prosopagnosia cannot recognize faces, but their ability to recognize other objects is often spared. Patients with visual object agnosia have the opposite impairments. This kind of double dissociation is taken as another piece of evidence to suggest that the visual system contains object-specific modules (cf. [15]). We observed the same kind of double dissociation with our toy model. Figure 4b shows what happened when we "lesioned" different memory sites in our system by preventing a site from issuing any response. When Site 1 was lesioned, recognition performance for similar-but-familiar objects (analogous to familiar faces) was impeded while performance for distinct-but-novel objects was spared. The opposite was true when Site 3 was lesioned. It is worth restating that our toy system consisted of only a single processing pathway and no category-specific representations.
4 Conclusion

Intermediate representations along a single visual-processing pathway form a natural hierarchy of abstractions. We have shown that by attaching sensory memory modules to the pathway, this hierarchy can be exploited to achieve an effective representation of objects that is highly flexible and adaptive. Each memory module makes an independent decision regarding the identity of an object based on the intermediate representation available to it. Each module delays sending out its response by an amount related to its confidence about its decision, in addition to the time required for memory lookup. The first-arriving response becomes the system's response. It is an attractive conjecture that this scheme of adaptive representation may be used by the visual system. Through a toy example, we have shown that such a system can appear to behave like one with multiple functionally specialized pathways or category-specific representations, raising questions about the contemporary interpretations of behavioral, clinical and functional-imaging data regarding the neuro-architecture for object recognition.

References

1. Marr, D., Vision. 1982, San Francisco: Freeman.
2. Biederman, I., Recognition-by-components: A theory of human image understanding. Psychological Review, 1987. 94: p. 115-147.
3. Biederman, I. and P.C. Gerhardstein, Recognizing depth-rotated objects: Evidence and conditions for three-dimensional viewpoint invariance. Journal of Experimental Psychology: Human Perception and Performance, 1993. 19: p. 1162-1182.
4. Tarr, M.J. and H.H. Bülthoff, Is human object recognition better described by geon structural descriptions or by multiple views? Comment on Biederman and Gerhardstein (1993). Journal of Experimental Psychology: Human Perception and Performance, 1995. 21: p. 1494-1505.
5. Farah, M.J., Is an object an object an object? Cognitive and neuropsychological investigations of domain-specificity in visual object recognition. Current Directions in Psychological Science, 1992. 1: p. 164-169.
6. Kanwisher, N., M.M. Chun, and P. Ledden, Functional imaging of human visual recognition. Cognitive Brain Research, 1996. 5: p. 55-67.
7. Kanwisher, N., J. McDermott, and M.M. Chun, The fusiform face area: A module in human extra-striate cortex specialized for face perception. Journal of Neuroscience, 1997. 17: p. 1-10.
8. Kanwisher, N., et al., A locus in human extrastriate cortex for visual shape analysis. Journal of Cognitive Neuroscience, 1997. 9: p. 133-142.
9. Ishai, A., et al., fMRI reveals differential activation in the ventral object recognition pathway during the perception of faces, houses and chairs. Neuroimage, 1997. 5(149).
10. Edelman, S., Features of Recognition. 1991, Rehovot, Israel: Weizmann Institute of Science.
11. Jolicoeur, P., Identification of disoriented objects: A dual-system theory. Memory & Cognition, 1990. 13: p. 289-303.
12. Tjan, B.S. and G.E. Legge, The viewpoint complexity of an object recognition task. Vision Research, 1998. 38: p. 2335-2350.
13. Jolicoeur, P., M.A. Gluck, and S.M. Kosslyn, From pictures to words: Making the connection. Cognitive Psychology, 1984. 16: p. 243-275.
14. Gauthier, I., et al., Activation of the middle fusiform "face area" increases with expertise in recognizing novel objects. Nature Neuroscience, 1999. 2(6): p. 568-573.
15. Farah, M.J., Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision. 1990, Cambridge, MA: MIT Press.
A SELF-LEARNING NEURAL NETWORK

A. Hartstein and R. H. Koch
IBM - Thomas J. Watson Research Center
Yorktown Heights, New York

ABSTRACT

We propose a new neural network structure that is compatible with silicon technology and has built-in learning capability. The thrust of this work is a new synapse function. The synapses have the feature that the learning parameter is embodied in the thresholds of MOSFET devices and is local in character. The network is shown to be capable of learning by example as well as exhibiting the desirable features of the Hopfield type networks.

The thrust of what we want to discuss is a new synapse function for an artificial neuron to be used in a neural network. We choose the synapse function to be readily implementable in VLSI technology, rather than choosing a function which is either our best guess for the function used by real synapses or mathematically the most tractable. In order to demonstrate that this type of synapse function provides interesting behavior in a neural network, we imbed this type of function in a Hopfield {Hopfield, 1982} type network and provide the synapses with a Hebbian {Hebb, 1949} learning capability. We then show that this type of network functions in much the same way as a Hopfield network and also learns by example. Some of this work has been discussed previously {Hartstein, 1988}.

Most neural networks which have been described use a multiplicative function for the synapses. The inputs to the neuron are multiplied by weighting factors and then the results are summed in the neuron. The result of the sum is then put into a hard threshold device or a device with a sigmoid output. This is not the easiest function for a MOSFET to perform, although it can be done. Over a large range of parameters, a MOSFET is a linear device with the output current being a linear function of the input voltage relative to a threshold voltage. If one could directly utilize these characteristics, one would be able to design a neural network more compactly.

We propose that we directly use MOSFETs as the input devices for the neurons in the network, utilizing their natural characteristics. We assume the following form for the input of each neuron in our network:

V_i = σ( Σ_j |I_j - T_ij| )    (1)

where V_i is the output, I_j are the inputs and T_ij are the learned threshold voltages. In this network we use a representation in which both the V's and the T's range from 0 to +1. The result of the summation is fed into a non-linear sigmoid function (σ). All of the neurons in the network are interconnected, the outputs of each neuron feeding the inputs of every other neuron. The functional form of Eq. 1 might, for instance, represent several n-channel and p-channel MOSFETs in parallel. The memories in this network are contained in the threshold voltages, T_ij.

We implement learning in this network using a simple linear Hebbian {Hebb, 1949} learning rule. We use a rule which locally reinforces the state of each input node in a neuron relative to the output of that neuron. The learning algorithm is governed by Eq. (2), where T_ij are the initial threshold voltages and T'_ij are the new threshold voltages after a time Δt. Here η is a small learning parameter related to this time period, and the offset factor 0.5 is needed for symmetry. Additional saturation constraints are imposed to ensure that T_ij remain in the interval 0 to +1.
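The sketch below implements the forward pass of Eq. (1) for a fully interconnected network. Since Eq. (2) is only described qualitatively above, the threshold update shown here is a hypothetical reading of that description (linear, local, with the 0.5 offset and saturation clamping); the exact published form may differ, as may the sigmoid's gain and the fan-in normalization, which are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(I, T_row, gain=10.0):
    """Threshold-synapse neuron of Eq. (1): V_i = sigma( sum_j |I_j - T_ij| ).

    Inputs and thresholds lie in [0, 1]; the fan-in normalization and the
    sigmoid gain are illustrative choices, not taken from the paper.
    """
    s = np.sum(np.abs(I - T_row)) / len(I)
    return sigmoid(gain * (s - 0.5))

def network_step(V, T):
    """One synchronous update of the fully interconnected network.
    T[i, j] is the learned threshold of neuron i's synapse from neuron j."""
    return np.array([neuron_output(V, T[i]) for i in range(len(V))])

def hebbian_update(V, T, I, eta=0.01):
    """HYPOTHETICAL enhancing/inhibiting rule in the spirit of Eq. (2):
    linear, local, with the 0.5 offset for symmetry.  This exact form is an
    assumption, not the paper's equation."""
    T_new = T + eta * (V[:, None] - 0.5) * (I[None, :] - T)
    return np.clip(T_new, 0.0, 1.0)            # saturation constraints
```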
This learning rule is one which is linear in the difference between each input and output of a neuron. This is an enhancing/inhibiting rule. The thresholds are adjusted in such a way that the output of the neuron is either pushed in the same direction as the input (enhancing), or pushed in the opposite direction (inhibiting). For our simple simulations we started the network with all thresholds at 0.5 and let learning proceed until some saturation occurred. The somewhat more sophisticated method of including a relaxation term in Eq. 2 to slowly push the values toward 0.5 over time was also explored. The results are essentially the same as for our simple simulations.

The interesting question is: if we form a network using this type of neuron, what will the overall network response be like? Will the network learn multiple states or will it learn a simple average over all of the states it sees? In order to probe the functioning of this network, we have performed simulations of this network on a digital computer. Each simulation was divided into two phases. The first was a learning phase in which a fixed number of random patterns were presented to the network sequentially for some period of time. During this phase the threshold voltages were allowed to change using the rule in Eq. 2. The second was a testing phase in which learning was turned off and the memories established in the network were probed to determine the essential features of these learned memories. In this way we could test how well the network was able to learn the initial test patterns, how well the network could reconstruct the learned patterns when presented with test patterns containing errors, and how the network responded to random input patterns.

We have simulated this network using N fully interconnected neurons, with N in the range of 10 to 200. M random patterns were chosen and sequentially presented to the network for learning. M typically ranged up to N/3. After the learning phase, the nature of the stable states in the network was tested. In general we found that the network is capable of learning all of the input patterns as long as M is not too large. The network also learns the inverse patterns (1's and 0's interchanged) due to the inherent symmetry of the network. Additional extraneous patterns are learned which have no obvious connection to the intended learned states. These may be analogous to either the spin glass states or the mixed pattern states discussed for the multiplicative network {Amit, 1985}.

Fig. 1 shows the capacity of a 100 neuron network. We attempted to teach the network M states and then probed the network to see how many of the states were successfully learned. This process was repeated many times until we achieved good statistics. We have defined successful learning as 100% accuracy. A more relaxed definition would yield a qualitatively similar curve with larger capacity. The functional form of the learning is peaked at a fixed value of the number of input patterns. For a small number of input patterns, the network essentially learns all of the patterns. Deviations from perfect learning here generally mean 1 bit of information was learned incorrectly. Near the peak the results become more noisy for different learning attempts. Most errors are still only 1 or 2 bits, but the learning in this region becomes marginal as the capacity of the network is approached.
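A minimal sketch of this two-phase simulation follows, reusing the hypothetical output/hebbian_step helpers from the previous sketch. The hard-threshold retrieval dynamics assumed in recall() are not specified in the text and should be read as an assumption.

```python
import numpy as np

def recall(V, T, steps=100):
    # Assumed retrieval dynamics: iterate Eq. (1) with a hard threshold
    # until the state stops changing (a fixed point of the network).
    n = T.shape[0]
    for _ in range(steps):
        V_new = (np.abs(V[None, :] - T).sum(axis=1) > n / 2.0).astype(float)
        if np.array_equal(V_new, V):
            break
        V = V_new
    return V

def learned_count(n=100, M=10, epochs=20, seed=1):
    rng = np.random.default_rng(seed)
    patterns = rng.integers(0, 2, size=(M, n)).astype(float)
    T = np.full((n, n), 0.5)
    for _ in range(epochs):                    # learning phase
        for p in patterns:
            T = hebbian_step(p, T)
    # testing phase: 100% accuracy counts as successfully learned
    return sum(int(np.array_equal(recall(p.copy(), T), p)) for p in patterns)
```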
For larger values of the number of input patterns the network becomes overloaded and it becomes incapable of learning most of the input states. Some small number of patterns are still learned, but the network is clearly not functioning well. Many of the errors in this region are large, showing little correlation with the intended learned states. This functional form for the learning in the network is the same for all of the network sizes tested.

We define the capacity of the network as the average value of the peak number of patterns which can be successfully learned. The inset to Fig. 1 shows the memory capacity of a number of tested networks as a function of the size of the network. The network capacity is seen to be a linear function of the network size. The capacity is proportional to the number of T_ij's specified. In this example the network capacity was found to be about 8% of the maximum possible for binary information. This rather low figure results from a trade-off of capacity for the particular types of functions that a neural network can perform. It is possible to construct simple memories with 100% capacity.

Figure 1. The number of successfully learned patterns as a function of the number of input patterns for a 100 neuron network. The dashed curve is for perfect learning. The inset shows the memory capacity of a threshold neural network as a function of the size of the network.

Some important measures of learning in the network are the distribution of stable states in the network after learning has taken place, and the basin of attraction for each stable point. One can gain a handle on these parameters by probing the network with random test patterns after the network has learned M states. Fig. 2 shows the averaged results of such tests for a 100 neuron network and varying numbers of learned states. The figure shows the probability of finding particular states, both learned and extraneous. The states are ordered first by decreasing probability for the learned states, followed by decreasing probability for the extraneous states. It is clear from the figure that both types of stable states are present in the network. It is also clear that the probabilities of finding different patterns are not equal. Some learned states are more robust than others, that is, they have larger basins of attraction. This network model does not partition the available memory space equally among the input patterns. It also provides a large amount of memory space for the extraneous states. Clearly, this is not the optimum situation.

Figure 2. The probability of the network finding a specific pattern. Both learned states and extraneous states are found. The figure was obtained for a 100 neuron network. Fig. 2a is for 5 learned patterns and 2b is for 10 learned patterns.

Some of the learned states appear to have 0 probability of being found in this simulation. Some of these states are not stable states of the network and will never be found. This is particularly true when the number of learned states is close to or exceeds the capacity of the network.
Others of these states simply have an extremely small probability of being found in a random search because they have small basins of attraction. However, as discussed below, these are still viable states. When the network learns fewer states than its capacity (Fig. 2a), most of the stable states are the learned states. As the capacity is approached or exceeded, most of the stable states are extraneous states.

The results shown in Fig. 2 address the question of the network's tolerance to errors. A pattern which has a large basin of attraction will be relatively tolerant to errors when being retrieved, whereas a pattern which has a small basin of attraction will be less tolerant of errors. The immunity of the learned patterns to errors in being retrieved can also be tested in a more direct way. One can probe the network with test patterns which start out as the learned patterns, but have a certain number of bits changed randomly. One then monitors the final pattern which the network finds and compares it to the known learned pattern.

Figure 3. Probability of the network finding a specific learned state when the input pattern has a certain Hamming distance. This figure was obtained for a 100 neuron network which was taught 10 random patterns.

Fig. 3 shows typical results of such a calculation. The probability of successfully retrieving a pattern is shown as a function of the Hamming distance, the number of bits which were randomly changed in the test pattern. For this simulation a 100 neuron network was used and it was taught 10 patterns. For small Hamming distances the patterns are successfully found 100% of the time. As the Hamming distance gets larger the network is no longer capable of finding the desired pattern, but rather finds one of the other fixed points. This result is a statistical average over all of the states and therefore tends to emphasize patterns with small basins of attraction. This is just the opposite of the types of states emphasized in the analysis shown in Fig. 2.

We can define the maximum Hamming distance as the Hamming distance at which the probability of finding the learned state has dropped to 50%. Fig. 4 shows the maximum Hamming distance as a function of the number of learned states in our 100 neuron network. As one expects, the maximum Hamming distance gets smaller as the number of learned states increases. Perhaps surprisingly, the relationship is linear. These results are important since one requires a reasonable maximum Hamming distance for any real system to function. These considerations also shed some light on the nature of the functioning of the network and its ability to learn.

Figure 4. The maximum Hamming distance for a given number of learned states. Results are for a 100 neuron network.
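The error-tolerance probe behind Figs. 3 and 4 can be sketched as follows, again building on the hypothetical recall() helper above: flip h random bits of a learned pattern (Hamming distance h) and check whether the network settles back onto it.

```python
import numpy as np

def retrieval_probability(pattern, T, h, trials=200, seed=2):
    # Probe as in Fig. 3: corrupt h bits, then see whether the dynamics
    # return to the learned pattern (i.e., the probe is in its basin).
    rng = np.random.default_rng(seed)
    n = len(pattern)
    hits = 0
    for _ in range(trials):
        probe = pattern.copy()
        flip = rng.choice(n, size=h, replace=False)
        probe[flip] = 1.0 - probe[flip]        # Hamming distance h
        hits += int(np.array_equal(recall(probe, T), pattern))
    return hits / trials
```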
This simulation gives us a picture of the way in which the network utilizes its phase space to store information. When only a few patterns are stored in the network, the network divides up the available space among these memories. The learning process is almost always successful. When a larger number of learned patterns are attempted, the available space is now divided among more memories. The maximum Hamming distance decreases and more space is taken up by extraneous states. When the memory capacity is exceeded, the phase space allocated to any successful memory is very small and most of the space is taken up by extraneous states.

The types of behavior we have described are similar to those found in the Hopfield type memory utilizing multiplicative synapses. In fact our central point is that by using a completely different type of synapse function, we can obtain the same behavior. At the same time we argue that since this network was proposed using a synapse function which mirrors the operating characteristics of MOSFETs, it will be much easier to realize in hardware. Therefore, we should be able to construct a smaller, more tolerant network with the same operating characteristics.

We do not mean to imply that the type of synapse function we have explored can only be used in a Hopfield type network. In fact we feel that this type of neuron is quite general and can successfully be utilized in any type of network. This is at present just a conjecture which needs to be explored more fully. Perhaps the most important message from our work is the realization that one need not be constrained to the multiplicative type of synapse, and that other forms of synapses can perform similar functions in neural networks. This may open up many new avenues of investigation.

REFERENCES

D.J. Amit, H. Gutfreund and H. Sompolinsky, Phys. Rev. A32, 1001 (1985).
A. Hartstein and R.H. Koch, IEEE Int. Conf. on Neural Networks, (SOS Printing, San Diego, 1988), Vol. I, 425.
D. Hebb, The Organization of Behaviour, (Wiley, New York, 1949).
J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79, 2554 (1982).
Second order approximations for probability models

Hilbert J. Kappen
Department of Biophysics
Nijmegen University
Nijmegen, the Netherlands

Wim Wiegerinck
Department of Biophysics
Nijmegen University
Nijmegen, the Netherlands

Abstract

In this paper, we derive a second order mean field theory for directed graphical probability models. By using an information theoretic argument it is shown how this can be done in the absence of a partition function. This method is a direct generalisation of the well-known TAP approximation for Boltzmann Machines. In a numerical example, it is shown that the method greatly improves the first order mean field approximation. For a restricted class of graphical models, so-called single overlap graphs, the second order method has comparable complexity to the first order method. For sigmoid belief networks, the method is shown to be particularly fast and effective.

1 Introduction

Recently, a number of authors have proposed deterministic methods for approximate inference in large graphical models. The simplest approach gives a lower bound on the probability of a subset of variables using Jensen's inequality (Saul et al., 1996). The method involves the minimization of the KL divergence between the target probability distribution p and some 'simple' variational distribution q. The method can be applied to a large class of probability models, such as sigmoid belief networks, DAGs and Boltzmann Machines (BM).

For Boltzmann-Gibbs distributions, it is possible to derive the lower bound as the first term in a Taylor series expansion of the free energy around a factorized model. The free energy is given by −log Z, where Z is the normalization constant of the Boltzmann-Gibbs distribution: p(x) = exp(−E(x))/Z. This Taylor series can be continued and the second order term is known as the TAP correction (Plefka, 1982; Kappen and Rodriguez, 1998). The second order term significantly improves the quality of the approximation, but is no longer a bound.

For probability distributions that are not Boltzmann-Gibbs distributions, it is not obvious how to obtain the second order approximation. However, there is an alternative way to compute the higher order corrections, based on an information theoretic argument. Recently, this argument was applied to stochastic neural networks with asymmetric connectivity (Kappen and Spanjers, 1999). Here, we apply this idea to directed graphical models.

2 The method

Let x = (x_1, ..., x_n) be an n-dimensional vector, with x_i taking on discrete values. Let p(x) be a directed graphical model on x. We will assume that p(x) can be written as a product of potentials in the following way:

    p(x) = Π_{k=1}^n p_k(x_k|π_k) = exp( Σ_{k=1}^n φ_k(x̄_k) )    (1)

Here, p_k(x_k|π_k) denotes the conditional probability table of variable x_k given the values of its parents π_k. x̄_k = (x_k, π_k) denotes the subset of variables that appear in potential k, and φ_k(x̄_k) = log p_k(x_k|π_k). Potentials can be overlapping, x̄_k ∩ x̄_l ≠ ∅, and x = ∪_k x̄_k.

We wish to compute the marginal probability that x_i has some specific value s_i in the presence of some evidence. We therefore denote x = (e, s), where e denotes the subset of variables that constitute the evidence, and s denotes the remainder of the variables. The marginal is given as

    p(s_i|e) = p(s_i, e) / p(e).    (2)

Both numerator and denominator contain sums over hidden states. These sums scale exponentially with the size of the problem, and therefore the computation of marginals is intractable.
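To make the intractability concrete, here is a brute-force evaluation of Eq. 2 for a toy model; the chain-structured log-joint is an illustrative assumption. The inner loop visits all 2^n joint states, which is exactly the cost the mean field approximation below avoids.

```python
import itertools
import numpy as np

def marginal(log_joint, n, i):
    # p(s_i | e): evidence is assumed already folded into log_joint, which
    # maps a full binary assignment (a tuple of 0/1) to an unnormalized
    # log p(x). The sum runs over all 2^n joint states.
    num = np.zeros(2)
    for x in itertools.product((0, 1), repeat=n):
        num[x[i]] += np.exp(log_joint(x))
    return num / num.sum()

# Toy joint: random pairwise log-potentials on a chain x_1 -> ... -> x_n.
rng = np.random.default_rng(0)
n = 12
pair = rng.normal(size=(n, 2, 2))
lj = lambda x: sum(pair[k, x[k - 1], x[k]] for k in range(1, n))
print(marginal(lj, n, 3))
```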
We propose to approximate this problem by using a mean field approach. Consider a factorized distribution on the hidden variables s:

    q(s) = Π_i q_i(s_i)    (3)

We wish to find the factorized distribution q that best approximates p(s|e). Consider as a distance measure

    KL = Σ_s p(s|e) log( p(s|e) / q(s) ).    (4)

It is easy to see that the q that minimizes KL satisfies:

    q_i(s_i) = p(s_i|e).    (5)

We now think of the manifold of all probability distributions of the form Eq. 1, spanned by the coordinates φ_k(x̄_k), k = 1, ..., m. For each k, φ_k(x̄_k) is a table of numbers, indexed by x̄_k. This manifold contains a submanifold of factorized probability distributions in which the potentials factorize: φ_k(x̄_k) = Σ_{i ∈ k} φ_ki(x_i). When in addition Σ_{k: i ∈ k} φ_ki(x_i) = log q_i(x_i), i ∈ s, p(s|e) reduces to q(s).

Assume now that p(s|e) is somehow close to the factorized submanifold. The difference Δp(s_i|e) = p(s_i|e) − q_i(s_i) is then small, and we can expand this small difference in terms of changes in the parameters Δφ_k(x̄_k) = φ_k(x̄_k) − log q(x̄_k), k = 1, ..., m:

    Δ log p(s_i|e) = Σ_{k=1}^m Σ_{x̄_k} ( ∂ log p(s_i|e) / ∂φ_k(x̄_k) )_q Δφ_k(x̄_k)
                   + (1/2) Σ_{k,l} Σ_{x̄_k, ȳ_l} ( ∂² log p(s_i|e) / ∂φ_k(x̄_k) ∂φ_l(ȳ_l) )_q Δφ_k(x̄_k) Δφ_l(ȳ_l)
                   + higher order terms    (6)

The differentials are evaluated in the factorized distribution q. The left-hand side of Eq. 6 is zero because of Eq. 5, and we solve for q(s_i). This factorized distribution gives the desired marginals up to the order of the expansion of Δ log p(s_i|e). It is straightforward to compute the derivatives:

    ∂ log p(s_i|e) / ∂φ_k(x̄_k) = p(x̄_k|s_i, e) − p(x̄_k|e)
    ∂² log p(s_i|e) / ∂φ_k(x̄_k) ∂φ_l(ȳ_l) = p(x̄_k, ȳ_l|s_i, e) − p(x̄_k, ȳ_l|e) − p(x̄_k|s_i, e) p(ȳ_l|s_i, e) + p(x̄_k|e) p(ȳ_l|e)    (7)

We introduce the notation ⟨...⟩_{s_i} and ⟨...⟩ as the expectation values with respect to the factorized distributions q(x|s_i, e) and q(x|e), respectively. We define ⟨⟨...⟩⟩_{s_i} ≡ ⟨...⟩_{s_i} − ⟨...⟩. We obtain

    Δ log p(s_i|e) = Σ_k ⟨⟨Δφ_k⟩⟩_{s_i}
                   + (1/2) Σ_{k,l} ( ⟨⟨Δφ_k Δφ_l⟩⟩_{s_i} − ⟨Δφ_k⟩_{s_i} ⟨Δφ_l⟩_{s_i} + ⟨Δφ_k⟩ ⟨Δφ_l⟩ )
                   + higher order terms    (8)

To first order, setting Eq. 8 equal to zero we obtain

    0 = Σ_k ⟨⟨Δφ_k⟩⟩_{s_i} = ⟨log p(x)⟩_{s_i} − log q_i(s_i) + const.,    (9)

where we have absorbed all terms independent of i into a constant. Thus, we find the solution

    q_i(s_i) = (1/Z_i) exp( ⟨log p(x)⟩_{s_i} ),    (10)

in which the constants Z_i follow from normalisation. The first order term is equivalent to the standard mean field equations, obtained from Jensen's inequality. The correction with second order terms is obtained in the same way, again dropping terms independent of i:

    q_i(s_i) = (1/Z_i) exp( ⟨log p(x)⟩_{s_i} + (1/2) Σ_{k,l} ( ⟨Δφ_k Δφ_l⟩_{s_i} − ⟨Δφ_k⟩_{s_i} ⟨Δφ_l⟩_{s_i} ) ),    (11)

where, again, the constants Z_i follow from normalisation. These equations, which form the main result of this paper, are a generalization of the mean field equations with TAP corrections for directed graphical models. Both left- and right-hand sides of Eqs. 10 and 11 depend on the unknown probability distribution q(s) and can be solved by fixed point iteration.
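A minimal sketch of this fixed point iteration for the first order equations (Eq. 10), on an assumed toy chain of binary variables with no evidence; the random log-potentials are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
phi = rng.normal(size=(n, 2, 2))   # phi[k, x_k, x_parent] = log p_k(x_k | parent)
q = np.full((n, 2), 0.5)           # factorized marginals q_i(s_i)

for sweep in range(200):           # fixed point iteration of Eq. (10)
    for i in range(n):
        # <log p(x)>_{s_i}: expected log-potentials of the terms containing x_i
        field = np.zeros(2)
        for si in (0, 1):
            if i > 0:              # potential where x_i is the child
                field[si] += q[i - 1] @ phi[i, si, :]
            else:
                field[si] += phi[0, si, 0]   # root: dummy parent state 0
            if i + 1 < n:          # potential where x_i is the parent
                field[si] += q[i + 1] @ phi[i + 1, :, si]
        new = np.exp(field - field.max())
        q[i] = new / new.sum()     # the constant Z_i from normalisation
```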
3 Complexity and single-overlap graphs

The complexity of the first order equations Eq. 10 is exponential in the number of variables in the potentials φ_k of p: if the maximal clique size is c, then for each i we need of the order of n_i exp(c) computations, where n_i is the number of cliques that contain node i. The second order term scales worse, since one must compute averages over the union of two overlapping cliques and because of the double sum. However, things are not so bad.

Figure 1: An example of a single-overlap graph. Left: the chest clinic model (ASIA) (Lauritzen and Spiegelhalter, 1988). Right: nodes within one potential are grouped together, showing that potentials share at most one node.

First of all, notice that the sum over k and l can be restricted to overlapping cliques (k ∩ l ≠ ∅) and that i must be in either k or l or both (i ∈ k ∪ l). Denote by n_k the number of cliques that have at least one variable in common with clique k, and denote n_overlap = max_k n_k. Then, the sum over k and l contains not more than n_i n_overlap terms. Each term is an average over the union of two cliques, which can be in the worst case of size 2c − 1 (when only one variable is shared). However, since ⟨Δφ_k Δφ_l⟩_{s_i} = ⟨⟨Δφ_k⟩_{k∩l} Δφ_l⟩_{s_i} (⟨...⟩_{k∩l} means expectation w.r.t. q conditioned on the variables in k ∩ l), we can precompute ⟨Δφ_k⟩_{k∩l} for all pairs of overlapping cliques k, l, for all states in k ∩ l. Therefore, the worst case complexity of the second order term is less than n_i n_overlap exp(c). Thus, we see that the second order method has the same exponential complexity as the first order method, but with a different polynomial prefactor. Therefore, the first or second order method can be applied to directed graphical models as long as the number of parents is reasonably small.

The fact that the second order term has a worse complexity than the first order term is in contrast to Boltzmann machines, in which the TAP approximation has the same complexity as the standard mean field approximation. This phenomenon also occurs for a special class of DAGs, which we call single-overlap graphs. These are graphs in which the potentials φ_k share at most one node. Figure 1 shows an example of a single-overlap graph. For single-overlap graphs, we can use the first order result Eq. 9 to simplify the second order correction. The derivation is rather tedious and we just present the result:

    q_i(s_i) = (1/Z_i) exp( ⟨log p(x)⟩_{s_i} + (1/2) Σ_{l: i∈l} ( ⟨(Δφ_l)²⟩_{s_i} − ⟨Δφ_l⟩_{s_i}² ) ),    (12)

which has a complexity that is of order n_i (c − 1) exp(c). For probability distributions with many small potentials that share nodes with many other potentials, Eq. 12 is more efficient than Eq. 11. For instance, for Boltzmann Machines n_i = n_overlap = n − 1 and c = 2. In this case, Eq. 12 is identical to the TAP equations (Thouless et al., 1977).

4 Sigmoid belief networks

In this section, we consider sigmoid belief networks as an interesting class of directed graphical models. The reason is that one can expand in terms of the couplings instead of the potentials, which is more efficient. The sigmoid belief network is defined as

    p(x) = Π_i σ( x_i h_i(x) ),    (13)

where σ(x) = (1 + exp(−2x))^{−1}, x_i = ±1, and h_i is the local field: h_i(x) = Σ_j w_ij x_j + θ_i. We separate the variables in evidence variables e and hidden variables s: x = (s, e). When couplings from hidden nodes to either hidden or evidence nodes are zero, w_ij = 0 for i ∈ e, s and j ∈ s, the probability distributions p(s|e) and p(e) reduce to

    p(s|e) → q(s) = Π_{i∈s} σ( s_i θ̃_i )    (14)
    p(e) → Π_{i∈e} r(e_i),  with r(e_i) = σ( e_i θ̃_i ),    (15)

where θ̃_i = Σ_{j∈e} w_ij e_j + θ_i depends on the evidence.

Figure 2: Interpretation of the different interaction terms appearing in Eq. 16. The open and shaded nodes are hidden and evidence nodes, respectively (except in (a), where k can be any node). Solid arrows indicate the graphical structure in the network. Dashed arrows indicate interaction terms that appear in Eq. 16.
We expand to second order around this tractable distribution and obtain

    m_i = tanh( Σ_{k∈s,e} m_k w_ik + θ_i + 2 Σ_{k∈e} r(−e_k) e_k w_ki
                − m_i Σ_{k∈s,e} (1 − m_k²) w_ik² + 4 m_i Σ_{k∈e} r(e_k) r(−e_k) w_ki²
                − 4 Σ_{k∈e, l∈s} r(e_k) r(−e_k) m_l w_kl w_ki
                + 2 Σ_{k∈s, l∈e} (1 − m_k²) r(−e_l) e_l w_lk w_ki )    (16)

with m_i = ⟨s_i⟩_q ≈ ⟨s_i⟩_p, and r given by Eq. 15. The different terms that appear in this equation can be easily interpreted. The first term describes the lowest order forward influence on node i from its parents. Parents can be either evidence or hidden nodes (fig. 2a). The second term is the bias θ_i. The third term describes to lowest order the effect of Bayes' rule: it affects m_i such that the observed evidence on its children becomes most probable (fig. 2b). Note that this term is absent when the evidence is explained by the evidence nodes themselves: r(e_k) = 1. The fourth and fifth terms are the quadratic contributions to the first and third terms, respectively. The sixth term describes 'explaining away'. It describes the effect of hidden node l on node i, when both have a common observed child k (fig. 2c). The last term describes the effect on node i when its grandchild is observed (fig. 2d). Note that these equations are different from Eq. 10. When one applies Eq. 10 to sigmoid belief networks, one requires additional approximations to compute ⟨log σ(x_i h_i)⟩ (Saul et al., 1996).

Figure 3: Second order approximation for a fully connected sigmoid belief network of n nodes. a) Nodes 1, ..., n_l are hidden (white) and nodes n_l + 1, ..., n are clamped (grey), n_l = n/2; b) CPU time for exact inference (dashed) and second order approximation (solid) versus n_l (J = 0.5); c) RMS of hidden node exact marginals (solid) and RMS error of second order approximation (dashed) versus coupling strength J (n_l = 10).

Since only feed-forward connections are present, one can order the nodes such that w_ij = 0 for i < j. Then the first order mean field equations can be solved in one single sweep starting with node 1. The full second order equations can be solved by iteration, starting with the first order solution.
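A sketch of that single forward sweep for the first order part of Eq. 16 (with no evidence, the update reduces to m_i = tanh(Σ_j w_ij m_j + θ_i)); the network size and weight scale below are illustrative assumptions. The second order terms of Eq. 16 would then be added and the full equation iterated starting from this solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, J = 20, 0.5
# w_ij = 0 for i < j: strictly lower-triangular, so parents precede children
w = np.tril(rng.normal(scale=J / np.sqrt(n), size=(n, n)), k=-1)
theta = np.zeros(n)

m = np.zeros(n)
for i in range(n):                        # single forward sweep, node 1 first
    m[i] = np.tanh(w[i] @ m + theta[i])   # parents j < i are already assigned
```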
5 Numerical results

We illustrate the theory with two toy problems. The first one is inference in Lauritzen's chest clinic model (ASIA), defined on 8 binary variables x = {A, T, S, L, B, E, X, D} (see figure 1, and (Lauritzen and Spiegelhalter, 1988) for more details about the model). We computed exact marginals and approximate marginals using the approximating methods up to first (Eq. 10) and second order (Eq. 11), respectively. The approximate marginals are determined by sequential iteration of (10) and (11), starting at q(x_i) = 0.5 for all variables i. The maximal error in the marginals using the first and second order method is 0.213 and 0.061, respectively. We verified that the single-overlap expression Eq. 12 gave similar results.

In fig. 3, we assess the accuracy and CPU time of the second order approximation Eq. 16 for sigmoid belief networks. We generate random fully connected sigmoid belief networks with w_ij drawn from a normal distribution with mean zero and variance J²/n, and θ_i = 0. We observe in fig. 3b that the computation time is very fast: for n_l = 500, we obtained convergence in 37 seconds on a Pentium 300 MHz processor. The accuracy of the method depends on the size of the weights and is computed for a network of n_l = 10 (fig. 3c). In (Kappen and Wiegerinck, 2001), we compare this approach to Saul's variational approach (Saul et al., 1996) and show that our approach is much faster and slightly more accurate.

6 Discussion

In this paper, we computed a second order mean field approximation for directed graphical models. We show that the second order approximation gives a significant improvement over the first order result. The method does not use explicitly that the graph is directed. Therefore, the result is equally valid for Markov graphs.

The complexity of the first and second order approximation is O(n_i exp(c)) and O(n_i n_overlap exp(c)), respectively, with c the number of variables in the largest potential. For single-overlap graphs, one can rewrite the second order equation such that the computational complexity reduces to O(n_i (c − 1) exp(c)). Boltzmann machines and the ASIA network are examples of single-overlap graphs. For large c, additional approximations are required, as was proposed by (Saul et al., 1996) for the first order mean field equations. It is evident that such additional approximations are then also required for the second order mean field equations.

It has been reported (Barber and Wiegerinck, 1999; Wiegerinck and Kappen, 1999) that similar numerical improvements can be obtained by using a very different approach, which is to use an approximating distribution q that is not factorized, but still tractable. A promising way to proceed is therefore to combine both approaches and to do a second order expansion around a manifold of non-factorized yet tractable distributions. In this approach the sufficient statistics of the tractable structure are expanded, rather than the marginal probabilities.

Acknowledgments

This research was supported in part by the Dutch Technology Foundation (STW).

References

Barber, D. and Wiegerinck, W. (1999). Tractable variational structures for approximating graphical models. In Kearns, M., Solla, S., and Cohn, D., editors, Advances in Neural Information Processing Systems, volume 11, pages 183-189. MIT Press.

Kappen, H. and Rodriguez, F. (1998). Efficient learning in Boltzmann Machines using linear response theory. Neural Computation, 10:1137-1156.

Kappen, H. and Spanjers, J. (1999). Mean field theory for asymmetric neural networks. Physical Review E, 61:5658-5663.

Kappen, H. and Wiegerinck, W. (2001). Mean field theory for graphical models. In Saad, D. and Opper, M., editors, Advanced Mean Field Theory. MIT Press.

Lauritzen, S. and Spiegelhalter, D. (1988). Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society B, 50:154-227.

Plefka, T. (1982). Convergence condition of the TAP equation for the infinite-range Ising spin glass model. Journal of Physics A, 15:1971-1978.

Saul, L., Jaakkola, T., and Jordan, M. (1996). Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76.

Thouless, D., Anderson, P., and Palmer, R. (1977). Solution of 'Solvable Model of a Spin Glass'. Phil. Mag., 35:593-601.

Wiegerinck, W. and Kappen, H. (1999). Approximations of Bayesian networks through KL minimisation. New Generation Computing, 18:167-175.
Convergence of Large Margin Separable Linear Classification

Tong Zhang
Mathematical Sciences Department
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]

Abstract

Large margin linear classification methods have been successfully applied to many applications. For a linearly separable problem, it is known that under appropriate assumptions, the expected misclassification error of the computed "optimal hyperplane" approaches zero at a rate proportional to the inverse training sample size. This rate is usually characterized by the margin and the maximum norm of the input data. In this paper, we argue that another quantity, namely the robustness of the input data distribution, also plays an important role in characterizing the convergence behavior of expected misclassification error. Based on this concept of robustness, we show that for a large margin separable linear classification problem, the expected misclassification error may converge exponentially in the number of training sample size.

1 Introduction

We consider the binary classification problem: to determine a label y ∈ {−1, 1} associated with an input vector x. A useful method for solving this problem is by using linear discriminant functions. We seek a weight vector w and a threshold θ such that w^T x < θ if its label y = −1 and w^T x ≥ θ if its label y = 1. In this paper, we are mainly interested in problems that are linearly separable by a positive margin (although, as we shall see later, our analysis is suitable for non-separable problems). That is, there exists a hyperplane that perfectly separates the in-class data from the out-of-class data. We shall also assume θ = 0 throughout the rest of the paper for simplicity. This restriction usually does not cause problems in practice since one can always append a constant feature to the input data x, which offsets the effect of θ.

For linearly separable problems, given a training set of n labeled data (x^1, y^1), ..., (x^n, y^n), Vapnik recently proposed a method that optimizes a hard margin bound, which he calls the "optimal hyperplane" method (see [11]). The optimal hyperplane w_n is the solution to the following quadratic programming problem:

    min_w (1/2) w^T w    s.t.  w^T x^i y^i ≥ 1,  i = 1, ..., n.    (1)

For linearly non-separable problems, a generalization of the optimal hyperplane method has appeared in [2], where a slack variable ξ_i is introduced for each data point (x^i, y^i) for i = 1, ..., n. We compute a hyperplane w_n that solves

    min_{w,ξ} (1/2) w^T w + C Σ_i ξ_i    s.t.  w^T x^i y^i ≥ 1 − ξ_i,  ξ_i ≥ 0  for i = 1, ..., n,    (2)

where C > 0 is a given parameter (also see [11]).
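As a concrete illustration, here is a minimal sketch of (2) in its equivalent regularized hinge-loss form (with λ = 1/(nC); cf. formulation (3) in Section 3), minimized by plain subgradient descent on synthetic data. The data, step size, and λ are illustrative assumptions; this is not the quadratic programming solver one would use for the exact optimal hyperplane.

```python
import numpy as np

def train(X, y, lam=0.01, eta=0.1, epochs=500):
    # minimize (1/n) sum_i max(0, 1 - w^T x^i y^i) + (lam/2) ||w||^2
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = (X @ w) * y                 # w^T x^i y^i
        viol = margins < 1.0                  # points with nonzero slack
        grad = -(X[viol] * y[viol, None]).sum(axis=0) / n + lam * w
        w -= eta * grad
    return w

rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=200)
X = y[:, None] * np.array([2.0, 1.0]) + 0.3 * rng.normal(size=(200, 2))
w_n = train(X, y)
print(np.mean((X @ w_n) * y <= 0))            # training misclassification rate
```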
In this paper, we are interested in the quality of the computed weight w_n for the purpose of predicting the label y of an unseen data point x. We study this predictive power of w_n in the standard batch learning framework. That is, we assume that the training data (x^i, y^i) for i = 1, ..., n are independently drawn from the same underlying data distribution D, which is unknown. The predictive power of the computed parameter w_n then corresponds to the classification performance of w_n with respect to the true distribution D.

We organize the paper as follows. In Section 2, we briefly review a number of existing techniques for analyzing separable linear classification problems. We then derive an exponential convergence rate of misclassification error in Section 3 for certain large margin linear classification. Section 4 compares the newly derived bound with known results from the traditional margin analysis. We explain that the exponential bound relies on a new quantity (the robustness of the distribution) which is not explored in a traditional margin bound. Note that for certain batch learning problems, exponential learning curves have already been observed [10]. It is thus not surprising that an exponential rate of convergence can be achieved by large margin linear classification.

2 Some known results on generalization analysis

There are a number of ways to obtain bounds on the generalization error of a linear classifier. A general framework is to use techniques from empirical processes (aka VC analysis). Many such results that are related to large margin classification have been described in chapter 4 of [3]. The main advantage of this framework is its generality. The analysis does not require the estimated parameter to converge to the true parameter, which is ideal for combinatorial problems. However, for problems that are numerical in nature, the potential parameter space can be significantly reduced by using the first order condition of the optimal solution. In this case, the VC analysis may become suboptimal since it assumes a larger search space than what a typical numerical procedure uses. Generally speaking, for a problem that is linearly separable with a large margin, the expected classification error of the computed hyperplane resulting from this analysis is of the order O(log n / n).¹ Similar generalization bounds can also be obtained for non-separable problems.

In chapter 10 of [11], Vapnik described a leave-one-out cross-validation analysis for linearly separable problems. This analysis takes into account the first order KKT condition of the optimal hyperplane w_n. The expected generalization performance from this analysis is O(1/n), which is better than the corresponding bounds from the VC analysis. Unfortunately, this technique is only suitable for deriving an expected generalization bound (for example, it is not useful for obtaining a PAC style probability bound).

Another well-known technique for analyzing linearly separable problems is the mistake bound framework in online learning. It is possible to obtain an algorithm with a small generalization error in the batch learning setting from an algorithm with a small online mistake bound. The readers are referred to [6] and references therein for this type of analysis. The technique may lead to a bound with an expected generalization performance of O(1/n).

¹Bounds described in [3] would imply an expected classification error of O(log² n / n), which can be slightly improved (by a log n factor) if we adopt a slightly better covering number estimate such as the bounds in [12, 14].

Besides the above mentioned approaches, generalization ability can also be studied in the statistical mechanical learning framework. It was shown that for linearly separable problems, exponential decrease of misclassification error is possible under this framework [1, 5, 7, 8]. Unfortunately, it is unclear how to relate the statistical mechanical learning framework to the batch learning framework considered in this paper. Their analysis, employing approximation techniques, does not seem to imply the small sample bounds which we are interested in. The statistical mechanical learning result suggests that it may be possible to obtain a similar exponential decay of misclassification error in the batch learning setting, which we prove in the next section.
Furthermore, we show that the exponential rate depends on a quantity that is different from the traditional margin concept. Our analysis relies on a PAC style probability estimate on the convergence rate of the estimated parameter from (2) to the true parameter. Consequently, it is suitable for non-separable problems. A direct analysis on the convergence rate of the estimated parameter to the true parameter is important for problems that are numerical in nature such as (2). However, a disadvantage of our analysis is that we are unable to directly deal with the linearly separable formulation (1).

3 Exponential convergence

We can rewrite the SVM formulation (2) by eliminating ξ:

    w_n(λ) = argmin_w (1/n) Σ_i f( w^T x^i y^i − 1 ) + (λ/2) w^T w,    (3)

where λ = 1/(nC) and f(z) = −z for z ≤ 0, f(z) = 0 for z > 0. Denote by D the true underlying data distribution of (x, y), and let w_*(λ) be the optimal solution with respect to the true distribution:

    w_*(λ) = arg inf_w E_D f( w^T xy − 1 ) + (λ/2) w^T w.    (4)

Let w_* be the solution to

    w_* = arg inf_w (1/2) w²    s.t.  E_D f( w^T xy − 1 ) = 0,    (5)

which is the infinite-sample version of the optimal hyperplane method. Throughout this section, we assume ||w_*||₂ < ∞ and E_D ||x||₂ < ∞. The latter condition ensures that E_D f( w^T xy − 1 ) ≤ ||w||₂ E_D ||x||₂ + 1 exists for all w.

3.1 Continuity of solution under regularization

In this section, we show that ||w_*(λ) − w_*||₂ → 0 as λ → 0. This continuity result allows us to approximate (5) by using (4) and (3) with a small positive regularization parameter λ. We only need to show that within any sequence of λ that converges to zero, there exists a subsequence λ_i → 0 such that w_*(λ_i) converges to w_* strongly. We first consider the following inequality, which follows from the definition of w_*(λ):

    E_D f( w_*(λ)^T xy − 1 ) + (λ/2) w_*(λ)² ≤ (λ/2) w_*².    (6)

Therefore ||w_*(λ)||₂ ≤ ||w_*||₂. It is well-known that every bounded sequence in a Hilbert space contains a weakly convergent subsequence (cf. Proposition 66.4 in [4]). Therefore within any sequence of λ that converges to zero, there exists a subsequence λ_i → 0 such that w_*(λ_i) converges weakly. We denote the limit by w̄. Since f( w_*(λ_i)^T xy − 1 ) is dominated by ||w_*||₂ ||x||₂ + 1, which has a finite integral with respect to D, from (6) and the Lebesgue dominated convergence theorem we obtain

    0 = lim_i E_D f( w_*(λ_i)^T xy − 1 ) = E_D lim_i f( w_*(λ_i)^T xy − 1 ) = E_D f( w̄^T xy − 1 ).    (7)

Also note that ||w̄||₂ ≤ lim_i ||w_*(λ_i)||₂ ≤ ||w_*||₂; therefore by the definition of w_*, we must have w̄ = w_*. Since w_* is the weak limit of w_*(λ_i), we obtain ||w_*||₂ ≤ lim_i ||w_*(λ_i)||₂. Also since ||w_*(λ_i)||₂ ≤ ||w_*||₂, therefore lim_i ||w_*(λ_i)||₂ = ||w_*||₂. This equality implies that w_*(λ_i) converges to w_* strongly, since

    lim_i ( w_*(λ_i) − w_* )² = lim_i w_*(λ_i)² + w_*² − 2 lim_i w_*(λ_i)^T w_* = 0.

3.2 Accuracy of estimated hyperplane with non-zero regularization parameter

Our goal is to show that for the estimation method (3) with a nonzero regularization parameter λ > 0, the estimated parameter w_n(λ) converges to the true parameter w_*(λ) in probability when the sample size n → ∞. Furthermore, we give a large deviation bound on the rate of convergence.
From (4), we obtain the following first order condition:

    E_D β(λ, x, y) xy + λ w_*(λ) = 0,    (8)

where β(λ, x, y) = f′( w_*(λ)^T xy − 1 ) and f′(z) ∈ [−1, 0] denotes a member of the subgradient of f at z [9].² In the finite sample case, we can also interpret β(λ, x, y) in (8) as a scaled dual variable α: β = −α/C, where α appears in the dual (or kernel) formulation of an SVM (for example, see chapter 10 of [11]).

²For readers not familiar with the subgradient concept in convex analysis, our analysis requires little modification if we replace f with a smoother convex function, such as f², which avoids the discontinuity in the first order derivative.

The convexity of f implies that f(z₁) + (z₂ − z₁) f′(z₁) ≤ f(z₂) for any subgradient f′ of f. This implies the following inequality:

    (1/n) Σ_i f( w_*(λ)^T x^i y^i − 1 ) + ( w_n(λ) − w_*(λ) )^T (1/n) Σ_i β(λ, x^i, y^i) x^i y^i ≤ (1/n) Σ_i f( w_n(λ)^T x^i y^i − 1 ),

which is equivalent to:

    (1/n) Σ_i f( w_*(λ)^T x^i y^i − 1 ) + (λ/2) w_*(λ)²
    + ( w_n(λ) − w_*(λ) )^T [ (1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ) ]
    + (λ/2) ( w_*(λ) − w_n(λ) )²
    ≤ (1/n) Σ_i f( w_n(λ)^T x^i y^i − 1 ) + (λ/2) w_n(λ)².

Also note that by the definition of w_n(λ), we have:

    (1/n) Σ_i f( w_n(λ)^T x^i y^i − 1 ) + (λ/2) w_n(λ)² ≤ (1/n) Σ_i f( w_*(λ)^T x^i y^i − 1 ) + (λ/2) w_*(λ)².

Therefore by comparing the above two inequalities, we obtain:

    (λ/2) ( w_*(λ) − w_n(λ) )² ≤ ( w_*(λ) − w_n(λ) )^T [ (1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ) ]
                               ≤ || w_*(λ) − w_n(λ) ||₂ || (1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ) ||₂.

Therefore we have

    || w_*(λ) − w_n(λ) ||₂ ≤ (2/λ) || (1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ) ||₂
                           = (2/λ) || (1/n) Σ_i β(λ, x^i, y^i) x^i y^i − E_D β(λ, x, y) xy ||₂.    (9)

Note that in (9), we have already bounded the convergence of w_n(λ) to w_*(λ) in terms of the convergence of the empirical expectation of a random vector β(λ, x, y) xy to its mean. In order to obtain a large deviation bound on the convergence rate, we need the following result, which can be found in [13], page 95:

Theorem 3.1 Let e_i be zero-mean independent random vectors in a Hilbert space. If there exists M > 0 such that for all natural numbers l ≥ 2: (1/n) Σ_{i=1}^n E ||e_i||₂^l ≤ (1/2) b l! M^l, then for all δ > 0:

    P( || (1/n) Σ_i e_i ||₂ ≥ δ ) ≤ 2 exp( −(n/2) δ² / ( bM² + δM ) ).

Using the fact that β(λ, x, y) ∈ [−1, 0], it is easy to verify the following corollary by using Theorem 3.1 and (9), where we also bound the l-th moment of the right hand side of (9) using the following form of Jensen's inequality: |a + b|^l ≤ 2^{l−1} ( |a|^l + |b|^l ) for l ≥ 2.

Corollary 3.1 If there exists M > 0 such that for all natural numbers l ≥ 2: E_D ||x||₂^l ≤ (1/2) b l! M^l, then for all δ > 0:

    P( || w_*(λ) − w_n(λ) ||₂ ≥ δ ) ≤ 2 exp( −(n/8) λ²δ² / ( 4bM² + λδM ) ).
Therefore for sufficiently small A, we can define: I'(A) = sup{b: Pn(W.(A)T xy :s b) = O} ~ 1- Mllw. - w.(A)112 > O. By Corollary 3.2, we obtain the following upper-bound on the misclassification error if we compute a linear separator from (3) with a non-zero small regularization parameter A: Ex Pn( wn(Af xy :s 0) :s 2 exp( - ~A21'(A)2 1(4M4 + AI'(A)M2)). This indicates that the expected misclassification error of an appropriately computed hyperplane for a linearly separable problem is exponential in n. However, the rate of convergence depends on AI'( A) 1M2. This quantity is different than the margin concept which has been widely used in the literature to characterize the generalization behavior of a linear classification problem. The new quantity measures the convergence rate of W.(A) to w. as A -+ O. The faster the convergence, the more "robust" the linear classification problem is, and hence the faster the exponential decay of misclassification error is. As we shall see in the next section, this "robustness" is related to the degree of outliers in the problem. 4 Example We give an example to illustrate the "robustness" concept that characterizes the exponential decay of misclassification error. It is known from Vapnik's cross-validation bound in [11] (Theorem 10.7) that by using the large margin idea alone, one can derive an expected misclassification error bound that is of the order O(l/n), where the constant is margin dependent. We show that this bound is tight by using the following example. Example 4.1 Consider a two-dimensional problem. Assume that with probability of 1-1', we observe a data point x with label y such that xy = [1, 0]; and with probability of 1', we observe a data point x with label y such that xy = [-1, 1]. This problem is obviously linearly separable with a large margin that is I' independent. Now, for n random training data, with probability at most I'n + (1- I')n, we observe either xiyi = [1,0] for all i = 1, . .. , n, or xiyi = [-1,1] for all i = 1, ... , n. For all other cases, the computed optimal hyperplane Wn = w ?. This means that the misclassification error is 1'(1 - I')("Yn-l + (1 - I')n-l). This error converges to zero exponentially as n -+ 00. However the convergence rate depends on the fraction of outliers in the distribution characterized by 1'. In particular, for any n, if we let I' = 1In, then we have an expected misclassification error that is at least ~(l-l/n)n ~ 1/(en). D The above tightness construction of the linear decay rate of the expected generalization error (using the margin concept alone) requires the scenario that a small fraction (which shall be in the order of inverse sample size) of data are very different from other data. This small portion of data can be considered as outliers, which can be measured by the "robustness" of the distribution. In general, w. (A) converges to w. slowly when there exist such a small portion of data (outliers) that cannot be correctly classified from the observation of the remaining data. It can be seen that the optimal hyperplane in (1) is quite sensitive to even a single outlier. Intuitively, this instability is quite undesirable. However, the previous large margin learning bounds seemed to have dismissed this concern. This paper indicates that such a concern is still valid. In the worst case, even if the problem is separable by a large margin, outliers can still cause a slow down of the exponential convergence rate. 
5 Conclusion

In this paper, we derived new generalization bounds for large margin linearly separable classification. Even though we have only discussed the consequence of this analysis for separable problems, the technique can be easily applied to non-separable problems (see Corollary 3.2). For large margin separable problems, we show that exponential decay of generalization error may be achieved with an appropriately chosen regularization parameter. However, the bound depends on a quantity which characterizes the robustness of the distribution. An important difference between the robustness concept and the margin concept is that outliers may not be observable with large probability from data, while margin generally will. This implies that without any prior knowledge, it could be difficult to directly apply our bound using only the observed data.

References

[1] J.K. Anlauf and M. Biehl. The AdaTron: an adaptive perceptron algorithm. Europhys. Lett., 10(7):687-692, 1989.
[2] C. Cortes and V.N. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
[3] Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines and other Kernel-based Learning Methods. Cambridge University Press, 2000.
[4] Harro G. Heuser. Functional Analysis. John Wiley & Sons Ltd., Chichester, 1982. Translated from the German by John Horvath, a Wiley-Interscience publication.
[5] W. Kinzel. Statistical mechanics of the perceptron with maximal stability. In Lecture Notes in Physics, volume 368, pages 175-188. Springer-Verlag, 1990.
[6] J. Kivinen and M.K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Journal of Information and Computation, 132:1-64, 1997.
[7] M. Opper. Learning times of neural networks: Exact solution for a perceptron algorithm. Phys. Rev. A, 38(7):3824-3826, 1988.
[8] M. Opper. Learning in neural networks: Solvable dynamics. Europhysics Letters, 8(4):389-392, 1989.
[9] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.
[10] Dale Schuurmans. Characterizing rational versus exponential learning curves. J. Comput. Syst. Sci., 55:140-160, 1997.
[11] V.N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
[12] Robert C. Williamson, Alexander J. Smola, and Bernhard Scholkopf. Entropy numbers of linear function classes. In COLT'00, pages 309-319, 2000.
[13] Vadim Yurinsky. Sums and Gaussian Vectors. Springer-Verlag, Berlin, 1995.
[14] Tong Zhang. Analysis of regularized linear functions for classification problems. Technical Report RC-21572, IBM, 1999. Abstract in NIPS'99, pp. 370-376.
Learning Switching Linear Models of Human Motion

Vladimir Pavlovic and James M. Rehg
Compaq - Cambridge Research Lab, Cambridge, MA 02139
{vladimir.pavlovic, jim.rehg}@compaq.com

John MacCormick
Compaq - System Research Center, Palo Alto, CA 94301
{john.maccormick}@compaq.com

Abstract
The human figure exhibits complex and rich dynamic behavior that is both nonlinear and time-varying. Effective models of human dynamics can be learned from motion capture data using switching linear dynamic system (SLDS) models. We present results for human motion synthesis, classification, and visual tracking using learned SLDS models. Since exact inference in SLDS is intractable, we present three approximate inference algorithms and compare their performance. In particular, a new variational inference algorithm is obtained by casting the SLDS model as a Dynamic Bayesian Network. Classification experiments show the superiority of SLDS over conventional HMMs for our problem domain.

1 Introduction

The human figure exhibits complex and rich dynamic behavior. Dynamics are essential to the classification of human motion (e.g. gesture recognition) as well as to the synthesis of realistic figure motion for computer graphics. In visual tracking applications, dynamics can provide a powerful cue in the presence of occlusions and measurement noise. Although the use of kinematic models in figure motion analysis is now commonplace, dynamic models have received relatively little attention. The kinematics of the figure specify its degrees of freedom (e.g. joint angles and torso pose) and define a state space. A stochastic dynamic model imposes additional structure on the state space by specifying a probability distribution over state trajectories.

We are interested in learning dynamic models from motion capture data, which provides a training corpus of observed state space trajectories. Previous work by a number of authors has applied Hidden Markov Models (HMMs) to this problem. More recently, switching linear dynamic system (SLDS) models have been studied in [5, 12]. In SLDS models, the Markov process controls an underlying linear dynamic system, rather than a fixed Gaussian measurement model.¹ By mapping discrete hidden states to piecewise linear measurement models, the SLDS framework has potentially greater descriptive power than an HMM. Offsetting this advantage is the fact that exact inference in SLDS is intractable. Approximate inference algorithms are required, which in turn complicates SLDS learning.

In this paper we present a framework for SLDS learning and apply it to figure motion modeling. We derive three different approximate inference schemes: Viterbi [13], variational, and GPB2 [1]. We apply learned motion models to three tasks: classification, motion synthesis, and visual tracking. Our results include an empirical comparison between SLDS and HMM models on classification and one-step-ahead prediction tasks. The SLDS model class consistently outperforms standard HMMs even on fairly simple motion sequences.

¹SLDS models are sometimes referred to as jump-linear or conditional Gaussian models, and have been studied in the controls and econometrics literatures.

Figure 1: (a) SLDS model as a Dynamic Bayesian Network. s is the discrete switch state, x is the continuous state, and y is its observation. (b) Factorization of SLDS into decoupled HMM and LDS.
Our results suggest that SLDS models are a promising tool for figure motion analysis, and could play a key role in applications such as gesture recognition, visual surveillance, and computer animation. In addition, this paper provides a summary of approximate inference techniques which is lacking in the previous literature on SLDS. Furthermore, our variational inference algorithm is novel, and it provides another example of the benefit of interpreting classical statistical models as (mixed-state) graphical models.

2 Switching Linear Dynamic System Model

A switching linear dynamic system (SLDS) model describes the dynamics of a complex, nonlinear physical process by switching among a set of linear dynamic models over time. The system can be described using the following set of state-space equations:

x_{t+1} = A(s_{t+1}) x_t + v_{t+1}(s_{t+1}),
y_t = C x_t + w_t,
Pr(s_{t+1} = i | s_t = j) = \Pi(i, j),

for the plant and the switching model. The meaning of the variables is as follows: x_t \in R^N denotes the hidden state of the LDS, and v_t is the state noise process. Similarly, y_t \in R^M is the observed measurement and w_t is the measurement noise. Parameters A and C are the typical LDS parameters: the state transition matrix and the observation matrix, respectively. We assumed that the LDS models a Gauss-Markov process with i.i.d. Gaussian noise processes v_t(s_t) ~ N(0, Q(s_t)). The switching model is a discrete first-order Markov process with state variables s_t from a set of S states. The switching model is defined by the state transition matrix \Pi and an initial state distribution \pi_0. The LDS and switching process are coupled due to the dependence of the LDS parameters A and Q on the switching state s_t: A(s_t = i) = A_i, Q(s_t = i) = Q_i.

This complex state-space representation is equivalently depicted by the DBN dependency graph in Figure 1(a). The dependency graph implies that the joint distribution P(Y_T, X_T, S_T) over the variables of the SLDS can be written as

Pr(s_0) \prod_{t=1}^{T-1} Pr(s_t | s_{t-1}) Pr(x_0 | s_0) \prod_{t=1}^{T-1} Pr(x_t | x_{t-1}, s_t) \prod_{t=0}^{T-1} Pr(y_t | x_t),   (1)

where Y_T, X_T, and S_T denote the sequences (of length T) of observations and hidden state variables. From the Gauss-Markov assumption on the LDS and the Markov switching assumption, we can expand Equation 1 into the parameterized joint pdf of the SLDS of duration T. Learning in complex DBNs can be cast as ML learning in general Bayesian networks. The generalized EM algorithm can then be used to find optimal values of the DBN parameters {A, C, Q, R, \Pi, \pi_0}. Inference, which is addressed in the next section, is the most complex step in SLDS learning. Given the sufficient statistics from the inference phase, the parameter update equations in the maximization (M) step are easily obtained by maximizing the expected log of Equation 1 with respect to the LDS and MC parameters (see [13]).
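As a concrete illustration of the generative semantics of Equation 1, the following is a minimal sketch of sampling one trajectory from an SLDS. The parameter arrays and their shapes are hypothetical placeholders chosen for illustration, not values from the paper; the initial continuous state is simply drawn from N(0, Q(s_0)).

```python
import numpy as np

def sample_slds(A, Q, C, R, Pi, pi0, T, rng=None):
    """Draw one trajectory from a switching linear dynamic system.

    A, Q : (S, N, N) per-state transition matrices and state-noise covariances
    C    : (M, N)    observation matrix (shared across switching states)
    R    : (M, M)    measurement-noise covariance
    Pi   : (S, S)    Pi[i, j] = Pr(s_{t+1} = i | s_t = j)
    pi0  : (S,)      initial switching-state distribution
    """
    if rng is None:
        rng = np.random.default_rng(0)
    S, N = A.shape[0], A.shape[1]
    s = rng.choice(S, p=pi0)
    x = rng.multivariate_normal(np.zeros(N), Q[s])
    states, xs, ys = [], [], []
    for _ in range(T):
        states.append(s)
        xs.append(x)
        ys.append(C @ x + rng.multivariate_normal(np.zeros(C.shape[0]), R))
        s = rng.choice(S, p=Pi[:, s])                                # switch
        x = A[s] @ x + rng.multivariate_normal(np.zeros(N), Q[s])    # linear dynamics
    return np.array(states), np.array(xs), np.array(ys)
```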
3 Inference in SLDS

The goal of inference in complex DBNs is to estimate the posterior P(X_T, S_T | Y_T). If there were no switching dynamics, the inference would be straightforward: we could infer X_T from Y_T using LDS inference. However, the presence of switching dynamics makes exact inference exponentially hard, as the distribution of the system state at time t is a mixture of S^t Gaussians. Tractable, approximate inference algorithms are therefore required. We describe three methods: Viterbi, variational, and generalized pseudo-Bayesian.

3.1 Approximate Viterbi Inference

The Viterbi approximation approach finds the most likely sequence of switching states S_T* for a given observation sequence Y_T. Namely, the desired posterior P(X_T, S_T | Y_T) is approximated by its mode, Pr(X_T | S_T*, Y_T). It is well known how to apply Viterbi inference to discrete-state hidden Markov models and continuous-state Gauss-Markov models. Here we review an algorithm for approximate Viterbi inference in SLDSs presented in [13]. We have shown in [13] that one can use a recursive procedure to find the best switching sequence S_T* = argmax_{S_T} Pr(S_T | Y_T). At the heart of this recursion lies the approximation of the partial probability of the switching sequence and observations up to time t,

J_{t,i} = max_{S_{t-1}} Pr(S_{t-1}, s_t = i, Y_t)
       \approx max_j { Pr(y_t | s_t = i, s_{t-1} = j, S_{t-2}*(j), Y_{t-1}) Pr(s_t = i | s_{t-1} = j) J_{t-1,j} }.   (2)

The two scaling components are the likelihood associated with the transition from state j at time t-1 to state i at time t, and the probability of the discrete SLDS switching from j to i. Together they play the role of a "transition probability", which we denote J_{t|t-1,i,j}. The likelihood term can easily be found using Kalman updates, concurrent with the recursion of Equation 2. See [13] for details. The Viterbi inference algorithm can now be written as:

Initialize LDS state estimates x_{0|-1,i} and \Sigma_{0|-1,i};
Initialize J_{0,i};
for t = 1:T-1
  for i = 1:S
    for j = 1:S
      Predict and filter LDS state estimates x_{t|t,i,j} and \Sigma_{t|t,i,j};
      Find the j -> i "transition probability" J_{t|t-1,i,j};
    end
    Find the best transition \psi_{t-1,i} into state i;
    Update sequence probabilities J_{t,i} and LDS state estimates x_{t|t,i} and \Sigma_{t|t,i};
  end
end
Find the "best" final switching state i*_{T-1} and backtrace the best switching sequence S_T*;
Do RTS smoothing for S = S_T*;
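The following is a minimal runnable sketch of the recursion of Eq. (2), with the Kalman predict-and-filter step written out for Gaussian observation noise. The t = 0 initialization is simplified relative to the algorithm above, and all parameter names and shapes are illustrative (matching the hypothetical sampler earlier).

```python
import numpy as np

def gauss_logpdf(y, mean, cov):
    # log N(y; mean, cov) for a vector observation
    d = y - mean
    L = np.linalg.cholesky(cov)
    z = np.linalg.solve(L, d)
    return -0.5 * (z @ z) - np.log(np.diag(L)).sum() - 0.5 * d.size * np.log(2 * np.pi)

def viterbi_slds(y, A, Q, C, R, Pi, pi0):
    """Sketch of Eq. (2): for every switching state i keep the best
    predecessor j together with a single Kalman-filtered state estimate."""
    T, S, N = len(y), A.shape[0], A.shape[1]
    x, P = np.zeros((S, N)), Q.copy()          # simplified initialization
    logJ = np.log(pi0 + 1e-12)
    back = np.zeros((T, S), dtype=int)
    for t in range(T):
        logJ_new = np.full(S, -np.inf)
        x_new, P_new = np.zeros_like(x), np.zeros_like(P)
        for i in range(S):
            for j in range(S):
                xp = A[i] @ x[j]                        # predict under model i
                Pp = A[i] @ P[j] @ A[i].T + Q[i]
                Syy = C @ Pp @ C.T + R                  # innovation covariance
                score = (gauss_logpdf(y[t], C @ xp, Syy)
                         + np.log(Pi[i, j] + 1e-12) + logJ[j])
                if score > logJ_new[i]:                 # best predecessor j -> i
                    K = Pp @ C.T @ np.linalg.inv(Syy)   # Kalman gain
                    logJ_new[i] = score
                    x_new[i] = xp + K @ (y[t] - C @ xp)
                    P_new[i] = Pp - K @ C @ Pp
                    back[t, i] = j
        logJ, x, P = logJ_new, x_new, P_new
    s = np.zeros(T, dtype=int)
    s[-1] = int(np.argmax(logJ))                        # backtrace best sequence
    for t in range(T - 1, 0, -1):
        s[t - 1] = back[t, s[t]]
    return s
```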
3.2 Approximate Variational Inference

A general structured variational inference technique for Bayesian networks is described in [8]. Namely, an \eta-parameterized distribution Q(\eta) is constructed which is "close" to the desired conditional distribution P but is computationally feasible. In our case we define Q by decoupling the switching and LDS portions of the SLDS, as shown in Figure 1(b). The original distribution is factorized into two independent distributions: a hidden Markov model Q_S with variational parameters {q_0, ..., q_{T-1}} and a time-varying LDS Q_X with variational parameters {\hat{x}_0, \hat{A}_0, ..., \hat{A}_{T-1}, \hat{Q}_0, ..., \hat{Q}_{T-1}}. The optimal values of the variational parameters \eta are obtained by minimizing the KL divergence with respect to \eta; this yields a set of coupled equations for the optimal parameters \hat{Q}_t^{-1}, \hat{A}_t, and log q_t(i) (Equation 3). To obtain the terms Pr(s_t) = Pr(s_t | q_0, ..., q_{T-1}) we use inference in the HMM with output "probabilities" q_t. Similarly, to obtain <x_t> = E[x_t | Y_T] we perform LDS inference in the decoupled time-varying LDS via RTS smoothing. Equation 3, together with the inference solutions in the decoupled models, forms a set of fixed-point equations. The solution of this fixed-point set is a tractable approximation to the intractable inference of the fully coupled SLDS. The variational inference algorithm for fully coupled SLDSs can now be summarized as:

error = infinity;
Initialize Pr(s_t);
while (KL divergence > maxError)
  Find \hat{Q}_t, \hat{A}_t, \hat{x}_0 from Pr(s_t) (Eq. 3);
  Estimate <x_t>, <x_t x_t'> and <x_t x_{t-1}'> from Y_T using time-varying LDS inference;
  Find q_t from <x_t>, <x_t x_t'> and <x_t x_{t-1}'> (Eq. 3);
  Estimate Pr(s_t) from q_t using HMM inference.
end

The variational parameters in Equation 3 have an intuitive interpretation. The LDS parameters \hat{A}_t and \hat{Q}_t define the best unimodal representation of the corresponding switching system and are, roughly, averages of the original parameters weighted by the best estimates of the switching states P(s_t). The HMM variational parameters log q_t, on the other hand, measure the agreement of each individual LDS with the data.

3.3 Approximate Generalized Pseudo-Bayesian Inference

The generalized pseudo-Bayesian (GPB) [1, 9] approximation scheme is based on the general idea of "collapsing" a mixture of M^t Gaussians onto a mixture of M^r Gaussians, where r < t (see [12] for a detailed review). While there are several variations on this idea, our focus is the GPB2 algorithm, which maintains a mixture of M^2 Gaussians over time and can be reformulated to include smoothing as well as filtering. GPB2 is closely related to the Viterbi approximation of Section 3.1. Instead of picking the most likely previous switching state j, we collapse the S Gaussians (one for each possible value of j) into a single Gaussian. Namely, the state at time t is obtained as

x_{t|t,i} = \sum_j x_{t|t,i,j} Pr(s_{t-1} = j | s_t = i, Y_t).

Smoothing in GPB2 is unfortunately a more involved process that includes several additional approximations. Details can be found in [12]. Effectively, an RTS smoother can be constructed under the assumption that the MC model decouples from the LDS when smoothing the MC states. Together with filtering, this results in the following GPB2 algorithm pseudocode:

Initialize LDS state estimates x_{0|-1,i} and \Sigma_{0|-1,i};
Initialize Pr(s_0 = i) = \pi_0(i);
for t = 1:T-1
  for i = 1:S
    for j = 1:S
      Predict and filter LDS state estimates x_{t|t,i,j}, \Sigma_{t|t,i,j};
      Find switching state distributions Pr(s_t = i | Y_t), Pr(s_{t-1} = j | s_t = i, Y_t);
    end
    Collapse x_{t|t,i,j}, \Sigma_{t|t,i,j} to x_{t|t,i}, \Sigma_{t|t,i};
  end
  Collapse x_{t|t,i} and \Sigma_{t|t,i} to x_{t|t} and \Sigma_{t|t};
end
Do GPB2 smoothing;

The inference process of GPB2 is more involved than that of the Viterbi or variational approximations. Unlike Viterbi, GPB2 provides soft estimates of the switching states at each time t. Like Viterbi, GPB2 is a local approximation scheme and as such does not guarantee the global optimality inherent in the variational approximation. Some recent work (see [3]) on this type of local approximation in general DBNs has emerged that provides conditions for it to be globally optimal.
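The "collapse" operation that GPB2 applies at every step is simply moment matching of a Gaussian mixture. A minimal sketch follows; the weights w would be the posteriors Pr(s_{t-1} = j | s_t = i, Y_t) computed during filtering.

```python
import numpy as np

def collapse(means, covs, w):
    """Moment-match the mixture sum_j w[j] N(means[j], covs[j]) with a
    single Gaussian, as in the GPB2 collapse step.

    means : (S, N), covs : (S, N, N), w : (S,) nonnegative weights.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    m = np.einsum('j,jn->n', w, means)                  # mixture mean
    d = means - m                                       # per-component offsets
    P = (np.einsum('j,jnm->nm', w, covs)                # within-component cov
         + np.einsum('j,jn,jm->nm', w, d, d))           # between-component cov
    return m, P
```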
4 Previous Work

SLDS models and their equivalents have been studied in statistics, time-series modeling, and target tracking since the early 1970s. See [13, 12] for a review. Ghahramani [6] introduced a DBN framework for learning and approximate inference in one class of SLDS models. His underlying model differs from ours in assuming the presence of S independent, white-noise-driven LDSs whose measurements are selected by the Markov switching process. A switching framework for particle filters applied to dynamics learning is described in [2]. Manifold learning [7] is another approach to constraining the set of allowable trajectories within a high-dimensional state space. An HMM-based approach is described in [4].

5 Experimental Results

The data set for our experiments is a corpus of 18 sequences of six individuals performing walking and jogging. Each sequence was approximately 50 frames in duration. All of the motion was fronto-parallel (i.e., it occurred in a plane parallel to the camera plane, as in Figure 2(c)). This simplifies data acquisition and kinematic modeling, while self-occlusions and cluttered backgrounds make the tracking problem non-trivial. Our kinematic model had eight DOFs, corresponding to rotations at the knees, hip, and neck (and ignoring the arms). The link lengths were adjusted manually for each person.

The first task we addressed was learning HMM and SLDS models for walking and running. Each of the two motion types was modeled as a one-, two-, or four-state HMM and SLDS model and then combined into a single complex jog-walk model. In addition, each SLDS motion model was assumed to be of either first or second order.² Hence, a total of three models (HMM, first-order SLDS, and second-order SLDS) was considered for each cardinality (one, two, or four) of the switching state. HMM models were initially assumed to be fully connected. Their parameters were then learned using standard EM learning, initialized by k-means clustering. The learned HMM models were used to initialize the switching state segmentations for the SLDS models. The SLDS model parameters (A, Q, R, x_0, \Pi, \pi_0) were then reestimated using EM. The inference in SLDS learning was accomplished using the three approximate methods outlined in Section 3: Viterbi, GPB2, and variational inference. Learning using either of the three approximate inference methods did not produce significantly different models. This can be explained by the fact that the initial segmentations from the HMM and the initial SLDS parameters were all very close to a locally optimal solution, and all three inference schemes indeed converged to the same or similar posteriors.

²Second-order SLDS models imply x_t = A_1(s_t) x_{t-1} + A_2(s_t) x_{t-2}.

Figure 2: (a)-(b) show examples of classification results on mixed walk-jog sequences using models of different order ((a) one switching state, second-order SLDS; (b) four switching states, second-order SLDS). (c)-(e) compare the constant-velocity and SLDS trackers ((c) KF, frame 7; (d) SLDS, frame 7; (e) SLDS, frame 20). (f) shows synthesized walking motion.

We next addressed the classification of unknown motion sequences in order to test the relative performance of inference in HMM and SLDS models. Test sequences of walking and jogging motion were selected randomly and spliced together using B-spline smoothing. Segmentation of the resulting sequences into "walk" and "jog" regimes was accomplished using Viterbi inference in the HMM model and approximate Viterbi, GPB2, and variational inference under the SLDS model. Estimates of the "best" switching states Pr(s_t) indicated which of the two models was considered to be the source of the corresponding motion segment. Figure 2(a)-(b) shows results for two representative combinations of switching-state and linear-model orders. In Figure 2(a), the top graph depicts the true sequence of jog-walk motions, followed by the Viterbi, GPB2, variational, and HMM classifications. Each motion type (jog and walk) is modeled using one switching state and a second-order LDS. Figure 2(b) shows the result when the number of switching states is increased to four. The accuracy of classification increases with the order of the switching states and the LDS model order. More interesting, however, is that the HMM model consistently yields lower segmentation accuracy than all of the SLDS inference schemes. This is not surprising, since the HMM model does not impose continuity across time in the plant state space (x), which does indeed exist in natural figure motion (joint angles evolve continuously in time). Quantitatively, the three SLDS inference schemes produce very similar results.
Qualitatively, GPB2 produces "soft" state estimates, while the Viterbi scheme does not; variational is somewhere in between. In terms of computational complexity, Viterbi seems to be the clear winner.

Our next experiment addressed the use of learned dynamic models in visual tracking. The primary difficulty in visual tracking is that joint angle measurements are not readily available from a sequence of image intensities. We use image templates for each link in the figure model, initialized from the first video frame, to track the figure through template registration [11]. A conventional extended Kalman filter using a constant-velocity dynamic model performs poorly on simple walking motion, due to pixel noise and self-occlusions, and fails by frame 7 as shown in Figure 2(c). We employ approximate Viterbi inference in SLDS as a multi-hypothesis predictor that initializes multiple local template searches in the image space. From the S² multiple hypotheses x_{t|t-1,i,j} at each time step, we pick the best S hypotheses with the smallest switching cost, as determined by Equation 2. Figure 2(d)-(e) shows the superior performance of the SLDS tracker on the same image sequence. The tracker is well aligned at frame 7 and only starts to drift off by frame 20. This is not terribly surprising, since the SLDS tracker effectively runs S (extended) Kalman filters, but it is an encouraging result.

The final experiment simulated walking motion by sampling from a learned SLDS walking model. A stick-figure animation obtained by superimposing 50 frames of walking is shown in Figure 2(f). The discrete states used to generate the motion are plotted at the bottom of the figure. The synthesized walk becomes less realistic as the simulation time progresses, due to the lack of global constraints on the trajectories.

6 Conclusions

Dynamic models for human motion can be learned within a switching linear dynamic system (SLDS) framework. We have derived three approximate inference algorithms for SLDS: Viterbi, GPB2, and variational. Our variational algorithm is novel in the SLDS domain. We show that SLDS classification performance is superior to that of HMMs. We demonstrate that a tracker based on SLDS is more effective than a conventional extended Kalman filter. We show synthesis of natural walking motion by sampling. In future work we will build more complex motion models using a much larger motion capture dataset, which we are currently building. We will also extend the SLDS tracker to more complex measurement models and complex discrete state processes (see [10] for a recent approach).

References
[1] Bar-Shalom and Li, Estimation and tracking: principles, techniques, and software, 1998.
[2] A. Blake, B. North, and M. Isard, "Learning multi-class dynamics," in NIPS '98, 1998.
[3] X. Boyen, N. Friedman, and D. Koller, "Discovering the hidden structure of complex dynamic systems," in Proc. Uncertainty in Artificial Intelligence, 1999.
[4] M. Brand, "An entropic estimator for structure discovery," in NIPS '98, 1998.
[5] C. Bregler, "Learning and recognizing human dynamics in video sequences," in Proc. Int'l Conf. Computer Vision and Pattern Recognition (CVPR), 1997.
[6] Z. Ghahramani and G. E. Hinton, "Switching state-space models," 1998.
[7] N. Howe, M. Leventon, and W. Freeman, "Bayesian reconstruction of 3D human motion from single-camera video," in NIPS '99, 1999.
[8] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models," in Learning in Graphical Models, 1998.
[9] C.-J. Kim, "Dynamic linear models with Markov-switching," J. Econometrics, vol. 60, 1994.
[10] U. Lerner, R. Parr, D. Koller, and G. Biswas, "Bayesian fault detection and diagnosis in dynamic systems," in Proc. AAAI, (Austin, TX), 2000.
[11] D. Morris and J. Rehg, "Singularity analysis for articulated object tracking," in CVPR, 1998.
[12] K. P. Murphy, "Learning switching Kalman-filter models," TR 98-10, Compaq CRL, 1998.
[13] V. Pavlovic, J. M. Rehg, T.-J. Cham, and K. P. Murphy, "A dynamic Bayesian network approach to figure tracking using learned dynamic models," in Proc. Intl. Conf. Computer Vision, 1999.
Sparse Representation for Gaussian Process Models

Lehel Csató and Manfred Opper
Neural Computing Research Group
School of Engineering and Applied Sciences
B4 7ET Birmingham, United Kingdom
{csatol, opperm}@aston.ac.uk

Abstract
We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world data sets indicate the efficiency of the approach.

1 Introduction

Gaussian processes (GP) [1; 15] provide promising non-parametric tools for modelling real-world statistical problems. Like other kernel-based methods, e.g. Support Vector Machines (SVMs) [13], they combine a high flexibility of the model, obtained by working in high- (often infinite-) dimensional feature spaces, with the simplicity that all operations are "kernelized", i.e. they are performed in the (lower-dimensional) input space using positive definite kernels. An important advantage of GPs over other non-Bayesian models is the explicit probabilistic formulation of the model. This not only provides the modeller with (Bayesian) confidence intervals (for regression) or posterior class probabilities (for classification) but also immediately opens the possibility to treat other nonstandard data models (e.g. quantum inverse statistics [4]). Unfortunately, the drawback of GP models (which was originally apparent in SVMs as well, but has now been overcome [6]) lies in the huge increase of the computational cost with the number of training data. This seems to preclude applications of GPs to large datasets.

This paper presents an approach to overcome this problem. It is based on a combination of an online learning approach requiring only a single sweep through the data and a method to reduce the number of parameters representing the model. Making use of the proposed parametrisation, the method extracts a subset of the examples, and the prediction relies only on these basis vectors (BVs). The memory requirement of the algorithm thus scales only with the size of this set. Experiments with real-world datasets confirm the good performance of the proposed method.¹

¹A different approach for dealing with large datasets was suggested by V. Tresp [12]. His method is based on splitting the data set into smaller subsets and training individual GP predictors on each of them. The final prediction is achieved by a specific weighting of the individual predictors.
Our approach is based on the idea of approximating the true posterior process p{ a} by a Gaussian process q{a} which is fully specified by a covariance kernel Kt(x,x') and posterior mean (a(x))t, where t is the number of training data processed by the algorithm so far. Such an approximation could be formulated within the variational approach, where q is chosen such that the relative entropy D(q,p) == Eq In ~ is minimal [9]. However, in this formulation, the expectation is over the approximate process q rather than over p. It seems intuitively better to minimise the other KL divergence given by D(p, q) == Ep In ~, because the expectation is over the true distribution. Unfortunately, such a computation is generally not possible. The following online approach can be understood as an approximation to this task. 3 Online learning for Gaussian Processes In this section we briefly review the main idea of the Bayesian online approach (see e.g. [5]) to GP models. We process the training data sequentially one after the other. Assume we have a Gaussian approximation to the posterior process at time t. We use the next example t + 1 to update the posterior using Bayes rule via p(a) = P(Yt+1la(Xt+l))Pt(q) (P(Yt+1la(xt+1)))t Since the resulting posterior p(q) is non-Gaussian, we project it to the closest Gaussian process q which minimises the KL divergence D(p, q). Note, that now the new approximation q is on "correct" side of the KL divergence. The minimisation can be performed exactly, leading to a match of the means and covariances of p and q. Since p is much less complex than the full posterior, it is possible to write down the changes in the first two moments analytically [2]: (a(x))t+1 = (a(x))t + b1 Kt(x,xt+d K t+1(x,x') = Kt(x,x') + b2 K t (x,xt+1)Kt (xt+1,x') (1) where the scalar coefficients b1 and b2 are: (2) with averaging performed with respect to the marginal Gaussian distribution of the process variable a at input Xt+1' Note, that this yields a one dimensional integral! Derivatives are is based on splitting the data-set into smaller subsets and training individual GP predictors on each of them. The final prediction is achieved by a specific weighting of the individual predictors. <PH! ~' , /:'~es ,-------(a) - (b) Figure 1: Projection of the new input <Pt+! to the subspace spanned by previous inputs. <l>t+l is the projection to the linear span of {<Pih=l ,t. and <Pres the residual vector. Subfigure (a) shows the projections to the subspace, and (b) gives a geometric picture of the "measurable part" of the error It+! from eq. (8). taken with respect to (a(x))t . Note also that this procedure does not equal the extended Kalman filter which involves linearisations of likelihoods, whereas in our approach it is possible to use non-smooth likelihoods (e.g. noise free classifications) without problems. It turns out, that the recursion (1) is solved by the parametrisation (a(x))t Kt(x,x') = = L~=IKo(x,xi)at(i) Ko(x,x') + LL=IKo(x,Xi)Ct(ij)Ko(xj,x') (3) such that in each on-line step, we have to update only the vector of a's and the matrix of C's. For notational convenience we use vector at = [at(1), ... , at (N)jT and matrix C t = {Ct (ij) h,j=I,N. Zero-mean GP with kernel Ko is used as the starting point for the algorithm: ao = a and Co = a will be the starting parameters. The update of the parameters defined in (3) is found to be at+! = at + bl [Ctkt+l + et+!l (4) C t+! = C t + b2 [C tkt+l + et+!l [C tkt+! + et+lf with kt+! = [KO(XI,Xt+!), ... , Ko(xt ,xt+!)jT, et+! 
the t + 1-th unit vector (all components except t + 1-th are zero), and the scalar coefficients bl and b2 computed from (2). The serious drawback of this approach, which it shares with many other kernel methods, is the quadratic increase of the matrix size with the training data. 4 Sparse representation We use the following idea for reducing the increase of the size of C and a (for a similar approach see [8]). We consider the feature expansion of the kernel Ko(x,x') = <p(X)T <p(x') and decompose the new feature vector <p(Xt+!) as a linear combination of the previous features and a residual <Pres: <p(Xt+!) = <Pt+! + <Pres = "t ~ i=l ei<p(Xi) + <Pres A A (5) where <l>t+! is the projection of <Pt+! to the previous inputs and et+! = [el' . . . ' etjT are the coordinates of <l>t+! with respect to the basis {<Pih=l,t. We can then re-express the GP means: (6) with Qt+l(i) = at+l(i) + et+1(i)at+1(t + 1) and 'YHI the residual (or novelty factor) associated with the new feature vector. The vector et+1 and the residual term 'Yt+1 are all expressed in terms of kernels: (7) et+1 - K(-I)k B HI 'Yt+1 -- k*t+1 - kT t+1 K(-I)k B t+1 with KB(ij) = {KO(Xi,Xj)h,j=l,t and kt+1 = K o (Xt+1,Xt+1). The relation between the quantities et+1 and 'Yt+1 is illustrated in Figure 1. A _ Neglecting the last term in the decomposition of the new input (5) and performing the update with the resulting vector is equivalent to the update rule (4) with et+1 replaced by et+1. Note that the dimension of parameter space is not increased by this approximative update. The memory required by the algorithm scales quadratically only with the size of the set of "basis vectors", i.e. those examples for which the full update (4) is made. This is similar to Support Vectors [13], without the need to solve the (high dimensional) convex optimisation problem. It is also related to the kernel PCA and the reduced set method [8] where the full solution is computed first and then a reduced set is used for prediction. Replacing the input vector cJl t +1 by its projection on the linear span of the BVs when updating the GP parameters induces changes in the GP2. However, the replacement of the true feature vector by its approximation leaves the mean function unchanged at each BV i = 1, t. That is, the functions (a(x))t+1 from (6) and (a(x))t+1 = L~=I Qt+1(i)Ko (Xi,X) have the same value at all Xl. The change at Xt+1 is Ct+1 = l(a(xt+1))t+1 - (a(xt+t})t+11 = Ibl l'Yt+1 (8) with bi the factor from (2). As a consequence, a good approximation to the full GP solution is obtained if the input for which we have only a small change in the mean function of the posterior process is not included in the set of BV s. The change is given by Ct+1 and the decision of including Xt+1 or not is based on the "score" associated to it. The absence of matrix inversions is an important issue when dealing with large datasets. The matrix inversion from the projection equation (7) can be avoided by iterative inversion 3 of the Gram matrix Q = Ki/: Q t+1 = Qt + 'Yt;1 (et+1 - et+t) (et+1 - et+If (9) An important comment is that if the new input is in the linear span of the BVs, then it will not be included in the basis set, avoiding thus: 1.) the small singular values of the matrix K Band 2.) the redundancy in representing the problem. 4.1 Deleting a basis vector The above section gave a method to leave out a vector that is not significant for the prediction purposes. However, it did not provide us with a method to eliminate one of the already existing BV-s. 
Let us assume that an input Xt+1 has just been added to the set of BV s. Since we know that an addition had taken place, the update rule (4) with the t + 1-th unit vector et+1 was last performed. Since the model parameters at the previous step had an empty t + 1-th row and column, the parameters before thefull update can be identified. The removal of the last basis vector can be done with the following steps: 1) computing the parameters before the update of the GP and 2) performing a reduced update of the 2Equation (7) also minimises the KL-distance between the full posterior (the one that increases parameter space) and a parametric distribution using only the old BVs. 3 A guide is available from Sam Roweis: http://www.gatsby.ucl.ac.uk/rvroweis/notes.html Qt+l C t+ l d t) dt ) c* Q ....................... , ... C* T Q *T c* q* Figure 2: Decomposition of model parameters for the update equation (10). model without the inclusion of the basis vector (eq. (4) using et+1)' The updates for model parameters a, C, and Q are "inverted" by inverting the coupled equations (4) and (9): Q* & = a(t) - a*q* C= C(t) + c* Q*Q*T _ q*2 *Q*T Q=Q(t) _Q__ ~ q* [Q*C*T + C*Q*T] (10) q* where the elements needed to update the model are extracted from the extended parameters as illustrated in Figure 2. The consequence of the identification permits us to evaluate the score for the last BV. But since the order of the BVs is approximately arbitrary, we can assign a score to each BV lat+l(i)1 Ci (11) = Qt+1 (i, i)' Thus we have a method to estimate the score of each basis vector at any time and to eliminate the one with the least contribution to the GP output (the mean), providing a sparse GP with a full control over memory size. 5 Simulation results To apply the online learning rules (4), the data likelihood for the specific problem has to be averaged with respect to a Gaussian. Using eq. (2), the coefficients b1 and b2 are obtained. The marginal of the GP at Xt+1 is a normal distribution with mean (a(Xt+1))t = a[k t+1 and variance 0';'+1 = kt+1 +k;+1 C tkt+1 where the GP parameters at time t are considered. As a first example, we consider regression with Gaussian output noise 0'5 for which 1 ( 2 2 ) (Yt+1 - (a(Xt+1)t)2 (12) In(P(Yt+1l a(Xt+d))t=-2"ln 271'(0'0+0'11:'+1) 2( 2+ 2 )2 0'0 0'11:'+1 For classification we use the probit model. The outputs are binary Y E {-I, I} and the probability is given by the error function (where u = ya/O'o): 1 P(yla) = Erf ( -ya) = . f(C 0'0 V 271' l U dte- t2 / 2 00 The averaged log-likelihood for the new data point at time tis: (P (Yt+1 Ia (Xt+1 ))) = Erf ( Yt+1 a[k t+1 ) j 0'5 + O'i'+l (13) 1.4 ~ 1.2 ," 0.8 0.6 0.4 0.2 ~ -0.2 ~ - 0.4 -3 -2 -1 100 150 (a) 200 250 300 350 400 # of BasIs Vectors 450 500 550 (b) Figure 3: Simulation results for regression (a) and classification (b). For details see text. For the regression case we have chosen the toy data model y = sin(x)/x + ( where ( is a zero-mean Gaussian random variable with variance (]"~ and an RBF kernel. Figure 3.a shows the result of applying the algorithm for 600 input data and restricting the number of BVs to 20. The dash-dotted line is the true function, the continuous line is the approximation with the Bayesian standard deviation plotted by dotted lines (a gradient-like approximation for the output noise based on maximising the likelihood (12) lead us to the variance with which the data has been generated). For classification we used the data from the US postal database4 of handwritten zip codes together with an RBF kernel. 
The database has 7291 training and 2007 test data of 16 x 16 grey-scale images. To apply the classification method to this database, 10 binary classification problems were solved and the final output was the class with the largest probability. The same BVs have been considered for each classifier and if a deletion was required, the BV having the minimum cumulative score was deleted. The cumulative score was chosen to be the maximum of the scores for each classifier. Figure 3.b shows the test error as a function of the size of the basis set. We find that the test error is rather stable over a considerable range of basis set sizes. Also a comparison with a second sweep through the data shows that the algorithm seems to have already extracted the relevant information out of the data within a single sweep. Using a polynomial kernel for the USPS dataset and 500 BVs we achieved a test error of 4.83%, which compares favourably with other sparse approaches [10; 8] but uses smaller basis sets than the SVM (2540 reported in [8]). We also applied our algorithm to the NIST datasetS which contains 60000 data. Using a fourth order polynomial kernel with only 500 BVs we achieved a test error of 3.13% and we expect that improvements are possible by using a kernel with tunable hyperparameters. The possibility of computing the posterior class probabilities allows us to reject data. When the test data for which the maximum probability was below 0.5 was rejected, the test error was 1.53% with 1.60% of rejection rate. 4Prom: http://www.kernel-machines.org/data.html 5 Available from: http://www.research.att.comryann/ocr/rnnist/ 6 Conclusion and further research This paper presents a sparse approximation for GPs similar to the one found in SVMs [13] or relevance vector machines [10]. In contrast to these other approaches our algorithm is fully online and does not construct the sparse representation from the full data set (for sequential optimisation for SVM see [6]). An important open question (besides the issue of model selection) is how to choose the minimal size of the set of basis vectors such that the predictive performance is not much deteriorated by the approximation involved. In fact, our numerical classification experiments suggest that the prediction performance is considerably stable when the basis set is above a certain size. It would be interesting if one could relate this minimum size to the effective dimensionality of the problem being defined as the number of feature dimensions which are well estimated by the data. One may argue as follows: Replacing the true kernel by a modified (finite dimensional) one which contains only the well estimated features will not change the predictive power. On the other hand, for kernels with a feature space of finite dimensionality M, it is easy to see that we need never more than M basis vectors, because of linear dependence. Whether such reasoning will lead to a practical procedure for choosing the appropriate basis set size, is a question for further research. 7 Acknowledgement This work was supported by EPSRC grant no. GRlM81608. References [1] J. M. Bernardo and A. F. Smith. Bayesian Theory. John Wiley & Sons, 1994. [2] L. Csat6, E. Fokoue, M. Opper, B. Schottky, and O. Winther. Efficient approaches to Gaussian process classification. In NIPS, volume 12, pages 251- 257. The MIT Press, 2000. [3] M. Gibbs and D. J. MacKay. Efficient implementation of Gaussian processes. Technical report, http://wol.ra.phy.cam.ac.uklmackay/abstracts/gpros.html. 1999. [4] J. C. 
[4] J. C. Lemm, J. Uhlig, and A. Weiguny. A Bayesian approach to inverse quantum statistics. Phys. Rev. Lett., 84:2006, 2000.
[5] M. Opper. A Bayesian approach to online learning. In Saad [7], pages 363-378.
[6] J. C. Platt. Fast training of Support Vector Machines using sequential minimal optimisation. In Advances in Kernel Methods (Support Vector Learning).
[7] D. Saad, editor. On-Line Learning in Neural Networks. Cambridge Univ. Press, 1998.
[8] B. Schölkopf, S. Mika, C. J. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. J. Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000-1017, September 1999.
[9] M. Seeger. Bayesian model selection for Support Vector Machines, Gaussian processes and other kernel classifiers. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, NIPS, volume 12. The MIT Press, 2000.
[10] M. Tipping. The Relevance Vector Machine. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, NIPS, volume 12. The MIT Press, 2000.
[11] G. F. Trecate, C. K. I. Williams, and M. Opper. Finite-dimensional approximation of Gaussian processes. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, NIPS, volume 11. The MIT Press, 1999.
[12] V. Tresp. A Bayesian committee machine. Neural Computation, accepted.
[13] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, NY, 1995.
[14] C. K. I. Williams. Prediction with Gaussian processes. In M. I. Jordan, editor, Learning in Graphical Models. The MIT Press, 1999.
[15] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, NIPS, volume 8. The MIT Press, 1996.
Universality and individuality in a neural code

Elad Schneidman,¹² Naama Brenner,³ Naftali Tishby,¹³ Rob R. de Ruyter van Steveninck,³ William Bialek³
¹School of Computer Science and Engineering, Center for Neural Computation and ²Department of Neurobiology, Hebrew University, Jerusalem 91904, Israel
³NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540, USA
{elads, tishby}@cs.huji.ac.il, {bialek, ruyter, naama}@research.nj.nec.com

Abstract
The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality.

1 Introduction

When two people look at the same scene, do they see the same things? This basic question in the theory of knowledge seems to be beyond the scope of experimental investigation. An accessible version of this question is whether different observers of the same sense data have the same neural representation of these data: how much of the neural code is universal, and how much is individual? Differences in the neural codes of different individuals may arise from various sources: First, different individuals may use different 'vocabularies' of coding symbols. Second, they may use the same symbols to encode different stimulus features. Third, they may have different latencies, so they 'say' the same things at slightly different times. Finally, perhaps the most interesting possibility is that different individuals might encode different features of the stimulus, so that they 'talk about different things'.

If we are to compare neural codes we must give a quantitative definition of similarity or divergence among neural responses. We shall use ideas from information theory [1, 2] to quantify the notions of distinguishability, functional equivalence, and content in the neural code. This approach does not require a metric either on the space of stimuli or on the space of neural responses (but see [3]); all notions of similarity emerge from the statistical structure of the neural responses. We apply these methods to analyze experiments on an identified motion-sensitive neuron in the fly's visual system, the cell H1 [4]. Many invertebrate nervous systems have cells that can be named and numbered [5]; in many cases, including the motion-sensitive cells in the fly's lobula plate, a small number of neurons is involved in representing a similarly identifiable portion of the sensory world. It might seem that in these cases the question of whether different individuals share the same neural representation of the visual world would have a trivial answer.
Far from trivial, we shall see that the neural code, even for identified neurons in flies, has components which are common among flies and significant components which are individual to each fly.

2 Distinguishing flies according to their spike patterns

Nine different flies are shown precisely the same movie, which is repeated many times for each fly (Figure 1a). As we show the movie we record the action potentials from the H1 neuron.¹ The details of the stimulus movie should not have a qualitative impact on the results, provided that the movie is sufficiently long and rich to drive the system through a reasonable and natural range of responses. Figure 1b shows a portion of the responses of the different flies to the visual stimulus: the qualitative features of the neural response on long time scales (~100 ms) are common to almost all the flies, and some aspects of the response are reproducible on a (few-)millisecond time scale across multiple presentations of the movie to each fly. Nonetheless, the responses are not identical in the different flies, nor are they perfectly reproduced from trial to trial in the same fly. To analyze similarities and differences among the neural codes, we begin by discretizing the neural response into time bins of size \Delta t = 2 ms. At this resolution there are almost never two spikes in a single bin, so we can think of the neural response as a binary string, as in Fig. 1c-d. We examine the response in blocks or windows of time having length T, so that an individual neural response becomes a binary 'word' W with T/\Delta t 'letters'. Clearly, any fixed choice of T and \Delta t is arbitrary, and so we explore a range of these parameters. Figure 1f shows that different flies 'speak' with similar but distinct vocabularies. We quantify the divergence among vocabularies by asking how much information the observation of a single word W provides about the identity of its source, that is, about the identity of the fly which generates this word:

I(W -> identity; T) = \sum_{i=1}^{N} P_i \sum_{W} P^i(W) log_2 [ P^i(W) / P^{ens}(W) ]  bits,   (1)

where P_i = 1/N is the a priori probability that we are recording from fly i, P^i(W) is the probability that fly i will generate (at some time) the word W in response to the stimulus movie, and P^{ens}(W) is the probability that any fly in the whole ensemble of flies would generate this word,

P^{ens}(W) = \sum_{i=1}^{N} P_i P^i(W).   (2)

¹The stimulus presented to the flies is a rigidly moving pattern of vertical bars, randomly dark or bright, with average intensity I ≈ 100 mW/(m²·sr). The pattern position was defined by a pseudorandom sequence, simulating a diffusive motion or random walk. Recordings were made from the H1 neuron of immobilized flies, using standard methods. We draw attention to three points relevant for the present analysis: (1) The flies are freshly caught female Calliphora, so that our 'ensemble of flies' approaches a natural ensemble. (2) In each fly we identify the H1 cell as the unique spiking neuron in the lobula plate that has a combination of wide-field sensitivity, inward directional selectivity for horizontal motion, and contralateral projection. (3) Recordings are rejected only if raw electrode signals are excessively noisy or unstable.
Figure 1: Different flies' spike trains and word statistics. (a) All flies view the same random vertical bar pattern moving across their visual field with a time dependent velocity, part of which is shown. In the experiment, a 40 sec waveform is presented repeatedly, 90 times. (b) A set of 45 response traces to the part of the stimulus shown in (a) from each of the 9 flies. The traces are taken from the segment of the experiment where the transient responses have decayed. (c) Example of construction of the local word distributions. Zooming in on a segment of the repeated responses of fly 1 to the visual stimuli, the fly's spike trains are divided into contiguous 2 ms bins, and the spikes in each of the bins are counted. For example, we get the 6-letter words that the fly used at time 3306 ms into the input trace. (d) Similar to (c) for fly 6. (e) The distributions of words that flies 1 and 6 used at time t = 3306 ms from the beginning of the stimulus. The time dependent distributions, $p^1(W|t = 3306\,\mathrm{ms})$ and $p^6(W|t = 3306\,\mathrm{ms})$, are presented as a function of the binary value of the actual 'word', e.g., binary word value '17' stands for the word '010001'. (f) Collecting the words that each of the flies used through all of the visual stimulus presentations, we get the total word distributions for flies 1 and 6, $p^1(W)$ and $p^6(W)$.

We find that information about identity is accumulating at a more or less constant rate well before the undersampling limits of the experiment are reached (Fig. 2a). Thus $I(W \to \mathrm{identity}; T) \approx R(W \to \mathrm{identity}) \cdot T$, with $R(W \to \mathrm{identity}) \approx 5$ bits/s and a very weak dependence on the time resolution $\Delta t$. Since the mean spike rate can be measured by counting the number of 1s in each word $W$, this information includes the differences in firing rate among the different flies.

Even if flies use very similar vocabularies, they may differ substantially in the way that they associate words with particular stimulus features. Since we present the stimulus repeatedly to each fly, we can specify the stimulus precisely by noting the time relative to the beginning of the stimulus. We can therefore consider the word $W$ that the $i$th fly will generate at time $t$. This word is drawn from the distribution $p^i(W|t)$ which we can sample, as in Fig. 1c-e, by looking across multiple presentations of the same stimulus movie. In parallel with the discussion above, we can measure the information that the word $W$ observed at known $t$ gives us about the identity of the fly,

$$I(W \to \mathrm{identity}\,|\,t; T) = \sum_{i=1}^{N} P_i \sum_{W} p^i(W|t) \log_2\left[\frac{p^i(W|t)}{p^{\mathrm{ens}}(W|t)}\right], \qquad (3)$$

where the distribution of words used at time $t$ by the whole ensemble of flies is

$$p^{\mathrm{ens}}(W|t) = \sum_{i=1}^{N} P_i\, p^i(W|t). \qquad (4)$$
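A direct plug-in estimator of Eqs. (3)-(4) from empirical word distributions could look as follows; the uniform prior $P_i = 1/N$ follows the text, while the array names and the absence of finite-sample bias corrections (cf. [8]) are simplifications:

```python
import numpy as np

def identity_information(p_words):
    """Jensen-Shannon divergence I(W -> identity | t) in bits, Eq. (3).

    p_words: array of shape (N_flies, N_words); row i is p^i(W | t).
    Assumes a uniform prior P_i = 1/N over flies.
    """
    p = np.asarray(p_words, dtype=float)
    p_ens = p.mean(axis=0)                       # Eq. (4) with P_i = 1/N
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / p_ens), 0.0)
    return terms.sum() / p.shape[0]              # sum_i P_i sum_W (...)

def avg_identity_information(p_wt):
    """Average over stimulus times t; p_wt has shape (N_times, N_flies, N_words)."""
    return float(np.mean([identity_information(slice_) for slice_ in p_wt]))
```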
The natural quantity is an average over all times $t$,

$$I(\{W, t\} \to \mathrm{identity}; T) = \left\langle I(W \to \mathrm{identity}\,|\,t; T) \right\rangle_t \ \mathrm{bits}, \qquad (5)$$

where $\langle \cdots \rangle_t$ denotes an average over $t$. Figure 2b shows a plot of $I(\{W, t\} \to \mathrm{identity}; T)/T$ as a function of the observation time window of size $T$. Observing both the spike train and the stimulus together provides 32 ± 1 bits/s about the identity of the fly. This is more than six times as much information as we can gain by observing the spike train alone, and corresponds to gaining one bit in ~30 ms; correspondingly, a typical pair of flies in our ensemble can be distinguished reliably in ~30 ms. This is the time scale on which flies actually use their estimates of visual motion to guide their flight during chasing behavior [6], so that the neural codes of different individuals are distinguishable on the time scales relevant to behavior.

3 Different flies encode different amounts of information about the same stimulus

Having seen that we can distinguish reliably among individual flies using relatively short samples of the neural response, we turn to ask whether these substantial differences among codes have an impact on the ability of these cells to convey information about the visual stimulus. As discussed in Refs. [7, 8], the information which the neural response of the $i$th fly provides about the stimulus, $I^i(W \to s(t); T)$, is determined by the same probability distributions defined above:3

$$I^i(W \to s(t); T) = \left\langle \sum_{W} p^i(W|t) \log_2\left[\frac{p^i(W|t)}{p^i(W)}\right] \right\rangle_t \qquad (6)$$

3Again we note that our estimate of the information rate itself is independent of any metric in the space of stimuli, nor does it depend on assumptions about which stimulus features are most important in the code.
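Eq. (6) can be estimated in the same plug-in style; again, this sketch ignores the sampling-bias corrections discussed in [7, 8] and assumes the time-dependent word distributions are given:

```python
import numpy as np

def stimulus_information(p_wt_fly):
    """Direct estimate of I^i(W -> s(t); T) in bits, Eq. (6), for one fly.

    p_wt_fly: array of shape (N_times, N_words); row t is p^i(W | t).
    The total distribution p^i(W) is the average of p^i(W | t) over t.
    """
    p_wt = np.asarray(p_wt_fly, dtype=float)
    p_w = p_wt.mean(axis=0)                     # p^i(W), time-averaged vocabulary
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p_wt > 0, p_wt * np.log2(p_wt / p_w), 0.0)
    return terms.sum(axis=1).mean()             # inner sum over words, average over t
```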
[Figure 2: two panels; x-axes show word length (ms).]
Figure 2: Distinguishing one fly from others based on spike trains. (a) The average rate of information gained about the identity of a fly from its word distribution, as a function of the word size used (middle curve). The information rate is saturated even before we reach the maximal word length used. Also shown is the average rate of information that the word distribution of fly 1 (and 6) gives about its identity, compared with the word distribution mixture of all of the flies. The connecting line is used for clarification only. (b) Similar to (a), we compute the average amount of information that the distribution of words the fly used at a specific point in time gives about its identity. Averaging over all times, we show the amount of information gained about the identity of fly 1 (and 6) based on its time dependent word distributions, and the average over the 9 flies (middle curve). Error bars were calculated as in [8]. A "baseline calculation", where we subdivided the spike trains of one fly into artificial new individuals and compared their spike trains, gave significantly smaller values (not shown).

Figure 3a shows that the flies in our ensemble span a range of information rates from ~50 to ~150 bits/s. This threefold range of information rates is correlated with the range of spike rates, so that each of the cells transmits nearly a constant amount of information per spike, 2.39 ± 0.24 bits/spike. This universal efficiency (10% variance over the population, despite threefold variations in total spike rate) reflects that cells with higher firing rates are not generating extra spikes at random; rather, each extra spike is equally informative about the stimulus.

Although information rates are correlated with spike rates, this does not mean that information is carried by a "rate code" alone. To address the rate/timing distinction we compare the total information rate in Fig. 3a, which includes the detailed structure of the spike train, with the information carried in the temporal modulations of the spike rate. As explained in Ref. [10], the information carried by the arrival time of a single spike can be written as an integral over the time variations of the spike rate, and multiplying by the number of spikes gives us the expected information rate if spikes contribute independently; information rates larger than this represent synergy among spikes, or extra information in the temporal patterns of spikes. For all the flies in our ensemble, the total rate at which the spike train carries information is substantially larger than the 'single spike' information: 2.39 vs. 1.64 bits/spike, on average. This extra information is carried in the temporal patterns of spikes (Fig. 3b).

[Figure 3: two panels; x-axes show firing rate (spikes/sec).]
Figure 3: The information about the stimulus that a fly's spike train carries is correlated with firing rate, and yet a significant part is in the temporal structure. (a) The rate at which the H1 spike train provides information about the visual stimulus is shown as a function of the average spike rate, with each fly providing a single data point. The linear fit of the data points for the 9 flies corresponds to a universal rate of 2.39 ± 0.24 bits/spike, as noted in the text. (b) The extra amount of information that the temporal structure of the spike train of each of the flies carries about the stimulus, as a function of the average firing rate of the fly (see [10]). The average amount of additional information that is carried by the temporal structure of the spike trains, over the population, is 45 ± 17%. Error bars were calculated as in [8].

4 A universal codebook?

Even though flies differ in the structures of their neural responses, distinguishable responses could be functionally equivalent. Thus it might be that all flies could be endowed (genetically?) with a universal or consensus codebook that allows each individual to make sense of her own spike trains, despite the differences from her conspecifics. Thus we want to ask how much information we lose if the identity of the flies is hidden from us, or equivalently how much each fly can gain by knowing its own individual code. If we observe the response of a neuron but don't know the identity of the individual generating this response, then we are observing responses drawn from the ensemble distributions defined above, $p^{\mathrm{ens}}(W|t)$ and $p^{\mathrm{ens}}(W)$. The information that words provide about the visual stimulus then is

$$I^{\mathrm{mix}}(W \to s(t); T) = \left\langle \sum_{W} p^{\mathrm{ens}}(W|t) \log_2\left[\frac{p^{\mathrm{ens}}(W|t)}{p^{\mathrm{ens}}(W)}\right] \right\rangle_t \ \mathrm{bits}. \qquad (7)$$

On the other hand, if we know the identity of the fly to be $i$, we gain the information that its spike train conveys about the stimulus, $I^i(W \to s(t); T)$, Eq. (6).
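Putting the pieces together, a sketch of the codebook comparison of Eqs. (7)-(8): it computes the mixture information and its shortfall relative to the per-fly average, reusing stimulus_information() from the sketch above and again assuming a uniform prior over flies:

```python
import numpy as np

def ensemble_information_loss(p_wt, per_fly_info):
    """Information lost without the personal codebook, Eqs. (7)-(8), in bits.

    p_wt         : (N_flies, N_times, N_words) array of p^i(W | t)
    per_fly_info : length-N array of I^i(W -> s(t); T), e.g. from
                   stimulus_information() above
    """
    p_ens_t = np.asarray(p_wt, dtype=float).mean(axis=0)  # p_ens(W|t), uniform P_i
    p_ens = p_ens_t.mean(axis=0)                          # p_ens(W)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p_ens_t > 0, p_ens_t * np.log2(p_ens_t / p_ens), 0.0)
    i_mix = terms.sum(axis=1).mean()                      # Eq. (7)
    return np.mean(per_fly_info) - i_mix                  # Eq. (8)
```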
The average information loss is then

$$I_{\mathrm{loss}}(W \to s(t); T) = \sum_{i=1}^{N} P_i\, I^i(W \to s(t); T) - I^{\mathrm{mix}}(W \to s(t); T). \qquad (8)$$

After some algebra it can be shown that this average information loss is related to the information that the neural responses give about the identity of the individuals, as defined above:

$$I_{\mathrm{loss}}(W \to s(t); T) = I(\{W, t\} \to \mathrm{identity}; T) - I(W \to \mathrm{identity}; T). \qquad (9)$$

The result is that, on average, not knowing the identity of the fly limits us to extracting only 64 bits/s of information about the visual stimulus. This should be compared with the average information rate of 92.3 bits/s in our ensemble of flies: knowing her own identity allows the average fly to extract 44% more information from H1. Further analysis shows that each individual fly gains approximately the same relative amount of information from knowing its personal codebook.

5 Discussion

We have found that the flies use similar yet distinct sets of 'words' to encode information about the stimulus. The main source of this difference is not in the total set of words (or spike rates) but rather in how (i.e., when) these words are used to encode the stimulus; taking this into account, the flies are discriminable on time scales of relevance to behavior. Using their different codes, the flies' H1 spike trains convey very different amounts of information from the same visual inputs. Nonetheless, all the flies achieve a high and constant efficiency in their encoding of this information, and the temporal structure of their spike trains adds nearly 50% more information than that carried by the rate. So how much is universal and how much is individual? We find that each individual fly would lose ~30% of the visual information carried by this neuron if it 'knew' only the codebook appropriate to the whole ensemble of flies. We leave the judgment of whether this is high individuality or not to the reader, but recall that this is the individuality in an identified neuron. Hence, we should expect that all neural circuits, both vertebrate and invertebrate, express a degree of universality and a degree of individuality. We hope that the methods introduced here will help to explore this issue of individuality more generally.

This research was supported by a grant from the Ministry of Science, Israel.

References
[1] Shannon, C. E. A mathematical theory of communication, Bell Sys. Tech. J. 27, 379-423, 623-656 (1948).
[2] Cover, T. & Thomas, J. Elements of Information Theory (Wiley, 1991).
[3] Victor, J. D. & Purpura, K. Nature and precision of temporal coding in visual cortex: a metric-space analysis, J. Neurophysiol. 76, 1310-1326 (1996).
[4] Hausen, K. The lobular complex of the fly, in Photoreception and Vision in Invertebrates (ed. Ali, M.) pp. 523-559 (Plenum, 1984).
[5] Bullock, T. Structure and Function in the Nervous Systems of Invertebrates (W. H. Freeman, San Francisco, 1965).
[6] Land, M. F. & Collett, T. S. Chasing behavior of houseflies (Fannia canicularis). A description and analysis, J. Comp. Physiol. 89, 331-357 (1974).
[7] de Ruyter van Steveninck, R. R., Lewen, G. D., Strong, S. P., Koberle, R. & Bialek, W. Reproducibility and variability in neural spike trains, Science 275, 1805-1808 (1997).
[8] Strong, S. P., Koberle, R., de Ruyter van Steveninck, R. & Bialek, W. Entropy and information in neural spike trains, Phys. Rev. Lett. 80, 197-200 (1998).
[9] Rieke, F., Warland, D., de Ruyter van Steveninck, R. & Bialek, W. Spikes: Exploring the Neural Code (MIT Press, 1997).
[10] Brenner, N., Strong, S. P., Koberle, R., de Ruyter van Steveninck, R. & Bialek, W. Synergy in a neural code, Neural Comp. 12, 1531-1552 (2000).
[11] Lin, J. Divergence measures based on the Shannon entropy, IEEE Trans. Inf. Theory 37, 145-151 (1991).
[12] El-Yaniv, R., Fine, S. & Tishby, N. Agnostic classification of Markovian sequences, NIPS 10, pp. 465-471 (MIT Press, 1997).
978
1,895
Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping

Rich Caruana, CALD, CMU, 5000 Forbes Ave., Pittsburgh, PA 15213, [email protected]
Steve Lawrence, NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, [email protected]
Lee Giles, Information Sciences, Penn State University, University Park, PA 16801, [email protected]

Abstract

The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity generalize well when trained with backprop and early stopping. Experiments suggest two reasons for this: 1) Overfitting can vary significantly in different regions of the model. Excess capacity allows better fit to regions of high non-linearity, and backprop often avoids overfitting the regions of low non-linearity. 2) Regardless of size, nets learn task subcomponents in similar sequence. Big nets pass through stages similar to those learned by smaller nets. Early stopping can stop training the large net when it generalizes comparably to a smaller net. We also show that conjugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linearity.

1 Introduction

It is commonly believed that large multi-layer perceptrons (MLPs) generalize poorly: nets with too much capacity overfit the training data. Restricting net capacity prevents overfitting because the net has insufficient capacity to learn models that are too complex. This belief is consistent with a VC-dimension analysis of net capacity vs. generalization: the more free parameters in the net the larger the VC-dimension of the hypothesis space, and the less likely the training sample is large enough to select a (nearly) correct hypothesis [2].

Once it became feasible to train large nets on real problems, a number of MLP users noted that the overfitting they expected from nets with excess capacity did not occur. Large nets appeared to generalize as well as smaller nets, sometimes better. The earliest report of this that we are aware of is Martin and Pittman in 1991: "We find only marginal and inconsistent indications that constraining net capacity improves generalization" [7].

We present empirical results showing that MLPs with excess capacity often do not overfit. On the contrary, we observe that large nets often generalize better than small nets of sufficient capacity. Backprop appears to use excess capacity to better fit regions of high non-linearity, while still fitting regions of low non-linearity smoothly. (This desirable behavior can disappear if a fast training algorithm such as conjugate gradient is used instead of backprop.) Nets with excess capacity trained with backprop appear first to learn models similar to models learned by smaller nets. If early stopping is used, training of the large net can be halted when the large net's model is similar to models learned by smaller nets.

[Figure 1: four panels; top row shows polynomial fits of order 10 and order 20, bottom row shows MLP fits with 10 and 50 hidden nodes, each plotted against the training data and the noise-free target function.]
Figure 1: Top: Polynomial fit to data from y = sin(x/3) + ν. Order 20 overfits. Bottom: Small and large MLPs fit to the same data. The large MLP does not overfit significantly more than the small MLP.
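The top row of Figure 1 is easy to reproduce in spirit. The sketch below (numpy only; the seed and evaluation grid are arbitrary choices, and numpy may warn about conditioning at order 20) fits polynomials of orders 2, 10, and 20 to y = sin(x/3) plus uniform noise and reports training versus interpolation error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(0, 21, dtype=float)                 # x = 0, 1, ..., 20
y = np.sin(x / 3) + rng.uniform(-0.25, 0.25, x.size)

x_dense = np.linspace(0, 20, 400)
for order in (2, 10, 20):
    coeffs = np.polyfit(x, y, deg=order)          # least-squares polynomial fit
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    interp_err = np.mean((np.polyval(coeffs, x_dense) - np.sin(x_dense / 3)) ** 2)
    print(f"order {order:2d}: train MSE {train_err:.4f}, interpolation MSE {interp_err:.4f}")
```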
2 Overfitting

Much has been written about overfitting and the bias/variance tradeoff in neural nets and other machine learning models [2, 12, 4, 8, 5, 13, 6]. The top of Figure 1 illustrates polynomial overfitting. We created a training dataset by evaluating $y = \sin(x/3) + \nu$ at $x = 0, 1, 2, \ldots, 20$, where $\nu$ is a uniformly distributed random variable between -0.25 and 0.25. We fit polynomial models with orders 2-20 to the data. Underfitting occurs with order 2. The fit is good with order 10. As the order (and number of parameters) increases, however, significant overfitting (poor generalization) occurs. At order 20, the polynomial fits the training data well, but interpolates poorly.

The bottom of Figure 1 shows MLPs fit to the data. We used a single hidden layer MLP, backpropagation (BP), and 100,000 stochastic updates. The learning rate was reduced linearly to zero from an initial rate of 0.5 (reducing the learning rate improves convergence, and linear reduction performs similarly to other schedules [3]). This schedule and number of updates trains the MLPs to completion. (We examine early stopping in Section 4.) As with polynomials, the smallest net with one hidden unit (HU) (4 weights) underfits the data. The fit is good with two HU (7 weights). Unlike polynomials, however, networks with 10 HU (31 weights) and 50 HU (151 weights) also yield good models. MLPs with seven times as many parameters as data points trained with BP do not significantly overfit this data. The experiments in Section 4 confirm that this bias of BP-trained MLPs towards smooth models is not limited to the simple 2-D problem used here.

3 Local Overfitting

Regularization methods such as weight decay typically assume that overfitting is a global phenomenon. But overfitting can vary significantly in different regions of a model. Figure 2 shows polynomial fits for data generated from the following equation:

$$y = \begin{cases} -\cos(x) + \nu & 0 \le x < \pi \\ \cos(3(x - \pi)) + \nu & \pi \le x \le 2\pi \end{cases} \qquad \text{(Equation 1)}$$

Five equally spaced points were generated in the first region, and 15 in the second, so that the two regions have different data densities and different underlying functions. Overfitting is different in the two regions. In Figure 2 the order 6 model fits the left region well, but larger models overfit it. The order 6 model underfits the region on the right, and the order 10 model fits it better. No model performs well on both regions.

[Figure 2: four panels showing polynomial approximations of orders 2, 6, 10, and 16, each plotted against the training data and the noise-free target function.]
Figure 2: Polynomial approximation of data from Equation 1 as the order of the model is increased from 2 to 16. The overfitting behavior differs in the left and right hand regions.
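For reference, the two-regime training set of Equation 1 can be generated as below; the paper does not state the noise range for this data, so the ±0.25 uniform noise of the Section 2 example is assumed:

```python
import numpy as np

def equation1_data(rng=np.random.default_rng(1)):
    """Data from Equation 1: 5 points on [0, pi), 15 on [pi, 2*pi]."""
    x_left = np.linspace(0.0, np.pi, 5, endpoint=False)
    x_right = np.linspace(np.pi, 2 * np.pi, 15)
    x = np.concatenate([x_left, x_right])
    y_clean = np.where(x < np.pi, -np.cos(x), np.cos(3 * (x - np.pi)))
    return x, y_clean + rng.uniform(-0.25, 0.25, x.size)  # noise level assumed
```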
Figure 3 shows MLPs trained on the same data (20,000 batch updates, learning rate linearly reduced to zero starting at 0.5). Small nets underfit. Larger nets, however, fit the entire function well without significant overfitting in the left region. The ability of MLPs to fit both regions of low and high non-linearity well (without overfitting) depends on the training algorithm. Conjugate gradient (CG) is the most popular second order method. CG results in lower training error for this problem, but overfits significantly. Figure 4 shows results for 10 trials for BP and CG. Large BP nets generalize better on this problem; even the optimal size CG net is prone to overfitting. The degree of overfitting varies in different regions. When the net is large enough to fit the region of high non-linearity, overfitting is often seen in the region of low non-linearity.

[Figure 3: four panels showing BP-trained MLP fits with 1, 4, 10, and 100 hidden units.]
Figure 3: MLP approximation using backpropagation (BP) training of data from Equation 1 as the number of hidden units is increased. No significant overfitting can be seen.

[Figure 4: two panels of test error vs. number of hidden nodes (5, 10, 25, 50), for BP and CG.]
Figure 4: Test Normalized Mean Squared Error for MLPs trained with BP (left) and CG (right). Results are shown with both box-whiskers plots and the mean plus and minus one standard deviation.

4 Generalization, Network Capacity, and Early Stopping

The results in Sections 2 and 3 suggest that BP nets are less prone to overfitting than expected. But MLPs can and do overfit. This section examines overfitting vs. net size on seven problems: NETtalk [10], 7 and 12 bit parity, an inverse kinematic model for a robot arm (thanks to Sebastian Thrun for the simulator), Base 1 and Base 2 (two sonar modeling problems using data collected from a robot wandering hallways at CMU), and vision data used to learn to steer an autonomous car [9]. These problems exhibit a variety of characteristics. Some are Boolean. Others are continuous. Some have noise. Others are noise-free. Some have many inputs or outputs. Others have few inputs or outputs.

4.1 Results

For each problem we used small training sets (100-1000 points, depending on the problem) so that overfitting was possible. We trained fully connected feedforward MLPs with one hidden layer whose size varied from 2 to 800 HU (about 500-100,000 parameters). All the nets were trained with BP using stochastic updates, learning rate 0.1, and momentum 0.9. We used early stopping for regularization because it doesn't interfere with backprop's ability to control capacity locally. Early stopping combined with backprop is so effective that very large nets can be trained without significant overfitting. Section 4.2 explains why.

Figure 5 shows generalization curves for four of the problems. Examining the results for all seven problems, we observe that on only three (Base 1, Base 2, and ALVINN) do nets that are too large yield worse generalization than smaller networks, but the loss is surprisingly small. Many trials were required before statistical tests confirmed that the differences between the optimal size net and the largest net were significant. Moreover, the results suggest that generalization is hurt more by using a net that is a little too small than by using one that is far too large, i.e., it is better to make nets too large than too small.

For most tasks and net sizes, we trained well beyond the point where generalization performance peaked. Because we had complete generalization curves, we noticed something unexpected. On some tasks, small nets overtrained considerably. The NETtalk graph in Figure 5 is a good example. Regularization (e.g., early stopping) is critical for nets of all sizes, not just ones that are too big. Nets with restricted capacity can overtrain.
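The early-stopping procedure described above can be sketched as a generic training loop; train_step and val_error are placeholders for the problem-specific BP epoch and validation measurement, and the patience rule is one common stopping criterion rather than the authors' exact protocol:

```python
import copy
import numpy as np

def train_with_early_stopping(net, train_step, val_error, max_epochs=1000, patience=50):
    """Keep the weights from the epoch with the lowest validation error.

    net        : any model object supporting copy.deepcopy
    train_step : callable(net) -> None, one epoch of BP updates (placeholder)
    val_error  : callable(net) -> float, error on a held-out validation set
    """
    best_err, best_net, since_best = np.inf, copy.deepcopy(net), 0
    for epoch in range(max_epochs):
        train_step(net)
        err = val_error(net)
        if err < best_err:
            best_err, best_net, since_best = err, copy.deepcopy(net), 0
        else:
            since_best += 1
            if since_best >= patience:      # stop when validation stops improving
                break
    return best_net, best_err
```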
4.2 Why Excess Capacity Does Not Hurt

BP nets initialized with small weights can develop large weights only after the number of updates is large. Thus BP nets consider hypotheses with small weights before hypotheses with large weights. Nets with large weights have more representational power, so simple hypotheses are explored before complex hypotheses.

[Figure 5: four panels (NETtalk, Inverse Kinematics, Base 1, Base 2) showing test error vs. pattern presentations for nets with 2 to 512 hidden units.]
Figure 5: Generalization performance vs. net size for four of the seven test problems.

We analyzed what nets of different size learn while they are trained. We compared the input/output behavior of nets at different stages of learning on large samples of test patterns. We compare the input/output behavior of two nets by computing the squared error between the predictions made by the two nets. If two nets make the same predictions for all test cases, they have learned the same model (even though each model is represented differently), and the squared error between the two models is zero. If two nets make different predictions for test cases, they have learned different models, and the squared error between them is large. This is not the error the models make predicting the true labels, but the difference between predictions made by two different models. Two models can have poor generalization (large error on true labels) but have near zero error compared to each other if they are similar models. But two models with good generalization (low error on true labels) must have low error compared to each other.
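The model-to-model comparison is simple to state in code; the nets are treated as black-box prediction functions (placeholders), exactly in the spirit of the squared-error-between-predictions measure described above:

```python
import numpy as np

def model_similarity(net_a, net_b, x_test):
    """Squared difference between two nets' predictions on the same inputs.

    Zero means the two nets have learned functionally identical models,
    regardless of how the weights differ internally. net_a and net_b are
    callables mapping a batch of inputs to predictions (placeholders).
    """
    pred_a = np.asarray(net_a(x_test))
    pred_b = np.asarray(net_b(x_test))
    return np.mean((pred_a - pred_b) ** 2)

def similarity_trace(big_net, saved_models, x_test):
    """Distance of the large net to each saved best-of-size model (cf. Figure 6)."""
    return {size: model_similarity(big_net, net, x_test)
            for size, net in saved_models.items()}
```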
The first graph in Figure 5 shows learning curves for nets with 10, 25, 50, 100, 200, and 400 HU trained on NETtalk. For each size, we saved the net from the epoch that generalized best on a large test set. This gives us the best model of each size found by backprop. We then trained a BP net with 800 HU, and after each epoch compared this net's model with the best models saved for nets of 10-400 HU. This lets us compare the sequence of models learned by the 800 HU net to the best models learned by smaller nets. Figure 6 shows this comparison. The horizontal axis is the number of backprop passes applied to the 800 HU net. The vertical axis is the error between the 800 HU net model and the best model for each smaller net. The 800 HU net starts off distant from the good smaller models, then becomes similar to the good models, and then diverges from them. This is expected. What is interesting is that the 800 HU net first becomes closest to the best 10 HU net, then closest to the 25 HU net, then closest to the 50 HU net, etc. As it is trained, the 800 HU net learns a sequence of models similar to the models learned by smaller nets. If early stopping is used, training of the 800 HU net can be stopped when it behaves similarly to the best model that could be learned with nets of 10, 25, 50, ... HU.

[Figure 6: 'Similarity of 800 HU net during training to smaller-size peak performers'; x-axis: pattern presentations.]
Figure 6: I/O similarity during training between an 800 hidden unit net and smaller nets (10, 25, 50, 100, 200, and 400 hidden units) trained on NETtalk.

Large BP nets learn models similar to those learned by smaller nets. If a BP net with too much capacity would overfit, early stopping could stop training when the model was similar to a model that would have been learned by a smaller net of optimal size. The error between models is about 200-400, yet the generalization error is about 1600. The models are much closer to each other than any of them are to the true model. With early stopping, what counts is the closest approach of each model to the target function, not where models end up late in training. With early stopping there is little disadvantage to using models that are too large because their learning trajectories are similar to those followed by smaller nets of more optimal size.

5 Related Work

Our results show that models learned by backprop are biased towards "smooth" solutions. As nets with excess capacity are trained, they first explore smoother models similar to the models smaller nets would have learned. Weigend [11] performed an experiment which showed that BP nets learn a problem's eigenvectors in sequence, learning the 1st eigenvector first, then the 2nd, etc. His result complements our analysis of what nets of different sizes learn: if large nets learn an eigenvector sequence similar to smaller nets, then the models learned by the large net will pass through intermediate stages similar to what is learned by small nets (but only if nets of different sizes learn the eigenvectors equally well, which is an assumption we do not need to make).

Theoretical work by [1] supports our results. Bartlett notes: "the VC-bounds seem loose; neural nets often perform successfully with training sets that are considerably smaller than the number of weights." Bartlett shows (for classification) that the number of training samples only needs to grow according to $A^2 l$ (ignoring log factors) to avoid overfitting, where $A$ is a bound on the total weight magnitudes and $l$ is the number of layers in the network. This result suggests that a net with smaller weights will generalize better than a similar net with large weights. Examining the weights from BP and CG nets shows that BP training typically results in smaller weights.

6 Summary

Nets of all sizes overfit some problems. But generalization is surprisingly insensitive to excess capacity if the net is trained with backprop.
Because BP nets with excess capacity learn a sequence of models functionally similar to what smaller nets learn, early stopping can often be used to stop training large nets when they have learned models similar to those learned by smaller nets of optimal size. This means there is little loss in generalization performance for nets with excess capacity if early stopping can be used.

Overfitting is not a global phenomenon, although methods for controlling it often assume that it is. Overfitting can vary significantly in different regions of the model. MLPs trained with BP use excess parameters to improve fit in regions of high non-linearity, while not significantly overfitting other regions. Nets trained with conjugate gradient, however, are more sensitive to net size. BP nets appear to be better than CG nets at avoiding overfitting in regions with different degrees of non-linearity, perhaps because CG is more effective at learning more complex functions that overfit training data, while BP is biased toward learning smoother functions.

References
[1] Peter L. Bartlett. For valid generalization the size of the weights is more important than the size of the network. In Advances in Neural Information Processing Systems, volume 9, page 134. The MIT Press, 1997.
[2] E.B. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1(1):151-160, 1989.
[3] C. Darken and J.E. Moody. Note on learning rate schedules for stochastic optimization. In Advances in Neural Information Processing Systems, volume 3, pages 832-838. Morgan Kaufmann, 1991.
[4] S. Geman et al. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1-58, 1992.
[5] A. Krogh and J.A. Hertz. A simple weight decay can improve generalization. In Advances in Neural Information Processing Systems, volume 4, pages 950-957. Morgan Kaufmann, 1992.
[6] Y. Le Cun, J.S. Denker, and S.A. Solla. Optimal brain damage. In D.S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 2, pages 598-605, San Mateo, 1990. (Denver 1989), Morgan Kaufmann.
[7] G.L. Martin and J.A. Pittman. Recognizing hand-printed letters and digits using backpropagation learning. Neural Computation, 3:258-267, 1991.
[8] J.E. Moody. The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems. In Advances in Neural Information Processing Systems, volume 4, pages 847-854. Morgan Kaufmann, 1992.
[9] D.A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In D.S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 1, pages 305-313, San Mateo, 1989. (Denver 1988), Morgan Kaufmann.
[10] T. Sejnowski and C. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168, 1987.
[11] A. Weigend. On overfitting and the effective number of hidden units. In Proceedings of the 1993 Connectionist Models Summer School, pages 335-342. Lawrence Erlbaum Associates, 1993.
[12] A.S. Weigend, D.E. Rumelhart, and B.A. Huberman. Generalization by weight-elimination with application to forecasting. In Advances in Neural Information Processing Systems, volume 3, pages 875-882. Morgan Kaufmann, 1991.
[13] D. Wolpert. On bias plus variance. Neural Computation, 9(6):1211-1243, 1997.
979
1,896
Data clustering by Markovian relaxation and the Information Bottleneck Method

Naftali Tishby and Noam Slonim
School of Computer Science and Engineering and Center for Neural Computation*
The Hebrew University, Jerusalem, 91904 Israel
email: {tishby, noamm}@cs.huji.ac.il

*Work supported in part by the US-Israel Binational Science Foundation (BSF) and by the Human Frontier Science Project (HFSP). NS is supported by the Levi Eshkol grant.

Abstract

We introduce a new, non-parametric and principled, distance based clustering method. This method combines a pairwise based approach with a vector-quantization method which provides a meaningful interpretation of the resulting clusters. The idea is based on turning the distance matrix into a Markov process and then examining the decay of mutual information during the relaxation of this process. The clusters emerge as quasi-stable structures during this relaxation, and are then extracted using the information bottleneck method. These clusters capture the information about the initial point of the relaxation in the most effective way. The method can cluster data with no geometric or other bias and makes no assumption about the underlying distribution.

1 Introduction

Data clustering is one of the most fundamental pattern recognition problems, with numerous algorithms and applications. Yet, the problem itself is ill-defined: the goal is to find a "reasonable" partition of data points into classes or clusters. What is meant by "reasonable" depends on the application, the representation of the data, and the assumptions about the origins of the data points, among other things. One important class of clustering methods is for cases where the data is given as a matrix of pairwise distances or (dis)similarity measures. Often these distances come from empirical measurement or some complex process, and there is no direct access, or even precise definition, of the distance function. In many cases this distance does not form a metric, or it may even be non-symmetric. Such data does not necessarily come as a sample of some meaningful distribution, and even the issue of generalization and sample-to-sample fluctuations is not well defined. Algorithms that use only the pairwise distances, without explicit use of the distance measure itself, employ statistical mechanics analogies [3] or collective graph theoretical properties [6], etc. The points are then grouped based on some global criteria, such as connected components, small cuts, or minimum alignment energy. Such algorithms are sometimes computationally inefficient and in most cases it is difficult to interpret the resulting clusters, i.e., it is hard to determine a common property of all the points in one cluster - other than that the clusters "look reasonable".

A second class of clustering methods is represented by the generalized vector quantization (VQ) algorithm. Here one fits a model (e.g. Gaussian distributions) to the points in each cluster, such that an average (known) distortion between the data points and their corresponding representative is minimized. This type of algorithm may rely on theoretical frameworks, such as rate distortion theory, and provide much better interpretation for the resulting clusters. VQ type algorithms can also be more computationally efficient since they require the calculation of distances, or distortion, between the data and the centroid models only, not between every pair of data points.
On the other hand, they require the knowledge of the distortion function and thus make specific assumptions about the underlying structure or model of the data.

In this paper we present a new, information theoretic combination of pairwise clustering with a meaningful and intuitive interpretation for the resulting clusters. In addition, our algorithm provides a clear and objective figure of merit for the clusters - without making any assumption on the origin or structure of the data points.

2 Pairwise distances and Markovian relaxation

The first step of our algorithm is to turn the pairwise distance matrix into a Markov process, through the following simple intuition. Assign a state of a Markov chain to each of the data points, and transition probabilities between the states/points as a function of their pairwise distances. Thus the data can be considered as a directed graph with the points as nodes and the pairwise distances, which need not be symmetric or form a metric, on the arcs of the graph. Distances are normally considered additive, i.e., the length of a trajectory on the graph is the sum of the arc-lengths. Probabilities, on the other hand, are multiplicative for independent events, so if we want the probability of a (random) trajectory on the graph to be naturally related to its length, the transition probabilities between points should be exponential in their distance. Denoting by $d(x_i, x_j)$ the pairwise distance between the points $x_i$ and $x_j$,1 the transition probability that our Markov chain moves from the point $x_j$ at time $t$ to the point $x_i$ at time $t+1$, $P_{i,j} \equiv P(x_i(t+1)\,|\,x_j(t))$, is chosen as

$$P(x_i(t+1)\,|\,x_j(t)) \propto \exp(-\lambda\, d(x_i, x_j)), \qquad (1)$$

where $\lambda^{-1}$ is a length scaling factor that equals the mean pairwise distance of the $k$ nearest neighbors to the point $x_i$. The details of this rescaling are not so important for the final results, and a similar exponentiation of the distances, without our probabilistic interpretation, was performed in other clustering works (see e.g. [3, 6]). A proper normalization of each row is required to turn this matrix into a stochastic transition matrix.

Given this transition matrix, one can imagine a random walk starting at every point on the graph. Specifically, the probability distribution of the positions of a random walk starting at $x_j$, after $t$ time steps, is given by the $j$-th row of the $t$-th iteration of the 1-step transition matrix. Denoting by $P^t$ the $t$-step transition matrix, $P^t = (P)^t$ is indeed the $t$-th power of the 1-step transition probability matrix. The probability of a random walk starting at $x_j$ at time 0 to be at $x_i$ at time $t$ is thus

$$p(x_i(t)\,|\,x_j(0)) = P^t_{i,j}. \qquad (2)$$

1Henceforth we take the number of data points to be $n$ and the point indices run implicitly from 1 to $n$ unless stated otherwise.
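A sketch of Eqs. (1)-(2) in code; the row convention (row j holds the distribution of a walk starting at x_j) follows the text, and tying the scale λ to each point's k nearest neighbors is one reading of the rescaling described above:

```python
import numpy as np

def markov_from_distances(d, k=5):
    """Row-stochastic transition matrix from a pairwise distance matrix, Eq. (1).

    d[j, i] is the (possibly non-symmetric) distance from point j to point i.
    Row j of the result is P(x_i(t+1) | x_j(t)); lambda is set per point from
    the mean distance to its k nearest neighbors, as in the text.
    """
    d = np.asarray(d, dtype=float)
    n = d.shape[0]
    # per-point scale: mean distance to the k nearest (non-self) neighbors
    nn = np.sort(d + np.diag(np.full(n, np.inf)), axis=1)[:, :k]
    lam = 1.0 / nn.mean(axis=1)
    p = np.exp(-lam[:, None] * d)
    p /= p.sum(axis=1, keepdims=True)             # normalize each row
    return p

def t_step(p, t):
    """P^t: distribution after t steps; row j is p(x_i(t) | x_j(0)), Eq. (2)."""
    return np.linalg.matrix_power(p, t)
```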
'" ;. 025 o.??? _'" '6 ~02 0 "0". XC 2015 ~ "0 '01 " 005 :. 15 10 100,(1) 20 15 Figure 1: On the left shown an example of data, consisting of 150 points in 2D. On the middle, we plot the rate of information loss, - d~~t) , during the relaxation. Notice that the algorithm has no prior information about circles or ellipses. The rate of the information loss is slow when the "random walks" stabilize on some sub structures of the data - our proposed clusters. On the right we plot the rate of information loss for the colon cancer data, and the accuracy of the obtained clusters for different relaxation times, with the original classes. 2.1 Relaxation of the mutual information The natural way to quantify the information loss during this relaxation process is by the mutual information between the initial point variable , X(O) = {Xj(O)} and the point of the random walk at time t, X(t) = {Xi(t)}. The mutual information between the random variables X and Y is the symmetric functional of their joint distribution, I(X ;Y) = L P(x,Y)log( xEX,yEY ~~~'r\) = P PY L xEX,yEY p(x)p(YIX)log(P(Y(lx))) PY (3) For the Markov relaxation this mutual information is given by, I(t) == I(X(O) ;X(t)) = LPj LP/,jlog Pi> = LPjDKdPt)lp~] j i Pi , (4) j where Pj is the prior probability of the states, and P; = 2: j P;,jPj is the unconditioned probability of Xi at time t. The DKL is the Kulback-Liebler divergence [4], defined as: DKL [Pllq] == 2: y p(y) log ~ which is the information theoretic measure P; of similarity of distributions. Since all the rows j relax to 7r this divergence goes to zero as t --+ 00. While it is clear that the information about the initial point, I(t), decays monotonically (exponentially asymptotically) to zero, the rate of this decay at finite t conveys much information on the structure of the data points. Consider, as a simple example, the planer data points shown in figure 1, with d(Xi,Xj) = (Xi - Xj)2 + (Yi - Yj)2. As can be seen, the rate of information loss about the initial point of the random walk, - d~~t ) , while always positive - slows down at specific times during the relaxation. These relaxation locations indicate the formation of quasi-stable structures on the graph. At these relaxation times the transition probability matrix is approximately a projection matrix (satisfying p2t ,:::: pt) where the almost invariant subgraphs correspond to the clusters. These approximate stationary transitions correspond to slow information loss, which can be identified by derivatives of the information loss at time t. Another way to see this phenomena is by observing the rows of pt, which are the conditional distributions p(x;(t)lxj(O)). The rows that are almost indistinguishable, following the partial relaxation , correspond to points Xj with similar conditional distribution on the rest of the graph at time t. Such points should belong to the same structure, or cluster on the graph. This can be seen directly by observing the matrix pt during the relaxation , as shown in figure 2. The quasi-stable structures on the graph , during t~2? t~23 20 20 40 40 20 40 60 60 60 80 80 80 100 100 100 120 120 120 140 140 140 50 100 150 50 t ~28 100 150 t = 2 1O 20 40 60 80 100 120 140 50 100 150 50 100 150 50 100 150 Figure 2: The relaxation process as seen directly on the matrix pt, for different times, for the example data of figure 1. The darker colors correspond to higher probability density in every row. 
The quasi-stable structures on the graph during the relaxation process are precisely the desirable meaningful clusters. The remaining question pertains to the correct way to group the initial points into clusters that capture the information about the position on the graph after $t$ steps. In other words, can we replace the initial point with an initial cluster that enables prediction of the location on the graph at time $t$ with similar accuracy? The answer to this question is naturally provided via the recently introduced information bottleneck method [12, 11].

3 Clusters that preserve information

The problem of self-organization of the members of a set $X$ based on the similarity of the conditional distributions of the members of another set, $Y$, $\{p(y|x)\}$, was first introduced in [9] and was termed "distributional clustering". This question was recently shown in [12] to be a specific case of a much more fundamental problem: What are the features of the variable $X$ that are relevant to the prediction of another, relevance, variable $Y$? This general problem was shown to have a natural information theoretic formulation: Find a compressed representation of the variable $X$, denoted $\tilde{X}$, such that the mutual information between $\tilde{X}$ and $Y$, $I(\tilde{X};Y)$, is as high as possible, under a constraint on the mutual information between $X$ and $\tilde{X}$, $I(X;\tilde{X})$. Surprisingly, this variational principle yields an exact formal solution for the conditional distributions $p(y|\tilde{x})$, $p(\tilde{x}|x)$, and $p(\tilde{x})$. This constrained information optimization problem was called in [12] The Information Bottleneck Method.

The original approach to the solution of the resulting equations, used already in [9], was based on an analogy with the "deterministic annealing" (DA) approach to clustering (see [10, 8]). This is a top-down hierarchical algorithm that starts from a single cluster and undergoes a cascade of cluster splits which are determined stochastically (as phase transitions) into a "soft" (fuzzy) tree of clusters. We proposed an alternative approach, based on a greedy bottom-up merging, the "Agglomerative Information Bottleneck" (AIB, see [11]), which is simpler and works better than the DA approach in many situations. This algorithm was applied also in the examples given here.

3.1 The information bottleneck method

Given any two non-independent random variables, $X$ and $Y$, the objective of the information bottleneck method is to extract a compact representation of the variable $X$, denoted here by $\tilde{X}$, with minimal loss of mutual information to another, relevance, variable $Y$. More specifically, we want to find a (possibly stochastic) map, $p(\tilde{x}|x)$, that maximizes the mutual information to the relevance variable, $I(\tilde{X};Y)$, under a constraint on the (lossy) coding length of $X$ via $\tilde{X}$, $I(X;\tilde{X})$. In other words, we want to find an efficient representation of the variable $X$, $\tilde{X}$, such that the predictions of $Y$ from $X$ through $\tilde{X}$ will be as close as possible to the direct prediction of $Y$ from $X$.
As shown in [12], by introducing a positive Lagrange multiplier $\beta$ to enforce the mutual information constraint, the problem amounts to maximization of the Lagrangian

$$\mathcal{L}[p(\hat x|x)] = I(\hat X;Y) - \beta^{-1} I(X;\hat X) , \qquad (5)$$

with respect to $p(\hat x|x)$, subject to the Markov condition $\hat X \to X \to Y$ and normalization. This optimization yields directly the following (self-consistent) equations for the map $p(\hat x|x)$, and for $p(y|\hat x)$ and $p(\hat x)$:

$$\begin{cases} p(\hat x|x) = \dfrac{p(\hat x)}{Z(\beta,x)}\, \exp\!\big(-\beta\, D_{KL}\!\left[p(y|x)\,\|\,p(y|\hat x)\right]\big) \\[6pt] p(y|\hat x) = \dfrac{1}{p(\hat x)} \displaystyle\sum_x p(y|x)\, p(\hat x|x)\, p(x) \\[6pt] p(\hat x) = \displaystyle\sum_x p(\hat x|x)\, p(x) \end{cases} \qquad (6)$$

where $Z(\beta,x)$ is a normalization function. The familiar Kullback-Leibler divergence, $D_{KL}[p(y|x)\|p(y|\hat x)]$, emerges here from the variational principle. These equations can be solved by iterations that are proved to converge for any finite value of $\beta$ (see [12]). The Lagrange multiplier $\beta$ has the natural interpretation of an inverse temperature, which suggests deterministic annealing to explore the hierarchy of solutions in $\hat X$. The variational principle, Eq. (5), determines also the shape of the annealing process, since by changing $\beta$ the mutual informations $I_X \equiv I(X;\hat X)$ and $I_Y \equiv I(\hat X;Y)$ vary such that

$$\frac{\delta I_Y}{\delta I_X} = \beta^{-1} . \qquad (7)$$

Thus the optimal curve, which is analogous to the rate-distortion function in information theory [4], follows a strictly concave curve in the $(I_X, I_Y)$ plane. The information bottleneck algorithms provide an information-theoretic mechanism for identifying the quasi-stable structures on the graph that form our meaningful clusters. In our clustering application the variables are taken as $X = X(0)$ and $Y = X(t)$ during the relaxation process.
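For concreteness, a minimal sketch of iterating the self-consistent equations (6) at fixed $\beta$, given a joint distribution $p(x,y)$ as an array. The random initialization and function name are illustrative, and the deterministic-annealing and agglomerative (AIB) schedules discussed above are omitted.

import numpy as np

def information_bottleneck(pxy, n_clusters, beta, n_iter=200, seed=0):
    """Iterate the self-consistent equations (6) at fixed beta.

    pxy: joint distribution p(x, y) as an (nx, ny) array summing to 1.
    A minimal sketch; annealing / agglomerative schedules are omitted.
    """
    eps = 1e-32
    rng = np.random.default_rng(seed)
    px = pxy.sum(axis=1)                          # p(x)
    py_x = pxy / (px[:, None] + eps)              # p(y|x)

    q = rng.random((len(px), n_clusters))         # q[x, xhat] = p(xhat|x)
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        pxhat = q.T @ px                          # p(xhat) = sum_x p(xhat|x) p(x)
        py_xhat = (q * px[:, None]).T @ py_x / (pxhat[:, None] + eps)
        # D_KL(p(y|x) || p(y|xhat)) for every pair (x, xhat)
        kl = (py_x[:, None, :] * (np.log(py_x[:, None, :] + eps) -
                                  np.log(py_xhat[None, :, :] + eps))).sum(-1)
        logits = np.log(pxhat + eps)[None, :] - beta * kl   # Eq. (6), log form
        logits -= logits.max(axis=1, keepdims=True)         # stabilize Z(beta, x)
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)
    return q, py_xhat, pxhat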
4 Discussion

When varying the temperature $T = \beta^{-1}$, the information bottleneck algorithms explore the structure of the data at various resolutions. For very low $T$, the resolution is high and each point appears in a cluster of its own. For very high $T$, all points are grouped into one cluster. This process resembles the appearance of the structure during the relaxation. However, there is an important difference between these two mechanisms. In the bottleneck algorithms clusters are formed by isotropically blurring the conditional distributions that correspond to each data point. Points are clustered together when these distributions become sufficiently similar. This process is not sensitive to the global topology of the graph representing the data. This can be understood by looking at the example of figure 1. If we consider two diametrically opposing points on one of the ellipses, they will be clustered together only when their blurred distributions overlap. In this example, unfortunately, this happens only when the three ellipses are completely indistinguishable. A direct application of the bottleneck to the original transition matrix is therefore bound to fail in this case. In the relaxation process, on the other hand, the distributions are merged through the Markovian dynamics on the graph. In our specific example, two opposing points become similar when they reach the other states with similar probabilities following partial relaxation. This process better preserves the fine structure of the underlying graph, and thus enables finer partitioning of the data.

It is thus necessary to combine the two processes. In the first stage, one should relax the Markov process to a quasi-stable point in terms of the rate of information loss. At this point some natural underlying structure emerges, and it is reflected in the partially relaxed transition matrix, $P^t$. In the second stage, we use the information bottleneck algorithm to identify the information-preserving clusters.

5 More examples

We applied our method to several 'standard' clustering problems and obtained very good results. The first one was the famous "iris data" [7], on which we easily obtained just 5 misclassified points. A more interesting application was obtained on well-known gene expression data, the colon cancer data set provided by Alon et al. [1]. This data set consists of 62 tissue samples, out of which 22 came from tumors and the rest from "normal" biopsies of colon parts of the same patients. Gene expression levels were given for 2000 genes (oligonucleotides), resulting in a 62-by-2000 matrix. As done in other studies of this data, we calculated the Pearson correlation, $K_p(u,v)$ (see, e.g., [5]), between the $u$ and $v$ expression rows, and then transformed this measure to distances through the simple transformation $d(u,v) = \frac{1-K_p(u,v)}{1+K_p(u,v)}$. In figure 1 (right panel) we present the rate of information loss for this data and the accuracy of the obtained clusters with respect to the original tissue classes. The emergence of two clusters at the times of "slow" information loss is clearly seen for $t = 2^4$ to $2^{12}$ iterations. The information bottleneck algorithm, when applied at these relaxation times, discovers the original tissue classes, up to 6 or 7 "misclassified" tissues (see figure). For comparison, seven more sophisticated supervised techniques were applied in [2] to this data. Six of them had 12 misclassified points or more, and the best of them had 7 misclassified tissues.

References

[1] U. Alon, N. Barkai, D. A. Notterman, K. Gish, D. Mack, and A. J. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Nat. Acad. Sci. USA, 96:6745-6750, 1999.
[2] A. Ben-Dor, L. Bruhn, N. Friedman, I. Nachman, M. Schummer, and Z. Yakhini. Tissue classification with gene expression profiles. Journal of Computational Biology, 2000, to appear.
[3] M. Blatt, S. Wiseman, and E. Domany. Data clustering using a model granular magnet. Neural Computation, 9:1805-1842, 1997.
[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991.
[5] M. Eisen, P. Spellman, P. Brown, and D. Botstein. Cluster analysis and display of genome-wide expression patterns. Proc. Nat. Acad. Sci. USA, 95:14863-14868, 1998.
[6] Y. Gdalyahu, D. Weinshall, and M. Werman. A randomized algorithm for pairwise clustering. In Proceedings of NIPS-11, pages 424-430, 1998.
[7] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, Part II, 179-188, 1936.
[8] T. Hofmann and J. M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Transactions on PAMI, 19(1):1-14, 1997.
[9] F. C. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. In 30th Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, pages 183-190, 1993.
[10] K. Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proceedings of the IEEE, 86(11):2210-2239, 1998.
[11] N. Slonim and N. Tishby. Agglomerative information bottleneck. In Proceedings of NIPS-12, 1999.
[12] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377, 1999.
The Use of MDL to Select among Computational Models of Cognition

In J. Myung, Mark A. Pitt & Shaobo Zhang
Department of Psychology
Ohio State University
Columbus, OH 43210
{myung.1, pitt.2}@osu.edu

Vijay Balasubramanian
David Rittenhouse Laboratories
University of Pennsylvania
Philadelphia, PA 19103
vijayb@physics.upenn.edu

Abstract

How should we decide among competing explanations of a cognitive process given limited observations? The problem of model selection is at the heart of progress in cognitive science. In this paper, Minimum Description Length (MDL) is introduced as a method for selecting among computational models of cognition. We also show that differential geometry provides an intuitive understanding of what drives model selection in MDL. Finally, the adequacy of MDL is demonstrated in two areas of cognitive modeling.

1 Model Selection and Model Complexity

The development and testing of computational models of cognitive processing are a central focus in cognitive science. A model embodies a solution to a problem whose adequacy is evaluated by its ability to mimic behavior by capturing the regularities underlying observed data. This enterprise of model selection is challenging because of the competing goals that must be satisfied. Traditionally, computational models of cognition have been compared using one of many goodness-of-fit measures. However, use of such a measure can result in the choice of a model that over-fits the data, one that captures idiosyncrasies in the particular data set (i.e., noise) over and above the underlying regularities of interest. Such models are considered complex, in that the inherent flexibility of the model enables it to fit diverse patterns of data. As a group, they can be characterized as having many parameters that are combined in a highly nonlinear fashion in the model equation. They do not assume a single structure in the data. Rather, the model contains multiple structures, each obtained by finely tuning the parameter values of the model, and thus can fit a wide range of data patterns. In contrast, simple models, frequently with few parameters, assume a specific structure in the data, which will manifest itself as a narrow range of similar data patterns. Only when one of these patterns occurs will the model fit the data well.

The problem of over-fitting data due to model complexity suggests that the goal of model selection should instead be to select the model that generalizes best to all data samples that arise from the same underlying regularity, thus capturing only the regularity, not the noise. To achieve this goal, the selection method must be sensitive to the complexity of a model. There are at least two independent dimensions of model complexity: the number of free parameters of a model and its functional form, which refers to the way the parameters are combined in the model equation. For instance, it seems unlikely that two one-parameter models, $y = \theta x$ and $y = x^\theta$, are equally complex in their ability to fit data. The two dimensions of model complexity (number of parameters and functional form) and their interplay can improve a model's fit to the data without necessarily improving generalizability. The trademark of a good model selection procedure, then, is its ability to satisfy two opposing goals. A model must be sufficiently complex to describe the data sample accurately, but without over-fitting the data and thus losing generalizability.
To achieve this end, we need a theoretically well-justified measure of model complexity that takes into account the number of parameters and the functional form of a model. In this paper, we introduce Minimum Description Length (MDL) as an appropriate method of selecting among mathematical models of cognition. We also show that MDL has an elegant geometric interpretation that provides a clear, intuitive understanding of the meaning of complexity in MDL. Finally, application examples of MDL are presented in two areas of cognitive modeling.

1.1 Minimum Description Length

The central thesis of model selection is the estimation of a model's generalizability. One approach to assessing generalizability is the Minimum Description Length (MDL) principle [1]. It provides a theoretically well-grounded measure of complexity that is sensitive to both dimensions of complexity and also lends itself to intuitive, geometric interpretations. MDL was developed within algorithmic coding theory to choose the model that permits the greatest compression of data. A model family $f$ with parameters $\theta$ assigns the likelihood $f(y|\theta)$ to a given set of observed data $y$. The full form of the MDL measure for such a model family is given below:

$$\mathrm{MDL} = -\ln f(y|\hat\theta) + \frac{k}{2}\ln\!\left(\frac{N}{2\pi}\right) + \ln\int d\theta\, \sqrt{\det I(\theta)}$$

where $\hat\theta$ is the parameter that maximizes the likelihood, $k$ is the number of parameters in the model, $N$ is the sample size, and $I(\theta)$ is the Fisher information matrix. MDL is the length in bits of the shortest possible code that describes the data with the help of a model. In the context of cognitive modeling, the model that minimizes MDL uncovers the greatest amount of regularity (i.e., knowledge) underlying the data and therefore should be selected. The first, maximized log likelihood term is the lack-of-fit measure, and the second and third terms constitute the intrinsic complexity of the model. In particular, the third term captures the effects of complexity due to functional form, reflected through $I(\theta)$. We will call the latter two terms together the geometric complexity of the model, for reasons that will become clear in the remainder of this paper. MDL arises as a finite series of terms in an asymptotic expansion of the Bayesian posterior probability of a model given the data, for a special form of the parameter prior density [2]. Hence, in essence, minimization of MDL is equivalent to maximization of the Bayesian posterior probability. In this paper we present a geometric interpretation of MDL, as well as of Bayesian model selection [3], that provides an elegant and intuitive framework for understanding model complexity, a central concept in model selection.

2 Differential Geometric Interpretation of MDL

From a geometric perspective, a parametric model family of probability distributions forms a Riemannian manifold embedded in the space of all probability distributions [4]. Every distribution is a point in this space, and the collection of points created by varying the parameters of the model gives rise to a hyper-surface in which "similar" distributions are mapped to "nearby" points. The infinitesimal distance between points separated by the infinitesimal parameter differences $d\theta_i$ is given by $ds^2 = \sum_{i,j=1}^{k} g_{ij}(\theta)\, d\theta_i\, d\theta_j$, where $g_{ij}(\theta)$ is the Riemannian metric tensor. The Fisher information, $I_{ij}(\theta)$, is the natural metric on a manifold of distributions in the context of statistical inference [4]. We argue that the MDL measure of model fitness has an attractive interpretation in such a geometric context.
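As a toy illustration of the three terms (an example of our own, not from the paper), consider a one-parameter Bernoulli model, for which the Fisher information is $I(\theta) = 1/(\theta(1-\theta))$ and the third term has the closed form $\ln\int_0^1 \sqrt{I(\theta)}\, d\theta = \ln \pi$:

import numpy as np

def mdl_bernoulli(y):
    """MDL score for a one-parameter Bernoulli model of binary data y.

    For Bernoulli(theta), I(theta) = 1/(theta(1-theta)), so the geometric
    term ln ∫ sqrt(det I) dtheta equals ln(pi) in closed form.
    (Illustrative toy model, not an example from the paper.)
    """
    y = np.asarray(y)
    N, k = len(y), 1
    theta_hat = np.clip(y.mean(), 1e-12, 1 - 1e-12)   # guard log(0) at boundary
    s = y.sum()
    neg_loglik = -(s * np.log(theta_hat) + (N - s) * np.log(1 - theta_hat))
    dim_term = 0.5 * k * np.log(N / (2 * np.pi))
    geom_term = np.log(np.pi)   # ∫_0^1 dtheta / sqrt(theta(1-theta)) = pi
    return neg_loglik + dim_term + geom_term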
The first term in MDL estimates the accuracy of the model, since the likelihood $f(y|\hat\theta)$ measures the ability of the model to fit the observed data. The second and third terms are supposed to penalize model complexity; we will show that they have interesting geometric interpretations. Given the metric $g_{ij} = I_{ij}$ on the space of parameters, the infinitesimal volume element on the parameter manifold is

$$dV = d\theta\,\sqrt{\det I(\theta)} \equiv \prod_{i=1}^{k} d\theta_i\; \sqrt{\det I(\theta)} .$$

The Riemannian volume of the parameter manifold is obtained by integrating $dV$ over the space of parameters:

$$V_M = \int dV = \int d\theta\,\sqrt{\det I(\theta)} .$$

In other words, the third term in MDL penalizes models that occupy a large volume in the space of distributions. In fact, the volume measure $V_M$ is related to the number of "distinguishable" probability distributions indexed by the model $M$.¹ Because of the way the model family is embedded in the space of distributions, two different parameter values can index very similar distributions. If complexity is related to volumes occupied by model manifolds, the measure of volume should count only different, or distinguishable, distributions, and not the artificial coordinate volume. It is shown in [2,5] that the volume $V_M$ achieves this goal.²

While the third term in MDL measures the total volume of distributions a model can describe, the second term relates to the number of model distributions that lie close to the truth. To see this, taking a Bayesian perspective on model selection is helpful. Using Bayes' rule, the probability that the truth lies in the family $f$ given the observed data $y$ can be written as

$$\Pr(f|y) = A(f,y)\int d\theta\, w(\theta)\, \Pr(y|\theta) .$$

Here $w(\theta)$ is the prior probability of the parameter $\theta$, and $A(f,y) = \Pr(f)/\Pr(y)$ is the ratio of the prior probabilities of the family $f$ and data $y$. Bayesian methods assume that the latter are the same for all models under consideration and analyze the so-called Bayesian posterior $P_f = \int d\theta\, w(\theta)\, \Pr(y|\theta)$. Lacking prior knowledge, $w$ should be chosen to weight all distinguishable distributions in the family equally; hence, $w(\theta)\, d\theta = dV/V_M$. For large sample sizes, the likelihood function $f(y|\theta)$ localizes under general conditions to a multivariate Gaussian centered at the maximum likelihood parameter $\hat\theta$ (see [3,4] and citations therein). In this limit, the integral for $P_f$ can be explicitly carried out. Performing the integral and taking a log gives the result

$$-\ln P_f = -\ln f(y|\hat\theta) + \ln(V_M / C_M) + O(1/N), \quad \text{where} \quad C_M = (2\pi/N)^{k/2}\, h(\hat\theta)$$

and $h(\hat\theta)$ is a data-dependent factor that goes to 1 for large $N$ when the truth lies within $f$ (see [3,4] for details). $C_M$ is essentially the volume of an ellipsoidal region around the Gaussian peak at $f(y|\hat\theta)$ where the integrand of the Bayesian posterior makes a substantial contribution. In effect, $C_M$ measures the number of distinguishable distributions within $f$ that lie close to the truth.

¹ Roughly speaking, two probability distributions are considered indistinguishable if one is mistaken for the other even in the presence of an infinite amount of data. A careful definition of distinguishability involves the Kullback-Leibler distance between two probability distributions. For further details, see [3,4].

² Note that the parameters of the model are always assumed to be cut off in a manner that ensures that $V_M$ is finite.
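Continuing the Bernoulli toy example, the sketch below evaluates the exact posterior integral, reading the uniform-on-distinguishable-distributions prior as $w(\theta)\,d\theta = \sqrt{I(\theta)}\,d\theta / V_M$, and compares it with the asymptotic form $-\ln f(y|\hat\theta) + \ln(V_M/C_M)$. The grid size and function name are illustrative assumptions.

import numpy as np

def log_posterior_bernoulli(y, n_grid=20001):
    """Exact vs. asymptotic -ln P_f for the Bernoulli family (toy example).

    Prior: w(theta) dtheta = sqrt(I(theta)) dtheta / V_M, with
    I(theta) = 1/(theta(1-theta)); asymptotic C_M = (2 pi / N)^{1/2} (h -> 1).
    """
    y = np.asarray(y)
    N, s = len(y), y.sum()
    theta = np.linspace(1e-6, 1 - 1e-6, n_grid)
    log_lik = s * np.log(theta) + (N - s) * np.log(1 - theta)
    sqrt_fisher = 1.0 / np.sqrt(theta * (1 - theta))
    V_M = np.trapz(sqrt_fisher, theta)                 # ~ pi for this family

    shift = log_lik.max()                              # stabilize the integral
    post = np.trapz(np.exp(log_lik - shift) * sqrt_fisher, theta) / V_M
    exact = -(shift + np.log(post))

    theta_hat = np.clip(s / N, 1e-12, 1 - 1e-12)
    C_M = (2 * np.pi / N) ** 0.5                       # k = 1, h -> 1
    approx = -(s * np.log(theta_hat) + (N - s) * np.log(1 - theta_hat)) \
             + np.log(V_M / C_M)
    return exact, approx

For moderate $N$ (and $\hat\theta$ away from the boundary) the two values agree closely, in line with the $O(1/N)$ error above.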
Using the expressions for $C_M$ and $V_M$, the MDL selection criterion can be written as

$$\mathrm{MDL} = -\ln f(y|\hat\theta) + \ln(V_M / C_M) + \text{terms subleading in } N$$

(the subleading terms include the contribution of $h(\hat\theta)$; see [3,4] regarding its role in Bayesian inference). The geometric meaning of the complexity penalty in MDL now becomes clear: models which occupy a relatively large volume distant from the truth are penalized, while models that contain a relatively large fraction of distributions lying close to the truth are preferred. Therefore, we refer to the last two terms in MDL as geometric complexity. It is also illuminating to collect terms in MDL as

$$\mathrm{MDL} = -\ln\!\left[\frac{f(y|\hat\theta)}{V_M/C_M}\right] = -\ln(\text{"normalized maximized likelihood"}) .$$

Written this way, MDL selects the model that gives the highest value of the maximum likelihood, per the relative ratio of distinguishable distributions ($V_M/C_M$). From this perspective, a better model is simply one with many distinguishable distributions close to the truth, but few distinguishable distributions overall.

3 Application Examples

Geometric complexity and MDL constitute a powerful pair of model evaluation tools. When used together in model testing, a deeper understanding of the relationship between models can be gained. The first measure enables one to assess the relative complexities of the set of models under consideration. The second builds on the first by suggesting which model is preferable given the data in hand. The following simulations demonstrate the application of these methods in two areas of cognitive modeling: information integration and categorization. In each example, two competing models were fitted to artificial data sets generated by each model. Of interest is the ability of a selection method to recover the model that generated the data. MDL is compared with two other selection methods, both of which consider the number of parameters only: the Akaike Information Criterion (AIC; [6]) and the Bayesian Information Criterion (BIC; [7]), defined as

$$\mathrm{AIC} = -2\ln f(y|\hat\theta) + 2k ; \qquad \mathrm{BIC} = -2\ln f(y|\hat\theta) + k\ln N .$$

3.1 Information Integration

In a typical information integration experiment, a range of stimuli are generated from a factorial manipulation of two or more stimulus dimensions (e.g., visual and auditory) and then presented to participants for categorization as one of two or more possible response alternatives. Data are scored as the proportion of responses in one category across the various combinations of stimulus dimensions. For this comparison, we consider two models of information integration, the Fuzzy Logical Model of Perception (FLMP; [8]) and the Linear Integration Model (LIM; [9]). Each assumes that the response probability ($p_{ij}$) of one category, say A, upon presentation of a stimulus with the specific $i$ and $j$ feature dimensions in a two-factor information integration experiment takes the following form:

$$\mathrm{FLMP:}\quad p_{ij} = \frac{\theta_i \lambda_j}{\theta_i \lambda_j + (1-\theta_i)(1-\lambda_j)} ; \qquad \mathrm{LIM:}\quad p_{ij} = \frac{\theta_i + \lambda_j}{2}$$

where $\theta_i$ and $\lambda_j$ ($i = 1,\dots,q_1$; $j = 1,\dots,q_2$; $0 < \theta_i, \lambda_j < 1$) are parameters representing the corresponding feature dimensions.
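A sketch of how such a model-recovery step might be set up: maximum-likelihood fitting of FLMP or LIM to a matrix of binomial response counts, with AIC and BIC computed from the fit. The binomial sampling scheme, logit parameterization, optimizer choice, and the definition of $N$ are our assumptions for illustration; the paper does not specify its fitting procedure at this level of detail.

import numpy as np
from scipy.optimize import minimize

def fit_integration_model(nA, n_trials, model='FLMP', q1=2, q2=2, seed=0):
    """ML fit of FLMP or LIM to an information-integration data matrix.

    nA[i, j]: count of category-A responses for stimulus (i, j) out of
    n_trials. Logit-parameterized to keep 0 < theta, lambda < 1.
    """
    rng = np.random.default_rng(seed)

    def predict(params):
        t = 1 / (1 + np.exp(-params[:q1]))          # theta_i
        l = 1 / (1 + np.exp(-params[q1:]))          # lambda_j
        if model == 'FLMP':
            num = t[:, None] * l[None, :]
            return num / (num + (1 - t)[:, None] * (1 - l)[None, :])
        return (t[:, None] + l[None, :]) / 2        # LIM

    def neg_loglik(params):
        p = np.clip(predict(params), 1e-9, 1 - 1e-9)
        return -(nA * np.log(p) + (n_trials - nA) * np.log(1 - p)).sum()

    res = minimize(neg_loglik, rng.normal(size=q1 + q2), method='Nelder-Mead')
    k = q1 + q2
    N = nA.size * n_trials    # total observations; one convention for N
    aic = 2 * res.fun + 2 * k
    bic = 2 * res.fun + k * np.log(N)
    return res.x, res.fun, aic, bic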
The simulation results are shown in Table 1. When the data were generated by FLMP, regardless of the selection method used, FLMP was recovered 100% of the time. This was true across all selection methods and across both sample sizes, except for MDL when the sample size was 20; in this case, MDL did not perform quite as well as the other selection methods. When the data were generated by LIM, AIC or BIC fared much more poorly, whereas MDL recovered the correct model (LIM) across both sample sizes. Specifically, under AIC or BIC, FLMP was selected over LIM half of the time for N = 20 (51% vs. 49%), though such errors were reduced for N = 150 (17% vs. 83%).

Table 1: Model recovery rates for two information integration models.

                                              Data from:
Sample size   Method     Model fitted     FLMP      LIM
N = 20        AIC/BIC    FLMP             100%      51%
                         LIM                0%      49%
              MDL        FLMP              89%       0%
                         LIM               11%     100%
N = 150       AIC/BIC    FLMP             100%      17%
                         LIM                0%      83%
              MDL        FLMP             100%       0%
                         LIM                0%     100%

That FLMP is selected over LIM when a method such as AIC is used, even when the data were generated by LIM, suggests that FLMP is more complex than LIM. This observation was confirmed when the geometric complexity of each model was calculated. The difference in geometric complexity between FLMP and LIM was 8.74, meaning that for every distinguishable distribution for which LIM can account, FLMP can describe about $e^{8.74} \approx 6248$ distinguishable distributions. Obviously, this difference in complexity between the two models must be due to the functional form, because they have the same number of parameters.

3.2 Categorization

Two models of categorization were considered in the present demonstration: the generalized context model (GCM; [10]) and the prototype model (PRT; [11]). Each model assumes that categorization responses follow a multinomial probability distribution with $p_{iJ}$ (the probability of a category $C_J$ response given stimulus $X_i$), which is given by

$$\mathrm{GCM:}\quad p_{iJ} = \frac{\sum_{j \in C_J} s_{ij}}{\sum_{K}\sum_{k \in C_K} s_{ik}} ; \qquad \mathrm{PRT:}\quad p_{iJ} = \frac{S_{iJ}}{\sum_{K} S_{iK}}$$

In the equations, $s_{ij}$ is a similarity measure between multidimensional stimuli $X_i$ and $X_j$, and $S_{iJ}$ is a similarity measure between stimulus $X_i$ and the prototypic stimulus $X_J$ of category $C_J$. Similarity is measured using the Minkowski distance metric with metric parameter $r$. The two models were fitted to data sets generated by each model using the six-dimensional scaling solution from Experiment 1 of [12] under the Euclidean distance metric, $r = 2$. As shown in Table 2, under AIC or BIC, a relatively small bias toward choosing GCM was found using data generated from PRT when N = 20. When MDL was used to choose between the two models, there was an improvement over AIC in correcting the bias. In the larger sample size condition, there was no difference in model recovery rate between AIC and MDL. This outcome contrasts with that of the preceding example, in which MDL was generally superior to the other selection methods when the sample size was smallest.

Table 2: Model recovery rates for two categorization models.

                                              Data from:
Sample size   Method     Model fitted     GCM       PRT
N = 20        AIC/BIC    GCM               98%      15%
                         PRT                2%      85%
              MDL        GCM               96%       7%
                         PRT                4%      93%
N = 150       AIC/BIC    GCM               99%       1%
                         PRT                1%      99%
              MDL        GCM               99%       1%
                         PRT                1%      99%

On the face of it, these findings would suggest that MDL is not much better than the other selection methods. After all, what else could cause this result? The only circumstance in which such an outcome is predicted under MDL is when the functional forms of the two models are similar (recall that the models have the same number of parameters), thus minimizing the differential contribution of functional form to the complexity term. Calculation of the geometric complexity of each model confirmed this suspicion. GCM is indeed only slightly more complex than PRT, the difference being equal to 0.60, so GCM can describe about two distributions ($e^{0.60} \approx 1.8$) for every distribution PRT can describe.
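For illustration, the choice rules of the two models are sketched below, assuming the common exponential similarity function $s = \exp(-c\, d)$ with Euclidean distance ($r = 2$); the similarity function and scaling parameter $c$ are standard GCM-style assumptions rather than details given above.

import numpy as np

def gcm_prt_choice_probs(x, exemplars, prototypes, labels, c=1.0):
    """Category-choice probabilities under GCM and PRT for one stimulus x.

    exemplars: (n, d) MDS coordinates with integer category labels;
    prototypes: (K, d), where prototypes[k] belongs to np.unique(labels)[k].
    Assumes exponential similarity s = exp(-c * d), Euclidean distance.
    """
    d_ex = np.linalg.norm(exemplars - x, axis=1)
    s_ex = np.exp(-c * d_ex)
    cats = np.unique(labels)
    gcm = np.array([s_ex[labels == k].sum() for k in cats])
    gcm /= gcm.sum()                   # p_iJ = sum_{j in C_J} s_ij / total

    d_pr = np.linalg.norm(prototypes - x, axis=1)
    s_pr = np.exp(-c * d_pr)
    prt = s_pr / s_pr.sum()            # p_iJ = S_iJ / sum_K S_iK
    return gcm, prt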
These simulation results together demonstrate the usefulness of MDL and the geometric complexity measure in testing models of cognition. MDL's sensitivity to functional form was clearly demonstrated in its superior model recovery rate, especially when the complexities of the models differed by a nontrivial amount.

4 Conclusion

Model selection in cognitive science can proceed far more confidently with a clear understanding of why one model should be preferred over another. A geometric interpretation of MDL helps to achieve this goal. The work carried out thus far indicates that MDL, along with the geometric complexity measure, holds considerable promise in evaluating computational models of cognition. MDL chooses the correct model most of the time, and geometric complexity provides a measure of how different two models are in their capacity or power. Future work is directed toward extending this approach to other classes of models, such as connectionist networks.

Acknowledgment and Authors Note

M.A.P. and I.J.M. were supported by NIMH Grant MH57472. V.B. was supported by the Society of Fellows and the Milton Fund of Harvard University, by NSF grant NSF-PHY-9802709, and by DOE grant DOE-FG02-95ER40893. The present work is based in part on [5] and [13].

References

[1] Rissanen, J. (1996) Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42, 40-47.
[2] Balasubramanian, V. (1997) Statistical inference, Occam's razor and statistical mechanics on the space of probability distributions. Neural Computation, 9, 349-368.
[3] MacKay, D. J. C. (1992) Bayesian interpolation. Neural Computation, 4, 415-447.
[4] Amari, S. I. (1985) Differential Geometrical Methods in Statistics. Springer-Verlag.
[5] Myung, I. J., Balasubramanian, V., & Pitt, M. A. (2000) Counting probability distributions: Differential geometry and model selection. Proceedings of the National Academy of Sciences USA, 97, 11170-11175.
[6] Akaike, H. (1973) Information theory and an extension of the maximum likelihood principle. In B. N. Petrov and F. Csaki (eds.), Second International Symposium on Information Theory, pp. 267-281. Akademiai Kiado, Budapest.
[7] Schwarz, G. (1978) Estimating the dimension of a model. The Annals of Statistics, 6, 461-464.
[8] Oden, G. C., & Massaro, D. W. (1978) Integration of featural information in speech perception. Psychological Review, 85, 172-191.
[9] Anderson, N. H. (1981) Foundations of Information Integration Theory. Academic Press.
[10] Nosofsky, R. M. (1986) Attention, similarity and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57.
[11] Reed, S. K. (1972) Pattern recognition and categorization. Cognitive Psychology, 3, 382-407.
[12] Shin, H. J., & Nosofsky, R. M. (1992) Similarity-scaling studies of dot-pattern classification and recognition. Journal of Experimental Psychology: General, 121, 278-304.
[13] Pitt, M. A., Myung, I. J., & Zhang, S. (2000) Toward a method of selecting among computational models of cognition. Submitted for publication.
Learning Joint Statistical Models for Audio-Visual Fusion and Segregation

John W. Fisher III*
Massachusetts Institute of Technology
Cambridge, MA 02139
fisher@ai.mit.edu

Trevor Darrell
Massachusetts Institute of Technology
Cambridge, MA 02139
trevor@ai.mit.edu

William T. Freeman
Mitsubishi Electric Research Laboratory
Cambridge, MA 02139
freeman@merl.com

Paul Viola
Massachusetts Institute of Technology
Cambridge, MA 02139
viola@ai.mit.edu

* http://www.ai.mit.edu/people/fisher

Abstract

People can understand complex auditory and visual information, often using one to disambiguate the other. Automated analysis, even at a low level, faces severe challenges, including the lack of accurate statistical models for the signals, and their high dimensionality and varying sampling rates. Previous approaches [6] assumed simple parametric models for the joint distribution which, while tractable, cannot capture the complex signal relationships. We learn the joint distribution of the visual and auditory signals using a non-parametric approach. First, we project the data into a maximally informative, low-dimensional subspace, suitable for density estimation. We then model the complicated stochastic relationships between the signals using a nonparametric density estimator. These learned densities allow processing across signal modalities. We demonstrate, on synthetic and real signals, localization in video of the face that is speaking in audio, and, conversely, audio enhancement of a particular speaker selected from the video.

1 Introduction

Multi-media signals pervade our environment. Humans face complex perception tasks in which ambiguous auditory and visual information must be combined in order to support accurate perception. By contrast, automated approaches for processing multi-media data sources lag far behind. Multi-media analysis (sometimes called sensor fusion) is often formulated in a maximum a posteriori (MAP) or maximum likelihood (ML) estimation framework. Simplifying assumptions about the joint measurement statistics are often made in order to yield tractable analytic forms. For example, Hershey and Movellan have shown that correlations between video data and audio can be used to highlight regions of the image which are the "cause" of the audio signal. While such pragmatic choices may lead to simple statistical measures, they do so at the cost of modeling capacity. Furthermore, these assumptions may not be appropriate for fusing modalities such as video and audio. The joint statistics for these and many other mixed-modal signals are not well understood and are not well modeled by simple densities such as multivariate exponential distributions. For example, face motions and speech sounds are related in very complex ways. A critical question is whether, in the absence of an adequate parametric model for the joint measurement statistics, one can integrate measurements in a principled way without discounting statistical uncertainty. This suggests that a nonparametric statistical approach may be warranted. In the nonparametric statistical framework, principles such as MAP and ML are equivalent to the information-theoretic concepts of mutual information and entropy. Consequently we suggest an approach for learning maximally informative joint subspaces for multi-media signal analysis. The technique is a natural application of [8, 3, 5, 4], which formulates a learning approach by which the entropy, and by extension the mutual information, of a differentiable map may be optimized.
By way of illustration we present results of audio/video analysis using the suggested approach on both simulated and real data. In the experiments we are able to show significant audio signal enhancement and video source localization.

2 Information Preserving Transformations

Entropy is a useful statistical measure as it captures uncertainty in a general way. As the entropy of a density decreases, so does the volume of the typical set [2]. Similarly, mutual information quantifies the information (uncertainty reduction) that two random variables convey about each other. The challenge of using such measures for learning is that they are integral functions of densities (densities which must be inferred from samples).

2.1 Maximally Informative Subspaces

In order to make the problem tractable we project high-dimensional audio and video measurements onto low-dimensional subspaces. The parameters of the subspace are not chosen in an ad hoc fashion, but are learned by maximizing the mutual information between the derived features. Specifically, let $v_i \sim V \in \Re^{N_v}$ and $a_i \sim A \in \Re^{N_a}$ be video and audio measurements, respectively, taken at time $i$. Let $f_v : \Re^{N_v} \mapsto \Re^{M_v}$ and $f_a : \Re^{N_a} \mapsto \Re^{M_a}$ be mappings parameterized by the vectors $\alpha_v$ and $\alpha_a$, respectively. In our experiments $f_v$ and $f_a$ are single-layer perceptrons and $M_v = M_a = 1$. The method extends to any differentiable mapping and output dimensionality [3]. During adaptation the parameter vectors $\alpha_v$ and $\alpha_a$ (the perceptron weights) are chosen such that

$$\{\hat\alpha_v, \hat\alpha_a\} = \arg\max_{\alpha_v, \alpha_a} I\big(f_v(V; \alpha_v),\, f_a(A; \alpha_a)\big) . \qquad (1)$$

This process is illustrated notionally in figure 1, in which video frames and sequences of periodogram coefficients are projected to scalar values. A clear advantage of learning a projection is that, rather than requiring pixels of the video frames or spectral coefficients to be inspected individually, the projection summarizes the entire set efficiently into two scalar values (one for video and one for audio). We have little reason to believe that joint audio/video measurements are accurately characterized by simple parametric models (e.g., exponential or uni-modal densities). Moreover, low-dimensional projections which do not preserve this complex structure will not capture the true form of the relationship (i.e., random low-dimensional projections of structured data are typically gaussian). The low-dimensional projections which are learned by maximizing mutual information reduce the complexity of the joint distribution, but still preserve the important and potentially complex relationships between audio and visual signals.

Figure 1: Fusion: projection of the video sequence and the audio sequence to the learned subspace.

The possibility of such a learned subspace motivates the methodology of [3, 8], in which the density in the joint subspace is modeled nonparametrically. This brings us to the natural question regarding the utility of the learned subspace. There are a variety of ways the subspace and the associated joint density might be used, for example, to manipulate one of the disparate signals based on another. For the particular applications we address in our experiments, we shall see that it is the mapping parameters $\{\alpha_v, \alpha_a\}$ which will be most useful. We will illustrate the details as we go through the experiments.

3 Empirical Results

In order to demonstrate the efficacy of the approach we present a series of audio/video analysis experiments of increasing complexity. In these experiments, two subspace mappings are learned, one from video and another from audio.
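A simplified sketch of Eq. (1) for linear $f_v, f_a$ with $M_v = M_a = 1$. The paper optimizes a nonparametric (Parzen-window based) mutual information estimate by gradient ascent [3, 8]; here, as a stand-in, a histogram plug-in estimate is maximized with a derivative-free optimizer, which is only practical for modest input dimensions. Function names are illustrative.

import numpy as np
from scipy.optimize import minimize

def hist_mutual_info(u, v, bins=16):
    """Plug-in MI estimate from a 2D histogram of the two scalar projections."""
    pxy, _, _ = np.histogram2d(u, v, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

def learn_informative_subspace(video_feats, audio_feats, seed=0):
    """Jointly fit linear maps f_v, f_a maximizing I(f_v(V), f_a(A)).

    video_feats: (T, Nv) array; audio_feats: (T, Na) array.
    Stand-in optimizer; the paper uses gradient ascent on a Parzen estimate.
    """
    Nv, Na = video_feats.shape[1], audio_feats.shape[1]
    rng = np.random.default_rng(seed)

    def neg_mi(w):
        u = video_feats @ w[:Nv]
        v = audio_feats @ w[Nv:]
        return -hist_mutual_info(u, v)

    res = minimize(neg_mi, rng.normal(size=Nv + Na), method='Nelder-Mead',
                   options={'maxiter': 5000, 'xatol': 1e-4, 'fatol': 1e-4})
    return res.x[:Nv], res.x[Nv:]    # alpha_v, alpha_a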
In all cases, video data is sampled at 30 frames/second. We use both pixel-based representations (raw pixel data) and motion-based representations (i.e., optical flow [1]). Anandan's optical flow algorithm [1] is a coarse-to-fine method, implemented on a Laplacian pyramid, based on minimizing the sum of squared differences between frames. Confidence measures are derived from the principal curvatures of fitted quadratic surfaces. A smoothness constraint is also applied to the final velocity estimates. When raw video is used as input to the subspace mapper, the pixels are collected into a single vector. The raw video images range in resolution from 240 by 180 (i.e., 43,200 dimensions) to 320 by 240 (i.e., 76,800 dimensions). When optical flow is used as input to the subspace mapper, the vector-valued flow for each pixel is collected into a single vector, yielding an input vector with twice as many dimensions as pixels. Audio data is sampled at 11.025 KHz. Raw audio is transformed into periodogram coefficients. Periodograms are computed using Hamming windows of 5.4 ms duration, sampled at 30 Hz (commensurate with the video rate). At each point in time there are 513 periodogram coefficients input to the subspace mapper.

Figure 2: Synthetic image sequence examples (left). Mouth parameters are functionally related to one audio signal. Flow field horizontal component (center) and vertical component (right).

3.1 A Simple Synthetic Example

We begin with a simple synthetic example. The goal of the experiment is to use a video sequence to enhance an associated audio sequence. Figure 2 shows examples from a synthetically generated image sequence of faces (and the associated optical flow field). In the sequence the mouth is described by an ellipse. The parameters of the ellipse are functionally related to a recorded audio signal. Specifically, the area of the ellipse is proportional to the average power of the audio signal (computed over the same periodogram window), while the eccentricity is controlled by the entropy of the normalized periodogram. Consequently, observed changes in the image sequence are functionally related to the recorded audio signal. It is not necessary (right now) that the relationship be realistic, only that it exists. The associated audio signal is mixed with an interfering, or noise, signal. Their spectra, shown in figure 3 (left), clearly overlap. If the power spectra of the associated and interfering signals were known, then the optimal filter for recovering the associated audio sequence would be the Wiener filter. Its spectrum is described by

$$H(f) = \frac{P_a(f)}{P_a(f) + P_n(f)} \qquad (2)$$

where $P_a(f)$ is the power spectrum of the desired signal and $P_n(f)$ is the power spectrum of the interfering signal. In general this information is unknown, but for our experiments it is useful as a benchmark, as it represents an upper bound on performance. That is, in a second-order sense, all filters (including ours) will underperform the Wiener filter. Furthermore, suppose $y = s_a + n$, where $s_a$ is the signal of interest and $n$ is an independent interference signal. It can be shown that

$$\frac{P_{s_a}}{P_n} = \frac{\rho^2}{1 - \rho^2} \qquad (3)$$

where $\rho$ is the correlation coefficient between $s_a$ and the corrupted version $y$, and $P_{s_a}/P_n$ is the signal-to-noise power ratio (SNR). Consequently, given a reference signal and some signal plus interferer, we can use the relationships above to gauge signal enhancement.
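The benchmark quantities of Eqs. (2) and (3) are straightforward to compute when, as in this synthetic experiment, the clean signal and interferer are available separately; the crude FFT-based spectral estimates below are illustrative.

import numpy as np

def wiener_gain_and_snr(s, n):
    """Benchmarks from Eqs. (2)-(3), given clean signal s and noise n.

    Returns the Wiener filter H(f) = P_s / (P_s + P_n) on a crude
    periodogram grid, and the SNR (in dB) implied by the correlation
    rho between s and y = s + n.
    """
    Ps = np.abs(np.fft.rfft(s)) ** 2
    Pn = np.abs(np.fft.rfft(n)) ** 2
    H = Ps / (Ps + Pn)                       # Eq. (2)

    y = s + n
    rho = np.corrcoef(s, y)[0, 1]
    snr_db = 10 * np.log10(rho ** 2 / (1 - rho ** 2))   # Eq. (3)
    return H, snr_db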
The question we address is how well we can do, using the associated video data, in the absence of the separate power spectra which are necessary to implement the Wiener filter. It is not immediately obvious how one might achieve signal enhancement by learning a joint subspace in the manner described. Our intuition is as follows. For this simple case it is only the associated audio signal which bears any relationship to the video sequence. Furthermore, the coefficients of the audio projection, $\alpha_a$, correspond to spectral coefficients. Our reasoning is that large-magnitude coefficients correspond to those spectral components which have more signal component than those with small magnitude. Using this reasoning we can construct a filter whose coefficients are proportional to our projection $\alpha_a$. Specifically, we use the following to design our filter:

$$H_{MI}(f) = \beta\left(\frac{|\alpha_a(f)| - \min(|\alpha_a(f)|)}{\max(|\alpha_a(f)|) - \min(|\alpha_a(f)|)}\right) + \frac{1-\beta}{2}, \qquad 0 < \beta < 1, \qquad (4)$$

where $\alpha_a(f)$ are the audio projection coefficients associated with spectral coefficient $f$. For our experiments, $\beta = 0.90$; consequently $0.05 \le H_{MI}(f) \le 0.95$.

Figure 3: Spectra of the audio signals. The solid line indicates the desired audio component while the dashed line indicates the interference.

While somewhat ad hoc, the filter is consistent with our reasoning above and, as we shall see, yields good results. Furthermore, because the signal and interferer are known (in our experimental set-up), we can compare our results to the unachievable, yet optimal, Wiener filter for this case. In this case the SNR was 0 dB; furthermore, as the two signals have significant spectral overlap, signal recovery is challenging. The optimal Wiener filter achieves a signal processing gain of 2.6 dB, while the filter constructed as described achieves 2.0 dB (when using images directly) and 2.1 dB (when using optical flow).
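A sketch of constructing and applying the Eq. (4) filter from learned audio projection coefficients; the frame-wise application shown in the comment is one plausible way to use it, not a detail specified above.

import numpy as np

def mi_filter(alpha_a, beta=0.90):
    """Enhancement filter of Eq. (4) built from audio projection coefficients.

    alpha_a: learned spectral projection weights, one per periodogram bin.
    Large-magnitude coefficients are taken as signal-dominated bins.
    """
    a = np.abs(alpha_a)
    norm = (a - a.min()) / (a.max() - a.min())
    return beta * norm + (1 - beta) / 2   # values in [(1-beta)/2, (1+beta)/2]

# One plausible application: scale each periodogram frame bin-wise, e.g.
# enhanced_frames = mi_filter(alpha_a)[None, :] * periodogram_frames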
3.2 Video Attribution of a Single Audio Source

The previous example demonstrated that the audio projection coefficients could be used to reduce an interfering signal. We move now to a different experiment using real data. Figure 4(a) shows a video frame from the sequence used in the next experiment. In the scene there is a person speaking in the foreground, a person moving in the background, and a monitor which is flickering. There is a single audio signal source (of the speaker) but several interfering motion fields in the video sequence. Figure 4(b) shows the pixel-wise standard deviations of the video sequence, while figure 4(c) shows the pixel-wise flow-field energy. These images show that there are many sources of change in the image. Note that the most intense changes in the image sequence are associated with the monitor and not the speaker. Our goal with this experiment is to show that, via the method described, we can properly attribute the region of the video image which is associated with the audio sequence. The intuition is similar to the previous experiment. We expect that large image projection coefficients $\alpha_v$ correspond to those pixels which are related to the audio signal. Figure 4(d) shows the image of $\alpha_v$ when images are fed directly into the algorithm, while figure 4(e) shows the same image when flow fields are the input. Clearly both cases have detected regions associated with the speaker, with the substantive difference being that the use of flow fields resulted in a smoother attribution.

Figure 4: Video attribution: (a) example image, (b) pixel standard deviations, (c) flow vector energy, (d) image of $\alpha_v$ (pixel features), (e) image of $\alpha_v$ (flow field features).

3.3 User-assisted Audio Enhancement

We now repeat the initial synthetic experiment of section 3.1 using real data. In this case there are two speakers recorded with a single microphone (the speakers were recorded with stereo microphones so as to obtain a reference, but the experiments used a single mixed audio source). Figure 5(a) shows an example frame from the video sequence. We now demonstrate the ability to enhance the audio signal in a user-assisted fashion. By selecting data from one box or the other in figure 5(a), we can enhance the voice of the speaker on the left or right.

Figure 5: User-assisted audio enhancement: (a) example image, with user-chosen regions, (b) image of $\alpha_v$ for region 1, (c) image of $\alpha_v$ for region 2.

As the original data was collected with stereo microphones, we can again compare our result to an approximation of the Wiener filter (neglecting cross-channel leakage). In this case, because the speakers are male and female, the signals have better spectral separation. Consequently the Wiener filter achieves a better signal processing gain. For the male speaker the Wiener filter improves the SNR by 10.43 dB, while for the female speaker the improvement is 10.5 dB. Using our technique we are able to achieve an 8.9 dB SNR gain (pixel based) and a 9.2 dB SNR gain (optical flow based) for the male speaker, while for the female speaker we achieve 5.7 and 5.6 dB, respectively. It is not clear why performance is not as good for the female speaker, but figures 5(b) and (c) are provided by way of partial explanation. Having recovered the audio in the user-assisted fashion described, we used the recovered audio signal for video attribution (pixel-based) of the entire scene. Figures 5(b) and (c) are the images of the resulting $\alpha_v$ when using the male (b) and female (c) recovered voice signals. The attribution of the male speaker in (b) appears to be clearer than that in (c). This may be an indication that the video cues were not as detectable for the female speaker as they were for the male in this experiment. In any event, these results are consistent with the enhancement results described above.

4 Applications

There are several practical applications for the techniques described in this paper. One key area is speech recognition. Recent commercial advances in speech recognition rely on careful placement of the microphone so that background sounds are minimized. Results in more natural environments, where the microphone is some distance from the speaker and there is significant background noise, are disappointing. Our approach may prove useful for teleconferencing, where audio and video of multiple speakers are recorded simultaneously. Other applications include broadcast television in situations where careful microphone placement is not possible, or post-hoc processing to enhance the audio channel. For example, if one speaker's microphone at a news conference malfunctions, the voice of that speaker might be enhanced with the aid of video information.

5 Conclusions

One key contribution of this paper is to extend the notion of multi-media fusion to complex domains in which the statistical relationship between audio and video is complex and non-gaussian.
This claim is supported in part by the results of Slaney and Covell, in which canonical correlations failed to detect audio/video synchrony when a spectral representation was used for the audio signal [7]. Previous approaches have attempted to model these relationships using simple models, such as measuring the short-term correlation between pixel values and the sound signal [6]. The power of the nonparametric mutual information approach allows our technique to handle complex nonlinear relationships between audio and video signals. One demonstration of this modeling flexibility is the insensitivity to the form of the input signals. Experiments were performed using raw pixel intensities as well as optical flow (which is a complex nonlinear function of pixel values across time), yielding similar results. Another key contribution is to establish an important application for this approach: video-enhanced audio segmentation. Initial experiments have shown that information from the video signal can be used to reduce the noise in a simultaneously recorded audio signal. Noise is reduced without any a priori information about the form of the audio signal or noise. Surprisingly, in our limited experiments, the noise reduction approaches what is possible using a priori knowledge of the audio signal (using Wiener filtering).

References

[1] P. Anandan. A computational framework and an algorithm for the measurement of visual motion. Int. J. Comp. Vision, 2:283-310, 1989.
[2] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., New York, 1991.
[3] J. Fisher and J. Principe. Unsupervised learning for nonlinear synthetic discriminant functions. In D. Casasent and T. Chao, editors, Proc. SPIE, Optical Pattern Recognition VII, volume 2752, pages 2-13, 1996.
[4] J. W. Fisher III, A. T. Ihler, and P. A. Viola. Learning informative statistics: A nonparametric approach. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Proceedings of the 1999 Conference on Advances in Neural Information Processing Systems 12, 1999.
[5] J. W. Fisher III and J. C. Principe. A methodology for information theoretic feature extraction. In A. Stuberud, editor, Proceedings of the IEEE International Joint Conference on Neural Networks, 1998.
[6] J. Hershey and J. Movellan. Using audio-visual synchrony to locate sounds. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Proceedings of the 1999 Conference on Advances in Neural Information Processing Systems 12, 1999.
[7] M. Slaney and M. Covell. FaceSync: A linear operator for measuring synchronization of video facial images and audio tracks. In this volume, 2001.
[8] P. Viola, N. Schraudolph, and T. Sejnowski. Empirical entropy manipulation for real-world problems. In Proceedings of the 1996 Conference on Advances in Neural Information Processing Systems 8, pages 851-857, 1996.
Some new bounds on the generalization error of combined classifiers

Vladimir Koltchinskii
Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131-1141
vlad@math.unm.edu

Dmitriy Panchenko
Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131-1141
panchenk@math.unm.edu

Fernando Lozano
Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87131
flozano@eece.unm.edu

Abstract

In this paper we develop the method of bounding the generalization error of a classifier in terms of its margin distribution, which was introduced in the recent papers of Bartlett and of Schapire, Freund, Bartlett and Lee. The theory of Gaussian and empirical processes allows us to prove margin type inequalities for very general functional classes, the complexity of the class being measured via the so-called Gaussian complexity functions. As a simple application of our results, we obtain the bounds of Schapire, Freund, Bartlett and Lee for the generalization error of boosting. We also substantially improve the results of Bartlett on bounding the generalization error of neural networks in terms of $\ell_1$-norms of the weights of neurons. Furthermore, under additional assumptions on the complexity of the class of hypotheses, we provide some tighter bounds, which in the case of boosting improve the results of Schapire, Freund, Bartlett and Lee.

1 Introduction and margin type inequalities for general functional classes

Let $(X,Y)$ be a random couple, where $X$ is an instance in a space $S$ and $Y \in \{-1,1\}$ is a label. Let $\mathcal{G}$ be a set of functions from $S$ into $\mathbb{R}$. For $g \in \mathcal{G}$, $\mathrm{sign}(g(X))$ will be used as a predictor (a classifier) of the unknown label $Y$. If the distribution of $(X,Y)$ is unknown, then the choice of the predictor is based on the training data $(X_1,Y_1),\dots,(X_n,Y_n)$, which consists of $n$ i.i.d. copies of $(X,Y)$. The goal of learning is to find a predictor $g \in \mathcal{G}$ (based on the training data) whose generalization (classification) error $\mathbb{P}\{Yg(X) \le 0\}$ is small enough. We will first introduce some probabilistic bounds for general functional classes and then give several examples of their applications to bounding the generalization error of boosting and neural networks. We omit all the proofs and refer an interested reader to [5].

Let $(S,\mathcal{A},P)$ be a probability space and let $\mathcal{F}$ be a class of measurable functions from $(S,\mathcal{A})$ into $\mathbb{R}$. Let $\{X_i\}$ be a sequence of i.i.d. random variables taking values in $(S,\mathcal{A})$ with common distribution $P$. Let $P_n$ be the empirical measure based on the sample $(X_1,\dots,X_n)$, $P_n := n^{-1}\sum_{i=1}^{n}\delta_{X_i}$, where $\delta_x$ denotes the probability distribution concentrated at the point $x$. We will denote $Pf := \int f\,dP$, $P_nf := \int f\,dP_n$, etc. In what follows, $\ell^\infty(\mathcal{F})$ denotes the Banach space of uniformly bounded real valued functions on $\mathcal{F}$ with the norm $\|Y\|_{\mathcal{F}} := \sup_{f\in\mathcal{F}}|Y(f)|$, $Y \in \ell^\infty(\mathcal{F})$. Define

$$G_n(\mathcal{F}) := \mathbb{E}\sup_{f\in\mathcal{F}}\Bigl| n^{-1}\sum_{i=1}^{n} g_i f(X_i)\Bigr|,$$

where $\{g_i\}$ is a sequence of i.i.d. standard normal random variables, independent of $\{X_i\}$. We will call $n \mapsto G_n(\mathcal{F})$ the Gaussian complexity function of the class $\mathcal{F}$. One can find in the literature (see, e.g., [11]) various upper bounds on such quantities as $G_n(\mathcal{F})$ in terms of entropies, VC-dimensions, etc. We give below a bound in terms of margin cost functions (compare to [6, 7]) and Gaussian complexities.

Let $\Phi = \{\varphi_k : \mathbb{R} \to \mathbb{R}\}_{k=1}^{\infty}$ be a class of Lipschitz functions such that $(1+\mathrm{sgn}(-x))/2 \le \varphi_k(x)$ for all $x \in \mathbb{R}$ and all $k$. For each $\varphi \in \Phi$, $L(\varphi)$ will denote its Lipschitz constant.

Theorem 1 For all $t > 0$,
$$\mathbb{P}\Bigl\{\exists f \in \mathcal{F}: P\{f \le 0\} > \inf_{k \ge 1}\Bigl[P_n\varphi_k(f) + 2\sqrt{2\pi}\,L(\varphi_k)\,G_n(\mathcal{F}) + \Bigl(\frac{\log k}{n}\Bigr)^{1/2}\Bigr] + \frac{t+2}{\sqrt{n}}\Bigr\} \le 2\exp\{-2t^2\}.$$
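Although the results here are purely theoretical, the Gaussian complexity defined above is straightforward to estimate by Monte Carlo for a finite class evaluated on a fixed sample. A minimal sketch, not from the paper, with the class supplied as a matrix of values $f(X_i)$:

```python
import numpy as np

def gaussian_complexity(fx, trials=200, rng=None):
    """Monte Carlo estimate of G_n(F) for a finite class.
    fx: array of shape (num_functions, n) with entries f(X_i)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = fx.shape
    sups = []
    for _ in range(trials):
        g = rng.standard_normal(n)           # i.i.d. standard normal weights
        sups.append(np.max(np.abs(fx @ g) / n))
    return float(np.mean(sups))
```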
Let us consider a special family of cost functions. Assume that $\varphi$ is a fixed nonincreasing Lipschitz function from $\mathbb{R}$ into $\mathbb{R}$ such that $\varphi(x) \ge (1+\mathrm{sgn}(-x))/2$ for all $x \in \mathbb{R}$. One can easily observe that $L(\varphi(\cdot/\delta)) \le L(\varphi)\delta^{-1}$. Applying Theorem 1 to the class of Lipschitz functions $\Phi := \{\varphi(\cdot/\delta_k): k \ge 0\}$, where $\delta_k := 2^{-k}$, we get the following result.

Theorem 2 For all $t > 0$,
$$\mathbb{P}\Bigl\{\exists f \in \mathcal{F}: P\{f \le 0\} > \inf_{\delta\in(0,1]}\Bigl[P_n\varphi(f/\delta) + \frac{2\sqrt{2\pi}\,L(\varphi)}{\delta}G_n(\mathcal{F}) + \Bigl(\frac{\log\log_2(2\delta^{-1})}{n}\Bigr)^{1/2}\Bigr] + \frac{t+2}{\sqrt{n}}\Bigr\} \le 2\exp\{-2t^2\}.$$

In [5] an example was given which shows that, in general, the order of the factor $\delta^{-1}$ in the second term of the bound cannot be improved.

Given a metric space $(T,d)$, we denote by $H_d(T;\varepsilon)$ the $\varepsilon$-entropy of $T$ with respect to $d$, i.e. $H_d(T;\varepsilon) := \log N_d(T;\varepsilon)$, where $N_d(T;\varepsilon)$ is the minimal number of balls of radius $\varepsilon$ covering $T$. The next theorem improves the previous results under some additional assumptions on the growth of the random entropies $H_{d_{P_n,2}}(\mathcal{F};\cdot)$. Define for $\gamma \in (0,1]$

$$\delta_n(\gamma; f) := \sup\bigl\{\delta \in (0,1): \delta^{\gamma} P\{f \le \delta\} \le n^{-1+\gamma/2}\bigr\}$$

and

$$\hat\delta_n(\gamma; f) := \sup\bigl\{\delta \in (0,1): \delta^{\gamma} P_n\{f \le \delta\} \le n^{-1+\gamma/2}\bigr\}.$$

We call $\delta_n(\gamma;f)$ and $\hat\delta_n(\gamma;f)$, respectively, the $\gamma$-margin and the empirical $\gamma$-margin of $f$.

Theorem 3 Suppose that for some $\alpha \in (0,2)$ and for some constant $D > 0$

$$H_{d_{P_n,2}}(\mathcal{F}; u) \le D u^{-\alpha}, \quad u > 0 \quad \text{a.s.} \tag{1}$$

Then for any $\gamma \ge \frac{2\alpha}{2+\alpha}$, for some constants $A, B > 0$, and for all large enough $n$,

$$\mathbb{P}\bigl\{\forall f \in \mathcal{F}: A^{-1}\hat\delta_n(\gamma;f) \le \delta_n(\gamma;f) \le A\,\hat\delta_n(\gamma;f)\bigr\} \ge 1 - B(\log_2\log_2 n)\exp\{-n^{\gamma}/2\}.$$

This implies that with high probability, for all $f \in \mathcal{F}$,

$$P\{f \le 0\} \le C\bigl(n^{1-\gamma/2}\,\hat\delta_n(\gamma;f)^{\gamma}\bigr)^{-1}.$$

The bound of Theorem 2 corresponds to the case $\gamma = 1$. It is easy to see from the definitions of the $\gamma$-margins that the quantity $(n^{1-\gamma/2}\hat\delta_n(\gamma;f)^{\gamma})^{-1}$ increases in $\gamma \in (0,1]$. This shows that the bound in the case $\gamma < 1$ is tighter. Further discussion of this type of bounds and their experimental study in the case of convex combinations of simple classifiers is given in the next section.

2 Bounding the generalization error of convex combinations of classifiers

Recently, several authors ([1, 8]) suggested a new class of upper bounds on the generalization error that are expressed in terms of the empirical distribution of the margin of the predictor (the classifier). The margin is defined as the product $Yg(X)$. The bounds in question are especially useful in the case of classifiers that are combinations of simpler classifiers (belonging, say, to a class $\mathcal{H}$). One example of such classifiers is provided by the classifiers obtained by boosting [3, 4], bagging [2] and other voting methods of combining classifiers. We will now demonstrate how our general results can be applied to the case of convex combinations of simple base classifiers. We assume that $\tilde S := S \times \{-1,1\}$ and $\tilde{\mathcal{F}} := \{\tilde f: f \in \mathcal{F}\}$, where $\tilde f(x,y) := yf(x)$. $P$ will denote the distribution of $(X,Y)$, and $P_n$ the empirical distribution based on the observations $((X_1,Y_1),\dots,(X_n,Y_n))$. It is easy to see that $G_n(\tilde{\mathcal{F}}) = G_n(\mathcal{F})$. One can also easily see that if $\mathcal{F} := \mathrm{conv}(\mathcal{H})$, where $\mathcal{H}$ is a class of base classifiers, then $G_n(\mathcal{F}) = G_n(\mathcal{H})$. These easy observations allow us to obtain useful bounds for boosting and other methods of combining classifiers. For instance, we get in this case the following theorem, which implies the bound of Schapire, Freund, Bartlett and Lee [8] when $\mathcal{H}$ is a VC-class of sets.
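Theorem 4 below is stated in terms of the empirical margin distribution $P_n\{yf(x) \le \delta\}$. As an aside, not from the paper, this quantity is trivial to compute for a trained voting classifier; a minimal sketch:

```python
import numpy as np

def empirical_margin_cdf(margins, deltas):
    """P_n{ y f(x) <= delta } for each delta, where margins[i] = y_i * f(x_i)."""
    margins = np.asarray(margins, dtype=float)
    return np.array([(margins <= d).mean() for d in deltas])
```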
Theorem 4 Let $\mathcal{F} := \mathrm{conv}(\mathcal{H})$, where $\mathcal{H}$ is a class of measurable functions from $(S,\mathcal{A})$ into $\mathbb{R}$. For all $t > 0$,
$$\mathbb{P}\Bigl\{\exists f \in \mathcal{F}: P\{yf(x) \le 0\} > \inf_{\delta\in(0,1]}\Bigl[P_n\{yf(x) \le \delta\} + \frac{C}{\delta}G_n(\mathcal{H}) + \Bigl(\frac{\log\log_2(2\delta^{-1})}{n}\Bigr)^{1/2}\Bigr] + \frac{t+2}{\sqrt{n}}\Bigr\} \le 2\exp\{-2t^2\}.$$

In particular, if $\mathcal{H}$ is a VC-class of classifiers $h: S \mapsto \{-1,1\}$ (which means that the class of sets $\{\{x: h(x) = +1\}: h \in \mathcal{H}\}$ is a Vapnik-Chervonenkis class) with VC-dimension $V(\mathcal{H})$, we have with some constant $C > 0$, $G_n(\mathcal{H}) \le C(V(\mathcal{H})/n)^{1/2}$. This implies that with probability at least $1-\alpha$,

$$P\{yf(x) \le 0\} \le \inf_{\delta\in(0,1]}\Bigl[P_n\{yf(x) \le \delta\} + \frac{C}{\delta}\sqrt{\frac{V(\mathcal{H})}{n}} + \Bigl(\frac{\log\log_2(2\delta^{-1})}{n}\Bigr)^{1/2}\Bigr] + \frac{\sqrt{\tfrac12\log\tfrac2\alpha}+2}{\sqrt{n}},$$

which slightly improves the bound obtained previously by Schapire, Freund, Bartlett and Lee [8].

Theorem 3 provides some improvement of the above bounds on the generalization error of convex combinations of base classifiers. To be specific, consider the case when $\mathcal{H}$ is a VC-class of classifiers. Let $V := V(\mathcal{H})$ be its VC-dimension. A well-known bound (going back to Dudley) on the entropy of the convex hull (see [11], p. 142) implies that

$$H_{d_{P_n,2}}(\mathrm{conv}(\mathcal{H}); u) \le \sup_{Q\in\mathcal{P}(S)} H_{d_{Q,2}}(\mathrm{conv}(\mathcal{H}); u) \le D u^{-\frac{2(V-1)}{V}}.$$

It immediately follows from Theorem 3 that for all $\gamma \ge \frac{2(V-1)}{2V-1}$ and for some constants $C, B > 0$,

$$\mathbb{P}\Bigl\{\exists f \in \mathrm{conv}(\mathcal{H}): P\{\tilde f \le 0\} > \frac{C}{n^{1-\gamma/2}\,\hat\delta_n(\gamma;\tilde f)^{\gamma}}\Bigr\} \le B\,\log_2\log_2 n\,\exp\Bigl\{-\frac{n^{\gamma}}{2}\Bigr\},$$

where $\hat\delta_n(\gamma;f) := \sup\{\delta \in (0,1): \delta^{\gamma} P_n\{(x,y): yf(x) \le \delta\} \le n^{-1+\gamma/2}\}$.

This shows that in the case when the VC-dimension of the base class is relatively small, the generalization error of boosting and of some other convex combinations of simple classifiers obtained by various versions of voting methods becomes better than suggested by the bounds of Schapire, Freund, Bartlett and Lee. One can also conjecture that the remarkable generalization ability of these methods observed in numerous experiments can be related to the fact that the combined classifier belongs to a subset of the convex hull for which the random entropy $H_{d_{P_n,2}}$ is much smaller than for the whole convex hull (see [9, 10] for improved margin type bounds in a much more special setting).

To demonstrate the improvement provided by our bounds over previous results, we show some experimental evidence obtained for a simple artificially generated problem, for which we can compute exactly the generalization error as well as the $\gamma$-margins. We consider the problem of learning a classifier consisting of the indicator function of the union of a finite number of intervals in the input space $S = [0,1]$. We used the Adaboost algorithm [4] to find a combined classifier using as base class $\mathcal{H} = \{I_{[0,b]}: b \in [0,1]\} \cup \{I_{[b,1]}: b \in [0,1]\}$ (i.e., decision stumps). Notice that in this case $V = 2$, and according to the theory, values of $\gamma$ in $(2/3, 1)$ should result in tighter bounds on the generalization error. For our experiments we used a target function with 10 equally spaced intervals and a sample of size 1000 generated according to the uniform distribution on $[0,1]$. We ran Adaboost for 500 rounds and computed at each round the generalization error of the combined classifier and the bound $C(n^{1-\gamma/2}\hat\delta_n(\gamma;f)^{\gamma})^{-1}$ for different values of $\gamma$; we set the constant $C$ to one. In Figure 1 we plot the generalization error and the bounds for $\gamma = 1$, $0.8$ and $2/3$. As expected, for $\gamma = 1$ (which corresponds roughly to the bounds in [8]) the bound is very loose, and as $\gamma$ decreases, the bound gets closer to the generalization error. In Figure 2 we show that by reducing the value of $\gamma$ further we get a curve even closer to the actual generalization error (although for $\gamma = 0.2$ we no longer get an upper bound). This seems to support the conjecture that Adaboost generates combined classifiers that belong to a subset of the convex hull of $\mathcal{H}$ with a smaller random entropy.
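The empirical $\gamma$-margin used in the bound above follows directly from its definition; a small sketch, not from the paper, scanning a grid of candidate $\delta$ values:

```python
import numpy as np

def empirical_gamma_margin(margins, gamma, grid_size=1000):
    """sup{ delta in (0,1) : delta^gamma * P_n{ yf <= delta } <= n^(-1 + gamma/2) },
    with margins[i] = y_i * f(x_i)."""
    margins = np.asarray(margins, dtype=float)
    n = len(margins)
    thresh = n ** (-1.0 + gamma / 2.0)
    best = 0.0
    for delta in np.linspace(1e-3, 1.0 - 1e-9, grid_size):
        if delta ** gamma * (margins <= delta).mean() <= thresh:
            best = max(best, delta)   # sup over the admissible deltas
    return best
```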
In Figure 3 we plot the ratio $\hat\delta_n(\gamma;f)/\delta_n(\gamma;f)$ against the boosting iteration for $\gamma = 0.4$, $2/3$ and $0.8$. We can see that the ratio is close to one in all the examples, indicating that the value of the constant $A$ of Theorem 3 is close to one in this case.

Figure 1: Comparison of the generalization error (thicker line) with $(n^{1-\gamma/2}\hat\delta_n(\gamma;f)^{\gamma})^{-1}$ for $\gamma = 1$, $0.8$ and $2/3$ (thinner lines, top to bottom).

Figure 2: Comparison of the generalization error (thicker line) with $(n^{1-\gamma/2}\hat\delta_n(\gamma;f)^{\gamma})^{-1}$ for $\gamma = 0.5$, $0.4$ and $0.2$ (thinner lines, top to bottom).

Figure 3: Ratio $\hat\delta_n(\gamma;f)/\delta_n(\gamma;f)$ versus boosting round for $\gamma = 0.4$, $2/3$, $0.8$ (top to bottom).

3 Bounding the generalization error in neural network learning

We turn now to applications of the bounds of the previous section in neural network learning. Let $\mathcal{H}$ be a class of measurable functions from $(S,\mathcal{A})$ into $\mathbb{R}$. Given a sigmoid $\sigma$ from $\mathbb{R}$ into $[-1,1]$ and a vector $w := (w_1,\dots,w_n) \in \mathbb{R}^n$, let $N_{\sigma,w}(u_1,\dots,u_n) := \sigma\bigl(\sum_{j=1}^{n} w_j u_j\bigr)$. We call the function $N_{\sigma,w}$ a neuron with weights $w$ and sigmoid $\sigma$. For $w \in \mathbb{R}^n$, $\|w\|_{\ell_1} := \sum_{j=1}^{n}|w_j|$. Let $\sigma_j$, $j \ge 1$, be functions from $\mathbb{R}$ into $[-1,1]$ satisfying the Lipschitz conditions $|\sigma_j(u) - \sigma_j(v)| \le L_j|u-v|$, $u,v \in \mathbb{R}$. Let $\{A_j\}$ be a sequence of positive numbers. We define recursively classes of neural networks with restrictions on the weights of the neurons ($j$ below is the number of layers):

$$\mathcal{H}_0 := \mathcal{H},$$
$$\mathcal{H}_j(A_1,\dots,A_j) := \bigl\{N_{\sigma_j,w}(h_1,\dots,h_n): n \ge 0,\ h_i \in \mathcal{H}_{j-1}(A_1,\dots,A_{j-1}),\ w \in \mathbb{R}^n,\ \|w\|_{\ell_1} \le A_j\bigr\} \cup \mathcal{H}_{j-1}(A_1,\dots,A_{j-1}).$$

Theorem 5 For all $t > 0$ and for all $l \ge 1$,
$$\mathbb{P}\Bigl\{\exists f \in \mathcal{H}_l(A_1,\dots,A_l): P\{\tilde f \le 0\} > \inf_{\delta\in(0,1]}\Bigl[P_n\varphi(\tilde f/\delta) + \frac{2\sqrt{2\pi}\,L(\varphi)}{\delta}\prod_{j=1}^{l}(2L_jA_j+1)\,G_n(\mathcal{H}) + \Bigl(\frac{\log\log_2(2\delta^{-1})}{n}\Bigr)^{1/2}\Bigr] + \frac{t+2}{\sqrt{n}}\Bigr\} \le 2\exp\{-2t^2\}.$$

Remark. Bartlett [1] obtained a similar bound for a more special class $\mathcal{H}$ and with larger constants. In the case when $A_j \equiv A$, $L_j \equiv L$ (the case considered by Bartlett), the expression on the right-hand side of his bound involves $(AL)^{l(l+1)/2}$, which is replaced in our bound by $(AL)^{l}$. This improvement can be substantial in applications, since these quantities play the role of complexity penalties.

Finally, it is worth mentioning that the theorems of Section 1 can also be applied to bounding the generalization error in multi-class problems. Namely, we assume that the labels take values in a finite set $\mathcal{Y}$ with $\mathrm{card}(\mathcal{Y}) =: L$. Consider a class $\tilde{\mathcal{F}}$ of functions from $\tilde S := S \times \mathcal{Y}$ into $\mathbb{R}$. A function $f \in \tilde{\mathcal{F}}$ predicts a label $y \in \mathcal{Y}$ for an example $x \in S$ iff

$$f(x,y) > \max_{y' \ne y} f(x,y').$$

The margin of an example $(x,y)$ is defined as

$$m_f(x,y) := f(x,y) - \max_{y' \ne y} f(x,y'),$$

so $f$ misclassifies the example $(x,y)$ iff $m_f(x,y) \le 0$. Let $\mathcal{F} := \{f(\cdot,y): y \in \mathcal{Y},\ f \in \tilde{\mathcal{F}}\}$. The next result follows from Theorem 2.

Theorem 6 For all $t > 0$,
$$\mathbb{P}\Bigl\{\exists f \in \tilde{\mathcal{F}}: P\{m_f \le 0\} > \inf_{\delta\in(0,1]}\Bigl[P_n\{m_f \le \delta\} + \frac{4\sqrt{2\pi}(2L-1)}{\delta}G_n(\mathcal{F}) + \Bigl(\frac{\log\log_2(2\delta^{-1})}{n}\Bigr)^{1/2}\Bigr] + \frac{t+2}{\sqrt{n}}\Bigr\} \le 2\exp\{-2t^2\}.$$
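The multi-class margin $m_f$ defined above can be computed in a vectorized way; a minimal sketch, not from the paper, with class scores stored row-wise:

```python
import numpy as np

def multiclass_margins(scores, labels):
    """m_f(x, y) = f(x, y) - max_{y' != y} f(x, y') for each example.
    scores: (n, L) array of f(x_i, y); labels: (n,) array of true class indices."""
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[0]
    true_scores = scores[np.arange(n), labels]
    rest = scores.copy()
    rest[np.arange(n), labels] = -np.inf   # exclude the true class from the max
    return true_scores - rest.max(axis=1)
```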
References

[1] Bartlett, P. (1998) The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44, 525-536.
[2] Breiman, L. (1996) Bagging predictors. Machine Learning, 26(2), 123-140.
[3] Freund, Y. (1995) Boosting a weak learning algorithm by majority. Information and Computation, 121(2), 256-285.
[4] Freund, Y. and Schapire, R.E. (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119-139.
[5] Koltchinskii, V. and Panchenko, D. (2000) Empirical margin distributions and bounding the generalization error of combined classifiers, preprint.
[6] Mason, L., Bartlett, P. and Baxter, J. (1999) Improved generalization through explicit optimization of margins. Machine Learning, 1-11.
[7] Mason, L., Baxter, J., Bartlett, P. and Frean, M. (1999) Functional gradient techniques for combining hypotheses. In: Advances in Large Margin Classifiers. Smola, Bartlett, Schölkopf and Schuurmans (Eds), to appear.
[8] Schapire, R., Freund, Y., Bartlett, P. and Lee, W.S. (1998) Boosting the margin: a new explanation of the effectiveness of voting methods. Ann. Statist., 26, 1651-1687.
[9] Shawe-Taylor, J. and Cristianini, N. (1999) Margin distribution bounds on generalization. In: Lecture Notes in Artificial Intelligence, 1572. Computational Learning Theory, 4th European Conference, EuroCOLT'99, 263-273.
[10] Shawe-Taylor, J. and Cristianini, N. (1999) Further results on the margin distribution. Proc. of COLT'99, 278-285.
[11] van der Vaart, A. and Wellner, J. (1996) Weak Convergence and Empirical Processes. With Applications to Statistics. Springer-Verlag, New York.
OPTIMIZATION WITH ARTIFICIAL NEURAL NETWORK SYSTEMS: A MAPPING PRINCIPLE AND A COMPARISON TO GRADIENT BASED METHODS†

Harrison MonFook Leong
Research Institute for Advanced Computer Science
NASA Ames Research Center 230-5
Moffett Field, CA, 94035

ABSTRACT

General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.

INTRODUCTION

Solving optimization problems with systems of equations based on neurobiological principles has recently received a great deal of attention. Much of this interest began when an artificial neural network was devised to find near-optimal solutions to an NP-complete problem [13]. Since then, a number of problems have been mapped into the same artificial neural network and variations of it [10, 13, 14, 17, 18, 19, 21, 23, 24]. In this paper, a unifying principle underlying these mappings is derived for systems of first- to $n$th-order ordinary differential equations. This mapping principle bears similarity to the mathematical tools used to generate optimization methods based on the gradient. In view of this, it seemed important to compare the optimization efficiency of dynamical systems constructed by the neural network mapping principle with dynamical systems constructed from the gradient.

THE PRINCIPLE

This paper concerns itself with networks of computational units, each having a state variable $v$, a function $f$ that describes how a unit is driven by inputs, a linear ordinary differential operator with constant coefficients $D$ that describes the dynamical response of each unit, and a function $g$ that describes how the output of a computational unit is determined from its state $v$. In particular, the paper explores how outputs of the computational units evolve with time in terms of a scalar function $E$, a single state variable for the whole network. Fig. 1 summarizes the relationships between variables, functions, and operators associated with each computational unit. Eq. (1) summarizes the equations of motion for a network composed of such units:

$$\vec D^{(M)}(\vec v) = \vec f\bigl(g_1(v_1),\dots,g_N(v_N)\bigr) \tag{1}$$

where the $i$th element of $\vec D^{(M)}(\vec v)$ is $D^{(M)}(v_i)$, the superscript $(M)$ denotes that operator $D$ is $M$th order, the $i$th element of $\vec f$ is $f_i(g_1(v_1),\dots,g_N(v_N))$, and the network is comprised of $N$ computational units. The network of Hopfield [12] has $M = 1$, functions $\vec f$ that are weighted linear sums, and functions $\vec g$ (where the $i$th element of $\vec g$ is $g_i(v_i)$) that are all the same sigmoid function.

† Work supported by NASA Cooperative Agreement No. NCC 2-408. © American Institute of Physics 1988.
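As a concrete illustration (not part of the original paper), Eq. (1) with $M = 1$ and the first-order operator $D = d/dt + 1$ used later in the simulations can be integrated by forward Euler; the Hopfield-style choice of $\vec f$ as a weighted linear sum and $\vec g$ as $\tanh$ is shown as an example:

```python
import numpy as np

def simulate_network(f, g, v0, dt=1e-3, steps=5000):
    """Forward-Euler integration of Eq. (1) with M = 1 and D = d/dt + 1,
    i.e. dv_i/dt + v_i = f_i(g_1(v_1), ..., g_N(v_N))."""
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        v += dt * (f(g(v)) - v)
    return v

# Example: a two-unit Hopfield-style network with symmetric weights.
S = np.array([[0.0, 1.0], [1.0, 0.0]])
v_final = simulate_network(lambda x: S @ x, np.tanh, v0=[0.1, -0.2])
```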
The first method corresponds to optimization methods introduced by artificial neural network research. It will be referred to as method V y ("dell gil): ! == VyF (2a) with associated E function tN[ dv '(S)jdg .(S) E"j = F("g)-JL D(M)(v ?(S?- - '' ds. i dt ' dt (2b) Here, V xR denotes the gradient of H, where partials are taken with respect to variables of X, and E7 denotes the E function associated with gradient operator V7' With appropriate operator D and and g, is simply the "energy function" of Hopfield 12. Note that Eq. (2a) makes functions that can be derived from scalar potential functions. explicit that we will only be concerned with For example, this restriction excludes artificial neural networks that have connections between excitatory and inhibitory units such as that of Freeman 8. The second method corresponds to optimization methods based on the gradient. It will be referred to as method V if ("dell v"): 1 Er 1 1 == VyoF (3a) with associated E function Ev> N [ dv ? (s) 1 dv ? (s ) = FCg) -JL D(M)(v .(s?--' ' i dt dt t ds (3b) I where notation is analogous to that for Eqs. (2). computational unit i : ~_ ?? The critical result that allows us to map \\ optimization problems into transform that detennines unit i's networks described by Eq. output from state variable Vi (1) is that conditions on the constituents of the equation differential operator specifying the can be chosen so that along dynamical characteristics of unit i any solution trajectory, the E function corresponding function governing how inputs to to the system will be a unit i are combined to drive it monotonic function of time. For method V"j' here are / the conditions: all functions g are 1) differentiable /gl(V 1) 'Tg2 (v:z) I' and 2) monotonic in the same sense. Only the first Figure 1: Schematic of a computational unit i from which netcondition is needed to works considered in this paper are constructed. Triangles suggest make a similar assertion for connections between computational units. method Vv- When these conditions are met and when solutions of Eq. (1) exist, the dynamical systems can be used for optimization. The appendix contains proofs for the monotonicity of function E along solution trajectories and references necessary existence theorems. In conclusion, mapping optimization problems onto dynamical systems summarized by Eq. (l) can be reduced to a matter of differentiation if a scalar function representation of the problem can be found and the integrals of Eqs. (2b) and (3b) are ignorable. This last assumption is certainly upheld for the case where operator D has no derivatives less than M'h order. In simulations below, it will be observed to hold for the case M =1 with a nonzero O'h order derivative in D . (Also see Lapedes and Falber 19.) PERSPECTIVES OF RECENT WORK 476 The fonnulations above can be used to classify the neural network optimization techniques used in several recent studies. In these studies, the functions 1 were all identical. For the most part, following Hopfield's fonnulation, researchers 10.13.14.17.23.24 have used method Vy to derive with Ey quadratic in functions 1 and fonns of Eq. (1) that exhibit the ability to find extrema of all functions 1 describable by sigmoid functions such as tanh (x ). However, several researchers have written about artificial neural networks associated with non-quadratic E functions. Method Vy has been used to derive systems capable of finding extrema of non-quadrntic Ey 19. 
Method Vv has been used to derive systems capable of optimizing Ev where Ev were not necessarily quadratic in variables V 21. A sort of hybrid of the two methods was used by Jeffery and Rosner 18 to find extrema of functions that were not quadratic. The important distinction is that their functions j were derived from a given function Fusing Eq. (3a) where, in addition, a sign definite diagonal matrix was introduced; the left side of Eq. (3a) was left multiplied by this matrix. A perspective on the relationship between all three methods to construct dynamical systems for optimization is summarized by Eq. (4) which describes the relationship between methods Vyand Vyo: E-t V? = <liag [a~~;ll-l V,J' (4) where diag [ Xi] is a diagonal matrix with Xi as the diagonal element of row i. (A similar equation has been derived for quadratic F s.) The relationship between the method of Jeffery and Rosner and Vv is simply Eq. (4) with the time dependent diagonal matrix replaced by a constant diagonal matrix of free parameters. It is noted that Jeffery and Rosner presented timing results that compared simulated annealing. conjugate-gradient, and artificial neural network methods for optimization. Their results are not comparable to the results reported below since they used computation time as a perfonnance measure, not settling times of analog systems. The perspective provided by Eq. (4) will be useful for anticipating the relative performance of methods V~ and Vv in the analytical example below and will aid in understanding the results of computer simulations. COMPARISON OF METHODS Vt AND Vv When M =1 and operator D has no Ofh order derivatives, method Vv is the basis of gradientsearch methods of optimization. Given the long history of of such methods. it is important to know what possible benefits could be achieved by the relatively ne,w optimization scheme. method Vy . In the following. the optimization efficiency of methods Vt and Vv is compared by comparing settling times. the time required for dynamical systems described by Eq. (1) to traverse a continuous path to local optima. To qualify this perfonnance measure. this study anticipates application to the creation of analog devices that would instantiate Eq. (1); hence, we are not interested in estimating the number of discrete steps that would be required to find local optima, an appropriate performance measure if the point was to develop new numerical methods. An analytical example will serve to illustrate the possibility of improvements in settling time by using method Vt instead of method VV' Computer simulations will be reported for more complicated problems following this example. For the analytical example, we will examine the case where all functions 1 are identical and g(v) = tanhG(v -Th) (5) where G > 0 is the gain and Th is the threshold. Transforms similar to this are widely used in artificial neural network research. Suppose we wish to use such computational units to search a multi-dimensional binary solution space. We note that !li.. = G sech 2G(v -Th) dv (6) is near 0 at valid solution states (comers of a hypercube for the case of binary solution spaces). We see from Eq. (4) that near a valid solution state. a network based on method Vy will allow computational units to recede from incorrect states and approach correct states comparatively faster. Does 477 this imply faster settling time for method V"t? 
To obtain an analytical comparison of settling times, consider the case where $M = 1$, operator $D$ has no $0$th-order derivatives, and

$$F = -\frac{1}{2}\sum_{i,j} S_{ij}(\tanh Gv_i)(\tanh Gv_j) \tag{7}$$

where matrix $S$ is symmetric (thresholds $Th$ are taken to be zero). Method $\nabla_{\vec g}$ gives the network equations

$$\frac{d\vec v}{dt} = S\tanh G\vec v \tag{8}$$

and method $\nabla_{\vec v}$ gives the network equations

$$\frac{d\vec v}{dt} = \mathrm{diag}\bigl[G\,\mathrm{sech}^2 Gv_i\bigr]\,S\tanh G\vec v \tag{9}$$

where $\tanh G\vec v$ denotes a vector with $i$th component $\tanh Gv_i$. For method $\nabla_{\vec g}$ there is one stable point, i.e. where $d\vec v/dt = 0$, at $\vec v = 0$. For method $\nabla_{\vec v}$ the stable points are $\vec v = 0$ and $\vec v \in V_\infty$, where $V_\infty$ is the set of vectors whose component values are either $+\infty$ or $-\infty$. Further trivialization allows for comparing estimates of settling times. Suppose $S$ is diagonal. For this case, if $v_i = 0$ is on the trajectory of any computational unit $i$ for one method, $v_i = 0$ is on the trajectory of that unit for the other method; hence, a comparison of settling times can be obtained by comparing time estimates for a computational unit to evolve from near $0$ to near an extremum or, equivalently, the converse. Specifically, let the interval traversed by $g$ be $[\delta_0, 1-\delta]$, where $0 < \delta_0 < 1-\delta$ and $0 < \delta < 1$. For method $\nabla_{\vec v}$, integrating velocity over time (taking the diagonal element of $S$ to be unity) gives the estimate

$$T_{\nabla_{\vec v}} = \frac{1}{G^2}\Biggl[\ln\Biggl(\frac{1-\delta}{\delta_0}\sqrt{\frac{1-\delta_0^2}{\delta(2-\delta)}}\Biggr) + \frac{1}{2\delta(2-\delta)} - \frac{1}{2(1-\delta_0^2)}\Biggr] \tag{10}$$

and for method $\nabla_{\vec g}$ the estimate is

$$T_{\nabla_{\vec g}} = \frac{1}{G}\ln\Biggl(\frac{1-\delta}{\delta_0}\sqrt{\frac{1-\delta_0^2}{\delta(2-\delta)}}\Biggr). \tag{11}$$

From these estimates, method $\nabla_{\vec v}$ will always take longer to satisfy the criterion for convergence: note that only with the largest value for $\delta_0$, $\delta_0 = 1-\delta$, do the last two terms of Eq. (10) cancel; for any smaller $\delta_0$, their sum is positive. Unfortunately, this simple analysis cannot be generalized to non-diagonal $S$. With diagonal $S$, all computational units operate independently; hence, the direction of $d\vec v/dt$ is irrelevant with respect to convergence rates, and the convergence rate depends only on the diagonal element of $S$ having the smallest magnitude. In this sense, the problem is one dimensional. But for non-diagonal $S$, the problem would, in general, be multi-dimensional and, hence, the direction of $d\vec v/dt$ becomes relevant. To compare settling times for non-diagonal $S$, computer simulations were done. These are described below.
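As a numerical check on these estimates (not part of the original paper), a single unit of Eqs. (8) and (9) with a unit diagonal element of $S$ can be integrated directly:

```python
import numpy as np

def settling_time(method, G=5.0, g0=0.05, g_target=0.95, dt=1e-4, tmax=50.0):
    """Integrate one unit of Eq. (8) ("del_g") or Eq. (9) ("del_v") with a
    diagonal S whose element is 1, and report the time for g = tanh(G v)
    to reach g_target. G, g0 and g_target are illustrative values."""
    v = np.arctanh(g0) / G
    t = 0.0
    while t < tmax:
        drive = np.tanh(G * v)                 # Eq. (8) right-hand side
        if method == "del_v":
            drive *= G / np.cosh(G * v)**2     # Eq. (9): extra sech^2 factor
        v += dt * drive
        t += dt
        if np.tanh(G * v) >= g_target:
            return t
    return np.inf
```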
(12) to calculate potentials at points on a sphere which covered about 60% of the surface. A cluster of internal locations that encompassed the locations of the model was specified. The two optimization techniques were then required to determine dipole moment values at cluster locations such that the collection of dipoles at cluster locations accuFigure 2: Vectors of Eq. (12). rately reflected the dipole distribution specified by the model. This was to be done given only the potential values at the sample points and an initial guess of dipole moments at cluster locations. The optimization systems were to accomplish the task by minimizing the sum of squared differences between potentials calculated using the dipole model and potentials calculated using a guess of dipole moments at cluster locations where the sum is taken over all sample points. Further simplifications of the problem included 1) choosing the dipole model locations to correspond exactly to various locations of the cluster, 2) requiring dipole model moments to.be I, 0, or -I, and 3) representing dipole moments at cluster locations with two bit binary numbers. To describe the dynamical systems used, it suffices to specify operator D and functions '( of Eq. (1) and function F used in Eqs. (2a) and (3a). Operator D was D = d dt + 1. (13) Eq. (5) with a multiplicative factor of 112 was used for all functions '(. Hence, regarding simplification 3) above, each cluster location was associated with two computational units. Considering simplification 2) above, dipole moment magnitude 1 would be represented by both computational units being in the high state, for -I, both in the low state, and for 0, one in the high state and one in the low state. Regarding function F , F = ~ all samp/~ poims s [~lMaSlll'~d(X:) - <Ilcillomr ('Xs) r- c ~ g (v)2 (14) all compu,ariOflal u"irs j where ~_as""~d is calculated from the dipole model and Eq. (12) (The subscript measured is used because the role of the dipole model is to simulate electric potentials that would be measured in a real world situation. In real world situations, we do not know the source distribution underlying ~_asar~d .), C is an experimentally detennined constant (.002 was used), and ~clJIS'~r is Eq. (12) where the sum of Eq. (12) is taken over all cluster locations and the k,h coordinate of the i,h cluster location dipole moment is ? Pi#: = ~ all bits b g (Vil:b)' (15) 479 Index j of Eq. (14) corresponds to one combination of indices ikb. Sample points, 100 of them, were scattered semi-uniformly over the spherical surface emphasized by horizontal shading in Fig. 3. Ouster locations, 11, and model dipoles, 5, were scattered within the subset of the sphere emphasized by vertical shading. For the dipole model used, 10 dipole moment components were non-zero; hence, optimization techniques needed to hold 56 dipole moment components at zero and set 10 components to correct non-zero values in order to correctly identify the dipole model underlying ~_Qs"'~d' The dynamical systems corresponding to 0.8 methods V,. and Vv' were integrated using the relative radii forward Euler method (e.g. Press, Flannery, I I Teukolsky, and Vetterling 22). Numerical ,' I I methods were observed to be convergent experI imentally: settling time and path length were I , observed to asymtotically approach stable I I values as step size of the numerical integrator I I was decreased over two orders of magnitude. 
Settling times, path lengths, and relative directions of travel were calculated for the two optimization methods using several different initial bit patterns at the cluster locations. In Figure 3: illustration of the distribution of other words. the search was started at different sample points on the surface of the sphericorners of the hypercube comprising the space cll conductor (horizontal shading) and the of acceptable solutions. One corner of the distribution of model dipole locations and hypercube was chosen to be the target solution. cluster locations within the conductor (Note that a zero dipole moment has a degen(verticll shading). erate two bit representation in the dynamical systems explored; the target corner was arbitrarily chosen to be one of the degenerate solutions.) Note from Eq. (5) that for the network to reach a hypercube corner, all elements of would have to be singular. For this reason, settling time and other measures were studied as a function of the proximity of the computational units to their extremum states. Computations were done on a Sequent Balance. I , I I , I v 5 Results Graph 1 shows results for exploring settling time as a function of extremum depth, the minimum of the deviations of variables from the threshold of functions g. Extremum depth is reported in multiples of the width of functions g. The term transition, used in the caption of Graph 1 and below, refers to the movement of a computational unit from one extremum state to the other. The calculations were done for two initial states, one where the output of 1 computational unit was set to zero and one where outputs of 13 computational units were set to zero; bence, 1 and 13, respectively, half transitions were required to reach the target hypercube comer. It can be observed that settling time increases faster for method V v' than that for method Vy just as we would expect from considering Eqs. (4) and (5). However, it can be observed that method Vv is still an order of magnitude faster even wben extremum depth is 3 widths of functions g. For the purpose of unambiguously identifying what hypercube corner the dynamical system settles v +,1 4 3 I - ~ I I ~ ~ # ... t---. o o " - 2 extremum depth 1 '"- =- - 4 3 Graph 1: settling time as a function of extremum depth. #: method Vr- 1 half transition required. .: method V 13 half transitions required. +: method V.... 1 half transition required. -: V.... 13 half transitions required. r 480 to, this extremum depth is more than adequate. Table 1 displays results for various initial conditions. Angles are reported in degrees. These measures refer to the angle between directions of travel in v-space as specified by the two optimization methods. The average angle reported is taken over all trajectory points visited by the numerical integrator. Initial angle is the angle at the beginning of the path. Parasite cost percentage is a measure that compares parasite cost, the integral in Eqs. (2b) and (3b), to the range of function F over the path: . 
parasite cost % parasite cost = 100x I F F I fi",," ;,udal transitions reauired time 1 0.16 0.0016 100 6.1 1.9 2 0.14 0.0018 78 4.7 1.9 75 3 0.15 0.0021 71 4.7 2.1 74 7 0.19 0.0032 59 4.6 2.4 63 10 0.17 0.0035 49 3.8 2.5 60 13 0.80 0.0074 110 9.2 3.2 39 relative path initial Mean angle extremum time len2th anlZle (std dev) deoth 68 (16) parasite cost % 76 (3.8) 76 (3.5) 2.3 2.3 0.22 0.039 72 (4.3) 73 (4.1) 2.5 2.5 0.055 0.016 71 (3.7) 72 (3.0) 2.3 2.5 0.051 0.0093 69 (4.1) 71 (7.0) 2.4 2.7 0.058 0.0033 63 (2.8) 64 (4.7) 2.5 2.8 O.OOO6{) 77 (11) 71 (8.9) 2.3 2.7 0.076 0.0028 0.030 Table 1: Settling time and other measurements for various required transitions. For each transition case, the upper row is for V y and the lower row is for V v- Std deY denotes standard deviation. See text for definition of measurement terms and units. Noting the differences in path length and angles reported, it is clear that the path taken to the target hypercube comer was quite different for the two methods. Method V v settles from 1 to 2 orders of magnitude faster than method V -r and usually takes a path less than half as long. These relationships did not change significantly for different values for c of Eq. (14) and coefficients of Eq. (13) (both unity in Eq. (13?. Values used favored method Vr Parasite cost is consistently less significant for method V v and is quite small for both methods. To further compare the ability of the optimization methods to solve the brain imaging problem, a large variety of initial hypercube comers were tested. Table 2 displays results that suggest the ability of each method to locate the target comer or to converge to a solution that was consistent with the dipole model. Initial comers were chosen by randomly selecting a number of computational units and setting them to eXtI"emwn states opposite to that required by the target solution. Five cases were run for each case of required transitions. It can be observed that the system based on method Vv is better at finding the target comer and is much better at finding a solution that is consistent with the dipole model. DISCUSSION The simulation results seem to contradict settling time predictions of the second analytical example. It is intuitively clear that there is no contradiction when considering the analytical example as a one dimensional search and the simulations as multi-dimensional searches. Consider Fig. 4 which illustrates one dimensional search starting at point I. Since both optimization methods must decrease function E monotonically, both must head along the same path to the minimum point A. Now consider Fig. 5 which illustrates a two dimensional search starting at point I: Here, the two methods needn't follow the same paths. The two dashed paths suggest that method V." can still be 481 V.. transitions I required 3 4 5 I 6 Vv ~erent dipole different target different dipole different target comer comer solution comer comer, solution 1 4 0 5 0 0 4 1 1 1 0 3 4 4 1 1 0 0 4 1 1 2 0 2 1 4 4 1 0 0 3 1 1 5 0 0 5 5 0 0 0 I 0 2 3 0 5 0 0 I I I 7 13 20 26 33 5 0 40 5 5 5 0 46 I 53 I 0 0 0 0 0 0 3 3 2 4 2 I 2 3 1 0 I 0 0 ! 0 Table 2: Solutions found starting from various initial conditions, five cases for each transition case. Different dipole solution indicates that the system assigned non-zero dipole moments at cluster locations that did not correspond to locations of the dipole model sources. Different corner indicates the solution was consistent with the dipole model but was not the target hypercube comer. 
Target corner indicates that the solution was the target solution. monotonically decreasing E while traversing a more circuitous route to minimum B or traversing a path to minimum A. The longer path lengths reported in Table 1 for method V~ suggest the occurrence of the fonner. The data of Table 2 verifies the occurrence of the latter: Note that for many v cases where the system based on method Vv settled to the . Figure 4: One dimensional search target comer, the system based on method V~ settled to some other minimum. for minima. Would we observe similar differences in optimization I efficiency for other optimization problems that also have binary solution spaces? A view that supports the plausibility of the affirmative is the following: Consider Eq. (4) and Eq. E (5). We have already made the observation that method Vv would slow convergence into extrema of functions g. We have observed this experimentally via Graph 1. These observations suggest that computational units of Vv systems tend to stay closer to the transition regions of functions g compared to computational units of V'I systems. It seems plausible that this property may allow Vv systems to avoid advancing too deeply toward ineffective solutions and, hence, allow the systems to approach effective solutions more Figure 5: Two dimensional search efficiently. 1bis behavior might also be the explanation for for minima. the comparative success of method Vv revealed in Table 2. Regarding the construction of electronic circuitry to instantiate Eq. (l), systems based on method Vv would require the introduction of a component implementing multiplication by the derivative of functions g. This additional complexity may binder the use of method Vv for the 482 construction of analog circuits for optimization. To illustrate the extent of this additional complexity, Fig. Input 6a shows a schematized circuit for a computational unit of method V-r and Fig. 6b shows a schematized circuit for a computational unit of method VT The simulations reported above suggest that there may be problems for which improvements in settling time Output may offset complications that might come with added circuit complexity. On the problem of imaging cerebral activity, the results above suggest the possibility of constructing analog devices to do the job. Consider the problem of analyzing electric potentials from the scalp of one perOutput son: It is noted that the measured electric potentials, Figure 6: Schematized circuits for a com- ~_as"rcd' appear as linear coefficients in F of Eq. putational unit Notation is consistem (14); hence, they would appear as constant terms in with Horowitz and Hill IS. Shading of of Eq. (1). Thus. cf)_asllrcd would be implemented as amplifiers is to e3IIllark components amplifier biases in the circuits of Figs. 6. This is a referred to in the text. a) Computational significant benefit. To understand this. note that funcunit for method Vr b) Computational tion Ij of Fig. 1 corresponding to the optimization of . ti thod V function F of Eq. (14) would involve a weighted umt or me ... linear sum of inputs g 1(v 1), ??? , gN (VN). The weights would be the nonlinear coefficients of Eq. (14) and correspond to the strengths of the connections shown in Fig. 1. These connection strengths need only be calculated once for the person ar!d Car! then be set in hardware using, for example, a resistor network. Electric potential measurements could then be ar!alyzed by simply using the measurements to bias the input to shaded amplifiers of Figs. 
6. For initialization, the system can be initialized with all dipole moments at zero (the 10 transition case in Table 1). This is a reasonable first guess if it is assumed that cluster locations are far denser than the loci of cerebral activity to be observed. For subsequent measurements, the solution for immediately preceding measurements would be a reasonable initial state if it is assumed that cerebral activity of interest waxes and wanes continuously. Might non-invasive real time imaging of cerebral activity be possible using such optimization devices? Results of this study are far from adequate for answering this question. Many complexities that have been avoided may nUllify the practicality of the idea. Among these problems are: 1) The experiment avoided the possibility of dipole sources actually occurring at locations other than cluster locations. The minimization of function F of Eq. (14) may circumvent this problem by employing the superposition of dipole moments at neighboring cluster locations to give a sufficient model in the mear!. 2) The experiment asswned a very restricted range of dipole strengths. This might be dealt with by increasing the number of bits used to represent dipole moments. 3) The conductor model, a homogeneously conducting sphere, may not be sufficient to model the hwnan head 16. Non-sphericity ar!d major inhomogeneities in conductivity Car! be dealt with, to a certain extent, by replacing Eq. (12) with a generalized equation based on a numerical approximation of a boundary integral equation 20 4) The cerebral activity of interest may not be observable at the scalp. 5) Not all forms of cerebral activity give rise to dipolar sources. (For example, this is well known in olfactory cortex 8.) 6) Activity of interest may be overwhelmed by irrelevant activity. Many methods have been devised to contend with this problem (For example, Gevins and Morgan 9.) Clearly, much theoretical work is left to be done. (a) (b) 1 CONCLUDING REMARKS 483 In this study. the mapping principle underlying the application of artificial neural networks to the optimization of multi-dimensional scalar functions has been stated explicitly. Hopfield 12 has shown that for some scalar functions. i.e. functions F quadratic in functions 1. this mapping can lead to dynamical systems that can be easily implemented in hardware. notably. hardware that requires electronic components common to semiconductor technology. Here. mapping principles that have been known for a considerably longer period of time. those underlying gradient based optimization, have been shown capable of leading to dynamical systems that can also be implemented using semiconductor hardware. A problem in medical imaging which requires the search of a multi-dimensional surface full of local extrema has suggested the superiority of the latter mapping principle with respect to settling time of the corresponding dynamical system. 1bis advantage may be quite significant when searching for global extrema using techniques such as iterated descent 2 or iterated genetic hill climbing 1 where many searches for local extrema are required. This advantage is further emphasized by the brain imaging problem: volumes of measurements can be analyzed without reconfiguring the interconnections between computational units; hence, the cost of developing problem specific hardware for finding local extrema may be justifiable. Finally. simulations have contributed plausibility to a possible scheme for non-invasively imaging cerebral activity. 
APPENDIX To show that for a dynamical system based on method Vr E,. is a monotonic function of time given that all functions g are differentiable and monotonic in the same sense, we need to show that the derivative of ET with respect to time is semi-definite: dET dt - N dFT dg j N [M dVj ] dg, = L - - - L D( )(Vj)-- - . j dgj dt i dt (Ala) dt Substituting Eq. (2a), -dET == dt N [ I, f? dV'] dg? 'dt dt (Alb) -D(M)(v ? ) + - ' - ' . j ' Using Eq. (1), d~ = N [dV i dt ~, dt ]2 dgi ~O (Alc) av?, s as needed. The appropriate inequality depends on the sense in which functions 1 are monotonic. In a similar manner, the result can be obtained for method Vv>- With the condition that functions 1 are differentiable, we can show that the derivative of 4 is semi-definite: dE.". dt _v dv? N [ dV'] dv? v= IN ,dF' - I, D(M)(Vj)_-' - ' . j dVj dt j dt (A2a) dt Using Eqs. (3a) and (1), dEv N [dVj dt - ~ , dt --~- ]2~0 (A2b) S as needed. In order to use the results derived above to conclude that Eq. (1) can be used for optimization of functions 4 and Et in the vicinity of some point we need to show that there exists a neighborhood of Vo in which there exist solution trajectories to Eq. (1). The necessary existence theorems and transformations of Eq. (1) needed in order to apply the theorems can be found in many texts on ordinary differential equations; e.g. Guckenheimer and Holmes 11. Here, it is mainly important to state that the theorems require that functions ,?c(1), functions g are differentiable, and initial conditions are specified for all derivatives of lower order than M. vo. 484 ACKNOWLEDGEMENTS I would like to thank Dr. Michael Raugh and Dr. Pentti Kanerva for constructive criticism and support. I would like to thank Bill Baird and Dr. James Keeler for reviewing this work. I would like to thank Dr. Derek Fender, Dr. John Hopfield, and Dr. Stanley Klein for giving me opportunities that fostered this conglomeration of ideas. [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [171 [18] [19] [20] [21] [22] [23] [24] REFERENCES Ackley D.H., "Stochastic iterated genetic bill climbing", PhD. dissertation, Carnegie Mellon U.,1987. Bawn E., Neural Networks for Computing, ed. Denker 1.S. (AlP Confrnc. Proc. 151, ed. Lerner R.G.), p53-58, 1986. Brody D.A., IEEE Trans. vBME-32, n2, pl06-110, 1968. Brody D.A., Terry F.H., !deker RE., IEEE Trans. vBME-20, p141-143, 1973. Cohen M.A., Grossberg S., IEEE Trans. vSMC-13, p815-826, 1983. Cuffin B.N., IEEE Trans. vBME-33, n9, p854-861. 1986. Darcey T.M., AIr J.P., Fender D.H., Prog. Brain Res., v54, pI28-134, 1980. Freeman W J., "Mass Action in the Nervous System", Academic Press, Inc., 1975. Gevins A.S., Morgan N.H., IEEE Trans., vBME-33, n12, pl054-1068, 1986. Goles E., Vichniac G.Y., Neural Networks for Computing, ed. Denker J.S. (AlP Confrnc. Proc. 151, ed. Lerner R.G.), p165-181, 1986. Guckenheimer J., Holmes P., "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields", Springer Verlag, 1983. Hopfield J.I., Proc. Nat!. Acad. Sci., v81, p3088-3092, 1984. Hopfield 1.1., Tank D.W., Bio. Cybrn., v52, p141-152, 1985. Hopfield 1.J., Tank D.W., Science, v233, n4764, p625-633, 1986. Horowitz P., Hill W., "The art of electronics", Cambridge U. Press, 1983. Hosek RS., Sances A., Jodat RW., Larson S.I., IEEE Trans., vBME-25, nS, p405-413, 1978. Hutchinson J.M., Koch C., Neural Networks for Computing, ed. Denker J.S. (AlP Confrnc. Proc. 151, ed. Lerner RG.), p235-240, 1986. Jeffery W., Rosner R, Astrophys. I., v310, p473-481, 1986. 
[19] Lapedes A., Farber R., in Neural Networks for Computing, ed. Denker J.S. (AIP Conf. Proc. 151, ed. Lerner R.G.), p. 283-298, 1986.
[20] Leong H.M.F., "Frequency dependence of electromagnetic fields: models appropriate for the brain", Ph.D. dissertation, California Institute of Technology, 1986.
[21] Platt J.C., Hopfield J.J., in Neural Networks for Computing, ed. Denker J.S. (AIP Conf. Proc. 151, ed. Lerner R.G.), p. 364-369, 1986.
[22] Press W.H., Flannery B.P., Teukolsky S.A., Vetterling W.T., "Numerical Recipes", Cambridge U. Press, 1986.
[23] Takeda M., Goodman J.W., Applied Optics, v25, n18, p. 3033-3046, 1986.
[24] Tank D.W., Hopfield J.J., "Neural computation by concentrating information in time", preprint, 1987.
Contour-Map Encoding of Shape for Early Vision

Pentti Kanerva
Research Institute for Advanced Computer Science
Mail Stop 230-5, NASA Ames Research Center
Moffett Field, California 94035

ABSTRACT

Contour maps provide a general method for recognizing two-dimensional shapes. All but blank images give rise to such maps, and people are good at recognizing objects and shapes from them. The maps are encoded easily in long feature vectors that are suitable for recognition by an associative memory. These properties of contour maps suggest a role for them in early visual perception. The prevalence of direction-sensitive neurons in the visual cortex of mammals supports this view.

INTRODUCTION

Early vision refers here to the first stages of visual perception of an experienced (adult human) observer. Overall, visual perception results in the identification of what is being viewed: We recognize an image as the letter A because it looks to us like other As we have seen. Early vision is the beginning of this process of identification--the making of the first guess.

Early vision cannot be based on special or salient features. For example, we normally think of the letter A as being composed of two slanted strokes, / and \, meeting at the top and connected in the middle by a horizontal stroke, -. The strokes and their coincidences define all the features of A. However, we recognize the As in Figure 1 even though the strokes and the features, if present at all, do not stand out in the images.

Most telling about human vision is that we can recognize such As after seeing more or less normal As only. The challenge of early vision, then, is to find general encoding mechanisms that turn these quite dissimilar images of the same object into similar internal representations while leaving the representations of different objects dissimilar, and to find basic pattern-recognition mechanisms that work with these representations. Since our main work is on associative memories, we have been interested in ways to encode images into long feature vectors suitable for such memories. The contour-map method of this paper encodes a variety of images into vectors for associative memories.

REPRESENTING AN IMAGE AS A CONTOUR MAP

Images take many forms: line drawings, silhouettes, outlines, dot-matrix pictures, gray-scale pictures, color pictures, and the like, and pictures that combine all these elements. Common to all is that they occupy a region of (two-dimensional) space. An early representation of an image should therefore be concerned with how the image controls its space or, in technical terms, how it might be represented as a field.

Let us consider first a gray-scale image. It defines a field by how dark it is in different places (image intensity--a scalar field--the image itself is the field). A related field is given by how the darkness changes from place to place (gradient of intensity--a vector field). Neither one is quite right for recognizing As because reversing the field (turning dark to light and light to dark) leaves us with the "same" A. However, the dark-and-light reversal leaves the contour lines of the image unchanged (i.e., lines of uniform intensity--technically a tangent field perpendicular to the gradient field). My proposal is to base initial recognition on the contour lines.

In line drawings and black-and-white images, which have only two darkness levels or "colors", the contour lines are not well defined.
This is overcome by propagating the lines and the edges of the image outward and inward over areas of uniform image intensity, in the manner of contour lines, roughly parallel to the lines and the edges. Figure 2 shows only a few such lines, but, in fact, the image is covered with them, running roughly parallel to each other. As a rule, exactly one contour line runs through any given point. Computing its direction is discussed near the end of the paper.

FIGURE 1. Various kinds of As.

ENCODING THE CONTOUR MAP

Table 1 shows how the direction of the contour at a point can be encoded in three trits (-1, 0, 1 ternary variables). The code divides 180 degrees into six equal sectors and assigns a codeword to each sector. The distance between two codewords is the number of (Hamming) units by which the words differ (L1 distance). The code is circular, and the distance between codewords is related directly to the difference in direction: directions 30, 60, and 90 degrees apart are encoded with words that are 2, 4, and 6 units apart, respectively. The code wraps around, as do tangents, so that directions 180 degrees apart are encoded the same. For finer discrimination we would use some finer circular code. The zero-word 000, which is equally far from all other words in the code, is used for points at which the direction of the contour is ill-defined, such as the very centers of circles.

This encoding makes the direction of the contour at any point on a map into a three-component vector. To encode the entire map, the vector field is sampled at a fixed, finite set of points, and the encodings of the sample points are concatenated in fixed order into a long vector. In preliminary studies we have used small sample sizes: 7 x 5 (= 35) sample points, each encoded into three trits, for a total vector of (3 x 35 =) 105 trits, and 8 x 8 sample points by three trits for a total vector of 192 trits.

FIGURE 2. Propagating the contour.

For an example, Figure 3 shows the digit 4 drawn on a 21-by-15-pixel grid. It also shows a 7 x 5 sampling grid laid over the image and the direction of the contour at the sample points (shown by short line segments). Below the image are the three-trit encodings of the sample points, starting at the upper left corner and progressing by rows, concatenated into a 105-trit encoding of the entire image. In this encoding, + means +1 and - means -1.

From Positions of the Code to Directional Sensors

Each position of the three-trit code can be thought of as a directional sensor. For example, the center position senses contours at 90 degrees, plus or minus 45 degrees: it is 1 when the direction of the contour is closer to vertical than to horizontal (see Table 1). Similarly, each position of the long (105-trit) code for the entire map can be thought of as a sensor for a specific direction--plus or minus--at a specific location on the map. An array of sensors will thus encode an image. The sensors are like the direction-sensitive cells of the visual cortex.
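As a sketch of this encoding step (the codeword assignments follow the reconstruction of Table 1 below; the function and variable names are ours, not the paper's):

```python
import numpy as np

# 3-trit codeword per 30-degree sector, following Table 1 below.
SECTOR_CODE = {0: (1, -1, 1), 30: (-1, -1, 1), 60: (-1, 1, 1),
               90: (-1, 1, -1), 120: (1, 1, -1), 150: (1, -1, -1)}

def encode_direction(theta_deg, defined=True):
    """Encode a contour direction (taken mod 180 degrees) as three trits."""
    if not defined:
        return (0, 0, 0)                            # ill-defined direction -> 000
    sector = (30 * int(((theta_deg % 180) + 15) // 30)) % 180
    return SECTOR_CODE[sector]

def encode_map(directions):
    """Concatenate the per-sample-point codes into one long feature vector."""
    return np.concatenate([encode_direction(t, d) for t, d in directions])

# A 7 x 5 sampling grid yields a 105-trit vector, as in the text.
samples = [(np.random.uniform(0, 180), True) for _ in range(35)]
v = encode_map(samples)
assert v.shape == (105,)
# Directions 30, 60, 90 degrees apart differ by 2, 4, 6 units, respectively.
```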
Such cells, of course, are not laid down with perfect regularity over the cortex, but that does not mean that they could not perform as encoders. Accordingly, a direction-sensitive cell can be thought of as a feature detector that encodes for a certain direction at a certain location in the visual or attentional field. An irregular array of randomly oriented sensors laid over images would produce perfectly good encodings of their contour maps.

TABLE 1
Coarse Circular Code for Direction of Contour

Direction, degrees    Codeword
  0 +/- 15             1 -1  1
 30 +/- 15            -1 -1  1
 60 +/- 15            -1  1  1
 90 +/- 15            -1  1 -1
120 +/- 15             1  1 -1
150 +/- 15             1 -1 -1
180 +/- 15             1 -1  1
Undefined              0  0  0

FIGURE 3. Encoding an image.

COMPARING TWO CONTOUR MAPS

How closely do two contour maps resemble each other? For simplicity, we will compare maps of equal size (and shape) only. The maps are compared point to point. The difference at a point is the difference in the direction of the contour at that point on the two maps--that is, the magnitude of the lesser of the two angles made by the two contour lines that run through the two points that correspond to each other on the two maps. The maximum difference at a point is therefore 90 degrees. The entire maps are then compared by adding the pointwise differences over all the points (by integrating over the area of the map).

The purpose of the encoding is to make the comparing of maps simple. The code is so constructed that the difference of two maps at a point is roughly proportional to the distance between the two (3-trit) codewords--one from each map--for that point. We need not even concern ourselves with the finding of the lesser of the two angles made by the crossing of the two contours; the distance between codewords accounts for that automatically. Entire maps are then compared by adding together the distances at the (35) sample points. This is equivalent to computing the distance between the (105-trit) codewords for the two maps. This distance is proportional to the difference between the maps, and it is approximately so because the maps are sampled at a small number of points and because the direction at each point is coded coarsely.
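Given this encoding, the comparison of two maps might be sketched as follows; the brute-force nearest-neighbor lookup stands in for the associative memory, and the names are illustrative:

```python
import numpy as np

def map_distance(v1, v2):
    """L1 distance between two encoded contour maps. Trits differing as
    +1 vs -1 contribute 2 units, so pointwise differences of 30, 60, 90
    degrees cost roughly 2, 4, 6 units, as described in the text."""
    return int(np.sum(np.abs(v1 - v2)))

def nearest(query, stored):
    """Recognize by minimum codeword distance; stored: {label: encoded map}."""
    return min(stored, key=lambda label: map_distance(query, stored[label]))
```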
COMPUTING THE DIRECTION OF THE CONTOUR

We have not explored widely how to compute contours from images and merely outline here one method, not exactly biological, that works for line drawings and two-tone images and that can be generalized to gray-scale images and even to many multicolor images. We have also experimented with oriented, difference-of-Gaussian filters of Parent and Zucker (1985) and with cortex transforms of Watson (1987).

The contours are based on a simple model of attraction, akin to gravity, by assuming that the lines and the edges of the image attract according to their distance from the point. The net attraction at any point on the image defines a gradient field, and the contours are perpendicular to it. In practice we work with pixels and assume, for the sake of the gravity model, that pixels of the same color--same as that of the sample point P for which we are computing the direction--have mass zero and those of the opposite color have mass one.

For the direction to be independent of scale, the attractive force must be inversely proportional to some power of the distance. Powers greater than 2 make the computation local. For example, power 7 means that one pixel, twice as far as another, contributes only 1/128 as much as the other to the net force. To make the attraction somewhat insensitive to noise, a small constant, 3, is added to the distance. (The values 7 and 3 were chosen after a small amount of experimentation.) Hence, pixel X (of mass 1) attracts P with a force of magnitude [d(P,X) + 3]^-7 in the direction of X, where d(P,X) is the (Euclidean) distance between P and X. The vector sum of the forces over all pixels X (of mass 1) then is the attractive force at point P, and the direction of the contour at P is perpendicular to it. The magnitude of the vector sum is scaled by dividing it by the sum of the magnitudes of its components. This scaled magnitude indicates how well the direction is defined in the image.

When this computation is made at a point on a (one-pixel-wide) line, the result is a zero-vector (the gradient at the top of a ridge is zero). However, we want to use the direction of the line itself as the direction of the contour. To this end, we compute at each sample point P another vector that detects linear features, such as lines. This computation is based on the above attraction model, modified as follows: pixels of the same color as P's now have mass one and those of the opposite color have mass zero (the pixel at P being always regarded as having mass zero); and the direction of the force, instead of being the angle from P to X, is twice that angle. The doubling of the angle makes attractive forces in opposite directions (along a line) reinforce each other and in perpendicular directions cancel each other out. The angle of the net force is then halved, and the magnitude of the force is scaled as above.

The two computations yield two vectors, both representing the direction of the contour at a point. They can be combined into a single vector by doubling their angles, to eliminate 180-degree ambiguities, by adding together the resulting vectors, and by halving the angle of the sum. The direction of the result gives the direction of the contour, and the magnitude of the result indicates how well this direction is defined. If the magnitude is below some threshold, the direction is taken to be undefined and is encoded with 000.
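A compressed sketch of such an estimator is given below. For brevity it folds the two vector computations of the text into a single doubled-angle sum over the opposite-color pixels, which is a simplification of, not a substitute for, the full procedure; img, p, and the parameter names are ours:

```python
import numpy as np

def contour_direction(img, p, power=7, eps=3.0):
    """Rough contour-direction estimate at pixel p = (row, col) of a
    two-tone image. Opposite-color pixels pull with weight
    (d + eps)**(-power); angles are doubled before summing so that pulls
    180 degrees apart reinforce (a tangent is only defined mod 180)."""
    rows, cols = np.nonzero(img != img[p])      # mass-1 (opposite-color) pixels
    if rows.size == 0:
        return None, 0.0                        # direction undefined -> code 000
    dy, dx = rows - p[0], cols - p[1]
    w = (np.hypot(dx, dy) + eps) ** (-power)
    z = np.sum(w * np.exp(2j * np.arctan2(dy, dx)))   # doubled-angle sum
    strength = np.abs(z) / np.sum(w)            # in [0, 1]: how well defined
    tangent = (np.angle(z) / 2 + np.pi / 2) % np.pi   # perpendicular to pull
    return np.degrees(tangent), strength
```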
SOME COMPARISONS

The method is very general, which is at once its virtue and limitation. The virtue is that it works where more specific methods fail, the limitation that the specific methods are needed for specific problems. In our preliminary experiments with handwritten Zip-code digits, low-pass filtering (blurring) an image, as a method of encoding it, and contour maps resulted in similar rates of recognition by a sparse distributed memory. Higher rates on this same task were gotten by Denker et al. (1989) by encoding the image in terms of features specific to handwriting.

To get an idea of the generality of contour maps, Figure 4 shows encoded maps of ten normal digits like that in Figure 3, and of three unusual digits barely recognizable by humans. The labels for the unusual ones and for their maps, 8a, 8b, and 9a, tell what digits they were intended to be. Table 2 of distances between the encoded maps shows that 8 gives only the second best match to 8a and 8b, whereas the digit closest to 9a indeed is 9.

FIGURE 4. Contour maps of digits. Unusual text.

TABLE 2
Distances Between Normal and Unusual Digits of Figure 4

        0    1    2    3    4    5    6    7    8    9
8a     62   95   80   74   77   87   65   86   67   79
8b     38   71   88   91   90   83   99   88   51   73
9a     70   89   66   64  109   73  103   62   83   59

This suggests that a system trained on normal letters and digits would do a fair job at recognizing the 'NIPS 1989' at the bottom of Figure 4. Systems that encode characters as bit maps, or that take them as composed of strokes, likewise trained, would not do nearly as well. Going back to the As of Figure 1, they can, with one exception, be recognized based on the map of a normal A. Logograms are a rich source of images of this kind. They are excellent for testing a vision system for generality. Finally, other oriented fields, not just contour maps, can be encoded with methods similar to this for recognition by an associative memory.

Acknowledgements

This research was supported by the National Aeronautics and Space Administration (NASA) with cooperative agreement No. NCC2-387 with the Universities Space Research Association. The idea of contour maps was inspired by the gridfonts of Douglas Hofstadter (1985). The first experiments with the contour-map method were done by Bruno Olshausen. The gravity model arose from discussions with Lauri Kanerva. David Rogers made the computer-drawn illustrations.

References

Denker, J.S., Gardner, W.R., Graf, H.P., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D., Baird, H.S., and Guyon, I. (1989) Neural Network Recognizer for Handwritten Zip Code Digits. In D.S. Touretzky (ed.), Advances in Neural Information Processing Systems, Volume I. San Mateo, California: Kaufmann. 323-331.

Hofstadter, D.R. (1985) Metamagical Themas. New York: Basic Books.

Parent, P., and Zucker, S.W. (1985) Trace Inference, Curvature Consistency, and Curve Detection. Report CIM86-3, McGill Research Center for Intelligent Machines, Montreal, Canada.

Watson, A.B. (1987) The Cortex Transform: Rapid Computation of Simulated Neural Images. Computer Vision, Graphics, and Image Processing 39(3):311-327.
Mixtures of Gaussian Processes

Volker Tresp
Siemens AG, Corporate Technology, Department of Neural Computation
Otto-Hahn-Ring 6, 81730 München, Germany
Volker.Tresp@mchp.siemens.de

Abstract

We introduce the mixture of Gaussian processes (MGP) model, which is useful for applications in which the optimal bandwidth of a map is input dependent. The MGP is derived from the mixture of experts model and can also be used for modeling general conditional probability densities. We discuss how Gaussian processes (in particular in the form of Gaussian process classification, the support vector machine and the MGP model) can be used for quantifying the dependencies in graphical models.

1 Introduction

Gaussian processes are typically used for regression, where it is assumed that the underlying function is generated by one infinite-dimensional Gaussian distribution (i.e. we assume a Gaussian prior distribution). In Gaussian process regression (GPR) we further assume that output data are generated by additive Gaussian noise, i.e. we assume a Gaussian likelihood model. GPR can be generalized by using likelihood models from the exponential family of distributions, which is useful for classification and the prediction of lifetimes or counts. The support vector machine (SVM) is a variant in which the likelihood model is not derived from the exponential family of distributions but rather uses functions with a discontinuous first derivative. In this paper we introduce another generalization of GPR in the form of the mixture of Gaussian processes (MGP) model, which is a variant of the well-known mixture of experts (ME) model of Jacobs et al. (1991). The MGP model allows Gaussian processes to model general conditional probability densities. An advantage of the MGP model is that it is fast to train compared to the neural network ME model. Even more interesting, the MGP model is one possible approach to addressing the problem of input-dependent bandwidth requirements in GPR. Input-dependent bandwidth is useful if either the complexity of the map is input dependent (requiring a higher bandwidth in regions of high complexity) or the input data distribution is input dependent. In the latter case, one would prefer Gaussian processes with a higher bandwidth in regions with many data points and a lower bandwidth in regions with lower data density. If GPR models with different bandwidths are used, the MGP approach allows the system to self-organize by locally selecting the GPR model with the appropriate optimal bandwidth.

Gaussian process classifiers, the support vector machine and the MGP can be used to model the local dependencies in graphical models. Here, we are mostly interested in the case where the dependencies of a set of variables y are modified via Gaussian processes by a set of exogenous variables x. As an example, consider a medical domain in which a Bayesian network of discrete variables y models the dependencies between diseases and symptoms and where these dependencies are modified by exogenous (often continuous) variables x representing quantities such as the patient's age, weight or blood pressure. Another example would be collaborative filtering, where y might represent a set of goods and the correlation between customer preferences is modeled by a dependency network (another example of a graphical model). Here, exogenous variables such as income, gender and social status might be useful quantities to modify those dependencies.

The paper is organized as follows.
In the next section we briefly review Gaussian processes and their application to regression. In Section 3 we discuss generalizations of the simple GPR model. In Section 4 we introduce the MGP model and present experimental results. In Section 5 we discuss Gaussian processes in the context of graphical models. In Section 6 we present conclusions.

2 Gaussian Processes

In Gaussian process regression (GPR) one assumes that a priori a function $f(x)$ is generated from an infinite-dimensional Gaussian distribution with zero mean and covariance $K(x, x_k) = \mathrm{cov}(f(x), f(x_k))$, where the $K(x, x_k)$ are positive definite kernel functions. In this paper we will only use Gaussian kernel functions of the form

$K(x, x_k) = A \exp\left(-\frac{\|x - x_k\|^2}{2 s^2}\right)$

with scale parameter $s$ and amplitude $A$. Furthermore, we assume a set of $N$ training data $D = \{(x_k, y_k)\}_{k=1}^{N}$ where targets are generated following a normal distribution with variance $\sigma^2$ such that

$P(y|f(x)) \propto \exp\left(-\frac{1}{2\sigma^2}(f(x) - y)^2\right). \quad (1)$

The expected value $\hat f(x)$ for an input $x$ given the training data is a superposition of the kernel functions of the form

$\hat f(x) = \sum_{k=1}^{N} w_k K(x, x_k). \quad (2)$

Here, $w_k$ is the weight on the $k$-th kernel. Let $K$ be the $N \times N$ Gram matrix with $(K)_{k,j} = \mathrm{cov}(f(x_k), f(x_j))$. Then we have the relation $f^m = K w$, where the components of $f^m = (f(x_1), \ldots, f(x_N))'$ are the values of $f$ at the locations of the training data and $w = (w_1, \ldots, w_N)'$. As a result of this relationship we can either calculate the optimal $w$ or we can calculate the optimal $f^m$ and then deduce the corresponding $w$-vector by matrix inversion. The latter approach is taken in this paper. Following the assumptions, the optimal $f^m$ minimizes the cost function

$\frac{1}{2\sigma^2}\,(y - f^m)'(y - f^m) + \frac{1}{2}\,(f^m)' K^{-1} f^m \quad (3)$

such that

$\hat f^m = K\,(K + \sigma^2 I)^{-1}\, y.$

Here $y = (y_1, \ldots, y_N)'$ is the vector of targets and $I$ is the $N$-dimensional unit matrix.

3 Generalized Gaussian Processes and the Support Vector Machine

In generalized Gaussian processes the Gaussian prior assumption is maintained, but the likelihood model is now derived from the exponential family of distributions. The most important special cases are two-class classification,

$P(y=1|f(x)) = \frac{1}{1 + \exp(-f(x))},$

and multiple-class classification. Here, $y$ is a discrete variable with $C$ states and

$P(y=i|f_1(x), \ldots, f_C(x)) = \frac{\exp(f_i(x))}{\sum_{j=1}^{C} \exp(f_j(x))}. \quad (4)$

Note that for multiple-class classification, $C$ Gaussian processes $f_1(x), \ldots, f_C(x)$ are used. Generalized Gaussian processes are discussed in Tresp (2000). The special case of classification was discussed by Williams and Barber (1998) from a Bayesian perspective. The related smoothing splines approaches are discussed in Fahrmeir and Tutz (1994). For generalized Gaussian processes, the optimization of the cost function is based on an iterative Fisher scoring procedure. Incidentally, the support vector machine (SVM) can also be considered to be a generalized Gaussian process model with $P(y|f(x)) \propto \exp(-\mathrm{const}\,(1 - y f(x))_+)$. Here, $y \in \{-1, 1\}$, the operation $(\cdot)_+$ sets all negative values equal to zero, and const is a constant (Sollich, 2000).[1] The SVM cost function is particularly interesting since, due to its discontinuous first derivative, many components of the optimal weight vector $w$ are zero, i.e. we obtain sparse solutions.

[1] Properly normalizing the conditional probability density is somewhat tricky and is discussed in detail in Sollich (2000).
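As a minimal sketch of Eqs. (1)-(3) in plain NumPy (the kernel parameters and the toy data are placeholders, and this is not the author's code):

```python
import numpy as np

def gp_regression(X, y, s=1.0, A=1.0, noise=0.1):
    """GPR fit as in Eqs. (1)-(3); noise plays the role of sigma^2."""
    def kernel(Xa, Xb):
        d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
        return A * np.exp(-d2 / (2 * s ** 2))

    # w = (K + sigma^2 I)^-1 y, so that f_hat^m = K w as in the text.
    w = np.linalg.solve(kernel(X, X) + noise * np.eye(len(X)), y)

    def predict(Xq):
        return kernel(Xq, X) @ w            # f_hat(x) = sum_k w_k K(x, x_k)
    return predict

# Toy usage:
X = np.random.randn(100, 1)
y = np.sin(3 * X[:, 0]) + 0.1 * np.random.randn(100)
f_hat = gp_regression(X, y, s=0.5)
print(f_hat(np.array([[0.0]])))
```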
4 Mixtures of Gaussian Processes

GPR employs a global scale parameter $s$. In many applications it might be more desirable to permit an input-dependent scale parameter: the complexity of the map might be input dependent, or the input data density might be nonuniform. In the latter case one might want to use a smaller scale parameter in regions with high data density. This is the main motivation for introducing another generalization of the simple GPR model, the mixture of Gaussian processes (MGP) model, which is a variant of the well-known mixture of experts model of Jacobs et al. (1991). Here, a set of GPR models with different scale parameters is used, and the system can autonomously decide which GPR model is appropriate for a particular region of input space.

Let $F^\mu(x) = \{f_1^\mu(x), \ldots, f_M^\mu(x)\}$ denote this set of $M$ GPR models. The state of a discrete $M$-state variable $z$ determines which of the GPR models is active for a given input $x$. The state of $z$ is estimated by an $M$-class classification Gaussian process model with

$P(z=i|F^z(x)) = \frac{\exp(f_i^z(x))}{\sum_{j=1}^{M} \exp(f_j^z(x))},$

where $F^z(x) = \{f_1^z(x), \ldots, f_M^z(x)\}$ denotes a second set of $M$ Gaussian processes. Finally, we use a set of $M$ Gaussian processes $F^\sigma(x) = \{f_1^\sigma(x), \ldots, f_M^\sigma(x)\}$ to model the input-dependent noise variance of the GPR models. The likelihood model given the state of $z$,

$P(y|z=i, F^\mu(x), F^\sigma(x)) = G(y;\, f_i^\mu(x),\, \exp(2 f_i^\sigma(x))),$

is a Gaussian centered at $f_i^\mu(x)$ and with variance $\exp(2 f_i^\sigma(x))$. The exponential is used to ensure positivity. Note that $G(a; b, c)$ is our notation for a Gaussian density with mean $b$ and variance $c$, evaluated at $a$. In the remaining parts of the paper we will not denote the dependency on the Gaussian processes explicitly; e.g., we will write $P(y|z, x)$ instead of $P(y|z, F^\mu(x), F^\sigma(x))$.

Since $z$ is a latent variable, we obtain with

$P(y|x) = \sum_{i=1}^{M} P(z=i|x)\, G(y;\, f_i^\mu(x),\, \exp(2 f_i^\sigma(x))), \qquad E(y|x) = \sum_{i=1}^{M} P(z=i|x)\, f_i^\mu(x)$

the well-known mixture of experts network of Jacobs et al. (1991), where the $f_i^\mu(x)$ are the (Gaussian process) experts and $P(z=i|x)$ is the gating network. Figure 2 (left) illustrates the dependencies in the MGP model.

4.1 EM Fisher Scoring Learning Rules

Although a priori the functions $f$ are Gaussian distributed, this is not necessarily true for the posterior distribution (in contrast to simple GPR in Section 2), due to the nonlinear nature of the model. Therefore one is typically interested in the minimum of the negative logarithm of the posterior density,

$-\sum_{k=1}^{N} \log \sum_{i=1}^{M} P(z=i|x_k)\, G(y_k;\, f_i^\mu(x_k),\, \exp(2 f_i^\sigma(x_k))) + \frac{1}{2} \sum_{i=1}^{M} (f_i^{\mu,m})' (\Sigma_i^{\mu,m})^{-1} f_i^{\mu,m} + \frac{1}{2} \sum_{i=1}^{M} (f_i^{z,m})' (\Sigma_i^{z,m})^{-1} f_i^{z,m} + \frac{1}{2} \sum_{i=1}^{M} (f_i^{\sigma,m})' (\Sigma_i^{\sigma,m})^{-1} f_i^{\sigma,m}.$

The superscript $m$ denotes the vectors and matrices defined at the measurement points, e.g. $f_i^{\mu,m} = (f_i^\mu(x_1), \ldots, f_i^\mu(x_N))'$. In the E-step, based on the current estimates of the Gaussian processes at the data points, the state of the latent variable is estimated as

$\hat P(z=i|x_k, y_k) = \frac{P(z=i|x_k)\, G(y_k;\, f_i^\mu(x_k),\, \exp(2 f_i^\sigma(x_k)))}{\sum_{j=1}^{M} P(z=j|x_k)\, G(y_k;\, f_j^\mu(x_k),\, \exp(2 f_j^\sigma(x_k)))}.$

In the M-step, based on the E-step, the Gaussian processes at the data points are updated. We obtain

$\hat f_i^{\mu,m} = \Sigma_i^{\mu,m}\, (\Sigma_i^{\mu,m} + W_i^{\mu,m})^{-1}\, y^m,$

where $W_i^{\mu,m}$ is a diagonal matrix with entries $(W_i^{\mu,m})_{kk} = \exp(2 f_i^\sigma(x_k)) / \hat P(z=i|x_k, y_k)$. Note that data with a small $\hat P(z=i|x_k, y_k)$ obtain a small weight. To update the other Gaussian processes, iterative Fisher scoring steps have to be used, as shown in the appendix.

There is a serious problem with overtraining in the MGP approach. The reason is that the GPR model with the highest bandwidth tends to obtain the highest weight in the E-step, since it provides the best fit to the data. There is an easy fix for the MGP: for calculating the responses of the Gaussian processes at $x_k$ in the E-step, we use all training data except $(x_k, y_k)$. Fortunately, this calculation is very cheap in the case of Gaussian processes since, for example,

$y_k - \hat f_{i \setminus k}^{\mu}(x_k) = \frac{y_k - \hat f_i^{\mu}(x_k)}{1 - S_{i,kk}},$

where $\hat f_{i \setminus k}^{\mu}(x_k)$ denotes the estimate at the training data point $x_k$ obtained without using $(x_k, y_k)$. Here, $S_{i,kk}$ is the $k$-th diagonal element of $S_i = \Sigma_i^{\mu,m}\, (\Sigma_i^{\mu,m} + W_i^{\mu,m})^{-1}$.[2]

[2] See Hofmann (2000) for a discussion of the convergence of this type of algorithm.
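A compact sketch of the resulting predictive density and of the E-step with the leave-one-out correction (the shapes and names are ours, and the gating and variance processes are assumed to be already evaluated at the data points):

```python
import numpy as np

def mgp_density(y, gate, mu, var):
    """P(y|x) = sum_i P(z=i|x) G(y; mu_i(x), var_i(x)) for one input x.
    gate, mu, var: length-M arrays evaluated at that x."""
    comp = np.exp(-(y - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return float(gate @ comp)

def e_step(y, mu_hat, var, gate, S_diag):
    """Posterior P(z=i|x_k, y_k) over experts, using the leave-one-out
    residual (y_k - f_i(x_k)) / (1 - S_i,kk) to keep the highest-bandwidth
    expert from swallowing all the weight.
    y: (N,); mu_hat, var, gate, S_diag: (M, N) arrays."""
    r = (y[None, :] - mu_hat) / (1.0 - S_diag)        # leave-one-out residuals
    resp = gate * np.exp(-r ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return resp / resp.sum(axis=0, keepdims=True)

# Shape check with random placeholders:
M, N = 3, 100
P = e_step(np.random.randn(N), np.random.randn(M, N), np.ones((M, N)),
           np.full((M, N), 1.0 / M), np.full((M, N), 0.1))
assert np.allclose(P.sum(axis=0), 1.0)
```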
Fortunately, this calculation is very cheap in the case of Gaussian processes since for example )_ Yk - it(Xk) 1_/1-( i Xk - Yk 1- Si ,kk where it(Xk) denotes the estimates at the training data point Xk not using (Xk, Yk) . Here, Si,kk is the k-th diagonal element of Si = 'Er,m ('Er,m + wr,m)-1. 2 2 See Hofmann (2000) for a discussion of the convergence of this type of algorithms. ~/ ./ / / 0.5 vpt \. 0.5 ~ / / ~ III ti 0 > c 0 u V (\ i.: -1 'JllA -2 -1 I il. -0.5 I I I I I I / .Ii 0 x -1 -2 2 -1 0 2 x 0.8 0.5 ~0.6 c.. ?Ii (9 :2 a:- 0.4 0 -0.5 0.2 0 -2 0 :( I -0.5 / Q) -1 -1 0 2 -2 x -1 0 2 x Figure 1: The input data are generated from a Gaussian distribution with unit variance and mean O. The output data are generated from a step function (0, bottom right). The top left plot shows the map formed by three GPR models with different bandwidths. As can be seen no individual model achieves a good map. Then a MGP model was trained using the three GPR models. The top right plot shows the GPR models after convergence. The bottom left plot shows P{ z = ilx). The GPR model with the highest bandwidth models the transition at zero, the GPR model with an intermediate bandwidth models the intermediate region and the GPR model with the lowest bandwidth models the extreme regions. The bottom right plot shows the data 0 and the fit obtained by the complete MGP model which is better than the map formed by any of the individual GPR models. 4.2 Experiments Figure 1 illustrates how the MGP divides up a complex task into subtasks modeled by the individual GPR models (see caption). By dividing up the task, the MGP model can potentially achieve a performance which is better than the performance of any individual model. Table 1 shows results from artificial data sets and real world data sets. In all cases, the performance of the MGP is better than the mean performance of the GPR models and also better than the performance of the mean (obtained by averaging the predictions of all GPR models). 5 Gaussian Processes for Graphical Models Gaussian processes can be useful models for quantifying the dependencies in Bayesian networks and dependency networks (the latter were introduced in Hofmann and Tresp, 1998, Heckerman et ai., 2000), in particular when parent variables are continuous quantities. If the child variable is discrete, Gaussian process classification or the SVM are appropriate models whereas when the child variable is continuous, the MGP model can be employed as a general conditional density estimator. Typically one would require that the continuous input variables to the Gaussian process systems x are known. It might therefore be Table 1: The table shows results using artificial and real data sets of size N = 100 using M = 10 GPR models. The data set ART is generated by adding Gaussian noise with a standard deviation of 0.2 to a map defined by 5 normalized Gaussian bumps. numin is the number of inputs. The bandwidth s was generated randomly between 0 and max. s. Furthermore, mean peif. is the mean squared test set error of all GPR networks and peif. of mean is the mean squared test set error achieved by simple averaging the predictions. The last column shows the performance of the MGP. I Data I numin I max. s I mean perf. I perf. 
useful to consider those as exogenous variables which modify the dependencies in a graphical model of y-variables, as shown in Figure 2 (right). As an example, consider a medical domain in which a Bayesian network of discrete variables y models the dependencies between diseases and symptoms and where these dependencies are modified by exogenous (often continuous) variables x representing quantities such as the patient's age, weight or blood pressure. Another example would be collaborative filtering, where y might represent a set of goods and the correlation between customer preferences is modeled by a dependency network as in Heckerman et al. (2000). Here, exogenous variables such as income, gender and social status might be useful quantities to modify those correlations. Note that the GPR model itself can also be considered to be a graphical model with dependencies modeled as Gaussian processes (compare Figure 2). Readers might also be interested in the related and independent paper by Friedman and Nachman (2000), in which those authors used GPR systems (not in the form of the MGP) to perform structural learning in Bayesian networks of continuous variables.

6 Conclusions

We demonstrated that Gaussian processes can be useful building blocks for forming complex probabilistic models. In particular, we introduced the MGP model and demonstrated how Gaussian processes can model the dependencies in graphical models.

7 Appendix

For $f^z$ and $f^\sigma$, the mode estimates are found by iterating Newton-Raphson equations $\hat f^{(l+1)} = \hat f^{(l)} - H^{-1}(l)\, J(l)$, where $J(l)$ is the Jacobian and $H(l)$ the Hessian matrix, for which certain interactions are ignored. One obtains, for $l = 1, 2, \ldots$, the following update equations:

$f_i^{z,m,(l+1)} = \Sigma_i^{z,m}\, \bigl(\Sigma_i^{z,m} + W_i^{z,m,(l)}\bigr)^{-1} \bigl(W_i^{z,m,(l)}\, d_i^{z,m,(l)} + f_i^{z,m,(l)}\bigr),$

where

$d_i^{z,m,(l)} = \bigl(\hat P(z=i|x_k, y_k) - P^{(l)}(z=i|x_k)\bigr)_{k=1}^{N}$

and

$W_i^{z,m,(l)} = \mathrm{diag}\bigl(\bigl[P^{(l)}(z=i|x_k)\,(1 - P^{(l)}(z=i|x_k))\bigr]^{-1}\bigr)_{k=1}^{N}.$

Figure 2: Left: The graphical structure of an MGP model consisting of the discrete latent variable z, the continuous variable y and the input variable x. The probability density of z is dependent on the Gaussian processes $F^z$. The probability distribution of y is dependent on the state of z and on the Gaussian processes $F^\mu$, $F^\sigma$. Right: An example of a Bayesian network which contains the variables $y_1, y_2, y_3, y_4$. Some of the dependencies are modified by x via Gaussian processes $f_1, f_2, f_3$.

Similarly,

$f_i^{\sigma,m,(l+1)} = \Sigma_i^{\sigma,m}\, \bigl(\Sigma_i^{\sigma,m} + W_i^{\sigma,m,(l)}\bigr)^{-1} \bigl(W_i^{\sigma,m,(l)}\, d_i^{\sigma,m,(l)} + f_i^{\sigma,m,(l)}\bigr),$

where e is an N-dimensional vector of ones and $d_i^{\sigma,m,(l)}$ and $W_i^{\sigma,m,(l)}$ are the corresponding gradient and curvature terms (involving e) for the log-variance processes.

References

[1] Jacobs, R. A., Jordan, M. I., Nowlan, S. J., Hinton, G. E. (1991). Adaptive Mixtures of Local Experts. Neural Computation, 3.

[2] Tresp, V. (2000). The Generalized Bayesian Committee Machine. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD-2000.

[3] Williams, C. K. I., Barber, D. (1998). Bayesian Classification with Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12).

[4] Fahrmeir, L., Tutz, G. (1994). Multivariate Statistical Modeling Based on Generalized Linear Models. Springer.

[5] Sollich, P. (2000). Probabilistic Methods for Support Vector Machines. In Solla, S. A., Leen, T. K., Müller, K.-R. (Eds.), Advances in Neural Information Processing Systems 12, MIT Press.
[6] Hofmann, R. (2000). Lernen der Struktur nichtlinearer Abhängigkeiten mit graphischen Modellen. PhD Dissertation.

[7] Hofmann, R., Tresp, V. (1998). Nonlinear Markov Networks for Continuous Variables. In Jordan, M. I., Kearns, M. S., Solla, S. A. (Eds.), Advances in Neural Information Processing Systems 10, MIT Press.

[8] Heckerman, D., Chickering, D., Meek, C., Rounthwaite, R., Kadie, C. (2000). Dependency Networks for Inference, Collaborative Filtering, and Data Visualization. Journal of Machine Learning Research, 1.

[9] Friedman, N., Nachman, I. (2000). Gaussian Process Networks. In Boutilier, C., Goldszmidt, M. (Eds.), Proc. Sixteenth Conf. on Uncertainty in Artificial Intelligence (UAI).
Interactive Parts Model: an Application to Recognition of On-line Cursive Script

Predrag Neskovic, Philip C. Davis* and Leon N. Cooper
Physics Department and Institute for Brain and Neural Systems
Brown University, Providence, RI 02912

Abstract

In this work, we introduce an Interactive Parts (IP) model as an alternative to Hidden Markov Models (HMMs). We tested both models on a database of on-line cursive script. We show that implementations of HMMs and the IP model, in which all letters are assumed to have the same average width, give comparable results. However, in contrast to HMMs, the IP model can handle duration modeling without an increase in computational complexity.

1 Introduction

Hidden Markov models [9] have been a dominant paradigm in speech and handwriting recognition over the past several decades. The success of HMMs is primarily due to their ability to model the statistical and sequential nature of speech and handwriting data. However, HMMs have a number of weaknesses [2]. First, the discriminative powers of HMMs are weak since the training algorithm is based on a Maximum Likelihood Estimate (MLE) criterion, whereas the optimal training should be based on a Maximum a Posteriori (MAP) criterion [2]. Second, in most HMMs, only first or second order dependencies are assumed. Although explicit duration HMMs model data more accurately, the computational cost of such modeling is high [5]. To overcome the first problem, it has been suggested [1, 11, 2] that Neural Networks (NNs) should be used for estimating emission probabilities. Since NNs cannot deal well with sequential data, they are often used in combination with HMMs as hybrid NN/HMM systems [2, 11]. In this work, we introduce a new model that provides a possible solution to the second problem. In addition, this new objective function can be cast into a NN-based framework [7, 8] and can easily deal with the sequential nature of handwriting. In our approach, we model an object as a set of local parts arranged at specific spatial locations.

* Now at MIT Lincoln Laboratory, Lexington, MA 02420-9108

Figure 1: Effect of shape distortions and spatial distortions applied on the word "act".

Figure 2: Some of the non-zero elements of the detection matrix associated with the word "act".

Parts-based representation has been used in face detection systems [3] and has recently been applied to spotting keywords in cursive handwriting data [4]. Although the model proposed in [4] presents a rigorous probabilistic approach, it only models the positions of key-points and, in order to learn the appropriate statistics, it requires many ground-truthed training examples. In this work, we focus on modeling one-dimensional objects. In our application, an object is a handwritten word and its parts are the letters. However, the method we propose is quite general and can easily be extended to two-dimensional problems.

2 The Objective Function

In our approach, we assume that a handwritten pattern is a distorted version of one of the dictionary words. Furthermore, we assume that any distortion of a word can be expressed as a combination of two types of local distortions [6]: a) shape distortions of one or more letters, and b) spatial distortions, also called domain warping, as illustrated in Figure 1. In the latter case, the shape of each letter is unchanged but the location of one or more letters is perturbed.
Shape distortions can be captured using "letter detectors". A number of different techniques can be used to construct letter detectors. In our implementation, we use a neural network-based approach. The output of a letter detector is in the range [0, 1], where 1 corresponds to the undistorted shape of the corresponding letter. Since it is not known, a priori, where the letters are located in the pattern, letter detectors, for each letter of the alphabet, are arranged over the pattern so that the pattern is completely covered by their (overlapping) receptive fields. The outputs of the letter detectors form a detection matrix, Figure 2. Each row of the detection matrix represents one letter and each column corresponds to the position of the letter within the pattern. An element of the detection matrix is labeled as $d^k(x)$, where $k$ denotes the class of the letter, $k \in [1, \ldots, 26]$, and $x$ represents the column number. In general, the detection matrix contains a large number of "false alarms" due to the fact that local segments are often ambiguous. The recognition system segments a pattern by selecting one detection matrix element for each letter of a given dictionary word.[1]

To measure spatial distortions, one must first choose a reference point from which distortions are measured. It is clear that, for any choice of reference point, the location estimates for letters that are not close to the reference point might be very poor. For this reason, we chose a representation in which each letter serves as a reference point to estimate the position of every other letter. This representation allows translation-invariant recognition, is very robust (since it does not depend on any single reference point) and very accurate (since it includes nearest neighbor reference points).

To evaluate the level of distortion of a handwritten pattern from a given dictionary word, we introduce an objective function. The value of this function represents the amount of distortion of the pattern from the dictionary word. We require that the objective function reach a minimal value if all the letters that constitute the dictionary word are detected with the highest confidence and are located at the locations with the highest expectation values. Furthermore, we require that the dependence of the function on any one of its letters be smaller for longer words. One function with properties similar to these is the energy function of a system of interacting particles, $\sum_{i,j} q_i\, U_{i,j}(x_i, x_j)\, q_j$. If we assume that all the letters are of the same size, we can 1) map letter detection estimates into "charges" and 2) choose interaction terms (potentials) to reflect the expected relative positioning of the letters (detection matrix elements). The energy function of the $n$-th dictionary word is then

$E^n(\vec{x}) = \sum_{i,j=1,\, i \neq j}^{L_n} d_i^n(x_i)\; U_{i,j}^n(x_i, x_j)\; d_j^n(x_j), \quad (1)$

where $L_n$ is the number of letters in the word, $x_i$ is the location of the $i$-th letter of the $n$-th dictionary word, and $\vec{x} = (x_1, \ldots, x_{L_n})$ is a particular configuration of detection matrix elements. Although this equation has a similar form to, say, the Coulomb energy, it is much more complicated. The interaction terms $U_{i,j}$ are more complex than $1/r$, and each "charge", $d_i(x_i)$, does not have a fixed value but depends on its location. Note that this energy is a function of a specific choice of elements from the detection matrix, $\vec{x}$, i.e. of a specific segmentation of the word.

[1] Note that this segmentation corresponds to finding the centers of the letters, as opposed to segmenting a word into letters by finding their boundaries.
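As an illustration only (the paper gives no code; D, U, and the variable names here are hypothetical), Eq. (1) might be evaluated as:

```python
def energy(word, seg, D, U):
    """Evaluate Eq. (1) for one dictionary word and one segmentation.

    word: letter-class indices (k_1, ..., k_Ln); seg: chosen columns
    (x_1, ..., x_Ln); D[k][x]: detection matrix value for letter class k
    at column x; U(i, j, xi, xj): learned interaction potential, assumed
    to return 0 outside a letter's range of influence (Fig. 4).
    """
    L = len(word)
    E = 0.0
    for i in range(L):
        for j in range(L):
            if i != j:
                E += D[word[i]][seg[i]] * U(i, j, seg[i], seg[j]) * D[word[j]][seg[j]]
    return E
```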
Interaction terms can be calculated from training data in a number of different ways. One possibility is to use the EM algorithm [9] and do the training for each dictionary word. Another possibility is to propagate nearest neighbor estimates. Let us denote by the symbol $p_{ij}^n(x_i, x_j)$ the (pairwise) probability of finding the j-th letter of the n-th dictionary word at distance $x = x_j - x_i$ from the location of the i-th letter. A simple way to approximate pairwise probabilities is to find the probability distribution of letter widths for each letter, and then from the single-letter distributions calculate nearest neighbor pairwise probabilities. Knowing the nearest neighbor probabilities, it is then easy to propagate them and find the pairwise probabilities between any two letters of any dictionary word [7]. Interaction potentials are related to pairwise probabilities (using the Boltzmann distribution and setting $\beta = 1/kT = 1$) as $U_{i,j}^n(x_i, x_j) = -\ln p_{ij}^n(x_i, x_j) + C$. Since the interaction potentials are defined up to a constant, we can selectively change the value of their minima by choosing different values for C, Fig. 3.

It is important to stress that the only valid domain for the interaction terms is the region for which $U_{i,j} < 0$, since for each pair of letters (i, j) we want to simultaneously minimize the interaction term $U_{i,j}$ and maximize the term $d_i \cdot d_j$.² We will assume that there is a value, $p_{min}$, for the pairwise probability below which the estimate of the letter location is not reliable. So, for every $p_{ij}$ such that $0 < p_{ij} < p_{min}$, we set $p_{ij} = p_{min}$. We choose the value of the constant such that $U_{i,j} = -\ln(p_{min}) + C = 0$, Fig. 4. In practice, this means that there is an effective range of influence for each letter, and beyond that range the influence of the letter is zero. In the limiting case, one can get a nearest neighbor approximation by appropriately setting $p_{min}$. It is clear that the interaction terms put constraints on the possible locations of the letters of a given dictionary word. They define "allowed" regions, where the letters can be found, unimportant regions, where the influence of a letter on other letters is zero, and not-allowed regions ($U = \infty$), which have zero probability of containing a letter, Fig. 4.

² For $U_{i,j} > 0$, increasing $d_i \cdot d_j$ would increase, rather than decrease, the energy function.

[Figure 3: Solid line: an example of a pairwise probability distribution for neighboring letters. Dashed lines: a family of corresponding interaction potentials.]
[Figure 4: Modified interaction potential. Regions x ≤ a and x ≥ b are the "forbidden" regions for letter locations. In the regions a < x < a' and b' < x < b the interaction term is zero.]

The task of recognition can now be formulated as follows. For a given dictionary word, find the configuration of elements from the detection matrix (a specific segmentation of the pattern) such that the energy is minimal. Then, in order to find the best dictionary word, repeat the previous procedure for every dictionary word and associate with the pattern the dictionary word with lowest energy. If we denote by X the space of all possible segmentations of the pattern, then the final segmentation of the pattern, $x^*$, is given as

$$x^* = \arg\min_{\mathbf{x} \in X,\, n \in N} E_n(\mathbf{x}), \qquad (2)$$

where the index n runs through the dictionary words.
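A small sketch of the clipping scheme just described: potentials are obtained as $U = -\ln p + C$ with $C = \ln p_{min}$, so that the potential vanishes wherever the pairwise probability falls below $p_{min}$. The function name and the array-based interface are our assumptions.

```python
import numpy as np

def potentials_from_probs(p, p_min):
    """Interaction potentials U = -ln(p) + C with C = ln(p_min), so that
    U = 0 wherever p <= p_min (a sketch of the clipping in the text)."""
    p_clipped = np.maximum(p, p_min)        # p < p_min -> unreliable estimate
    U = -np.log(p_clipped) + np.log(p_min)  # U <= 0 in the valid domain
    return U
```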
3 Implementation and an Overview of the System

An overview of the system is illustrated in Fig. 5. A raw data file, representing a handwritten word, contains the x and y positions of the pen, recorded every 10 milliseconds. This input signal is first transformed into strokes, which are defined as lines between points with zero velocity in the y direction. Each stroke is characterized by a set of features, as suggested in [10]. The preprocessor extracts these features from each stroke and supplies them to the neural network. We have built a multi-layer feedforward network based on a weight-sharing technique to detect letters. This particular architecture was proposed by Rumelhart [10]. Similar networks can also be found in the literature under the name Time Delay Neural Network (TDNN) [11]. In our implementation, the network has one hidden layer with thirty rows of hidden units. For details of the network architecture see [10, 7]. The output of the NN, the detection matrix, is then supplied to the HMM-based and IP model-based post-processors, Fig. 5. For both models, we assume that every letter has the same average width.

[Figure 5: An overview of the system: a handwritten pattern is processed by the preprocessor and the letter-detection network, whose detection matrix is passed to both the HMM and IP model post-processors, each of which outputs its most likely word.]
[Figure 6: Comparison of recognition results on 10 writers (by writer number) using the IP model and HMMs.]

Interaction Terms. The first approximation for the interaction terms is to assume a "square well" shape. Each interaction term is then defined with only three parameters: the left boundary a, the right boundary b, and the depth of the well, $e_n$, which are the same for all the nearest neighbor letters, Fig. 7. The lower and upper limits for the i-th and j-th non-adjacent interaction terms can then be approximated as $a_{ij} = |j - i| \cdot a$ and $b_{ij} = |j - i| \cdot b$, respectively.

Nearest Neighbor Approximation. Since the exact solution of the energy minimization given by Eq. (2) is often computationally infeasible (the detection matrices can exceed 40 columns in width for long words), one has to use some approximation technique. One possible solution is suggested in [7], where contextual information is used to constrain the search space. Another possibility is to revise the energy function by considering only nearest neighbor terms and then solve it exactly using a Dynamic Programming (DP) algorithm. We have used DP to find the optimal segmentation for each word. We then use this "optimal" configuration of letters to calculate the energy given by Eq. (1). It is important to mention that we have introduced beginning (B) and end (E) "letters" to mark the beginning and end of the pattern; their detection probabilities are set to some constant value.³

³ This is necessary in order to define interaction potentials for single-letter words.

[Figure 7: Square well approximation of the interaction potential. The allowed region is defined as a < x < b, and the forbidden regions are x < a and x > b.]
[Figure 8: The probability of remaining in the same state for exactly d time steps: HMMs (dashed line) vs. expected probability (solid line).]
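Under the nearest-neighbor approximation, the energy becomes a chain of pairwise terms and can be minimized exactly with a Viterbi-style dynamic program over detection-matrix columns. The following sketch illustrates this; it assumes a left-to-right letter order and a square-well potential supplied as a callable, and it is our illustration rather than the authors' code.

```python
import numpy as np

def dp_segmentation(d, letters, U_nn, T):
    """Nearest-neighbor DP segmentation (a sketch).

    d       : (26, T) detection matrix
    letters : alphabet indices of the word's letters, length L
    U_nn    : U_nn(x_prev, x) -> square-well potential; forbidden regions
              should be encoded by a large positive penalty, not np.inf
    Finds locations x_1 < ... < x_L minimizing the chain energy
    sum_i d_i(x_i) * U_nn(x_i, x_{i+1}) * d_{i+1}(x_{i+1}).
    """
    L = len(letters)
    cost = np.zeros((L, T))
    back = np.zeros((L, T), dtype=int)
    for i in range(1, L):
        for x in range(T):
            best, arg = np.inf, 0
            for xp in range(x):            # letters appear left to right
                e = cost[i - 1, xp] + \
                    d[letters[i - 1], xp] * U_nn(xp, x) * d[letters[i], x]
                if e < best:
                    best, arg = e, xp
            cost[i, x], back[i, x] = best, arg
    # backtrack from the best final column
    path = [int(np.argmin(cost[L - 1]))]
    for i in range(L - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```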
Hidden Markov Models. The goal of the recognition system is to find the dictionary word with the maximum posterior probability, $p(w|O) = p(O|w)\,p(w)/p(O)$, given the handwritten pattern, O. Since $p(O)$ and $p(w)$ are the same for all dictionary words, maximizing $p(w|O)$ is equivalent to maximizing $p(O|w)$. To find $p(O|w)$, we constructed a left-right (or Bakis) HMM [9] for each dictionary word, $\lambda^n$, where each letter was represented by one state. Given a dictionary word (a model $\lambda^n$), we calculated the maximum likelihood, $p(O|\lambda^n) = \sum_{\text{all } Q} P(O, Q|\lambda^n) = \sum_{\text{all } Q} P(O|Q, \lambda^n)\, p(Q|\lambda^n)$, where the summation is done over all possible state sequences. We used the forward-backward procedure [9] for calculating this sum. Emission probabilities were calculated from the detection probabilities using Bayes' rule, $P(O_x|q_k) = d_k(x)\,P(O_x)/P(q_k)$, where $P(q_k)$ denotes the frequency of the k-th letter in the dictionary; the term $P(O_x)$ is the same for all words and can therefore be omitted. Transition probabilities were adjusted until the best recognition results were obtained. Recall that we assumed that all letter widths are the same, and therefore the transition probabilities are independent of letter pairs.

4 Results and Discussion

Our dataset (obtained from David Rumelhart [10]) consists of words written by 100 different writers, where each writer wrote 1000 words. The size of the dictionary is 1000 words. The neural network was trained on 70 writers (70,000 words) and an independent group of writers was used as a cross-validation set. We have tested both the IP model and HMMs on a group of 10 writers (different from the training and cross-validation groups). The results for each model are depicted in Fig. 6. The IP model chose the correct word 79.89% of the time, while HMMs selected the correct word 79.44% of the time. Although the overall performance of the two models was almost identical, the results differ by several percent on individual writers. This suggests that our model could be used in combination with HMMs (e.g., with some averaging technique) to improve overall recognition. It is important to mention that new words can easily be added to the dictionary, and the IP model does not require retraining on the new words (using the method of calculating interaction terms suggested in this paper). The only information about the new word that has to be supplied to the system is the ordering of its letters. Knowing the nearest neighbor pairwise probabilities, $p_{ij}(x_i, x_j)$, it is easy to calculate the location estimates between any two letters of the new word. Furthermore, the IP model can easily recognize words in which many of the letters are highly distorted or missing.

In standard first-order HMMs with time-independent transition probabilities, the probability of remaining in the i-th state for exactly d time steps decays geometrically, as illustrated in Fig. 8. The real probability distribution of letter widths is actually similar to a Poisson distribution [11], Fig. 8. It has been shown that explicit duration HMMs can significantly improve recognition accuracy, but at the expense of a significant increase in computational complexity [5]. Our model, on the other hand, can easily model arbitrarily complex pairwise probabilities without increasing the computational complexity (using DP in a nearest neighbor approximation). We think that this is one of the biggest advantages of our approach over HMMs. We believe that including more precise interaction terms will yield significantly better results (as in HMMs), and this work is currently in progress.
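The duration mismatch in Fig. 8 is easy to reproduce numerically: a first-order HMM with self-transition probability $a$ forces the geometric duration law $P(d) = a^{d-1}(1-a)$, while observed letter widths look roughly Poisson. A brief sketch follows; the self-transition value 0.8 and mean width of 6 frames are assumed for illustration.

```python
import numpy as np
from scipy.stats import poisson

def geometric_duration(a_stay, d):
    """P(exactly d steps in a state) for a first-order HMM with
    time-independent self-transition probability a_stay."""
    return (a_stay ** (d - 1)) * (1.0 - a_stay)

# The HMM imposes an exponentially decaying duration profile, while
# observed letter widths are roughly Poisson (cf. Fig. 8).
d = np.arange(1, 21)
p_hmm = geometric_duration(0.8, d)
p_obs = poisson.pmf(d, mu=6.0)   # assumed mean width of 6 frames
```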
Acknowledgments

Supported in part by the Office of Naval Research. The authors thank the members of the Institute for Brain and Neural Systems for helpful conversations.

References

[1] Y. Bengio, Y. LeCun, C. Nohl, and C. Burges. Lerec: A NN/HMM hybrid for on-line handwriting recognition. Neural Computation, 7:1289-1303, 1995.
[2] H. Bourlard and C. Wellekens. Links between hidden Markov models and multilayer perceptrons. IEEE Transactions on PAMI, 12:1167-1178, 1990.
[3] M. Burl, T. Leung, and P. Perona. Recognition of planar object classes. In Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recogn., 1996.
[4] M. Burl and P. Perona. Using hierarchical shape models to spot keywords in cursive handwriting data. In Proc. CVPR 98, 1998.
[5] C. Mitchell and L. Jamieson. Modeling duration in a hidden Markov model with the exponential family. In Proc. ICASSP, pages 331-334, 1993.
[6] D. Mumford. Neuronal architectures for pattern-theoretic problems. In C. Koch and J. L. Davis, editors, Large-Scale Neuronal Theories of the Brain, pages 125-152. MIT Press, Cambridge, MA, 1994.
[7] P. Neskovic. Feedforward, Feedback Neural Networks With Context Driven Segmentation And Recognition. PhD thesis, Brown University, Physics Dept., May 1999.
[8] P. Neskovic and L. Cooper. Neural network-based context driven recognition of on-line cursive script. In 7th IWFHR, 2000.
[9] L. Rabiner and B. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4-16, 1986.
[10] D. E. Rumelhart. Theory to practice: A case study - recognizing cursive handwriting. In E. B. Baum, editor, Computational Learning and Cognition: Proceedings of the Third NEC Research Symposium. SIAM, Philadelphia, 1993.
[11] M. Schenkel, I. Guyon, and D. Henderson. On-line cursive script recognition using time delay neural networks and hidden Markov models. Machine Vision and Applications, 8:215-223, 1995.
Noise suppression based on neurophysiologically-motivated SNR estimation for robust speech recognition

Jürgen Tchorz, Michael Kleinschmidt, Birger Kollmeier
Medical Physics Group, Oldenburg University, 26111 Oldenburg, Germany

Abstract

A novel noise suppression scheme for speech signals is proposed which is based on a neurophysiologically-motivated estimation of the local signal-to-noise ratio (SNR) in different frequency channels. For SNR estimation, the input signal is transformed into so-called Amplitude Modulation Spectrograms (AMS), which represent both spectral and temporal characteristics of the respective analysis frame, and which imitate the representation of modulation frequencies in higher stages of the mammalian auditory system. A neural network is used to analyse AMS patterns generated from noisy speech and estimates the local SNR. Noise suppression is achieved by attenuating frequency channels according to their SNR. The noise suppression algorithm is evaluated in speaker-independent digit recognition experiments and compared to noise suppression by Spectral Subtraction.

1 Introduction

One of the major problems of automatic speech recognition (ASR) systems is their lack of robustness in noise, which severely degrades their usefulness in many practical applications. Several proposals have been made to increase the robustness of ASR systems, e.g., by model compensation or more noise-robust feature extraction [1, 2]. Another method to increase the robustness of ASR systems is to suppress the background noise before feature extraction. Classical approaches for single-channel noise suppression are Spectral Subtraction [3] and related schemes, e.g., [4], where the noise spectrum is usually measured in detected speech pauses and subtracted from the signal. In these approaches, stationarity of the noise has to be assumed while speech is active. Furthermore, portions detected as speech pauses must not contain any speech in order to allow for correct noise measurement. At the same time, all actual speech pauses should be detected for a fast update of the noise measurement. In reality, however, these partially conflicting requirements are often not met.

The noise suppression algorithm outlined in this work directly estimates the local SNR in a range of frequency channels even if speech and noise are present at the same time, i.e., no explicit detection of speech pauses and no assumptions on noise stationarity during speech activity are necessary. For SNR estimation, the input signal is transformed into spectro-temporal input features which are neurophysiologically motivated: experiments on amplitude modulation processing in higher stages of the auditory system in mammals show that modulations are represented in "periodotopical" gradients, which are almost orthogonal to the tonotopical organization of center frequencies [5]. Thus, both spectral and temporal information is represented in two-dimensional maps. These findings were applied to signal processing in a binaural noise suppression system [6] with the introduction of so-called Amplitude Modulation Spectrograms (AMS), which contain information on both center frequencies and modulation frequencies. In the present study, the different representations of speech and noise in AMS patterns are detected by a neural network, which estimates the local SNR in each frequency channel.
For noise suppression, the frequency bands are attenuated according to the estimated local SNR in the different frequency channels. The proposed noise suppression scheme is evaluated in isolated-digit recognition experiments. As recognizer, a combination of an auditory-based front end [2] and a locally-recurrent neural network [7] is used. This combination was found to allow for more robust isolated-digit recognition rates, compared to a standard recognizer with mel-cepstral features and HMM modeling [8, 9]. Thus, the recognition experiments in this study were conducted with this particular combination to evaluate whether a further increase of robustness can be achieved with additional noise suppression.

2 The recognition system

2.1 Noise suppression

Figure 1 shows the processing steps which are performed for noise suppression. To generate the AMS patterns which are used for SNR estimation, the input signal (16 kHz sampling rate) is short-term level adjusted, i.e., each 32 ms segment which is later transformed into an AMS pattern is scaled to the same root-mean-square value. The level-adjusted signal is then subdivided into overlapping segments of 4.0 ms duration with a progression of 0.25 ms for each new segment. Each segment is multiplied by a Hanning window and padded with zeros to obtain a frame of 128 samples, which is transformed with an FFT into a complex spectrum, with a spectral resolution of 125 Hz. The resulting 64 complex samples are considered as a function of time, i.e., as a band-pass filtered complex time signal. Their respective envelopes are extracted by squaring. This envelope signal is again segmented into overlapping segments of 128 samples (32 ms) with an overlap of 64 samples. Each segment is multiplied with a Hanning window and padded with zeros to obtain a frame of 256 samples. A further FFT is computed and supplies a modulation spectrum in each frequency channel, with a modulation frequency resolution of 15.6 Hz. By an appropriate summation of neighbouring FFT bins, the frequency axis is transformed to a Bark scale with 15 channels, with center frequencies from 100-7300 Hz.

[Figure 1: Processing stages of AMS-based noise suppression: level normalization of the input signal, band-pass time signals (FFT), envelope extraction, overlap-add analysis, modulation spectrogram, rescaling and log-amplitude compression, output signal.]
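As a rough illustration of the two-stage FFT analysis above, here is a simplified sketch that computes an AMS-like pattern for one 32 ms frame. It omits the level normalization, the Bark-scale summation, the logarithmic modulation-frequency scaling, and the restriction to 50-400 Hz, so the output is a raw spectro-temporal map rather than the final 15 x 15 pattern.

```python
import numpy as np

def ams_pattern(x, fs=16000):
    """Compute one AMS-like pattern from a 32 ms frame (a simplified
    sketch of the two-stage FFT analysis of Section 2.1)."""
    hop, win = int(0.00025 * fs), int(0.004 * fs)   # 0.25 ms hop, 4 ms window
    n_seg = (len(x) - win) // hop + 1
    spec = np.empty((65, n_seg))                    # 128-point FFT -> 65 bins
    for s in range(n_seg):
        seg = x[s * hop : s * hop + win] * np.hanning(win)
        spec[:, s] = np.abs(np.fft.rfft(seg, n=128)) ** 2  # squared envelope
    # second FFT along time: modulation spectrum per frequency channel
    env = spec * np.hanning(spec.shape[1])
    mod = np.abs(np.fft.rfft(env, n=256, axis=1))   # 15.6 Hz resolution
    return np.log(mod + 1e-10)                      # log-compressed amplitudes
```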
For classifying AMS patterns and estimating the narrow-band SNR of each AMS pattern, a feed-forward neural network is employed. The net consists of 225 input neurons (15*15, the AMS resolution of center frequencies and modulation frequencies, respectively), a hidden layer with 160 neurons, and an output layer with 15 neurons. The activity of each output neuron indicates the SNR in one of the 15 center frequency channels. For training, the narrow-band SNRs in 15 channels were measured for each AMS analysis frame of the training material prior to adding speech and noise. The neural network was trained with AMS patterns generated from 72 min of noisy speech from 400 talkers and 41 natural noise types, using the momentum backpropagation algorithm. After training, AMS patterns generated from "unknown" sound material are presented to the network. The 15 output neuron activities that appear for each pattern serve as SNR estimates for the respective frequency channels. In a detailed study on AMS-based broad-band SNR estimation [11) it was shown that harmonicity which is well represented in AMS patterns is the most important cue for the neural network to distinguish between speech and noise. However, harmonicity is not the only cue, as the algorithm allows for reliable discrimination between unvoiced speech and noise. The accuracy of SNR 55 73 100 135 192 246 Modulation Frequency [Hz] 333 55 73 100 135 192 246 Modulation Frequency [Hz] 333 Figure 2: AMS patterns generated from a voiced speech segment (left), and from speech simulating noise (right). Each AMS pattern represents a 32 ms portion of the input signal. Bright and dark areas indicate high and low energies, respectively. estimation in terms of mean deviation between the actual and the estimated SNR in each frame, for each frequency channel, was determined with "unknown" test data (36 min of noisy speech). The average deviation across all frequency channels was 5.4 dB, with a decrease of accuracy towards higher frequency channels. Sub-band SNR estimates are utilized for noise suppression by attenuating frequency channels according to their local SNR. The gain function which was applied is given by Uk = (SNRk / (SNRk + 1))X , where k denotes the frequency channel, SNR the signalto-noise ratio on a linear scale, and x is an exponent which controls the strength of the attenuation, and which was set to 1.5 for the experiments described below. Noise suppression based on AMS-derived SNR estimation is performed in the FFTdomain. The input signal is segmented into overlapping frames with a window length of 32 ms, and a shift of 16 ms is applied, i.e., each window corresponds to one AMS analysis frame. The FFT is computed in every window. The magnitude in each frequency bin is multiplied by the corresponding gain computed from the AMS-based SNR estimation. The gain in frequency bins which are not covered by the center frequencies from the SNR estimation is linearly interpolated from neighboring estimation frequencies . The phase of the input signal is unchanged and applied to the attenuated magnitude spectrum. An inverse FFT is computed, and the enhanced speech is attained by overlapping and adding. 2.2 Auditory-based ASR feature extraction The front end which is used in the recognition system is based on a quantitative model of the "effective" peripheral auditory processing. The model simulates both spectral and temporal properties of sound processing in the auditory system which were found in psychoacoustical and physiological experiments. 
2.2 Auditory-based ASR feature extraction

The front end which is used in the recognition system is based on a quantitative model of the "effective" peripheral auditory processing. The model simulates both spectral and temporal properties of sound processing in the auditory system which were found in psychoacoustical and physiological experiments. The model was originally developed for describing human performance in typical psychoacoustical spectral and temporal masking experiments, e.g., predicting the thresholds in backward, simultaneous, and forward-masking experiments [12, 13]. The main processing stages of the auditory model are gammatone filtering, envelope extraction in each frequency channel, adaptive amplitude compression, and low-pass filtering of the envelope in each band. The adaptive compression stage compresses steady-state portions of the input signal logarithmically. Changes like onsets or offsets, in contrast, are transformed linearly. A detailed description of the auditory-based front end is given in [2].

2.3 Neural network recognizer

For scoring of the input features, a locally recurrent neural network (LRNN) is employed with three layers of neurons (150 input, 289 hidden, and 10 output neurons). Hidden layer neurons have recurrent connections to their 24 nearest neighbours. The input matrix consists of 5 times the auditory model feature vector with 30 elements, glued together in order to allow the network to memorize a time sequence of input matrices. The network was trained using the backpropagation-through-time algorithm with 200 iterations (see [7] for a detailed description of the recognizer).

3 Recognition experiments

3.1 Setup

The speech material for training of the word models and scoring was taken from the ZIFKOM database of Deutsche Telekom AG. Each German digit was spoken once by 200 different speakers (100 males, 100 females). The recording sessions took place in soundproof booths or quiet offices. The speech material was sampled at 16 kHz. Three different types of noise were added to the speech material at different signal-to-noise ratios before feature extraction: a) white Gaussian noise, b) speech-simulating noise, which is characterized by a long-term speech spectrum and amplitude modulations which reflect an uncorrelated superposition of 6 speakers, and c) background noise recorded in a printing room, which strongly fluctuates in both amplitude and spectral shape. The background noises were added to the utterances with signal-to-noise ratios ranging from 20 to -10 dB. The word models were trained with features from 100 undisturbed and unprocessed utterances of each digit. Features for testing were calculated from another 100 utterances of each digit which were distorted by additive noise before preprocessing. The recognition rates were measured without noise suppression and with noise suppression as described in Section 2.1. For comparison, the recognition rates were measured with noise suppression based on Spectral Subtraction including residual noise reduction [3] before feature extraction. Two methods for noise estimation were applied. In the first method, speech pauses in the noisy signals were detected using Voice Activity Detection (VAD) [14]. The noise measure was updated in speech pauses using a low-pass filter with a time constant of 40 ms. In the second method, the noise spectrum was measured in speech pauses which were detected from the clean utterances using an energy criterion (thus, perfect speech pause information is provided, which is not available in real applications).
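For completeness, a sketch of how noise can be scaled to produce a mixture at a prescribed global SNR, as in the setup above; the paper does not specify the exact scaling procedure, so this is an assumption.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech/noise mixture has the desired
    global SNR in dB (a sketch of the experimental setup)."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_ratio = 10.0 ** (snr_db / 10.0)   # linear power ratio
    scale = np.sqrt(p_speech / (p_noise * target_ratio))
    return speech + scale * noise
```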
3.2 Results

The speaker-independent isolated-digit recognition rates which were obtained in the experiments are plotted in Fig. 3 for three types of background noise as a function of the SNR. In all tested noises, noise suppression with the proposed algorithm increases the recognition rate in comparison with the unprocessed data and with Spectral Subtraction with VAD-based noise measurement. Spectral Subtraction with perfect speech pause detection allows for higher recognition rates than the AMS-based approach in stationary white noise. Here, the noise measure for Spectral Subtraction is very accurate during speech activity and allows for effective noise removal. AMS-based noise suppression estimates the SNR in every analysis frame, and no a priori information on speech-free segments is provided to the algorithm. In speech-simulating noise, which fluctuates in level but not in spectral shape, Spectral Subtraction with perfect speech pause detection works slightly better than AMS-based noise suppression. In printing room noise, which fluctuates in both level and spectrum, the AMS-based approach yields the best results. Here, Spectral Subtraction even degrades the recognition rates at some SNRs, compared to the unprocessed data. The noise measure from VAD-based or perfect speech pause detection cannot be updated while speech is active. Thus, an incorrect spectrum is subtracted and leads to artifacts and degraded recognition performance. In clean speech, recognition rates of 99.5% for unprocessed speech, 99.1% after Spectral Subtraction, and 98.9% after AMS-based noise suppression were obtained.

[Figure 3: Speaker-independent, isolated-digit recognition rates for three types of noise (white noise, speech-simulating noise, printing room noise) as a function of the SNR, without noise suppression (no algo), with AMS-based noise suppression, with Spectral Subtraction with VAD-based noise measurement (SS VAD), and with Spectral Subtraction with perfect speech pause information (SS perf).]

4 Discussion

The proposed neurophysiologically-motivated noise suppression scheme was shown to significantly improve digit recognition in noise in comparison with unprocessed data and with Spectral Subtraction using VAD-based noise measures. A perfect speech pause detection (which is not available yet in real systems) allows for a reliable estimation of the noise floor in stationary noise. In non-stationary noise, however, the AMS pattern-based signal classification and noise suppression is advantageous, as it does not depend on speech pause detection, and no assumption is necessary about the noise being stationary while speech is active. Spectral Subtraction as described in [3] produces musical tones, i.e., fast fluctuating spectral peaks.
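For reference, a schematic sketch of magnitude spectral subtraction in the spirit of [3]. The flooring constant and the omission of the residual-noise-reduction stage are our simplifications, so this is not the exact comparison system used in the experiments.

```python
import numpy as np

def spectral_subtraction(frame_spec, noise_mag, floor=0.01):
    """Subtract a noise magnitude estimate from one frame's spectrum,
    keeping the noisy phase (a simplified sketch in the spirit of [3])."""
    mag = np.abs(frame_spec)
    phase = np.angle(frame_spec)
    clean_mag = np.maximum(mag - noise_mag, floor * mag)  # half-wave rectify
    return clean_mag * np.exp(1j * phase)
```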
The neurophysiologically-based noise suppression scheme outlined in this paper does not produce such fast fluctuating artifacts. In general, a good quality of speech is maintained. The choice of the attenuation exponent x has only little impact on the quality of speech at favourable SNRs. With decreasing SNR, however, there is a tradeoff between the amount of noise suppression and distortions of the speech. A typical distortion of speech at poor signal-to-noise ratios is an unnatural spectral "coloring", rather than fast fluctuating distortions. In informal tests, most listeners did not have the impression that the algorithm improves speech intelligibility, but clearly preferred the processed signal over the unprocessed one, as the background noise was significantly suppressed without annoying artifacts. Clean speech is almost perfectly preserved after processing. The performance and characteristics of the algorithm of course strongly depend on the training data, as only little knowledge about the differences between speech and noise is "hard wired".

Acknowledgments

We thank Klaus Kasper and Herbert Reininger from the Institut für Angewandte Physik, Universität Frankfurt/M., for supplying us with their LRNN implementation.

References

[1] Hermansky, H. and Morgan, N. (1994). RASTA processing of speech. IEEE Trans. Speech Audio Processing 2(4), pp. 578-589.
[2] Tchorz, J. and Kollmeier, B. (1999). A model of auditory perception as front end for automatic speech recognition. J. Acoust. Soc. Am. 106, pp. 2040-2050.
[3] Boll, S. (1979). Suppression of acoustic noise in speech using spectral subtraction. IEEE Trans. Acoust., Speech, Signal Processing 27(2), pp. 113-120.
[4] Ephraim, Y. and Malah, D. (1984). Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator. IEEE Trans. Acoust., Speech, Signal Processing 32(6), pp. 1109-1121.
[5] Langner, G., Sams, M., Heil, P., and Schulze, H. (1997). Frequency and periodicity are represented in orthogonal maps in the human auditory cortex: evidence from magnetoencephalography. J. Comp. Physiol. A 181, pp. 665-676.
[6] Kollmeier, B. and Koch, R. (1994). Speech enhancement based on physiological and psychoacoustical models of modulation perception and binaural interaction. J. Acoust. Soc. Am. 95, pp. 1593-1602.
[7] Kasper, K., Reininger, H., Wolf, D., and Wüst, H. (1995). A speech recognizer with low complexity based on RNN. In: Neural Networks for Signal Processing V, Proc. of the IEEE workshop, Cambridge (MA), pp. 272-281.
[8] Kasper, K., Reininger, H., and Wolf, D. (1997). Exploiting the potential of auditory preprocessing for robust speech recognition by locally recurrent neural networks. Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP) 2, pp. 1223-1227.
[9] Kleinschmidt, M., Tchorz, J., and Kollmeier, B. (2000). Combining speech enhancement and auditory feature extraction for robust speech recognition. Speech Communication, special issue on robust ASR (accepted).
[10] Ewert, S. and Dau, T. (1999). Frequency selectivity in amplitude-modulation processing. J. Acoust. Soc. Am. (submitted).
[11] Tchorz, J. and Kollmeier, B. (2000). Estimation of the signal-to-noise ratio with amplitude modulation spectrograms. Speech Communication (submitted).
[12] Dau, T., Püschel, D., and Kohlrausch, A. (1996). A quantitative model of the "effective" signal processing in the auditory system: II. Simulations and measurements. J. Acoust. Soc. Am. 99, pp. 3623-3631.
[13] Dau, T., Kollmeier, B., and Kohlrausch, A. (1997). Modeling auditory processing of amplitude modulation: I. Modulation detection and masking with narrowband carriers. J. Acoust. Soc. Am. 102, pp. 2892-2905.
[14] Recommendation ITU-T G.729 Annex B, 1996.
Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

Thomas Natschläger & Wolfgang Maass
Institute for Theoretical Computer Science, Technische Universität Graz, Austria
{tnatschl,maass}@igi.tu-graz.ac.at

Eduardo D. Sontag
Dept. of Mathematics, Rutgers University, New Brunswick, NJ 08903, USA
sontag@hilbert.rutgers.edu

Anthony Zador
Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724
zador@cshl.org

Abstract

Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their "weight" changes on a short time scale by several hundred percent in dependence of the past input to the synapse. In this article we explore the consequences that these synaptic dynamics entail for the computational power of feedforward neural networks. We show that gradient descent suffices to approximate a given (quadratic) filter by a rather small neural system with dynamic synapses. We also compare our network model to artificial neural networks designed for time series processing. Our numerical results are complemented by theoretical analysis which shows that even with just a single hidden layer such networks can approximate a surprisingly large class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics.

1 Introduction

More than two decades of research on artificial neural networks has emphasized the central role of synapses in neural computation. In a conventional artificial neural network, all units ("neurons") are assumed to be identical, so that the computation is completely specified by the synaptic "weights," i.e., by the strengths of the connections between the units. Synapses in common artificial neural network models are static: the value $W_{ij}$ of a synaptic weight is assumed to change only during "learning". In contrast to that, the "weight" $w_{ij}(t)$ of a biological synapse at time t is known to be strongly dependent on the inputs $x_j(t - \tau)$ that this synapse has received from the presynaptic neuron j at previous time steps $t - \tau$, see e.g. [1]. We will focus in this article on mean-field models for populations of neurons connected by dynamic synapses.

[Figure 1 (panel A: pure facilitation; panel B: pure depression; panel C: facilitation and depression): A dynamic synapse can produce quite different outputs for the same input. The response of a single synapse to a step increase in input activity applied at time step 0 is compared for three different parameter settings.]

Several models for single synapses have been proposed for the dynamic changes in synaptic efficacy. In [2] the model of [3] is extended to populations of neurons, where the current synaptic efficacy $w_{ij}(t)$ between a population j and a population i at time t is modeled as a product of a facilitation term $u_{ij}(t)$ and a depression term $d_{ij}(t)$, scaled by the factor $W_{ij}$. We consider a time-discrete version of this model, defined as follows:

$$w_{ij}(t) = W_{ij} \cdot u_{ij}(t) \cdot d_{ij}(t) \qquad (1)$$
$$f_{ij}(t+1) = f_{ij}(t) - \frac{f_{ij}(t)}{F_{ij}} + U_{ij} \cdot (1 - f_{ij}(t)) \cdot x_j(t) \qquad (2)$$
$$d_{ij}(t+1) = d_{ij}(t) + \frac{1 - d_{ij}(t)}{D_{ij}} - u_{ij}(t) \cdot d_{ij}(t) \cdot x_j(t) \qquad (3)$$
$$u_{ij}(t) = f_{ij}(t) \cdot (1 - U_{ij}) + U_{ij} \qquad (4)$$

with $d_{ij}(0) = 1$ and $f_{ij}(0) = 0$. Equation (2) models facilitation (with time constant $F_{ij}$), whereas equation (3) models the combined effects of synaptic depression (with time constant $D_{ij}$) and facilitation. Depending on the values of the characteristic parameters $U_{ij}$, $D_{ij}$, $F_{ij}$, a synaptic connection (ij) maps an input function $x_j(t)$ into the corresponding time-varying synaptic output $w_{ij}(t) \cdot x_j(t)$. The same input $x_j(t)$ can yield markedly different outputs $w_{ij}(t) \cdot x_j(t)$ for different values of the characteristic parameters $U_{ij}$, $D_{ij}$, $F_{ij}$. Fig. 1 compares the output for three different sets of values for the parameters $U_{ij}$, $D_{ij}$, $F_{ij}$. These examples illustrate just three of the range of input-output behaviors that a single synapse can achieve.
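A direct simulation of Eqs. (1)-(4) for a single synapse reproduces the kind of step responses shown in Fig. 1. The sketch below follows our reconstruction of the (garbled) equations above; the parameter values are illustrative assumptions, not the settings used for the figure.

```python
import numpy as np

def simulate_synapse(x, W, U, D, F):
    """Simulate one dynamic synapse, Eqs. (1)-(4) (a sketch).

    x : input activity x(t), t = 0, 1, ...
    Returns the time-varying efficacy w(t) = W * u(t) * d(t).
    """
    f, d = 0.0, 1.0                    # f(0) = 0, d(0) = 1
    w = np.zeros(len(x))
    for t, xt in enumerate(x):
        u = f * (1.0 - U) + U          # Eq. (4): facilitation state
        w[t] = W * u * d               # Eq. (1): current efficacy
        f += -f / F + U * (1.0 - f) * xt         # Eq. (2)
        d += (1.0 - d) / D - u * d * xt          # Eq. (3)
    return w

# Step input as in Fig. 1: activity switched on at time step 0
x = np.ones(200)
w = simulate_synapse(x, W=1.0, U=0.2, D=100.0, F=50.0)  # assumed parameters
```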
In this article we will consider feedforward networks coupled by dynamic synapses. One should think of the computational units in such a network as populations of spiking neurons. We refer to such networks as "dynamic networks"; see Fig. 2 for details.

[Figure 2: The dynamic network model (input, hidden units, output, coupled by dynamic synapses). The output $x_i(t)$ of the i-th unit is given by $x_i(t) = \sigma(\sum_j w_{ij}(t) \cdot x_j(t))$, where $\sigma$ is either the sigmoid function $\sigma(u) = 1/(1 + \exp(-u))$ (in the hidden layers) or just the identity function $\sigma(u) = u$ (in the output layer), and $w_{ij}(t)$ is modeled according to Eqs. (1) to (4).]

In Sections 2 and 3 we demonstrate (by employing gradient descent to find appropriate values for the parameters $U_{ij}$, $D_{ij}$, $F_{ij}$ and $W_{ij}$) that even small dynamic networks can compute complex quadratic filters. In Section 4 we address the question of which synaptic parameters are important for a dynamic network to learn a given filter. In Section 5 we give a precise mathematical characterization of the computational power of such dynamic networks.

2 Learning Arbitrary Quadratic Filters by Dynamic Networks

In order to analyze which filters can be approximated by small dynamic networks, we investigate the task of learning a quadratic filter Q randomly chosen from a class $Q_m$. The class $Q_m$ consists of all quadratic filters Q whose output $(Qx)(t)$ in response to the input time series $x(t)$ is defined by some symmetric m x m matrix $H_Q = [h_{kl}]$ of filter coefficients $h_{kl} \in \mathbb{R}$, $k = 1, \ldots, m$, $l = 1, \ldots, m$, through the equation

$$(Qx)(t) = \sum_{k=1}^{m} \sum_{l=1}^{m} h_{kl}\, x(t-k)\, x(t-l).$$

An example of the input and output for one choice of quadratic parameters (m = 10) is shown in Figs. 3B and 3C, respectively.
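For illustration, a sketch of such a quadratic target filter, including the random generation of $H_Q$ described in the caption of Fig. 3 (the seed and input signal are arbitrary assumptions).

```python
import numpy as np

def quadratic_filter(x, H):
    """Output (Qx)(t) = sum_{k,l} H[k-1, l-1] x(t-k) x(t-l) for a
    symmetric m x m coefficient matrix H (a sketch)."""
    m = H.shape[0]
    y = np.zeros(len(x))
    for t in range(m, len(x)):
        past = x[t - 1::-1][:m]        # x(t-1), ..., x(t-m)
        y[t] = past @ H @ past
    return y

# Random target filter as in Fig. 3: subtract mu/2 from exponentially
# distributed coefficients with mean mu = 3, then symmetrize.
rng = np.random.default_rng(0)
mu, m = 3.0, 10
H = rng.exponential(mu, size=(m, m)) - mu / 2.0
H = (H + H.T) / 2.0
y = quadratic_filter(rng.uniform(size=400), H)
```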
We view such a filter Q as an example of the kinds of complex transformations that are important to an organism's survival, such as those required for motor control and the processing of time-varying sensory inputs. For example, the spectro-temporal receptive field of a neuron in the auditory cortex [4] reflects some complex transformation of sound pressure to neuronal activity. The real transformations actually required may be very complex, but the simple filter Q provides a useful starting point for assessing the capacity of this architecture to transform one time-varying signal into another.

Can a network of units coupled by dynamic synapses implement the filter Q? We tested the approximation capabilities of a rather small dynamic network with just 10 hidden units (5 excitatory and 5 inhibitory ones) and one output (Fig. 3A). The dynamics of inhibitory synapses is described by the same model as that for excitatory synapses. For any particular temporal pattern applied at the input and any particular choice of the synaptic parameters, this network generates a temporal pattern as output. This output can be thought of, for example, as the activity of a particular population of neurons in the cortex, and the target function as the time series generated for the same input by some unknown quadratic filter Q. The synaptic parameters $W_{ij}$, $D_{ij}$, $F_{ij}$ and $U_{ij}$ are chosen so that, for each input in the training set, the network minimizes the mean-square error $E[z, z_Q] = \frac{1}{T} \sum_{t=0}^{T-1} (z(t) - z_Q(t))^2$ between its output $z(t)$ and the desired output $z_Q(t)$ specified by the filter Q. To achieve this minimization, we used a conjugate gradient algorithm.¹ The training inputs were random signals, an example of which is shown in Fig. 3B. The test inputs were drawn from the same random distribution as the training inputs, but were not actually used during training. This test of generalization ensured that the observed performance represented more than simple "memorization" of the training set.

Fig. 3C compares the network performance before and after training. Prior to training, the output is nearly flat, while after training the network output tracks the filter output closely ($E[z, z_Q] = 0.0032$). Fig. 3D shows the performance after training for different randomly chosen quadratic filters $Q \in Q_m$ for $m = 4, \ldots, 16$. Even for larger values of m, the relatively small network with 10 hidden units performs rather well. Note that a quadratic filter of dimension m has $m(m+1)/2$ free parameters, whereas the dynamic network has a constant number of 80 adjustable parameters. This shows clearly that dynamic synapses enable a small network to mimic a wide range of possible quadratic target filters.

¹ In order to apply such a conjugate gradient algorithm one has to calculate the partial derivatives $\partial E[z, z_Q]/\partial W_{ij}$, $\partial E[z, z_Q]/\partial U_{ij}$, $\partial E[z, z_Q]/\partial D_{ij}$, and $\partial E[z, z_Q]/\partial F_{ij}$ for all synapses (ij) in the network. For more details about conjugate gradient algorithms see e.g. [5].

[Figure 3: A network with units coupled by dynamic synapses can approximate randomly drawn quadratic filters. A: Network architecture. The network had one input unit, 10 hidden units (5 excitatory, 5 inhibitory), and one output unit; see Fig. 2 for details. B: One of the input patterns used in the training ensemble. For clarity, only a portion of the actual input is shown. C: Output of the network prior to training, with random initialization of the parameters, and the output of the dynamic network after learning. The target was the output of a quadratic filter $Q \in Q_{10}$. The filter coefficients $h_{kl}$ ($1 \le k, l \le 10$) were generated randomly by subtracting $\mu/2$ from a random number generated from an exponential distribution with mean $\mu = 3$. D: Performance after network training. For different sizes of $H_Q$ ($H_Q$ is a symmetric m x m matrix) we plotted the average performance (mse measured on a test set) over 20 different filters Q, i.e., 20 randomly generated matrices $H_Q$.]
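The authors trained with conjugate gradients using analytically derived parameter gradients. As a structural illustration only, the following sketch fits the parameters of a deliberately reduced single-synapse "network" to a quadratic target by generic numerical optimization with finite-difference gradients; it reuses simulate_synapse and quadratic_filter from the sketches above, and the toy architecture, bounds, and initial values are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def network_output(params, x):
    """Toy 'network': a single dynamic synapse followed by an identity
    output unit, z(t) = w(t) * x(t) (a deliberately reduced sketch)."""
    W, U, D, F = params
    return simulate_synapse(x, W, U, D, F) * x

def mse(params, x, z_target):
    z = network_output(params, x)
    return np.mean((z - z_target) ** 2)

rng = np.random.default_rng(1)
x = rng.uniform(size=300)
z_target = quadratic_filter(x, H)        # target from the sketch above
res = minimize(mse, x0=[1.0, 0.3, 50.0, 50.0], args=(x, z_target),
               method="L-BFGS-B",
               bounds=[(0, 10), (0.01, 0.99), (1, 500), (1, 500)])
```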
3 Comparison with the model of Back and Tsoi

Our dynamic network model is not the first to incorporate temporal dynamics via dynamic synapses. Perhaps the earliest suggestion of a role for synaptic dynamics in network computation was by [7]. More recently, a number of networks have been proposed in which synapses implement linear filters; in particular [6]. To assess the performance of our network model in relation to the model proposed in [6], we have analyzed the performance of our dynamic network model on the same system identification task that was employed as a benchmark task in [6]. The goal of this task is to learn a filter F with $(Fx)(t) = \sin(u(t))$, where $u(t)$ is the output of a linear filter applied to the input time series $x(t)$.² The result is summarized in Fig. 4. It can clearly be seen that our network model (see Fig. 3A for the network architecture) is able to learn this particular filter. The mean square error (mse) on the test data is 0.0010, which is slightly smaller than the mse of 0.0013 reported in [6]. Note that the network Back and Tsoi used to learn the task had 130 adjustable parameters (13 parameters per IIR synapse, 10 hidden units), whereas our network model had only 80 adjustable parameters (all parameters $U_{ij}$, $F_{ij}$, $D_{ij}$ and $W_{ij}$ were adjusted during learning).

² $u(t)$ is the solution to the difference equation $u(t) - 1.99\,u(t-1) + 1.572\,u(t-2) - 0.4583\,u(t-3) = 0.0154\,x(t) + 0.0462\,x(t-1) + 0.0462\,x(t-2) + 0.0154\,x(t-3)$. Hence, $u(t)$ is the output of a linear filter applied to the input $x(t)$.

[Figure 4: Performance of our model on the system identification task used in [6]. The network architecture is the same as in Fig. 3. A: One of the input patterns used in the training ensemble. B: Output of the network after learning and the target. C: Comparison of the mean square error (in units of $10^{-3}$) achieved on test data by the model of Back and Tsoi (BT) and by the dynamic network (DN). D: Comparison of the number of adjustable parameters. The network model of Back and Tsoi (BT) utilizes slightly more adjustable parameters than the dynamic network (DN).]

[Figure 5: Impact of different synaptic parameters on the learning capabilities of a dynamic network. The size of a square (the "impact") is proportional to the inverse of the mean squared error averaged over N trials. A: 1- and 2-tuples of parameters W, U, D, F. In each trial (N = 100) a different quadratic filter matrix $H_Q$ (m = 6) was randomly generated as described in Fig. 3. Along the diagonal one can see the impact of a single parameter, whereas the off-diagonal elements (which are symmetric) represent the impact of changing pairs of parameters. B: The impact of subsets of size three (w/o W, w/o U, w/o D, w/o F), where the labels indicate which parameter is not included. C: Same interpretation as for panel A, but the results shown (N = 20) are for the filter used in [6]. D: Same interpretation as for panel B, but the results shown (N = 20) are for the same filter as in panel C.]

This shows that a very simple feedforward network with biologically realistic synaptic dynamics yields performance comparable to that of artificial networks that were previously designed to yield good performance in the time series domain without any claims of biological realism.
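The benchmark target of footnote 2 is straightforward to generate with a standard IIR filter routine; a sketch follows (the input distribution is an assumption).

```python
import numpy as np
from scipy.signal import lfilter

def back_tsoi_target(x):
    """Target (Fx)(t) = sin(u(t)) of the benchmark in [6], with u(t)
    defined by the difference equation in footnote 2 (a sketch)."""
    b = [0.0154, 0.0462, 0.0462, 0.0154]   # input coefficients
    a = [1.0, -1.99, 1.572, -0.4583]       # recursive coefficients
    u = lfilter(b, a, x)
    return np.sin(u)

x = np.random.default_rng(2).uniform(-1.0, 1.0, size=500)
y = back_tsoi_target(x)
```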
4 Which Parameters Matter?

It remains an open experimental question which synaptic parameters are subject to use-dependent plasticity, and under what conditions. For example, long-term potentiation appears to change synaptic dynamics between pairs of layer 5 cortical neurons [8] but not in the hippocampus [9]. We therefore wondered whether plasticity in the synaptic dynamics is essential for a dynamic network to be able to learn a particular target filter. To address this question, we compared network performance when different parameter subsets were optimized using the conjugate gradient algorithm, while the other parameters were held fixed. In all experiments, the fixed parameters were chosen to ensure heterogeneity in presynaptic dynamics. Fig. 5 shows that changing only the postsynaptic parameter W has comparable impact to changing only the presynaptic parameters U or D, whereas changing only F has little impact on the dynamics of these networks (see the diagonals of Fig. 5A and Fig. 5C). However, to achieve good performance one has to change at least two different types of parameters, such as {W, U} or {W, D} (all other pairs yield worse performance). Hence, neither plasticity in the presynaptic dynamics (U, D, F) alone nor plasticity of the postsynaptic efficacy (W) alone was sufficient to achieve good performance in this model.

5 A Universal Approximation Theorem for Dynamic Networks

In the preceding sections we presented empirical evidence for the approximation capabilities of our dynamic network model for computations in the time-series domain. This raises the question of what the theoretical limits of their approximation capabilities are. The rigorous theoretical result presented in this section shows that there are basically no significant a priori limits. Furthermore, in spite of the rather complicated system of equations that defines dynamic networks, one can give a precise mathematical characterization of the class of filters that can be approximated by them. This characterization involves the following basic concepts. An arbitrary filter F is called time invariant if a shift of the input functions by a constant t₀ just causes a shift of the output function by the same constant t₀. Another essential property of filters is fading memory. A filter F has fading memory if and only if the value of (F x̲)(0) can be approximated arbitrarily closely by the value of (F v̲)(0) for functions v̲ that approximate the functions x̲ on sufficiently long bounded intervals [−T, 0]. Interesting examples of linear and nonlinear time-invariant filters with fading memory can be generated with the help of representations of the form

(Fx)(t) = ∫₀^∞ … ∫₀^∞ x(t−τ₁)·…·x(t−τ_k) h(τ₁,…,τ_k) dτ₁…dτ_k

for measurable and essentially bounded functions x : ℝ → ℝ (with h ∈ L¹). One refers to such an integral as a Volterra term of order k. Note that for k = 1 it yields the usual representation for a linear time-invariant filter. The class of filters that can be represented by Volterra series, i.e., by finite or infinite sums of Volterra terms of arbitrary order, has been investigated for quite some time in neurobiology and engineering.
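A discrete-time sketch may help make the Volterra-term formula above concrete. The following Python fragment approximates an order-k Volterra term by a Riemann sum over a finite delay window; the particular kernel h is a hypothetical example chosen only for illustration.

```python
import itertools
import numpy as np

def volterra_term(x, h, k, m):
    # Discrete (Riemann-sum) approximation of an order-k Volterra term:
    # (Fx)(t) ~ sum over (tau_1..tau_k) in {0..m-1}^k of
    #           x(t-tau_1) * ... * x(t-tau_k) * h(tau_1,..,tau_k).
    T = len(x)
    xp = np.concatenate([np.zeros(m - 1), x])   # x(t) sits at index t+m-1
    z = np.zeros(T)
    for taus in itertools.product(range(m), repeat=k):
        prod = np.ones(T)
        for tau in taus:
            prod *= xp[m - 1 - tau : m - 1 - tau + T]
        z += h(taus) * prod
    return z

# Hypothetical integrable kernel: exponentially decaying in the total delay.
h = lambda taus: np.exp(-sum(taus) / 4.0)
x = np.random.default_rng(0).uniform(0.0, 1.0, 100)
z3 = volterra_term(x, h, k=3, m=8)   # a third-order Volterra term
```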
Theorem 1  Assume that X is the class of functions from ℝ into [B₀, B₁] which satisfy |x(t) − x(s)| ≤ B₂·|t − s| for all t, s ∈ ℝ, where B₀, B₁, B₂ are arbitrary real-valued constants with 0 < B₀ < B₁ and 0 < B₂. Let F be an arbitrary filter that maps vectors of functions x̲ = (x₁, …, x_n) ∈ Xⁿ into functions from ℝ into ℝ. Then the following are equivalent: (a) F can be approximated by dynamic networks N defined in Fig. 2 (i.e., for any ε > 0 there exists such a network N such that |(F x̲)(t) − (N x̲)(t)| < ε for all x̲ ∈ Xⁿ and all t ∈ ℝ); (b) F can be approximated by dynamic networks (see Fig. 2) with just a single layer of sigmoidal neurons; (c) F is time invariant and has fading memory; (d) F can be approximated by a sequence of (finite or infinite) Volterra series.

The proof of Theorem 1 relies on the Stone–Weierstrass Theorem, and is contained as the proof of Theorem 3.4 in [10]. The universal approximation result contained in Theorem 1 turns out to be rather robust with regard to changes in the definition of a dynamic network. Dynamic networks with just one layer of dynamic synapses and one subsequent layer of sigmoidal gates can approximate the same class of filters as dynamic networks with an arbitrary number of layers of dynamic synapses and sigmoidal neurons. It can also be shown that Theorem 1 remains valid if one considers networks which have depressing synapses only, or if one uses the model for synaptic dynamics proposed in [1].

6 Discussion

Our central hypothesis is that rapid changes in synaptic strength, mediated by mechanisms such as facilitation and depression, are an integral part of neural processing. We have analyzed the computational power of such dynamic networks, which represent a new paradigm for neural computation on time series that is based on biologically realistic models for synaptic dynamics [11]. Our analytical results show that the class of nonlinear filters that can be approximated by dynamic networks, even with just a single hidden layer of sigmoidal neurons, is remarkably rich. It contains every time-invariant filter with fading memory, hence arguably every filter that is potentially useful for a biological organism. The computer simulations we performed show that rather small dynamic networks are not only able to perform interesting computations on time series, but that their performance is comparable to that of previously considered artificial neural networks that were designed for the purpose of yielding efficient processing of temporal signals. We have tested dynamic networks on tasks such as the learning of a randomly chosen quadratic filter, as well as on the learning task used in [6], to illustrate the potential of this architecture.

References

[1] J. A. Varela, K. Sen, J. Gibson, J. Fost, L. F. Abbott, and S. B. Nelson. A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. J. Neurosci., 17:220-4, 1997.
[2] M. V. Tsodyks, K. Pawelzik, and H. Markram. Neural networks with dynamic synapses. Neural Computation, 10:821-835, 1998.
[3] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. PNAS, 95:5323-5328, 1998.
[4] R. C. deCharms and M. M. Merzenich. Optimizing sound features for cortical neurons. Science, 280:1439-43, 1998.
[5] John Hertz, Anders Krogh, and Richard Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, 1991.
[6] A. D. Back and A. C. Tsoi. A simplified gradient algorithm for IIR synapse multilayer perceptrons. Neural Computation, 5:456-462, 1993.
[7] W. A. Little and G. L. Shaw. A statistical theory of short and long term memory. Behavioural Biology, 14:115-33, 1975.
[8] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature, 382:807-10, 1996.
[9] D. K. Selig, R. A. Nicoll, and R. C. Malenka. Hippocampal long-term potentiation preserves the fidelity of postsynaptic responses to presynaptic bursts. J. Neurosci., 19:1236-46, 1999.
[10] W. Maass and E. D. Sontag. Neural systems as nonlinear filters. Neural Computation, 12(8):1743-1772, 2000.
[11] A. M. Zador. The basic unit of computation. Nature Neuroscience, 3(Supp):1167, 2000.
989
1,904
Competition and Arbors in Ocular Dominance

Peter Dayan
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London, England, WC1N 3AR. [email protected]

Abstract

Hebbian and competitive Hebbian algorithms are almost ubiquitous in modeling pattern formation in cortical development. We analyse in theoretical detail a particular model (adapted from Piepenbrock & Obermayer, 1999) for the development of 1d stripe-like patterns, which places competitive and interactive cortical influences, and free and restricted initial arborisation, onto a common footing.

1 Introduction

Cats, many species of monkeys, and humans exhibit ocular dominance stripes, which are alternating areas of primary visual cortex devoted to input from (the thalamic relay associated with) just one or the other eye (see Erwin et al, 1995; Miller, 1996; Swindale, 1996 for reviews of theory and data). These well-known fingerprint patterns have been a seductive target for models of cortical pattern formation because of the mix of competition and cooperation they suggest. A wealth of synaptic adaptation algorithms has been suggested to account for them (and also for the concomitant refinement of the topography of the map between the eyes and the cortex), many of which are based on forms of Hebbian learning. Critical issues for the models are the degree of correlation between inputs from the eyes, the nature of the initial arborisation of the axonal inputs, the degree and form of cortical competition, and the nature of synaptic saturation (preventing weights from changing sign or getting too large) and normalisation (allowing cortical and/or thalamic cells to support only a certain total synaptic weight). Different models show different effects of these parameters as to whether ocular dominance should form at all, and, if it does, then what determines the widths of the stripes, which is the main experimental observable. Although particular classes of models excite fervid criticism from the experimental community, it is to be hoped that the general principles of competitive and cooperative pattern formation that underlie them will remain relevant. To this end we seek models in which we can understand the interactions amongst the various issues above. Piepenbrock & Obermayer (1999) suggested an interesting model in which varying a single parameter spans a spectrum from cortical competition to cooperation. However, the nature of competition in their model makes it hard to predict the outcome of adaptation completely, except in some special cases. In this paper, we suggest a slightly different model of competition which makes the analysis tractable, and simultaneously generalise the model to consider an additional spectrum between flat and peaked arborisation.

2 The Model

Figure 1 depicts our model. It is based on the competitive model of Piepenbrock & Obermayer (1999), who developed it in order to explore a continuum between competitive and linear cortical interactions.

Figure 1: Competitive ocular dominance model. A) Left (L) and right (R) input units (with activities u^L(b) and u^R(b) at the same location b in input space) project through weights W^L(a,b) and W^R(a,b) and a restricted-topography arbor function A(a,b) (B) to an output layer, which is subject to lateral competitive interactions.
C) Stable weight patterns W(a,b) showing ocular dominance. D) (left) difference in the connections W⁻ = W^R − W^L from right and left eye; (right) sum of the difference across b, showing the net ocularity for each a. Here, σ_A = 0.2, σ_I = 0.08, σ_u = 0.075, β = 10, γ = 0.95, Ω = 3. There are N = 100 units in each input layer and the output layer. Circular (toroidal) boundary conditions are used, with b ∈ [0, 1).

We use a slightly different competition mechanism, and also extend the model with an arbor function (as in Miller et al, 1989). The model has two input layers (representing input from the thalamus from the left 'L' and right 'R' eyes), each containing N units, laid out in a single spatial dimension. These connect to an output layer (layer IV of area V1) with N units too, which is also laid out in a single spatial dimension. We use a continuum approximation, labeling the weights W^L(a,b) and W^R(a,b). An arbor function, A(a,b), represents the multiplicity of each such connection (an example is given in figure 1B). The total strengths of the connections from b to a are the products W^L(a,b)A(a,b) and W^R(a,b)A(a,b). Four characteristics define the model: the arbor function; the statistics of the input; the mapping from input to output; and the rule by which the weights change.

The arbor function A(a,b) specifies the basic topography of the map at the time that the pattern of synaptic growth is being established. We consider A(a,b) ∝ e^{−(a−b)²/2σ_A²}, where σ_A is a parameter that specifies its width (figure 1B). The two ends of the spectrum for the arbor are flat, when A(a,b) = α is constant (σ_A = ∞), and rigid or punctate, when A(a,b) ∝ δ(a−b) (σ_A = 0), so that input cells are mapped only to their topographically matched cells in the cortex.

The second component of the model is the input. Since the model is non-linear, pattern formation is a function of aspects of the input in addition to the two-point correlations between input units that drive development of standard, non-competitive, Hebbian models. We follow Piepenbrock & Obermayer (1999) and consider highly spatially simplified input activities at location b in the left (u^L(b)) and right (u^R(b)) projections, reflecting just a single Gaussian bump (of width σ_u) which is stronger, to the tune of γ, in (a randomly chosen) one of the input projections than the other:

u^L(b) = 0.5(1 + zγ)e^{−(b−ξ)²/2σ_u²},  u^R(b) = 0.5(1 − zγ)e^{−(b−ξ)²/2σ_u²}   (1)

where ξ ∈ [0,1) is the randomly chosen input location, z is −1 or 1 (with probability 0.5 each) and determines whether the input is more from the right or left projection, and 0 ≤ γ ≤ 1 governs the weakness of the correlations between the projections.

The third component of the model is the way that input activities and the weights conspire to form output activities. This happens in linear (l), competitive (c) and interactive (i) steps:

l:  v(a) = ∫db A(a,b) (W^L(a,b)u^L(b) + W^R(a,b)u^R(b))   (2)
c:  v^c(a) = (v(a))^β / ∫da′ (v(a′))^β,   i:  v^i(a) = ∫da′ I(a,a′)v^c(a′)   (3)

Weights, arbor, and input and output activities are all positive. In equation 3c, β ≥ 1 is a parameter governing the strength of competition between the cortical cells. As β → ∞, the activation process becomes more strongly competitive, ultimately having a winner-take-all effect as in the standard self-organising map. This form of competition makes it possible to perform analyses of pattern formation that are hard for the model of Piepenbrock & Obermayer (1999). A natural form for the cortical interactions of equation 3i is the purely positive Gaussian I(a,a′) = e^{−(a−a′)²/2σ_I²}.
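The three activation steps are easy to discretize. The following minimal Python sketch evaluates equations 2-3 on an N-point grid with the toroidal boundary conditions; the 1/N grid-spacing factors for the integrals are absorbed by the competitive normalisation, and the initial Gaussian weight width (0.1) is an arbitrary choice of ours, not a model prescription.

```python
import numpy as np

# Minimal discrete sketch of the linear (l), competitive (c) and interactive
# (i) steps of equations 2-3, on N grid points of the circle [0, 1).
N = 100
b = np.arange(N) / N
a = b.copy()

def circ_gauss(d, sigma):
    # Gaussian of circular distance, matching the toroidal boundary conditions.
    d = np.minimum(np.abs(d), 1.0 - np.abs(d))
    return np.exp(-d ** 2 / (2 * sigma ** 2))

A = circ_gauss(a[:, None] - b[None, :], 0.2)          # arbor, sigma_A = 0.2
I = circ_gauss(a[:, None] - a[None, :], 0.08)         # interactions, sigma_I = 0.08
WL = WR = circ_gauss(a[:, None] - b[None, :], 0.1)    # initial Gaussian weights

def output_activity(WL, WR, uL, uR, beta=10.0):
    v = (A * WL) @ uL + (A * WR) @ uR                 # linear step, eq. 2
    vc = v ** beta / np.sum(v ** beta)                # competitive step, eq. 3c
    return I @ vc                                     # interactive step, eq. 3i

# One input pattern (eq. 1): a bump at xi, stronger in the left eye (z = 1).
xi, z, gamma, sigma_u = 0.3, 1, 0.95, 0.075
uL = 0.5 * (1 + z * gamma) * circ_gauss(b - xi, sigma_u)
uR = 0.5 * (1 - z * gamma) * circ_gauss(b - xi, sigma_u)
vi = output_activity(WL, WR, uL, uR)
```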
The fourth component of the model is the weight adaptation rule, which involves the Hebbian correlation between input and output activities, averaged over the input patterns ξ, z:

W^L(a,b) → W^L(a,b) + ε(⟨v^i(a)u^L(b)⟩_{ξ,z} − Λ(a)W^L(a,b))   (4)

(similarly for W^R), where Λ(a) = Λ(a)(W^L, W^R) is chosen to enforce normalisation. The weights are constrained W(a,b) ∈ [0,1], and are also multiplicatively normalised so that ∫db A(a,b)(W^L(a,b) + W^R(a,b)) = Ω for all a. The initial values for the weights are W^{L,R} = w e^{−(a−b)²/2σ_w²} + η δW^{L,R}, where w is chosen to satisfy the normalisation constraints, η is small, and δW^L(a,b) and δW^R(a,b) are random perturbations constrained so that normalisation is still satisfied. Values of σ_w < ∞ can emerge as equilibrium values of the weights if there is sufficient competition (sufficiently large β) or a restricted arbor (σ_A < ∞).

3 Pattern Formation

We analyse pattern formation in the standard manner, finding the equilibrium points (which requires solving a non-linear equation), linearising about them, and finding which linear mode grows the fastest. By symmetry, the system separates into two modes, one involving the sum of the weight perturbations δW⁺ = δW^R + δW^L, which governs the precision of the topography of the final mapping, and one involving the difference δW⁻ = δW^R − δW^L, which governs ocular dominance. The development of ocular dominance requires that a mode of δW⁻(a,b) ≠ 0 grows, for which each output cell has weights of only one sign (either positive or negative). The stripe width is determined by changes in this sign across the output layer. Figures 1C and 1D show the sort of patterns for which we would like to account.

Equilibrium solution  The equilibrium values of the weights can be found by solving

⟨v^i(a)u^L(b)⟩_{ξ,z} = Λ⁺(a)W^L(a,b)   (5)

for the Λ⁺ determined such that the normalisation constraint ∫db W^L(a,b) + W^R(a,b) = Ω is satisfied for all a. v(a) is a non-linear function of the weights; however, the simple form of the inputs means that at least one set of equilibrium values of W^L(a,b) and W^R(a,b) are the same, W^L(a,b) = w e^{−(a−b)²/2σ_w²}, for a particular width σ_w that depends on I = 1/σ_I², A = 1/σ_A², U = 1/σ_u² and β according to a simple quadratic equation. We assume that w < 1, so the weights do not reach their upper saturating limit, and this implies that w = (Ω/2)√((A + W)/π). The quadratic equation governing the equilibrium width can be derived by postulating Gaussian weights, finding successively the values of v(a), v^c(a) and v^i(a) of equations 2 and 3, calculating ⟨v^i(a)u^L(b)⟩_{ξ,z}, and finding a consistency condition that W = 1/σ_w² must satisfy in order for W^L(a,b) → W^L(a,b) in equation 4. The result is

((β+1)I + βU)W² + (A((β+1)I + βU) − (β−1)UI)W − βAIU = 0   (6)
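A short numeric sketch of the consistency condition just derived: the following Python fragment solves the quadratic in equation 6 for its positive root and returns the equilibrium width σ_w = 1/√W. The coefficient layout follows our reconstruction of equation 6 from the text, so it should be read as a sketch under that assumption rather than a definitive implementation.

```python
import numpy as np

def equilibrium_width(beta, sigma_A, sigma_I, sigma_u):
    # Solve eq. 6 for W = 1/sigma_w^2, with I = 1/sigma_I^2, A = 1/sigma_A^2,
    # U = 1/sigma_u^2. c2 > 0 and c0 < 0, so exactly one root is positive
    # (the physically realisable solution W > 0).
    A, I, U = sigma_A ** -2, sigma_I ** -2, sigma_u ** -2
    c2 = (beta + 1) * I + beta * U
    c1 = A * ((beta + 1) * I + beta * U) - (beta - 1) * U * I
    c0 = -beta * A * I * U
    roots = np.roots([c2, c1, c0])
    W = max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
    return 1.0 / np.sqrt(W)   # sigma_w

# Parameters of figure 1.
print(equilibrium_width(beta=10, sigma_A=0.2, sigma_I=0.08, sigma_u=0.075))
```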
Figure 2: Log-log plots of the equilibrium values of σ_w in the case of multiplicative normalisation. Solid lines based on parameters as in figure 1 (σ_A = 0.2, σ_I = 0.08, σ_u = 0.075, β = 10). A) σ_w as a function of β for σ_A = 0.2 (solid), σ_A = 2.0 (dotted) and σ_A = 0.0001 (dashed). B) σ_w as a function of σ_A for β = 10 (solid), β = 1.25 (dashed) and β = 1.0 (dotted). C) σ_w as a function of σ_I. Other parameters as for the solid lines.

Figure 2 shows how the resulting physically realisable (W > 0) equilibrium value of σ_w depends on β, σ_A and σ_I, varying each in turn about a single set of values from figure 1. Figure 2A shows that the width rapidly asymptotes as β grows, and only gets large as the arbor function gets large, for β near 1. Figure 2B shows this in another way. For β = 1 (the dotted line), which quite closely parallels the non-competitive case of Miller et al (1989), σ_w grows roughly like the square root of σ_A as the arborisation gets flatter. For any β > 1, one equilibrium value of σ_w has a finite asymptote with σ_A. For absolutely flat topography (σ_A = ∞) and β > 1, there are actually two equilibrium values for σ_w: one with σ_w = ∞, ie flat weights; the other with σ_w taking values such as the asymptotic values for the dotted and solid lines in figure 2B.

The sum mode  The update equation for (normalised) perturbations to the sum mode is

δW⁺(a,b) → (1 − εΛ⁺)δW⁺(a,b) + (εβ/2)∫∫da₁db₁ O(a,b,a₁,b₁)δW⁺(a₁,b₁) − εΛ′(a)W⁺(a,b)   (7)

where the operator O = O¹ − O² is defined by averaging over ξ with z = 1, γ = 1:

O¹(a,b,a₁,b₁) = ⟨∫da₂ I(a,a₂) v^c(a₂) (δ(a₂−a₁)/v(a₁)) A(a₁,b₁) u^R(b₁) u^R(b)⟩   (8)
O²(a,b,a₁,b₁) = ⟨∫da₂ I(a,a₂) v^c(a₂) (v^c(a₁)/v(a₁)) A(a₁,b₁) u^R(b₁) u^R(b)⟩   (9)

where, for convenience, we have hidden the dependence of v(a) and v^c(a) on ξ and z. Here, the values of Λ⁺ and

Λ′(a) = β ∫∫∫db da₁ db₁ A(a,b) O(a,b,a₁,b₁) δW⁺(a₁,b₁) / 2Ω   (10)

come from the normalisation condition. The value of Λ⁺ is determined by W⁺(a,b) and not by δW⁺(a₁,b₁). Except in the special case that σ_A = ∞, the term εΛ′(a)W⁺(a,b) generally keeps the equilibrium solution stable. We consider the full eigenfunctions of O(a,b,a₁,b₁) below. However, the case that Piepenbrock & Obermayer (1999) studied, of a flat arbor function (σ_A = ∞), turns out to be special, admitting two equilibrium solutions, one flat, one with topography, whose stability depends on β. For σ_A < ∞, the only Gaussian equilibrium solution for the weights has a refined topography (as one might expect), and this is stable. This width depends on the parameters in a way shown in equation 6 and figure 2, in particular reaching a non-zero asymptote even as β gets very large.

The difference mode  The sum mode controls the refinement of topography, whereas the difference mode controls the development and nature of ocular dominance. The equilibrium value of W⁻(a,b) is always 0, by symmetry, and the linearised difference equation for the mode is

δW⁻(a,b) → (1 − εΛ⁺)δW⁻(a,b) + (εβγ²/2)∫∫da₁db₁ O(a,b,a₁,b₁)δW⁻(a₁,b₁)   (11)

which is almost the same as equation 7 (with the same operator O), except that the multiplier for the integral is βγ²/2 rather than β/2. Since γ < 1, the eigenvalues for the difference mode are therefore all less than those for the sum mode, and by the same fraction. The multiplicative decay term εΛ⁺δW⁻(a,b) uses the same Λ⁺ as equation 7, whose value is determined exclusively by properties of W⁺(a,b); but the non-multiplicative term εΛ′(a)W⁺(a,b) is absent. Note that the equilibrium values of the weights (controlled by σ_w) affect the operator O, and hence its eigenfunctions and eigenvalues.
Provided that the arbor and the initial values of the weights are not both flat (aA =j:. 00 or aw =j:. 00), the principal eigenfunctions of 0 1 and 0 2 have the general form (12) where Pn(r, k) is a polynomial (related to a Hermite polynomial) of degree n in r whose coefficients depend on k. Here k controls the periodicity in the projective field of each input cell b to the output cells, and ultimately the periodicity of any ocular dominance stripes that might form. The remaining terms control the receptive fields of the output cells. Operator 0 2 has zero eigenvalues for the polynomials of degree n > 0. The expressions for the coefficients of the polynomials and the non-zero eigenvalues of 0 1 and 0 2 are rather complicated. Figure 3 shows an example of this analysis. The left 4 x 3 block shows eigenfunctions and eigenvalues of 0 1 for k = 0 ... 5 and n = 0, 1, 2; the middle 4 x 3 block, the equivalent eigenfunctions and eigenvalues of 0 2 . The eigenvalues come essentially from a Gaussian, whose standard deviation is smaller for 0 2 . To a crude first approximation, therefore, the eigenvalues of 0 resemble the difference of two Gaussians in k, and so have a peak at a non-zero value of k, ie a finite ocular dominance periodicity. However, this approximation is too crude. Although the eigenfunctions of 0 1 and 0 2 shown in figure 3 look almost identical, they are, in fact, subtly different, since 0 1 and 0 2 do not commute (except for flat or rigid topography). The similarity between the eigenfunctions makes it possible to approximate the eigenfunctions of 0 very closely by expanding those of 0 2 in terms of 0 1 (or vice-versa). This only requires knowing the overlap between the eigenfunctions, which can be calculated analytically from their form in equation 12. Expanding for n ~ 2 leads to the approximate eigenfunctions and eigenvalues for 0 shown in the penultimate column on the right of figure 3. The difference, for instance, between the A B ':E: 10- 3 10- 2 10-' (II Figure 4: A) The constraint term >'+(0./N) (dotted line) and the ocular dominance eigenvalues e(k)(Q/N) (solid line 7 = 1; dotted line 7 = 0.5) of /3720/2 as a function of C>[ , where k is the stripe frequency associated with the maximum eigenvalue. For C>[ too large, the ocular dominance eigenfunction no longer dominates. The star and hexagon show the maximum values of C>r such that ocular dominance can form in each case. The scale in (A) is essentially arbitrary. B) Stripe frequency k associated with the largest eigenvalue as a function of C>r. The star and hexagon are the same as in (A), showing that the critical preferred stripe frequency is greater for higher correlations between the inputs (lower 7). Only integer values are considered, hence the apparent aliasing. = 3 and those for 0 1 and 0 2 is striking, considering the simieigenfunction of 0 for k larity between the latter two. For comparison, the farthest right column shows empirically calculated eigenfunctions and eigenvalues of 0 (using a 50 x 50 grid). Putting 8W- back in terms of ocular dominance, we require that eigenmodes of 0 resembling the modes with n = 0 should grow more strongly than the normalisation makes them shrink; and then the value of k associated with the largest eigenvalue will be the stripe frequency that should be expected to dominate. For the parameters of figure 3, the case with k 3 has the largest eigenvalue, and exactly this leads to the outcome of figure IC;D. 
4 Results

We can now predict the outcome of development for any set of parameters. First, the analysis of the behaviour of the sum mode (including, if necessary, the point about multiple equilibria for flat initial topography) allows a prediction of the equilibrium value of σ_w, which indicates the degree of topographic refinement. Second, this value of σ_w can be used to calculate the value of the normalisation parameter Λ⁺ that affects the growth of δW⁺ and δW⁻. There is then a barrier of 2Λ⁺/βγ² that the eigenvalues of O must surmount for a solution that is not completely binocular to develop. Third, if the peak eigenvalue of O is indeed sufficiently large that ocular dominance develops, then the favoured periodicity is set by the value of k associated with this eigenvalue.

The solid line in figure 4A shows the largest eigenvalue of βγ²O/2 as a function of the width of the cortical interactions σ_I, for γ = 1, the value of σ_w specified through the equilibrium analysis, and values of the other parameters as in figure 1. The dashed line shows Λ⁺, which comes from the normalisation. The largest value of σ_I for which ocular dominance still forms is indicated by the star. For γ = 0.5, the eigenvalues are reduced by a factor of γ² = 0.25, and so the critical value of σ_I (shown by the hexagram) is reduced. Figure 4B shows the frequency of the stripes associated with the largest eigenvalue. The smaller σ_I, the greater the frequency of the stripes. This line is jagged because only integers are acceptable as stripe frequencies.

Figure 5 shows the consequences of such relationships slightly differently. Some models consider the possibility that σ_I might change during development from a large to a small value. If the frequency of the stripes is most strongly determined by the frequency that grows fastest when σ_I first becomes sufficiently small that stripes grow, we can analyse plots such as those in figure 4 to determine the outcome of development.

Figure 5: First three figures: maximal values of σ_I for which ocular dominance will develop, as a function of γ. All other parameters as in figure 1, except that σ_A = 0.2 (solid), σ_A = 2.0 (dashed), σ_A = 0.0001 (dotted). Last three figures: value of the stripe frequency k associated with the maximal eigenvalue, for parameters as in the left three plots at the critical value of σ_I.
However, if ocular dominance does form, then the width of the stripes depends only weakJy on the degree of competition, and slightly more strongly on the width of the arbors. The narrower the arbor, the larger the frequency of the stripes. For rigid topography, as frA -t 0, the critical value of fr[ depends roughly linearly on 'Y . We analyse this case in more detail below. Note that the stripe width predicted by the linear analysis does not depend on the correlation between the input projections unless other parameters (such as a[) change, although ocular dominance might not develop for some values of the parameters. 5 Discussion The analytical tractability of the model makes it possible to understand in depth the interaction between cooperation, competition, correlation and arborisation. Further exploration of this complex space of interactions is obviously required. Simulations across a range of parameters have shown that the analysis makes correct predictions, although we have only analysed linear pattern formation. Non-linear stability turns out to playa highly significant role in higher dimensions (such as the 2d ocular dominance stripe pattern) where a continuum of eigenmodes share the same eigenvalues (Bressloff & Cowan, personal communication), and also in Id models involving very strong competition (fJ -t 00) like the self-organising map (Kohonen, 1995). Acknowledgements Funded by the Gatsby Charitable Foundation. I am very grateful to Larry Abbott, Ed Erwin, Geoff Goodhill, John Hertz, Ken Miller, Klaus Obermayer, Read Montague, Nick Swindale, Peter Wiesing and David Willshaw for discussions and to Zhaoping Li for making this paper possible. References Erwin, E, Obermayer, K & Schulten, K (1995) Neural Computation 7:425-468 . Kohonen, T (1995) Self-Organizing Maps . Berlin, New York:Springer-Verlag. Miller, KD (1996) In E Domany, JL van Hemmen & K Schulten, eds, Models of Neural Networks, Ill. New York:Springer-Verlag, 55-78. Miller, KD, Keller, JB & Stryker, MP (1989) Science 245:605-615. Piepenbrock, C & Obermayer, K (1999). In MS Keams, SA SoBa & DA Cohn, eds, Advances in Neuralfnformalion Processing Systems, fl. Cambridge, MA: MIT Press. Swindale, NV (1996) Network: Computation in Neural Systems 7: 161-247.
990
1,905
A silicon primitive for competitive learning

David Hsu  Miguel Figueroa  Chris Diorio
Computer Science and Engineering
The University of Washington
114 Sieg Hall, Box 352350
Seattle, WA 98195-2350 USA
hsud, miguel, [email protected]

Abstract

Competitive learning is a technique for training classification and clustering networks. We have designed and fabricated an 11-transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task.

1 Introduction

Competitive learning is a family of neural learning algorithms that has proved useful for training many classification and clustering networks [1]. In these networks, a neuron's synaptic weight vector typically represents a tight cluster of data points. Upon presentation of a new input to the network, the neuron representing the closest cluster adapts its weight vector, decreasing the difference between the weight vector and the present input. The details of this adaptation vary for different competitive learning rules, but the general functionality of the synapse is preserved across various competitive learning networks. These functions are weight storage, similarity computation, and competitive learning dynamics.

Many VLSI implementations of competitive learning have been reported in the literature [2]. These circuits typically use digital registers or capacitors for weight storage. Digital storage is expensive in terms of die area and power consumption; capacitive storage typically requires a refresh scheme to prevent weight decay. In addition, these implementations require separate computation and weight-update phases, increasing complexity. More importantly, neural networks built with these circuits typically do not adapt during normal operation.

Synapse transistors [3][4] address the problems raised in the previous paragraph. These devices use floating-gate technology to provide nonvolatile analog storage and local adaptation in silicon. The adaptation mechanisms do not perturb the operation of the device, thus enabling simultaneous adaptation and computation. Unfortunately, the adaptation mechanisms provide dynamics that are difficult to translate into existing neural-network learning rules. Allen et al. [5] proposed a silicon competitive learning synapse that used floating-gate technology in the early 90's. However, that approach suffers from asymmetric adaptation, due to separate mechanisms for increasing and decreasing weight values. In addition, they neither characterized the adaptation dynamics of their device, nor demonstrated competitive learning with it.

We present a new silicon primitive, the automaximizing bump circuit, that uses synapse transistors to implement competitive learning in silicon. This 11-transistor circuit computes a similarity measure, provides nonvolatile storage, implements local adaptation, and performs simultaneous adaptation and computation. In addition, the circuit naturally exhibits competitive learning dynamics. In this paper, we derive the properties of the automaximizing bump circuit directly from the physics of synapse transistors, and corroborate our analysis with data measured from a chip fabricated in a 0.35 μm CMOS process.
In addition, experiments on a competitive learning circuit, and software simulations of the learning rule, show that this device provides a suitable primitive for competitive learning.

2 Synapse transistors

The automaximizing bump circuit's behavior depends on the storage and adaptation properties of synapse transistors, so this section briefly reviews these devices. A synapse transistor comprises a floating-gate MOSFET, with a control gate capacitively coupled to the floating gate, and an associated tunneling implant. The transistor uses floating-gate charge to implement a nonvolatile analog memory, and outputs a source current that varies with both the stored value and the control-gate voltage. The synapse uses two adaptation mechanisms: Fowler-Nordheim tunneling [6] increases the stored charge; impact-ionized hot-electron injection (IHEI) [7] decreases the charge. Because tunneling and IHEI can both be active during normal transistor operation, the synapse enables simultaneous adaptation and computation.

A voltage difference between the floating gate and the tunneling implant causes electrons to tunnel from the floating gate, through gate oxide, to the tunneling implant. We can approximate this current (with respect to fixed tunneling and floating-gate voltages, V_tun0 and V_g0) as [4]:

I_tun = I_tun0 e^{(ΔV_tun − ΔV_g)/V_x}   (1)

where I_tun0 and V_x are constants that depend on V_tun0 and V_g0, and ΔV_tun and ΔV_g are deviations of the tunneling and floating-gate voltages from these fixed levels.

IHEI adds electrons to the floating gate, decreasing its stored charge. The IHEI current increases with the transistor's source current and drain-to-source voltage; over a small drain-voltage range, we model this dependence as [3][4]:

(2)

where the constant V_y depends on the VLSI process, and U_t is the thermal voltage.

3 Automaximizing bump circuit

The automaximizing bump circuit (Fig. 1) is an adaptive version of the classic bump-antibump circuit [8]. It uses synapse transistors to implement the three essential functions of a competitive learning synapse: storage of a weight value μ, computation of a similarity measure between the input and μ, and the ability to move μ closer to the input. Both circuits take two inputs, V₁ and V₂, and generate three currents.

Figure 1. (a) Automaximizing bump circuit. M1-M5 form the classic bump-antibump circuit; we added M6-M11 and the floating gates. (b) Data showing that the circuit computes a similarity between the input, V_in, and the stored value, μ, for three different stored weights. V_in is represented as V₁ = +V_in/2, V₂ = −V_in/2.

The two outside currents, I₁ and I₂, are a measure of the dissimilarity between the two inputs; the center current, I_mid, is a measure of their similarity:

I_mid = I_b (1 + λ cosh²(κΔV))⁻¹   (3)

where λ and κ are process- and design-dependent parameters, ΔV is the voltage difference between V₁ and V₂, and I_b is a bias current. I_mid is symmetric with respect to the difference between V₁ and V₂, and approximates a Gaussian centered at ΔV = 0.
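A small numerical sketch of the similarity computation in Eq. 3 follows. The values of λ, κ and I_b here are placeholders rather than extracted chip parameters, and the stored weight μ is modeled, anticipating Section 3.1, as a shift of the peak.

```python
import numpy as np

# Sketch of the similarity output (Eq. 3), with the stored weight mu acting
# as a shift of the peak of I_mid (Fig. 1b). lam, kappa, Ib are placeholders.
lam, kappa, Ib = 4.0, 3.0, 1e-7

def i_mid(v_in, mu):
    dv = v_in - mu   # effective differential floating-gate voltage
    return Ib / (1.0 + lam * np.cosh(kappa * dv) ** 2)

v_in = np.linspace(-1.0, 1.0, 201)
for mu in (-0.3, 0.0, 0.3):                   # three stored weights
    peak = v_in[np.argmax(i_mid(v_in, mu))]
    print(mu, peak)                           # the peak tracks mu
```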
We augment the bump-antibump circuit by adding floating gates and tunneling junctions to M1-M5, turning them into synapse transistors; M1 and M3 share the same floating gate and tunneling junction, as do M2 and M4. We also add transistors M6-M11 to control IHEI. For convenience, we will refer to our new circuit merely as a bump circuit. The charge stored on the bump circuit's floating gates, Q₁ and Q₂, shifts I_mid's peak away from ΔV = 0 by an amount determined by their difference. We interpret this difference as the weight, μ, stored by the circuit, and interpret I_mid as a similarity measure between the circuit's input and stored weight.

Tunneling and IHEI adapt the bump circuit's weight. The circuit is automaximizing because tunneling and IHEI naturally tune the peak of I_mid to coincide with the present input. This high-level behavior coincides with the dynamics of competitive learning; both act to decrease the difference between a stored weight and the applied input. Therefore, no explicit computation of the direction or magnitude of weight updates is necessary; the circuit naturally performs these computations for us. Consequently, we only need to indicate when the circuit should adapt, not how it adapts. Applying ~10V to V_tun and ~0V to V_inj activates adaptation. Applying <8V to V_tun and >2V to V_inj deactivates adaptation.

3.1 Weight storage

The bump circuit's weight value derives directly from the charge on its floating gates. A synapse transistor's floating-gate charge looks, for all practical purposes, like a voltage source, V_s, applied to the control gate. This voltage source has a value V_s = Q/C_in, where C_in is the control-gate to floating-gate coupling capacitance and Q is the floating-gate charge. We encode the input to the bump circuit, V_in, as a differential signal: V₁ = V_in/2 and V₂ = −V_in/2 (similar results follow for any symmetric encoding of V_in). As a result, I_mid computes the similarity between the two floating-gate voltages: V_fg1 = V_s1 + V_in/2 and V_fg2 = V_s2 − V_in/2, where V_s1 and V_s2 are the voltages due to the charge stored on the floating gates. We define the bump circuit's weight, μ, as:

μ = V_s2 − V_s1   (4)

This weight corresponds to the value of V_in that equalizes the two floating-gate voltages (and maximizes I_mid). Part (b) of Fig. 1 shows the bump circuit's I_mid output for three weight values, as a function of the differential input. We see that different stored values change the location of the peak, but do not change the shape of the bump. Because floating-gate charge is nonvolatile, the weight is also nonvolatile.

The differential encoding of the input makes the bump circuit's adaptation symmetric with respect to (V_in − μ). Without loss of generality, we can represent V_in as:

(5)

If we apply V_in/2 and −V_in/2 to the two input terminals, we arrive at the following two floating-gate voltages:

V_fg1 = (V_s2 + V_s1 + V_in − μ)/2   (6)
V_fg2 = (V_s2 + V_s1 − V_in + μ)/2   (7)

By reversing the sign of (V_in − μ), we obtain the same floating-gate voltages on the opposite terminals. Because the floating-gate voltages are independent of the sign of (V_in − μ), the bump circuit's learning rule is symmetric with respect to (V_in − μ).

3.2 Adaptation

We now explore the bump circuit's adaptation dynamics. We define ΔV_fg = V_fg1 − V_fg2. From Eqs. 4-7, we can see that V_in − μ = ΔV_fg. Consequently, the learning rate, dμ/dt, is equivalent to −dΔV_fg/dt. In our subsequent derivations, we consider only positive ΔV_fg, because adaptation is symmetric (albeit with a change of sign). We show complete derivations of the equations in this section in [9].

Tunneling causes adaptation by decreasing the difference between the floating-gate voltages V_fg1 and V_fg2. Electron tunneling increases the voltage of both floating gates but, because tunneling increases exponentially with smaller floating-gate voltages (see Eq. 1), it decreases the difference. Assuming that M1's floating-gate voltage is lower than M2's, the change in ΔV_fg due to electron tunneling is:

dΔV_fg/dt = −(I_tun1 − I_tun2)/C_fg   (8)

We substitute Eq. 1 into Eq. 8 and solve for the tunneling learning rule:

dΔV_fg/dt = −I_t0 e^{(ΔV_tun − ΔV_0)/V_x} sinh((ΔV_fg − φ)/2V_x)   (9)

where I_t0 = I_tun0/C_fg, V_x is a model constant, ΔV_0 = (ΔV_fg1 + ΔV_fg2)/2, and φ models the tunneling mismatch between synapse transistors. This rule depends on three factors: a controllable learning rate, ΔV_tun; the difference between V_in and μ, ΔV_fg; and the average floating-gate voltage, ΔV_0.

Figure 2. (a) Measured adaptation rates, due to tunneling and IHEI, along with fits from Eqs. 9 and 11. (b) Composite adaptation rate, along with a fit from (12). We slowed the IHEI adaptation rate (by using a higher V_inj), compared with the data from part (a), to cause better matching between tunneling and IHEI.

The circuit also uses IHEI to decrease ΔV_fg. We bias the bump circuit so that only transistors M1 and M2 exhibit IHEI. According to Eq. 2, IHEI depends linearly on a transistor's source current, but exponentially on its source-to-drain voltage. Consequently, we decrease ΔV_fg by controlling the drain voltages at M1 and M2. Coupled current mirrors (M6-M7 and M8-M9) at the drains of M1 and M2 simultaneously raise the drain voltage of the transistor that is sourcing the larger current, and lower the drain voltage of the transistor that is sourcing the smaller current. The transistor with the smaller source current will experience a larger V_sd, and thus exponentially more IHEI, causing its source current to rapidly increase. Diodes (M10 and M11) further increase the drain voltage of the transistor with the larger current, further reducing its IHEI. The net effect is that IHEI acts to equalize the currents and, likewise, the floating-gate voltages. Recently, Hasler proposed a similar method for controlling IHEI in a floating-gate differential pair [4].
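To see qualitatively how the tunneling rule (Eq. 9) above equalizes the floating-gate voltages, here is a minimal Euler-integration sketch; the constants I_t0, V_x, φ and the fixed exponential prefactor are placeholder values chosen for illustration, not fitted chip parameters.

```python
import numpy as np

# Euler integration of the tunneling learning rule (Eq. 9):
# d(dVfg)/dt = -I_t0 * exp((dVtun - dV0)/Vx) * sinh((dVfg - phi)/(2*Vx))
I_t0, Vx, phi = 1e-3, 0.1, 0.02   # placeholder constants (not chip-fitted)
prefactor = np.exp(0.5)           # stands in for exp((dVtun - dV0)/Vx)

dVfg, dt = 0.4, 1.0               # initial V_fg1 - V_fg2, time step
for _ in range(2000):
    dVfg += -I_t0 * prefactor * np.sinh((dVfg - phi) / (2 * Vx)) * dt

# dVfg relaxes toward phi (the tunneling-mismatch offset), i.e. toward
# equal floating-gate voltages and hence mu -> V_in.
print(dVfg)
```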
Assuming that Ml 's floating gate voltage is lower than M2's, the change in L1 Vfg due to electron tunneling is: d L1 Vfg 1dt = -(I tunl - f tun2 ) 1Cfg (8) We substitute Eq.1 into Eq.8 and solve for the tunneling learning rule: d L1Vfg Idt = -ftOe (.1.Vtun-.1.VO)/Vx . . h sm ((L1Vfg -f/J) 12 Y.) (9) where ftO=ftunO/Cfp Vx is a model constant, L1 Vo = (L1 Vfgl + L1 V fg2 )12, and f/J models the tunneling mismatch between synapse transistors. This rule depends on three factors: lO ' .--~--~--~--~------, 10 ' .~ 10' '"~ ] ~ 10 ' ~ o .d .., .,,. injection data tunneling data '0 I 10 ' .., .,,. 0 '1 10 111 o -fit data fit 10 ' 0 .1 0 .2 0.3 .1.V,& 0.4 0 0.1 (V) 0 .2 .lVi' (a) 0.3 0.4 0 .5 (V) (b) Figure 2. (a) Measured adaptation rates, due to tunneling and IHEI, along with fits from Eqs.9 and 11. (b) Composite adaptation rate, along with a fit from (12). We slowed the IHEI adaptation rate (by using a higher Vinj ), compared with the data from part (a), to cause better matching between tunneling and IHEI. a controllable learning rate, ~ Vtun ; the difference between Yin and f.1, average floating gate voltage, ~ Yo. ~ V rg ; and the The circuit also uses IHEI to decrease ~ Vrg . We bias the bump circuit so that only transistors Ml and M2 exhibit IHEI. According to Eq.2, IHEI depends linearly on a transistor's source current, but exponentially on its source-to-drain voltage. Consequently, we decrease ~ V rg by controlling the drain voltages at Ml and M2. Coupled current mirrors (M6-M7 and M8-M9) at the drains of Ml and M2, simultaneously raise the drain voltage of the transistor that is sourcing a larger current, and lower the drain voltage of the transistor that is sourcing a smaller current. The transistor with the smaller source current will experience a larger V sd , and thus exponentially more IHEI, causing its source current to rapidly increase. Diodes (MlO and M11) further increase the drain voltage of the transistor with the larger current, further reducing its IHEI. The net effect is that IHEI acts to equalize the currents, and, likewise, the floating gate voltages . Recently Hasler proposed a similar method for controlling IHEI in a floating gate differential pair [4]. Assuming II >h, the change in ~ V rg due to IHEI is: (10) We expand the learning rule by substituting Eq.2 into Eq.lO. To compute values for the drain voltages of MI and M 2 , we assume that all of II flows through MIl and all of 12 flows through M7. The IHEI learning rule is given below: d/1 Vf g 1 dt = - IjOe9WO (e -rVi"i<l>l (/1 Vfg ) - e~V;"i <l>2 (/1 Vfg ? (11) where fjo=finjO/Crg, r=-2heVy, 17=-lIVy, and I;=KlVy. <1>1 and <1>2 are given by: l-2U,/KVy -lO~Vfg f mid )/2 cos h( K~ Vfg 12 V t )) e (12) - (l~Vfg ( -K~Vfg I U, )- u, IVy) - I mid ) / 2 cosh(clVfg 12VI )) e l-e (13) <I> l (~ Vfg ) = (( f b <l>2(~Vfg) = ( ( Ib - where (J =(I-V/Vy)Kl2Vh and w =Kl2Vt -Kl2Vy-llVy. Like tunneling, the IHEI rule depends on three factors: a controllable learning rate, Vinj ; the difference between Yin and f.1, ~ Vrg; and ~ Yo. Part (a) of Fig. 2 shows measurements of d~ Vrgldt versus ~ Vfg due to tunneling and IHEI, along with fits to Eqs.9 and 11 respectively. IHEI and tunneling facilitate adaptation by adding and removing charge from the floating gates, respectively. Isolated, any of these mechanisms will eventually drive the bump circuit out of its operating range. In order to obtain useful adaptation, we need to activate both mechanisms at the same time. 
There is an added benefit to combining tunneling and IHEI: Part (a) Fig 2 shows that tunneling acts more strongly for smaller values of ~ Vfg , while IHEI shows the opposite behavior. The mechanisms complement each other, providing adaptation over more than a I V range in ~ Vrg . We combine Eq. 9 and Eq.11 to derive the bump learning rule: --d~Vrg / dt =/tOe (~Vtun - ~I'o)/V, . sinh((~Vrg -?')/2V~)+IjOe ?oVo (e TViol CPl(~Vrg)-e ~viIj CP2(~Vfg)) (14) Part (b) of Fig. 2 illustrates the composite weight-update dynamics. When ~ V fg is small, adaptation is primarily driven by IHEI, while tunneling dominates for larger values of ~ Vfg ? The bump learning rule is unlike any learning rule that we have found in the literature. Nevertheless, it exhibits several desirable properties. First, it naturally moves the bump circuit's weight towards the present input. Second, the weight update is symmetric with respect to the difference between the stored value and the present input. Third, we can vary the weight-update rate over many orders of magnitude by adjusting Vlun and V inj ? Finally, because the bump circuit uses synapse transistors to perform adaptation, the circuit can adapt during normal operation. 4 Competitive learning with bump circuits We summarize the results of simulations of the bump learning rule and also results from a competitive learning circuit fabricated in the TSMC 0.35 f.lm process below. For further details consult [9]. We first compared the performance of a software neural network on a standard clustering task, using the bump learning rule (fitted to data from Fig. 2), and a basic competitive learning rule (learning rate p=O.OI): djl / dt = p X CVin - jl) (15) We trained both networks on data drawn from a mixture of 32 Gaussians, in a 32dimensional space. The Gaussian means were drawn from the interval [0,1] and the covariance matrix was the diagonal matrix 0.1 *1. On an input presentation, the network updated the weight vector of the closest neuron using either the bump learning rule, or Eq.15. We measured the performance of the two learning rules by evaluating the coding error of each trained network, on a test set drawn from the same distribution as the training data. The coding error is the sum of the squared distances between each test point and its closest neuron. Part (a) of Fig. 3 shows that the bump circuit's rule performs favorably with the hard competitive learning rule. Our VLSI circuit (Part (b) of Fig. 3) comprised two neurons with a one-dimensional input (a neuron was a single bump circuit), and a feedback network to control adaptation. The feedback network comprised a winner-take-all (WT A) [10] that detected which bump was closest to the present input, and additional circuitry [9] that generated Vtun and Vinj from the WT A output. We tested this circuit on a clustering task, to learn the centers of a mixture of two Gaussians. In part (c) of Fig. 3, we compare the performance of our circuit with a simulated neural network using Eq.15. The VLSI circuit performed comparably with the neural network, demonstrating that our bump circuit, in conjunction with simple feedback mechanisms, can implement competitive learning in VLSI. We can generalize the circuitry to multiple dimensions (multiple bump circuits per neuron) and multiple neurons; each neuron only requires one V lun and V inj signal. 
Figure 3. (a) Comparison of a neural network using the bump learning rule versus a standard competitive learning rule, as a function of the number of training examples. We drew the training data from a mixture of thirty-two Gaussians, and averaged the results over ten trials. (b) A competitive learning circuit. (c) Performance of a competitive learning circuit versus a neural network for learning a mixture of two Gaussians (the plot shows circuit output, neural network output, and target values versus the number of training examples).

Acknowledgements

This work was supported by the NSF under grants BES 9720353 and ECS 9733425, and by a Packard Foundation Fellowship.

References

[1] M.A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, Cambridge, MA: The MIT Press, 1995.
[2] H.C. Card, D.K. McNeill, and C.R. Schneider, "Analog VLSI circuits for competitive learning networks", Analog Integrated Circuits and Signal Processing, vol. 15, pp. 291-314, 1998.
[3] C. Diorio, "A p-channel MOS synapse transistor with self-convergent memory writes", IEEE Transactions on Electron Devices, vol. 47, no. 2, pp. 464-472, 2000.
[4] P. Hasler, "Continuous-time feedback in floating-gate MOS circuits", to appear in IEEE Transactions on Circuits and Systems II, Feb. 2001.
[5] T. Allen et al., "Electrically adaptable neural network with post-processing circuitry", U.S. Patent No. 5,331,215, issued July 19, 1994.
[6] M. Lenzlinger and E.H. Snow, "Fowler-Nordheim tunneling into thermally grown SiO2", Journal of Applied Physics, vol. 40(1), pp. 278-283, 1969.
[7] E. Takeda, C. Yang, and A. Miura-Hamada, Hot Carrier Effects in MOS Devices, San Diego, CA: Academic Press, 1995.
[8] T. Delbruck, "Bump circuits for computing similarity and dissimilarity of analog voltages", CNS Memo 26, California Institute of Technology, 1993.
[9] D. Hsu, M. Figueroa, and C. Diorio, "A silicon primitive for competitive learning", UW CSE Technical Report no. 2000-07-01, 2000.
[10] J. Lazzaro, S. Ryckebusch, M.A. Mahowald, and C.A. Mead, "Winner-take-all networks of O(n) complexity", in Advances in Neural Information Processing Systems, vol. 1, San Mateo, CA: Morgan Kaufmann, pp. 703-711, 1989.
Weak Learners and Improved Rates of Convergence in Boosting

Shie Mannor and Ron Meir
Department of Electrical Engineering
Technion, Haifa 32000, Israel
{shie,rmeir}@{techunix,ee}.technion.ac.il

Abstract

The problem of constructing weak classifiers for boosting algorithms is studied. We present an algorithm that produces a linear classifier that is guaranteed to achieve an error better than random guessing for any distribution on the data. While this weak learner is not useful for learning in general, we show that under reasonable conditions on the distribution it yields an effective weak learner for one-dimensional problems. Preliminary simulations suggest that similar behavior can be expected in higher dimensions, a result which is corroborated by some recent theoretical bounds. Additionally, we provide improved convergence rate bounds for the generalization error in situations where the empirical error can be made small, which is exactly the situation that occurs if weak learners with guaranteed performance that is better than random guessing can be established.

1 Introduction

The recently introduced boosting approach to classification (e.g., [10]) has been shown to be a highly effective procedure for constructing complex classifiers. Boosting type algorithms have recently been shown [9] to be strongly related to other incremental greedy algorithms (e.g., [6]). Although a great deal of numerical evidence suggests that boosting works very well across a wide spectrum of tasks, it is not a panacea for solving classification problems. In fact, many versions of boosting algorithms currently exist (e.g., [4], [9]), each possessing advantages and disadvantages in terms of classification accuracy, interpretability and ease of implementation. The field of boosting provides two major theoretical results. First, it is shown that in certain situations the training error of the classifier formed converges to zero (see (2)). Moreover, under certain conditions, a positive margin can be guaranteed. Second, bounds are provided for the generalization error of the classifier (see (1)).

The main contribution of this paper is twofold. First, we present a simple and efficient algorithm which is shown, for every distribution on the data, to yield a linear classifier with guaranteed error smaller than $1/2 - \gamma$, where $\gamma$ is strictly positive. This establishes that a weak linear classifier exists. From the theory of boosting [10] it is known that such a condition suffices to guarantee that the training error converges to zero as the number of boosting iterations increases. In fact, the empirical error with a finite margin is shown to converge to zero if $\gamma$ is sufficiently large. However, the existence of a weak learner with error $1/2 - \gamma$ is not always useful in terms of generalization error, since it applies even to the extreme case where the binary labels are drawn independently at random with equal probability at each point, in which case we cannot expect any generalization. It is then clear that in order to construct useful weak learners, some assumptions need to be made about the data. In this work we show that under certain natural conditions, a useful weak learner can be constructed for one-dimensional problems, in which case the linear hyper-plane degenerates to a point. We speculate that similar results hold for higher dimensional problems, and present some supporting numerical evidence for this. In fact, some very recent results [7] show that this expectation is indeed borne out.
The second contribution of our work consists of establishing faster convergence rates for the generalized error bounds introduced recently by Mason et al. [8]. These improved bounds show that faster convergence can be achieved if we allow for convergence to a slightly larger value than in previous bounds. Given the guaranteed convergence of the empirical loss to zero (in the limited situations in which we have proved such a bound), such a result may yield a better trade-off between the terms appearing in the bound, offering a better model selection criterion (see Chapter 15 in [1]).

2 Construction of a Linear Weak Learner

We recall the basic generalization bound for convex combinations of classifiers. Let $H$ be a class of binary classifiers of VC-dimension $d_v$, and denote by $\mathrm{co}(H)$ the convex hull of $H$. Given a sample $S = \{(x_1, y_1), \ldots, (x_m, y_m)\} \in (X \times \{-1,+1\})^m$ of $m$ examples drawn independently at random from a probability distribution $D$ over $X \times \{-1,+1\}$, Schapire et al. [10] show that with probability at least $1-\delta$, for every $f \in \mathrm{co}(H)$ and every $\theta > 0$,

$$P_D[Yf(X) \le 0] \le P_S[Yf(X) \le \theta] + O\left(\frac{1}{\sqrt{m}}\left(\frac{d_v \log^2(m/d_v)}{\theta^2} + \log(1/\delta)\right)^{1/2}\right), \qquad (1)$$

where the margin-error $P_S[Yf(X) \le \theta]$ denotes the fraction of training points for which $y_i f(x_i) \le \theta$. Clearly, if the first term can be made small for a large value of the margin $\theta$, a tight bound can be established. Schapire et al. [10] also show that if each weak classifier can achieve an error smaller than $1/2 - \gamma$, then

$$P_S[Yf(X) \le \theta] \le \left((1-2\gamma)^{1-\theta}(1+2\gamma)^{1+\theta}\right)^{T/2}, \qquad (2)$$

where $T$ is the number of boosting iterations. Note that if $\gamma > \theta$, the bound decreases to zero exponentially fast. It is thus clear that a large value of $\gamma$ is needed in order to guarantee a small value for the margin-error. However, if $\gamma$ (and thus $\theta$) behaves like $m^{-\beta}$ for some $\beta > 0$, the rate of convergence of the second term in (1) will deteriorate, leading to worse bounds than those available by using standard VC results [11]. What is needed is a characterization of conditions under which the achievable $\theta$ does not decrease rapidly with $m$. In this section we present such conditions for one-dimensional problems, and mention recent work [7] that proves a similar result in higher dimensions.

We begin by demonstrating that for any distribution on $m$ points, a linear classifier can achieve an error smaller than $1/2 - \gamma$, where $\gamma = O(1/m)$. In view of our comments above, such a fast convergence of $\gamma$ to zero may be useless for generalization bounds. We then use our construction to show that, under certain regularity conditions, a value of $\gamma$, and thus of $\theta$, which is independent of $m$ can be established for one-dimensional problems.

Let $\{x_1, \ldots, x_m\}$ be points in $\mathbb{R}^d$, and denote by $\{y_1, \ldots, y_m\}$ their binary labels, i.e., $y_i \in \{-1,+1\}$. A linear decision rule takes the form $\hat{y}(x) = \mathrm{sgn}(a \cdot x + b)$, where $\cdot$ is the standard inner product in $\mathbb{R}^d$. Let $P \in \Delta_m$ be a probability measure on the $m$ points. The weighted misclassification error for a classifier $\hat{y}$ is $P_e(a,b) = \sum_{i=1}^m p_i I(y_i \ne \hat{y}_i)$. For technical reasons, we prefer to use the expression $1 - 2P_e = \sum_{i=1}^m p_i y_i \hat{y}_i$. Obviously, if $1 - 2P_e \ge \epsilon$ we have that $P_e \le \frac{1}{2} - \frac{\epsilon}{2}$.

Lemma 1 For any sample of $m$ distinct points, $S = \{(x_i, y_i)\}_{i=1}^m \in (\mathbb{R}^d \times \{-1,+1\})^m$, and a probability measure $P \in \Delta_m$ on $S$, there is some $a \in \mathbb{R}^d$ and $b \in \mathbb{R}$ such that the weighted misclassification error of the linear classifier $\hat{y} = \mathrm{sgn}(a \cdot x + b)$ is bounded away from $1/2$; in particular, $\sum_{i=1}^m p_i I(y_i \ne \hat{y}_i) \le \frac{1}{2} - \frac{1}{4m}$.
Proof The basic idea of the proof is to project a finite number of points onto a line so that no two projections coincide. Since there is at least one point $x$ whose weight is not smaller than $1/m$, we consider the four possible linear classifiers defined by the line with boundaries near $x$ (at both sides of it and with opposite sign), and show that one of these yields the desired result. We proceed to the detailed proof.

Fix a probability vector $P = (p_1, \ldots, p_m) \in \Delta_m$. We may assume w.l.o.g. that all the $x_i$ are different, or we can merge two elements and get $m-1$ points. First, observe that if $|\sum_{i=1}^m p_i y_i| \ge \frac{1}{2m}$, then the problem is trivially solved. To see this, denote by $S_\pm$ the sub-samples of $S$ labelled by $\pm 1$ respectively. Assume, for example, that $\sum_{i \in S_+} p_i \ge \sum_{i \in S_-} p_i + \frac{1}{2m}$. Then the choice $a = 0$, $b = 1$, namely $\hat{y}_i = 1$ for all $i$, implies that $\sum_i p_i y_i \hat{y}_i \ge \frac{1}{2m}$. Similarly, the choice $a = 0$, $b = -1$ solves the problem if $\sum_{i \in S_-} p_i \ge \sum_{i \in S_+} p_i + \frac{1}{2m}$. Thus, we can assume, without loss of generality, that $|\sum_{i=1}^m p_i y_i| < \frac{1}{2m}$.

Next, note that there exists a direction $u$ such that $i \ne j$ implies $u \cdot x_i \ne u \cdot x_j$. This can be seen by the following argument. Construct all one-dimensional lines containing two data points or more; clearly the number of such lines is at most $m(m-1)/2$. It is then obvious that any line which is not perpendicular to any of these lines obeys the required condition. Let $x_i$ be a data point for which $p_i \ge 1/m$, and set $\epsilon$ to be a positive number such that $0 < \epsilon < \min\{|u \cdot x_i - u \cdot x_j| : i \ne j,\; i,j \in 1,\ldots,m\}$. Such an $\epsilon$ always exists since the points are assumed to be distinct. Note the following trivial algebraic fact: for any $A, B \in \mathbb{R}$,

$$|A + B| < \delta_1 \ \text{ and } \ A \ge \delta_2 \ \ \Longrightarrow \ \ A - B > 2\delta_2 - \delta_1. \qquad (3)$$

For each $j = 1, 2, \ldots, m$ let the classification be given by $\hat{y}_j = \mathrm{sgn}(u \cdot x_j + b)$, where the bias $b$ is given by $b = -u \cdot x_i + \epsilon y_i$. Then clearly $\hat{y}_i = y_i$ and $\hat{y}_j = \mathrm{sgn}(u \cdot x_j - u \cdot x_i)$ for $j \ne i$, and therefore $\sum_j p_j y_j \hat{y}_j = p_i + \sum_{j \ne i} p_j y_j \,\mathrm{sgn}(u \cdot x_j - u \cdot x_i)$. Let $A = p_i$ and $B = \sum_{j \ne i} p_j y_j \,\mathrm{sgn}(u \cdot x_j - u \cdot x_i)$. If $|A + B| \ge \frac{1}{2m}$ we are done. Otherwise, if $|A + B| < \frac{1}{2m}$, consider the classifier $\hat{y}_j' = \mathrm{sgn}(-u \cdot x_j + b')$, with $b' = u \cdot x_i + \epsilon y_i$ (note that $\hat{y}_i' = y_i$ and $\hat{y}_j' = -\hat{y}_j$, $j \ne i$). Using (3) with $\delta_1 = \frac{1}{2m}$ and $\delta_2 = \frac{1}{m}$, the claim follows.

We comment that the upper bound in Lemma 1 may be improved to $1/2 - 1/(4(m-1))$, $m \ge 2$, using a more refined argument.

Remark 1 Lemma 1 implies that an error of $1/2 - \gamma$, where $\gamma = O(1/m)$, can be guaranteed for any set of arbitrarily weighted points. It is well known that the problem of finding a linear classifier with minimal classification error is NP-hard (in $d$) [5]. Moreover, even the problem of approximating the optimal solution is NP-hard [2]. Since the algorithm described in Lemma 1 is clearly polynomial (in $m$ and $d$), there seems to be a transition as a function of $\gamma$ between the class NP and P (assuming, as usual, that they are different). This issue warrants further investigation.

While the result given in Lemma 1 is interesting, its generality precludes its usefulness for bounding generalization error. This can be seen by observing that the theorem guarantees the given margin even in the case where the labels $y_i$ are drawn uniformly at random from $\{\pm 1\}$, in which case no generalization can be expected. In order to obtain a more useful result, we need to restrict the complexity of the data distribution. We do this by imposing constraints on the types of decision regions characterizing the data.
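Since the proof is constructive, it can be turned directly into code. The sketch below (function names are ours) draws a direction $u$ with distinct projections, takes a point of weight at least $1/m$, and evaluates the two constant classifiers together with the four threshold classifiers placed just to either side of that point in both orientations; by the argument above, the best of these achieves an edge $\sum_i p_i y_i \hat{y}_i \ge 1/(2m)$, i.e. weighted error at most $1/2 - 1/(4m)$.

```python
import numpy as np

def lemma1_weak_learner(X, y, p, rng):
    """Constructive weak learner of Lemma 1 (our naming).

    Returns (a, b) with edge sum_i p_i y_i sgn(a.x_i + b) >= 1/(2m)."""
    m, d = X.shape
    # The two constant classifiers cover the case |sum_i p_i y_i| >= 1/(2m).
    candidates = [(np.zeros(d), 1.0), (np.zeros(d), -1.0)]
    # A random direction gives distinct projections almost surely.
    while True:
        u = rng.normal(size=d)
        proj = X @ u
        if len(np.unique(proj)) == m:
            break
    i = int(np.argmax(p))                        # a point with p_i >= 1/m
    eps = 0.5 * np.min(np.diff(np.sort(proj)))   # 0 < eps < all projection gaps
    for s in (+1.0, -1.0):                       # both orientations
        for side in (+1.0, -1.0):                # bias just to either side of x_i
            candidates.append((s * u, -s * proj[i] + side * eps * y[i]))

    def edge(a, b):
        return float(np.sum(p * y * np.sign(X @ a + b)))

    a, b = max(candidates, key=lambda ab: edge(*ab))
    return a, b, edge(a, b)

rng = np.random.default_rng(1)
m, d = 50, 3
X = rng.normal(size=(m, d))
y = rng.choice([-1.0, 1.0], size=m)              # labels may be pure noise
p = rng.dirichlet(np.ones(m))                    # arbitrary weighting
a, b, gamma = lemma1_weak_learner(X, y, p, rng)
print(f"edge = {gamma:.4f}  (Lemma 1 guarantee: {1/(2*m):.4f})")
```

On random weighted data the achieved edge is typically far above the guarantee; the point of the construction is the worst-case floor.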
In order to generate complex, yet tractable, decision regions we consider a multi-linear mapping from $\mathbb{R}^d$ to $\{-1,1\}^k$, generated by the $k$ hyperplanes $P_i = \{x \in \mathbb{R}^d : w_i \cdot x + w_{i0} = 0\}$, $i = 1, \ldots, k$, as in the first hidden layer of a neural network. Such a mapping generates a partition of the input space $\mathbb{R}^d$ into $M$ connected components, $\{\mathbb{R}^d \setminus \bigcup_{i=1}^k P_i\}$, each characterized by a unique binary vector of length $k$. Assume that the weight vectors $(w_i, w_{i0}) \in \mathbb{R}^{d+1}$ are in general position. The number of connected components is given by (e.g., Lemma 3.3 in [1]) $C(k, d+1) = 2\sum_{i=0}^{d} \binom{k-1}{i}$. This number can be bounded from below by $2\binom{k-1}{d}$, which in turn is bounded below by $2((k-1)/d)^d$. An upper bound is given by $2(e(k-1)/d)^d$, $k \ge d$. In other words, $C(k, d+1) = \Theta((k/d)^d)$.

In order to generate a binary classification problem, we observe that there exists a binary function from $\{-1,1\}^k \mapsto \{-1,1\}$ characterized by these $M$ decision regions. This can be seen as follows. Choose an arbitrary connected component, and label it by $+1$ (say). Proceed by labelling all its neighbors by $-1$, where neighbors share a common boundary (a $(d-1)$-dimensional hyperplane in $d$ dimensions). Proceeding by induction, we generate a binary classification problem composed of exactly $M$ decision regions. Thus, we have constructed a binary classification problem characterized by at least $2\binom{k-1}{d} \ge 2((k-1)/d)^d$ decision regions. Clearly, as $k$ becomes arbitrarily large, very elaborate regions are formed. We now apply these ideas, together with Lemma 1, to a one-dimensional problem. Note that in this case the partition is composed of intervals.

Theorem 1 Let $F$ be a class of functions from $\mathbb{R}$ to $\{\pm 1\}$ which partition the real line into at most $k$ intervals, $k \ge 2$. Let $\mu$ be an arbitrary probability measure on $\mathbb{R}$. Then for any $f \in F$ there exist $a, \tau^* \in \mathbb{R}$ for which

$$\mu\{x : f(x)\,\mathrm{sgn}(ax - \tau^*) = 1\} \ge \frac{1}{2} + \frac{1}{4k}. \qquad (4)$$

Proof Let a function $f$ be given, and denote its connected components by $I_1, \ldots, I_k$; that is, $I_1 = (-\infty, l_1)$, $I_2 = [l_1, l_2)$, $I_3 = [l_2, l_3)$, and so on until $I_k = [l_{k-1}, \infty)$, with $-\infty = l_0 < l_1 < l_2 < \cdots < l_{k-1}$. Associate with every interval a point in $\mathbb{R}$: $x_1 = l_1 - 1$, $x_2 = (l_1 + l_2)/2$, ..., $x_{k-1} = (l_{k-2} + l_{k-1})/2$, $x_k = l_{k-1} + 1$, a weight $\mu_i = \mu(I_i)$, $i = 1, \ldots, k$, and a label $f(x_i) \in \{\pm 1\}$. We now apply Lemma 1 to conclude that there exist $a \in \{\pm 1\}$ and $\tau \in \mathbb{R}$ such that $\sum_{i=1}^k \mu_i f(x_i)\,\mathrm{sgn}(ax_i - \tau) \ge 1/(2k)$. The value of $\tau$ lies between $l_i$ and $l_{i+1}$ for some $i \in \{0, 1, \ldots, k-1\}$ (recall that $l_0 = -\infty$). We identify $\tau^*$ of (4) with $l_{i+1}$. This is the case since, by choosing this $\tau^*$, $f(x)$ in any segment $I_i$ is equal to $f(x_i)$, so we have that

$$\mu\{x : f(x)\,\mathrm{sgn}(ax - \tau^*) = 1\} = \frac{1}{2} + \frac{1}{2}\sum_{i=1}^k \mu_i f(x_i)\,\mathrm{sgn}(ax_i - \tau^*) \ge \frac{1}{2} + \frac{1}{4k}.$$

Note that the result in Theorem 1 is in fact more general than we need, as it applies to arbitrary distributions, rather than distributions over a finite set of points. An open problem at this point is whether a similar result applies to $d$-dimensional problems. We conjecture that in $d$ dimensions $\gamma$ behaves like $k^{-l(d)}$ for some function $l$, where $k$ is a measure of the number of homogeneous convex regions defined by the data (a homogeneous region is one in which all points possess identical labels). While we do not have a general proof at this stage, we have recently shown [7] that the conjecture holds under certain natural conditions on the data. This result implies that, at least under appropriate conditions, boosting-like algorithms are expected to have excellent generalization performance.
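Theorem 1's guarantee is easy to probe numerically; the sketch below builds a random $k$-interval target with alternating labels, estimates $\mu$ by Monte Carlo over a uniform measure (an illustrative choice of $\mu$ on our part), and exhaustively searches stumps $\mathrm{sgn}(ax - \tau)$. The best agreement should sit at or above $1/2 + 1/(4k)$, up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# A target f partitioning the line into k intervals with alternating labels.
k = 8
breaks = np.sort(rng.uniform(-1.0, 1.0, size=k - 1))

def f(x):
    return np.where(np.searchsorted(breaks, x) % 2 == 0, 1.0, -1.0)

# Monte Carlo estimate of mu{x : f(x) sgn(ax - tau) = 1} for mu uniform
# on [-1, 1] (an illustrative choice of measure).
x = rng.uniform(-1.0, 1.0, size=20_000)
labels = f(x)

best = 0.0
for a in (+1.0, -1.0):
    for tau in np.linspace(-1.1, 1.1, 2001):
        agree = np.mean(labels * np.sign(a * x - tau) == 1.0)
        best = max(best, agree)
print(f"best stump agreement = {best:.4f},  1/2 + 1/(4k) = {0.5 + 1/(4*k):.4f}")
```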
To provide some motivation, we present results of some numerical simulations for two-dimensional problems. For this simulation we used random lines to generate a partition of the unit square in $\mathbb{R}^2$. We then drew 1000 points at random from the unit square and assigned them labels according to the partition. Finally, in order to have a non-trivial problem, we made sure that the cumulative weights of each class are equal. We then calculated the optimal linear classifier by exhaustive search. In Figure 1(b) we show a sample decision region with 93 regions. Figure 1(a) shows the dependence of $\gamma$ on the number of regions $k$. As it turns out, there is a significant logarithmic dependence between $\gamma$ and $k$, which leads us to conjecture that $\gamma \sim Ck^{-l} + \epsilon$ for some $C$, $l$ and $\epsilon$. In the presented case it turns out that $l = 3$ fits our model well. It is important to note, however, that the procedure described above only supports our claim in an average-case, rather than worst-case, setting, as is needed.

Figure 1: (a) $\gamma$ as a function of the number of regions. (b) A typical complex partition of the unit square used in the simulations.
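A scaled-down version of this simulation is sketched below: random lines partition the unit square, cells are labelled by the parity of their sign pattern (which gives adjacent cells opposite labels), and a randomized search over linear classifiers stands in for the exhaustive one; the parity labelling, the omitted class-balancing step, and the search budget are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(3)

def best_linear_edge(n_lines, n_points=1000, n_search=4000):
    # Random lines w.x + b = 0 partition the unit square; labelling each
    # cell by the parity of its sign pattern makes neighbouring cells disagree.
    W = rng.normal(size=(n_lines, 2))
    b = rng.uniform(-1.0, 1.0, size=n_lines)
    X = rng.uniform(0.0, 1.0, size=(n_points, 2))
    y = np.where((X @ W.T + b > 0).sum(axis=1) % 2 == 0, 1.0, -1.0)
    # Randomized search over linear classifiers sgn(a.x - t), standing in
    # for the exhaustive search of the experiment.
    best = 0.5
    for _ in range(n_search):
        a = rng.normal(size=2)
        t = rng.uniform(-2.0, 2.0)
        acc = np.mean(np.sign(X @ a - t) == y)
        best = max(best, acc, 1.0 - acc)   # a classifier and its negation
    return best - 0.5                      # the edge gamma

for n_lines in (2, 4, 8, 16):
    print(f"{n_lines:2d} lines -> gamma ~ {best_linear_edge(n_lines):.3f}")
```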
3 Improved Convergence Rates

In Section 2 we proved that under certain conditions a weak learner exists with a sufficiently large margin, and thus the first term in (1) indeed converges to zero. We now analyze the second term in (1) and show that it may be made to converge considerably faster, if the first term is made somewhat larger. First, we briefly recall the framework introduced recently by Mason et al. [8]. These authors begin by introducing the notion of a $B$-admissible family of functions. For completeness we repeat their definition.

Definition 1 (Definition 2 in [8]) A family $\{C_N : N \in \mathbb{N}\}$ of margin cost functions is $B$-admissible for $B \ge 0$ if for all $N \in \mathbb{N}$ there is an interval $Y \subset \mathbb{R}$ of length no more than $B$ and a function $\Psi_N : [-1,1] \mapsto Y$ that satisfies

$$\mathrm{sgn}(-\alpha) \le E_{Z \sim Q_{N,\alpha}}[\Psi_N(Z)] \le C_N(\alpha)$$

for all $\alpha \in [-1,1]$, where $E_{Z \sim Q_{N,\alpha}}(\cdot)$ denotes the expectation when $Z$ is chosen randomly as $Z = (1/N)\sum_{i=1}^N Z_i$ with $P(Z_i = 1) = (1+\alpha)/2$.

Denote the convex hull of a class $H$ by $\mathrm{co}(H)$. The main theoretical result in [8] is the following lemma.

Lemma 2 ([8], Theorem 3) For any $B$-admissible family $\{C_N : N \in \mathbb{N}\}$ of margin cost functions, for any binary hypothesis class $H$ of VC dimension $d_v$ and any distribution $D$ on $X \times \{-1,+1\}$, with probability at least $1-\delta$ over a random sample $S$ of $m$ examples drawn at random according to $D$, every $N$ and every $f \in \mathrm{co}(H)$ satisfies $P_D[yf(x) \le 0] \le E_S[C_N(yf(x))] + \epsilon_N$, where

$$\epsilon_N = O\left(\left[\left(B^2 N d_v \log m + \log(N/\delta)\right)/m\right]^{1/2}\right).$$

Remark 2 The most appealing feature of Lemma 2, as of other results for convex combinations, is the fact that the bound does not depend on the number of hypotheses from $H$ defining $f \in \mathrm{co}(H)$, which may in fact be infinite. Using standard VC results (e.g. [11]) would lead to useless bounds, since the VC dimension of these classes is often huge (possibly infinite).

Lemma 2 considers binary hypotheses. Since recent work has demonstrated the effectiveness of using real-valued hypotheses, we consider the case where the weak classifiers may be confidence-rated, i.e., taking values in $[-1,1]$ rather than $\{\pm 1\}$. We first extend Lemma 2 to confidence-rated classifiers. Note that the variables $Z_i$ in Definition 1 are no longer binary in this case.

Lemma 3 Let the conditions of Lemma 2 hold, except that $H$ is a class of real-valued functions from $X$ to $[-1,+1]$ of pseudo-dimension $d_p$. Assume further that $\Psi_N$ in Definition 1 obeys a Lipschitz condition of the form $|\Psi_N(x) - \Psi_N(x')| \le L|x - x'|$ for every $x, x' \in X$. Then with probability at least $1-\delta$, $P_D[yf(x) \le 0] \le E_S[C_N(yf(x))] + \epsilon_N$, where

$$\epsilon_N = O\left(\left[\left(L B^2 N d_p \log m + \log(N/\delta)\right)/m\right]^{1/2}\right).$$

Proof The proof is very similar to the proof of Theorem 2, and is omitted for the sake of brevity.

It is well known that in the standard setting, where $C_N$ is replaced by the empirical classification error, improved rates, replacing $O(\sqrt{\log m/m})$ by $O(\log m/m)$, are possible in two situations: (i) if the minimal value of $C_N$ is zero (the restricted model of [1]), and (ii) if the empirical error is replaced by $(1+\alpha)C_N$ for some $\alpha > 0$. The latter case is especially important in a model selection setup, where nested classes of hypothesis functions are considered, since in this case one expects that, with high probability, $C_N$ becomes smaller as the classes become more complex. In this situation, case (ii) provides better overall bounds, often leading to the optimal minimax rates for nonparametric problems (see a discussion of these issues in Sec. 15.4 of [1]). We now establish a faster convergence rate to a slightly larger value than $E_S[C_N(Yf(X))]$. In situations where the latter quantity approaches zero, the overall convergence rate may be improved, as discussed above. We consider cost functions $C_N(\alpha)$ which obey the condition

$$C_N(\alpha) \le (1+\beta_N)\,I(\alpha < 0) + \eta_N \qquad (\beta_N > 0,\ \eta_N > 0) \qquad (5)$$

for some positive $\beta_N$ and $\eta_N$ (see [8] for details on legitimate cost functions).

Theorem 2 Let $D$ be a distribution over $X \times \{-1,+1\}$, and let $S$ be a sample of $m$ points chosen independently at random according to $D$. Let $d_p$ be the pseudo-dimension of the class $H$, and assume that $C_N(\alpha)$ obeys condition (5). Then for sufficiently large $m$, with probability at least $1-\delta$, every function $f \in \mathrm{co}(H)$ satisfies the following bound for every $0 < \alpha < 1/\beta_N$:

$$P_D[Yf(X) \le 0] \le \left(\frac{1+\alpha\beta_N}{1-\alpha\beta_N}\right) E_S[C_N(Yf(X))] + O\left(\frac{d_p N \log m + \log\frac{N}{\delta}}{m\,\alpha/(2+2\alpha)}\right).$$

Proof The proof combines two ideas. First, we use the method of [8] to transform the problem from $\mathrm{co}(H)$ to a discrete approximation of it. Then, we use recent results for relative uniform deviations of averages from their means [3]. Due to lack of space, we defer the complete proof to the full version of the paper.
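To make the trade-off behind Theorem 2 concrete, the small sketch below contrasts the two penalty rates as functions of $m$, with every constant ($B$, $L$, $N$, $d_p$, $\delta$, $\alpha$, $\beta_N$) set to 1 purely for illustration: the faster $\log m/m$ rate is bought at the price of the $(1+\alpha\beta_N)/(1-\alpha\beta_N)$ inflation of the empirical term.

```python
import numpy as np

# All problem-dependent constants are set to 1; only the m-dependence of
# the two penalty terms is compared.
for m in (10**3, 10**4, 10**5, 10**6):
    slow = np.sqrt(np.log(m) / m)   # Lemma 3 style penalty rate
    fast = np.log(m) / m            # Theorem 2 style penalty rate
    print(f"m = {m:>7d}:  sqrt(log m / m) = {slow:.2e},  log m / m = {fast:.2e}")
```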
4 Discussion

In this paper we have presented two main results pertaining to the theory of boosting. First, we have shown that, under reasonable conditions, an effective weak classifier exists for one-dimensional problems. We conjectured, and supported our claim by numerical simulations, that such a result holds for multi-dimensional problems as well. The non-trivial extension of the proof to multiple dimensions can be found in [7]. Second, using recent advances in the theory of uniform convergence and boosting, we have presented bounds on the generalization error which may, under certain conditions, be significantly better than standard bounds, being particularly useful in the context of model selection.

Acknowledgment We thank Shai Ben-David and Yoav Freund for helpful discussions.

References

[1] M. Anthony and P.L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[2] P. Bartlett and S. Ben-David. On the hardness of learning with neural networks. In Proceedings of the Fourth European Conference on Computational Learning Theory, 1999.
[3] P. Bartlett and G. Lugosi. An inequality for uniform deviations of sample averages from their means. Statistics and Probability Letters, 44:55-62, 1999.
[4] J. Friedman, T. Hastie and R. Tibshirani. Additive logistic regression: a statistical view of boosting. The Annals of Statistics, to appear, 2000.
[5] D.S. Johnson and F.P. Preparata. The densest hemisphere problem. Theoretical Computer Science, 6:93-107, 1978.
[6] S. Mallat and Z. Zhang. Matching pursuit with time-frequency dictionaries. IEEE Trans. Signal Processing, 41(12):3397-3415, December 1993.
[7] S. Mannor and R. Meir. On the existence of weak learners and applications to boosting. Submitted to Machine Learning.
[8] L. Mason, P. Bartlett and J. Baxter. Improved generalization through explicit optimization of margins. Machine Learning, 2000. To appear.
[9] L. Mason, P. Bartlett, J. Baxter and M. Frean. Functional gradient techniques for combining hypotheses. In A. Smola, P. Bartlett, B. Schölkopf and D. Schuurmans, editors, Advances in Large Margin Classifiers. MIT Press, 2000.
[10] R.E. Schapire, Y. Freund, P. Bartlett and W.S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, 1998.
[11] V.N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer Verlag, New York, 1982.
Propagation Algorithms for Variational Bayesian Learning

Zoubin Ghahramani and Matthew J. Beal
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, England
{zoubin,m.beal}@gatsby.ucl.ac.uk

Abstract

Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. We provide some theoretical results for the variational updates in a very general family of conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. Applying these results to the Bayesian analysis of linear-Gaussian state-space models we obtain a learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set.

1 Introduction

Bayesian approaches to machine learning have several desirable properties. Bayesian integration does not suffer overfitting (since nothing is fit to the data). Prior knowledge can be incorporated naturally and all uncertainty is manipulated in a consistent manner. Moreover it is possible to learn model structures and readily compare between model classes. Unfortunately, for most models of interest a full Bayesian analysis is computationally intractable. Until recently, approximate approaches to the intractable Bayesian learning problem had relied either on Markov chain Monte Carlo (MCMC) sampling, the Laplace approximation (Gaussian integration), or asymptotic penalties like BIC. The recent introduction of variational methods for Bayesian learning has resulted in a series of papers showing that these methods can be used to rapidly learn the model structure and approximate the evidence in a wide variety of models.

In this paper we will not motivate the advantages of the variational Bayesian approach, as this is done in previous papers [1, 5]. Rather, we focus on deriving variational Bayesian (VB) learning in a very general form, relating it to EM, motivating parameter-hidden variable factorisations, and the use of conjugate priors (section 3). We then present several theoretical results relating VB learning to the belief propagation and junction tree algorithms for inference in belief networks and Markov networks (section 4). Finally, we show how these results can be applied to learning the dimensionality of the hidden state space of linear dynamical systems (section 5).

2 Variational Bayesian Learning

The basic idea of variational Bayesian learning is to simultaneously approximate the intractable joint distribution over both hidden states and parameters with a simpler distribution, usually by assuming the hidden states and parameters are independent; the log evidence is lower bounded by applying Jensen's inequality twice:

$$\ln P(\mathbf{y}|\mathcal{M}) \ge \int d\boldsymbol{\theta}\, Q_\theta(\boldsymbol{\theta})\left[\int d\mathbf{x}\, Q_x(\mathbf{x}) \ln \frac{P(\mathbf{x},\mathbf{y}|\boldsymbol{\theta},\mathcal{M})}{Q_x(\mathbf{x})} + \ln \frac{P(\boldsymbol{\theta}|\mathcal{M})}{Q_\theta(\boldsymbol{\theta})}\right] = \mathcal{F}(Q_\theta(\boldsymbol{\theta}), Q_x(\mathbf{x}), \mathbf{y}) \qquad (1)$$

where $\mathbf{y}$, $\mathbf{x}$, $\boldsymbol{\theta}$ and $\mathcal{M}$ are observed data, hidden variables, parameters and model class, respectively; $P(\boldsymbol{\theta}|\mathcal{M})$ is a parameter prior under model class $\mathcal{M}$. The lower bound $\mathcal{F}$ is iteratively maximised as a functional of the two free distributions, $Q_x(\mathbf{x})$ and $Q_\theta(\boldsymbol{\theta})$. From (1) we can see that this maximisation is equivalent to minimising the KL divergence between $Q_x(\mathbf{x})\,Q_\theta(\boldsymbol{\theta})$ and the joint posterior over hidden states and parameters $P(\mathbf{x}, \boldsymbol{\theta}|\mathbf{y}, \mathcal{M})$.
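For a model small enough to enumerate, the bound (1) can be verified directly. The sketch below uses a toy discrete model (binary $\boldsymbol{\theta}$, binary hidden $\mathbf{x}$, one observation $\mathbf{y}$) with made-up probability tables: it computes $\mathcal{F}$ for several factorised Q's and checks that each is dominated by $\ln P(\mathbf{y})$.

```python
import numpy as np

# Toy discrete model: theta in {0,1}, x in {0,1}, a single observed y.
# The probability tables are arbitrary illustrative numbers.
P_theta = np.array([0.3, 0.7])          # prior P(theta | M)
P_xy = np.array([[0.10, 0.25],          # P(x, y_obs | theta=0) for x = 0, 1
                 [0.05, 0.40]])         # P(x, y_obs | theta=1) for x = 0, 1

log_evidence = np.log((P_theta[:, None] * P_xy).sum())

def F(q_theta, q_x):
    """Lower bound (1) for a factorised Q(theta) Q(x), by enumeration."""
    joint_Q = q_theta[:, None] * q_x[None, :]
    return np.sum(joint_Q * (np.log(P_theta[:, None] * P_xy) - np.log(joint_Q)))

rng = np.random.default_rng(4)
for _ in range(5):
    q_t, q_x = rng.dirichlet([1.0, 1.0]), rng.dirichlet([1.0, 1.0])
    assert F(q_t, q_x) <= log_evidence + 1e-12   # Jensen's inequality
print(f"ln P(y) = {log_evidence:.4f}, "
      f"e.g. F = {F(np.array([0.5, 0.5]), np.array([0.5, 0.5])):.4f}")
```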
This approach was first proposed for one-hidden-layer neural networks [6] under the restriction that $Q_\theta(\boldsymbol{\theta})$ is Gaussian. It has since been extended to models with hidden variables, and the restrictions on $Q_\theta(\boldsymbol{\theta})$ and $Q_x(\mathbf{x})$ have been removed in certain models to allow arbitrary distributions [11, 8, 3, 1, 5]. Free-form optimisation with respect to the distributions $Q_\theta(\boldsymbol{\theta})$ and $Q_x(\mathbf{x})$ is done using calculus of variations, often resulting in algorithms that appear closely related to the corresponding EM algorithm. We formalise this relationship and others in the following sections.

3 Conjugate-Exponential Models

We consider variational Bayesian learning in models that satisfy two conditions:

Condition (1). The complete data likelihood is in the exponential family:

$$P(\mathbf{x}, \mathbf{y}|\boldsymbol{\theta}) = f(\mathbf{x},\mathbf{y})\, g(\boldsymbol{\theta}) \exp\{\boldsymbol{\phi}(\boldsymbol{\theta})^\top \mathbf{u}(\mathbf{x},\mathbf{y})\}$$

where $\boldsymbol{\phi}(\boldsymbol{\theta})$ is the vector of natural parameters, and $\mathbf{u}$, $f$ and $g$ are the functions that define the exponential family. The list of latent-variable models of practical interest with complete-data likelihoods in the exponential family is very long. We mention a few: Gaussian mixtures, factor analysis, hidden Markov models and extensions, switching state-space models, Boltzmann machines, and discrete-variable belief networks.¹ Of course, there are also many as yet undreamed-of models combining Gaussian, Gamma, Poisson, Dirichlet, Wishart, Multinomial, and other distributions.

Condition (2). The parameter prior is conjugate to the complete data likelihood:

$$P(\boldsymbol{\theta}|\eta, \boldsymbol{\nu}) = h(\eta, \boldsymbol{\nu})\, g(\boldsymbol{\theta})^\eta \exp\{\boldsymbol{\phi}(\boldsymbol{\theta})^\top \boldsymbol{\nu}\}$$

where $\eta$ and $\boldsymbol{\nu}$ are hyperparameters of the prior. Condition (2) in fact usually implies condition (1). Apart from some irregular cases, it has been shown that the exponential families are the only classes of distributions with a fixed number of sufficient statistics, hence allowing them to have natural conjugate priors. From the definition of conjugacy it is easy to see that the hyperparameters of a conjugate prior can be interpreted as the number ($\eta$) and values ($\boldsymbol{\nu}$) of pseudo-observations under the corresponding likelihood. We call models that satisfy conditions (1) and (2) conjugate-exponential.

¹ Models whose complete-data likelihood is not in the exponential family (such as ICA with the logistic nonlinearity, or sigmoid belief networks) can often be approximated by models in the exponential family with additional hidden variables.

In Bayesian inference we want to determine the posterior over parameters and hidden variables, $P(\mathbf{x}, \boldsymbol{\theta}|\mathbf{y}, \eta, \boldsymbol{\nu})$. In general this posterior is neither conjugate nor in the exponential family. We therefore approximate the true posterior by the following factorised distribution: $P(\mathbf{x}, \boldsymbol{\theta}|\mathbf{y}, \eta, \boldsymbol{\nu}) \approx Q(\mathbf{x}, \boldsymbol{\theta}) = Q_x(\mathbf{x})\,Q_\theta(\boldsymbol{\theta})$, and minimise

$$\mathrm{KL}(Q \,\|\, P) = \int d\mathbf{x}\, d\boldsymbol{\theta}\; Q(\mathbf{x}, \boldsymbol{\theta}) \ln \frac{Q(\mathbf{x}, \boldsymbol{\theta})}{P(\mathbf{x}, \boldsymbol{\theta}|\mathbf{y}, \eta, \boldsymbol{\nu})}$$

which is equivalent to maximising $\mathcal{F}(Q_x(\mathbf{x}), Q_\theta(\boldsymbol{\theta}), \mathbf{y})$. We provide several general results with no proof (the proofs follow from the definitions and Gibbs' inequality).

Theorem 1 Given an iid data set $\mathbf{y} = (\mathbf{y}_1, \ldots, \mathbf{y}_n)$, if the model satisfies conditions (1) and (2), then at the maxima of $\mathcal{F}(Q, \mathbf{y})$ (minima of $\mathrm{KL}(Q\|P)$):

(a) $Q_\theta(\boldsymbol{\theta})$ is conjugate and of the form

$$Q_\theta(\boldsymbol{\theta}) = h(\tilde{\eta}, \tilde{\boldsymbol{\nu}})\, g(\boldsymbol{\theta})^{\tilde{\eta}} \exp\{\boldsymbol{\phi}(\boldsymbol{\theta})^\top \tilde{\boldsymbol{\nu}}\}$$

where $\tilde{\eta} = \eta + n$, $\tilde{\boldsymbol{\nu}} = \boldsymbol{\nu} + \sum_{i=1}^n \bar{\mathbf{u}}(\mathbf{y}_i)$, and $\bar{\mathbf{u}}(\mathbf{y}_i) = \langle \mathbf{u}(\mathbf{x}_i, \mathbf{y}_i)\rangle_Q$, using $\langle\cdot\rangle_Q$ to denote expectation under $Q$.

(b) $Q_x(\mathbf{x}) = \prod_{i=1}^n Q_{x_i}(\mathbf{x}_i)$ and $Q_{x_i}(\mathbf{x}_i)$ is of the same form as the known-parameter posterior:

$$Q_{x_i}(\mathbf{x}_i) \propto f(\mathbf{x}_i, \mathbf{y}_i) \exp\{\bar{\boldsymbol{\phi}}(\boldsymbol{\theta})^\top \mathbf{u}(\mathbf{x}_i, \mathbf{y}_i)\} = P(\mathbf{x}_i|\mathbf{y}_i, \bar{\boldsymbol{\phi}}(\boldsymbol{\theta}))$$

where $\bar{\boldsymbol{\phi}}(\boldsymbol{\theta}) = \langle \boldsymbol{\phi}(\boldsymbol{\theta})\rangle_Q$.

Since $Q_\theta(\boldsymbol{\theta})$ and $Q_{x_i}(\mathbf{x}_i)$ are coupled, (a) and (b) do not provide an analytic solution to the minimisation problem.
We therefore solve the optimisation problem numerically by iterating between the fixed point equations given by (a) and (b), and we obtain the following variational Bayesian generalisation of the EM algorithm:

VE Step: Compute the expected sufficient statistics $\bar{\mathbf{u}}(\mathbf{y}) = \sum_i \bar{\mathbf{u}}(\mathbf{y}_i)$ under the hidden variable distributions $Q_{x_i}(\mathbf{x}_i)$.

VM Step: Compute the expected natural parameters $\bar{\boldsymbol{\phi}}(\boldsymbol{\theta})$ under the parameter distribution given by $\tilde{\eta}$ and $\tilde{\boldsymbol{\nu}}$.

This reduces to the EM algorithm if we restrict the parameter density to a point estimate (i.e. a Dirac delta function), $Q_\theta(\boldsymbol{\theta}) = \delta(\boldsymbol{\theta} - \boldsymbol{\theta}^*)$, in which case the M step involves re-estimating $\boldsymbol{\theta}^*$. Note that unless we make the assumption that the parameters and hidden variables factorise, we will not generally obtain the further hidden variable factorisation over $n$ in (b). In that case, the distributions of $\mathbf{x}_i$ and $\mathbf{x}_j$ will be coupled for all cases $i, j$ in the data set, greatly increasing the overall computational complexity of inference.
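The VE/VM iteration is easiest to see on a small conjugate-exponential example. The sketch below (our construction) runs it for a two-component mixture of known unit-variance Gaussians whose only unknown parameter is the mixing weight $\pi$, with a conjugate Beta prior: the VM step updates the Beta posterior from the expected counts, and the VE step uses the expected natural parameters $\langle\ln\pi_k\rangle$ (via the digamma function), as Theorem 1(b) prescribes.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(5)

# Toy conjugate-exponential model: mixture of two *known* unit-variance
# Gaussians; the only parameter is the mixing weight pi ~ Beta(a0, a1).
# Hidden variables x_i are the component assignments. Numbers are ours.
mu = np.array([-2.0, 2.0])
y = np.concatenate([rng.normal(mu[0], 1.0, 300), rng.normal(mu[1], 1.0, 700)])
a_prior = np.array([1.0, 1.0])

log_lik = -0.5 * (y[:, None] - mu[None, :]) ** 2   # ln N(y_i|mu_k,1) + const
q = np.full((len(y), 2), 0.5)                      # Q_{x_i}, initialised flat
for _ in range(50):
    # VM step: conjugate (Beta) posterior from expected sufficient stats,
    # then expected natural parameters <ln pi_k>.
    a_post = a_prior + q.sum(axis=0)
    E_ln_pi = digamma(a_post) - digamma(a_post.sum())
    # VE step: Q_{x_i} takes the known-parameter posterior form with the
    # natural parameters replaced by their expectations (Theorem 1b).
    logits = log_lik + E_ln_pi[None, :]
    q = np.exp(logits - logits.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)

print("Beta posterior:", a_post, "->  E[pi] =", a_post / a_post.sum())
```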
For conjugate-exponential models in which belief propagation and the junction tree algorithm over hidden variables is intractable further applications of Jensen's inequality can yield tractable factorisations in the usual way [7]. In the following section we derive a variational Bayesian treatment of linearGaussian state-space models. This serves two purposes. First, it will illustrate an application of Theorem 1. Second, linear-Gaussian state-space models are the cornerstone of stochastic filtering, prediction and control. A variational Bayesian treatment of these models provides a novel way to learn their structure, i.e. to identify the optimal dimensionality of their state-space. 5 State-space models In state-space models (SSMs) , a sequence of D-dimensional real-valued observation vectors {Yl,'" ,YT}, denoted Yl :T, is modeled by assuming that at each time step t, Yt was generated from a K-dimensional real-valued hidden state variable Xt, and that the sequence of x's define a first-order Markov process. The joint probability of a sequence of states and observations is therefore given by (Figure 1): T P(Xl:T' Yl:T) = P(Xl)P(Yllxl) II P(Xt IXt-l)P(Yt IXt). t=2 We focus on the case where both the transition and output functions are linear and time-invariant and the distribution of the state and observation noise variables is Gaussian. This model is the linear-Gaussian state-space model: Yt = CXt +Vt ~~???1T ? @ ? ~ Figure 1: Belief network representation of a state-space model. where A and C are the state transition and emission matrices and Wt and Vt are state and output noise. It is straightforward to generalise this to a linear system driven by some observed inputs, Ut. A Bayesian analysis of state-space models using MCMC methods can be found in [4]. The complete data likelihood for state-space models is Gaussian, which falls within the class of exponential family distributions. In order to derive a variational Bayesian algorithm by applying the results in the previous section we now turn to defining conjugate priors over the parameters. Priors. Without loss of generality we can assume that Wt has covariance equal to the unit matrix. The remaining parameters of a linear-Gaussian state-space model are the matrices A and C and the covariance matrix of the output noise, Vt , which we will call R and assume to be diagonal, R = diag(p)-l, where Pi are the precisions (inverse variances) associated with each output. Each row vector of the A matrix, denoted a"[, is given a zero mean Gaussian prior with inverse covariance matrix equal to diag( a). Each row vector of C, c"[, is given a zero-mean Gaussian prior with precision matrix equal to diag(pi,8). The dependence of the precision of c"[ on the noise output precision Pi is motivated by conjugacy. Intuitively, this prior links the scale of the signal and noise. The prior over the output noise covariance matrix, R, is defined through the precision vector, p, which for conjugacy is assumed to be Gamma distributed 3 with p~-l exp{ -bpi}. Here, a, ,8 are hyperparameters a and b: P(p la, b) = I1~1 hyperparameters that we can optimise to do automatic relevance determination (ARD) of hidden states, thus inferring the structure of the SSM. A:) Variational Bayesian learning for SSMs Since A, C, p and Xl :T are all unknown, given a sequence of observations Yl:T, an exact Bayesian treatment of SSMs would require computing marginals of the posterior P(A, C, p, xl:TIY1:T). 
The complete data likelihood for state-space models is Gaussian, which falls within the class of exponential family distributions. In order to derive a variational Bayesian algorithm by applying the results in the previous section, we now turn to defining conjugate priors over the parameters.

Priors. Without loss of generality we can assume that $\mathbf{w}_t$ has covariance equal to the unit matrix. The remaining parameters of a linear-Gaussian state-space model are the matrices $A$ and $C$ and the covariance matrix of the output noise $\mathbf{v}_t$, which we will call $R$ and assume to be diagonal, $R = \mathrm{diag}(\boldsymbol{\rho})^{-1}$, where the $\rho_i$ are the precisions (inverse variances) associated with each output. Each row vector of the $A$ matrix, denoted $\mathbf{a}_i^\top$, is given a zero-mean Gaussian prior with inverse covariance matrix equal to $\mathrm{diag}(\boldsymbol{\alpha})$. Each row vector of $C$, $\mathbf{c}_i^\top$, is given a zero-mean Gaussian prior with precision matrix equal to $\mathrm{diag}(\rho_i\boldsymbol{\beta})$. The dependence of the precision of $\mathbf{c}_i^\top$ on the noise output precision $\rho_i$ is motivated by conjugacy. Intuitively, this prior links the scale of the signal and noise. The prior over the output noise covariance matrix, $R$, is defined through the precision vector, $\boldsymbol{\rho}$, which for conjugacy is assumed to be Gamma distributed³ with hyperparameters $a$ and $b$:

$$P(\boldsymbol{\rho}|a,b) = \prod_{i=1}^{D} \frac{b^a}{\Gamma(a)}\,\rho_i^{a-1} \exp\{-b\rho_i\}.$$

Here $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$ are hyperparameters that we can optimise to do automatic relevance determination (ARD) of hidden states, thus inferring the structure of the SSM.

Variational Bayesian learning for SSMs. Since $A$, $C$, $\boldsymbol{\rho}$ and $\mathbf{x}_{1:T}$ are all unknown, given a sequence of observations $\mathbf{y}_{1:T}$, an exact Bayesian treatment of SSMs would require computing marginals of the posterior $P(A, C, \boldsymbol{\rho}, \mathbf{x}_{1:T}|\mathbf{y}_{1:T})$. This posterior contains interaction terms up to fifth order (for example, between elements of $C$, $\mathbf{x}$ and $\boldsymbol{\rho}$), and is not analytically manageable. However, since the model is conjugate-exponential we can apply Theorem 1 to derive a variational EM algorithm for state-space models analogous to the maximum-likelihood EM algorithm [10]. Moreover, since SSMs are singly connected belief networks, Corollary 1 tells us that we can make use of belief propagation, which in the case of SSMs is known as the Kalman smoother.

Writing out the expression for $\ln P(A, C, \boldsymbol{\rho}, \mathbf{x}_{1:T}, \mathbf{y}_{1:T})$, one sees that it contains interaction terms between $\boldsymbol{\rho}$ and $C$, but none between $A$ and either $\boldsymbol{\rho}$ or $C$. This observation implies a further factorisation, $Q(A, C, \boldsymbol{\rho}) = Q(A)\,Q(C, \boldsymbol{\rho})$, which falls out of the initial factorisation and the conditional independencies of the model. Starting from some arbitrary distribution over the hidden variables, the VM step obtained by applying Theorem 1 computes the expected natural parameters of $Q_\theta(\boldsymbol{\theta})$, where $\boldsymbol{\theta} = (A, C, \boldsymbol{\rho})$.

³ More generally, if we let $R$ be a full covariance matrix, for conjugacy we would give its inverse $V = R^{-1}$ a Wishart distribution: $P(V|\nu, S) \propto |V|^{(\nu - D - 1)/2}\exp\{-\frac{1}{2}\mathrm{tr}\,VS^{-1}\}$, where $\mathrm{tr}$ is the matrix trace operator.

We proceed to solve for $Q(A)$. We know from Theorem 1 that $Q(A)$ is multivariate Gaussian, like the prior, so we only need to compute its mean and covariance. $A$ has mean $S^\top(\mathrm{diag}(\boldsymbol{\alpha}) + W)^{-1}$ and each row of $A$ has covariance $(\mathrm{diag}(\boldsymbol{\alpha}) + W)^{-1}$, where $S = \sum_{t=2}^{T}\langle \mathbf{x}_{t-1}\mathbf{x}_t^\top\rangle$, $W = \sum_{t=1}^{T-1}\langle \mathbf{x}_t\mathbf{x}_t^\top\rangle$, and $\langle\cdot\rangle$ denotes averaging w.r.t. the $Q(\mathbf{x}_{1:T})$ distribution. $Q(C, \boldsymbol{\rho})$ is also of the same form as the prior. $Q(\boldsymbol{\rho})$ is a product of Gamma densities $Q(\rho_i) = \mathcal{G}(\rho_i; \tilde{a}, \tilde{b}_i)$ where $\tilde{a} = a + \frac{T}{2}$, $\tilde{b}_i = b + \frac{1}{2}g_i$, $g_i = \sum_{t=1}^T y_{ti}^2 - U_i(\mathrm{diag}(\boldsymbol{\beta}) + W')^{-1}U_i^\top$, $U_i = \sum_{t=1}^T y_{ti}\langle\mathbf{x}_t\rangle^\top$ and $W' = W + \langle\mathbf{x}_T\mathbf{x}_T^\top\rangle$. Given $\boldsymbol{\rho}$, each row of $C$ is Gaussian with covariance $\mathrm{cov}(\mathbf{c}_i) = (\mathrm{diag}(\boldsymbol{\beta}) + W')^{-1}/\rho_i$ and mean $\bar{\mathbf{c}}_i = \rho_i U_i\,\mathrm{cov}(\mathbf{c}_i)$. Note that $S$, $W$ and $U_i$ are the expected complete-data sufficient statistics $\bar{\mathbf{u}}$ mentioned in Theorem 1(a). Using the parameter distributions, the hyperparameters can also be optimised.⁴

We now turn to the VE step: computing $Q(\mathbf{x}_{1:T})$. Since the model is a conjugate-exponential singly-connected belief network, we can use belief propagation (Corollary 1). For SSMs this corresponds to the Kalman smoothing algorithm, where every appearance of the natural parameters of the model is replaced with the following corresponding expectations under the $Q$ distribution: $\langle\rho_i\mathbf{c}_i\rangle$, $\langle\rho_i\mathbf{c}_i\mathbf{c}_i^\top\rangle$, $\langle A\rangle$, $\langle A^\top A\rangle$. Details can be found in [2].

Like for PCA [3], independent components analysis [1], and mixtures of factor analysers [5], the variational Bayesian algorithm for state-space models can be used to learn the structure of the model as well as average over parameters. Specifically, using $\mathcal{F}$ it is possible to compare models with different state-space sizes and optimise the dimensionality of the state-space, as we demonstrate in the following section.
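The $Q(A)$ update above is a ridge-regression-like computation once the smoothed statistics are available. In the self-contained sketch below, the Kalman-smoother expectations $\langle\mathbf{x}_{t-1}\mathbf{x}_t^\top\rangle$ and $\langle\mathbf{x}_t\mathbf{x}_t^\top\rangle$ are replaced by outer products of the true states (our simplification, to avoid implementing the smoother), and the ARD precisions $\boldsymbol{\alpha}$ are fixed rather than optimised.

```python
import numpy as np

rng = np.random.default_rng(7)

# Generate states from a known diagonal A so the recovered posterior mean
# is easy to eyeball.
K, T = 3, 500
A_true = np.diag([0.9, 0.7, 0.5])
x = np.zeros((T, K))
x[0] = rng.normal(size=K)
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(size=K)

alpha = np.full(K, 1e-2)   # fixed ARD precisions (illustrative, not optimised)

# Sufficient statistics, with smoother expectations replaced by the true
# states: S = sum_{t=2}^T <x_{t-1} x_t^T>,  W = sum_{t=1}^{T-1} <x_t x_t^T>.
S = sum(np.outer(x[t - 1], x[t]) for t in range(1, T))
W = sum(np.outer(x[t], x[t]) for t in range(T - 1))

# Q(A): mean S^T (diag(alpha) + W)^{-1}; each row of A has covariance
# (diag(alpha) + W)^{-1}.
row_cov = np.linalg.inv(np.diag(alpha) + W)
A_mean = S.T @ row_cov
print("posterior mean of A:\n", np.round(A_mean, 2))
```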
6 Results

Experiment 1: The goal of this experiment was to see if the variational method could infer the structure of a variety of state-space models by optimising over $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. We generated a 200-step time series of 10-dimensional data from three models:⁵ (a) a factor analyser (i.e. an SSM with $A = 0$) with 3 factors (static state variables); (b) an SSM with 3 dynamical interacting state variables, i.e. $A \ne 0$; (c) an SSM with 3 interacting dynamical and 1 static state variables. The variational Bayesian method correctly inferred the structure of each model in 2-3 minutes of CPU time on a 500 MHz Pentium III (Fig. 2(a)-(c)).

Experiment 2: We explored the effect of data set size on the complexity of the recovered structure. 10-dimensional time series were generated from a 6-state-variable SSM. On reducing the length of the time series from 400 to 10 steps, the recovered structure became progressively less complex (Fig. 2(d)-(j)), down to a 1-variable static model (j). This result agrees with the Bayesian perspective that the complexity of the model should reflect the data support.

Experiment 3 (Steel plant): 38 sensors (temperatures, pressures, etc.) were sampled at 2 Hz from a continuous casting process for 150 seconds. These sensors covaried and were temporally correlated, suggesting a state-space model could capture some of its structure. The variational algorithm inferred that 16 state variables were required, of which 14 emitted outputs. While we do not know whether this is reasonable structure, we plan to explore this as well as other real data sets.

⁴ The ARD hyperparameters become $\alpha_k = K/\langle A^\top A\rangle_{kk}$ and $\beta_k = D/\langle C^\top \mathrm{diag}(\boldsymbol{\rho})\,C\rangle_{kk}$. The hyperparameters $a$ and $b$ solve the fixed point equations $\psi(a) = \ln b + \frac{1}{D}\sum_{i=1}^D \langle\ln\rho_i\rangle$ and $\frac{1}{b} = \frac{1}{aD}\sum_{i=1}^D \langle\rho_i\rangle$, where $\psi(w) = \frac{\partial}{\partial w}\ln\Gamma(w)$ is the digamma function.

⁵ Parameters were chosen as follows: $R = I$, elements of $C$ sampled from $\mathrm{Unif}(-5,5)$, and $A$ chosen with eigenvalues in $[0.5, 0.9]$.

Figure 2: The elements of the $A$ and $C$ matrices after learning are displayed graphically. A link is drawn from node $k$ in $\mathbf{x}_{t-1}$ to node $l$ in $\mathbf{x}_t$ iff $1/\alpha_k > \epsilon$, for a small threshold $\epsilon$; similarly, links are drawn from node $k$ of $\mathbf{x}_t$ to $\mathbf{y}_t$ if $1/\beta_k > \epsilon$. Therefore the graph shows the links that take part in the dynamics and the output.

7 Conclusions

We have derived a general variational Bayesian learning algorithm for models in the conjugate-exponential family. There are a large number of interesting models that fall in this family, and the results in this paper should allow an almost automated protocol for implementing a variational Bayesian treatment of these models. We have given one example of such an implementation, state-space models, and shown that the VB algorithm can be used to rapidly infer the hidden state dimensionality. Using the theory laid out in this paper it is straightforward to generalise the algorithm to mixtures of SSMs, switching SSMs, etc. For conjugate-exponential models, integrating both belief propagation and the junction tree algorithm into the variational Bayesian framework simply amounts to computing expectations of the natural parameters. Moreover, the variational Bayesian algorithm contains EM as a special case. We believe this paper provides the foundations for a general algorithm for variational Bayesian learning in graphical models.

References

[1] H. Attias. A variational Bayesian framework for graphical models. In Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000.
[2] M.J. Beal and Z. Ghahramani. The variational Kalman smoother. Technical report, Gatsby Computational Neuroscience Unit, University College London, 2000.
[3] C.M. Bishop. Variational PCA. In Proc. Ninth ICANN, 1999.
[4] S. Frühwirth-Schnatter. Bayesian model discrimination and Bayes factors for linear Gaussian state space models. J. Royal Stat. Soc. B, 57:237-246, 1995.
[5] Z. Ghahramani and M.J. Beal.
Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000.
[6] G.E. Hinton and D. van Camp. Keeping neural networks simple by minimizing the description length of the weights. In Sixth ACM Conference on Computational Learning Theory, Santa Cruz, 1993.
[7] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, and L.K. Saul. An introduction to variational methods in graphical models. Machine Learning, 37:183-233, 1999.
[8] D.J.C. MacKay. Ensemble learning for hidden Markov models. Technical report, Cavendish Laboratory, University of Cambridge, 1997.
[9] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988.
[10] R.H. Shumway and D.S. Stoffer. An approach to time series smoothing and forecasting using the EM algorithm. Journal of Time Series Analysis, 3(4):253-264, 1982.
[11] S. Waterhouse, D.J.C. MacKay, and T. Robinson. Bayesian methods for mixtures of experts. In Advances in Neural Information Processing Systems 7. MIT Press, 1995.
Speech Denoising and Dereverberation Using Probabilistic Models

Hagai Attias, John C. Platt, Alex Acero, Li Deng
Microsoft Research, 1 Microsoft Way, Redmond, WA 98052
{hagaia,jplatt,alexac,deng}@microsoft.com

Abstract

This paper presents a unified probabilistic framework for denoising and dereverberation of speech signals. The framework transforms the denoising and dereverberation problems into Bayes-optimal signal estimation. The key idea is to use a strong speech model that is pre-trained on a large data set of clean speech. Computational efficiency is achieved by using variational EM, working in the frequency domain, and employing conjugate priors. The framework covers both single and multiple microphones. We apply this approach to noisy reverberant speech signals and obtain results substantially better than standard methods.

1 Introduction

This paper presents a statistical-model-based algorithm for reconstructing a speech source from microphone signals recorded in a stationary noisy reverberant environment. Speech enhancement in a realistic environment is a challenging problem, which remains largely unsolved in spite of more than three decades of research. Speech enhancement has many applications and is particularly useful for robust speech recognition [7] and for telecommunication. The difficulty of speech enhancement depends strongly on environmental conditions. If a speaker is close to a microphone, reverberation effects are minimal and traditional methods can handle typical moderate noise levels. However, if the speaker is far away from a microphone, there are more severe distortions, including large amounts of noise and noticeable reverberation. Denoising and dereverberation of speech in this condition has proven to be a very difficult problem [4]. Current speech enhancement methods can be placed into two categories: single-microphone methods and multiple-microphone methods. A large body of literature exists on single-microphone speech enhancement methods. These methods often use a probabilistic framework with statistical models of a single speech signal corrupted by Gaussian noise [6, 8]. These models have not been extended to dereverberation or multiple microphones. Multiple-microphone methods start with microphone array processing, where an array of microphones with a known geometry is deployed to make both spatial and temporal measurements of sounds. A microphone array offers significant advantages compared to single-microphone methods. Non-adaptive algorithms can denoise a signal reasonably well, as long as it originates from a limited range of azimuths. These algorithms do not handle reverberation, however. Adaptive algorithms can handle reverberation to some extent [4], but existing methods are not derived from a principled probabilistic framework and hence may be sub-optimal. Work on blind source separation has attempted to remove the need for fixed array geometries and pre-specified room models. Blind separation attempts the full multi-source, multi-microphone case. In practice, the most successful algorithms concentrate on instantaneous noise-free mixing with the same number of sources as sensors and with very weak probabilistic models for the sources [5]. Some algorithms for noisy non-square instantaneous mixing have been developed [1], as well as algorithms for convolutive square noise-free mixing [9]. However, the full problem including noise and convolution has so far remained open. In this paper, we present a new method for speech denoising and dereverberation.
We use the framework of probabilistic models, which allows us to integrate the different aspects of the whole problem, including strong speech models, environmental noise and reverberation, and microphone arrays. This integration is performed in a principled manner facilitating a coherent unified treatment. The framework allows us to produce a Bayes-optimal estimation algorithm. Using a strong speech model leads to computational intractability, which we overcome using a variational approach. The computational efficiency is further enhanced by working in the frequency domain and by employing conjugate priors. The resulting algorithm has complexity O(N log N). Results on noisy speech show significant improvement over standard methods. Due to space limitations, the full derivation and mathematical details for this method are provided in the technical report [3].

Notation and conventions. We work with time series data using a frame-by-frame analysis with N-point frames. Thus, all signals and systems, e.g. y_n^i, have a time point subscript extending over n = 0, ..., N-1. With the superscript i omitted, y_n denotes all microphone signals. When n is also omitted, y denotes all signals at all time points. Superscripts may become subscripts and vice versa when no confusion arises. The discrete Fourier transform (DFT) of x_n is X_k = Σ_n exp(-iω_k n) x_n. We define the primed quantity

ā'_k = 1 - Σ_{n=1}^p exp(-iω_k n) a_n   (1)

for variables a_n with n = 1, ..., p. The Gaussian distribution for a random vector a with mean μ and precision matrix V (defined as the inverse covariance matrix) is denoted N(a | μ, V). The Gamma distribution for a non-negative random variable ν with α degrees of freedom and inverse scale β is denoted G(ν | α, β) ∝ ν^{α/2-1} exp(-βν/2). Their product, the Normal-Gamma distribution

NG(a, ν | μ, V, α, β) = N(a | μ, νV) G(ν | α, β),   (2)

turns out to be particularly useful. Notice that it relates the precision of a to ν.

Problem Formulation. We consider the case where a single speech source is present and M microphones are available. The treatment of the single-microphone case is the special case M = 1, but it is not qualitatively different. Let x_n be the signal emitted by the source at time n, and let y_n^i be the signal received at microphone i at the same time. Then

y_n^i = h_n^i * x_n + u_n^i = Σ_m h_m^i x_{n-m} + u_n^i,   (3)

where h_m^i is the impulse response of the filter (of length K_i ≤ N) operating on the source as it propagates toward microphone i, * is the convolution operator, and u_n^i denotes the noise recorded at that microphone. Noise may originate both from microphone responses and from environmental sources. In a given environment, the task is to provide an optimal estimate of the clean speech signal x from the noisy microphone signals y^i. This requires the estimation of the convolving filters h^i and of the characteristics of the noise u^i. This estimation is accomplished by Bayesian inference on probabilistic models for x and u^i.

2 Probabilistic Signal Models

We now turn to our model for the speech source. Much of the work on speech denoising in the past has employed very simple source models: AR or ARMA descriptions [6]. One exception is [8], which uses an HMM whose observations are Gaussian AR models. These simple denoising models incorporate very little information about the structure of speech. Such an approach a priori allows any value for the model coefficients, including values that are unlikely to occur in a speech signal.
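To make the observation model (3) concrete, the sketch below synthesizes microphone signals from a source frame by convolving it with per-microphone impulse responses and adding noise. The white Gaussian stand-in for u_n^i and all names are our assumptions for illustration; the paper models u^i as AR noise, introduced later in this section.

import numpy as np

def microphone_signals(x, h_list, noise_std=0.01, rng=None):
    # Eq. (3): y^i_n = (h^i * x)_n + u^i_n for each microphone i,
    # with h^i of length K_i <= N and output truncated to the frame length N.
    if rng is None:
        rng = np.random.default_rng(0)
    ys = []
    for h in h_list:
        clean = np.convolve(x, h)[: len(x)]
        ys.append(clean + noise_std * rng.standard_normal(len(x)))
    return ys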
Without a strong prior, it is difficult to estimate the convolving filters accurately due to identifiability. A source prior is especially important in the single-microphone case, which estimates N clean samples plus model coefficients from N noisy samples. Thus, the absence of a strong speech model degrades reconstruction quality. The most detailed statistical speech models available are those employed by state-of-the-art speech recognition engines. These systems are generally based on mixtures of diagonal Gaussian models in the mel-cepstral domain. These models are endowed with temporal Markov dynamics and have a very large (≈ 100000) number of states corresponding to individual atoms of speech. However, in the mel-cepstral domain, the noisy reverberant speech has a strongly non-linear relationship to the clean speech.

Physical speech production model. In this paper, we work in the linear time/frequency domain using a statistical model and take an intermediate approach regarding the model size. We model speech production with an AR(p) model:

x_n = Σ_{m=1}^p a_m x_{n-m} + v_n,   (4)

where the coefficients a_m are related to the physical shape of a "lossless tube" model of the vocal tract. To turn this physical model into a probabilistic model, we assume that the v_n are independent zero-mean Gaussian variables with scalar precision ν. Each speech frame x = (x_0, ..., x_{N-1}) has its own parameters θ = (a_1, ..., a_p, ν). Given θ, the joint distribution of x is generally a zero-mean Gaussian, p(x | θ) = N(x | 0, A), where A is the N × N precision matrix. Specifically, the joint distribution is given by the product

p(x | θ) = Π_n N(x_n | Σ_m a_m x_{n-m}, ν).   (5)

Probabilistic model in the frequency domain. However, rather than employing this product form directly, we work in the frequency domain and use the DFT to write

p(x | θ) ∝ exp( -(ν/2N) Σ_{k=0}^{N-1} |ā'_k|² |X_k|² ),   (6)

where ā'_k is defined in (1). The precision matrix A is now given by an inverse DFT, A_nm = (ν/N) Σ_k e^{iω_k(n-m)} |ā'_k|². This matrix belongs to a sub-class of Toeplitz matrices called circulant Toeplitz. It follows from (6) that the mean power spectrum of x is related to θ via S_k = ⟨|X_k|²⟩ = N/(ν |ā'_k|²).

Conjugate priors. To complete our speech model, we must specify a distribution over the speech production parameters θ. We use an S-state mixture model with a Normal-Gamma distribution (2) for each component s = 1, ..., S: p(θ | s) = N(a_1, ..., a_p | μ_s, νV_s) G(ν | α_s, β_s). This form is chosen by invoking the idea of a conjugate prior, which is defined as follows. Given the model p(x | θ) p(θ | s), the prior p(θ | s) is conjugate to p(x | θ) iff the posterior p(θ | x, s), computed by Bayes' rule, has the same functional form as the prior. This choice has the advantage of being quite general while keeping the clean speech model analytically tractable.

It turns out, as discussed below, that significant computational savings result if we restrict the p × p precision matrices V_s to have a circulant Toeplitz structure. To do this without having to impose an explicit constraint, we reparametrize p(θ | s) in terms of ξ_k^s, η_k^s instead of μ_n^s, V_nm^s, and work in the frequency domain:

p(θ | s) ∝ exp( -(ν/2p) Σ_{k=0}^{p-1} |ξ_k^s a_k - η_k^s|² ) · ν^{(α_s+p)/2-1} exp(-β_s ν/2),   (7)

where a_k denotes the p-point DFT of the coefficients a_n. Note that we use a p- rather than an N-point DFT. The precisions are now given by the inverse DFT V_nm^s = (1/p) Σ_k e^{iω_k(n-m)} |ξ_k^s|², and are manifestly circulant. It is easy to show that conjugacy still holds. Finally, the mixing fractions are given by p(s) = π_s.
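The stated correspondence between θ and the spectrum is easy to check numerically: compute ā'_k of Eq. (1) with an N-point FFT of (1, -a_1, ..., -a_p) and form S_k = N/(ν|ā'_k|²) from Eq. (6). The sketch below is our illustration of these two relations, not code from the paper.

import numpy as np

def ar_power_spectrum(a, nu, N):
    # a'_k = 1 - sum_m a_m exp(-i w_k m), computed via a zero-padded N-point FFT.
    abar = np.fft.fft(np.concatenate(([1.0], -np.asarray(a, dtype=float))), n=N)
    # Mean power spectrum S_k = <|X_k|^2> = N / (nu |a'_k|^2), as in Eq. (6).
    return N / (nu * np.abs(abar) ** 2)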
This completes the specification of our clean speech model p(x) in terms of the latent variable model p(x, θ, s) = p(x | θ) p(θ | s) p(s). The model is parametrized by W = (ξ_k^s, η_k^s, α_s, β_s, π_s).

Speech model training. We pre-train the speech model parameters W using 10000 sentences of the Wall Street Journal corpus, recorded with a close-talking microphone for 150 male and female speakers of North American English. We used 16 ms overlapping frames with N = 256 time points at a 16 kHz sampling rate. Training was performed using an EM algorithm derived specifically for this model [3]. We used S = 256 clusters and p = 12. W was initialized by extracting the AR(p) coefficients from each frame using the autocorrelation method. These coefficients were converted into cepstral coefficients and clustered into S classes by k-means clustering. We then considered the corresponding hard clusters of the AR(p) coefficients, and separately fit a model p(θ | s) (7) to each. The resulting parameters were used as initial values for the full EM algorithm.

Noise model. In this paper, we use an AR(q) description for the noise recorded by microphone i, u_n^i = Σ_m b_m^i u_{n-m}^i + w_n^i. The noise parameters are φ^i = (b_m^i, λ_i), where λ_i are the precisions of the zero-mean Gaussian excitations w_n^i. In the frequency domain we have the joint distribution

p(u^i | φ^i) ∝ exp( -(λ_i/2N) Σ_{k=0}^{N-1} |b̄'_{i,k}|² |U_k^i|² ).   (8)

As in (6), the parameters φ^i determine the spectra of the noise. But unlike the speech model, the AR(q) noise model is chosen for mathematical convenience rather than for its relation to an underlying physical model.

Noisy speech model. The form (8) now implies that, given the clean speech x, the distribution of the data y^i is

p(y^i | x) ∝ exp( -(λ_i/2N) Σ_{k=0}^{N-1} |b̄'_{i,k}|² |Y_k^i - h̄_k^i X_k|² ).   (9)

This completes the specification of our noisy speech model p(y) in terms of the joint distribution Π_i p(y^i | x) p(x | θ) p(θ | s) p(s).

3 Variational Speech Enhancement (VSE) Algorithm

The denoising and dereverberation task is accomplished by estimating the clean speech x, which requires estimating the speech parameters θ, the filter coefficients h^i, and the noise parameters φ^i. These tasks can be performed by the EM algorithm. This algorithm receives the data y^i from an utterance (a long sequence of frames) as input and proceeds iteratively. In the E-step, the algorithm computes the sufficient statistics of the clean speech x and the production parameters θ for each frame. In the M-step, the algorithm uses the sufficient statistics to update the values of h^i and φ^i, which are assumed unchanged throughout the utterance. This assumption limits the current VSE algorithm to stationary noise and reverberation. Source reconstruction is performed as a by-product of the E-step.

Intractability and variational EM. In the clean speech model p(x) above, inference (i.e., computing p(s, θ | x) for observed clean speech x) is tractable. However, in the noisy case, x is hidden and consequently inference becomes intractable. The posterior p(s, θ, x | y) includes a quartic term exp(x²θ²), originating from the product of two Gaussian variables, which causes the intractability. To overcome this problem, we employ a variational approach [10]. We replace the exact posterior distribution over the hidden variables by an approximate one, q(s, θ, x | y), and select the optimal q by maximizing

F[q] = Σ_s ∫ dx dθ q(s, θ, x | y) log [ p(s, θ, x, y) / q(s, θ, x | y) ]   (10)

w.r.t. q. To achieve tractability, we must restrict the space of possible q.
We use the partially factorized form

q = q(s) q(θ | s) q(x | s),   (11)

where the y-dependence of q is omitted. Given y, this distribution defines a mixture model for x and a mixture model for θ, while maintaining correlations between x and θ (i.e., q(x, θ) ≠ q(x) q(θ)). Maximizing F is equivalent to minimizing the KL distance between q and the exact conditional p(s, θ, x | y) under the restriction (11). With no further restriction, the functional form of q falls out of free-form optimization, as shown in [2]. For the production parameters, q(θ | s) turns out to have the form q(θ | s) = N(a_1, ..., a_p | μ̂_s, νV̂_s) G(ν | α̂_s, β̂_s). This form is functionally identical to that of the prior p(θ | s), consistent with the conjugate prior idea. The parameters of q are distinguished from the prior's by the hat symbol. In addition, the state responsibilities are q(s) = π̂_s. For the clean speech, we obtain Gaussians, q(x | s) = N(x | p_s, Λ_s), with state-dependent means and precisions.

E-step and Wiener filtering. To derive the E-step, we first ignore reverberation by setting h_n = δ_{n0} and assuming a single microphone signal y_n, thus omitting i. The extension to multiple microphones and reverberation is straightforward. The parameters of q are estimated at the E-step from the noisy speech in each frame, using an iterative algorithm. First, the parameters of q(θ | s) are updated via

V̂_s = R_s + V_s,   μ̂_s = V̂_s^{-1}(r_s + V_s μ_s),   (12)

where R_nm^s = (1/N) Σ_k e^{iω_k(n-m)} E_s(|X_k|²), r_n^s = R_{n0}^s, and E_s denotes averaging w.r.t. q(x | s), which is easily done analytically. The update rules for α̂_s, β̂_s, π̂_s are shown in [3].

Next, the parameters of q(x | s) are obtained by inverse DFT via

P_{s,k} = f_k^s Y_k,   (13)

where f_k^s = λ |b̄'_k|² / g_k^s, and g_k^s = λ |b̄'_k|² + E_s(ν |ā'_k|²). Here E_s denotes averaging w.r.t. q(θ | s). These steps are iterated to convergence, upon which the estimated speech signal for this frame is given by the weighted sum x̂ = Σ_s π̂_s p_s.

We point out that the correspondence between AR parameters and spectra implies the Wiener filter form f_k^s = S_k^s/(S_k^s + N_k), where S_k^s is the estimated clean speech spectrum associated with state s, and N_k is the noise spectrum, both at frequency ω_k. Hence, the updated p_s in (13) is obtained via a state-dependent Wiener filter, and the clean speech is estimated by a sum of Wiener filters weighted by the state responsibilities. The same Wiener structure holds in the presence of reverberation. Notice that, whereas the conventional Wiener filter is linear and obtained directly from the known speech spectrum, our filters depend nonlinearly on the data, since the unknown speech spectra and state responsibilities are estimated iteratively by the above algorithm.

M-step. After computing the sufficient statistics of θ, x for each frame, φ^i and h^i are updated using the whole utterance. The update rules are shown in [3]. Alternatively, the φ^i can be estimated directly by maximum likelihood if a non-speech portion of the input signal can be found.

Computational savings. The complexity of the updates for q(x | s) and q(θ | s) is N log N and S p log p, respectively. This is due to working in the frequency domain, using the FFT algorithm to perform the DFT, and using conjugate priors and circulant precisions. Working in the time domain and using priors with general precisions would result in the considerably higher complexity of N² and S p³, respectively.
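The state-dependent Wiener structure admits a compact numerical sketch: given per-state clean-speech spectra S_k^s, a noise spectrum N_k, and responsibilities π̂_s, the frame estimate is the responsibility-weighted sum of Wiener-filtered frames. In the actual algorithm these quantities are themselves estimated iteratively; treating them as given here is an assumption for illustration, and the function name is ours.

import numpy as np

def weighted_wiener_estimate(y, S_states, N_noise, resp):
    # y: noisy frame of length N; S_states: (S, N) clean spectra S^s_k;
    # N_noise: (N,) noise spectrum N_k; resp: (S,) responsibilities pi_s.
    # Implements x = sum_s pi_s IDFT(f^s_k Y_k), with f^s_k = S^s_k/(S^s_k + N_k).
    Y = np.fft.fft(y)
    f = S_states / (S_states + N_noise)          # per-state Wiener gains
    x_states = np.fft.ifft(f * Y, axis=-1).real  # one filtered frame per state
    return resp @ x_states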
4 Experiments

Denoising. We tested this algorithm on 150 speech sentences by male and female speakers from the Wall Street Journal (WSJ) database, which were not included in the training set. These sentences were distorted by adding either synthetic noise (white or pink) or noise recorded in an office environment with a PC and air conditioning. The distortions were applied at different SNRs. All of these noises were stationary. We then applied the algorithm to estimate the noise parameters and reconstruct the original speech signal. The result was compared with a sophisticated, subband-based implementation of the spectral subtraction (SS) technique.

Denoising & Dereverberation. We tested this algorithm on 100 WSJ sentences, which were distorted by convolving them with a 10-tap artificial filter and adding synthetic white Gaussian noise at different SNRs. We then applied the algorithm to estimate both the noise level and the filter. Here we used a simpler speech model with p(θ | s) = δ(θ - θ_s).

Speech Recognition. We also examined the potential contribution of this algorithm to robust speech recognition, by feeding the denoised signals as inputs to a recognition system. The system used a version of the Microsoft continuous-density HMM system (Whisper), with 6000 tied HMM states (senones), 20 Gaussians per state, and the speech represented via mel-cepstrum, delta cepstrum, and delta-delta cepstrum. A fixed bigram language model was used in all the experiments. The system had been trained on a total of 16,000 female clean speech sentences. The test set consisted of 167 female WSJ sentences, which were distorted by adding synthetic white non-Gaussian noise. The word error rate was 55.06% under the training-test mismatched condition of no preprocessing on the test set and decoding by HMMs trained with clean speech. This condition is the baseline for the relative performance improvements listed in the last row of Table 1.

For these experiments, we compared VSE to the SS algorithm described in [7]. Table 1 shows that the Variational Speech Enhancement (VSE) algorithm is superior to SS at removing stationary noise, whether measured via SNR improvement or via relative reduction in speech recognition error rate (compared to baseline).

                                          dB noise  reverb  SS synth.  SS real  VSE synth.  VSE real
                                          added     added   noise      noise    noise       noise
SNR improvement                           5         No      4.3        4.3      6.0         5.5
SNR improvement                           10        No      4.1        4.1      5.8         5.1
SNR improvement                           5         Yes     6.7        -        10.2        -
SNR improvement                           10        Yes     8.3        -        13.2        -
Speech recognition relative improvement   10        No      38.6%      -        65.1%       -

Table 1: Experimental results.

5 Conclusion

We have presented a probabilistic framework for denoising and dereverberation. The framework uses a strong speech model to perform Bayes-optimal signal estimation. The parameter estimation and the reconstruction of the signal are performed using a variational EM algorithm. Working in the frequency domain and using conjugate priors lead to great computational savings. The framework applies equally well to one-microphone and multiple-microphone cases. Experiments show that the optimal estimation can outperform standard methods such as spectral subtraction. Future directions include adding temporal dynamics to the speech model via an HMM structure, using a richer adaptive noise model (e.g. a mixture), and handling non-stationary noise and filters.

References

[1] H. Attias. Independent factor analysis. Neural Computation, 11(4):803-851, 1999.
[2] H. Attias. A variational Bayesian framework for graphical models. In T.
Leen, editor, Advances in Neural Information Processing Systems, volume 12, pages 209-215. MIT Press, 2000.
[3] H. Attias, J. C. Platt, A. Acero, and L. Deng. Speech denoising and dereverberation using probabilistic models: Mathematical details. Technical Report MSR-TR-2001-02, Microsoft Research, 2001. http://research.microsoft.com/~hagaia.
[4] M. S. Brandstein. On the use of explicit speech modeling in microphone array applications. In Proc. ICASSP, pages 3613-3616, 1998.
[5] J.-F. Cardoso. Infomax and maximum likelihood for source separation. IEEE Signal Processing Letters, 4(4):112-114, 1997.
[6] A. Dembo and O. Zeitouni. Maximum a posteriori estimation of time-varying ARMA processes from noisy observations. IEEE Trans. Acoustics, Speech, and Signal Processing, 36(4):471-476, 1988.
[7] L. Deng, A. Acero, M. Plumpe, and X. D. Huang. Large-vocabulary speech recognition under adverse acoustic environments. In Proceedings of the International Conference on Spoken Language Processing, volume 3, pages 806-809, 2000.
[8] Y. Ephraim. Statistical-model-based speech enhancement systems. Proc. IEEE, 80(10):1526-1555, 1992.
[9] J. C. Platt and F. Faggin. Networks for the separation of sources that are superimposed and delayed. In J. E. Moody, editor, Advances in Neural Information Processing Systems, volume 4, pages 730-737, 1992.
[10] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory of sigmoid belief networks. J. Artificial Intelligence Research, 4:61-76, 1996.
Color Opponency Constitutes a Sparse Representation for the Chromatic Structure of Natural Scenes

Te-Won Lee*, Thomas Wachtler and Terrence Sejnowski
Institute for Neural Computation, University of California, San Diego & Computational Neurobiology Laboratory, The Salk Institute
10010 N. Torrey Pines Road, La Jolla, California 92037, USA
{tewon,thomas,terry}@salk.edu
* Electronic version available at www.cnl.salk.edu/~tewon.

Abstract

The human visual system encodes the chromatic signals conveyed by the three types of retinal cone photoreceptors in an opponent fashion. This color opponency has been shown to constitute an efficient encoding by spectral decorrelation of the receptor signals. We analyze the spatial and chromatic structure of natural scenes by decomposing the spectral images into a set of linear basis functions such that they constitute a representation with minimal redundancy. Independent component analysis finds the basis functions that transform the spatiochromatic data such that the outputs (activations) are statistically as independent as possible, i.e. least redundant. The resulting basis functions show strong opponency along an achromatic direction (luminance edges), along a blue-yellow direction, and along a red-blue direction. Furthermore, the resulting activations have very sparse distributions, suggesting that the use of color opponency in the human visual system achieves a highly efficient representation of colors. Our findings suggest that color opponency is a result of the properties of natural spectra and not solely a consequence of the overlapping cone spectral sensitivities.

1 Statistical structure of natural scenes

Efficient encoding of visual sensory information is an important task for information processing systems, and its study may provide insights into the coding principles of biological visual systems. An important goal of sensory information processing is to transform the input signals such that the redundancy between the inputs is reduced. In natural scenes, the image intensity is highly predictable from neighboring measurements, and an efficient representation preserves the information while the neuronal output is minimized. Recently, several methods have been proposed for finding efficient codes for achromatic images of natural scenes [1, 2, 3, 4]. While luminance dominates the structure of the visual world, color vision provides important additional information about our environment. Therefore, we are interested in efficient, i.e. redundancy-reducing, representations for the chromatic structure of natural scenes.

2 Learning an efficient representation for chromatic images

Our goal was to find efficient representations of the chromatic sensory information such that its spatial and chromatic redundancy is reduced significantly. The method we used for finding statistically efficient representations is independent component analysis (ICA). ICA is a way of finding a linear non-orthogonal co-ordinate system in multivariate data that minimizes mutual information among the axial projections of the data. The directions of the axes of this co-ordinate system (basis functions) are determined by both second- and higher-order statistics of the original data, in contrast to Principal Component Analysis (PCA), which uses solely second-order statistics and has orthogonal basis functions. The goal of ICA is to perform a linear transform which makes the resulting source outputs as statistically independent from each other as possible [5].

ICA assumes an unknown source vector s with mutually independent components s_i. A small patch of the observed image is stretched into a vector x that can be represented as a linear combination of the source components s_i such that

x = As,   (1)

where A is a square matrix and the columns of A are the basis functions. Since A and s are unknown, the goal of ICA is to adapt the basis functions by estimating s so that the individual components s_i are statistically independent; this adaptation process minimizes the mutual information between the components s_i. A learning algorithm can be derived using the information maximization principle [5] or the maximum likelihood estimation (MLE) method, which can be shown to be equivalent in this case. In our experiments, we used the infomax learning rule with the natural gradient extension, and the learning algorithm for the basis functions is

ΔA ∝ -A (I - φ(s) sᵀ),   (2)

where I is the identity matrix, φ(s) = -(∂p(s)/∂s)/p(s), and sᵀ denotes the transpose of s. ΔA is the change of the basis functions that is added to A; ΔA converges to zero once the adaptation process is complete. Note that φ(s) requires a density model for p(s_i). We used a parametric exponential power density p(s_i) ∝ exp(-|s_i|^{q_i}) and simultaneously updated its shape by inferring the value q_i to match the distribution of the estimated sources [6]. This is accomplished by finding the maximum a posteriori value of q_i given the observed data. The ICA algorithm can thus characterize a wide class of statistical distributions, including uniform, Gaussian, Laplacian, and other so-called sub- and super-Gaussian densities. In other words, our experiments do not constrain the coefficients to have a sparse distribution, unlike some previous methods [1, 2].
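A minimal sketch of one online step of rule (2) under the exponential power prior p(s_i) ∝ exp(-|s_i|^{q_i}) is given below. The fixed learning rate, the small epsilon, and the sign convention are our assumptions; the paper additionally infers q_i from the data, which is omitted here.

import numpy as np

def ica_step(A, x, q, lr=1e-3):
    # One natural-gradient update of the basis matrix A for one data vector x.
    s = np.linalg.solve(A, x)  # source estimates, x = A s
    # phi(s) = -(dp(s)/ds)/p(s) = q |s|^(q-1) sign(s) for p(s) ~ exp(-|s|^q).
    phi = q * np.sign(s) * (np.abs(s) + 1e-12) ** (q - 1.0)
    dA = -A @ (np.eye(len(s)) - np.outer(phi, s))  # vanishes when E[phi(s) s^T] = I
    return A + lr * dA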
ICA assumes an unknown source vector s with mutually independent components Si. A small patch of the observed image is stretched into a vector x that can be represented as a linear combination of sources components Si such that x=As, (1) where A is a scalar square matrix and the columns of A are the basis functions. Since A and s are unknown the goal of ICA is to adapt the basis functions by estimating s so that the individual components Si are statistically independent and this adaptation process minimizes the mutual information between the components Si. A learning algorithm can be derived using the information maximization principle [5] or the maximum likelihood estimation (MLE) method which can be shown to be equivalent in this case. In our experiments, we used the infomax learning rule with natural gradient extension and the learning algorithm for the basis functions is (2) where I is the identity matrix, rp(s) = - 8p~(W3s and sT denotes the matrix transpose of s . .6.A is the change of the basis functions that is added to A. The change in .6.A will converge to zero once the adaptation process is complete. Note that rp(s) requires a density model for p(Si). We used a parametric exponential power density P(Si) ex exp( -ISilqi) and simultaneously updated its shape by inferring the value qi to match the distribution of the estimated sources [6]. This is accomplished by finding the maximum posteriori value of qi given the observed data. The ICA algorithm can thus characterize a wide class of statistical distributions including uniform, Gaussian, Laplacian, and other so-called sub- and super-Gaussian densities. In other words, our experiments do not constrain the coefficients to have a a) b) <[n m[ 700 Figure 1: Linear decomposition of an observed spectral image patch into its basis functions. sparse distribution, unlike some previous methods [1, 2]. The algorithm converged to a solution of maximal independence and the distributions of the coefficients were approximated by exponential power densities. We investigated samples of spectral images of natural scenes as illustrated in Figure 1. We analyzed a set of hyperspectral images [7] with a size of 256 x 256 pixels. Each pixel is represented by radiance values for 31 wavebands of 10 nm width, sampled in 10 nm steps between 400 and 700 nm. The pixel size corresponds to 0.056xO.056 deg of visual angle. The images were recorded around Bristol, either outdoors, or inside the glass houses of Bristol Botanical Gardens. We chose eight of these images which had been obtained outdoors under apparently different illumination conditions. The vector of 31 spectral radiance values of each pixel was converted to a vector of 3 cone excitation values whose components were the inner products of the radiance vector with the vectors of L-, M-, and S-cone sensitivity values [8], respectively. From the entire image data set, 7x7 pixel image patches were chosen randomly, yielding 7x7x3 = 147 dimensional vectors. The learning process was done in 500 steps, each using a set of spectra of 40000 image patches, 5000 chosen randomly from each of the eight images. A set of basis functions for 7x7 pixel patches was obtained, with each pixel containing the logarithms of the excitations of the three human cone photo receptors that represented the receptor signals in the human retina [8, 9]. To visualize the learned basis functions, we used the method by Ruderman et al. 
To visualize the learned basis functions, we used the method of Ruderman et al. [9] and plotted for each basis function a 7 × 7 pixel matrix, with the color of each pixel indicating the combination of L-, M-, and S-cone responses as follows. The values for each patch were normalized to values between 0 and 255, with zero cone excitation corresponding to a value of 128. Thus, the R, G, and B components of each pixel represent the relative excitations of the L, M, and S cones, respectively. To further illustrate the chromatic properties of the basis functions, we convert the L, M, S vector of each pixel to its projection onto the isoluminant plane of a cone-opponent color space similar to the color spaces of MacLeod and Boynton [10] and Derrington et al. [11]. In our plots, the horizontal axis corresponds to the response of an L-cone versus M-cone opponent mechanism, and the vertical axis corresponds to S-cone modulation. For each pixel of the basis functions, a point is plotted at its corresponding location in that color space. The colors of the points are the same as used for the pixels in the top part of the figure. Thus, although only the projection onto the isoluminant plane is shown, the third dimension (i.e., luminance) can be inferred from the brightness of the points.

Figure 2a shows the learned ICA basis functions in a pseudo-color representation. Figure 2b shows the color space coordinates of the chromaticities of the pixels in each basis function. The PCA basis functions and their corresponding color space coordinates are shown in Figures 2c and 2d, respectively. Both representations are in order of decreasing L2-norm. The PCA results show a global spatial representation, and their opponent basis functions lie mostly along the coordinate axes of the cone-opponent color space. In addition, there are functions that imply mixtures of non-opponent colors. In contrast to the PCA basis functions, the ICA basis functions are localized and oriented. When ordered by decreasing L2-norm, achromatic basis functions tend to appear before chromatic basis functions. This reflects the fact that in the natural environment, luminance variations are generally larger than chromatic variations [7]. The achromatic basis functions are localized and oriented, similar to those found in the analysis of grayscale natural images [1, 2]. Most of the chromatic basis functions, particularly those with strong contributions, are color opponent, i.e., the chromaticities of their pixels lie roughly along a line through the origin of our color space. Most chromatic basis functions with relatively high contributions are modulated between light blue and dark yellow, in the plane defined by luminance and S-cone modulation. Those with lower L2-norm are highly localized, but still are mostly oriented. There are other chromatic basis functions with tilted orientations, corresponding to blue versus orange colors. The chromaticities of these basis functions occupy mainly the second and fourth quadrants. The basis functions with the lowest contributions are less strictly aligned in color space, but still tend to be color opponent, mostly along a bluish-green/orange direction. There are no basis functions with chromaticities along the horizontal axis, corresponding to pure L- versus M-cone opponency, like the PCA basis functions in Figure 2d [9]. The tilted orientations of the opponency axes most likely reflect the distribution of the chromaticities in our images. In natural images, L-M and S coordinates in our color space are negatively correlated [12].
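For the projection onto the isoluminant plane, the text specifies the two axes (L versus M opponency, S modulation) but not the normalization, so the luminance-normalized form below, in the spirit of MacLeod and Boynton [10], is an assumption for illustration:

def opponent_plane(L, M, S):
    # Horizontal coordinate: L vs. M opponency; vertical: S modulation.
    # Division by L + M (a luminance proxy) is a hypothetical normalization.
    lum = L + M
    return (L - M) / lum, S / lum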
ICA finds the directions that correspond to maximally decorrelated signals, i.e., it extracts the statistical structure of the inputs. PCA did not yield basis functions in these directions, probably because it is limited by the orthogonality constraint. While it is known that the chromatic properties of neurons in the lateral geniculate nucleus (LGN) of primates correspond to variations along the axes of cone opponency ('cardinal axes') [11], cortical neurons show sensitivities for intermediate directions [13]. Since the results of PCA and ICA, respectively, match these differences qualitatively, we suspect that opponent coding along the 'cardinal directions' of cone opponency is used by the visual system to transmit visual information reliably to the cortex, where the information is recoded in order to better reflect the statistical structure of the environment [14].

3 Discussion

This result shows that the independence criterion alone is sufficient to learn efficient image codes. Although no sparseness constraint was used, the obtained coefficients are extremely sparse, i.e. the data x are encoded in the sources s in such a way that the coefficients of s are mostly around zero; there is only a small percentage of informative values (non-zero coefficients). From an information coding perspective this means that we can encode and decode the chromatic image patches with only a small percentage of the basis functions. In contrast, Gaussian densities are not sparsely distributed, and a large portion of the basis functions would be required to represent the chromatic images. The normalized kurtosis is one measure of sparseness, and the average kurtosis value was 19.7 for ICA and 6.6 for PCA. Interestingly, the basis functions in Figure 2a produced only sparse coefficients, except for basis function 7 (a green basis function), which resulted in a nearly uniform distribution, suggesting that this basis function is active almost all the time. The reason may be that a green color component is present in almost all image patches of the natural scenes. We repeated the experiment with different ICA methods and obtained similar results. The basis functions obtained with the exponential power distributions or the simple Laplacian prior were statistically most efficient. In this sense, the basis functions that produce sparse distributions are statistically efficient codes. To quantitatively measure the encoding difference, we compared the coding efficiency of ICA and PCA using Shannon's theorem to obtain a lower bound on the number of bits required to encode a spatiochromatic pattern [4]. The average number of bits required to encode 40000 patches randomly selected from the 8 images in Figure 1 with a fixed noise coding precision of σ_x = 0.059 was 1.73 bits for ICA and 4.46 bits for PCA. Note that the encoding difference for achromatic image patches using ICA and PCA is about 20% in favor of ICA [4]. The encoding difference in the chromatic case is significantly higher (> 100%) and suggests that there is a large amount of chromatic redundancy in natural scenes. To verify our findings, we computed the average pairwise mutual information I in the original data (I_x = 0.1522), the PCA representation (I_PCA = 0.0123), and the ICA representation (I_ICA = 0.0093). ICA was able to further reduce the redundancy between its components, and its basis functions therefore represent more efficient codes.
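The normalized kurtosis quoted above can be computed as the fourth moment over the squared second moment; whether the Gaussian baseline of 3 is subtracted is not stated in the text, so subtracting it here is an assumption:

import numpy as np

def normalized_kurtosis(s):
    # Sparseness measure for a coefficient distribution s (0 for a Gaussian).
    s = s - np.mean(s)
    return np.mean(s ** 4) / np.mean(s ** 2) ** 2 - 3.0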
In general, the ICA results support the argument that basis functions for efficient coding of chromatic natural images are non-orthogonal. In order to determine whether the color opponency is merely a result of correlation in the receptor signals due to the strong overlap of the photoreceptor sensitivities [15], we repeated the analysis, this time assuming hypothetical receptor sensitivities which do not overlap, but sample roughly the same regions as the L-, M-, and S-cones. We used rectangular sensitivities with absorptions between 420 and 480 nm ("S"), 490 and 550 nm ("M"), and 560 and 620 nm ("L"), respectively. The resulting basis functions were as strongly color opponent as for the case of overlapping cone sensitivities. This suggests that the correlations of radiance values in natural spectra are sufficiently high to require a color opponent code in order to represent the chromatic structure efficiently.

Figure 2: (a) All 147 ICA spatiochromatic basis functions (7 by 7 pixels and 3 colors), shown in order of decreasing L2-norm, from top to bottom and left to right. The R, G, and B values of the color of each pixel correspond to the relative excitation of L-, M-, and S-cones, respectively. (b) Chromaticities of the ICA basis functions, plotted in cone-opponent color space coordinates. Each dot represents the coordinate of a pixel of the respective basis function, projected onto the isoluminant plane. Luminance can be inferred from the brightness of the dot. Horizontal axes: L- versus M-cone variation. Vertical axes: S-cone variation. (c) The 147 PCA spatiochromatic basis functions and (d) the corresponding PCA chromaticities.

In summary, our findings strongly suggest that color opponency is not a mere consequence of the overlapping cone spectral sensitivities but rather an attempt to represent the intrinsic spatiochromatic structure of natural scenes in a statistically efficient manner.

References

[1] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[2] A. J. Bell and T. J. Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[3] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359-366, 1998.
[4] M. S. Lewicki and B. Olshausen. A probabilistic framework for the adaptation and comparison of image codes. J. Opt. Soc. Am. A: Optics, Image Science and Vision, in press, 1999.
[5] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129-1159, 1995.
[6] M. S. Lewicki. A flexible prior for independent component analysis. Neural Computation, submitted, 2000.
[7] C. A. Párraga, G. Brelstaff, and T. Troscianko. Color and luminance information in natural scenes.
Journal of the Optical Society of America A, 15:563-569, 1998. (http://www.crs4.it/~gjb/ftpJOSA.html).
[8] A. Stockman, D. I. A. MacLeod, and N. E. Johnson. Spectral sensitivities of the human cones. Journal of the Optical Society of America A, 10:2491-2521, 1993. (http://www-cvrl.ucsd.edu).
[9] D. L. Ruderman, T. W. Cronin, and C.-C. Chiao. Statistics of cone responses to natural images: Implications for visual coding. Journal of the Optical Society of America A, 15:2036-2045, 1998.
[10] D. I. A. MacLeod and R. M. Boynton. Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of the Optical Society of America, 69:1183-1186, 1979.
[11] A. M. Derrington, J. Krauskopf, and P. Lennie. Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357:241-265, 1984.
[12] D. I. A. MacLeod and T. von der Twer. The pleistochrome: Optimal opponent codes for natural colors. Preprint, 1998.
[13] P. Lennie, J. Krauskopf, and G. Sclar. Chromatic mechanisms in striate cortex of macaque. Journal of Neuroscience, 10:649-669, 1990.
[14] D. J. Field. What is the goal of sensory coding? Neural Computation, 6:559-601, 1994.
[15] G. Buchsbaum and A. Gottschalk. Trichromacy, opponent colours coding and optimum colour information transmission in the retina. Proceedings of the Royal Society London B, 220:89-113, 1983.
1909 |@word version:1 norm:4 decomposition:1 brightness:2 gjb:1 interestingly:1 activation:2 si:6 tilted:2 informative:1 shape:1 plot:1 alone:1 selected:1 plane:4 provides:1 location:1 along:9 inside:1 manner:1 pairwise:1 twer:1 ica:6 roughly:2 krauskopf:2 decreasing:3 estimating:1 moreover:1 lowest:1 what:1 minimizes:2 finding:7 pseudo:1 hypothetical:1 appear:1 before:1 consequence:2 receptor:5 encoding:5 solely:2 modulation:2 probablistic:1 chose:1 eb:2 suggests:2 co:2 limited:1 statistically:7 bell:2 significantly:2 physiology:1 projection:3 word:1 road:1 quadrant:1 suggest:2 onto:3 fica:1 www:3 equivalent:1 rectangular:1 pure:1 insight:1 rule:1 coordinate:6 variation:5 updated:1 transmit:1 diego:1 decode:1 origin:1 approximated:1 particularly:1 sparsely:1 observed:3 bottom:1 preprint:1 region:1 predictable:1 environment:3 rnb:1 stockman:1 negatively:1 efficiency:1 basis:47 represented:3 america:4 london:1 sejnowski:3 whose:1 isoluminant:3 larger:1 cnl:1 encoded:1 achromatic:5 statistic:3 favor:1 torrey:1 transform:2 emergence:1 kurtosis:2 maximal:1 product:1 adaptation:3 mb:1 neighboring:1 aligned:1 optimum:1 transmission:1 produce:1 illustrate:1 axial:1 ex:1 strong:3 soc:2 direction:9 filter:2 pea:11 human:5 require:1 opt:1 biological:1 absorption:1 extension:1 strictly:1 around:2 sufficiently:1 exp:1 bj:1 visualize:1 pine:1 achieves:1 radiance:4 estimation:1 proc:1 geniculate:2 wachtler:1 reflects:2 gaussian:3 super:1 chromatic:22 encode:3 ax:7 derived:1 likelihood:1 mainly:1 contrast:2 cronin:1 sense:1 glass:1 posteriori:1 lennie:2 chiao:1 entire:1 interested:1 lgn:1 pixel:18 among:1 orientation:2 flexible:1 html:1 spatial:3 orange:2 mutual:3 schaaf:1 field:3 once:1 equal:1 represents:1 constitutes:1 nearly:1 minimized:1 jb:1 stimulus:1 quantitatively:1 cardinal:2 retina:2 randomly:3 oriented:3 preserve:1 simultaneously:1 resulted:1 individual:1 botanical:1 attempt:1 highly:3 analyzed:1 mixture:1 yielding:1 light:1 implication:1 edge:2 respective:1 orthogonal:3 logarithm:1 opponency:11 plotted:3 minimal:1 column:1 maximization:2 uniform:2 johnson:1 characterize:1 st:1 density:5 sensitivity:9 lee:1 terrence:1 eel:1 infomax:1 von:1 reflect:1 nm:6 recorded:1 containing:1 til:1 suggesting:2 converted:1 retinal:1 coding:9 coefficient:6 blind:2 analyze:1 apparently:1 red:1 portion:1 contribution:3 square:1 efficiently:1 correspond:3 ofthe:1 yield:1 yellow:1 produced:1 mere:1 bristol:2 converged:1 submitted:1 decorrelated:1 sampled:1 color:32 higher:2 response:3 maximally:1 done:1 strongly:2 furthermore:1 correlation:2 horizontal:3 ruderman:2 overlapping:3 outdoors:2 olshausen:2 usa:1 normalized:2 verify:1 laboratory:1 illustrated:1 chromaticity:8 width:1 excitation:6 troscianko:1 won:1 criterion:1 complete:1 l1:1 derrington:2 image:29 recently:1 measurement:1 stretched:1 had:1 dot:2 cortex:3 multivariate:1 perspective:1 jolla:1 accomplished:1 der:2 additional:1 converge:1 determine:1 redundant:1 signal:6 match:2 adapt:1 mle:1 laplacian:2 qi:2 vision:3 bluish:1 represent:5 cell:2 lea:13 addition:1 diagram:1 source:5 unlike:1 probably:1 suspect:1 tend:2 ee:3 intermediate:1 independence:2 buchsbaum:1 inner:1 reduce:1 whether:1 pca:3 colour:2 constitute:2 generally:1 transforms:1 amount:1 dark:1 reduced:2 http:2 occupy:1 percentage:2 estimated:1 neuroscience:1 blue:3 redundancy:6 luminance:8 merely:1 cone:27 convert:1 angle:1 fourth:1 almost:2 electronic:1 patch:10 separation:1 bit:4 bound:1 optic:1 orthogonality:1 constraint:2 constrain:1 scene:12 encodes:1 x7:2 argument:1 extremely:1 lond:1 
optical:4 relatively:1 lca:2 spatiochromatic:5 combination:2 primate:1 xo:1 mutually:1 ffi:1 mechanism:3 photo:1 available:1 decomposing:1 opponent:14 eight:2 spectral:8 rp:2 thomas:2 original:2 assumes:2 denotes:1 top:2 macleod:4 society:5 added:1 parametric:1 receptive:1 primary:1 striate:1 gradient:1 lateral:2 reason:1 assuming:1 code:8 mostly:4 reliably:1 recoded:1 unknown:2 perform:1 vertical:2 neuron:2 neurobiology:1 rn:2 ucsd:1 intensity:1 inferred:2 ordinate:2 required:3 california:2 learned:2 macaque:2 able:1 pattern:1 tb:1 including:1 green:3 garden:1 royal:1 terry:1 power:3 overlap:2 decorrelation:1 natural:23 tewon:2 imply:1 axis:3 extract:1 prior:2 relative:2 par:1 versus:4 localized:3 nucleus:2 conveyed:1 sufficient:1 principle:2 boynton:2 lo:1 summary:1 transpose:1 institute:2 wide:1 sparse:7 distributed:1 van:2 dimension:1 cortical:1 world:1 sensory:4 qualitatively:1 san:1 projected:1 deg:1 global:1 active:1 gottschalk:1 photoreceptors:1 spectrum:3 grayscale:1 learn:1 nature:1 investigated:1 did:1 noise:1 repeated:2 neuronal:1 fashion:1 salk:3 precision:1 sub:1 inferring:1 exponential:3 lie:2 house:1 third:1 ix:1 theorem:1 showing:1 dominates:1 deconvolution:1 intrinsic:1 hyperspectral:1 te:2 illumination:1 sparseness:2 likely:1 visual:10 ordered:1 sclar:1 scalar:1 lewicki:2 corresponds:3 goal:5 identity:1 change:2 determined:1 except:1 reducing:1 principal:1 called:1 total:1 la:1 shannon:1 photoreceptor:1 indicating:1 support:1 modulated:1 hateren:1 correlated:1
Neural Network Weight Matrix Synthesis Using Optimal Control Techniques

O. Farotimi, A. Dembo, T. Kailath
Information Systems Lab, Electrical Engineering Dept.
Stanford University, Stanford, CA 94305

ABSTRACT

Given a set of input-output training samples, we describe a procedure for determining the time sequence of weights for a dynamic neural network to model an arbitrary input-output process. We formulate the input-output mapping problem as an optimal control problem, defining a performance index to be minimized as a function of the time-varying weights. We solve the resulting nonlinear two-point-boundary-value problem, and this yields the training rule. For the performance index chosen, this rule turns out to be a continuous-time generalization of the outer product rule earlier suggested heuristically by Hopfield for designing associative memories. Learning curves for the new technique are presented.

1 INTRODUCTION

Suppose that we desire to model as well as possible some unknown map φ : U → V, where U, V ⊆ R^n. One way we might go about doing this is to collect as many input-output samples {(θ_in, θ_out) : φ(θ_in) = θ_out} as possible and "find" some function f : U → V such that a suitable distance metric d(f(x(t)), φ(x(t))), evaluated over the samples {θ_in : φ(θ_in) = θ_out}, is minimized.

In the foregoing, we assume a system of ordinary differential equations motivated by dynamic neural network structures [1][2]. In particular we set up an n-dimensional
neural network; call it N. Our goal is to synthesize a possibly time-varying weight matrix for N such that for initial conditions x(t_0), the input-output transformation, or flow f : x(t_0) → f(x(t_f)), associated with N approximates closely the desired map φ. For the purposes of synthesizing the weight program for N, we consider another system, say S, a formal nL-dimensional system of differential equations comprising L n-dimensional subsystems. With the exception that all L n-dimensional subsystems are constrained to have the same weight matrix, they are otherwise identical and decoupled. We shall use this system to determine the optimal weight program given L input-output samples. The resulting time program of weights is then applied to the original n-dimensional system N during normal operation. We emphasize the difference between this scheme and a simple L-fold replication of N: the latter will yield a practically unwieldy nL × nL weight matrix sequence, and in fact will generally not discover the underlying map from U to V, discovering instead different maps for each input-output sample pair. By constraining the weight matrix sequence to be an identical n × n matrix for each subsystem during this synthesis phase, our scheme in essence forces the weight sequence to capture some underlying relationship between all the input-output pairs. This is arguably the best estimate of the map given the information we have.

Using formal optimal control techniques [3], we set up a performance index to maximize the correlation between the system S output and the desired output. This optimization technique leads in general to a nonlinear two-point-boundary-value problem, and is not usually solvable analytically. For this particular performance index we are able to derive an analytical solution to the optimization problem. The optimal interconnection matrix at each time is the sum (over the index of all samples) of the outer products between each desired output n-vector and the corresponding subsystem output. At the end of this synthesis procedure, the weight matrix sequence represents an optimal time-varying program for the weights of the n-dimensional neural network N that will approximate φ : U → V. We remark that in the ideal case, the weight matrix at the final time (i.e. one element of the time sequence) corresponds to the symmetric matrix suggested empirically by Hopfield for associative memory applications [4]. It becomes clear that the Hopfield matrix is suboptimal for associative memory, being just one point on the optimal weight trajectory; it is optimal only in the special case where the initial conditions coincide exactly with the desired output.

In Section 2 we outline the mathematical formulation and solution of the synthesis technique, and in Section 3 we present the learning curves. The learning curves also by default yield the system performance over the training samples, and we compare this performance to that of the outer product rule. In Section 4 we give concluding remarks and the directions of our future work. Although the results here are derived for a specific case of the neuron state equation and a specific choice of performance index, in further work we have extended the results to very general state equations and performance indices.

2 SYNTHESIS OF WEIGHT MATRIX TIME SEQUENCE

Suppose we have a training set consisting of L pairs of n-dimensional vectors (ϑ_i^(r), θ_i^(r)), r = 1, 2, ..., L, i = 1, 2, ..., n. For example, in an autoassociative system in which we desire to store θ_i^(r), r = 1, 2, ..., L, i = 1, 2, ..., n, we can choose the inputs ϑ_i^(r) to be sample points in the neighborhood of θ_i^(r) in n-dimensional space. The idea here is that by training the network to map samples in the neighborhood of an exemplar to the exemplar, it will have developed a map that can smoothly interpolate (or generalize) to other points around the exemplar that may not be in the training set. In this paper we deal with the issue of finding the weight matrix that transforms the neural network dynamics into such a map. We demonstrate through simulation results that such a map can be achieved. For autoassociation, and using error vectors drawn from the training set, we show that the method here performs better (in an error-correcting sense) than the outer product rule. We are still investigating the performance of the network in generalizing to samples outside the training set.

We construct an n-dimensional neural network system N to model the underlying input-output map according to

    N:  dx(t)/dt = -x(t) + W(t) g(x(t))      (1)

We interpret x as the neuron activation, g(x(t)) is the neuron output, and W(t) is the neural network weight matrix. To determine the appropriate W(t), we define an nL-dimensional formal system of differential equations S,

    S:  dx_s(t)/dt = -x_s(t) + W_s(t) g(x_s(t)),   g(x_s(t_0)) = ϑ      (2)

formed by concatenating the equations for N L times. W_s(t) is block-diagonal with identical blocks W(t). θ is the concatenated vector of sample desired outputs, ϑ is the concatenated vector of sample inputs. The performance index for S is

    min J = min { -x_s^T(t_f) θ + (1/4) ∫_{t_0}^{t_f} ( -2 x_s^T(t) θ + β Q + β^{-1} Σ_{j=1}^{n} w_j^T(t) w_j(t) ) dt }      (3)
The performance index is chosen to minimize the negative of the correlation between the (concatenated) neuron activation and the (concatenated) desired output vectors, or equivalently to maximize the correlation between the activation and the desired output at the final time t_f (the term -x_s^T(t_f)θ). Along the way from initial time t_0 to final time t_f, the term -x_s^T(t)θ under the integral penalizes decorrelation of the neuron activation and the desired output. w_j(t), j = 1, 2, ..., n are the rows of W(t), and β is a positive constant. The term β^{-1} Σ_{j=1}^{n} w_j^T(t) w_j(t) effects a bound on the magnitude of the weights. The term

    Q(g(x(t))) = Σ_{j=1}^{n} Σ_{u=1}^{n} Σ_{r=1}^{L} Σ_{v=1}^{L} θ_j^(r) θ_j^(v) g(x_u^(v)) g(x_u^(r)),

and its meaning will be clear when we examine the optimal path later. g(·) is assumed C¹ differentiable. Proceeding formally [3], we define the Hamiltonian:

    H = (1/4) ( -2 x_s^T(t) θ + Q + Σ_{j=1}^{n} w_j^T(t) B w_j(t) ) + λ^T(t) ( -x_s(t) + W_s(t) g(x_s(t)) )
      = (1/4) ( -2 x_s^T(t) θ + Q + Σ_{j=1}^{n} w_j^T(t) B w_j(t) ) - λ^T(t) x_s(t) + Σ_{r=1}^{L} Σ_{j=1}^{n} λ_j^(r) w_j^T(t) g^(r)(x(t))      (4)

where B = β^{-1} I, λ^T(t) = [λ_1^(1)(t) λ_2^(1)(t) ... λ_n^(L)(t)] is the vector of Lagrange multipliers, and we have used the fact that W_s(t) is block-diagonal with identical blocks W(t) in writing the summation of the last term in the second line of equation (4). The Euler-Lagrange equations are then given by

    dλ(t)/dt = -(∂H/∂x_s)^T      (5)
    λ(t_f) = -θ      (6)
    0 = ∂H/∂w_j = w_j^T(t) B + Σ_{r=1}^{L} λ_j^(r) g^(r)T(x(t))      (7)

From equation (7) we have

    w_ij(t) = -β Σ_{r=1}^{L} λ_i^(r) g(x_j^(r)(t))      (8)

Choosing

    λ(t) = -θ      (9)

satisfies the final condition (6), and with some algebra we find that this choice is also consistent with equations (5) and (7). The optimal weight program is therefore

    w_ij(t) = β Σ_{r=1}^{L} θ_i^(r) g(x_j^(r)(t))      (10)

This describes the weight paradigm to be applied to the n-dimensional neural network system N in order to model the underlying map described by the sample points. A similar result can be derived for the discrete-time network x(k+1) = W(k) g(x(k)):

    w_ij(k) = β Σ_{r=1}^{L} θ_i^(r) g(x_j^(r)(k))

2.1 REMARKS

- Meaning of Q. On the optimal path, using equation (10), it is straightforward to show that

    β Q = β^{-1} Σ_{j=1}^{n} w_j^T(t) w_j(t)

Thus Q acts like another integral constraint term on the weights.

- The Optimal Return Function. The optimal return function [3], i.e. the value of the performance index on the optimal path, can be evaluated in closed form; it shows that the optimal weight matrix W(t) seeks at every instant to minimize the negative correlation (or maximize the correlation) on the optimal path in the formal system S (and hence in the neural network N).

- Comparison with outer product rule. It is worthwhile to compare equation (10) with the outer product rule:

    w_ij = β Σ_{r=1}^{L} θ_i^(r) θ_j^(r)      (11)

We see that the outer product rule is just one point on the weight trajectory defined by equation (10): the point at final time t_f when g(x_j^(r)(t_f)) = θ_j^(r).

3 LEARNING CURVES

In our simulation we considered 14 8-dimensional vectors as the desired outputs. The weight synthesis or learning phase is as follows: we initialized the 112-dimensional formal synthesis system S with a corrupted version of the vector set, and used equation (10) to find the optimal 8 × 8 weight matrix sequence for an 8-dimensional neural network N to correctly classify any of the corrupted 14 vectors. The weight sequence is recorded. This procedure is required only once for any given training set.
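As a concrete illustration of this synthesis phase, the following minimal sketch implements the discrete-time version of the weight rule, w_ij(k) = β Σ_r θ_i^(r) g(x_j^(r)(k)). It is our own illustration, not code from the paper: the function names, the choice g(x) = tanh(x), and the parameter values are assumptions.

    import numpy as np

    def g(x):
        # saturating neuron output; the paper's g(.) saturates at +1 and -1
        return np.tanh(x)

    def synthesize_weights(theta, theta_in, beta=0.5, n_steps=10):
        """Discrete-time weight synthesis per equation (10).

        theta    : (L, n) array of desired output vectors theta^(r)
        theta_in : (L, n) array of (corrupted) input vectors used to
                   initialize the L decoupled subsystems of S
        Returns the recorded weight sequence W(0), ..., W(n_steps - 1).
        """
        x = theta_in.astype(float)     # states of the L n-dimensional subsystems
        weight_sequence = []
        for k in range(n_steps):
            # W[i, j] = beta * sum_r theta[r, i] * g(x[r, j])
            W = beta * theta.T @ g(x)
            weight_sequence.append(W)
            x = g(x) @ W.T             # x^(r)(k+1) = W(k) g(x^(r)(k)) for every r
        return weight_sequence

Note that if the subsystems settle so that g(x^(r)) = θ^(r), the last recorded matrix reduces to Hopfield's outer-product matrix β·θᵀθ, matching the remark on equation (11) above.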
After this learning is completed, the normal operation of the neural network N consists in running it using the weights obtained from the synthesis phase above. The resulting network describes a continuous input-output map. At points belonging to the training set this map coincides with the underlying map we are trying to model. For points outside the training set, it performs a nonlinear interpolation (generalization) the nature of which is determined by the training set as well as the neuron state equation.

Figure 1 shows the learning procedure through time. The curves labeled "Optimally Trained Network" show the behavior of two correlation measures as the training proceeds. One correlation measure used was the cosine of the angle between the desired vector (θ) and the neuron activation (x) vector. The other correlation measure was the cosine of the angle between the desired vector (θ) and the neuron output (g(x(t))) vector. Given our system initialization in equation (2), the correlation g(x(t))^T θ more accurately represents our objective, although the performance index (3) reflects the correlation x^T θ. The reason for our performance index choice is that the weight trajectory yielded by g(x(t))^T θ leads the system to an all-zero, trivial equilibrium for a sigmoid g(·) (we used such a g(·) with saturation values at +1 and -1 in our simulations). This is not the case for the weight trajectory yielded by x^T θ. Since g(x(t)) is monotonic with x, x^T θ represented an admissible alternative choice for the performance index. The results bear this out. Another possible choice is (g(x(t)) + x)^T θ. This gives similar results upon simulation.

The correlation measures are plotted on the ordinate. The abscissa is the number of computer iterations. A discrete-time network with real-valued parameters was used. The total number of errors in the 14 8-bit binary {1, -1} vectors used to initialize the system was 21. This results in an average of 1.5 errors per 8-bit vector. We note that the learning was completed in two time steps. Therefore, in this case at least, we see that the storage requirement is not intensive: only two weight matrices need to be stored during the synthesis phase.

We note that the learning phase by default also represents the autoassociative system's error-correcting performance over input samples drawn from the training set. Therefore over the training set we can compare this performance with that of the outer product rule (11). By considering corrupted input vectors from the training set, we compare the error-correcting capabilities of the two methods, not their capacities to store uncorrupted vectors. In fact we see that the two weight rules become identical when we initialize with the true vectors (this equivalence is not a peculiarity of the new technique, but merely a consequence of the particular performance index chosen). In other words, this comparison is a test of the extent of the basins of attraction around the desired memories for the two techniques. Looking at the curves labeled "Conventional Outer Product", we see that the new technique performs better than the outer product rule.

4 CONCLUSIONS AND FURTHER WORK

We have described a technique for training neural networks based on formal tools from optimal control theory. For a specific example consisting of learning the input-output map in a training set, we derived the relevant weight equations and illustrated the learning phase of the method.
This example gives a weight rule that turns out to be a continuous-time generalization of the outer-product rule. Using corrupted vectors from the training set, we show that the new rule performs better in error-correction than the outer-product rule. Simulations on the generalization capabilities of the method are ongoing and are not included in the present work.

[Figure 1: Learning curves.]

Although we considered a training set consisting of input-output vector pairs as the starting point for the procedure, a closer examination shows that this is not required. More generally, what is required is a performance index that reflects the objective of the training. Also in our ongoing work we have extended the results to more general forms of the state equation and the performance index. Using an appropriate performance index we are investigating a network for the Travelling Salesman Problem and related applications like Tracking and Data Association.

References

[1] Michael A. Cohen & Stephen Grossberg, "Absolute Stability of Global Pattern Formation and Parallel Memory Storage by Competitive Neural Networks," IEEE Transactions on Systems, Man and Cybernetics SMC-13 (1983), 815-826.
[2] J. J. Hopfield & D. W. Tank, "Neural Computation of Decisions in Optimization Problems," Biological Cybernetics 52 (1985), 141-152.
[3] Arthur E. Bryson & Yu-Chi Ho, Applied Optimal Control, Hemisphere, 1975.
[4] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings of the National Academy of Sciences 79 (1982), 2554-2558.
Foundations for a Circuit Complexity Theory of Sensory Processing*

Robert A. Legenstein & Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universität Graz, Austria
{legi, maass}@igi.tu-graz.ac.at

Abstract

We introduce total wire length as a salient complexity measure for an analysis of the circuit complexity of sensory processing in biological neural systems and neuromorphic engineering. This new complexity measure is applied to a set of basic computational problems that apparently need to be solved by circuits for translation- and scale-invariant sensory processing. We exhibit new circuit design strategies for these new benchmark functions that can be implemented within realistic complexity bounds, in particular with linear or almost linear total wire length.

1 Introduction

Circuit complexity theory is a classical area of theoretical computer science that provides estimates for the complexity of circuits for computing specific benchmark functions, such as binary addition, multiplication and sorting (see, e.g., (Savage, 1998)). In recent years interest has grown in understanding the complexity of circuits for early sensory processing, both from the biological point of view and from the point of view of neuromorphic engineering (see (Mead, 1989)). However, classical circuit complexity theory has provided little insight into these questions, both because its focus lies on a different set of computational problems, and because its traditional complexity measures are not tailored to those resources that are of primary interest in the analysis of neural circuits in biological organisms and neuromorphic engineering. This deficit is quite unfortunate, since there is growing demand for energy-efficient hardware for sensory processing, and complexity issues become very important since the number n of parallel inputs which such circuits have to handle is typically quite large (for example n ≥ 10^6 in the case of many visual processing tasks).

We will follow traditional circuit complexity theory in assuming that the underlying graph of each circuit is a directed graph without cycles.¹ The most frequently considered complexity measures in traditional circuit complexity theory are the number (and types) of gates, as well as the depth of a circuit. The latter is defined as the length of the longest directed path in the underlying graph, and is also interpreted as the computation time of the circuit. The focus lies in general on the classification of functions that can be computed by circuits whose number of gates can be bounded by a polynomial in the number n of input variables. This implicitly also provides a polynomial (typically quite large) bound on the number of "wires" (defined as the edges in the underlying graph of the circuit).

* Research for this article was partially supported by the Fonds zur Förderung der wissenschaftlichen Forschung (FWF), Austria, project P12153, and the NeuroCOLT project of the EC.

¹ Neural circuits in "wetware" as well as most circuits in analog VLSI contain in addition to feedforward connections also lateral and recurrent connections. This fact presents a serious obstacle for a direct mathematical analysis of such circuits. The standard mathematical approach is to model such circuits by larger feedforward circuits, where new "virtual gates" are introduced to represent the state of existing gates at later points in time.
We proceed on the assumption that the area (or volume in the case of neural circuits) occupied by wires is a severe bottleneck for physical implementations of circuits for sensory processing. Therefore we will not just count wires, but consider a complexity measure that provides an estimate for the total area or volume occupied by wires. In the cortex, neurons occupy an about 2 mm thick 3-dimensional sheet of "grey matter". There exists a strikingly general upper bound on the order of 10^5 for the number of neurons under any mm^2 of cortical surface, and the total length of wires (axons and dendrites, including those running in the sheet of "white matter" that lies below the grey matter) under any mm^2 of cortical surface is estimated to be ≈ 8 km = 8·10^6 mm (Koch, 1999). Together this yields an upper bound of (8·10^6 / 10^5)·n = 80·n mm for the wire length of the "average" cortical circuit involving n neurons. In order to arrive at a concise mathematical model we project each 3D cortical circuit into 2D, and assume for simplicity that its n gates (neurons) occupy the nodes of a grid. Then for a circuit with n gates, the total length of the horizontal components of all wires is on average ≤ 80·n mm = 80·n·10^{5/2} ≈ 25300·n grid units. Here, one grid unit is the distance between adjacent nodes on the grid, which amounts to 10^{-5/2} mm for an assumed density of 10^5 neurons per mm^2 of cortical surface. Thus we arrive at a simple test for checking whether the total wire length of a proposed circuit design has a chance to be biologically realistic: check whether you can arrange its n gates on the nodes of a grid in such a way that the total length of the horizontal components of all wires is ≤ 25300·n grid units.

More abstractly, we define the following model: Gates, input- and output-ports of a circuit are placed on different nodes of a 2-dimensional grid (with unit distance 1 between adjacent nodes). These nodes can be connected by (unidirectional) wires that run through the plane in any way that the designer wants; in particular wires may cross and need not run rectilinearly (wires are thought of as running in the 3-dimensional space above the plane, without charge for vertical wire segments²). We refer to the minimal value of the sum of all wire lengths that can be achieved by any such arrangement as the total wire length of the circuit.

The attractiveness of this model lies in its mathematical simplicity, and in its generality. It provides a rough estimate for the cost of connectivity both in artificial (basically 2-dimensional) circuits and in neural circuits, where 2-dimensional wire crossing problems are apparently avoided (at least on a small scale) since dendritic and axonal branches are routed through 3-dimensional cortical tissue. There exist quite reliable estimates for the orders of magnitude of the number n of inputs, the number of neurons and the total wire length of biological neural circuits for sensory processing; see (Abeles, 1998; Koch, 1999; Shepherd, 1998; Braitenberg and Schüz, 1998).³

² We will allow that a wire from a gate may branch and provide input to several other gates. For reasonable bounds on the maximal fan-out (10^4 in the case of neural circuits) this is realistic both for neural circuits and for VLSI.

³ The number of neurons that transmit information from the retina (via the thalamus) to the cortex is estimated to be around 10^6 (all estimates given are for primates, and they only reflect the order of magnitude). The total number of neurons that transmit sensory (mostly somatosensory) information to the cortex is estimated to be around 10^8. In the subsequent sections we assume that these inputs represent the outputs of various local feature detectors for n locations in some 2-dimensional map. Thus, if one assumes for example that on average there are 10 different feature detectors for each location on this map, one arrives at biologically realistic estimates for n that lie between 10^5 and 10^7. The total number of neurons in the primary visual cortex of primates is estimated to be around 10^9, occupying an area of roughly 10^4 mm^2 of cortical surface. There are up to 10^5 neurons under one mm^2 of cortical surface, which yields a value of 10^{-5/2} mm for the distance between adjacent grid points in our model. The total length of axonal and dendritic branches below one mm^2 of cortical surface is estimated to be between 1 and 10 km, yielding up to 10^{11} mm total wire length for primary visual cortex. Thus if one assumes that 100 separate circuits are implemented in primary visual cortex, each of them can use 10^7 neurons and a total wire length of 10^9 mm. Hence realistic bounds for the complexity of a single one of these circuits for visual pattern recognition are 10^7 = n^{7/5} neurons (for n = 10^5), and a total wire length of at most 10^{11.5} = n^{2.3} grid units in the framework of our model. The whole cortex receives sensory input from about 10^8 neurons. It processes this input with about 10^10 neurons and less than 10^12 mm total wire length. If one assumes that 10^3 separate circuits process this sensory information in parallel, each of them processing about 1/10th of the input (where again 10 different local feature detectors report about every location in a map), one arrives at n = 10^6 inputs for each circuit, and each circuit can use on average n^{7/6} neurons and a total wire length of 10^{11.5} < n^2 grid units in the sense of our model. The actual resources available for sensory processing are likely to be substantially smaller, since most cortical neurons and circuits are believed to have many other functions besides online sensory processing.
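The "simple test" in the main text above is easy to mechanize. The few lines below are our own illustration (all variable names are assumptions); they recompute the ≈ 25300·n grid-unit budget from the quoted cortical numbers.

    # 8*10^6 mm of wire and 10^5 neurons under every mm^2 of cortical surface
    wire_per_mm2_mm = 8e6
    neurons_per_mm2 = 1e5
    grid_unit_mm = neurons_per_mm2 ** -0.5            # 10^(-5/2) mm between nodes

    budget_mm_per_gate = wire_per_mm2_mm / neurons_per_mm2      # 80 mm per gate
    budget_units_per_gate = budget_mm_per_gate / grid_unit_mm   # ~25300 grid units

    def biologically_plausible(total_wire_length_units, n_gates):
        """Test from the text: horizontal wire length <= ~25300 * n grid units."""
        return total_wire_length_units <= budget_units_per_gate * n_gates

    print(round(budget_units_per_gate))   # -> 25298, i.e. the ~25300 of the text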
Collectively they suggest that only those circuit architectures for sensory processing are biologically realistic that employ a number of gates that is almost linear in the number n of inputs, and a total wire length that is quadratic or subquadratic, with the additional requirement that the constant factor in front of the asymptotic complexity bound has a value close to 1. Since most asymptotic bounds in circuit complexity theory have constant factors in front that are much larger than 1, one really has to focus on circuit architectures with clearly subquadratic bounds for the total wire length. The complexity bounds for circuits that can realistically be implemented in VLSI are typically even more severe than for "wetware", and linear or almost linear bounds for the total wire length are desirable for that purpose.

In this article we begin the investigation of algorithms for basic pattern recognition tasks that can be implemented within this low-level complexity regime. The architecture of such circuits has to differ strongly from most previously proposed circuits for sensory processing, which usually involve at least 2 completely connected layers, since already complete connectivity between just two linear-size 2-dimensional layers of a feedforward neural net requires a total wire length on the order of n^{5/2}. Furthermore, a circuit which first selects a salient input segment consisting of a block of up to m adjacent inputs in some 2-dimensional map, and then sends this block of ≤ m inputs in parallel to some central "pattern template matcher", typically requires a total wire length of O(n^{3/2}·m), even without taking the circuitry for the "selection" or the template matching into account.

2 Global Pattern Detection in 2-Dimensional Maps

For many important sensory processing tasks, such as for visual or somatosensory input, the input variables are arranged in a 2-dimensional map whose structure reflects spatial relationships in the outside world. We assume that local feature detectors are able to detect the presence of salient local features in their specific "receptive field", such as for example a center which emits
Hence realistic bounds for the complexity of a single one of these circuits for visual pattern recognition are 107 = n 7 / 5 neurons (for n = 105 ), and a total wire length of at most 10 1 1.5 = n 2 . 3 grid units in the framework of our model. The whole cortex receives sensory input from about 108 neurons. It processes this input with about 10 10 neurons and less than 10 12 mm total wire length. If one assumes that 10 3 separate circuits process this sensory information in parallel, each of them processing about l/lOth of the input (where again 10 different local feature detectors report about every location in a map), one anives at n = 10 6 neurons for each circuit, and each circuit can use on average n 7 /6 neurons and a total wire length of lO ll . 5 < n 2 grid units in the sense of our model. The actual resources available for sensory processing are likely to be substantially smaller, since most cortical neurons and circuits are believed to have many other functions besides online sensory processing. higher (or lower) intensity than its immediate surrounding, or a high-intensity line segment in a certain direction, the end of a line, a junction of line segments, or even more complex local visual patterns like an eye or a nose. The ultimate computational goal is to detect specific global spatial arrangements of such local patterns, such as the letter "T", or in the end also a human face, in a translation- and scale-invariant manner. We formalize 2-dimensional global pattern detection problems by assuming that the input consists of arrays g = (al, ... , an), ~ = (bl, ... , bn ), etc. of binary variables that are arranged on a 2-dimensional square grid4 ? Each index i can be thought of as representing a location within some y'ri x y'ri-square in the outside world. We assume that ai = 1 if and only if feature a is detected at location i and that bi = 1 if and only if feature b is detected at location i. In our formal model we can reserve a subsquare within the 2-dimensional grid for each index i, where the input variables ai, bi , etc. are given on adjacent nodes of this grid5 . Since we assume that this spatial arrangement of input variables reflects spatial relations in the outside world, many salient examples for global pattern detection problems require the computation of functions such as 1, PI) (g,~) = { if there exist i and j so that ai = bj = 1 and input location j is above and to the right of input location i 0, else Theorem 2.1 The fun ction PI) can be computed - and witnesses i and j with ai = bj = 1 can be exhibited if they exist - by a circuit with total wire length O(n), consisting ofO(n) Boolean gates offan-in 2 (andfan-out 2) in depth o (log n . log logn). The depth of the circuit can be reduced to o (log n) if one employs threshold gates 6 with fan-in logn. This can also be done with total wire length O(n). Proof (sketch) At first sight it seems that PI) needs complete connectivity on the plane because of its global character. However, we show that there exists a divide and conquer approach with rather small communication cost. Divide the input plane into four sub-squares C l , ... , C4 (see Figure la). We write and ~l , ... , ~4 for the restrictions of the input to these four sub-areas and assume that the following values have already been computed for each sub-square C i : gl, ... , g4 ? The x-coordinate of the leftmost occurrence of feature a in C i ? The x-coordinate of the rightmost occurrence of feature b in Ci ? 
Theorem 2.1  The function P_D can be computed - and witnesses i and j with a_i = b_j = 1 can be exhibited if they exist - by a circuit with total wire length O(n), consisting of O(n) Boolean gates of fan-in 2 (and fan-out 2) in depth O(log n · log log n). The depth of the circuit can be reduced to O(log n) if one employs threshold gates⁶ with fan-in log n. This can also be done with total wire length O(n).

⁶ A threshold gate computes a Boolean function T : {0,1}^k → {0,1} of the form T(x_1, ..., x_k) = 1 ⟺ Σ_{i=1}^{k} w_i·x_i ≥ w_0.

Proof (sketch)  At first sight it seems that P_D needs complete connectivity on the plane because of its global character. However, we show that there exists a divide-and-conquer approach with rather small communication cost. Divide the input plane into four sub-squares C_1, ..., C_4 (see Figure 1a). We write a^1, ..., a^4 and b^1, ..., b^4 for the restrictions of the input to these four sub-areas, and assume that the following values have already been computed for each sub-square C_i:

- The x-coordinate of the leftmost occurrence of feature a in C_i
- The x-coordinate of the rightmost occurrence of feature b in C_i
- The y-coordinate of the lowest occurrence of feature a in C_i
- The y-coordinate of the highest occurrence of feature b in C_i
- The value of P_D^{n/4}(a^i, b^i)

[Figure 1: The 2-dimensional input plane. Occurrences of features in a are indicated by light squares, and occurrences of features in b are indicated by dark squares. Divide the input area into four sub-squares (a). Merging horizontally adjacent sub-squares (b). Merging vertically adjacent sub-squares (c).]

We employ a merging algorithm that uses this information to compute corresponding values for the whole input plane. The first four values can be computed by comparison-like operations. The computation of P_D(a, b) can be sketched as follows: First, check whether P_D^{n/4}(a^i, b^i) = 1 for some i ∈ {1, ..., 4}. Then, check the spatial relationships between feature occurrences in adjacent sub-squares. When checking spatial relationships between features from two horizontally adjacent sub-squares, only the lowest and the highest feature occurrences are crucial for the value of P_D (see Figure 1b). This is true, since the x-coordinates are already separated. When checking spatial relationships of features from two vertically adjacent sub-squares, only the leftmost and the rightmost feature occurrences are crucial for the value of P_D (see Figure 1c). This is true, since the y-coordinates are already separated. When checking spatial relationships of features from the lower left and the upper right sub-squares, it suffices to check whether there is an a-feature occurrence in the lower left and a b-feature occurrence in the upper right sub-square. Hence, one can reduce the amount of information needed from each sub-square to O(log n) bits. A code sketch of this merge step is given below.
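The following sketch spells out one merge step of this divide-and-conquer scheme: it combines the O(log n)-bit summaries of the four sub-squares into the summary of the parent square. It is our own illustration; the Summary record and all names are assumptions, and coordinates are kept in a global frame so that the sub-squares' disjointness supplies the "already separated" coordinate for free.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Summary:
        # O(log n)-bit summary of one sub-square; None means "feature absent"
        min_ax: Optional[int]   # leftmost x of an a-occurrence
        max_bx: Optional[int]   # rightmost x of a b-occurrence
        min_ay: Optional[int]   # lowest y of an a-occurrence
        max_by: Optional[int]   # highest y of a b-occurrence
        pd: int                 # value of P_D restricted to this sub-square

    def merge(ll, lr, ul, ur):
        """Combine summaries of the lower-left, lower-right, upper-left and
        upper-right sub-squares into the summary of the parent square."""
        def opt(f, vals):
            vals = [v for v in vals if v is not None]
            return f(vals) if vals else None

        kids = [ll, lr, ul, ur]
        pd = max(s.pd for s in kids)
        # horizontally adjacent pairs: x is separated by the split,
        # so only y-coordinates need to be compared
        for a_side, b_side in [(ll, lr), (ul, ur)]:
            if a_side.min_ay is not None and b_side.max_by is not None \
               and b_side.max_by > a_side.min_ay:
                pd = 1
        # vertically adjacent pairs: y is separated by the split,
        # so only x-coordinates need to be compared
        for a_side, b_side in [(ll, ul), (lr, ur)]:
            if a_side.min_ax is not None and b_side.max_bx is not None \
               and b_side.max_bx > a_side.min_ax:
                pd = 1
        # diagonal pair: any a in the lower left plus any b in the upper right
        if ll.min_ax is not None and ur.max_bx is not None:
            pd = 1
        return Summary(opt(min, [s.min_ax for s in kids]),
                       opt(max, [s.max_bx for s in kids]),
                       opt(min, [s.min_ay for s in kids]),
                       opt(max, [s.max_by for s in kids]),
                       pd)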
In the remaining part of the proof sketch, we present an efficient layout for a circuit that implements this recursive algorithm. We need a layout strategy that is compatible with the recursive two-dimensional division of the input plane. We adopt for this purpose a well-known design strategy: the H-tree (see (Mead and Rem, 1979)). An H-tree is a recursive tree layout on the 2-dimensional plane. Let H_k denote such a tree with 4^k leaves. The layout of H_1 is illustrated in Figure 2a. To construct an H-tree H_k, build an H-tree H_1 and replace its four leaves by H-trees H_{k-1} (see Figure 2b,c).

[Figure 2: The H-tree construction. Black squares represent sub-circuits for the merging algorithm. The shaded areas contain the leaves of the tree. The lightly striped areas represent busses of wires that run along the edges of the H-tree. The H-tree H_1 divides the input area into four sub-squares (a). To construct H_2, replace the leaves of H_1 by H-trees H_1 (b). To construct H_k, replace the leaves of H_1 by H-trees H_{k-1} (c).]

We need to modify the H-tree construction of Mead and Rem to make it applicable to our problem. The inner nodes of the tree are replaced by sub-circuits that implement the merging algorithm. Furthermore, each edge of the H-tree is replaced by a "bus" consisting of O(log m) wires if it originates in an area with m inputs. It is not difficult to show that this layout uses only linear total wire length. □

The linear total wire length of this circuit is up to a constant factor optimal for any circuit whose output depends on all of its n inputs. Note that most connections in this circuit are local, just like in a biological neural circuit. Thus, we see that minimizing total wire length tends to generate biology-like circuit structures. The next theorem shows that one can compute P_D faster (i.e., by a circuit with smaller depth) if one can afford a somewhat larger total wire length. This circuit construction, which is based on AND/OR gates of limited fan-in Δ, has the additional advantage that it can not just exhibit some pair (i, j) as witness for P_D(a, b) = 1 (provided such a witness exists), but it can exhibit in addition all j that can be used as witnesses together with some i. This property allows us to "chain" the global pattern detection problem formalized through the function P_D, and to decide within the same complexity bound whether for any fixed number k of input vectors a^(1), ..., a^(k) from {0,1}^n there exist locations i^(1), ..., i^(k) so that a^(m)_{i^(m)} = 1 for m = 1, ..., k and location i^(m+1) lies to the right and above location i^(m) for m = 1, ..., k-1. In fact, one can also compute a k-tuple of witnesses i^(1), ..., i^(k) within the same complexity bounds, provided it exists. This circuit design is based on an efficient layout for prefix computations.

Theorem 2.2  For any given n and Δ ∈ {2, ..., √n} one can compute the function P_D in depth O(log n / log Δ) by a feed-forward circuit consisting of O(n) AND/OR gates of fan-in ≤ Δ, with total wire length O(n · Δ · log n / log Δ). □

Another essential ingredient of translation- and scale-invariant global pattern recognition is the capability to detect whether a local feature c occurs in the middle between locations i and j where the local features a and b occur. This global pattern detection problem is formalized through the following function P_F : {0,1}^{3n} → {0,1}: If Σ_i a_i = Σ_i b_i = 1, then P_F(a, b, c) = 1 if and only if there exist i, j, k so that input location k lies in the middle of the line between locations i and j, and a_i = b_j = c_k = 1.

This function P_F can be computed very fast by circuits with the least possible total wire length (up to a constant factor), using threshold gates of fan-in up to √n:

Theorem 2.3  The function P_F can be computed - and witnesses can be exhibited - by a circuit with total wire length and area O(n), consisting of O(n) Boolean gates of fan-in 2 and O(√n) threshold gates of fan-in √n, in depth 7. The design of the circuit exploits that the computation of P_F can be reduced to the solution of two closely related 1-dimensional problems. □
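As a quick plausibility check of the linear wire-length claim for the modified H-tree of Theorem 2.1, one can evaluate the recurrence behind the layout numerically. The constants in this sketch are our own assumptions; only the growth rate matters.

    # Total wire length of the modified H-tree H_k: the four recursive copies
    # contribute 4 * W(k-1), and the three connecting edges of the top-level
    # "H" each have length proportional to 2^k grid units and carry a bus of
    # O(log 4^(k-1)) wires.

    def wire_length(k):
        if k == 0:
            return 0.0
        bus_width = 2 * (k - 1) + 1        # O(log of 4^(k-1)) wires per edge
        edge_len = 2 ** k                  # edge length grows with the side
        return 4 * wire_length(k - 1) + 3 * edge_len * bus_width

    for k in range(2, 13):
        n = 4 ** k                         # number of inputs covered by H_k
        print(k, wire_length(k) / n)       # ratio settles to a constant

The printed ratio converges (to 9 with these constants), i.e., the geometric shrinkage of the bus terms keeps the total wire length at O(n).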
3 Discussion

There exists a very large literature on neural circuits for translation-invariant pattern recognition; see http://www.cnl.salk.edu/~wiskott/Bibliographies/Invariances.html. Unfortunately there exists substantial disagreement regarding the interpretation of existing approaches; see http://www.ph.tn.tudelft.nl/PRInfo/shift/maillist.html. Virtually all positive results are based on computer simulations of small circuits, or on learning algorithms for concrete neural networks with a fixed input size n on the order of 20 or 30, without an analysis of how the required number of gates and the area or volume occupied by wires scale up with the input size. The computational performance of these networks is often reported in an anecdotal manner.

The goal of this article is to show that circuit complexity theory may become a useful ingredient for understanding the computational strategies of biological neural circuits, and for extracting from them portable principles that can be applied to novel artificial circuits.⁷ For that purpose we have introduced the total wire length as an abstract complexity measure that appears to be among the most salient ones in this context, and which can in principle be applied both to neural circuits in the cortex and to artificial circuitry. We would like to argue that only those computational strategies that can be implemented with subquadratic total wire length have a chance to reflect aspects of cortical information processing, and only those with almost linear total wire length are implementable in special-purpose VLSI chips for real-world sensory processing tasks.⁸ The relevance of the total wire length of cortical circuits has been emphasized by numerous neuroscientists, from Cajal (see for example p. 14 in (Cajal, 1995)) to (Chklovskii and Stevens, 2000). On the other hand, the total wire length of a circuit layout is also closely related to the area required by a VLSI implementation of such a circuit (see (Savage, 1998)).

We have formalized some basic computational problems, which appear to underlie various translation- and scale-invariant sensory processing tasks, as a first set of benchmark functions for a circuit complexity theory of sensory processing. We have presented designs for circuits that compute these benchmark functions with small (in most cases linear or almost linear) total wire length, and with constant factors of moderate size. The computational strategies of these circuits differ strongly from those that have been considered in previous approaches, which failed to take the limitations imposed by the realistically available amount of total wire length into account.

⁷ We do not want to argue that learning plays no role in the design and optimization of circuits for specific sensory processing tasks; on the contrary. But one of the few points where the discussion from http://www.ph.tn.tudelft.nl/PRInfo/shift/maillist.html agreed is that translation- and scale-invariant pattern recognition is a task which is so demanding that learning algorithms have to be supported by pre-existing circuit structures.

⁸ Of course there are other important complexity measures for circuits, such as energy consumption, besides those that have been addressed in this article.

References

Abeles, M. (1998). Corticonics: Neural Circuits of the Cerebral Cortex, Cambridge Univ. Press.
Braitenberg, V., Schüz, A. (1998). Cortex: Statistics and Geometry of Neuronal Connectivity, 2nd ed., Springer Verlag.
Cajal, S. R. (1995). Histology of the Nervous System, volumes 1 and 2, Oxford University Press (New York).
Chklovskii, D. B. and Stevens, C. F. (2000). Wiring optimization in the brain. Advances in Neural Information Processing Systems, vol. 12, MIT Press, 103-107.
Koch, C. (1999). Biophysics of Computation, Oxford Univ. Press.
Lazzaro, J., Ryckebusch, S., Mahowald, M. A., Mead, C. A. (1989). Winner-take-all networks of O(n) complexity. Advances in Neural Information Processing Systems, vol. 1, Morgan Kaufmann (San Mateo), 703-711.
Mead, C. and Rem, M. (1979). Cost and performance of VLSI computing structures. IEEE J. Solid-State Circuits SC-14 (1979), 455-462.
Mead, C. (1989). Analog VLSI and Neural Systems. Addison-Wesley (Reading, MA, USA).
Savage, J. E. (1998). Models of Computation: Exploring the Power of Computing. Addison-Wesley (Reading, MA, USA).
Shepherd, G. M. (1998). The Synaptic Organization of the Brain, 2nd ed., Oxford Univ. Press.
Reinforcement Learning with Function Approximation Converges to a Region

Geoffrey J. Gordon
[email protected]

Abstract

Many algorithms for approximate reinforcement learning are not known to converge. In fact, there are counterexamples showing that the adjustable weights in some algorithms may oscillate within a region rather than converging to a point. This paper shows that, for two popular algorithms, such oscillation is the worst that can happen: the weights cannot diverge, but instead must converge to a bounded region. The algorithms are SARSA(0) and V(0); the latter algorithm was used in the well-known TD-Gammon program.

1 Introduction

Although there are convergent online algorithms (such as TD(λ) [1]) for learning the parameters of a linear approximation to the value function of a Markov process, no way is known to extend these convergence proofs to the task of online approximation of either the state-value (V*) or the action-value (Q*) function of a general Markov decision process. In fact, there are known counterexamples to many proposed algorithms. For example, fitted value iteration can diverge even for Markov processes [2]; Q-learning with linear function approximators can diverge, even when the states are updated according to a fixed update policy [3]; and SARSA(0) can oscillate between multiple policies with different value functions [4].

Given the similarities between SARSA(0) and Q-learning, and between V(0) and value iteration, one might suppose that their convergence properties would be identical. That is not the case: while Q-learning can diverge for some exploration strategies, this paper proves that the iterates for trajectory-based¹ SARSA(0) converge with probability 1 to a fixed region. Similarly, while value iteration can diverge for some exploration strategies, this paper proves that the iterates for trajectory-based V(0) converge with probability 1 to a fixed region.

¹ In a "trajectory-based" algorithm, the exploration policy may not change within a single episode of learning. The policy may change between episodes, and the value function may change within a single episode. (Episodes end when the agent enters a terminal state. This paper considers only episodic tasks, but since any discounted task can be transformed into an equivalent episodic task, the algorithms apply to non-episodic tasks as well.)

The question of the convergence behavior of SARSA(λ) is one of the four open theoretical questions of reinforcement learning that Sutton [5] identifies as "particularly important, pressing, or opportune." This paper covers SARSA(0), and together
Finally, in practice it often happens that after some initial exploration period, only a few different policies are ever greedy; if this is the case, the strategy of this paper could be used to prove much tighter bounds. Results similar to the ones presented here were developed independently in [6]. 2 The algorithms The SARSA(O) algorithm was first suggested in [7]. The V(O) algorithm was popularized by its use in the TD-Gammon backgammon playing program [8]. 2 Fix a Markov decision process M, with a finite set 8 of states, a finite set A of actions, a terminal state T, an initial distribution 8 0 over 8, a one-step reward function r : 8 x A -+ R, and a transition function 8 : 8 x A -+ 8 U {T}. (M may also have a discount factor 'Y specifying how to trade future rewards against present ones. Here we fix 'Y = 1, but our results carry through to 'Y < 1.) Both the transition and reward functions may be stochastic, so long as successive samples are independent (the Markov property) and the reward has bounded expectation and variance. We assume that all states in 8 are reachable with positive probability. We define a policy 7r to be a function mapping states to probability distributions over actions. Given a policy we can sample a trajectory (a sequence of states, actions, and one-step rewards) by the following rule: begin by selecting a state So according to 8 0 . Now choose an action ao according to 7r(so), Now choose a onestep reward ro according to r(so, ao). Finally choose a new state Sl according to 8(so, ao). If Sl = T, stop; otherwise repeat. We assume that all policies are proper, that is, that the agent reaches T with probability 1 no matter what policy it follows. (This assumption is satisfied trivially if'Y < 1.) The reward for a trajectory is the sum of all of its one-step rewards. Our goal is to find an optimal policy, that is, a policy which on average generates trajectories with the highest possible reward. Define Q*(s, a) to be the best total expected reward that we can achieve by starting in state s, performing action a, and acting optimally afterwards. Define V*(s) = maxaQ*(s, a). Knowledge of either Q* or the combination of V*, 8, and r is enough to determine an optimal policy. The SARSA(O) algorithm maintains an approximation to Q*. We will write Q(s,a) for s E 8 and a E A to refer to this approximation. We will assume that Q is a full-rank linear function of some parameters w. For convenience of notation, we will write Q(T, a) = 0 for all a E A, and tack an arbitrary action onto the end of all trajectories (which would otherwise end with the terminal state). After seeing 2The proof given here does not cover the TD-Gammon program, since TD-Gammon uses a nonlinear function approximator to represent its value function. Interestingly, though, the proof extends easily to cover games such as backgammon in addition to MDPs. It also extends to cover SARSA('x) and V(,x) for ,X > O. a trajectory fragment s, a, r, s', a', the SARSA(O) algorithm updates Q(s, a) +- r + Q(s', a') The notation Q(s, a) +- V means that the parameters, w, which represent Q(s, a) should be adjusted by gradient descent to reduce the error (Q(s, a) - V)2; that is, for some preselected learning rate 0: ~ 0, Wnew = 8 Wold + 0:(V - Q(s, a)) 8w Q(s, a) For convenience, we assume that 0: remains constant within a single trajectory. We also make the standard assumption that the sequence of learning rates is fixed before the start of learning and satisfies Et O:t = 00 and Et o:~ < 00. 
We will consider only the trajectory-based version of SARSA(0). This version changes policies only between trajectories. At the beginning of each trajectory, it selects the ε-greedy policy for its current Q function. From state s, the ε-greedy policy chooses the action $\arg\max_a Q(s, a)$ with probability 1 − ε, and otherwise selects uniformly at random among all actions. This rule ensures that, no matter the sequence of learned Q functions, each state-action pair will be visited infinitely often. (The use of ε-greedy policies is not essential. We just need to be able to find a region that contains all of the approximate value functions for every policy considered, and a bound on the convergence rate of TD(0).)

We can compare the SARSA(0) update rule to the one for Q-learning:

$$Q(s, a) \leftarrow r + \max_b Q(s', b)$$

Often a′ in the SARSA(0) update rule will be the same as the maximizing b in the Q-learning update rule; the difference only appears when the agent takes an exploring action, i.e., one which is not greedy for the current Q function.

The V(0) algorithm maintains an approximation to $V^*$, which we will write V(s) for all s ∈ S. Again, we will assume V is a full-rank linear function of parameters w, and V(T) is held fixed at 0. After seeing a trajectory fragment s, a, r, s′, V(0) sets

$$V(s) \leftarrow r + V(s')$$

This update ignores a. Often a is chosen according to a greedy or ε-greedy policy for a recent V. However, for our analysis we only need to assume that we consider finitely many policies and that the policy remains fixed during each trajectory.

We leave open the question of whether updates to w happen immediately after each transition or only at the end of each trajectory. As pointed out in [9], this difference will not affect convergence: the updates within a single trajectory are O(α), so they cause a change in Q(s, a) or V(s) of O(α), which means subsequent updates are affected by at most O(α²). Since α is decaying to zero, the O(α²) terms can be neglected. (If we were to change policies during the trajectory, this argument would no longer hold, since small changes in Q or V can cause large changes in the policy.)
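The trajectory-based convention above (exploration policy frozen for a whole episode, value function updated within it) is easy to mis-implement, so here is a hedged sketch of one episode, reusing `sarsa0_update` from the earlier sketch. The `env` interface (`reset`, `step`, `is_terminal`) is an assumed placeholder, not part of the paper.

```python
import numpy as np

def epsilon_greedy(rng, w, phi, s, actions, eps):
    """Choose argmax_a Q(s, a) with probability 1 - eps, else uniformly."""
    if rng.random() < eps:
        return actions[rng.integers(len(actions))]
    return actions[int(np.argmax([np.dot(w, phi(s, a)) for a in actions]))]

def run_episode(rng, env, w, phi, actions, eps, alpha):
    """Trajectory-based SARSA(0): the exploration policy is determined by
    weights frozen at the start of the episode, while w itself keeps
    changing within the episode."""
    w_frozen = w.copy()
    s = env.reset()
    a = epsilon_greedy(rng, w_frozen, phi, s, actions, eps)
    while not env.is_terminal(s):
        r, s_next = env.step(s, a)
        a_next = epsilon_greedy(rng, w_frozen, phi, s_next, actions, eps)
        w = sarsa0_update(w, phi, s, a, r, s_next, a_next, alpha)
        s, a = s_next, a_next
    return w
```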
3 The result

Our result is that the weights w in either SARSA(0) or V(0) converge with probability 1 to a fixed region. The proof of the result is based on the following intuition: while SARSA(0) and V(0) might consider many different policies over time, on any given trajectory they always follow the TD(0) update rule for some policy. The TD(0) update is, under general conditions, a 2-norm contraction, and so would converge to its fixed point if it were applied repeatedly; what causes SARSA(0) and V(0) not to converge to a point is just that they consider different policies (and so take steps towards different fixed points) during different trajectories. Crucially, under general conditions, all of these fixed points are within some bounded region. So, we can view the SARSA(0) and V(0) update rules as contraction mappings plus a bounded amount of "slop." With this observation, standard convergence theorems show that the weight vectors generated by SARSA(0) and V(0) cannot diverge.

Theorem 1 For any Markov decision process M satisfying our assumptions, there is a bounded region R such that the SARSA(0) algorithm, when acting on M, produces a series of weight vectors which with probability 1 converges to R. Similarly, there is another bounded region R′ such that the V(0) algorithm acting on M produces a series of weight vectors converging with probability 1 to R′.

PROOF: Lemma 2, below, shows that both the SARSA(0) and V(0) updates can be written in the form

$$w_{t+1} = w_t - \alpha_t (A_t w_t - r_t + \epsilon_t)$$

where $A_t$ is positive definite, $\alpha_t$ is the current learning rate, $E(\epsilon_t) = 0$, $\mathrm{Var}(\epsilon_t) \le K(1 + \|w_t\|^2)$, and $A_t$ and $r_t$ depend only on the currently greedy policy. ($A_t$ and $r_t$ represent, in a manner described in the lemma, the transition probabilities and one-step costs which result from following the current policy. Of course, $w_t$, $A_t$, and $r_t$ will be different depending on whether we are following SARSA(0) or V(0).)

Since $A_t$ is positive definite, the SARSA(0) and V(0) updates are 2-norm contractions for small enough $\alpha_t$. So, if we kept the policy fixed rather than changing it at the beginning of each trajectory, standard results such as Lemma 1 below would guarantee convergence. The intuition is that we can define a nonnegative potential function J(w) and show that, on average, the updates tend to decrease J(w) as long as $\alpha_t$ is small enough and J(w) starts out large enough compared to $\alpha_t$.

To apply Lemma 1 under the assumption that we keep the policy constant rather than changing it every trajectory, write $A_t = A$ and $r_t = r$ for all t, and write $w_* = A^{-1} r$. Let ρ be the smallest eigenvalue of A (which must be real and positive since A is positive definite). Write $s_t = A w_t - r + \epsilon_t$ for the update direction at step t. Then if we take $J(w) = \|w - w_*\|^2$,

$$E(\nabla J(w_t)^T s_t \mid w_t) = 2(w_t - w_*)^T (A w_t - r + E(\epsilon_t)) = 2(w_t - w_*)^T (A w_t - A w_*) \ge 2\rho \|w_t - w_*\|^2 = 2\rho\, J(w_t)$$

so that $-s_t$ is a descent direction in the sense required by the lemma. It is easy to check the lemma's variance condition. So, Lemma 1 shows that $J(w_t)$ converges with probability 1 to 0, which means $w_t$ must converge with probability 1 to $w_*$.

If we pick an arbitrary vector u and define $H(w) = \max(0, \|w - u\| - C)^2$ for a sufficiently large constant C, then the same argument reaches the weaker conclusion that $w_t$ must converge with probability 1 to a sphere of radius C centered at u. To see why, note that $-s_t$ is also a descent direction for H(w): inside the sphere, H = 0 and ∇H = 0, so the descent condition is satisfied trivially. Outside the sphere,

$$\nabla H(w) = 2(w - u)\,\frac{\|w - u\| - C}{\|w - u\|} = d(w)\,(w - u)$$

and

$$\nabla H(w_t)^T E(s_t \mid w_t) = d(w_t)\,(w_t - u)^T E(s_t \mid w_t) = d(w_t)\,(w_t - w_* + w_* - u)^T A (w_t - w_*) \ge d(w_t)\left(\rho \|w_t - w_*\|^2 - \|w_* - u\|\,\|A\|\,\|w_t - w_*\|\right)$$

The positive term will be larger than the negative one if $\|w_t - w_*\|$ is large enough. So, if we choose C large enough, the descent condition will be satisfied. The variance condition is again easy to check. Lemma 3 shows that ∇H is Lipschitz. So, Lemma 1 shows that $H(w_t)$ converges with probability 1 to 0, which means that $w_t$ must converge with probability 1 to the sphere of radius C centered at u.

But now we are done: since there are finitely many policies that SARSA(0) or V(0) can consider, we can pick any u and then choose a C large enough that the above argument holds for all policies simultaneously. With this choice of C the update for any policy decreases $H(w_t)$ on average as long as $\alpha_t$ is small enough, so the update for SARSA(0) or V(0) does too, and Lemma 1 applies. □

The following lemma is Corollary 1 of [10]. In the statement of the lemma, a Lipschitz continuous function F is one for which there exists a constant L so that $\|F(u) - F(w)\| \le L \|u - w\|$ for all u and w. The Lipschitz condition is essentially a uniform bound on the derivative of F.

Lemma 1 Let J be a differentiable function, bounded below by $J^*$, and let ∇J be Lipschitz continuous.
Suppose the sequence $w_t$ satisfies

$$w_{t+1} = w_t - \alpha_t s_t$$

for random vectors $s_t$ which, given $w_t$, are independent of the earlier iterates $w_{t-1}, w_{t-2}, \ldots$. Suppose $-s_t$ is a descent direction for J in the sense that

$$E(s_t \mid w_t)^T \nabla J(w_t) \ge \delta(\epsilon) > 0 \quad \text{whenever } J(w_t) > J^* + \epsilon.$$

Suppose also that

$$E(\|s_t\|^2 \mid w_t) \le K_1 J(w_t) + K_2\, E(s_t \mid w_t)^T \nabla J(w_t) + K_3$$

and finally that the constants $\alpha_t$ satisfy $\alpha_t > 0$, $\sum_t \alpha_t = \infty$, and $\sum_t \alpha_t^2 < \infty$. Then $J(w_t) \to J^*$ with probability 1.

Most of the work in proving the next lemma is already present in [1]. The transformation from an MDP under a fixed policy to a Markov chain is standard.

Lemma 2 The update made by SARSA(0) or V(0) during a single trajectory can be written in the form

$$w_{\mathrm{new}} = w_{\mathrm{old}} - \alpha\,(A_\pi w_{\mathrm{old}} - r_\pi + \epsilon)$$

where the constant matrix $A_\pi$ and constant vector $r_\pi$ depend on the currently greedy policy π, α is the current learning rate, and E(ε) = 0. Furthermore, $A_\pi$ is positive definite, and there is a constant K such that $\mathrm{Var}(\epsilon) \le K(1 + \|w\|^2)$.

PROOF: Consider the following Markov process $M_\pi$: $M_\pi$ has one state for each state-action pair in M. If M has a transition which goes from state s under action a with reward r to state s′ with probability p, then $M_\pi$ has a transition from state (s, a) with reward r to state (s′, a′) for every a′; the probability of this transition is $p\,\pi(a' \mid s')$. We will represent the value function for $M_\pi$ in the same way that we represented the Q function for M; in other words, the representation for V((s, a)) is the same as the representation for Q(s, a). With these definitions, it is easy to see that TD(0) acting on $M_\pi$ produces exactly the same sequence of parameter changes as SARSA(0) acting on M under the fixed policy π. (And since $\pi(a \mid s) > 0$, every state of $M_\pi$ will be visited infinitely often.)

Write $T_\pi$ for the transition probability matrix of the above Markov process. That is, the entry of $T_\pi$ in row (s, a) and column (s′, a′) will be equal to the probability of taking a step to (s′, a′) given that we start in (s, a). By definition, $T_\pi$ is substochastic. That is, it has nonnegative entries, and its row sums are less than or equal to 1. Write s for the vector whose (s, a)-th element is $S_0(s)\,\pi(a \mid s)$, that is, the probability that we start in state s and take action a. Write $d_\pi = (I - T_\pi^T)^{-1} s$, where I is the identity matrix. As demonstrated in, e.g., [11], $d_\pi$ is the vector of expected visitation frequencies under π; that is, the element of $d_\pi$ corresponding to state s and action a is the expected number of times that the agent will visit state s and select action a during a single trajectory following policy π. Write $D_\pi$ for the diagonal matrix with $d_\pi$ on its diagonal. Write r for the vector of expected rewards; that is, the component of r corresponding to state s and action a is E(r(s, a)). Finally write X for the Jacobian matrix $\frac{\partial Q}{\partial w}$. With this notation, Sutton [1] showed that the expected TD(0) update is

$$E(w_{\mathrm{new}} \mid w_{\mathrm{old}}) = w_{\mathrm{old}} - \alpha X^T D_\pi (I - T_\pi) X w_{\mathrm{old}} + \alpha X^T D_\pi r$$

(Actually, he only considered the case where all rewards are zero except on transitions from nonterminal to terminal states, but his argument works equally well for the more general case where nonzero rewards are allowed everywhere.) So, we can take $A_\pi = X^T D_\pi (I - T_\pi) X$ and $r_\pi = X^T D_\pi r$ to make E(ε) = 0. Furthermore, Sutton showed that, as long as the agent reaches the terminal state with probability 1 (in other words, as long as π is proper) and as long as every state is visited with positive probability (which is true since all states are reachable and π has a nonzero probability of choosing every action), the matrix $D_\pi (I - T_\pi)$ is strictly positive definite. Therefore, so is $A_\pi$.
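Before the variance bound below, the positive-definiteness claim is easy to probe numerically. The following sketch fabricates a small random substochastic $T_\pi$ (a proper policy, since every row leaks some probability to the terminal state), forms $A_\pi = X^T D_\pi (I - T_\pi) X$, and checks the eigenvalues of its symmetric part; the random construction is purely illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3                                   # states, features

# Substochastic transition matrix of a proper policy: each row sums to
# less than 1, the missing mass being the chance of terminating.
T = rng.random((n, n))
T = 0.9 * T / T.sum(axis=1, keepdims=True)

s0 = np.full(n, 1.0 / n)                      # start-state distribution
d = np.linalg.solve(np.eye(n) - T.T, s0)      # expected visit counts d_pi
D = np.diag(d)

X = rng.standard_normal((n, k))               # full-rank feature Jacobian
A = X.T @ D @ (np.eye(n) - T) @ X

# A need not be symmetric; positive definiteness here means the symmetric
# part (A + A^T)/2 has strictly positive eigenvalues.
eigs = np.linalg.eigvalsh((A + A.T) / 2)
print(eigs)                                   # all entries should be > 0
```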
Finally, as can be seen from Sutton's equations on p. 25, there are two sources of variance in the update direction: variation in the number of times each transition is visited, and variation in the one-step rewards. The visitation frequencies and the one-step rewards both have bounded variance, and are independent of one another. They enter into the overall update in two ways: there is one set of terms which is bilinear in the one-step rewards and the visitation frequencies, and there is another set of terms which is bilinear in the visitation frequencies and the weights w. The former set of terms has constant variance. Because the policy is fixed, w is independent of the visitation frequencies, and so the latter set of terms has variance proportional to $\|w\|^2$. So, there is a constant K such that the total variance in ε can be bounded by $K(1 + \|w\|^2)$.

A similar but simpler argument applies to V(0). In this case we define $M_\pi$ to have the same states as M, and to have the transition matrix $T_\pi$ whose element (s, s′) is the probability of landing in s′ in M on step t + 1, given that we start in s at step t and follow π. Write s for the vector of starting probabilities, that is, $s_x = S_0(x)$. Now define $X = \frac{\partial V}{\partial w}$ and $d_\pi = (I - T_\pi^T)^{-1} s$. Since we have assumed that all policies are proper and that every policy considered has a positive probability of reaching any state, the update matrix $A_\pi = X^T D_\pi (I - T_\pi) X$ is strictly positive definite. □

Lemma 3 The gradient of the function $H(w) = \max(0, \|w\| - 1)^2$ is Lipschitz continuous.

PROOF: Inside the unit sphere, H and all of its derivatives are uniformly zero. Outside, we have

$$\nabla H = w\, d(w), \qquad d(w) = \frac{2(\|w\| - 1)}{\|w\|},$$

and, using $\nabla d(w) = 2w / \|w\|^3$,

$$\nabla^2 H = d(w)\, I + w\, \nabla d(w)^T = d(w)\, I + \frac{w w^T}{\|w\|^2}\,\big(2 - d(w)\big).$$

The norm of the first term is d(w), the norm of the second is 2 − d(w), and since one of the terms is a multiple of I the norms add. So, the norm of $\nabla^2 H$ is 0 inside the unit sphere and 2 outside. At the boundary of the unit sphere, ∇H is continuous, and its directional derivatives from every direction are bounded by the argument above. So, ∇H is Lipschitz continuous. □

Acknowledgements

Thanks to Andrew Moore and to the anonymous reviewers for helpful comments. This work was supported in part by DARPA contract number F30602-97-1-0215, and in part by NSF KDI award number DMS-9873442. The opinions and conclusions are the author's and do not reflect those of the US government or its agencies.

References

[1] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.
[2] Geoffrey J. Gordon. Stable function approximation in dynamic programming. Technical Report CMU-CS-95-103, Carnegie Mellon University, 1995.
[3] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning: Proceedings of the Twelfth International Conference, San Francisco, CA, 1995. Morgan Kaufmann.
[4] Geoffrey J. Gordon. Chattering in SARSA(λ). Internal report, 1996. CMU Learning Lab. Available from www.cs.cmu.edu/~ggordon.
[5] R. S. Sutton. Open theoretical questions in reinforcement learning. In P. Fischer and H. U. Simon, editors, Computational Learning Theory (Proceedings of EuroCOLT'99), pages 11-17, 1999.
[6] D. P. de Farias and B. Van Roy. On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization Theory and Applications, 105(3), 2000.
[7] Gavin A. Rummery and Mahesan Niranjan.
On-line Q-learning using connectionist systems. Technical Report 166, Cambridge University Engineering Department, 1994.
[8] G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6:215-219, 1994.
[9] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6:1185-1201, 1994.
[10] B. T. Polyak and Ya. Z. Tsypkin. Pseudogradient adaptation and training algorithms. Automation and Remote Control, 34(3):377-397, 1973. Translated from Avtomatika i Telemekhanika.
[11] J. G. Kemeny and J. L. Snell. Finite Markov Chains. Van Nostrand-Reinhold, New York, 1960.
Smart Vision Chip Fabricated Using Three Dimensional Integration Technology

H. Kurino, M. Nakagawa, K. W. Lee, T. Nakamura, Y. Yamada, K. T. Park and M. Koyanagi
Dept. of Machine Intelligence and Systems Engineering, Tohoku University
01, Aza-Aramaki, Aoba-ku, Sendai 980-8579, Japan
[email protected]

Abstract

The smart vision chip has a large potential for application in general purpose high speed image processing systems. In order to fabricate smart vision chips including photo detectors compactly, we have proposed the application of three dimensional LSI technology for smart vision chips. Three dimensional technology has great potential to realize new neuromorphic systems inspired by not only the biological function but also the biological structure. In this paper, we describe our three dimensional LSI technology for neuromorphic circuits and the design of smart vision chips.

1 Introduction

Recently, the demand for very fast image processing systems with real time operation capability has significantly increased. Conventional image processing systems, based on the system level integration of a camera and a digital processor, do not have the potential for application in general purpose consumer electronic products. This is simply due to the cost, size and complexity of these systems. Therefore the smart vision chip will be an inevitable component of future intelligent systems. In smart vision chips, 2D images are simultaneously processed in parallel, so very high speed image processing can be realized. Each pixel includes a photo-detector. In order to receive as much light signal as possible, the photo-detector should occupy a large proportion of the pixel area. However, the successive processing circuits must become larger in each pixel to realize high level image processing. It is very difficult to achieve smart vision chips using conventional two dimensional (2D) LSI technology because such smart vision chips have low fill-factor and low resolution. This problem can be overcome if three dimensional (3D) integration technology is employed for the smart vision chip. In this paper, we propose a smart vision chip fabricated by three dimensional integration technology. We also discuss the key technologies for realizing three dimensional integration and preliminary test results of three dimensional image sensor chips.

2 Three Dimensional Integrated Vision Chips

Figure 1 shows the cross-sectional structure of the three dimensional integrated vision chip. Several circuit layers with different functions are stacked into one chip in a 3D LSI. For example, the first layer consists of a photo detector array acting like the photoreceptive cells in the retina, the second layer is horizontal/bipolar cell circuits, the third layer is ganglion cell circuits, and so on. Each circuit layer is stacked and electrically connected vertically using buried interconnections and micro bumps. By using three dimensional integration technology, a photo detector can be formed with a high fill-factor and high resolution, because several successive processing circuits with large areas are formed on the lower layers underneath the photo detector layer. Every photo detector is directly connected with successive processing circuits (i.e. horizontal and bipolar cell circuits) in parallel via the vertical interconnections. The signals in every pixel are simultaneously transferred in the vertical direction and processed in parallel in each layer. Therefore high performance real time vision chips can be realized.
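The stacked, per-pixel-parallel organization can be caricatured in software: each pixel's photodetector output passes vertically through the processing layers, independently of (and conceptually in parallel with) every other pixel. The layer functions below are arbitrary placeholders standing in for the analog horizontal/bipolar and ganglion circuits; nothing about their specific form comes from the paper.

```python
import numpy as np

def photodetector(image):
    """First layer: light-to-signal conversion (placeholder: identity)."""
    return image.astype(float)

def horizontal_bipolar(x):
    """Second layer: center-surround style local processing (placeholder)."""
    pad = np.pad(x, 1, mode="edge")
    surround = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
    return x - surround

def ganglion(x):
    """Third layer: thresholded output (placeholder)."""
    return np.maximum(x, 0.0)

# Signals flow vertically through the stack, one value per pixel per layer,
# mimicking the chip's per-pixel vertical interconnections.
image = np.random.rand(64, 64)
out = ganglion(horizontal_bipolar(photodetector(image)))
```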
We consider the 3D LSI well suited to realizing neuromorphic LSI, because the three dimensional structure is quite similar to the structure of the retina or cortex. Three dimensional technology will realize new neuromorphic systems inspired by not only the biological function but also the biological structure.

Fig. 1: Cross-sectional structure of the three dimensional vision chip (glass wafer; photoreceptor layer; horizontal and bipolar cell layer; ganglion cell layer).

Figure 2 shows the neuromorphic analog circuits implemented in the 3D LSI. The circuits are divided into three circuit layers. Photodiodes and photocircuits are designed on the first layer. Horizontal/bipolar cell circuits and ganglion cell circuits are on the second and third layers, respectively. Each circuit layer is fabricated on different Si wafers and stacked into a 3D LSI.

Fig. 2: Circuit diagram of the three dimensional vision chip.

Fig. 3: Layout of the three dimensional vision chip (photodiode layer through third layer).

Light signals are converted into electrical analog signals by photodiodes and photocircuits on the first layer. The electric signals are transferred from the first layer to the second layer through the vertical interconnections. The operational amplifiers and resistor network on the
Shape Context: A new descriptor for shape matching and object recognition

Serge Belongie, Jitendra Malik and Jan Puzicha
Department of Electrical Engineering and Computer Sciences
University of California at Berkeley
Berkeley, CA 94720, USA
{sjb, malik, puzicha}@cs.berkeley.edu

Abstract

We develop an approach to object recognition based on matching shapes and using a resulting measure of similarity in a nearest neighbor classifier. The key algorithmic problem here is that of finding pointwise correspondences between an image shape and a stored prototype shape. We introduce a new shape descriptor, the shape context, which makes this possible, using a simple and robust algorithm. The shape context at a point captures the distribution over relative positions of other shape points and thus summarizes global shape in a rich, local descriptor. We demonstrate that shape contexts greatly simplify recovery of correspondences between points of two given shapes. Once shapes are aligned, shape contexts are used to define a robust score for measuring shape similarity. We have used this score in a nearest-neighbor classifier for recognition of handwritten digits as well as 3D objects, using exactly the same distance function. On the benchmark MNIST dataset of handwritten digits, this yields an error rate of 0.63%, outperforming other published techniques.

1 Introduction

The last decade has seen increased application of statistical pattern recognition techniques to the problem of object recognition from images. Typically, an image block with n pixels is regarded as an n-dimensional feature vector formed by concatenating the brightness values of the pixels. Given this representation, a number of different strategies have been tried, e.g. nearest-neighbor techniques after extracting principal components [15, 13], convolutional neural networks [12], and support vector machines [14, 5]. Impressive performance has been demonstrated on datasets such as digits and faces.

A vector of pixel brightness values is a somewhat unsatisfactory representation of an object. Basic invariances, e.g. to translation, scale and small amounts of rotation, must be obtained by suitable pre-processing or by the use of enormous amounts of training data [12]. Instead, we will try to extract "shape", which by definition is required to be invariant under a group of transformations. The problem then becomes that of operationalizing a definition of shape. The literature in computer vision and pattern recognition is full of definitions of shape descriptors and distance measures, ranging from moments and Fourier descriptors to the Hausdorff distance and the medial axis transform. (For a recent overview, see [16].) Most of these approaches suffer from one of two difficulties: (1) Mapping the shape to a small number of numbers, e.g. moments, loses information. Inevitably, this means sacrificing discriminative power. (2) Descriptors restricted to silhouettes and closed curves are of limited applicability. Shape is a much more general concept.

Fundamentally, shape is about relative positional information. This has motivated approaches such as [1], which find key points or landmarks and recognize objects using the spatial arrangements of point sets. However, not all objects have distinguished key points (think of a circle for instance), and using key points alone sacrifices the shape information available in smooth portions of object contours.
Our approach therefore uses a general representation of shape: a set of points sampled from the contours on the object. Each point is associated with a novel descriptor, the shape context, which describes the coarse arrangement of the rest of the shape with respect to the point. This descriptor will be different for different points on a single shape S; however, corresponding (homologous) points on similar shapes S and S′ will tend to have similar shape contexts. Correspondences between the point sets of S and S′ can be found by solving a bipartite weighted graph matching problem with edge weights $C_{ij}$ defined by the similarity of the shape contexts of points i and j. Given correspondences, we can effectively calculate the similarity between the shapes S and S′. This similarity measure is then employed in a nearest-neighbor classifier for object recognition.

The core of our work is the concept of shape contexts and its use for solving the correspondence problem between two shapes. It can be compared to an alternative framework for matching point sets due to Gold, Rangarajan and collaborators (e.g. [7, 6]). They propose an iterative optimization algorithm to jointly determine point correspondences and underlying image transformations. The cost measure is Euclidean distance between the first point set and a transformed version of the second point set. This formulation leads to a difficult non-convex optimization problem which is solved using deterministic annealing. Another related approach is elastic graph matching [11], which also leads to a difficult stochastic optimization problem.

2 Matching with Shape Contexts

In our approach, a shape is represented by a discrete set of points sampled from the internal or external contours on the shape. These can be obtained as locations of edge pixels as found by an edge detector, giving us a set $P = \{p_1, \ldots, p_n\}$, $p_i \in \mathbb{R}^2$, of n points. They need not, and typically will not, correspond to key points such as maxima of curvature or inflection points. We prefer to sample the shape with roughly uniform spacing, though this is also not critical. Fig. 1(a, b) shows sample points for two shapes.

For each point $p_i$ on the first shape, we want to find the "best" matching point $q_j$ on the second shape. This is a correspondence problem similar to that in stereopsis. Experience there suggests that matching is easier if one uses a rich local descriptor instead of just the brightness at a single pixel or edge location. Rich descriptors reduce the ambiguity in matching. In this paper, we propose a descriptor, the shape context, that could play such a role in shape matching. Consider the set of vectors originating from a point to all other sample points on a shape. These vectors express the configuration of the entire shape relative to the reference point. Obviously, this set of n − 1 vectors is a rich description, since as n gets large, the representation of the shape becomes exact.

Figure 1: Shape context computation and matching. (a, b) Sampled edge points of two shapes. (c) Diagram of log-polar histogram bins used in computing the shape contexts; we use 5 bins for log r and 12 bins for θ. (d-f) Example shape contexts for three reference samples marked in (a, b). Each shape context is a log-polar histogram of the coordinates of the rest of the point set measured using the reference point as the origin (dark = large value). Note the visual similarity of the shape contexts for the first two marked samples, which were computed for relatively similar points on the two shapes. By contrast, the shape context for the third sample is quite different. (g) Correspondences found using bipartite matching, with costs defined by the $\chi^2$ distance between histograms.
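The descriptor computation is direct to transcribe: for each sample point, histogram the log-distance and angle to every other point into 5 × 12 log-polar bins, normalizing distances by the median pairwise distance. A minimal numpy sketch follows; the radial bin range `[r_inner, r_outer]` is an illustrative choice, not a value given in the paper.

```python
import numpy as np

def shape_contexts(points, n_r=5, n_theta=12, r_inner=0.125, r_outer=2.0):
    """Log-polar shape context histograms for an (n, 2) array of points.

    Distances are normalized by the median pairwise distance, making the
    descriptors invariant to translation and scale.
    """
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]      # vectors p_j - p_i
    dist = np.linalg.norm(diff, axis=2)
    dist = dist / np.median(dist[dist > 0])             # units of median distance
    theta = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)

    r_edges = np.logspace(np.log10(r_inner), np.log10(r_outer), n_r + 1)
    r_bin = np.digitize(dist, r_edges) - 1              # -1 below range, n_r above
    t_bin = np.floor(theta / (2 * np.pi / n_theta)).astype(int)

    h = np.zeros((n, n_r, n_theta))
    for i in range(n):
        for j in range(n):
            if i != j and 0 <= r_bin[i, j] < n_r:
                h[i, r_bin[i, j], t_bin[i, j] % n_theta] += 1
    return h.reshape(n, -1) / (n - 1)                   # normalized histograms
```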
The full set of vectors as a shape descriptor is much too detailed, since shapes and their sampled representations may vary from one instance to another in a category. We identify the distribution over relative positions as a more robust and compact, yet highly discriminative descriptor. For a point $p_i$ on the shape, we compute a coarse histogram $h_i$ of the relative coordinates of the remaining n − 1 points,

$$h_i(k) = \#\{q \ne p_i : (q - p_i) \in \mathrm{bin}(k)\}.$$

This histogram is defined to be the shape context of $p_i$. The descriptor should be more sensitive to differences in nearby pixels. We therefore use a log-polar coordinate system (see Fig. 1(c)). All distances are measured in units of α, where α is the median distance between the n² point pairs in the shape. Note that the construction ensures that global translation or scaling of a shape will not affect the shape contexts. Since shape contexts are extremely rich descriptors, they are inherently tolerant to small perturbations of parts of the shape. While we have no theoretical guarantees here, robustness to small affine transformations, occlusions and presence of outliers is evaluated experimentally in [2]. Modifications to the shape context definition that provide for complete rotation invariance can also be provided [2].

Consider a point $p_i$ on the first shape and a point $q_j$ on the second shape. Let $C_{ij} = C(p_i, q_j)$ denote the cost of matching these two points. As shape contexts are distributions represented as histograms, it is natural¹ to use the $\chi^2$ test statistic:

$$C_{ij} = \frac{1}{2} \sum_{k=1}^{K} \frac{[h_i(k) - h_j(k)]^2}{h_i(k) + h_j(k)}$$

where $h_i(k)$ and $h_j(k)$ denote the K-bin normalized histograms at $p_i$ and $q_j$. The cost $C_{ij}$ for matching points can include an additional term based on the local appearance similarity at points $p_i$ and $q_j$. This is particularly useful when we are comparing shapes derived from gray-level images instead of line drawings. For example, one can add a cost based on color or texture similarity, SSD between small gray-scale patches, distance between vectors of filter outputs, similarity of tangent angles, and so on. The choice of this appearance similarity term is application dependent, and is driven by the necessary invariance and robustness requirements; e.g., varying lighting conditions make reliance on gray-scale brightness values risky.

Given the set of costs $C_{ij}$ between all pairs of points i on the first shape and j on the second shape, we want to minimize the total cost of matching subject to the constraint that the matching be one-to-one. This is an instance of the square assignment (or weighted bipartite matching) problem, which can be solved in O(N³) time using the Hungarian method. In our experiments, we use the more efficient algorithm of [10]. The input is a square cost matrix with entries $C_{ij}$. The result is a permutation π(i) such that the sum $\sum_i C_{i, \pi(i)}$ is minimized. When the number of samples on two shapes is not equal, the cost matrix can be made square by adding "dummy" nodes to each point set with a constant matching cost of $\epsilon_d$. The same technique may also be used even when the sample numbers are equal, to allow for robust handling of outliers. In this case, a point will be matched to a "dummy" whenever there is no real match available at smaller cost than $\epsilon_d$. Thus, $\epsilon_d$ can be regarded as a threshold parameter for outlier detection.
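Putting these pieces together, here is a hedged sketch of the matching step: χ² costs between all histogram pairs, then an optimal one-to-one assignment with dummy nodes. scipy's `linear_sum_assignment` is used as a stand-in for the Jonker-Volgenant solver [10] used in the paper, and the value of `eps_d` is an arbitrary illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2_cost(h1, h2, eps=1e-10):
    """C_ij = 1/2 * sum_k (h1_i(k) - h2_j(k))^2 / (h1_i(k) + h2_j(k))."""
    num = (h1[:, None, :] - h2[None, :, :]) ** 2
    den = h1[:, None, :] + h2[None, :, :] + eps
    return 0.5 * np.sum(num / den, axis=2)

def match(h1, h2, eps_d=0.25):
    """One-to-one matching with dummy nodes of constant cost eps_d.

    Padding both sides to size n + m lets any point match a dummy instead
    of a real point whose cost exceeds eps_d (outlier handling).
    """
    n, m = len(h1), len(h2)
    C = np.full((n + m, n + m), eps_d)
    C[:n, :m] = chi2_cost(h1, h2)
    rows, cols = linear_sum_assignment(C)
    # keep only real-to-real correspondences
    return [(i, j) for i, j in zip(rows, cols) if i < n and j < m]
```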
Given a set of sample point correspondences between two shapes, one can proceed to estimate a transformation that maps one shape into the other. For this purpose there are several options; perhaps most common is the affine model. In this work, we use the thin plate spline (TPS) model, which is commonly used for representing flexible coordinate transformations [17, 6]. Bookstein [4], for example, found it to be highly effective for modeling changes in biological forms. The thin plate spline is the 2D generalization of the cubic spline, and in its regularized form includes affine transformations as a limiting case. Our complete matching algorithm is obtained by alternating between the steps of recovering correspondences and estimating transformations. We usually employ a fixed number of iterations, typically three in large scale experiments, but more refined schemes are possible. However, experimental experience shows that the algorithmic performance is independent of the details. More details may be found in [2].

As far as we are aware, the shape context descriptor and its use for matching 2D shapes is novel. A related idea in past work is that due to Johnson and Hebert [9] in their work on range images. They introduced a representation for matching dense clouds of oriented 3D points called the "spin image". A spin image is a 2D histogram formed by spinning a plane around a normal vector on the surface of the object and counting the points that fall inside bins in the plane.

¹Alternatives include Bickel's generalization of the Kolmogorov-Smirnov test for 2D distributions [3], which does not require binning.

Figure 2: Handwritten digit recognition on the MNIST dataset. Left: Test set errors of a 1-NN classifier using SSD and Shape Distance (SD) measures. Right: Detail of performance curve for Shape Distance, including results with training set sizes of 15,000 and 20,000. Results are shown on a semilog-x scale for K = 1, 3, 5 nearest neighbors.

3 Classification using Shape Context matching

Matching shapes enables us to define distances between shapes; given such a distance measure, a straightforward strategy for recognition is to use a K-NN classifier. In the following two case studies we used 100 point samples selected from the Canny edges of each image. We employed a regularized TPS transformation model and used 3 iterations of shape context matching and TPS re-estimation. After matching, we estimated shape distances as the weighted sum of three terms: shape context distance, image appearance distance and bending energy. We measure the shape context distance between shapes P and Q as the symmetric sum of shape context matching costs over best matching points, i.e.

$$D_{sc}(P, Q) = \frac{1}{n} \sum_{p \in P} \arg\min_{q \in Q} C(p, T(q)) + \frac{1}{m} \sum_{q \in Q} \arg\min_{p \in P} C(p, T(q)) \qquad (1)$$

where T(·) denotes the estimated TPS shape transformation. We use a term $D_{ac}(P, Q)$ for appearance cost, defined as the sum of squared brightness differences in Gaussian windows around corresponding image points. This score is computed after the thin plate spline transformation T has been applied to best warp the images into alignment. The third term $D_{be}(P, Q)$ corresponds to the "amount" of transformation necessary to align the shapes; in the TPS case the bending energy is a natural measure (see [4, 2]).
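Given the cost matrix, the shape context term of Eq. (1) reduces to a pair of row/column minimizations. A sketch under assumptions: `C` is the χ² cost matrix between the contexts of P and the contexts of the TPS-warped Q (the "arg min" in Eq. (1) is read as the minimizing cost value), and the TPS fitting itself is omitted.

```python
import numpy as np

def shape_context_distance(C):
    """Symmetric shape context distance of Eq. (1).

    C[i, j] is the matching cost between point i of shape P and point j of
    the warped shape Q; the distance averages each shape's best-match
    costs against the other.
    """
    return C.min(axis=1).mean() + C.min(axis=0).mean()
```

The full shape distance would add the appearance and bending-energy terms with application-chosen weights, which the paper does not spell out here.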
Case study 1: Digit recognition. Here we present results on the MNIST dataset of handwritten digits, which consists of 60,000 training and 10,000 test digits [12]. Nearest neighbor classifiers have the property that as the number of examples n in the training set goes to infinity, the 1-NN error converges to a value ≤ 2E*, where E* is the Bayes risk (for K-NN, with K → ∞ and K/n → 0, the error → E*). However, what matters in practice is the performance for small n, and this gives us a way to compare different similarity/distance measures. In Fig. 2, our shape distance is compared to SSD (sum of squared differences between pixel brightness values). On the MNIST dataset nearly 30 algorithms have been compared (http://www.research.att.com/~yann/exdb/mnist/index.html). The lowest test set error rate published at this time is 0.7% for a boosted LeNet-4 with a training set of size 60,000 × 10 synthetic distortions per training digit. Our error rate using 20,000 training examples and 3-NN is 0.63%.

Figure 3: 3D object recognition. (a) Comparison of test set error for SSD, Shape Distance (SD), and Shape Distance with K-medoid prototypes (SD-proto) vs. number of prototype views. For SSD and SD, we varied the number of prototypes uniformly for all objects. For SD-proto, the number of prototypes per object depended on the within-object variation as well as the between-object similarity. (b) K-medoid prototype views for two different examples, using an average of 4 prototypes per object.

Case study 2: 3D object recognition. Our next experiment involves the 20 common household objects from the COIL-20 database [13]. We prepared our training sets by selecting a number of equally spaced views for each object and using the remaining views for testing. The matching algorithm is exactly the same as for digits. Fig. 3(a) shows the performance using 1-NN on the weighted shape distance compared to a straightforward sum of squared differences (SSD). SSD performs very well on this easy database due to the lack of variation in lighting [8].

Since the objects in the COIL-20 database have differing variability with respect to viewing angle, it is natural to ask whether prototypes can be allocated more efficiently. We have developed a novel editing algorithm based on shape distance and K-medoid clustering; a sketch of the core clustering step follows below. K-medoids can be seen as a variant of K-means that restricts prototype positions to data points. First a matrix of pairwise similarities between all possible prototypes is computed. For a given number K of prototypes, the K-medoid algorithm then iterates two steps: (i) for a given assignment of points to (abstract) clusters, a prototype is selected by minimizing the average distance of the prototype to all elements in the cluster, and (ii) given the set of prototypes, points are then reassigned to clusters according to the nearest prototype. The number of prototypes is selected by a greedy splitting strategy starting from one prototype per category. We choose the cluster to split based on the associated overall misclassification error. This continues until the overall misclassification error has dropped below a criterion level. The editing algorithm is illustrated in Fig. 3(b). As seen, more prototypes are allocated to categories with high within-class variability.
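A minimal sketch of the two alternating K-medoid steps on a precomputed shape-distance matrix; the greedy cluster-splitting wrapper and the misclassification-based stopping rule are omitted, and the iteration cap and seed are illustrative choices.

```python
import numpy as np

def k_medoids(D, k, n_iter=20, seed=0):
    """K-medoids on a precomputed (n, n) distance matrix D.

    Alternates (i) picking, within each cluster, the member minimizing the
    average distance to the cluster, and (ii) reassigning every point to
    its nearest medoid.
    """
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(n_iter):
        assign = np.argmin(D[:, medoids], axis=1)            # step (ii)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members):
                within = D[np.ix_(members, members)].mean(axis=1)
                new_medoids[c] = members[int(np.argmin(within))]  # step (i)
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, assign
```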
The curve marked SD-proto in Fig. 3 shows the improved classification performance using this prototype selection strategy instead of equally-spaced views. Note that we obtain a 2.4% error rate with an average of only 4 two-dimensional views for each three-dimensional object, thanks to the flexibility provided by the matching algorithm.

4 Conclusion

We have presented a new approach to computing shape similarity and correspondences based on the shape context descriptor. Appealing features of our approach are its simplicity and robustness. The standard invariances are built in for free, and as a consequence we developed a classifier that is highly effective even when only a small number of training examples are available.

Acknowledgments

This research is supported by (ARO) DAAH04-96-1-0341, the Digital Library Grant IRI-9411334, an NSF graduate fellowship for S.B. and the German Research Foundation (DFG) by Emmy Noether grant PU-165/1.

References

[1] Y. Amit, D. Geman, and K. Wilder. Joint induction of shape features and tree classifiers. IEEE Trans. PAMI, 19(11):1300-1305, November 1997.
[2] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. Technical report, UC Berkeley, January 2001.
[3] P. J. Bickel. A distribution free version of the Smirnov two-sample test in the multivariate case. Annals of Mathematical Statistics, 40:1-23, 1969.
[4] F. L. Bookstein. Principal warps: thin-plate splines and decomposition of deformations. IEEE Trans. PAMI, 11(6):567-585, June 1989.
[5] C. Burges and B. Schölkopf. Improving the accuracy and speed of support vector machines. In NIPS, pages 375-381, 1997.
[6] H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching. In CVPR, volume 2, pages 44-51, June 2000.
[7] S. Gold, A. Rangarajan, C.-P. Lu, S. Pappu, and E. Mjolsness. New algorithms for 2D and 3D point matching: pose estimation and correspondence. Pattern Recognition, 31(8), 1998.
[8] D. P. Huttenlocher, R. Lilien, and C. Olson. View-based recognition using an eigenspace approximation to the Hausdorff measure. IEEE Trans. PAMI, 21(9):951-955, September 1999.
[9] Andrew E. Johnson and Martial Hebert. Recognizing objects by matching oriented points. In CVPR, pages 684-689, 1997.
[10] R. Jonker and A. Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38:325-340, 1987.
[11] M. Lades, C. C. Vorbrüggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz, and W. Konen. Distortion invariant object recognition in the dynamic link architecture. IEEE Trans. Computers, 42(3):300-311, March 1993.
[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[13] H. Murase and S. K. Nayar. Visual learning and recognition of 3-D objects from appearance. Int. Journal of Computer Vision, 14(1):5-24, January 1995.
[14] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio. Pedestrian detection using wavelet templates. In CVPR, pages 193-199, Puerto Rico, June 1997.
[15] M. Turk and A. P. Pentland. Eigenfaces for recognition. J. Cognitive Neuroscience, 3(1):71-96, 1991.
[16] R. C. Veltkamp and M. Hagedoorn. State of the art in shape matching. Technical Report UU-CS-1999-27, Utrecht, 1999.
[17] G. Wahba. Spline Models for Observational Data. SIAM, 1990.