Document 1595037
Transcript
Processi stocastici (3): processi gaussiani
Corso di Modelli di Computazione Affettiva
Prof. Giuseppe Boccignone
Dipartimento di Informatica, Università di Milano
[email protected]
http://boccignone.di.unimi.it/CompAff2016.html

Fig. 14: A conceptual map of stochastic processes that are likely to play a role in eye movement modelling and analyses.

Stochastic processes // Classes of processes

The two-time density is obtained from the one-time density through the propagator:

P(2) = ∫ P(2 | 1) P(1) dx_1,

where P(2 | 1) serves as the evolution kernel or propagator.

A stochastic process whose joint pdf does not change when shifted in time is called a (strict-sense) stationary process:

P(x_1, t_1; x_2, t_2; ...; x_n, t_n) = P(x_1, t_1 + τ; x_2, t_2 + τ; ...; x_n, t_n + τ),

τ > 0 being a time shift. Analysis of a stationary process is frequently easier than for a similar process that is time-dependent: varying t, all X_t have the same law; all the moments, if they exist, are constant; and the joint distribution of X(t_1) and X(t_2) depends only on the difference τ = t_2 − t_1:

P(x_1, t_1; x_2, t_2) = P(x_1, x_2; τ).

A conceptual map of the main kinds of stochastic processes that play a role in the remainder of this chapter is presented in Figure 14.

5 How to leave the past behind: Markov Process

The most simple kind of stochastic process is the Purely Random Process, in which there are no correlations. From Eq. (27):

P(1, 2) = P(1) P(2)
P(1, 2, 3) = P(1) P(2) P(3)
P(1, 2, 3, 4) = P(1) P(2) P(3) P(4)
...

One such process can be obtained, for example, by repeated independent draws; the complete independence property can be written explicitly as above.

Gaussian Processes

Gaussian processes are a reasonably large class of processes that have many applications, including in finance, time-series analysis and machine learning. We will use these as a vehicle to start considering more general objects.

• Gaussian processes: taking any finite number of random variables from the ensemble that forms the random process itself, these have a joint (multivariate) Gaussian probability distribution.

Definition 3.1.1 (Gaussian Process). A stochastic process {X_t}_{t≥0} is Gaussian if, for any choice of times t_1, ..., t_n, the random vector (X_{t_1}, ..., X_{t_n}) has a multivariate normal distribution.
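As a first concrete instance, the purely random process above is easy to sample. The following minimal Matlab sketch (not from the slides) draws discrete-time Gaussian white noise, for which any finite set of values is trivially jointly normal:

% Discrete-time Gaussian white noise: a purely random Gaussian process.
% Illustrative sketch, not in the original slides.
n = 1000;
X = randn(1, n);    % independent N(0,1) draws: P(1,2) = P(1)P(2)
plot(1:n, X);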
• More precisely: {X_t} is Gaussian if, for any choice of times t_1, ..., t_n, the random vector (X_{t_1}, ..., X_{t_n}) has a multivariate normal distribution.
• They are processes continuous in time and in the state space.

The Multivariate Normal Distribution

Since Gaussian processes are defined in terms of the multivariate normal distribution, we will need to have a pretty good understanding of this distribution and its properties.

Definition 3.1.2 (Multivariate Normal Distribution). A vector X = (X_1, ..., X_n) is multivariate normal (multivariate Gaussian) if all linear combinations, i.e. all random variables of the form

Σ_{k=1}^{n} α_k X_k,

have univariate normal distributions.

Densities of Multivariate Normals

If X = (X_1, ..., X_n) is N(µ, Σ) and Σ is positive definite, then X has the density

f_X(x) = f(x_1, ..., x_n) = 1/√((2π)^n |Σ|) exp( −(1/2) (x − µ)^T Σ^{−1} (x − µ) ).

Simulating Multivariate Normals

One possible way to simulate a multivariate normal random vector is by using the Cholesky decomposition.

Lemma 3.1.11. If Z ∼ N(0, I), Σ = AA^T is a covariance matrix, and X = µ + AZ, then X ∼ N(µ, Σ).

Proof. We know from Lemma 3.1.4 that AZ is multivariate normal, and so is µ + AZ. Because a multivariate normal random vector is completely described by its mean vector and covariance matrix, we simply need to find EX and Var(X). Now,

EX = E[µ] + E[AZ] = µ + A0 = µ

and

Var(X) = Var(µ + AZ) = Var(AZ) = A Var(Z) A^T = A I A^T = AA^T = Σ.

Thus, we can simulate a random vector X ∼ N(µ, Σ) by

(i) Finding A such that Σ = AA^T.
(ii) Generating Z ∼ N(0, I).
(iii) Returning X = µ + AZ.
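As a quick numerical check of Lemma 3.1.11 (an illustrative sketch, not part of the notes; it reuses the µ and Σ of the listings below), the sample mean and covariance of X = µ + AZ approach µ and Σ:

% Empirical check of Lemma 3.1.11 (illustrative sketch).
mu = [1 2 3]'; Sigma = [3 1 2; 1 2 1; 2 1 5];
A = chol(Sigma)';                       % lower-triangular factor: Sigma = A*A'
N = 10^5;
X = repmat(mu, 1, N) + A*randn(3, N);   % N samples of mu + A*Z
disp(mean(X, 2));                       % approaches mu
disp(cov(X'));                          % approaches Sigma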
Definition 3.1.2 is quite a strong definition. Importantly, it implies that, even if all of its components are normally distributed, a random vector is not necessarily multivariate normal.

Example 3.1.3 (A random vector with normal marginals that is not multivariate normal). Let X_1 ∼ N(0, 1) and

X_2 = X_1 if |X_1| ≤ 1,
X_2 = −X_1 if |X_1| > 1.

Note that X_2 ∼ N(0, 1). However, X_1 + X_2 is not normally distributed, because |X_1 + X_2| ≤ 2, which implies X_1 + X_2 is bounded and, hence, cannot be normally distributed.

Linear transformations of multivariate normal random vectors are, again, multivariate normal.

• Example: white noise is a Gaussian process.
• A Gaussian process is completely specified by its mean and covariance matrix.

Matlab makes things a bit confusing because its function 'chol' produces a decomposition of the form B^T B; that is, A = B^T. It is important to be careful and think about whether you want to generate a row or a column vector when generating multivariate normals. The following code produces column vectors.

Listing 3.1: Matlab code
mu = [1 2 3]';
Sigma = [3 1 2; 1 2 1; 2 1 5];
A = chol(Sigma);
X = mu + A' * randn(3,1);

The following code produces row vectors.

Listing 3.2: Matlab code
mu = [1 2 3];
Sigma = [3 1 2; 1 2 1; 2 1 5];
A = chol(Sigma);
X = mu + randn(1,3) * A;

Stochastic processes // Classes of processes

• Gaussian processes: Brownian motion. It can be regarded as the continuous limit of the simple random walk discussed earlier.
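The following minimal sketch (not from the slides) illustrates this limit: a symmetric ±1 random walk with steps rescaled by √h looks, for small h, like the Brownian paths simulated later in Listing 3.8.

% Rescaled simple random walk approximating Brownian motion (sketch,
% not in the original slides).
m = 1000; h = 1/m;
steps = sqrt(h) * sign(randn(1, m));   % +/- sqrt(h) coin-flip steps
W = cumsum(steps);                     % random walk path on [0,1]
plot(h:h:1, W);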
2.2 Brownian motion

Recall Observation 1.2: given a probability space (Ω, F, P), we write "a.s. [...]" as an abbreviation of "there exists A ∈ F, with P(A) = 1, such that for every ω ∈ A [...]".

We now define Brownian motion, also called the Wiener process, which constitutes the central object of this course. It is the most important example of a continuous-time stochastic process, and it can be seen as the continuous-time analogue of a real random walk with Gaussian increments. In fact, as we will discuss further on, Brownian motion can be obtained as a suitable limit of random walks with finite-variance increments (cf. Section 2.7.1).

Definition 2.3 (Brownian motion). A Brownian motion is a real stochastic process B = {B_t}_{t ∈ [0,∞)} that satisfies the following properties:

(a) B_0 = 0 a.s.;
(b) B has independent increments: for every choice of k and of times 0 ≤ t_0 < t_1 < ... < t_k < ∞, the random variables {B_{t_i} − B_{t_{i−1}}}_{1≤i≤k} are independent;
(c) B has stationary, centred Gaussian increments: more precisely, for every choice of t > s ≥ 0 one has (B_t − B_s) ∼ N(0, t − s);
(d) a.s. B has continuous trajectories: a.s. the function t ↦ B_t is continuous.

The definition presupposes an underlying probability space (Ω, F, P) on which the process B is defined, so that B_t = B_t(ω) with ω ∈ Ω. The dependence on ω will almost always be omitted, but it is important to be able to make it explicit when necessary. For example, property (d) can be reformulated as follows: there exists A ∈ F with P(A) = 1 such that for every ω ∈ A the function t ↦ B_t(ω) is continuous. Besides being a very natural requirement from the physical point of view, the continuity of trajectories is a property of basic importance from the mathematical point of view as well (see Section 2.2.2).
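Properties (b) and (c) can be checked numerically on simulated paths; here is a minimal sketch (not from the notes, with s = 0.3 and t = 0.7 chosen arbitrarily):

% Empirical check of Definition 2.3 (b)-(c); s = 0.3, t = 0.7 assumed.
m = 1000; h = 1/m; N = 10^4;
B = cumsum(sqrt(h) * randn(N, m), 2);  % N Brownian paths on the grid h:h:1
D1 = B(:, 300);                        % increment B(0.3) - B(0)
D2 = B(:, 700) - B(:, 300);            % increment B(0.7) - B(0.3)
disp(var(D2));                         % ~ t - s = 0.4, as in (c)
C = corrcoef(D1, D2);
disp(C(1, 2));                         % ~ 0, consistent with (b)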
E^I is sometimes called the space of trajectories of the process X. As a random quantity, X induces on the arrival space (E^I, E^I) its law µ_X. A set of the form {x ∈ E^I : x_{t_1} ∈ A_1, ..., x_{t_k} ∈ A_k} is a cylinder subset of E^I, and P(X ∈ C) = P((X_{t_1}, ..., X_{t_k}) ∈ A_1 × ... × A_k), so this probability can be computed knowing the finite-dimensional laws of X. Recalling that cylinder sets are a base of the σ-algebra E^I, it follows that the law µ_X of the process on the trajectory space (E^I, E^I) is determined by the finite-dimensional laws of X. For this reason, the term "law of the process X" sometimes denotes the family of its finite-dimensional laws. In particular, two processes X = {X_t}_{t∈I} and X' = {X'_t}_{t∈I} with the same index set and with values in the same space (E, E) have the same law if and only if they have the same finite-dimensional laws.

We will sometimes speak of Brownian motion with the set of times restricted to an interval T = [0, t_0], where t_0 ∈ (0, ∞) is fixed, meaning of course a process {B_t}_{t∈T} that satisfies the conditions of Definition 2.3 for t restricted to T.

A vector-valued process X = {X_t}_{t∈T} with values in R^d can always be viewed as a real stochastic process, at the price of enlarging the set of indices: it suffices to write X = {X_t^{(i)}}_{(i,t) ∈ {1,...,d}×I}. For this reason, when convenient, it is possible to restrict the treatment to real processes without loss of generality. This is what we will always do for the processes we are about to define.

Figure 2.1 shows three illustrative trajectories of Brownian motion.
We now come to the first fundamental result on Brownian motion, proved for the first time by Norbert Wiener in 1923. Despite appearances, it is a non-trivial result.

Theorem 2.4 (Wiener). Brownian motion exists.

Several different proofs of this theorem are possible; one method rests on a very general theorem due to Kolmogorov.

A real stochastic process X = {X_t}_{t∈I} is called Gaussian if, for every choice of t_1, ..., t_n ∈ I, the random vector (X_{t_1}, ..., X_{t_n}) is normal, i.e. if every finite linear combination of the X_t is a normal random variable.

• Brownian motion: the increments are Gaussian.
• Wiener process: the increments are centred normal variables (mean 0, variance 1).

Gaussian processes constitute a generalization of normal random vectors: when I = {t_1, ..., t_k} is a finite set, a Gaussian process X_k = (X_{t_1}, ..., X_{t_k}) is nothing but a normal random vector with values in R^k.

Given a Gaussian process X = {X_t}_{t∈I}, we introduce the mean function µ(t) := E(X_t) and the covariance function K(s, t) := Cov(X_s, X_t), which are well defined since Gaussian random variables have finite second moments.
Brownian motion is one of the most fundamental stochastic processes. It is a Gaussian process, a Lévy process, a Martingale, and a process that is related to the theory of harmonic functions (do not worry if you do not know all of these terms). It can be used as a building block when considering many other processes; for this reason, we will consider it in more detail than any other Gaussian process.

By virtue of the central limit theorem, the binary random walk converges to a Gaussian process: taken to the limit, the random walk tends toward a Gaussian random process with zero mean, variance σ_X^2(t) = t and independent increments. Such a process is by definition a standard Wiener process.

Definition 3.1. W(t) is a standard Wiener process if

(i) W(0) = 0;
(ii) W(t) has independent increments, which means that for all t_1 < t_2 < t_3 < t_4, W(t_4) − W(t_3) and W(t_2) − W(t_1) are independent;
(iii) Z(t) = W(t) − W(t_0) is Gaussian with zero mean and variance σ_Z^2 = t − t_0, for t_0 ≤ t.

• Since the increments are Gaussian, from this definition we can deduce the following basic properties for a standard Wiener process.

Property 3.2. Var(W(t)) = t and m_W(t) = E[W(t)] = 0. Since its variance increases with time t, the trajectory of this process moves away from the horizontal axis as t increases.

• The variance grows linearly in time, and the process moves away from the t axis.

Property 3.3. From the law of large numbers, we can prove that

W(t)/t → 0 as t → ∞.   (7.46)
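A minimal sketch (not in the notes) illustrating Property 3.3 on a single long path:

% W(t)/t -> 0 as t -> infinity (illustration of Property 3.3).
m = 10^5; t = 1:m;                 % unit mesh over a long horizon (assumed)
W = cumsum(randn(1, m));           % standard Wiener path at integer times
plot(t, W ./ t);                   % the ratio decays toward 0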
Property 3.4. W's trajectory, even though it is continuous, varies in such an irregular way that it is non-differentiable. Heuristically, this result comes from the increment ΔW(t) = W(t + Δt) − W(t) being proportional to √Δt. This property is often called Lévy's oscillatory property.

• The autocorrelation function is given by:

Property 3.5. The autocorrelation function of W(t) is

R_WW(t_1, t_2) = min(t_1, t_2),   (7.47)

where

R_WW(t_1, t_2) = E[W(t_1) W(t_2)].   (7.48)

Proof. Without loss of generality, we suppose t_1 < t_2; then

R_WW(t_1, t_2) = E[(W(t_2) − W(t_1)) W(t_1) + W(t_1)^2].   (7.49)

Since W(t) has independent increments,

E[(W(t_2) − W(t_1)) W(t_1)] = E[W(t_1)] E[W(t_2) − W(t_1)] = 0,   (7.50)

which implies that

R_WW(t_1, t_2) = E[W(t_1)^2] = Var(W(t_1)) = t_1 = min(t_1, t_2).   (7.51)

Stochastic processes // Classes of processes

• Brownian motion: it is a Markovian Gaussian process; the transition density P(x_t | x_{t−1}) is Gaussian.

For a Markovian Gaussian process, consecutive values are jointly normal:

(X_{t_i}, X_{t_{i+1}}) ∼ N( (µ_i, µ_{i+1}), [σ_{i,i}, σ_{i,i+1}; σ_{i,i+1}, σ_{i+1,i+1}] ).

Using Theorem 3.1.25, we have

X_{t_{i+1}} | X_{t_i} = x_i ∼ N( µ_{i+1} + (σ_{i,i+1}/σ_{i,i}) (x_i − µ_i), σ_{i+1,i+1} − σ_{i,i+1}^2/σ_{i,i} ).

Algorithm 3.1.2 (Generating a Markovian Gaussian Process).

(i) Draw Z ∼ N(0, 1). Set X_{t_1} = µ_1 + √σ_{1,1} Z.
(ii) For i = 1, ..., m − 1, draw Z ∼ N(0, 1) and set

X_{t_{i+1}} = µ_{i+1} + (σ_{i,i+1}/σ_{i,i}) (X_{t_i} − µ_i) + √(σ_{i+1,i+1} − σ_{i,i+1}^2/σ_{i,i}) Z.

Example 3.1.27 (Brownian Motion). Consider Brownian motion, which is a Markov process. Now, µ_i = E[X_{t_i}] = 0 and σ_{i,i+1} = min(t_i, t_{i+1}) = t_i. Since σ_{i,i} = t_i, the coefficient σ_{i,i+1}/σ_{i,i} equals 1 and the conditional variance is t_{i+1} − t_i^2/t_i = t_{i+1} − t_i. So, our updating formula is

X_{t_{i+1}} = X_{t_i} + √(t_{i+1} − t_i) Z.

Listing 3.8: Matlab code
m = 10^4; X = zeros(m,1);
h = 1/m;
X(1) = sqrt(h)*randn;
for i = 1:m-1
    X(i+1) = X(i) + sqrt(h) * randn;
end
plot(h:h:1,X);
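The same Markovian recipe applies to the Ornstein-Uhlenbeck process of Example 3.1.13 below: with r(s, t) = e^{−α|s−t|/2} one has σ_{i,i} = 1 and σ_{i,i+1} = e^{−α(t_{i+1}−t_i)/2}, so on a mesh of size h the update reads X_{t_{i+1}} = e^{−αh/2} X_{t_i} + √(1 − e^{−αh}) Z. The transcript's Listing 3.9 survives only as fragments; the following is a minimal sketch consistent with them (α = 10 taken from Listing 3.4):

% Markovian simulation of the Ornstein-Uhlenbeck process (sketch
% consistent with the Listing 3.9 fragments; alpha = 10 assumed).
m = 10^4; X = zeros(m,1); h = 1/m; alpha = 10;
X(1) = randn;                           % stationary start: X_{t_1} ~ N(0,1)
for i = 1:m-1
    X(i+1) = exp(-alpha * h / 2)*X(i) + sqrt(1 - exp(-alpha * h))*randn;
end
plot(h:h:1, X);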
There is nothing too difficult about dealing with the mean vector µ. However, the covariance matrix makes things pretty difficult, especially when we wish to simulate a high-dimensional random vector. In order to simulate Gaussian processes effectively, we need to exploit as many properties of covariance matrices as possible.

Symmetric Positive Definite and Semi-Positive Definite Matrices

Covariance matrices are members of a family of matrices called symmetric positive definite matrices.

Definition 3.1.6 (Positive Definite Matrices (Real-Valued)). An n × n real-valued matrix A is positive definite if and only if

x^T A x > 0 for all x ≠ 0.

If A is also symmetric, then A is called symmetric positive definite (SPD).

Some important properties of SPD matrices are:

(i) rank(A) = n;
(ii) |A| > 0;
(iii) A_{i,i} > 0;
(iv) A^{−1} is SPD.

Lemma 3.1.7 (Necessary and sufficient conditions for an SPD matrix). The following are necessary and sufficient conditions for an n × n matrix A to be SPD:

(i) All the eigenvalues λ_1, ..., λ_n of A are strictly positive.
(ii) There exists a unique matrix C such that A = CC^T, where C is a real-valued lower-triangular matrix with positive diagonal entries. This is called the Cholesky decomposition of A.
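A quick numerical illustration (not in the notes) of Definition 3.1.6 and Lemma 3.1.7 for the Σ used in the listings:

% SPD checks for Sigma: eigenvalues, determinant, rank, diagonal.
Sigma = [3 1 2; 1 2 1; 2 1 5];
disp(eig(Sigma));      % all eigenvalues strictly positive
disp(det(Sigma));      % |Sigma| > 0
disp(rank(Sigma));     % rank(Sigma) = 3 = n
disp(diag(Sigma)');    % positive diagonal entries
disp(eig(inv(Sigma))); % the inverse is SPD as well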
Definition 3.1.8 (Positive Semi-definite Matrices (Real-Valued)). An n × n real-valued matrix A is positive semi-definite (SPSD) if and only if

x^T A x ≥ 0 for all x ≠ 0.

• Note well: covariance matrices are SPSD (and vice versa).

Lemma 3.1.10. Covariance matrices are SPSD and, conversely, every SPSD matrix is the covariance matrix of some random vector.

Proof. If Σ = Var(Y) is a covariance matrix, symmetry comes from the fact that Cov(Y_i, Y_j) = Cov(Y_j, Y_i), and x^T Var(Y) x = Var(x^T Y) ≥ 0, so Σ is positive semi-definite. Conversely, let A be an SPSD matrix and Z be a vector of random variables with Var(Z) = I. Now, as A is SPSD, A = LL^T. So,

Var(LZ) = L Var(Z) L^T = L I L^T = A,

so A is the covariance matrix of LZ.

Stochastic processes // Simulation of Gaussian processes

Cholesky decomposition

• Simulation: we use the Cholesky decomposition. By Lemma 3.1.11, if Z ∼ N(0, I), Σ = AA^T is a covariance matrix, and X = µ + AZ, then X ∼ N(µ, Σ).
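In Matlab, the relation between 'chol' and the lower-triangular factor of Lemma 3.1.7(ii) can be seen directly (illustrative sketch, not in the notes):

% chol returns an upper-triangular B with Sigma = B'*B; the Cholesky
% factor of Lemma 3.1.7(ii) is the lower-triangular A = B'.
Sigma = [3 1 2; 1 2 1; 2 1 5];
B = chol(Sigma);               % upper triangular
A = chol(Sigma, 'lower');      % lower triangular, equals B'
disp(norm(A - B'));            % ~ 0
disp(norm(A*A' - Sigma));      % ~ 0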
Simulation:

1. Find A such that Σ = AA^T.
2. Generate Z ∼ N(0, I).
3. Return X = µ + AZ.

Generates column vectors:

mu = [1 2 3]';
Sigma = [3 1 2; 1 2 1; 2 1 5];
A = chol(Sigma);
X = mu + A' * randn(3,1);

Generates row vectors:

mu = [1 2 3];
Sigma = [3 1 2; 1 2 1; 2 1 5];
A = chol(Sigma);
X = mu + randn(1,3) * A;
.we ,using Xcan a We multivariate stochastic process which is, in some sense, the continuous time tfunction tn ) has nd a covariance 1this We can(X do an expectation function Using these, simulate the do values of and a Gaussian process at fixed times can this using an expectation function a covariance function state space analogue of of a simple random walk. It hasat an fixed expectatio Using these, we can simulate the values a Gaussian process tim riate normal density is completely describe by t1 , . . . , tn by calculating µti1 ,=. . µ(t Σ, where Σ = r(t , tjµ(t ) and 3.1.2µ where Simulating aand Gaussian Processes Version µ(t) =where 0 and aµcovariance function r(s, t)1 = min(s, t). 1 )by i,j i= . , t calculating µ ) and Σ, where Σ = r(t , t )a n i 1 i,j i j ance matrix Σ, the probability distribution µ(t) =Browniano: EX Simulazione di t) un r(s, t) = Cov(Xs , Xt ). r(s, =moto Cov(X Xt tvector. ). µ(t) = EX s ,of then •simulating a multivariate normal then simulating a multivariatet normal vector. Listing where, 3.3: Matlab Because Gaussian forcode any y described if we have a way to construct the processes are defined to be processes In theUsing following examples, we simulate the Gaussian processes at eve In the following examples, we simulate the Gaussian processes at evenly %use mesh size h these, we can simulate the values of a Gaussian process and a covariance function and a covariance function choice , . Gaussian . . , tntimes , the process random vector (X .,.t. ,t= X=t1. ):has a1;multivariate at fixed t matrix for an we arbitrary choice of t1of , . .times . , tnof . t1spaced Using these, can simulate the values a at fixed times t.=1.,1/m; n m = 1000; h h h : t = 1/h, t = 2/h, . 1 2 n spaced times t1 = 1/h, t2 = 2/h, . . . , tn = 1.t1 , . . . , tn by calculating µ where µi = µ(t1 ) and Σ, where Σi,j = r(ti , tj ) n =r(t length(t); normalµdensity a multivariate is completely describe by xpectation function calculating µ where ) and Σ,then where Σt)i,jnormal = tj,)Xand 1 , . . . , tn by i = µ(t1and i ,density simulating a multivariate normal vector. r(s, = Cov(X ). s t r(s, t) = Cov(X , X ). s t 3.1.12 (Brownian Motion). Brownian motion is a very import mean vector µExample andBrownian a covariance matrix Σ, the vector probability distribution of PROCES 94 CHAPTER 3. GAUSSIAN hen simulating a multivariate normal vector. Example 3.1.12 a(Brownian Motion). motion is aexamples, very In the following we simulate the Gaussian processes at ev %Make the meanimportant stochastic process which is, in some sense, the continuous time continu µ(t) =stochastic EXt Gaussian process is completely described if we have a way to construct the mu = zeros(1,n); spaced times t = 1/h, t = 2/h, . . . , t = 1. 1 2 Gaussian n which is, values in some the continuous time continuous In the these, following examples, we simulate thecan Gaussian processes at Using these, we simulate the values of aevenly process at fixed times Using we process can simulate the of asense, Gaussian process times state space analogue offor aat simple walk. It has an expectation funct i fixed = complexity 1:n random The of the Cholesky decomposition of an arbitrary mean vector and covariance matrix for an arbitrary choice of t , . . . , t . 1 n t , . . . , t by calculating µ where µ = µ(t ) and where Σ = r(t , t ) and 3 Σ, state space analogue of a simple random walk. It has an expectation function paced times t = 1/h, t = 2/h, . . . , t = 1. 1 n i 1 i,j i j mu(i) = 0; 1 calculating 2 is iO(n operations. 
µ(t) 0 and a covariance function r(s, t) point = Brownian min(s, t).motion is a very impo t1 , . . . , tn by µ where µni = µ(t and Σ, where Σi,jmatrix =(Brownian r(t , tj )) floating and Example 3.1.12 Motion). 1) = We thisr(s, using an expectation end function thencan simulating a multivariate normal vector. µ(t) = 0 and aCHAPTER covariance function t) =stochastic min(s, t). 3. do GAUSSIAN PROCESSES ETC. process which is, in some sense, the continuous time contin then simulating a multivariate normal vector. Listing 3.3: Matlab code It has In the following simulate the Gaussian atProcesses evenly Example 3.1.12 Motion). Brownian motion iswe aprocesses very important 3.1.2 Simulating a processes Gaussian Version state space analogue ofat acovariance simple random walk. an expectation fun 3.1. PROCESSES %MakeGAUSSIAN the matrix , t) = Cov(X , Xt ).(Brownian In the examples, we simulate theexamples, Gaussian evenly sfollowing µ(t) = EX 1 3.3: %use mesh size h Listing Matlab code t spaced times t = 1/h, t = 2/h, . . . , t = 1. µ(t) aSigma covariance functionprocesses r(s, t) =aremin(s, t).to be processes whe 1the continuous 2 = 0 and nBecause = zeros(n,n); tochastic process which in decomposition some. . .sense, time continuous Gaussian defined spaced timesoftthe 1/h,is, t2 = 2/h, , t2n = 1.of1000; The complexity an arbitrary 1 = Cholesky m = h = 1/m;real-valued tfor = ih of :1:n h : tCHAPTER 1; = 94 3. GAUSSIAN choice times , . . . , t , the random vector (Xt ,PROCESSES . . . , Xt ) has a mE 1 %use size h 1 n tate space of a process simple random It(Brownian has an expectation function 3 of the a mesh Gaussian at fixed3walk. times 3.3: Matlab code ix isvalues O(n )analogue floating point operations. for j = Listing 1:nand n function = length(t); Example 3.1.12 Motion). Brownian motion is a very important and a covariance normal density a multivariate normal density is completely d m =1 )a1000; = 1/m; t = hr(t :i , htt) 1;min(s, 3.1.12 (Brownian Motion). Brownian motion issize aavery important Sigma(i,j) = min(t(i),t(j)); µ(t) 0 µ(t and covariance function r(s, t). ere Example µ=i 2= and Σ,h where Σi,j = ):= and j4process 1 %use hsense, mean vector µcontinuous and a covariance matrix Σ, the probability dist stochastic which is,mesh in some the time continuous The complexity of the Cholesky decomposition ofif an arbitrary real-va end 94 CHAPTER 3. GAUSSIAN PROCESSES ETC. 3 n = length(t); stochastic process which is, in some sense, the continuous time continuous 5 %Make the mean vector Gaussian process is completely described we have a way to con e normal vector. 2 m = 1000; h = 1/m; t = h : h : 1; state space analogue ofmatrix a r(s, simple random walk. It has an expectation function 3end t) = Cov(X , X ). s t O(n1mean ) floating point operations. 2westate Simulating a Gaussian Processes Version 3.3: Matlab vector and covariance matrix for an arbitrary choice of t1 , . 4 6 mu =code zeros(1,n); 3Itn = length(t); spacethe analogue ofListing aprocesses simple random has anisexpectation simulate Gaussian at evenly µ(t) = 0 and awalk. covariance function r(s,Wet)can =function min(s, t). an expectation function do this using 7 for=imin(s, = 1:n t). 4 the %Make mean vector %Generate the multivariate normal real-valued The complexity of Cholesky decomposition of anprocess arbitrary andthe covariance function r(s, use ha processes h, .µ(t) . .mesh ,5tn= =0size 1. 
Using these, we can simulate the values of a Gaussian at vector fixed times Because Gaussian are defined to t) be processes where, for any 3 mu(i) A = chol(Sigma); 5 %Make the mean vector 8 = 0; 3.1.2 Simulating Gaussian Processes Version 1 Listing 3.3: Matlabacode 6 muh ==zeros(1,n); matrix is O(n ) floating point operations. µ(t) = EXt = 1000; 1/m; t = h : h : 1; = mu randn(1,n) * A;3.Σi,jGAUSSIAN t1 ,Listing . . .94 ,vector tn by calculating where Σ, where = r(ti , tj ) and PROC ce of times t1 , . . . , tn , the random (X has aµ multivariate 6 µ zeros(1,n); i =X µ(t 1 )+ and 9 CHAPTER tend tmu n) = 1, . . . , X 3.3: Matlab code 7 for i = 1:n Motion). Brownian motion is1 a %use very mesh important size h 7 for Because Gaussian processes are defined to be processes where, for length(t); i =normal 1:n 10 andvector. a covariance function thennormal simulating a multivariate mal= density and a multivariate density is completely describe by %Plot 8 mu(i) = 0; 1 %use mesh size h 2 m = 1000; h = 1/m; t = h : h : 1; n some sense, the continuous time continuous choice of times t , . . . , t , the random vector (Xt1 , .1 . . , Xtn ) has a multiva 8 mu(i) = 0; 1 n 11 %Make the covariance matrix 3.1.2 a Gaussian Processes In the: following examples, we simulate the processes at evenly plot(t,X); an vectorend µ and a 1/m; covariance matrix Σ,Simulating the probability distribution of GaussianVersion r(s, t)density = Cov(Xiss , completely Xt ). 2 random m =9 1000; h It = has t expectation = 3 h n: =h length(t); 1;12 Sigma =9 zeros(n,n); normal density and a multivariate normal describ end le walk. an function Make the mean vector The complexity of the Cholesky decomposition of an arbitra spaced times t = 1/h, t = 2/h, . . . , t = 1. ssian process is completely described if we have a way to construct the 1 2 n Because Gaussian processes are defined to be processes where, for any 4 10 a mean vector µ and a covariance matrix Σ, the probability distributi 3 n = length(t); 10 ction r(s, t) = min(s, t). we can simulate the values of a Gaussian process at 3 = zeros(1,n); nu4 vector and covariance matrix for anof arbitrary of the t1 ,process .covariance . . point ,Using tn. .is .. ,these, matrix matrix is O(n operations. 5 %Make the mean Gaussian completely ifµa we have a way to construc choice times t1 ,vector . choice .11. ,)t%Make , the random (Xt1 , . . described . , Xµtnwhere ) has multivariate nfloating 11 %Make the PROCESSES covariance matrix t1 ,vector t0.2 n by calculating i = µ(t1 ) and Σ, where Σi,j = r 3.1. GAUSSIAN 95 Example 3.1.12 (Brownian Motion). Brownian motion isananormal very important or can i =do 1:n 12 Sigma = zeros(n,n); We this using an expectation function mean vector and covariance matrix for arbitrary choice 6 mu = zeros(1,n); then simulating a multivariate vector. normal density and a multivariate normal density is completely describe byof t1 , . . . , tn . 0 ng 3.3: Matlab code 5 %Make the mean vector 12 Sigma = zeros(n,n); stochastic process which is, in some sense, the continuous time continuous mu(i) = 0; In the following examples, we simulate the Gaussian processe We can do this using an expectation function 7 for i = 1:n −0.2Σ, the probability distribution of a mean vector µ and a covariance matrix 6 mu for = zeros(1,n); i = 1:n spaced times t = 1/h, t = 2/h, . . . , t = 1. 1 2 n state space analogue of a simple random walk. 
It has an expectation function nd 8 = 0;Simulating µ(t) =mu(i) EXt process −0.4 j = 1:n Gaussian is completely described if we have a wayProcesses to construct the Versi = 1:n 3.1.2 a Gaussian h7 :for 1; i for µ(t) = EXt Sigma(i,j) = min(t(i),t(j)); 9 end −0.6 min(s, µ(t) = 0 and a covariance function r(s, t) = t). Example 3.1.12 (Brownian Motion). mean vector and covariance matrix for an arbitrary choice of t1 , . .Brownian . , tn . motion is a very 8 mu(i) = 0; end stochastic process which is, in some sense, the continuous time −0.8 10 aMake covariance function matrix the covariance WeBecause can do thisGaussian using expectation function 9 endend andan a covariance function processes are defined to be processes w state space analogue of a simple random walk. It has an expectatio Listing 3.3: Matlab code %Make the covariance matrix ETC. −1 CHAPTER 3. 11GAUSSIAN PROCESSES igma = zeros(n,n); 10 = random 0 and a covariance function r(s, t), = min(s, t). choice times t1 , . . . , tnµ(t) , µ(t) the −1.2 =size r(s, t)12normal =Sigma Cov(X , of XBrowniano: %Generate• the multivariate vector Simulazione di un moto 1 %use mesh h r(s, t)vector = Cov(Xs(X , Xtt).1 . . . , Xtn ) has a szeros(n,n); t ). = EX t 11 %Make the covariance matrix 1 2 3 4 5 6 7 8 9 10 11 12 13 1 n 14 15 16 17 18 19 20 21 22 23 24 Processi stocastici // Simulazione di processi Gaussiani 13 14 15 Xt 16 17 18 19 20 A = chol(Sigma); Listing 3.3: Matlabis code normal normal density completel 2 m = 1000; h = density 1/m; t = and h : ha:multivariate 1; Sigma = +zeros(n,n); 21 X = mu randn(1,n) * A; −1.6 1we %use mesh size h Using these, can simulate the values of a Gaussian process at fixed t the Cholesky decomposition of an arbitrary real-valued gofthese, we can simulate the values of a Gaussian process at fixed times a covariance function 3 n = and length(t); 22 m = 1000; t = h : hΣ, : 1;the probability d a mean vector µ and a 2 covariance matrix −1.8 h = 1/m; −1.4 12 t1 , .Σ. . , tn=byr(t calculating where Σi,j = r(ti , tj ) . , tn by calculating µ where µi = µ(t1 ) and Σ, where andµ where µi = µ(t1 ) and Σ, %Plot ating point operations. i,j in, t=j )length(t); 4 t then simulating a multivariate normal vector. 0 23 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 3 plot(t,X); Gaussian process isr(s, completely described if we have a way to t) = Cov(Xs , X t ). simulating a multivariate normal vector. 5 %Make the mean vector In the following we simulate the Gaussian processes at ev %Make examples, the mean vector Figure 3.1.1: Plot of Brownain motion on [0,of 1]. t vector and covariance matrix an choice n the following examples, the Gaussian processes evenly 6 we mu simulate = mean zeros(1,n); zeros(1,n); spaced times tat = 2/h,for . . . ,process tn = arbitrary 1. at fixed times 1mu==1/h, Using these, we can simulate the values of at2 Gaussian ting Processes Version 1 i = 1:n 7 for 1:n ed timesat1 Gaussian = . . .ti, t, n.=.= 1. calculating 0.2 1/h, t2 = 2/h, do this usingµ for an expectation . ,We t bycan µ where = µ(t ) and Σ, wherefunction Σ = r(t , t ) and 24 4 5 6 7 1 n 1 =3.1.13 i,j iProcess). j 8 i Example mu(i) 0; Motion). (Ornstein-Uhlenbeck very Example 3.1.12 (Brownian Brownian motion is Another a very impo 0 8 mu(i) = 0; 9 end vector. then simulating a multivariate normal Gaussian process is the Ornstein-Uhlenbeck process. This has stochastic which is, in some sense, the continuous time contine mple 3.1.12 Brownian motion is a very important an processes are defined processes where, forprocess any −0.2 (Brownian 9 Motion). 
Example 3.1.13 (Ornstein-Uhlenbeck Process). Another very important Gaussian process is the Ornstein-Uhlenbeck process. This has expectation function µ(t) = 0 and covariance function r(s, t) = exp{−α|s − t|/2}.

• Simulation of an Ornstein-Uhlenbeck (O-U) process:

Listing 3.4: Matlab code
%use mesh size h
m = 1000; h = 1/m; t = h : h : 1;
n = length(t);

%parameter of OU process
alpha = 10;

%Make the mean vector
mu = zeros(1,n);
for i = 1:n
    mu(i) = 0;
end

%Make the covariance matrix
Sigma = zeros(n,n);
for i = 1:n
    for j = 1:n
        Sigma(i,j) = exp(-alpha * abs(t(i) - t(j)) / 2);
    end
end

%Generate the multivariate normal vector
A = chol(Sigma);
X = mu + randn(1,n) * A;

%Plot
plot(t,X);

[Figure 3.1.2: Plot of an Ornstein-Uhlenbeck process on [0, 1] with α = 10.]
[Figure 3.1.3: Plot of an Ornstein-Uhlenbeck process on [0, 1] with α = 100.]
This has e 3 0.2 0.1 0.5 0.8 t 0.6 0.90.7 1 0.8 0.9 Listing 3.4: Matlab code %use mesh size h m = 1000; h = 1/m; t = h : h : 1; n = length(t); 96 %paramter of OU processCHAPTER 3. GAUSSIAN PROCESSES alpha = 10; %Make the mean vector mu = zeros(1,n); for i = 1:n mu(i) = 0; end 12 1 97 13 14 Figure 3.1.2: Plot of an Ornstein-Uhlenbeck process on [0, 1] with α = 10. 4 f an Ornstein-Uhlenbeck process on [0, 1] with α = 100. 3 Example 3.1.14 (Fractional Brownian Motion). Fractional Brownian motion (fBm) is a generalisation of Brownian Motion. It has expectation func2 tion µ(t) = 0 and covariance function Cov(s, t) = 1/2(t2H + s2H − |t − s|2H ), called the Hurst parameter. When H = 1/2, fBm reBrownian motion. Brownian motion has independent inst, for H > 1/2 fBm has positively correlated increments m has negatively correlated increments. Xt 1 15 16 17 18 19 −2 are likely to play a role in eye movement modelling and 20 0 −1 %Make the covariance matrix Sigma = zeros(n,n); for i = 1:n for j = 1:n Table of content Sigma(i,j) = exp(-alpha * abs(t(i) - t(j)) / 2); end Fig. 14 A conceptual map of stochastic processes that end 21 22 23 %Generate the multivariate normal vector analyses A = chol(Sigma); X = mu + randn(1,n) * A; 24 Listing 3.5: Matlab code −3 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 1 25 26 %Plot plot(t,X); Figure 3.1.3: Plot of an Ornstein-Uhlenbeck process on [0, 1] with α = 100. t = h : h . where H ∈ (0, 1) is called the Hurst parameter. When H = 1/2, fBm reduces to standard Brownian motion. Brownian motion has independent increments. In contrast, for H > 1/2 fBm has positively correlated increments : and1; for H < 1/2 fBm has negatively correlated increments. 3 Processi stocastici // Classi di processi 2.5 2 Listing 3.5: Matlab code 2 tor 3 4 5 6 %use mesh size h m = 1000; h = 1/m; t = h : h : 1; n = length(t); 7 8 9 10 11 ce matrix ; 12 1.5 %Hurst parameter H = .9; Xt 1 %Make the mean vector mu = zeros(1,n); for i = 1:n mu(i) = 0; end 13 %Make the covariance matrix Sigma = zeros(n,n); 16 for i = 1:n 17 for j = 1:n 18 Sigma(i,j) = 1/2 * (t(i)^(2*H) + t(j)^(2*H)... (t(i)^(2*H) + t(j)^(2*H)... 19 - (abs(t(i) - t(j)))^(2 * H)); 14 15 = 1/2 * t(i) - t(j)))^(2 * H)); 1 0.5 0 −0.5 propagator ! "!!!#$!!!% P(2) = P(2 | 1) P(1)dx1 where P(2 | 1) serves as the evolution kernel or propagator P A stochastic process whose joint pdf does not change wh called a (strict sense) stationary process: P(x1 ,t 1 ; x2 ,t 2 ; · · · ; xn ,t n ) = P(x1 ,t 1 + τ; x2 ,t 2 + τ; · · · τ > 0 being a time shift. Analysis of a stationary process is freq than for a similar process that is time-dependent: varying t, all −1.5 0 0.1X have 0.2 the 0.3 0.4 all 0.5 0.6 0.7 1 same law; the moments, if they0.8exist, 0.9 are consta t t bution of X(t 1 ) and X(t 2 ) depends only on the difference τ = P(x1 ,t 1 ; x2 ,t 2 ) = P(x1 , x2 ; τ). Figure 3.1.2: PlotAofconceptual an Ornstein-Uhlenbeck process on [0, 1] with α th= map of main kinds of stochastic processes the remainder of this Chapter is presented in Figure 14. −1 Example 3.1.14 (Fractional Brownian Motion). Fractional Brownian tion (fBm) is a generalisation of Brownian Motion. It has expectation 2H tion µ(t) = 0 and function Cov(s, t) = 1/2(t + s2H − |t − 5 covariance How to leave the past behind: Markov Process The most simple kind of stochastic process is the Purely Rand there are no correlations. From Eq. 
(27): P(1, 2) = P(1)P(2) P(1, 2, 3) = P(1)P(2)P(3) P(1, 2, 3, 4) = P(1)P(2)P(3)P1 (3) ··· One such process can be obtained for example by repeat complete independence property can be written explicitly as: −2 ition ofBecause an arbitrary real-valued µ(t) = to EXbe t processes where, for any Gaussian processes are defined −3 GAUSSIAN PROCESSES ETC. choice of times t1 , . . . , tn94 , the random vector (Xt1 , . . CHAPTER . , Xtn ) has a3.multivariate 0 0.1µ(t) 0.2 = EX 0.3 0.5 0.6 0.7 0.8 0.9 1 t 0.4 t and a covariance function normal density and a multivariate normal density is completely describe by andΣ, a covariance function Figure 3.1.3: Plot of of an process on [0, 1] with α = 1 a mean Version vector µ and matrix theCholesky probability distribution ofOrnstein-Uhlenbeck The complexity of the decomposition an arbitrary real-valued ocesses 1 a covariance r(s, t) = Cov(X , X ). s t 3 Gaussian process is completely if we have way to construct the matrix isdescribed O(n ) floating pointaoperations. Cov(X , Xt ).parameter. When H = 1/2, fBm where H r(s, ∈ (0,t) 1) = is called the sHurst to be processes where, for any mean vector and covariance matrix for an arbitrary choice duces of t1to, .standard . . , tn . Brownian motion. Brownian motion has independent Using these, we can simulate the values of a Gaussiancrements. process at fixedfortimes contrast, fBm has positively correlated increme Xt1 , . . .We ,X multivariate tcan n ) has do athis using an expectationUsing function these, we can simulateInthe valuesHof> a1/2Gaussian process at fixed tim and forΣHProcesses <= 1/2r(t fBmi ,has negatively correlated1increments. t , . . . , t by calculating µ where µ = µ(t ) and Σ, where t ) and 3.1.2 Simulating a Gaussian Version 1 n i 1 i,j j nsity is completely describe by t1 , . . . , tnfrazionario by calculating µ where µi = µ(t1 ) and Σ, where Σi,j = r(ti , tj ) a Simulazione di moto Browniano (fBm): Listing 3.5: Matlab code then •simulating a multivariate normal vector. the probability distribution ofBecause µ(t)Gaussian = then EXt simulating a multivariate normal vector. where, for any processes are%Hurst defined to be processes parameter In to theconstruct following the examples, we simulate the Gaussian processes at evenly we have a way following examples, we simulate the Gaussian processes at eve H = .9; choice of times t1 , . .In . , tthe n , the random vector (Xt1 , . . . , Xtn ) has a multivariate 1/h, t = 2/h, . . . , t = 1. and choice aspaced covariance 2 n bitrary oftimes t1 , . function . .t,1t= . spaced times t = 1/h, t = 2/h, . . . , t = 1. n normal density and a multivariate 1 normal isn completely describe by %use2 mesh density size h ion m = 1000; h = 1/m; t = h : h : 1; mean vector µExample and, Brownian a covariance matrix theimportant probability of import Example 3.1.12 a(Brownian motion is a Σ, very (Brownian Motion). Browniandistribution motion is a very n = length(t); r(s, t) =Motion). Cov(X s Xt ). 3.1.12 Gaussian issense, completely described ifinwesome havesense, a waythetocontinuous construct time the continu stochastic process is,0.8process in some the continuous continuous process which is,time 0.2 0.3 0.4 0.5 0.6 which 0.7 0.9stochastic 1 %Make the mean vector t mean vector and covariance matrix forexpectation an arbitrary choice . , expectation tn . 1 , . .an Usingstate these, we can simulate values of a space Gaussian process fixed times state of aat simple random walk. 
of It thas funct mu = zeros(1,n); space analogue of athe simple random walk.analogue It has an function for i = 1:n We can do this using an expectation function µ(t) = 0 and a covariance function r(s, t) = min(s, t). t1 , . . .µ(t) , tn by calculating µ where µ = µ(t ) and Σ, where Σ = r(t , t ) and 1 = 0 and aCHAPTER covariance function r(s, t) =PROCESSES min(s, t). i,j mu(i) 3. i GAUSSIAN ETC. = i0; j of then an Ornstein-Uhlenbeck process on [0, 1] with α = 10. end simulating a multivariate normal vector. Listing 3.3: Matlab code = EX Listing 3.3: the Matlab codeµ(t) t at evenly X ). In the following examples, we simulate Gaussian processes %Make the covariance matrix Thet complexity of the Cholesky decomposition of an arbitrary real-valued 1 %usedimesh size h 3.1. Coefficiente Hurst PROCESSES SigmaGAUSSIAN = zeros(n,n); (Fractional Brownian Motion). mo1 %use mesh size ht2 = Fractional spaced t 1/h, 2/h, . . . , t = 1. 3 times 1 = nBrownian ix is O(n process ) floating operations. 2 m function = 1000; h = 1/m; for t =i =h 1:n : h : 1; andtaIt covariance Gaussian at hpoint fixed times neralisation Brownian func2 m =of 1000; = Motion. 1/m; = has h : expectation h : 1; for j = 1:n 3 n = length(t); 2H PROCESSES 2Hmotion is 98 4 important 3. GAUSSIAN 3.1.12 Motion). a verySigma(i,j) 3. +GAUSSIAN PROCESSES ET = 1/2 *CHAPTER (t(i)^(2*H) t(j)^(2*H)... nd Example Σ, 3where Σ r(t(Brownian )= and covariance 1/2(t + s2HBrownian − |t − s|ETC. ), n function =CHAPTER length(t); i,j =Cov(s, i , tjt) 4 r(s, t) = Cov(X , ). -X (abs(t(i) - t(j)))^(2 * H)); s t processawhich is, in some sense, the Version continuous 1 time 3continuous Simulating Gaussian Processes .2 stochastic 4 SIAN PROCESSES 5 %Make the 99 mean vector end state space analogue a simple random walk. It has an expectation function end he Gaussian processes atofevenly 5 %Make the mean vector 2 6 zeros(1,n); Using these, we can simulate the values a Gaussian process at fixed times Because Gaussian processes are defined to mu be==processes where, forofany µ(t) = 0 and a covariance function r(s, t) min(s, t). 6 mu = zeros(1,n); 7 for i = 1:n 1 the multivariate normal vector %Generate t1 , . . . ,vector tn by calculating ce of times t1 , . . . , tn , the random (Xt1 , . . . , Xµ )where has a µ multivariate i = µ(t1 ) and Σ, where Σi,j = r(ti , tj ) and A = chol(Sigma); 7 for i = 1:n 8 mu(i)tn = 0; multivariate normal vectorthen simulating a multivariate normal vector. Listing 3.3: Matlab code mal density and multivariate normal density by X = mu +0 randn(1,n) * A; nian motion ismu(i) a avery 9 end is completely describe 8 = important 0; ); −1 the In the following examples, we simulate Gaussian processes at evenly an vector µ and a covariance matrix Σ, the probability distribution of 1 %use mesh size he continuous time hcontinuous 10 %Plot end (1,n) 9* A; t1 = 1/h,athe t2way =covariance 2/h, . . . ,plot(t,X); tnmatrix = −21.the ssian process have to construct = 1000; ish completely = 1/m; t spaced =described h : htimes : if 1;11we %Make .2 Itmhas 10 an expectation function n3 vector and covariance matrix for an arbitrary of t1 , . . . , tn . −30 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 n 11 =t).length(t); 12 Sigma choice = zeros(n,n); min(s, %Make the covariance matrix Example 3.1.12 (Brownian Motion). 
Brownian motion is a very important t 4 can do this using an expectation function We 2 12 Sigma = zeros(n,n); sense, the continuous time continuous 5code %Make the mean vectorstochastic process which is, in someFigure 1.5 3.1.3: Plot of an Ornstein-Uhlenbeck process on [0, 1] with α = 1 state space analogue of a simple random walk. It has an expectation function 6 mu = zeros(1,n); µ(t) = EXt 1 µ(t) = 0 and a covariance function r(s, min(s, t). the Hurst parameter. When H = 1/2, fBm 7 for i = 1:n wheret) H0.5= ∈ (0, 1) is called duces to standard Brownian motion. Brownian motion has independent a8 covariance mu(i) function = 0; 0 In contrast, for H > 1/2 fBm has positively correlated increme crements. Listing 3.3: Matlab code 9 end and for−0.5 H < 1/2 fBm has negatively correlated increments. 0.1 0.2 0.3 0.4 PROCESSES 0.5 0.7 0.8 0.9 1 3.1. GAUSSIAN 99 r(s, t)0.6di = moto Cov(X • Simulazione Browniano frazionario (fBm): mesh size h 10 s , Xt ). t 1 %use Listing 3.5: Matlab code Processi stocastici // Simulazione di processi Gaussiani 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 Xt 23 24 25 26 27 28 Xt Processi stocastici // Simulazione di processi Gaussiani −1 %Make the covariance 2 mmatrix = 1000; h = 1/m; t = h : h : 1; %Hurst parameter −1.5 0.2 g.5:these, we can simulate the values a Gaussian process times H = .9; 98 CHAPTER 3. of GAUSSIAN PROCESSES ETC. at fixed 12 Sigma = zeros(n,n); 3 n = length(t); Plot of fractional Brownian motion on [0, 1] with H = 0.9. 11 1 2 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 . , tn by calculating µ where µi = µ(t1 ) and Σ, where Σi,j = r(t%use ) −2and i , tjmesh 4 t size h end 0 m = 1000; h = 1/m; t = h : h : 1; simulating normalthe vector. 5 %Make mean vector end a multivariate ditions for a stochastic process to be stationary are quite strict. n Figure = length(t); 3.1.4: Plot of fractional Brownian motion on [0, 1] with H = 0.1 −0.1 examples, nghtly the weaker following simulate the Gaussian processes at evenly 6 we munormal = zeros(1,n); version of stationarity, called %Generate the multivariate vector weak stationarity is %Make the mean vector 7 for ed timesA t=1 chol(Sigma); =−0.21/h, t2 = 2/h, . . . i, tn= =1:n 1. tead. 3 0.1 4 20 5 21 6 22 7 Xt 23 8 24 25 X = mu + randn(1,n) * A; −0.3 26 8 mu(i) = 0; 9 10 mu = zeros(1,n); 3.1.3 Stationary for i = 1:n mu(i)cesses = 0; end and Weak Stationary Gaussian Pr mple (Brownian Brownian motionprois a very important %Plot Stationary 3.1.16 3.1.12 (Weak Stochastic A stochastic 9 Motion). end Process). −0.4 plot(t,X); .1is said 0.2process 0.3 weak 0.4stationary 0.5 0.8 the 0.9continuous 1 to be (sometimes wide sense stationary hastic which is, in0.6 some0.7sense, time continuous Gaussian processes (and stochastic processes in general) are easier to w t 10 −0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 der stationary) if EX = c for all t ≥ 0 and Cov(X , X ) does with if the theycovariance are stationary stochastic processes. matrix t t t+s space analogue of a simple random walk. It has anmatrix expectation%Make function 11 %Make the covariance t Sigma = zeros(n,n); on t. 2 = 0 and a covariance function r(s, = min(s, t). Definition 12 Sigma = t) zeros(n,n); for i = 1:n 3.1.15 (Stationary Stochastic Process). A stochastic proc 27 11 28 12 13 14 15 Xt Plot of fractional Brownian motion on [0, 1] with = 0.1. Figure1.53.1.5: Plot of fractional Brownian motion on [0, H 1] with H = 0.9. 
mple of a weak stationary Listing process is3.3: the Matlab Ornstein-Uhlenbeck procode 1 −α|t−(t+s)|/2 and Cov(X t +tos) e The conditions fort ,aXstochastic process be = stationary are quite = strict. t = µ(t) = 0 t+s ) = r(t, mesh size Often, ha0.5slightly weaker version of stationarity, called weak stationarity is ionary Gaussian Proinstead. 0 Weak 1000; h required =and 1/m; t = h Stationary : h : 1; −0.5 es length(t); 1.17. Gaussian processes that Stationary are weakStochastic stationary are stationary. Definition 3.1.16 (Weak Process). A stochastic pro- cess {Xt−1 }t≥0 is said to be weak stationary (sometimes wide sense stationary ocesses (and stochastic processes easier work ortheorem second order stationary) if EXin c for all t ≥are 0 and Cov(X t =general) t, X t+s ) does now from 3.1.5 that Gaussian distributions are to entirely −1.5 e the mean vector not depends on t. stationary stochastic processes. by their mean vector and covariance matrix. Since the mean and zeros(1,n); −2 0 0.1 0.2 0.3 0.4 0.5 0.6 0.8 0.9 1 An example a weak stationary process is when the0.7Ornstein-Uhlenbeck prof a weakly stationary ofprocess do not change the times are t −α|t−(t+s)|/2 i = 1:n 1.15 (Stationary cess, as EXt Stochastic = µ(t) = 0 andProcess). Cov(Xt , Xt+sA ) =stochastic r(t, t + s) = eprocess = y s, a weakly −αs/2 stationary Gaussian process is stationary. e0; . 3.1.4:ifPlot mu(i) to be =stationary theof random vectorsmotion (Xt1on, X . . . ,HX=tn0.1. ) Figure fractional Brownian [0,t21], with have distribution forstationary all process, choices of +s , . . . , XLemma tn +s ) process 3.1.17.the Gaussian processes that are Gaussian weak are stationary. tein-Uhlenbeck is asame weak stationary so ..,t . 16 17 18 19 20 21 22 23 24 25 {Xt }for t≥0 jis =said 1:n to be stationary if the random vectors (Xt1 , Xt2 , . . . , X 98 PROCESSES ET and (XtSigma(i,j) . , 1/2 Xtn +s )(t(i)^(2*H) have the 3. same distribution for all choices *CHAPTER +GAUSSIAN t(j)^(2*H)... 1 +s , Xt2 +s , . . = s, n and t1 , t-2 , .(abs(t(i) . . , tn . - t(j)))^(2 * H)); end An example of such a process would be an irreducible positive recurr continuous time Markov chain started from its stationary distribution. end %Generate the multivariate normal vector A = chol(Sigma); X = mu + randn(1,n) * A; 26 27 28 %Plot plot(t,X); 2 1.5 Processi stocastici // Simulazione di processi Gaussiani 0.2 3.1. 0.1 0.3 0.4 PROCESSES 0.5 0.6 GAUSSIAN • Simulazione di t 0.7 0.8 0.9 1 99 moto Browniano frazionario (fBm): 0.2 3. GAUSSIAN PROCESSES ETC. .5: Plot of98fractional BrownianCHAPTER motion on [0, 1] with H = 0.9. 0.1 end 20 21 0 end ditions for a stochastic process to be stationary are quite strict. −0.1 ghtly weaker version of stationarity, %Generate the multivariate normalcalled vector weak stationarity is A = chol(Sigma); −0.2 tead. 23 Xt 22 H > 1/2 incrementi positivamente correlati 24 25 X = mu + randn(1,n) * A; −0.3 26 %Plot Stationary Stochastic Process). A stochastic pro3.1.16 (Weak −0.4 .1is said 0.2 toplot(t,X); 0.3 weak 0.4stationary 0.5 0.6 0.7 0.8 0.9 1 be (sometimes wide sense stationary t −0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9) does 1 der stationary) if EXt = c for all t ≥ 0t and Cov(Xt , Xt+s on t. 2 Plot of fractional Brownian motion on [0, 1] with = 0.1. Figure1.53.1.5: Plot of fractional Brownian motion on [0, H 1] with H = 0.9. mple of a weak stationary process is the Ornstein-Uhlenbeck pro1 and Cov(X t +tos) e−α|t−(t+s)|/2 The conditions fort ,aXstochastic process be = stationary are quite = strict. 
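As a quick sanity check (a worked step of our own, not from the notes), the fBm covariance does reduce to the Brownian motion covariance at H = 1/2: for s ≤ t,

\[
\frac{1}{2}\left(t^{2H} + s^{2H} - |t-s|^{2H}\right)\Big|_{H=1/2}
  = \frac{1}{2}\bigl(t + s - (t - s)\bigr) = s = \min(s,t).
\]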
3.1.3 Stationary and Weak Stationary Gaussian Processes

Gaussian processes (and stochastic processes in general) are easier to work with if they are stationary stochastic processes.

Definition 3.1.15 (Stationary Stochastic Process). A stochastic process {Xt}t≥0 is said to be stationary if the random vectors (Xt1, Xt2, . . . , Xtn) and (Xt1+s, Xt2+s, . . . , Xtn+s) have the same distribution for all choices of s, n and t1, t2, . . . , tn.

An example of such a process would be an irreducible positive recurrent continuous time Markov chain started from its stationary distribution.

The conditions for a stochastic process to be stationary are quite strict. Often, a slightly weaker version of stationarity, called weak stationarity, is required instead.

Definition 3.1.16 (Weak Stationary Stochastic Process). A stochastic process {Xt}t≥0 is said to be weak stationary (sometimes wide sense stationary or second order stationary) if E Xt = c for all t ≥ 0 and Cov(Xt, Xt+s) does not depend on t.

An example of a weak stationary process is the Ornstein-Uhlenbeck process, as E Xt = µ(t) = 0 and, for s ≥ 0, Cov(Xt, Xt+s) = r(t, t + s) = exp{−α|t − (t + s)|/2} = exp{−αs/2}.

Lemma 3.1.17. Gaussian processes that are weak stationary are stationary.

Proof. We know from theorem 3.1.5 that Gaussian distributions are entirely determined by their mean vector and covariance matrix. Since the mean and covariance of a weakly stationary process do not change when the times are shifted by s, a weakly stationary Gaussian process is stationary.

An Ornstein-Uhlenbeck process is a weak stationary Gaussian process, so it is a stationary stochastic process.

The covariance matrix of a stationary Gaussian process evaluated at equidistant times (e.g. t0 = 0, t1 = 1/m, t2 = 2/m, . . . , tn = 1) has a particular structure: because the covariance depends only on the time difference, the matrix is constant along each diagonal, i.e. it is a Toeplitz matrix.
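A minimal Matlab illustration of this structure (our own, not one of the numbered listings): for a weak stationary process the whole covariance matrix is determined by its first row, so it can be built with toeplitz.

% O-U covariance at equidistant times: entry (i,j) depends only on
% |t(i) - t(j)| = |i - j| * h, so the matrix is Toeplitz.
alpha = 10; m = 1000; h = 1/m; t = h : h : 1;
r = exp(-alpha * abs(t - t(1)) / 2);   % first row of the matrix
Sigma = toeplitz(r);                   % same Sigma as in Listing 3.4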
A practical issue in all of these simulation schemes is the tradeoff between discretization error and statistical error. Usually, the cost f of generating one sample path grows at least linearly in the number of mesh points m. In the case of Cholesky decomposition, for example, it is O(m^3). For a fixed level of work, we need to decide how much effort to allocate to reducing discretization error (how large m should be) and how much effort to allocate to reducing statistical error (how many sample paths N should be generated). Finding the optimal tradeoff can be difficult. We will consider such tradeoffs in a number of situations.

3.1.5 Marginal and Conditional Multivariate Normal Distributions

• Let us recall some properties of multivariate (MV) Gaussians.

The multivariate normal distribution has many attractive properties. In particular, its marginal distributions are also multivariate normal. In addition, if we condition on part of a multivariate normal vector, the remaining values are also multivariate normal.

To see this, we need to write normal random vectors in the appropriate form. Let X ∼ N(µ, Σ). We can decompose X into two parts, X = (X1, X2)⊤. We can then write µ as (µ1, µ2)⊤ and Σ as

    Σ = ( Σ11  Σ12
          Σ21  Σ22 ).

Theorem 3.1.24 (Multivariate Normal Marginal Distributions). Given X = (X1, X2)⊤ ∼ N(µ, Σ),

    X1 ∼ N(µ1, Σ11)   and   X2 ∼ N(µ2, Σ22).

Theorem 3.1.25 (Multivariate Normal Conditional Distributions). Given X = (X1, X2)⊤ ∼ N(µ, Σ), X2 conditional on X1 is multivariate normal with

    E[X2 | X1] = µ2 + Σ21 Σ11^(−1) (X1 − µ1)

and

    Var(X2 | X1) = Σ22 − Σ21 Σ11^(−1) Σ12.
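As a concrete instance (a worked example of our own), take the bivariate case with unit variances and correlation ρ:

\[
\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix},
\qquad
X_2 \mid X_1 = x \;\sim\; N\bigl(\mu_2 + \rho\,(x - \mu_1),\; 1 - \rho^2\bigr),
\]

since here \(\Sigma_{21}\Sigma_{11}^{-1} = \rho\) and \(\Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12} = 1 - \rho^2\).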
These conditional formulas can be used to extend an existing simulation by conditional sampling. The following fragment is the surviving part of a longer listing (the setup defining t_temp, n_temp, n_new and the initial Brownian path X, and the opening of the loop that the final end closes, are not reproduced in the source). It samples the new points of a Brownian motion path conditional on the points already simulated:

%Make a covariance matrix for the whole thing
Sigma_temp = zeros(n_temp,n_temp);
for i = 1:n_temp
    for j = 1:n_temp
        Sigma_temp(i,j) = min(t_temp(i),t_temp(j));
    end
end

%Make the separate Sigma components
Sigma_11 = Sigma;
Sigma_21 = Sigma_temp(n+1:n_temp, 1:n);
Sigma_12 = Sigma_temp(1:n, n+1:n_temp);
Sigma_22 = Sigma_temp(n+1:n_temp, n+1:n_temp);

%Conditional mean and covariance (theorem 3.1.25), then sample
temp_mean = Sigma_21 * inv(Sigma_11) * X;
Sigma_new = Sigma_22 - Sigma_21 * inv(Sigma_11) * Sigma_12;
X_new = chol(Sigma_new)' * randn(n_new,1) + temp_mean;
X = [X; X_new];
t = t_temp;
n = n_temp;
Sigma = Sigma_temp;
[dummy index] = sort(t);
another_dummy = waitforbuttonpress;
plot([0; t(index)'],[0; X(index)]);
axis([0 1 -2.5 2.5]);
end
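For completeness, here is a minimal self-contained reconstruction of the missing enclosing loop (our own sketch, under the assumption that each pass inserts the midpoints of the current mesh and samples them conditionally on the existing path; variable names follow the fragment above):

% Refine a Brownian motion path by conditional simulation.
n = 4; t = (1:n)'/n;                         % coarse grid on (0,1]
Sigma = min(repmat(t,1,n), repmat(t',n,1));  % BM covariance
X = chol(Sigma)' * randn(n,1);               % initial coarse path
for level = 1:5
    ts = sort(t);
    t_new = ([0; ts(1:end-1)] + ts) / 2;     % midpoints of current mesh
    t_temp = [t; t_new];
    n_new = length(t_new); n_temp = n + n_new;
    Sigma_temp = min(repmat(t_temp,1,n_temp), repmat(t_temp',n_temp,1));
    Sigma_11 = Sigma;
    Sigma_21 = Sigma_temp(n+1:n_temp, 1:n);
    Sigma_12 = Sigma_temp(1:n, n+1:n_temp);
    Sigma_22 = Sigma_temp(n+1:n_temp, n+1:n_temp);
    temp_mean = Sigma_21 * (Sigma_11 \ X);   % conditional mean
    Sigma_new = Sigma_22 - Sigma_21 * (Sigma_11 \ Sigma_12);
    X_new = chol(Sigma_new)' * randn(n_new,1) + temp_mean;
    X = [X; X_new]; t = t_temp; n = n_temp; Sigma = Sigma_temp;
    [t_sorted, index] = sort(t);
    plot([0; t_sorted], [0; X(index)]);
    pause;                                   % press a key for next level
end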
Set X3.1.2 µ + σ Z. σi, i σ √ Algorithm(i)3.1.2 a Markovian Gaussian Process). t1 = 1 i,i i,i √ (i) X Draw Za1 ∼ N(0, 1). Xt = µProcess). + σi,i Z. 3.1.2 (Generating Markovian Gaussian (i) Algorithm Draw Z ∼ N(0, Set Z. Set t1, = (ii)1).For i =Draw . . .µ,Z m+ 1σi,i draw Z ∼XN(0, 1)1 and√set (i) ∼−N(0, 1). Set σi,i Z. t = µ1 + √ √ ⎛ % Algorithm (Generating a⎞Markovian Gaussian Process). For = .Set .m,N(0, m − draw Z N(0, 1) 3.1.2 and set = 1, . µ.σ.1) ,+ m −σ1i,i draw Z ∼ N(0, 1) and set (i) Draw(ii) Z(ii)∼ N(0, X1t(ii) =1For µ + Z. (i)ii Draw Set = Z. t N(0, 1∼ i,i For =1). 1,1,. .Z ..,∼ − draw Z1iX ∼ and set 11). 2 (ii) For i = 1, . . . σ ,m − 1 draw Z ∼ N(0, 1) and set σ ⎛ i,i+1 % − i,i+1 ⎠ ⎛ ⎛ ⎞2Z. ⎞⎞ √ % =Zµi∼+N(0, (X µi )% + ⎝ ⎞σi+1,i+1 t t − (ii) For i = 1, . . . , m − 1Xdraw 1) and set ⎛ % σi,i+1 (i) Draw Z ∼ N(0, 1). Set X µ + σi,i Z. 2 σi, i set σi,i+1 (ii) For i = 1, . . . , m − 1 drawσi,i+1 Z∼ N(0, 1 = 2σi,it− σi,i+1 2 1⎠ Z. Xµt 1) =and µi +σ⎛σ% (X µi ⎠ ) +Z.⎝⎞ σi+1,i+1 σ t − σ σ(Xi,i+1 ⎝ i,i+1 i,i+1 i,i+1 Xt = µ i + − ) + − t i ⎠ Z. X (Xt σ− µσi )2 + ⎝⎞ t t= Xti+1 = µiσi, +i σi,i+1 (X −µiµ+ii+1,i+1 )σi, +ii⎝ σi,ii+1,i+1 −σi+1,i+1 −⎠σσi,iZ. i⎛% ⎝ i,i+1 ⎠ σi, i,i N(0, 1) and set (ii) For i = 1, . . . , m − 1 draw Z ∼ X = µ + (X − µ ) + Z. σ − σi, i σ t i t i i+1,i+1 i,i Example 3.1.27 Brownian motion, which is 2σi,i σi, i (Brownian Motion). Consider σConsider σ i,i+1 i,i+1 a Markov process. Now, ⎞ Example 3.1.27 (Brownian Motion). Brownian motion, which⎛ is% ⎝ ⎠ Xti+1 =3.1.27 µi + (Brownian (XMotion). µi ) Consider + σ − ti − 3.1.27 i+1,i+1 Example Brownian motion, whichZ. is Motion). Brownian motion, which is 2 aiExample Markov process. (Brownian Now, µi = EX t = 0 Consider σi, σ i,i σ σi,i+1is a Markov process. Now,(Brownian Example 3.1.27 Motion). Consider Brownian motion, a Markov process. Now, µX EX 0i + which Example 3.1.27 (Brownian Motion). Consider Brownian motion, which is⎝ σi+1,i+1 − i,i+1 ⎠ Z i = t = = µ (X − µ ) + t t i and i i+1 µ = EX = 0 a Markov process. Now, i t µ = EXt = 0 σi, i σi,i and a Markov process. Now, σ t == min(tii, ti+1 ) = ti . µ 0 i = EXi,i+1 and and = min(t ,motion, t ) = t . which is xample 3.1.27 (Brownian Motion). Consider σBrownian nd i i i i+1 i i i i+1 i i i+1 i+1 i i i i+1 i+1 i i i i+1 i+1 i+1 i i+1 i+1 i+1 i i+1 i i i+1 i+1 i i 1 1 1 1 1 i i+1 i i+1 i i+1 i i+1 i+1 i i i i i i distribution. In particular, σi,i+1 σi,i+1 (i) Draw Z ∼ N(0, 1). Set Xt1 = µ1 + σi,i Z. . (xi − µi ) , σi+1,i+1 − ! " !! " ! "" σi, i σ (ii) For i,ii = 1, . . . , m set Xti− 1 draw µZi ∼ N(0, σi,i 1) σand (ii)∼ For − 1 i,i+1 draw Z. ∼ N(0, 1) and se N i = 1, ., . . , m Xti+1 µi+1 σi,i+1 σ⎛ i+1,i+1 Algorithm 3.1.2 (Generating a Markovian Gaussian Process). % ⎛%⎞ 2 √ σi,i+1 Using theorem 3.1.25, we σ have (i) Draw Z ∼ N(0, 1). Set Xt1 = µ1 + σi,i Z. i,i+1 σi,i+1 ⎠ Xti+1 = µi + ! (XXtiti+1 −µ − =i )µi++⎝ σi+1,i+1 (Xti 2− µ ) "i + ⎝ σiZ σi, i σi,i+1 σi,i σi,i+1 σi, i (ii) For i = 1, . . . , m − 1 draw Z ∼ N(0, 1) and set . X | Xt i = xi ∼ N µ i + (xi − µi ) , σi+1,i+1 − ⎛% ⎞ ti+1 σi, i σi,i 2 σi,i+1 σi,i+1 ⎠ Z. 3.1.2 (Generating Xti+1 = µi + (Xti − µi ) + ⎝ σi+1,i+1 − Algorithm a 3.1.27 Markovian Gaussian Process). ExampleMotion). (Brownian Motion). Consider B Example (Brownian Consider Brownian motion, σi, i σi,i 3.1.27 √Now, a Markov process. (i) process. Draw Z ∼ Now, N(0, 1). Set Xt1 = µ1 + σi,i Z. a Markov Gaussiano Markoviano • Moto Browniano: è un processo Example 3.1.27 (Brownian Motion). 
• Brownian motion is a Markovian Gaussian process.

Example 3.1.27 (Brownian Motion). Consider Brownian motion, which is a Markov process. Now,

    µi = E Xti = 0

and

    σi,i+1 = min(ti, ti+1) = ti.

So, our updating formula is

    Xti+1 = Xti + √(ti+1 − ti) Z.

Listing 3.8: Matlab code
m = 10^4; X = zeros(m,1);
h = 1/m;
X(1) = sqrt(h)*randn;
for i = 1:m-1
    X(i+1) = X(i) + sqrt(h) * randn;
end
plot(h:h:1,X);
• The O-U process is a Markovian Gaussian process; it is mean-reverting, since the conditional mean exp{−αh/2} Xti pulls the process back towards its zero mean.

Example 3.1.28 (Ornstein-Uhlenbeck Process). The Ornstein-Uhlenbeck process has

    µi = E Xti = 0

with

    σi,i+1 = exp{−α|ti − ti+1|/2}

and σi,i = 1. So, our updating formula is

    Xti+1 = exp{−α|ti − ti+1|/2} Xti + √(1 − exp{−α|ti − ti+1|}) Z.

Listing 3.9: Matlab code
alpha = 50; m = 10^4;
X = zeros(m,1); h = 1/m;
X(1) = randn;
for i = 1:m-1
    X(i+1) = exp(-alpha * h / 2)*X(i)...
        + sqrt(1 - exp(-alpha * h))*randn;
end
plot(h:h:1,X);

3.2 Brownian Motion