Delayed feedback makes neuronal firing statistics non-Markovian


Bibliographic details
Date: 2012
Authors: Vidybida, A.K.; Kravchuk, K.G.
Format: Article
Language: English
Published in: Український математичний журнал (Ukrainian Mathematical Journal), 2012
Online access: http://dspace.nbuv.gov.ua/handle/123456789/165260
Citation: Delayed feedback makes neuronal firing statistics non-Markovian / A.K. Vidybida, K.G. Kravchuk // Український математичний журнал. – 2012. – Vol. 64, No. 12. – P. 1587–1609. – Bibliogr.: 42 titles. – In English.

UDC 519.21

A. K. Vidybida, K. G. Kravchuk (Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, Kyiv)

DELAYED FEEDBACK MAKES NEURONAL FIRING STATISTICS NON-MARKOVIAN

The instantaneous state of a neural network consists of both the degree of excitation of each neuron and the positions of impulses in the communication lines between the neurons. In neurophysiological experiments, the neuronal firing moments are registered, but not the state of the communication lines. However, future spiking moments depend substantially on the past positions of impulses in the lines. This suggests that the sequence of intervals between firing moments (interspike intervals, ISIs) in the network can be non-Markovian. In this paper, we address this question for the simplest possible neural "network," namely, a single neuron with delayed feedback. The neuron receives excitatory input both from an input Poisson process and from its own output through the feedback line. We obtain exact expressions for the conditional probability density $P(t_{n+1}\mid t_n,\dots,t_1,t_0)\,dt_{n+1}$ and prove that $P(t_{n+1}\mid t_n,\dots,t_1,t_0)$ does not reduce to $P(t_{n+1}\mid t_n,\dots,t_1)$ for any $n\ge 0$. This means that the output ISI stream cannot be represented as a Markov chain of any finite order.

1. Introduction. The activity of many central neurons is seemingly random, which makes it possible to describe the firing activity as a stochastic process [1, 2]. If a single neuron stimulated with a point renewal process is considered, then its firing activity will be renewal as well (adaptation mechanisms are not taken into account here). We now pose the question: is it feasible that a neuron embedded in a recurrent network also has renewal activity?

In a neural network, the main component parts are the neurons and the inter-neuronal communication lines, the axons [3]. These same units are the main ones in most types of artificial neural networks [4]. If so, then the instantaneous dynamical state of a network must include the dynamical states of all the neurons and all the communication lines the network is composed of. The state of a neuron can be described by its degree of excitation. The state of a line consists of information on whether the line is empty or conducts an impulse.
If it does conduct an impulse, then further information about how much time is required for the impulse to reach the end of the line (the time to live) describes the line's state.

In neurophysiological experiments, the triggering (spiking, firing) moments of individual neurons are registered, but not the states of the communication lines. The sequence of intervals between the consecutive moments (interspike intervals, ISIs) is frequently considered as a renewal [5] or Markovian [6-8] stochastic process. For a renewal process, the consecutive ISIs are mutually statistically independent. Moreover, all statistical characteristics of a spike train must then be derivable from the single-ISI probability distribution. Additionally, those characteristics must be the same for a shuffled spike train, obtained by randomly reordering the ISIs, since shuffling does not change the single-ISI probability distribution. On the other hand, experimentally obtained spike trains in the auditory [9] and visual [10] sensory systems do not support the mutual independence of ISIs. This is revealed by calculating the correlation coefficient between adjacent ISIs, which turns out to be nonzero for the experimental spike trains, whereas it must be zero for any renewal process. Also, characteristics such as the Fano factor curve and the firing rate distribution calculated for shuffled spike trains differ qualitatively from those obtained for the intact ones. These observations can be associated with memory effects in the ISI sequence which arise from an underlying non-renewal process. Recently [11], such a possibility was studied experimentally for weakly electric fish electrosensory afferents using high-order interval analysis, count analysis, and Markov-order analysis. The authors conclude that the experimental evidence cannot reject the null hypothesis that the underlying Markov chain model is of order $m$ or higher, or maybe non-Markovian. The limited data sets used in [11] allow one to establish a lower bound for $m$ as $m\ge 7$ for some neural fibers.

What could be the possible sources of such non-renewal, or even non-Markovian, behavior in a real neural network? First, this behavior could be inherited from the non-renewal (non-Markovian) character of the input signal. Second, intrinsic neuronal properties, such as adaptation, could be responsible. In this paper, we show that the presence of delayed recurrent neuronal interconnections is a natural cause of non-Markovian behavior. For this purpose, we consider the simplest possible neural "net," namely, a single neuron with delayed feedback, driven with a Poisson process. As the neuronal model we take the binding neuron, since it allows a rigorous mathematical treatment. We study the output ISI stream of this system and prove that it cannot be represented as a Markov chain of any finite order. This suggests that the activity of any network with delayed interconnections, if presented in terms of neuronal firing moments, should be non-Markovian as well.

2. The object under consideration. 2.1. Binding neuron model. The understanding of the mechanisms of higher brain functions presupposes a continuous reduction from higher activities to lower ones, eventually down to activities in individual neurons, expressed in terms of membrane potentials and ionic currents.
However, a description of the higher brain functions in terms of potentials and currents in parts of individual neurons would be difficult, similarly as it would be difficult to describe the execution of computer programs by a CPU in terms of Kirchhoff's laws. In this connection, it is helpful to abstract from the rules by which a neuron changes its membrane potential to the rules by which the input impulse signals are processed in the neuron and determine its output firing activity. The "coincidence detector" and the "temporal integrator" are examples of such an abstraction; see the discussion in [12]. One more abstraction, the binding neuron (BN) model, was proposed as a signal-processing unit [13] which can operate either as a coincidence detector or as a temporal integrator, depending on the quantitative characteristics of the stimulation applied. This conforms to the behavior of real neurons; see, e.g., [14, 15]. The BN model describes the functioning of a neuron in terms of discrete events, which are the input and output impulses, and the degree of temporal coherence between the input events; see [16] for a detailed description.

Fig. 1. Binding neuron with feedback line under Poisson stimulation. Multiple input lines with Poisson streams are joined into a single one here. ∆ is the delay duration in the feedback line.

Mathematically, the model can be realized as follows. We assume that all input impulses in all input lines are identical. Each input impulse is stored in the BN for a fixed time $\tau$; this $\tau$ is similar to the "tolerance interval" discussed in [17]. All input lines are excitatory. The neuron fires an output impulse if the number of stored impulses, $\Sigma$, is equal to or higher than the threshold value $N_0$. After that, the BN clears its memory and is ready to receive fresh input. That is, every input impulse either disappears by contributing to a triggering event, or is lost after spending $\tau$ units of time in the neuron's internal memory.

The BN model is not general, but it is somewhat inspired by neurons acting as integrators up to a threshold. Its name is suggested by the binding of features/events in large-scale neuronal circuits [18-20]. Its operational simplicity is provided by the fact that the trace of each input impulse disappears entirely after the finite time $\tau$. This is in contrast to more familiar models, in which the traces (excitatory postsynaptic potentials, EPSPs) decay exponentially. For example, in the leaky integrate-and-fire model the EPSP is mimicked as a pure exponential function, whose trace can disappear only after triggering. In the BN model, the EPSP is mimicked as a box function of width/duration $\tau$, and the traces are stored in the neuron no longer than $\tau$ units of time.

Further, we assume that the input stream in each input line is a Poisson one with some intensity $\lambda_i$. In this case, all input lines can be collapsed into a single one delivering a Poisson stream of intensity $\lambda=\sum_i\lambda_i$, see Fig. 1. For the analytic derivations, we use a BN with $N_0=2$ in order to keep the mathematical expressions shorter. It seems that cases with higher thresholds might be treated with the same approach, but even $N_0=3$ without feedback requires additional combinatorial efforts, see [21]. Therefore, the cases of higher threshold are tested here only numerically.
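To make the firing rule concrete, here is a minimal sketch of the BN mechanism just described (an illustration only, not the simulator of Section 7; the class name and interface are our own choices):

```cpp
#include <cstdio>
#include <deque>

// Minimal sketch of the binding neuron (BN) rule of Subsection 2.1:
// each input impulse is stored for tau units of time; the neuron fires
// as soon as N0 impulses are held simultaneously, then clears its memory.
class BindingNeuron {
    double tau_;                 // storage time of one input impulse
    unsigned N0_;                // firing threshold
    std::deque<double> expiry_;  // expiration times of stored impulses
public:
    BindingNeuron(double tau, unsigned N0) : tau_(tau), N0_(N0) {}

    // Deliver one input impulse at time t; return true if the BN fires.
    bool receive(double t) {
        while (!expiry_.empty() && expiry_.front() <= t)
            expiry_.pop_front();        // impulses older than tau are lost
        expiry_.push_back(t + tau_);
        if (expiry_.size() >= N0_) {    // threshold N0 reached: fire
            expiry_.clear();            // memory is cleared after firing
            return true;
        }
        return false;
    }
};

int main() {
    BindingNeuron bn(0.010, 2);         // tau = 10 ms, N0 = 2
    std::printf("%d %d %d\n",
                bn.receive(0.001),      // 0: one impulse stored
                bn.receive(0.004),      // 1: second impulse within tau
                bn.receive(0.020));     // 0: memory was cleared by firing
    return 0;
}
```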
As regards real biological neurons, the number of synaptic impulses in the internal memory necessary to trigger a neuron varies from one [22], through fifty [23], to 60-180 [24] and 100-300 [25].

2.2. Feedback line action. In real neuronal systems, a neuron can form synapses from its axonal branches to its own dendritic tree [26-33]. Synapses of this type are called autapses. Some of the neurons forming autapses are known to be excitatory; see [26, 27, 29, 30, 33] for experimental evidence. As a result, the neuron stimulates itself, obtaining an excitatory impulse after each firing with some propagation delay. We model this situation by assuming that the output impulses of the BN are fed back into the BN's input with delay ∆. This gives the BN with delayed feedback model, Fig. 1. Impulses from the feedback line have the same excitatory action on the BN as those arriving from the input Poisson stream. Namely, each of them is stored in the BN's memory for time $\tau$, after which it disappears completely if not involved in triggering and generating a spike, see Subsection 2.1.

The feedback line either keeps one impulse or keeps no impulses; it cannot convey two or more impulses at the same time. Biological correlates which support this assumption to an extent could be a prolonged refractory time and/or short-term synaptic depression; the latter can have a recovery time of up to 20 s [34]. If the feedback line is empty at the moment of firing, the output impulse enters the line and, after a time interval equal to ∆, reaches the BN's input. If the line already keeps an impulse at the moment of firing, the just-fired impulse ignores the line. Any output impulse of the BN with feedback line may be produced either with an impulse from the line involved or without it. We assume that, just after a neuronal firing and the sending of the output impulse, the line is never empty. This assumption is self-evident for output impulses produced without an impulse from the line, or when the impulse from the line was involved but entered an empty neuron: in the latter case, the second (triggering) impulse comes from the Poisson stream, the neuron fires, and the output impulse goes out and also enters the empty line. On the other hand, if an impulse from the line triggers a BN which already keeps one impulse from the input stream, it may be questionable whether the output impulse is able to enter the line immediately after the line has delivered another impulse. We assume that it does. This means that the refractory time of the biological axon modelled as the feedback line does not exceed ∆. Thus, at the beginning of any output ISI, the line keeps an impulse with time to live $s$, where $s\in\,]0;\Delta]$, that is, $0<s\le\Delta$.

In this paper, we consider the case
$$\Delta<\tau \tag{1}$$
in order to keep the expressions shorter.

3. Statement of the problem. The input stream of impulses which drives the neuronal activity is stochastic. Therefore, the output activity of our system requires a probabilistic description, in spite of the fact that both the BN and the feedback line action mechanisms are deterministic. We treat the output stream of the BN with delayed feedback as a stationary process. (The stationarity of the output stream results both from the stationarity of the input one and from the absence of time-dependent parameters in the BN model, see Subsection 2.1. To ensure stationarity, we also assume that the system is considered after an initial period sufficient to forget the initial conditions.) In order to describe its statistics, we introduce the following basic functions: the joint probability density $P(t_m,t_{m-1},\dots,t_0)$ for $m+1$ successive output ISI durations, and the conditional probability density $P(t_m\mid t_{m-1},\dots,t_0)$ for output ISI durations; $P(t_m\mid t_{m-1},\dots,t_0)\,dt_m$ gives the probability to obtain an output ISI of duration between $t_m$ and $t_m+dt_m$, provided the previous $m$ ISIs had durations $t_{m-1},t_{m-2},\dots,t_0$, respectively.
Here we reproduce a definition from [35].

Definition 1. The sequence of random variables $\{t_j\}$, taking values in $\Omega$, is called a Markov chain of order $n\ge 0$ if
$$\forall m>n\ \forall t_0\in\Omega\ \dots\ \forall t_m\in\Omega:\quad P(t_m\mid t_{m-1},\dots,t_0)=P(t_m\mid t_{m-1},\dots,t_{m-n}),$$
and this equation does not hold for any $n'<n$. In the case of ISIs, one reads $\Omega=\mathbb{R}^{+}$.

In particular, taking $m=n+1$, we obtain the necessary condition
$$P(t_{n+1}\mid t_n,\dots,t_1,t_0)=P(t_{n+1}\mid t_n,\dots,t_1),\qquad t_i\in\Omega,\ i=0,\dots,n+1, \tag{2}$$
required for the stochastic process $\{t_j\}$ to be an $n$th-order Markov chain.

In the next sections we prove the following theorem.

Theorem 1. The output ISI stream of a BN with delayed feedback under Poisson stimulation cannot be represented as a Markov chain of any finite order.

4. Proof outline. In order to prove Theorem 1, we show analytically that equality (2) does not hold for any finite value of $n$; namely, in the exact expression for the conditional probability density $P(t_{n+1}\mid t_n,\dots,t_1,t_0)$, the elimination of the $t_0$-dependence is impossible. For this purpose, we introduce the stream of events $(t,s)$,
$$ts=\{\dots,(t_i,s_i),\dots\},$$
where $s_i$ is the time to live of the impulse in the feedback line at the moment when the ISI $t_i$ starts. We consider the joint probability density $P(t_{n+1},s_{n+1};t_n,s_n;\dots;t_0,s_0)$ for the realization of $n+2$ successive events $(t,s)$ and the corresponding conditional probability density $P(t_{n+1},s_{n+1}\mid t_n,s_n;\dots;t_0,s_0)$ for these events.

Lemma 1. The stream $ts$ is first-order Markovian:
$$\forall n\ge0\ \forall t_0>0\ \forall s_0\in\,]0;\Delta]\ \dots\ \forall t_{n+1}>0\ \forall s_{n+1}\in\,]0;\Delta]:$$
$$P(t_{n+1},s_{n+1}\mid t_n,s_n;\dots;t_0,s_0)=P(t_{n+1},s_{n+1}\mid t_n,s_n), \tag{3}$$
where $\{t_0,\dots,t_{n+1}\}$ is the set of successive ISIs and $\{s_0,\dots,s_{n+1}\}$ are the corresponding times to live.

Proof. In the compound event $(t_{n+1},s_{n+1})$, the time to live $s_{n+1}$ always gets its value before $t_{n+1}$ does. The value of $s_{n+1}$ can be determined unambiguously from the value of $(t_n,s_n)$ (see Subsections 2.2 and 5.2):
$$s_{n+1}=\begin{cases}s_n-t_n, & t_n<s_n,\\ \Delta, & t_n\ge s_n.\end{cases}$$
The only two factors which determine the next ISI duration $t_{n+1}$ are (i) the value of $s_{n+1}$ and (ii) the behavior of the input Poisson stream, under the condition $(t_n,s_n;\dots;t_0,s_0)$, after the moment $\theta$ at which $t_{n+1}$ starts. The value of $s_{n+1}$ does not depend on $(t_{n-1},s_{n-1};\dots;t_0,s_0)$, see above. As regards the input Poisson stream, the condition $(t_n,s_n;\dots;t_0,s_0)$ imposes certain constraints on its behavior before $\theta$. Namely, if $t_i\ne s_i$ for some $0\le i\le n$, one can conclude that an input impulse was obtained just at the end of $t_i$; in the opposite situation, when $t_i=s_i$, one can conclude that in the course of $t_i$ exactly one impulse was obtained from the Poisson stream. But what we need in the definition of $P(t_{n+1},s_{n+1}\mid t_n,s_n;\dots;t_0,s_0)$ is the conditional probability to obtain input impulses at definite moments after $\theta$. For a Poisson stream, this conditional probability does not depend on the conditions imposed before $\theta$. For example, the conditional probability to obtain the first impulse after $\theta$ at $\theta+t$ equals $e^{-\lambda t}\lambda\,dt$, whatever conditions are imposed on the stream before $\theta$. Lemma 1 is proved.
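The deterministic update of the time to live used in this proof can be restated as a one-line function (a sketch; the function name is our own):

```cpp
// Time to live of the line impulse at the start of the next ISI, given
// the current ISI t and current time to live s (0 < s <= Delta). If the
// line discharges during the ISI (t >= s), a fresh output impulse enters
// the line with time to live Delta; otherwise the same impulse remains,
// its time to live reduced by t.
double next_time_to_live(double t, double s, double Delta) {
    return (t < s) ? (s - t) : Delta;
}
```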
In order to find the conditional probability density $P(t_{n+1}\mid t_n,\dots,t_1,t_0)$, the following steps should be performed.

Step 1. Use property (3) for calculating the joint probability density of the events $(t,s)$:
$$P(t_{n+1},s_{n+1};t_n,s_n;\dots;t_0,s_0)=P(t_{n+1},s_{n+1}\mid t_n,s_n)\cdots P(t_1,s_1\mid t_0,s_0)\,P(t_0,s_0), \tag{4}$$
where $P(t,s)$ and $P(t_n,s_n\mid t_{n-1},s_{n-1})$ denote the stationary probability density and the conditional probability density (transition probability) for the events $(t,s)$.

Step 2. Represent the joint probability density for successive output ISI durations as a marginal probability by integrating over the variables $s_i$, $i=0,1,\dots,n+1$:
$$P(t_{n+1},t_n,\dots,t_0)=\int_0^{\Delta}ds_0\int_0^{\Delta}ds_1\dots\int_0^{\Delta}ds_{n+1}\,P(t_{n+1},s_{n+1};t_n,s_n;\dots;t_0,s_0). \tag{5}$$

Step 3. Use the definition of the conditional probability density:
$$P(t_{n+1}\mid t_n,\dots,t_1,t_0)=\frac{P(t_{n+1},t_n,\dots,t_0)}{P(t_n,\dots,t_0)}. \tag{6}$$

Taking into account Steps 1 and 2, one derives for the joint probability density $P(t_{n+1},\dots,t_0)$
$$P(t_{n+1},t_n,\dots,t_0)=\int_0^{\Delta}ds_0\dots\int_0^{\Delta}ds_{n+1}\,P(t_0,s_0)\prod_{k=1}^{n+1}P(t_k,s_k\mid t_{k-1},s_{k-1}). \tag{7}$$

In the next section, we find exact analytic expressions for the probability densities $P(t,s)$ and $P(t_k,s_k\mid t_{k-1},s_{k-1})$ and perform the integration in (7). We then apply Step 3 above to find expressions for the conditional probability density $P(t_{n+1}\mid t_n,\dots,t_1,t_0)$. It turns out that the conditional probability density has a singular part of the Dirac δ-function type. This is because the system's dynamics involves the discrete events of the neuron obtaining an impulse (see below). The presence of δ-functions in (7) requires a more precise definition of the integration domain. Namely, it follows from what is said at the end of Subsection 2.2 that the event $(t,0)$ has zero probability, whereas the event $(t,\Delta)$ has positive probability for any $t>0$. Therefore, each integration in (7) should be performed over the half-open interval $]0;\Delta]$. In order to prove that equality (2) does not hold for any $n\ge0$, we use the singular parts only.

5. Main calculations. 5.1. Probability density P(t, s) for events (t, s). The probability density $P(t,s)$ can be derived as the product
$$P(t,s)=F(t\mid s)\,f(s), \tag{8}$$
where $f(s)$ denotes the stationary probability density for the time to live of the impulse in the feedback line at the moment an output ISI begins, and $F(t\mid s)$ denotes the conditional probability density for the ISI duration provided that the time to live of the impulse in the feedback line equals $s$ at the moment of this ISI's beginning. Exact expressions for both $f(s)$ and $F(t\mid s)$ are given in [36] (Eqs. (5), (6) and (31)).
In this paper we need only singular parts of those expressions, which read: F sing(t | s) = λse−λsδ(t− s), (9) f sing(s) = aδ(s−∆), where a = 4e2λ∆ (3 + 2λ∆)e2λ∆ + 1 , (10) where a gives the probability to obtain the impulse in the feedback line with time to live equal ∆ at the beginning of an arbitrary ISI, λ is the input Poisson stream intensity. The presence of δ-functions in F (t | s) can be explained as follows. The probability to obtain an output ISI of duration t, which exactly equals s, is not infinitesimally small. Due to (1), it equals to the probability to obtain exactly one impulse from the Poisson stream during time interval ]0; s[, which is λse−λs. The second impulse comes from the line and triggers the neuron exactly s units of time after the previous triggering. So, we have the non-zero probability to obtain an output ISI, which duration equals exactly s. This gives the δ-function at t = s in the probability density F (t | s). The probability to have time to live, s, exactly equal ∆ at the moment of an output ISI beginning is not infinitesimally small as well. Every time, when the line is free at the moment of triggering, the impulse enters the line and has time to live equal ∆. For the line to be free from impulses at the moment of triggering, it is necessary that t ≥ s for the previous ISI. The set of realizations of the input Poisson process, each realization satisfying t ≥ s, has non-zero probability a, see (10) and [36], and this gives the δ-function at s = ∆ in the probability density f(s). The output ISI probability density P (t) can be obtained as the result of integration of (8) (see [36] for details): ISSN 1027-3190. Укр. мат. журн., 2012, т. 64, № 12 1594 A. K. VIDYBIDA, K. G. KRAVCHUK P (t) = ∆∫ 0 F (t|s)f(s) ds. (11) Examples of P (t) and f(s) graphs are given in Fig. 2. 5.2. Conditional probability density P (tk, sk | tk−1, sk−1). Here we find the conditional probability density P (tk, sk | tk−1, sk−1) for events (tk, sk), which determines the probability to obtain the event (tk, sk), with precision dtk dsk, provided the previous event was (tk−1, sk−1). By definition of conditional probabilities, the probability density wanted can be represented as the following product: P (tk, sk | tk−1, sk−1) = F (tk | sk, tk−1, sk−1)f(sk | tk−1, sk−1), (12) where F (tk | sk, tk−1, sk−1) denotes conditional probability density for ISI duration, tk, provided i) this ISI started with time to live of impulse in the feedback line equal to sk, and ii) previous (t, s)-event was (tk−1, sk−1); the f(sk | tk−1, sk−1) denotes conditional probability density for times to live of impulse in the feedback line under condition ii). It is obvious, that F (tk | sk, tk−1, sk−1) = F (tk | sk), (13) because with sk being known, the previous event (tk−1, sk−1) does not add any information, useful to predict tk (compare with the proof of Lemma 1). In order to find the probability density f(sk | tk−1, sk−1), let us consider various possible relations between tk−1 and sk−1. If tk−1 ≥ sk−1, the line will have time to get free from the impulse during the ISI tk−1. That is why, at the beginning of the ISI tk, an output spike will enter the line and will have time to live sk = ∆ with probability 1. Therefore, the probability density contains the corresponding δ-function: f(sk | tk−1, sk−1) = δ(sk −∆), tk−1 ≥ sk−1. (14) If tk−1 < sk−1, then the ISI tk−1 ends before the impulse leaves the feedback line. 
5.2. Conditional probability density P(t_k, s_k | t_{k-1}, s_{k-1}). Here we find the conditional probability density $P(t_k,s_k\mid t_{k-1},s_{k-1})$ for the events $(t_k,s_k)$, which determines the probability to obtain the event $(t_k,s_k)$, with precision $dt_k\,ds_k$, provided that the previous event was $(t_{k-1},s_{k-1})$. By the definition of conditional probabilities, the desired probability density can be represented as the product
$$P(t_k,s_k\mid t_{k-1},s_{k-1})=F(t_k\mid s_k,t_{k-1},s_{k-1})\,f(s_k\mid t_{k-1},s_{k-1}), \tag{12}$$
where $F(t_k\mid s_k,t_{k-1},s_{k-1})$ denotes the conditional probability density for the ISI duration $t_k$ provided that (i) this ISI started with the time to live of the impulse in the feedback line equal to $s_k$ and (ii) the previous $(t,s)$-event was $(t_{k-1},s_{k-1})$; $f(s_k\mid t_{k-1},s_{k-1})$ denotes the conditional probability density for the time to live of the impulse in the feedback line under condition (ii).

It is obvious that
$$F(t_k\mid s_k,t_{k-1},s_{k-1})=F(t_k\mid s_k), \tag{13}$$
because, with $s_k$ known, the previous event $(t_{k-1},s_{k-1})$ does not add any information useful for predicting $t_k$ (compare with the proof of Lemma 1).

In order to find the probability density $f(s_k\mid t_{k-1},s_{k-1})$, let us consider the possible relations between $t_{k-1}$ and $s_{k-1}$. If $t_{k-1}\ge s_{k-1}$, the line has time to get free of its impulse during the ISI $t_{k-1}$. Therefore, at the beginning of the ISI $t_k$, an output spike enters the line and has time to live $s_k=\Delta$ with probability 1, and the probability density contains the corresponding δ-function:
$$f(s_k\mid t_{k-1},s_{k-1})=\delta(s_k-\Delta),\qquad t_{k-1}\ge s_{k-1}. \tag{14}$$
If $t_{k-1}<s_{k-1}$, the ISI $t_{k-1}$ ends before the impulse leaves the feedback line. Therefore, at the beginning of $t_k$, the line still keeps the same impulse as at the beginning of $t_{k-1}$. This impulse has time to live exactly equal to $s_k=s_{k-1}-t_{k-1}$, so
$$f(s_k\mid t_{k-1},s_{k-1})=\delta(s_k-s_{k-1}+t_{k-1}),\qquad t_{k-1}<s_{k-1}. \tag{15}$$
Taking all this together, for the conditional probability density $P(t_k,s_k\mid t_{k-1},s_{k-1})$ one obtains
$$P(t_k,s_k\mid t_{k-1},s_{k-1})=\begin{cases}F(t_k\mid s_k)\,\delta(s_k-\Delta), & t_{k-1}\ge s_{k-1},\\[2pt] F(t_k\mid s_k)\,\delta(s_k-s_{k-1}+t_{k-1}), & t_{k-1}<s_{k-1},\end{cases} \tag{16}$$
where the exact expression for $F(t\mid s)$ is given in [36] (Eqs. (5), (6)).

5.3. Joint probability density P(t_{n+1}, ..., t_0). In this section, we find the exact analytic expression for the joint probability density $P(t_{n+1},\dots,t_0)$ on the following domain:
$$D_1=\Big\{(t_0,\dots,t_n)\ \Big|\ \sum_{i=0}^{n}t_i<\Delta\Big\}. \tag{17}$$
It is worth noticing that a set of $n+1$ successive ISI durations $t_0,\dots,t_n$ has a nonzero probability, $p_\Delta>0$, to fall into the domain (17). Indeed, a BN with threshold $N_0=2$ requires $2(n+1)$ input impulses within the time window $]0;\Delta[$ in order to be triggered $n+1$ times within this window (condition (1) ensures that no input impulse is lost). The BN receives impulses both from the Poisson stream and from the feedback line, but no more than one impulse from the line can reach the BN's input during a time interval shorter than $\Delta$. Therefore, the other $2n+1$ impulses must be received from the Poisson stream. On the other hand, if as many as $2(n+1)$ input impulses are received from the Poisson stream during the time interval $]0;\Delta[$, the inequality in (17) holds for sure, no matter whether an impulse from the feedback line was involved or not. Therefore, $p_\Delta>p(2n+2,\Delta)>0$, where $p(i,\Delta)=e^{-\lambda\Delta}(\lambda\Delta)^i/i!$ gives the probability to obtain $i$ impulses from the Poisson stream during a time interval $\Delta$ [37].

For a fixed $(n+1)$-tuple $(t_0,\dots,t_n)\in D_1$, let us split the integration domain for $s_0$ in (7) as follows:
$$]0;\Delta]=\,]0;t_0]\ \cup\ ]t_0;t_0+t_1]\ \cup\ ]t_0+t_1;t_0+t_1+t_2]\ \cup\dots\cup\ ]t_0+t_1+\dots+t_n;\Delta],$$
or
$$\int_0^{\Delta}ds_0=\int_0^{t_0}ds_0+\sum_{i=1}^{n}\int_{\sum_{j=0}^{i-1}t_j}^{\sum_{j=0}^{i}t_j}ds_0+\int_{\sum_{j=0}^{n}t_j}^{\Delta}ds_0,$$
and introduce the notations
$$I_i=\int_{\sum_{j=0}^{i-1}t_j}^{\sum_{j=0}^{i}t_j}ds_0\int_0^{\Delta}ds_1\dots\int_0^{\Delta}ds_{n+1}\,P(t_0,s_0)\prod_{k=1}^{n+1}P(t_k,s_k\mid t_{k-1},s_{k-1}),\qquad i=0,1,2,\dots,n, \tag{18}$$
$$I_{n+1}=\int_{\sum_{j=0}^{n}t_j}^{\Delta}ds_0\int_0^{\Delta}ds_1\dots\int_0^{\Delta}ds_{n+1}\,P(t_0,s_0)\prod_{k=1}^{n+1}P(t_k,s_k\mid t_{k-1},s_{k-1}), \tag{19}$$
where we assume that $\sum_{j=j_1}^{j_2}=0$ for $j_1>j_2$. The domain of $s_0$ values covered by $I_i$, $i=0,\dots,n$, corresponds to the scenario in which the impulse that was in the feedback line at the beginning of the ISI $t_0$ (with time to live $s_0$) reaches the BN during the interval $t_i$, see Fig. 3. In this process, after each firing which starts an ISI $t_k$, $k\le i$, the time to live of the impulse in the feedback line is decreased exactly by $t_{k-1}$.
Then, during the time interval ti the line discharges its impulse to BN input, and at the beginning of ti+1 starts to convey the new one with time to live si+1 = ∆. After that, times to live sk are again decreased by corresponding tk with each firing, k = i+ 1, . . . , n. tk−1. This means, that variables of integration {s0, . . . , sn+1}, above, are not actually independent, but must satisfy the following relations: sk = s0 − k−1∑ j=0 tj , k = 1, . . . , i, (20) which are also ensured by δ-function in the bottom line of (16). Next to si time to live must be equal ∆: si+1 = ∆, (21) and this is ensured by δ-function in the top line of (16). The next to si+1 times to live again are decreased by corresponding ISI with each triggering. Due to (17), this brings about another set of relations sk = ∆− k−1∑ j=i+1 tj , k = i+ 2, . . . , n+ 1, (22) which are again ensured by δ-function in the bottom line of (16). Relations (20), (21) and (22) together with limits of integration over s0 in (18) ensure that at D1 the following inequalities hold: sk > tk, k = 0, . . . , i− 1, si ≤ ti, sk > tk, k = i+ 1, . . . , n. (23) ISSN 1027-3190. Укр. мат. журн., 2012, т. 64, № 12 DELAYED FEEDBACK MAKES NEURONAL FIRING STATISTICS NON-MARKOVIAN 1597 Inequalities (23) allow one to decide correctly which part of rhs of (16) should replace each transition probability P (tk, sk | tk−1, sk−1) in (18), and perform all but one integration. This gives Ii = ∑i j=0 tj∫ ∑i−1 j=0 tj ds0 ∆∫ 0 ds1 . . . ∆∫ 0 dsn+1F (t0 | s0)f(s0)× × i∏ k=1 F (tk | sk)δ sk − s0 + k−1∑ j=0 tj F (ti+1 | si+1) δ(si+1 −∆)× × n+1∏ k=i+2 F (tk | sk)δ sk −∆ + k−1∑ j=i+1 tj  = = F tn+1 | ∆− n∑ j=i+1 tj  . . . F (ti+2 | ∆− ti+1)F (ti+1 | ∆)× × ∑i j=0 tj∫ ∑i−1 j=0 tj F ti | s0 − i−1∑ j=0 tj  . . . F (t1 | s0 − t0)F (t0 | s0)f(s0) ds0, i = 0, 1, 2, . . . , n. (24) The last expression might be obtained as well by means of consecutive substitution of either top, or bottom line of (16) into (18), without previously discovering (20) – (23). Finally, integral In+1 corresponds to the case, when at the beginning of interval tn+1, the line still keeps the same impulse as at the beginning of t0. Therefore, In+1 comprises the rest of scenari- os contributing to the value of P (tn+1, . . . , t0) in (5). Proceeding as in the preceding terms, the contribution In+1 reads In+1 = ∆∫ ∑n j=0 tj ds0 ∆∫ 0 ds1 . . . ∆∫ 0 dsn+1F (t0 | s0)f(s0)× × n+1∏ k=1 F (tk | sk)δ sk − s0 + k−1∑ j=0 tj  = = ∆∫ ∑n j=0 tj F tn+1 | s0 − n∑ j=0 tj F tn | s0 − n−1∑ j=0 tj  . . . . . . F (t1 | s0 − t0)F (t0 | s0)f(s0) ds0. (25) ISSN 1027-3190. Укр. мат. журн., 2012, т. 64, № 12 1598 A. K. VIDYBIDA, K. G. KRAVCHUK Taking into account (24) and (25), one obtains the following expression for joint probability density P (tn+1, . . . , t0): P (tn+1, . . . , t0) = n+1∑ i=0 Ii = n∑ i=0 F (ti+1 | ∆) n+1∏ k=i+2 F tk | ∆− k−1∑ j=i+1 tj × × ∑i j=0 tj∫ ∑i−1 j=0 tj F (t0 | s0) f(s0) i∏ k=1 F tk | s0 − k−1∑ j=0 tj ds0 + + ∆∫ ∑n j=0 tj F (t0 | s0) f(s0) n+1∏ k=1 F tk | s0 − k−1∑ j=0 tj ds0, (26) n∑ i=0 ti < ∆, n = 0, 1, . . . , where we assume, that ∑j2 j=j1 = 0 and ∏j2 j=j1 = 1 for j1 > j2. The expression (26) gives the joint probability density P (tn+1, . . . , t0) for consecutive ISI durati- ons at the domain D1 for an arbitrary n. Therefore, the conditional probability density P (tn+1 | tn, . . . , t0) at D1 can be obtained readily, see equation (6). 5.4. Singular part of P (tn+1, . . . , t0). 
In order to obtain the singular part of the expression defined in (26), let us first derive the singular parts of all $I_i$, $i=0,\dots,n$, and of $I_{n+1}$ separately. To keep the expressions shorter, we represent $I_i$ as
$$I_i(t_0,\dots,t_{n+1})=X_i(t_0,\dots,t_i)\,Y_i(t_{i+1},\dots,t_{n+1}),\qquad i=0,1,\dots,n, \tag{27}$$
where
$$X_i\equiv\int_{\sum_{j=0}^{i-1}t_j}^{\sum_{j=0}^{i}t_j}F\Big(t_i\ \Big|\ s_0-\sum_{j=0}^{i-1}t_j\Big)F\Big(t_{i-1}\ \Big|\ s_0-\sum_{j=0}^{i-2}t_j\Big)\cdots F(t_1\mid s_0-t_0)\,F(t_0\mid s_0)\,f(s_0)\,ds_0, \tag{28}$$
$$Y_i\equiv F\Big(t_{n+1}\ \Big|\ \Delta-\sum_{j=i+1}^{n}t_j\Big)F\Big(t_n\ \Big|\ \Delta-\sum_{j=i+1}^{n-1}t_j\Big)\cdots F(t_{i+2}\mid\Delta-t_{i+1})\,F(t_{i+1}\mid\Delta). \tag{29}$$
On the domain considered, namely for $\sum_{i=0}^{n}t_i<\Delta$, the expressions for $F\big(t_n\mid\Delta-\sum_{j=i+1}^{n-1}t_j\big),\dots,F(t_{i+2}\mid\Delta-t_{i+1})$ and $F(t_{i+1}\mid\Delta)$ have no singularities, see (9). Therefore,
$$Y_i^{\rm sing}=F^{\rm sing}\Big(t_{n+1}\ \Big|\ \Delta-\sum_{j=i+1}^{n}t_j\Big)F\Big(t_n\ \Big|\ \Delta-\sum_{j=i+1}^{n-1}t_j\Big)\cdots F(t_{i+2}\mid\Delta-t_{i+1})\,F(t_{i+1}\mid\Delta). \tag{30}$$
At the same time, the integration limits in (28) ensure that $X_i^{\rm sing}=0$. Indeed, each integral $X_i$ (and, originally, $I_i$), $i=0,1,\dots,n$, covers the half-open interval $s_0\in\,]\sum_{j=0}^{i-1}t_j;\sum_{j=0}^{i}t_j]$. The only singularity of the integrand in (28) on this domain is $\delta\big(\sum_{j=0}^{i}t_j-s_0\big)$, which is provided by $F\big(t_i\mid s_0-\sum_{j=0}^{i-1}t_j\big)$, see (9), and it disappears after the integration. Therefore,
$$I_i^{\rm sing}=F^{\rm sing}\Big(t_{n+1}\ \Big|\ \Delta-\sum_{j=i+1}^{n}t_j\Big)\cdots F(t_{i+2}\mid\Delta-t_{i+1})\,F(t_{i+1}\mid\Delta)\int_{\sum_{j=0}^{i-1}t_j}^{\sum_{j=0}^{i}t_j}F\Big(t_i\ \Big|\ s_0-\sum_{j=0}^{i-1}t_j\Big)\cdots F(t_1\mid s_0-t_0)\,F(t_0\mid s_0)\,f(s_0)\,ds_0,\qquad i=0,1,\dots,n. \tag{31}$$

Now consider the singular part of $I_{n+1}$, expression (25). Within the integration domain, the integrand contains two singularities: one for $F\big(t_{n+1}\mid s_0-\sum_{j=0}^{n}t_j\big)$ at $t_{n+1}=s_0-\sum_{j=0}^{n}t_j$, and the other for $f(s_0)$ at $s_0=\Delta$, see (9) and (10). After the integration over $s_0$, the only δ-function that survives is the one provided by $F\big(t_{n+1}\mid\Delta-\sum_{j=0}^{n}t_j\big)$, located at $t_{n+1}=\Delta-\sum_{j=0}^{n}t_j$:
$$I_{n+1}^{\rm sing}=a\,F^{\rm sing}\Big(t_{n+1}\ \Big|\ \Delta-\sum_{j=0}^{n}t_j\Big)F\Big(t_n\ \Big|\ \Delta-\sum_{j=0}^{n-1}t_j\Big)\cdots F(t_1\mid\Delta-t_0)\,F(t_0\mid\Delta), \tag{32}$$
where $a$ is the δ-function's mass in $f(s)$, see (10).

Taking into account (9), (31) and (32), for the singular part of the probability density $P(t_{n+1},\dots,t_0)$ one obtains
$$P^{\rm sing}(t_{n+1},t_n,\dots,t_0)=\sum_{i=0}^{n+1}I_i^{\rm sing}=\sum_{i=0}^{n}A_i(t_{i+1},\dots,t_{n+1})\,\delta\Big(\sum_{j=i+1}^{n+1}t_j-\Delta\Big)+A_{n+1}(t_0,\dots,t_{n+1})\,\delta(t_0+\dots+t_{n+1}-\Delta),\qquad \sum_{i=0}^{n}t_i<\Delta, \tag{33}$$
where $A_i$ and $A_{n+1}$ denote regular factors defined by the expressions
$$A_i(t_{i+1},\dots,t_{n+1})=\lambda t_{n+1}e^{-\lambda t_{n+1}}\,F\Big(t_n\ \Big|\ \Delta-\sum_{j=i+1}^{n-1}t_j\Big)\cdots F(t_{i+2}\mid\Delta-t_{i+1})\,F(t_{i+1}\mid\Delta)\int_{\sum_{j=0}^{i-1}t_j}^{\sum_{j=0}^{i}t_j}F\Big(t_i\ \Big|\ s_0-\sum_{j=0}^{i-1}t_j\Big)\cdots F(t_1\mid s_0-t_0)\,F(t_0\mid s_0)\,f(s_0)\,ds_0,\qquad i=0,1,\dots,n, \tag{34}$$
$$A_{n+1}(t_0,\dots,t_{n+1})=a\,\lambda t_{n+1}e^{-\lambda t_{n+1}}\,F\Big(t_n\ \Big|\ \Delta-\sum_{j=0}^{n-1}t_j\Big)\cdots F(t_1\mid\Delta-t_0)\,F(t_0\mid\Delta). \tag{35}$$

The presence of the δ-functions in the joint probability density $P(t_{n+1},\dots,t_0)$ can additionally be explained as follows. If the impulse enters the line at the beginning of the $(i+1)$-th ISI, then the output interval $t_{n+1}$ starts with that same impulse in the feedback line, with time to live equal to $s_{n+1}=\Delta-\sum_{j=i+1}^{n}t_j$. To trigger the BN after a time exactly equal to $s_{n+1}$, it is necessary to obtain one impulse from the Poisson stream during the time interval $s_{n+1}$. This event has nonzero probability; therefore, we have a nonzero probability of an output ISI exactly equal to $s_{n+1}$: $t_{n+1}=\Delta-\sum_{j=i+1}^{n}t_j$.
This gives the corresponding δ-functions in the ISI probability density. The term with $\delta(t_{n+1}+\dots+t_0-\Delta)$ corresponds to the case when the impulse enters the line at the beginning of $t_0$.

From (6) and (33) one easily derives the following expression for the singular part of the conditional probability density:
$$P^{\rm sing}(t_{n+1}\mid t_n,\dots,t_0)=\frac{1}{P(t_n,\dots,t_0)}\sum_{i=0}^{n}A_i\,\delta\Big(\sum_{j=i+1}^{n+1}t_j-\Delta\Big)+\frac{A_{n+1}}{P(t_n,\dots,t_0)}\,\delta(t_0+\dots+t_{n+1}-\Delta),\qquad \sum_{i=0}^{n}t_i<\Delta, \tag{36}$$
where $A_i$ and $A_{n+1}$ are defined in (34) and (35). It should be pointed out that the joint probability density $P(t_n,\dots,t_0)$ has no singularities on the domain $t_n<\Delta-\sum_{i=0}^{n-1}t_i$, see (33) with $n-1$ substituted for $n$.

As one can see, the function $P(t_{n+1}\mid t_n,\dots,t_0)$ contains a singularity at $t_{n+1}=\Delta-t_n-t_{n-1}-\dots-t_0$. The dependence of the singular part of $P(t_{n+1}\mid t_n,\dots,t_0)$ on $t_0$ cannot be compensated by any regular summands; therefore, the whole conditional probability density $P(t_{n+1}\mid t_n,\dots,t_0)$ depends on $t_0$. This means that condition (2) does not hold for any $n$ for the output stream of the BN with delayed feedback. Theorem 1 is proved.

6. Particular cases. In the previous sections, we have proven that it is impossible to represent the stream of output ISI durations of a BN with delayed feedback as a Markov chain of any finite order. In particular, the output ISI stream is neither a sequence of independent random variables (and is therefore non-renewal) nor a first-order Markov process. In the course of proving Theorem 1 (see Sections 4 and 5), we obtained the expression for $P(t_{n+1},t_n,\dots,t_0)$ on the domain $\sum_{i=0}^{n}t_i<\Delta$ in the general case of arbitrary $n$, see (26). This allows one to calculate the conditional probability density $P(t_{n+1}\mid t_n,\dots,t_0)$ for $\sum_{i=0}^{n}t_i<\Delta$ and $n=0,1,\dots$. In this section, we consider two particular cases of $P(t_{n+1}\mid t_n,\dots,t_0)$, those with $n=0$ and $n=1$, namely the single-ISI conditional probability density $P(t_1\mid t_0)$ and the double-ISI conditional probability density $P(t_2\mid t_1,t_0)$, and obtain the expressions for $P(t_1\mid t_0)$ and $P(t_2\mid t_1,t_0)$ on the domain (17) as well as on all the other possible domains, which were not used in the proof of Theorem 1.

6.1. Conditional probability density P(t_1 | t_0). In order to derive the exact expression for the conditional probability density $P(t_1\mid t_0)$ of neighbouring ISI durations, we take Steps 1-3 outlined in Section 4 with $n=0$. In the case of $P(t_1\mid t_0)$, there are only three domains on which the expressions should be obtained separately, namely $t_0<\Delta$, $t_0>\Delta$ and $t_0=\Delta$. Performing the integration in (7), one obtains the following expressions for $P(t_1,t_0)$ on these domains:
$$P(t_1,t_0)=F(t_1\mid\Delta)\,P(t_0),\qquad t_0\ge\Delta, \tag{37}$$
$$P(t_1,t_0)=F(t_1\mid\Delta)\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0+\int_{t_0}^{\Delta}F(t_1\mid s_0-t_0)\,F(t_0\mid s_0)\,f(s_0)\,ds_0,\qquad t_0<\Delta. \tag{38}$$
Then, by the definition of conditional probability densities, one obtains
$$P(t_1\mid t_0)=F(t_1\mid\Delta),\qquad t_0>\Delta, \tag{39}$$
$$P(t_1\mid t_0)=\frac{1}{P(t_0)}\Big(F(t_1\mid\Delta)\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0+\int_{t_0}^{\Delta}F(t_1\mid s_0-t_0)\,F(t_0\mid s_0)\,f(s_0)\,ds_0\Big),\qquad t_0<\Delta. \tag{40}$$
It should be pointed out that the output ISI probability density $P(t_0)$ has no singularities on the domain $t_0<\Delta$. Indeed, due to (9)-(11), the only δ-function contained in $P(t_0)$ is located at $t_0=\Delta$, see Fig. 2 (a).
In the vicinity of the point $t_0=\Delta$, the single-ISI conditional probability density can be derived as
$$P(t_1\mid t_0=\Delta)=\lim_{\varepsilon\to0}\frac{\int_{\Delta-\varepsilon}^{\Delta+\varepsilon}dt_0\,P(t_1,t_0)}{\int_{\Delta-\varepsilon}^{\Delta+\varepsilon}dt_0\,P(t_0)}=\lim_{\varepsilon\to0}\frac{\int_{\Delta-\varepsilon}^{\Delta+\varepsilon}dt_0\,P(t_1,t_0)}{\int_{\Delta-\varepsilon}^{\Delta+\varepsilon}dt_0\,a\lambda\Delta e^{-\lambda\Delta}\,\delta(t_0-\Delta)}. \tag{41}$$
The integrand in the numerator of (41) also contains a singularity at $t_0=\Delta$, due to the term $P(t_0)$ in (37). The integration in (41) just yields the δ-functions' masses in both the numerator and the denominator, and delivers
$$P(t_1\mid t_0)=F(t_1\mid\Delta),\qquad t_0=\Delta. \tag{42}$$

Expressions (39), (40) and (42) can be understood as follows. When $t_0\ge\Delta$, one can be sure that the line has had time to get free of its impulse during $t_0$; therefore, at the moment of the next firing (at the beginning of $t_1$), the impulse enters the line and has time to live equal to $\Delta$. In the case $t_0<\Delta$, see (40), two possibilities arise: the first term corresponds to the scenario in which the feedback line discharges the conveyed impulse within the time interval $t_0$, and the second one represents the case in which, at the beginning of $t_1$, the line still keeps the same impulse as at the beginning of $t_0$.

It can be shown that the following normalization conditions hold:
$$\int_0^{\infty}dt_1\,P(t_1\mid t_0)=1\qquad\text{and}\qquad\int_0^{\infty}dt_0\,P(t_1,t_0)=P(t_1).$$

The singular part of $P(t_1\mid t_0)$ can be easily extracted:
$$P^{\rm sing}(t_1\mid t_0)=e^{-\lambda\Delta}\lambda\Delta\,\delta(t_1-\Delta),\qquad t_0\ge\Delta, \tag{43}$$
$$P^{\rm sing}(t_1\mid t_0)=\frac{\lambda t_1e^{-\lambda t_1}}{P(t_0)}\Big(\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0\,\delta(t_1-\Delta)+a\,F(t_0\mid\Delta)\,\delta(t_0+t_1-\Delta)\Big),\qquad t_0<\Delta. \tag{44}$$
Obviously, expression (44) could be obtained directly from (34)-(36) by substituting $n=0$. As can be seen from (43) and (44), the number of δ-functions in $P(t_1\mid t_0)$ and their positions depend on $t_0$; therefore, the conditional probability density $P(t_1\mid t_0)$ cannot be reduced to the output ISI probability density $P(t_1)$. Hence the neighbouring output ISIs of the BN with delayed feedback are correlated, as expected.

Examples of $P(t_1\mid t_0)$, found for two domains numerically by means of the Monte-Carlo method (see Section 7 for details), are shown in Fig. 4.

Fig. 4. Conditional probability density P(t1 | t0) for τ = 10 ms, ∆ = 8 ms, λ = 150 s⁻¹, N₀ = 2; t0 = 6 ms (a) and t0 = 11 ms (b), found numerically by means of the Monte-Carlo method (number of firings accounted, N = 30 000). The different course of P(t1 | t0) for the different t0 values is clearly visible.
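The following sketch (our own) illustrates the selection-and-normalization estimator, described in Section 7, by which conditional histograms such as those in Fig. 4 are built from a simulated ISI train; matching $t_0$ within a small tolerance tol is a choice we introduce for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Estimate P(t1 | t0) from an ISI train: select every ISI that follows an
// ISI equal to t0 (within tolerance tol), histogram it over [0, t_max[,
// and normalize so that the histogram integrates to 1 (heights in 1/s).
std::vector<double> conditional_hist(const std::vector<double>& isi,
                                     double t0, double tol,
                                     double t_max, std::size_t nbins) {
    std::vector<double> h(nbins, 0.0);
    const double dt = t_max / nbins;
    std::size_t count = 0;
    for (std::size_t i = 0; i + 1 < isi.size(); ++i) {
        if (std::fabs(isi[i] - t0) < tol) {   // condition on the previous ISI
            const std::size_t b = static_cast<std::size_t>(isi[i + 1] / dt);
            if (b < nbins) h[b] += 1.0;
            ++count;
        }
    }
    if (count > 0)
        for (double& v : h) v /= count * dt;  // normalize to unit integral
    return h;
}
```

For $P(t_2\mid t_1,t_0)$, the same selection is applied to pairs of preceding ISIs.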
6.2. Conditional probability density P(t_2 | t_1, t_0). In order to derive the exact expression for the conditional probability density $P(t_2\mid t_1,t_0)$ of successive ISI durations, we take Steps 1-3 outlined in Section 4 with $n=1$. In the case of $P(t_2,t_1,t_0)$, there are six domains on which the expressions should be obtained separately, namely the domain $D_1=\{t_1,t_0\mid t_1+t_0<\Delta\}$, which was already utilized in Section 5, and the five remaining ones:
$$D_2=\{t_1,t_0\mid t_0\ge\Delta\ \text{and}\ t_1\ge\Delta\},\qquad D_3=\{t_1,t_0\mid t_0<\Delta\ \text{and}\ t_1\ge\Delta\},$$
$$D_4=\{t_1,t_0\mid t_0\ge\Delta\ \text{and}\ t_1<\Delta\},\qquad D_5=\{t_1,t_0\mid t_0<\Delta\ \text{and}\ \Delta-t_0<t_1<\Delta\},$$
$$d=\{t_1,t_0\mid t_0+t_1=\Delta\}.$$
In the case when the exact equality $t_0+t_1=\Delta$ holds, namely if $(t_1,t_0)\in d$, the product $P(t_2\mid t_1,t_0)\,dt_2$ gives the probability to obtain an output ISI of duration within the interval $[t_2;t_2+dt_2[$, provided that the overall duration of the two previous ISIs exactly equals $\Delta$.

The expressions for $P(t_2\mid t_1,t_0)$ can be found exactly on each domain:
$$P(t_2\mid t_1,t_0)=F(t_2\mid\Delta),\qquad (t_0,t_1)\in D_2, \tag{45}$$
$$P(t_2\mid t_1,t_0)=F(t_2\mid\Delta),\qquad (t_0,t_1)\in D_3, \tag{46}$$
$$P(t_2\mid t_1,t_0)=F(t_2\mid\Delta),\qquad (t_0,t_1)\in d, \tag{47}$$
$$P(t_2\mid t_1,t_0)=F(t_2\mid\Delta-t_1),\qquad (t_0,t_1)\in D_4, \tag{48}$$
$$P(t_2\mid t_1,t_0)=\frac{1}{P(t_1,t_0)}\Big(F(t_2\mid\Delta-t_1)F(t_1\mid\Delta)\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0+F(t_2\mid\Delta)\int_{t_0}^{\Delta}F(t_1\mid s_0-t_0)F(t_0\mid s_0)f(s_0)\,ds_0\Big),\qquad (t_0,t_1)\in D_5, \tag{49}$$
$$P(t_2\mid t_1,t_0)=\frac{1}{P(t_1,t_0)}\Big(F(t_2\mid\Delta-t_1)F(t_1\mid\Delta)\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0+F(t_2\mid\Delta)\int_{t_0}^{t_0+t_1}F(t_1\mid s_0-t_0)F(t_0\mid s_0)f(s_0)\,ds_0+\int_{t_0+t_1}^{\Delta}F(t_2\mid s_0-t_0-t_1)F(t_1\mid s_0-t_0)F(t_0\mid s_0)f(s_0)\,ds_0\Big),\qquad (t_0,t_1)\in D_1, \tag{50}$$
where
$$P(t_1,t_0)=F(t_1\mid\Delta)\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0+\int_{t_0}^{\Delta}F(t_1\mid s_0-t_0)F(t_0\mid s_0)f(s_0)\,ds_0,$$
according to (40). The probability density $P(t_1,t_0)$ contains a δ-function on the domain $d$, see (44).

Fig. 5. Conditional probability density P(t2 | t1, t0) for τ = 10 ms, ∆ = 8 ms, λ = 150 s⁻¹, N₀ = 2; t1 = 13 ms, t0 = 13 ms, (t1, t0) ∈ D2 (a), and t1 = 6 ms, t0 = 13 ms, (t1, t0) ∈ D4 (b), found numerically by means of the Monte-Carlo method (N = 30 000).

In (47), the double-ISI conditional probability density was derived as
$$P(t_2\mid t_1,t_0)=\lim_{\varepsilon\to0}\frac{\int_{\Delta-t_0-\varepsilon}^{\Delta-t_0+\varepsilon}dt_1\,P(t_2,t_1,t_0)}{\int_{\Delta-t_0-\varepsilon}^{\Delta-t_0+\varepsilon}dt_1\,P(t_1,t_0)},\qquad (t_0,t_1)\in d, \tag{51}$$
compare with (41). It can be shown that the numerator in (51) also contains a singularity at $t_0+t_1=\Delta$. The integration in (51) just yields the δ-functions' masses in both the numerator and the denominator, which gives (47).

It is worth noticing that $P(t_1,t_0)$ is a regular function on both $D_1$ and $D_5$, see the denominators in (49) and (50). Indeed, from (43) and (44) one can see that $P(t_1,t_0)$ may include singularities only at the points $t_1=\Delta$ and $t_1=\Delta-t_0$, and neither of these points belongs to $D_1$ or $D_5$.

It can be shown that the following normalization conditions hold:
$$\int_0^{\infty}dt_2\,P(t_2\mid t_1,t_0)=1\qquad\text{and}\qquad\int_0^{\infty}dt_0\,P(t_2,t_1,t_0)=P(t_2,t_1).$$

Fig. 6. Conditional probability density P(t2 | t1, t0) for τ = 10 ms, ∆ = 8 ms, λ = 150 s⁻¹, N₀ = 2; t1 = 6 ms, t0 = 3 ms (a) and t1 = 6 ms, t0 = 1 ms (b), found numerically by means of the Monte-Carlo method (N = 30 000). The different course of P(t2 | t1, t0) for the different t0 values is clearly visible.

The singular part of the conditional probability density $P(t_2\mid t_1,t_0)$ can be derived as follows:
$$P^{\rm sing}(t_2\mid t_1,t_0)=e^{-\lambda t_2}\lambda t_2\,\delta(t_2-\Delta),\qquad (t_0,t_1)\in D_2\cup D_3\cup d, \tag{52}$$
$$P^{\rm sing}(t_2\mid t_1,t_0)=e^{-\lambda t_2}\lambda t_2\,\delta(t_1+t_2-\Delta),\qquad (t_0,t_1)\in D_4, \tag{53}$$
$$P^{\rm sing}(t_2\mid t_1,t_0)=\frac{e^{-\lambda t_2}\lambda t_2}{P(t_1,t_0)}\Big(F(t_1\mid\Delta)\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0\,\delta(t_1+t_2-\Delta)+\int_{t_0}^{\Delta}F(t_1\mid s_0-t_0)F(t_0\mid s_0)f(s_0)\,ds_0\,\delta(t_2-\Delta)\Big),\qquad (t_0,t_1)\in D_5, \tag{54}$$
$$P^{\rm sing}(t_2\mid t_1,t_0)=\frac{e^{-\lambda t_2}\lambda t_2}{P(t_1,t_0)}\Big(\int_{t_0}^{t_0+t_1}F(t_1\mid s_0-t_0)F(t_0\mid s_0)f(s_0)\,ds_0\,\delta(t_2-\Delta)+F(t_1\mid\Delta)\int_0^{t_0}F(t_0\mid s_0)f(s_0)\,ds_0\,\delta(t_1+t_2-\Delta)+a\,F(t_1\mid\Delta-t_0)F(t_0\mid\Delta)\,\delta(t_0+t_1+t_2-\Delta)\Big),\qquad (t_0,t_1)\in D_1. \tag{55}$$
Obviously, expression (55) could be obtained directly from (34)-(36) by substituting $n=1$.
As one can see, the singular part of $P(t_2\mid t_1,t_0)$ depends on $t_0$; therefore, $P(t_2\mid t_1,t_0)$ cannot be reduced to $P(t_2\mid t_1)$, which means that the output stream is not first-order Markovian. Examples of $P(t_2\mid t_1,t_0)$, found numerically for different domains, are shown in Figs. 5 and 6.

Fig. 7. Conditional probability density P(t2 | t1, t0) for τ = 10 ms, ∆ = 8 ms, λ = 800 s⁻¹, N₀ = 4; t1 = 6 ms, t0 = 3 ms (a) and t1 = 6 ms, t0 = 1 ms (b), found numerically by means of the Monte-Carlo method (N = 30 000). The different course of P(t2 | t1, t0) for the different t0 values is clearly visible.

7. Numerical simulation. In order to check the correctness of the obtained analytic expressions, and also to investigate whether the output ISI stream is non-Markovian for a BN with higher thresholds as well as for $N_0=2$, numerical simulations were performed. A C++ program containing a class which models the operation of the BN with delayed feedback was developed. An object of this class receives at its input a sequence of pseudorandom numbers with Poisson probability density. The required sequences were generated by means of utilities from the GNU Scientific Library (http://www.gnu.org/software/gsl/) with the Mersenne Twister generator as the source of pseudorandom numbers. The program contains a function, the time engine, which brings the system to the moment just before the next input signal, bypassing the moments when neither an external Poisson impulse nor an impulse from the feedback line arrives. Thus, only the essential discrete events are accounted for. This allows one to make exact calculations faster than with an algorithm in which time advances gradually by adding small time steps.

The conditional probability densities $P(t_1\mid t_0)$ and $P(t_2\mid t_1,t_0)$ are found by counting the numbers of output ISIs of different durations and normalizing (see Figs. 4-7). For the calculation of the conditional distributions, only those ISIs are selected which follow one or two ISIs of fixed duration, namely $\{t_0\}$ for $P(t_1\mid t_0)$ and $\{t_1,t_0\}$ for $P(t_2\mid t_1,t_0)$. The number, positions and masses of the δ-functions obtained in the numerical experiments for a BN with threshold 2 coincide with those predicted analytically in (43), (44) and (52)-(55). For $N_0>2$, the conditional probability densities $P(t_1\mid t_0)$ and $P(t_2\mid t_1,t_0)$ are similar to those found for $N_0=2$. In particular, both the number and the positions of the δ-functions coincide with those obtained for the BN with threshold 2, as expected; compare Figs. 6 and 7.
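The event-driven scheme described at the beginning of this section can be sketched as follows. This condensed version (our own) uses std::mt19937 and std::exponential_distribution from the C++ standard library in place of the GSL Mersenne Twister utilities used for the paper's figures; all names, the seed, and the printed summary are our choices:

```cpp
#include <cstdio>
#include <deque>
#include <numeric>
#include <random>
#include <vector>

// Event-driven simulation of the BN with delayed feedback under Poisson
// stimulation: time jumps from one input impulse to the next, so only
// the essential discrete events are processed.
int main() {
    const double lambda = 150.0;      // Poisson intensity, 1/s
    const double tau    = 0.010;      // storage time, s
    const double Delta  = 0.008;      // feedback delay, s (Delta < tau)
    const unsigned N0   = 2;          // firing threshold
    const std::size_t n_isi = 30000;  // number of firings to record

    std::mt19937 rng(12345);
    std::exponential_distribution<double> next_gap(lambda);

    std::deque<double> memory;        // expiry times of stored impulses
    std::vector<double> isi;          // recorded interspike intervals
    double next_poisson = next_gap(rng);
    double feedback_at  = -1.0;       // arrival time of line impulse; <0: line empty
    double last_fire    = 0.0;

    while (isi.size() < n_isi) {
        // Next essential event: Poisson arrival or feedback arrival.
        const bool from_line = feedback_at >= 0.0 && feedback_at < next_poisson;
        const double t = from_line ? feedback_at : next_poisson;
        if (from_line) feedback_at = -1.0;     // the line discharges its impulse
        else           next_poisson = t + next_gap(rng);

        while (!memory.empty() && memory.front() <= t)
            memory.pop_front();                // impulses older than tau are lost
        memory.push_back(t + tau);
        if (memory.size() >= N0) {             // threshold reached: fire
            memory.clear();
            isi.push_back(t - last_fire);
            last_fire = t;
            if (feedback_at < 0.0)             // an empty line is refilled
                feedback_at = t + Delta;
        }
    }
    const double mean = std::accumulate(isi.begin(), isi.end(), 0.0) / isi.size();
    std::printf("mean ISI = %g s over %zu firings\n", mean, isi.size());
    return 0;
}
```

The recorded sequence isi can then be passed to an estimator such as conditional_hist above to reproduce conditional densities like those in Figs. 4-7.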
The order of underlying Markov process was estimated in [11] for activity in the weakly electric fish electrosensory system. It was found in [11] that for some neural fibers the Markov order should be at list seven, which does not exclude that the genuine order is higher, or that the activity is non-Markovian. Actually, for proving based on experimental data that a stochastic activity has Markov order m, one needs increasing amount of data with increasing m. If so, it seems impossible to prove experimentally that a stochastic activity is non-Markovian. Similarly as it is impossible to prove experimentally that a number is irrational. We prove here that the output ISI stream of BN with delayed feedback is non-Markovian based on complete knowledge of the mechanism which generates the output stream. In a sense, to have this knowledge is equivalent as to have unlimited amount of experimental data. The main cause of non-Markovianness in our case is the delayed feedback presence. In this connection, it would be interesting to compare our results with recently appeared paper [42], where the discrete time recurrent model network of leaky integrate-and-fire (LIF) neurons is considered. In that model, the inter-neuronal communication lines have zero delays and external input is deterministic, but synaptic weights are subjected to uncorrelated random fluctuations. For that model, it is established that the stochastic process of neuronal firing states will be non-Markovian as well. What could be the reason of non-Markovianness if there are no delays in this model? The answer is that LIF neuron has infinite memory unless its state is reset by firing. The instantaneous firing state of the network specifies which neurons are in the firing state at the given moment of discrete time, but says nothing about the excitation level of neurons, which are quiescent at that moment. This unknown level of excitation is due to firing of neighbouring neurons at earlier moments of time. Therefore, knowledge of the firing states at more and more early moments can more and more improve our predicting ability as regards states of the quiescent neurons at the given moment, and finally, as regards the network firing state at the next moment. Any neighbouring neuron could have its last firing moment infinitely far in the past and this explains why a network without communication delays can as well demonstrate non-Markovian behavior. Given that in the considered system of single neuron with delayed feedback the non-Markovian character arises exclusively from delayed communication, it is natural to consider that this property (non-Markovianness) will be present in any single neuron, whenever delayed feedback constitutes an important input, independently on the specific neuronal model at hand. In a network composed of more than single neuron with feedback, the delayed feedback can be mediated with inter-neurons, and our finding suggests that behavior of such a network will be non-Markovian as well. One should take this facts into account during analysis of neuronal spike trains obtained from any recurrent network with delayed inter-neuronal communication. 1. Gerstein G. L., Mandelbrot B. Random walk models for the spike activity of a single neuron // Biophys. J. – 1964. – 4. – P. 41 – 68. ISSN 1027-3190. Укр. мат. журн., 2012, т. 64, № 12 1608 A. K. VIDYBIDA, K. G. KRAVCHUK 2. Korolyuk V. S., Kostyuk P. G., Pjatigorskii B. Ya., Tkachenko E. P. Mathematical model of spontaneous activity of some neurons in the CNS // Biofizika. 
3. Nicholls J. G., Martin A. R., Wallace B. G., Fuchs P. A. From Neuron to Brain. – Sunderland: Sinauer Associates, 2001.
4. Ghosh-Dastidar S., Adeli H. Spiking neural networks // Int. J. Neural Systems. – 2009. – 19. – P. 295 – 308.
5. Holden A. V. Models of the stochastic activity of neurones // Lect. Notes Biomath. – 1976. – 12.
6. Bressloff P. Stochastic neural field theory and the system-size expansion // SIAM J. Appl. Math. – 2009. – 70. – P. 1488 – 1521.
7. Britvina T., Eggermont J. J. A Markov model for interspike interval distributions of auditory cortical neurons that do not show periodic firings // Biol. Cybernetics. – 2007. – 96. – P. 245 – 264.
8. Buice M. A., Cowan J. D. Statistical mechanics of the neocortex // Progr. Biophys. and Mol. Biol. – 2009. – 99. – P. 53 – 86.
9. Lowen S. B., Teich M. C. Auditory-nerve action potentials form a nonrenewal point process over short as well as long time scales // J. Acoust. Soc. Amer. – 1992. – 92. – P. 803 – 806.
10. Levine M. W. Firing rates of a retinal neuron are not predictable from interspike interval statistics // Biophys. J. – 1980. – 30. – P. 9 – 26.
11. Ratnam R., Nelson M. E. Nonrenewal statistics of electrosensory afferent spike trains: Implications for the detection of weak sensory signals // J. Neurosci. – 2000. – 20, № 17. – P. 6672 – 6683.
12. König P., Engel A. K., Singer W. Integrator or coincidence detector? The role of the cortical neuron revisited // Trends Neurosci. – 1996. – 19, № 4. – P. 130 – 137.
13. Vidybida A. K. Information processing in a pyramidal-type neuron // BioNet’96 — Biologieorientierte Informatik und pulspropagierende Netze, 3rd Workshop, 14 – 15 Nov. 1996 (Berlin). – 1996. – P. 96 – 99.
14. Rudolph M., Destexhe A. Tuning neocortical pyramidal neurons between integrators and coincidence detectors // J. Comput. Neurosci. – 2003. – 14, № 3. – P. 239 – 251.
15. Lundstrom B. N., Hong S., Higgs M. H., Fairhall A. L. Two computational regimes of a single-compartment neuron separated by a planar boundary in conductance space // Neural Comput. – 2008. – 20. – P. 1239 – 1260.
16. Vidybida A. K. Inhibition as binding controller at the single neuron level // BioSystems. – 1998. – 48. – P. 263 – 267.
17. MacKay D. M. Self-organization in the time domain // Self-Organizing Systems / Eds M. C. Yovitts, G. T. Jacobi et al. – Washington: Spartan Books, 1962. – P. 37 – 48.
18. Damasio A. R. The brain binds entities and events by multiregional activation from convergence zones // Neural Comput. – 1989. – 1, № 1. – P. 123 – 132.
19. Eckhorn R., Bauer R., Jordan W., Brosch M., Kruse W., Munk M., Reitboeck H. J. Coherent oscillations: a mechanism for feature linking in the visual cortex? // Biol. Cybernetics. – 1988. – 60, № 2. – P. 121 – 130.
20. Engel A. K., König P., Kreiter A. K., Gray C. M., Singer W. Temporal coding by coherent oscillations as a potential solution to the binding problem: physiological evidence // Nonlinear Dynamics and Neuronal Networks / Eds H. G. Schuster, W. Singer. – Weinheim: VCH, 1991. – P. 3 – 25.
21. Vidybida A. K. Output stream of a binding neuron // Ukr. Math. J. – 2007. – 59, № 12. – P. 1819 – 1839.
22. Miles R. Synaptic excitation of inhibitory cells by single CA3 hippocampal pyramidal cells of the guinea-pig in vitro // J. Physiol. – 1990. – 428. – P. 61 – 77.
23. Barbour B. Synaptic currents evoked in Purkinje cells by stimulating individual granule cells // Neuron. – 1993. – 11. – P. 759 – 769.
24. Andersen P. Synaptic integration in hippocampal neurons // Fidia Res. Found. Neurosci. Award Lect. – New York: Raven Press, 1991. – P. 51 – 71.
25. Andersen P., Raastad M., Storm J. F. Excitatory synaptic integration in hippocampal pyramids and dentate granule cells // Cold Spring Harbor Symp. Quant. Biology. – Cold Spring Harbor: Cold Spring Harbor Laboratory Press, 1990. – P. 81 – 86.
26. Aroniadou-Anderjaska V., Ennis M., Shipley M. T. Dendrodendritic recurrent excitation in mitral cells of the rat olfactory bulb // J. Neurophysiol. – 1999. – 82. – P. 489 – 494.
27. Bekkers J. M., Stevens C. F. Excitatory and inhibitory autaptic currents in isolated hippocampal neurons maintained in cell culture // Proc. Nat. Acad. Sci. USA. – 1991. – 88. – P. 7834 – 7838.
28. Chan-Palay V. The recurrent collaterals of Purkinje cell axons: a correlated study of rat’s cerebellar cortex with electron microscopy and the Golgi method // Z. Anat. und Entwicklungsgesch. – 1971. – 134. – S. 210 – 234.
29. Lübke J., Markram H., Frotscher M., Sakmann B. Frequency and dendritic distribution of autapses established by layer 5 pyramidal neurons in the developing rat neocortex: comparison with synaptic innervation of adjacent neurons of the same class // J. Neurosci. – 1996. – 16. – P. 3209 – 3218.
30. Nicoll R. A., Jahr C. E. Self-excitation of olfactory bulb neurones // Nature. – 1982. – 296. – P. 441 – 444.
31. Park M. R., Lighthall J. W., Kitai S. T. Recurrent inhibition in the rat neostriatum // Brain Res. – 1980. – 194. – P. 359 – 369.
32. Tamás G., Buhl E. H., Somogyi P. Massive autaptic self-innervation of GABAergic neurons in cat visual cortex // J. Neurosci. – 1997. – 17. – P. 6352 – 6364.
33. Van der Loos H., Glaser E. M. Autapses in neocortex cerebri: synapses between a pyramidal cell’s axon and its own dendrites // Brain Res. – 1972. – 48. – P. 355 – 360.
34. Wu Y., Kawasaki F., Ordway R. W. Properties of short-term synaptic depression at larval neuromuscular synapses in wild-type and temperature-sensitive paralytic mutants of Drosophila // J. Neurophysiol. – 2005. – 93. – P. 2396 – 2405.
35. Doob J. L. Stochastic processes. – New York: Wiley, 1953.
36. Vidybida A. K., Kravchuk K. G. Output stream of binding neuron with delayed feedback // Eur. Phys. J. B. – 2009. – 72, № 2. – P. 279 – 287.
37. Feller W. An introduction to probability theory and its applications. – New York: Wiley, 1968. – Vol. 1. – 466 p.
38. Vidybida A. K. Input-output relations in binding neuron // BioSystems. – 2007. – 89. – P. 160 – 165.
39. Vidybida A. K. Output stream of binding neuron with instantaneous feedback // Eur. Phys. J. B. – 2008. – 65. – P. 577 – 584; 2009. – 69. – P. 313.
40. Farkhooi F., Strube-Bloss M. F., Nawrot M. P. Serial correlation in neural spike trains: Experimental evidence, stochastic modelling, and single neuron variability // Phys. Rev. E. – 2009. – 79.
41. Nawrot M. P., Boucsein C., Rodriguez-Molina V. et al. Serial interval statistics of spontaneous activity in cortical neurons in vivo and in vitro // Neurocomputing. – 2007. – 70. – P. 1717 – 1722.
42. Cessac B. A discrete time neural network model with spiking neurons: II: Dynamics with noise // J. Math. Biology. – 2010. – DOI: 10.1007/s00285-010-0358-4.

Received 08.05.12