Modified Learning Algorithm for GMDH-Wavelet-Neuro-Fuzzy-Network in Information Technologies

In the paper a modified learning algorithm for a GMDH-wavelet-neuro-fuzzy network in information technologies is proposed. The structure of the Wavelet-Neuro-Fuzzy-Network is optimized using the Group Method of Data Handling (GMDH), and the structure optimization method is described. Such hybrid systems can be used for solving many tasks, including signal identification and prediction, person authentication, information classification and clustering, and the development of neural-network-based pseudo-random generators in cryptography. Experimental investigations were carried out, and the resulting accuracy of data processing by the optimally constructed Wavelet-Neuro-Fuzzy-Network and by a network with a multilayer feedforward architecture is presented and compared.

Bibliographic Details
Date: 2013
Main author: Vynokurova, O.
Format: Article
Language: English
Published: Міжнародний науково-навчальний центр інформаційних технологій і систем НАН та МОН України, 2013
Series: Індуктивне моделювання складних систем
Subjects: Наукові статті
Online access: http://dspace.nbuv.gov.ua/handle/123456789/83665
Journal: Digital Library of Periodicals of National Academy of Sciences of Ukraine
Citation: Modified Learning Algorithm for GMDH-Wavelet-Neuro-Fuzzy-Network in Information Technologies / O. Vynokurova // Індуктивне моделювання складних систем: Зб. наук. пр. — К.: МННЦ ІТС НАН та МОН України, 2013. — Вип. 5. — С. 130-139. — Бібліогр.: 14 назв. — англ.

Full text

UDC 004.032.26

MODIFIED LEARNING ALGORITHM FOR GMDH-WAVELET-NEURO-FUZZY-NETWORK IN INFORMATION TECHNOLOGIES

O. Vynokurova
Control Systems Research Laboratory, Information Technologies Security Department, Kharkiv National University of Radio Electronics
vinokurova@kture.kharkov.ua

In the paper a modified learning algorithm for a GMDH-wavelet-neuro-fuzzy network in information technologies is proposed. The structure of the Wavelet-Neuro-Fuzzy-Network is optimized using the Group Method of Data Handling (GMDH), and the structure optimization method is described. Such hybrid systems can be used for solving many tasks, including signal identification and prediction, person authentication, information classification and clustering, and the development of neural-network-based pseudo-random generators in cryptography. Experimental investigations were carried out, and the resulting accuracy of data processing by the optimally constructed Wavelet-Neuro-Fuzzy-Network and by a network with a multilayer feedforward architecture is presented and compared.

Keywords: hybrid wavelet-neuro-fuzzy network, wavelet neuron, modified learning algorithm, GMDH algorithms, information technologies, identification, forecasting, authentication

Introduction

In recent years the problem of information processing, including identification, authentication, prediction, clustering, the development of pseudo-random generators for cryptography, and related tasks, has become increasingly important [1-3]. Various approaches have been applied to its solution. The most promising information processing methods are neural networks, especially fuzzy neural networks, and GMDH methods.
It was proved earlier that neural networks are universal approximators and possess remarkable properties such as parallel processing of information, the ability to work with incomplete and noisy input data, and the capability to learn the desired response (output). The GMDH [4-6], on the other hand, uses the principle of self-organization, which allows an optimal structure of the forecasting model to be constructed during the operation of the algorithm. It is therefore promising to combine the advantages of both approaches in order to construct an efficient forecasting model for different applications, including financial ones.

The goal of the present work is the synthesis of a learning algorithm on the turning points for the GMDH-Wavelet-Neuro-Fuzzy-Network based on a hybrid criterion. In [7] a GMDH-network with neo-fuzzy nodes using triangular and cubic-spline activation-membership functions was introduced. In this paper wavelet neurons with adaptive membership-activation functions, which have improved approximation and extrapolation properties in comparison with the neo-fuzzy neuron, are proposed as the nodes of the GMDH-network. A specialized learning algorithm based on a hybrid optimization criterion is also introduced.

1. Wavelet neuron architecture

Let us consider the wavelet neuron architecture [8] shown in Fig. 1. As can be seen, the wavelet neuron is quite close in construction to the neo-fuzzy neuron; however, instead of conventional tunable synaptic weights it contains wavelet synapses WS_i, i = 1, 2, ..., n. In this case the tunable parameters are not only the synaptic weights w_{ji}(k), but also the center, width, and shape parameters of the adaptive wavelet membership-activation functions \varphi_{ji}(x_i(k)).

When the vector signal

x(k) = (x_1(k), x_2(k), \ldots, x_n(k))^T = (x(k-1), x(k-2), \ldots, x(k-n))^T

is fed to the input of the wavelet neuron, its output is determined by both the tunable weights w_{ji}(k) and the wavelet membership-activation functions:

\hat{y}(k) = \sum_{i=1}^{n} f_i(x_i(k)) = \sum_{i=1}^{n} \sum_{j=1}^{h_i} w_{ji}(k-1)\,\varphi_{ji}(x_i(k)).    (1)

Fig. 1. Architecture of the wavelet neuron

In this case we use the one-dimensional wavelet membership-activation function proposed in [9] in the form

\varphi_{ji}(x_i(k)) = \bigl(1 - \alpha_{ji}(k)\, t_{ji}^2(k)\bigr) \exp\!\left(-\frac{t_{ji}^2(k)}{2}\right),    (2)

where t_{ji}(k) = (x_i(k) - c_{ji}(k))\,\sigma_{ji}^{-1}(k); c_{ji}(k) is the center parameter, \sigma_{ji}(k) is the width parameter, and \alpha_{ji}(k) is the shape parameter of the wavelet membership-activation function.
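To make the architecture of Section 1 concrete, the following sketch (not part of the original paper; the function names, array shapes, and the use of a common number h of activation functions per synapse are assumptions) computes the wavelet neuron output according to (1)-(2).

```python
import numpy as np

def wavelet_activation(x_i, c, sigma_inv, alpha):
    """Adaptive wavelet membership-activation functions (Eq. 2) of one wavelet synapse.

    x_i                 : scalar input x_i(k)
    c, sigma_inv, alpha : arrays of shape (h,), centers, inverse widths, shape parameters
    Returns phi_ji(x_i(k)) for j = 1..h.
    """
    t = (x_i - c) * sigma_inv                      # t_ji(k) = (x_i(k) - c_ji(k)) * sigma_ji^{-1}(k)
    return (1.0 - alpha * t**2) * np.exp(-t**2 / 2.0)

def wavelet_neuron_output(x, w, c, sigma_inv, alpha):
    """Wavelet neuron output (Eq. 1).

    x                      : array of shape (n,), input vector x(k)
    w, c, sigma_inv, alpha : arrays of shape (n, h), per-synapse tunable parameters
    """
    y_hat = 0.0
    for i in range(x.shape[0]):
        phi = wavelet_activation(x[i], c[i], sigma_inv[i], alpha[i])
        y_hat += w[i] @ phi                        # f_i(x_i(k)) = sum_j w_ji(k) * phi_ji(x_i(k))
    return y_hat

# Example with assumed sizes: n = 3 inputs, h = 4 activation functions per synapse
# rng = np.random.default_rng(0)
# x = rng.normal(size=3)
# w, c, s, a = (rng.normal(size=(3, 4)) for _ in range(4))
# print(wavelet_neuron_output(x, w, c, np.abs(s), a))
```

Storing the parameters as (n, h) arrays is only a convenience; the paper allows a different number h_i of activation functions per synapse.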
2. Wavelet neuron learning algorithm

The learning task is to estimate, at each iteration k, the synaptic weights w_{ji}(k), centers c_{ji}(k), widths \sigma_{ji}^{-1}(k), and shape parameters \alpha_{ji}(k) of the wavelet membership-activation functions so that some prescribed criterion is optimized.

2.1. Learning algorithm for the synaptic weights, width parameters, and shape parameters of the wavelet membership-activation functions

As the optimization criterion for the synaptic weights and for the width and shape parameters of the membership-activation functions of the wavelet neuron, the quadratic error function is used in the form

E(k) = \frac{1}{2}\bigl(y(k) - \hat{y}(k)\bigr)^2 = \frac{1}{2} e^2(k) = \frac{1}{2}\Bigl(y(k) - \sum_{i=1}^{n}\sum_{j=1}^{h_i} w_{ji}(k)\,\varphi_{ji}(x_i(k))\Bigr)^2    (3)

(here y(k) is the external training signal).

The derivatives of the error function with respect to the tuned parameters can be written in the form

\frac{\partial E(k)}{\partial w_{ji}(k)} = -e(k)\,\varphi_{ji}(x_i(k)) = -e(k)\bigl(1 - \alpha_{ji}(k)\, t_{ji}^2(k)\bigr)\exp\!\left(-\frac{t_{ji}^2(k)}{2}\right) = -e(k)\, J_{ji}^{w}(k),    (4)

\frac{\partial E(k)}{\partial \sigma_{ji}^{-1}(k)} = e(k)\, w_{ji}(k)\bigl(x_i(k) - c_{ji}(k)\bigr)\bigl((2\alpha_{ji}(k) + 1)\, t_{ji}(k) - \alpha_{ji}(k)\, t_{ji}^{3}(k)\bigr)\exp\!\left(-\frac{t_{ji}^2(k)}{2}\right) = -e(k)\, J_{ji}^{\sigma}(k),    (5)

\frac{\partial E(k)}{\partial \alpha_{ji}(k)} = e(k)\, w_{ji}(k)\, t_{ji}^{2}(k)\exp\!\left(-\frac{t_{ji}^2(k)}{2}\right) = -e(k)\, J_{ji}^{\alpha}(k).    (6)

Introducing the (h_i \times 1)-vectors of variables

\varphi_i(x_i(k)) = (\varphi_{1i}(x_i(k)), \ldots, \varphi_{h_i i}(x_i(k)))^T, \quad w_i(k) = (w_{1i}(k), \ldots, w_{h_i i}(k))^T, \quad \sigma_i^{-1}(k) = (\sigma_{1i}^{-1}(k), \ldots, \sigma_{h_i i}^{-1}(k))^T,
\alpha_i(k) = (\alpha_{1i}(k), \ldots, \alpha_{h_i i}(k))^T, \quad t_i(k) = (t_{1i}(k), \ldots, t_{h_i i}(k))^T, \quad J_i^{w}(k) = (J_{1i}^{w}(k), \ldots, J_{h_i i}^{w}(k))^T,
J_i^{\sigma}(k) = (J_{1i}^{\sigma}(k), \ldots, J_{h_i i}^{\sigma}(k))^T, \quad J_i^{\alpha}(k) = (J_{1i}^{\alpha}(k), \ldots, J_{h_i i}^{\alpha}(k))^T,

we can rewrite the gradient learning algorithm of the i-th wavelet synapse as

w_i(k+1) = w_i(k) + \eta^{w}(k)\, e(k)\, J_i^{w}(k),
\sigma_i^{-1}(k+1) = \sigma_i^{-1}(k) + \eta^{\sigma}(k)\, e(k)\, J_i^{\sigma}(k),    (7)
\alpha_i(k+1) = \alpha_i(k) + \eta^{\alpha}(k)\, e(k)\, J_i^{\alpha}(k).

The convergence rate of the learning algorithm can be increased by using Gauss-Newton type algorithms [10]. Using the matrix inversion lemma for the sum of two matrices and performing a sequence of straightforward transformations [11], we can write the on-line learning algorithm in the form

w_i(k+1) = w_i(k) + \frac{e(k)\, J_i^{w}(k)}{r^{w}(k)}, \qquad r^{w}(k+1) = \beta\, r^{w}(k) + \|J_i^{w}(k+1)\|^2,
\sigma_i^{-1}(k+1) = \sigma_i^{-1}(k) + \frac{e(k)\, J_i^{\sigma}(k)}{r^{\sigma}(k)}, \qquad r^{\sigma}(k+1) = \beta\, r^{\sigma}(k) + \|J_i^{\sigma}(k+1)\|^2,    (8)
\alpha_i(k+1) = \alpha_i(k) + \frac{e(k)\, J_i^{\alpha}(k)}{r^{\alpha}(k)}, \qquad r^{\alpha}(k+1) = \beta\, r^{\alpha}(k) + \|J_i^{\alpha}(k+1)\|^2,

where \beta is the forgetting factor (0 \le \beta \le 1).
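A minimal sketch of one step of the on-line procedure (8), under the same assumed array shapes as in the previous sketch. The J-vectors follow the reconstruction of (4)-(6) above, the denominators r and the forgetting factor beta are kept in a small dictionary (an implementation choice, not prescribed by the paper), and the centers are left to the turning-point rule of Section 2.2.

```python
import numpy as np

def wavelet_synapse_step(x_i, y, y_hat, w, c, sigma_inv, alpha, r, beta=0.97):
    """One on-line update (Eq. 8) of the i-th wavelet synapse (centers c are kept fixed here).

    w, c, sigma_inv, alpha : arrays of shape (h,), parameters of the i-th synapse
    r                      : dict with scalar denominators r['w'], r['sigma'], r['alpha']
    """
    e = y - y_hat                                       # learning error e(k)
    t = (x_i - c) * sigma_inv
    expo = np.exp(-t**2 / 2.0)
    phi = (1.0 - alpha * t**2) * expo                   # Eq. (2)
    dphi_dt = -((2.0 * alpha + 1.0) * t - alpha * t**3) * expo

    J_w = phi                                           # from Eq. (4)
    J_sigma = w * (x_i - c) * dphi_dt                   # from Eq. (5), parameterized by sigma^{-1}
    J_alpha = -w * t**2 * expo                          # from Eq. (6)

    w_new = w + e * J_w / r['w']
    sigma_inv_new = sigma_inv + e * J_sigma / r['sigma']
    alpha_new = alpha + e * J_alpha / r['alpha']

    # forgetting-factor update r(k+1) = beta*r(k) + ||J(k+1)||^2; the current J-vectors
    # are used here as an approximation of J(k+1)
    r['w'] = beta * r['w'] + J_w @ J_w
    r['sigma'] = beta * r['sigma'] + J_sigma @ J_sigma
    r['alpha'] = beta * r['alpha'] + J_alpha @ J_alpha
    return w_new, sigma_inv_new, alpha_new
```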
It should be noted, that inflection points of and g k ( )y k Індуктивне моделювання складних систем, випуск 5, 2013 134 Vynokurova O. ˆ( )y k is the intersection axis of ( )y kΔ and ˆ( )y kΔ corresponding (here Δ is symbol of first difference). In the prediction theory of economical time series in German literature such prediction quality criteria as Wegstreke [13, 14] is accepted. This criterion is the estimation of predicting model quality, when its value 1+ corresponds optimal predicting model, and when its value 1− corresponds incorrect prediction result. Such criterion has form 1 1 ( )( ( ) ( 1)) ( ) ( 1) N k N k signal k y k y k Wegstreke y k y k = = − − = − − ∑ ∑ (9) where ( )signal k is signum function ˆ1, ( ) ( 1) 0, ˆ( ) 1, ( ) ( 1) 0, 0 , if y k y k signal k if y k y k otherwise − − >⎧ ⎪= − − −⎨ ⎪ ⎩ < ( )y k is the actual value of process; is prediction value; is length of learning sample. ˆ( )y k N Due to the fact, that the sign function is not differentiable, it can be replaced by a hyperbolic tangent function (see fig. 3) with large steepness parameter ˆ1 exp( 2 ( ( ) ( 1 1 )))ˆ( ) tanh ( ( ) ( 1)) , ˆ1 exp( 2 ( ( ) ( ))) y k y ksignal k y k y k y k y k γγ γ − − − − ≈ − − = + − − − where γ is steepness parameter, 1γ >> . Fig. 3. Signum-function (solid, thick line) and hyperbolic tangent function ( 1γ > - dotted line, 1γ = - solid line, 1γ < - dash-dotted line) Індуктивне моделювання складних систем, випуск 5, 2013 135 Modified Learning Algorithm for GMDH-Wavelet-Neuro-Fuzzy-Network in Information Technologies Using this hypothesis, we can introduce the learning criterion, which will take into account the shifting effect of prediction in the form ˆ(tanh ( ( ) ( 1)))( ( ) ( 1))( ) ( ) ( 1) c y k y k y k y kE k y k y k γ − − − = − − − , (10) and in this case the derivative with respect to the parameter has form ( )jiс k 2 ( ) ( ( ) ( 1)) ( ) ( ( )) ˆ(1 tanh ( ( ( ) ( 1)))) ( ) . ( ) c ji ji i ji ji E k sign y k y k c k x k y k y k w k c k γ ϕ γ ∂ = − − × ∂ ∂ × − − − ∂ (11) Using expression (4) we can rewrite learning algorithm on the turning points of wavelet membership-activation function centers based on gradient procedure in the form 2 ( 1) ( ) ( ( ) ( 1)) (1 tanh ( ( ( ) ( 1)))) ( ( )) ji ji c c ji i c k c k sign y k y k y k y k x k η γ γ ϕ + = − − − × × − − − (12) where 2 1 3 ( ) ( ( )) ( ) ( )((2 ( ) 1) ( ) ( ) ( ))exp 2 jic ji i ji ji ji ji ji ji t k x k w k k k t k k t kϕ σ α α− ⎛ ⎞ = + − ⎜⎜ ⎝ ⎠ − ⎟⎟ , ( )jiw k is synaptic weight of wavelet neuron; cη is learning rate parameter, 0 1cη< < . Such learning algorithm has enough high speed and computational simplicity, and main its advantage is possibility of minimizing shift between actual signal and forecast, that it is important for the solving many practical tasks. 3. Wavelet-Neuro-Fuzzy-Network and its architecture optimization using the Group Method of Data Handling Wavelet-Neuro-Fuzzy-Network is a multilayer feedforward architecture that consists of wavelet neurons. 3-layers Wavelet-Neuro-Fuzzy-Network with n inputs and m outputs is shown of Fig. 4. Given architecture is completely coincides with the structure of the 3-layer perceptron, except that the wavelet neurons are used here as an elementary nodes instead of usual neurons. Індуктивне моделювання складних систем, випуск 5, 2013 136 Vynokurova O. 1x 2x nx [1] 1WNFN [1] 2WNFN 1 [1] nWNFN [2] 1WNFN [2] 2WNFN 2 [2] nWNFN [3] 1WNFN [3] mWNFN 1ŷ ˆ my Fig. 4. 
3. Wavelet-Neuro-Fuzzy-Network and its architecture optimization using the Group Method of Data Handling

The Wavelet-Neuro-Fuzzy-Network is a multilayer feedforward architecture that consists of wavelet neurons. A three-layer Wavelet-Neuro-Fuzzy-Network with n inputs and m outputs is shown in Fig. 4. This architecture completely coincides with the structure of the three-layer perceptron, except that wavelet neurons are used here as the elementary nodes instead of conventional neurons.

Fig. 4. GMDH-Wavelet-Neuro-Fuzzy-Network

If we use wavelet neurons that have only two inputs, the GMDH can be applied to synthesize a Wavelet-Neuro-Fuzzy-Network with an optimal architecture. The main idea of the GMDH algorithm is the successive synthesis of neuron layers until the external criterion begins to increase.

Algorithm description:

1) Form pairs from the wavelet neuron outputs of the current layer (at the first iteration the set of input signals is used). Each pair is fed to the corresponding wavelet neuron.

2) Use the learning subsample to adjust the synaptic weight coefficients of each wavelet neuron.

3) Use the test subsample to calculate the value of the external (regularity) criterion for each wavelet neuron:

\varepsilon_{p}^{[s]} = \frac{1}{N_{test}} \sum_{k=1}^{N_{test}} \bigl(\hat{y}_{p}^{[s]}(k) - y(k)\bigr)^{2},    (13)

where N_{test} is the size of the test subsample, s is the layer number, p is the neuron number in the current layer, p = 1, ..., n_s, and \hat{y}_{p}^{[s]}(k) is the response of the p-th neuron of the s-th layer to the k-th input vector.

4) Find the minimal value of the external criterion over all wavelet neurons of the current layer,

\varepsilon^{[s]} = \min_{p} \varepsilon_{p}^{[s]},    (14)

and check the condition

\varepsilon^{[s]} > \varepsilon^{[s-1]},    (15)

where \varepsilon^{[s]} and \varepsilon^{[s-1]} are the criterion values for the best neurons of the s-th and (s-1)-th layers, respectively. If condition (15) holds, return to the previous layer and find the best neuron that provides the minimal value of criterion (13). Otherwise, select the F best neurons according to criterion (13) and go to step 1 to construct the next layer of neurons.

5) Determine the final structure of the network. Moving backward from the best neuron of the last selected layer along its input connections and passing successively through all the layers, preserve in the final structure only those neurons that are used by the next layer.

After the GMDH procedure finishes, the final optimal structure of the Wavelet-Neuro-Fuzzy-Network is synthesized. As can readily be seen, we obtain not only an optimal structure but also a trained neural network that is ready to process new data. One of the most important advantages of using the GMDH for the synthesis of the Wavelet-Neuro-Fuzzy-Network architecture is the ability to use simple but fast learning procedures for adjusting the wavelet neuron weights, because the network is trained layer by layer.
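The layer-wise GMDH synthesis (steps 1-5) could be organized roughly as follows. This is only a sketch: train_wavelet_neuron and predict are hypothetical placeholders for the two-input wavelet neuron and the learning procedures of Section 2, F is the number of best neurons kept per layer, and the backward pruning of step 5 is only indicated, not implemented.

```python
from itertools import combinations
import numpy as np

def gmdh_build(X_train, y_train, X_test, y_test, train_wavelet_neuron, predict, F=4):
    """Layer-wise synthesis of the GMDH-Wavelet-Neuro-Fuzzy-Network (Sec. 3, steps 1-4).

    X_* : arrays of shape (N, n), candidate input signals; y_* : target signals.
    train_wavelet_neuron(x_pair, y) -> neuron parameters (placeholder for Sec. 2 learning);
    predict(neuron, x_pair)         -> neuron outputs.
    """
    layers, best_eps = [], np.inf
    cand_train, cand_test = X_train, X_test
    while True:
        neurons = []
        for a, b in combinations(range(cand_train.shape[1]), 2):        # step 1: all input pairs
            neuron = train_wavelet_neuron(cand_train[:, [a, b]], y_train)  # step 2: learning subsample
            y_hat = predict(neuron, cand_test[:, [a, b]])
            eps = np.mean((y_hat - y_test) ** 2)                        # step 3: regularity criterion (13)
            neurons.append((eps, (a, b), neuron))
        neurons.sort(key=lambda item: item[0])
        layer_eps = neurons[0][0]                                       # step 4: best neuron of the layer
        if layer_eps > best_eps:                                        # criterion starts to grow (15): stop
            break
        best_eps = layer_eps
        selected = neurons[:F]                                          # keep the F best neurons
        layers.append(selected)
        # outputs of the selected neurons become the inputs of the next layer
        cand_train = np.column_stack(
            [predict(nrn, cand_train[:, list(pair)]) for _, pair, nrn in selected])
        cand_test = np.column_stack(
            [predict(nrn, cand_test[:, list(pair)]) for _, pair, nrn in selected])
    # step 5 (tracing back from the best neuron and pruning unused nodes) is left out of this sketch
    return layers
```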
Conclusion

In the paper a modified learning algorithm for the GMDH-wavelet-neuro-fuzzy network in information technologies has been proposed. Using Group Method of Data Handling algorithms, an optimal architecture of the Wavelet-Neuro-Fuzzy-Network can be synthesized. Theoretical justification and experimental results prove the efficiency of the developed approach to the self-organization of the Wavelet-Neuro-Fuzzy-Network architecture.

References

1. Desai V.V., Patil R., Deshmukh V.B., Rao D.H. Pseudo random number generator using time delay neural network // World Journal of Science and Technology. – 2012. – 2(10). – P. 165-169.
2. Du K.-L., Swamy M.N.S. Neural Networks and Statistical Learning. – London: Springer-Verlag, 2014. – 824 p.
3. Lian S., Sun J., Wang Z. Secure hash function based on neural network // Neurocomputing. – 2006. – 69(16-18). – P. 2346-2350.
4. Ivakhnenko A.G. Inductive sorting-out GMDH algorithms with polynomial complexity for active neurons of neural network // Neural Networks. – 1999. – 2. – P. 1169-1173.
5. Ivakhnenko A.G. Heuristic self-organization in problems of engineering cybernetics // Automatica. – 1970. – 6(2). – P. 207-219.
6. Ivakhnenko A.G. Self-organization of neuro net with active neurons for effects of nuclear test explosions forecasting // System Analysis Modeling Simulation. – 1995. – 20. – P. 107-116.
7. Bodyanskiy Ye., Zaychenko Yu., Pavlikovskaya E., Samarina M., Viktorov Ye. The neo-fuzzy neural network structure optimization using the GMDH for the solving forecasting and classification problems // Proc. Int. Workshop on Inductive Modeling, Krynica, Poland. – 2009. – P. 77-89.
8. Bodyanskiy Ye., Lamonova N., Pliss I., Vynokurova O. An adaptive learning algorithm for a wavelet neural network // Expert Systems. – 2005. – 22(5). – P. 235-240.
9. Bodyanskiy Ye., Vynokurova O. Compartmental adaptive wavelon and its learning algorithm // Control Systems and Computers. – 2009. – 1(219). – P. 47-53 (in Russian).
10. Shepherd A.J. Second-Order Methods for Neural Networks. – London: Springer-Verlag, 1997. – 145 p.
11. Bodyanskiy Ye., Kolodyazhniy V., Stephan A. An adaptive learning algorithm for a neuro-fuzzy network // Computational Intelligence and Applications / Ed. by B. Reusch. – Berlin-Heidelberg-New York: Springer, 2001. – P. 68-75.
12. Lekutai G., van Landingham H.F. Self-tuning control of nonlinear systems using neural network adaptive frame wavelets // Proc. IEEE Int. Conf. on Systems, Man and Cybernetics. – 1997. – 2. – P. 1017-1022.
13. Baumann M. Nutzung neuronaler Netze zur Prognose von Aktienkursen // Report Nr. 2/96, TU Ilmenau. – 1996. – 113 p.
14. Fueser K. Neuronale Netze in der Finanzwirtschaft. – Wiesbaden: Gabler, 1995. – 437 p.