Реферати (Abstracts)
Date: 2018
Format: Article
Language: Ukrainian
Published: Інститут проблем реєстрації інформації НАН України, 2018
Publication: Реєстрація, зберігання і обробка даних
ISSN: 1560-9189
Online access: http://dspace.nbuv.gov.ua/handle/123456789/168774
Repository: Digital Library of Periodicals of National Academy of Sciences of Ukraine
Cite as: Реферати // Реєстрація, зберігання і обробка даних. — 2018. — Т. 20, № 3. — С. 140–149. — укр.
UDC 004.085
Petrov V.V., Kryuchyn A.A., Belyak E.V. and Shykhovets O.V. Long-term data storage media.
Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 3–12. — Ukr.
The results of the analysis of media development technologies for long-term data storage are presented. It is shown that the use of hard disk drives and solid-state non-volatile media only partially solves the problem of long-term information storage. Particular attention is paid to modern technologies for storing data on optical media. It is shown that the latest developments of optical media are aimed at a significant increase of optical media capacity by means of nanostructured recording materials. It is also shown that maintaining the security and integrity of big data requires stability of the readout signal contrast and of the data error rates over a long, fixed period of time. Therefore, the capacity of a single disk for long-term data memory should be high enough to store the whole data set, so as to avoid variation of the baseline across many disks. It is proposed to develop optical storage based on a nanoplasmonic hybrid glass matrix. The nanoplasmonic hybrid glass composite is to be formed by a sol-gel process that incorporates gold nanorods into the hybrid glass composite. Incorporation of the inorganic phase increases the local Young's modulus of the host matrix around the nanorods, which helps to preserve the shape of the nanorods by suppressing the unwanted shape degradation caused by environmental thermal perturbation and thus increases the lifespan of the data storage. The proposed method can be compared with the spin-coating method, paving the way to low-cost, large-scale mass production of optical disks. Fig.: 4. Refs: 27 titles.
Key words: optical media, long-term data storage, holographic memory, nanostructured recording materials, data migration.
UDC 621.382; 621.383
Sukhovii N.O., Lyahova N.N., Masol I.V., Osinkiy V.I. A study of applications of nanotexturized
sapphire as a template for MOCVD-heteroepitaxy of III-nitrides. Data Rec., Storage & Processing. 2018.
Vol. 20. N 3. P. 13–20. — Ukr.
Some applications of nano-textured sapphire templates with MOCVD III-nitride heterostructures are considered, namely their suitability for UV photodiodes and for energy storage layers. Owing to the strong bond between nitrogen and group III atoms, such structures have high thermal, chemical and radiation resistance, which makes them well suited for space, biological and military integrated circuits where traditional silicon does not fit.
Thermodynamic parameters (temperature, pressure) and precursors have been experimentally determined for the MOCVD process of creating nano-templates with a nano-pore radius below 10 nm by treating the sapphire surface in a stream of ammonia under certain conditions (MOCVD EPIQUIP installation, horizontal reactor, temperature 1050 °C, pressure 20 mbar, for 20 minutes) for the formation of low-defect heteroepitaxial layers of III-nitrides. In particular, this provides a low threading dislocation density (~5×10⁶ cm⁻²) for p-GaN layers, which is a less costly process than ELOG (epitaxial lateral overgrowth). The density of dislocations, which are the centers of non-radiative recombination for p-GaN layers, was determined on the basis of the diffusion length of non-equilibrium carriers by the method of electron-beam-induced currents. It has been shown that UV GaN Schottky photodiodes on such nano-templates have a steeper long-wavelength edge (375–475 nm) of normalized photosensitivity compared with photodiodes without them.
The suitability of such nano-templates for energy accumulation layers in the MOCVD process is also considered, in particular for the production of supercapacitors, with the formation of low-defect boron nitride layers in which graphene can be encapsulated, and for growing nanocarbides and consolidated AlCN or BCN phases on the surface of such nano-templates in a stream of trimethyl aluminum or triethyl boron, respectively. Fig.: 5. Refs: 28 titles.
Key words: III-nitrides, template, textured sapphire, MOCVD, density of dislocations, low defect density, photodiode.
UDC 004.932.2
Tsybulska Y.O. Using fast algorithms for calculation of the correlation and convolution to prepare
reference images. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 21–28. — Ukr.
Several methods for calculating the cross-correlation function and the convolution of two images are considered in order to solve the problem of reference image preparation for the correlation-extremal guidance system of controlled aerial vehicles. During the flight, the correlation-extremal guidance system compares reference images of the sighting surface, prepared beforehand, with current images from the onboard sensors. To form optimal reference images, the cross-correlation function (or its analogues) of the initial images and preliminary variants of reference images has to be calculated repeatedly, which takes a considerable amount of time.
In this work, a formula for the modified 2-D Hartley transform with separated variables is obtained, which allows constructing an analytical expression for the recursive base operation (an analogue of the 2-D fast Fourier transform algorithm).
The computational complexity of the cross-correlation function and convolution is estimated for the algorithms of the 2-D discrete Fourier transform (DFT), 2-D fast Fourier transform (FFT), 2-D discrete Hartley transform (DHT) and 2-D fast Hartley transform (FHT).
The obtained analytical expressions for the number of operations show that using the modified 2-D FHT with separated variables reduces the number of operations for the cross-correlation function of two images by one third. Moreover, the DHT and FHT are calculated in the field of real numbers (in contrast to the DFT and FFT), so the amount of RAM needed to store and operate on the real and imaginary components does not double.
The conducted studies show that, since the initial images for reference image preparation are usually quite large, using the modified 2-D FHT for cross-correlation calculation reduces the time needed to estimate quality and select the optimal reference image for controlled aerial vehicle flight correction. Tabl.: 1. Refs: 4 titles.
Key words: reference image, quality evaluation, cross-correlation function, convolution, Fourier transform, Hartley transform.
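A minimal sketch of the two facts this abstract relies on, assuming NumPy and circular correlation (this is not the authors' modified 2-D FHT with separated variables): the 2-D Hartley transform is real-valued and can be obtained from the FFT as Re F - Im F, and the cross-correlation of two images can be computed in the transform domain.

```python
import numpy as np

def dht2(x):
    # 2-D discrete Hartley transform via the identity H = Re(F) - Im(F);
    # the result is real, so no imaginary components need to be stored.
    F = np.fft.fft2(x)
    return F.real - F.imag

def cross_correlation(a, b):
    # Circular cross-correlation of two equal-size real images computed in the
    # Fourier domain; a stand-in for the Hartley-domain formulation in the paper.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

img = np.random.rand(8, 8)
shifted = np.roll(img, (2, 3), axis=(0, 1))
peak = np.unravel_index(np.argmax(cross_correlation(shifted, img)), img.shape)
print(peak)  # (2, 3): the shift at which the two images align best
```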
UDC 004.93
Subbotin S.A., Korniienko O.V. and Drokin I.V. A prediction of the frequency of non-periodic signals based on convolutional neural networks. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 29–36. — Ukr.
The problem of creating mathematical support for the construction of forecast models based on convolutional neural networks is solved in the work. A method is proposed for using convolutional neural networks to predict the frequency of non-periodic signals. To determine the frequency of the signal, it was divided into parts, after which a fast Fourier transform was applied to each part. The spectrograms obtained after the transform are used as inputs for training the neural network. The output value depends on the presence or absence of a frequency above the critical value on the predicted interval. The first layer of the neural network uses a three-dimensional convolution, and the subsequent layers use one-dimensional convolutions. Between the convolutional layers there are subsampling layers, used to accelerate learning and prevent overfitting. The neural network contains two output neurons which determine the presence of a frequency that exceeds the critical value. The practical task of predicting the frequency of vibration of aircraft engines during their tests is solved. Different neural network models were constructed, trained and tested on data collected from vibration sensors during aircraft engine testing. To increase the amount of data, augmentation is used: several copies of the signal with changed frequencies are added. The constructed models differ in the amount of data used and in the forecasting time. The test results of all the models have been compared. The maximum forecasting time that can be achieved with the proposed method is determined. This time is enough for the pilot to react and change the flight mode or to land the helicopter. Tabl.: 2. Fig.: 5. Refs: 12 titles.
Key words: forecasting, signal, training, neural network, convolution, error, gradient.
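A minimal sketch of the input-preparation step described above (splitting the signal into parts and applying a fast Fourier transform to each part), assuming NumPy; the frame length, hop size and the critical-frequency labeling rule below are hypothetical, and the convolutional network itself is not shown.

```python
import numpy as np

def segment_spectrogram(signal, frame_len=256, hop=128):
    # Split a 1-D signal into overlapping parts and take the magnitude FFT of
    # each part; the resulting (frames x bins) array serves as network input.
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def has_critical_frequency(spectrum, frame_len, sample_rate, f_crit, power_min):
    # Hypothetical labeling rule: a frame counts as "critical" if any bin above
    # f_crit carries a spectral magnitude greater than power_min.
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    return bool((spectrum[:, freqs > f_crit] > power_min).any())

sig = np.sin(2 * np.pi * 50 * np.arange(4096) / 1000.0)  # 50 Hz test tone at 1 kHz
spec = segment_spectrogram(sig)
print(spec.shape)                                         # (31, 129)
print(has_critical_frequency(spec, 256, 1000.0, f_crit=100.0, power_min=10.0))
```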
UDC 004.942
Kalinovsky Ya.A., Boyarinova Yu.E., Khitsko Ya.V. and Sukalo A.S. Application of isomorphic
hypercomplex numerical systems for synthesis of fast linear convolutional algorithms. Data Rec., Storage
& Processing. 2018. Vol. 20. N 3. P. 37–48. — Rus.
The linear convolution of discrete signals is the most common computational task in the field of digital signal processing. The complexity of calculating the linear convolution of arrays of length n is O(n²) and grows rapidly with n, so methods of «fast» calculation are used: the fast Fourier transform, or a transition to a ring of polynomials.
The method of increasing the efficiency of multiplying hypercomplex numbers in order to construct fast linear convolution algorithms is considered.
The components of the convolution of discrete signals are sums of pairwise products of the components of the signal and the convolution kernel, and the product of two hypercomplex numbers is the sum of pairwise products of the components of these numbers; however, some additional transformations still need to be performed. To reduce the number of real multiplications, an isomorphic hypercomplex number system (HNS) can be used.
For every canonical HNS there exists an isomorphic HNS in whose multiplication table the diagonal contains either cells of the multiplication table of the field of complex numbers or some basic element, while the remaining cells of the multiplication table are zeros.
Various methods of obtaining isomorphic pairs of HNS have been proposed: on the basis of systems of double and orthogonal numbers, on the basis of systems of quadriplex and orthogonal numbers, and for systems of dimension other than 2n. Pairs of isomorphic hypercomplex number systems have been synthesized, as well as expressions for the isomorphism operators.
Implementing nonlinear operations on hypercomplex numbers by a transition from a strongly filled HNS to an isomorphic weakly filled HNS, performing the operations there, and a reverse transition significantly reduces the number of necessary real operations and, especially, multiplications.
All this indicates the advisability of applying these algorithms to digital signal processing problems. Refs: 12 titles.
Key words: hypercomplex number system, linear convolution, isomorphism, multiplication, complex numbers, quaternions.
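The idea of trading real multiplications for additions by moving to a more convenient representation can be illustrated on its simplest instance, ordinary complex numbers, where the product (a + bi)(c + di) needs only three real multiplications instead of four. This is a minimal sketch of that familiar analogue, not of the paper's HNS constructions.

```python
def complex_mul_3m(a, b, c, d):
    # (a + bi)(c + di) using three real multiplications instead of four.
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2  # (real part, imaginary part)

# Sanity check against the straightforward four-multiplication formula.
a, b, c, d = 3.0, -2.0, 1.5, 4.0
assert complex_mul_3m(a, b, c, d) == (a * c - b * d, a * d + b * c)
```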
UDC 519.816
Kadenko S.V. and Vysotskyi V.O. A method for public opinion-based formal description of weakly
structured subject domains. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 49–66. — Ukr.
It has been shown that for territorial community-level decision making it is appropriate to utilize expert-data-based methods, as this subject domain is a weakly structured one. At the same time, the opinion of the target territorial community representatives should be taken into consideration alongside expert data during decision making. A method for formal description of weakly structured community-level problems, taking into consideration both expert information and the opinion of respondents from among community representatives, has been suggested. It represents a hybrid approach incorporating elements of both traditional expert-data-based methods and social surveying. The main goal is set by a decision-maker or research organizer. It is then decomposed by experts into sub-goals or factors that are crucial for its achievement, and these factors and their weights are estimated by respondents who are ordinary community members. The method includes the following conceptual steps: hierarchical decomposition of the problem, direct estimation of the importance of factors that influence the problem, estimation of lowest-level «non-decomposable» factors by respondents on a Likert agreement scale, and rating of the factors based on respondents' estimates through linear convolution (weighted summing). These ratings provide the basis for defining top-priority activities that should be performed in order to solve the problem, and for subsequent distribution of limited resources among these activities. Experimental results, obtained in the process of actual research of public space quality, illustrate the method's application and confirm its high efficiency.
The strongest point of the suggested method is the combination of efficiency and ease of use. In contrast to traditional expert-data-based methods, it does not require any coaching sessions to be held with the respondents prior to estimation. The method is intended for decision-making support at the level of territorial communities (urban, rural, raion, others) in the spheres directly related to the interests of community members. Target users of the method include local self-government bodies, public and volunteer organizations, activists, and other interested parties. Tabl.: 2. Fig.: 6. Refs: 44 titles.
Key words: decision-making support, weakly structured subject domain, hierarchic decomposition of a problem, expert estimate, Likert scale.
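A minimal sketch of the rating step described above: lowest-level factors scored by respondents on a Likert scale are rated by linear convolution (weighted summing). The factor names, weights and scores are hypothetical.

```python
factor_weights = {"lighting": 0.5, "greenery": 0.3, "benches": 0.2}
respondent_scores = {                 # Likert scores (1..5) given by respondents
    "lighting": [5, 4, 5],
    "greenery": [3, 4, 2],
    "benches":  [2, 3, 3],
}

# Rating of each factor: its assigned weight times the mean Likert score.
ratings = {f: factor_weights[f] * sum(s) / len(s)
           for f, s in respondent_scores.items()}
for factor, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(factor, round(rating, 2))   # top-priority factors are printed first
```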
UDC 004.44:002.513.5
Dmytro Lande, Zijiang Yang, Shiwei Zhu, Jianping Guo and Moji Wei. Automatic text summarization of Chinese legal information. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 67–82. — Rus.
A method of automatic text summarization of legal information provided in Chinese has been developed. The model of the abstract and the procedure of its formation are considered. Two approaches are proposed: to determine the level of importance of sentences, it is suggested to determine the weight values of individual characters, rather than words, in the text of documents and abstracts; in addition, a model of documents as networks of sentences is offered for detecting the most important sentences by the parameters of this network.
A new hybrid method of automatic text summarization is introduced, covering statistical and marker methods and taking into account the location of sentences in the text of the document. The offered model of the abstract reflects the information needs of customers working with legal information.
The approach of determining weight values of individual characters, rather than segmented words, in the text of documents and abstracts is implemented. This technique avoids the computationally expensive word segmentation procedure needed by other meaningful methods of Chinese language processing.
In summarization, a new idea was realized: the weight values of sentences are determined on the basis of the weights of individual characters rather than words, as is standard. Therefore, the quality of summarization is checked not only on the basis of the weights of individual characters, but also taking into account the weights of the whole words included in the documents and abstracts, to confirm that the offered approach is also satisfactory by the criteria of traditional summarization systems.
Two estimates of summary quality that do not require the participation of experts, a cosine measure and the Jensen-Shannon divergence, are applied. Summarization based on the offered network model of the document was the best by the criteria of the cosine measure and the Jensen-Shannon distance for abstracts whose volume exceeds 2 sentences. The offered approach, with small changes, can be used for texts on any subject, in particular for scientific, technical and news information. Fig.: 7. Refs: 20 titles.
Key words: automatic text summarization, legal information, Chinese language, cosine measure, Jensen-Shannon divergence.
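A minimal sketch of the two expert-free quality measures named above, computed here over character-frequency distributions of a document and its summary; the toy strings and the smoothing constant are placeholders.

```python
import math
from collections import Counter

def char_distribution(text, alphabet):
    # Frequency distribution of individual characters over a fixed alphabet.
    counts = Counter(text)
    total = sum(counts[ch] for ch in alphabet) or 1
    return [counts[ch] / total for ch in alphabet]

def cosine_measure(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence (natural log); eps smooths zero probabilities.
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda x, y: sum(a * math.log((a + eps) / (b + eps)) for a, b in zip(x, y))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

document = "法律信息自动文摘方法研究"   # placeholder document text
summary = "法律信息文摘"               # placeholder summary
alphabet = sorted(set(document) | set(summary))
p, q = char_distribution(document, alphabet), char_distribution(summary, alphabet)
print(cosine_measure(p, q), js_divergence(p, q))
```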
UDC 004.32
Matov O.Ya. Optimization of the provision of computing resources with adaptive cloud infrastructure. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 83–90. — Ukr.
The cloud computing (CC) infrastructure is considered as an adaptation object, and the process of adaptation of cloud computing as an optimization one. The general statement of the task of adapting the disciplines of providing computing resources to users of the IA has been given. It is proposed to use a dynamic adaptive mixed (with absolute and relative priorities) discipline of providing services with computing resources to users, in which dynamic adaptation to the changing states and conditions of the CC system and of the environment set by consumers of computing resources is possible. The direction of solving the problem of optimizing the dynamic adaptive mixed discipline is given. A well-known optimization functional is proposed, which is based on the assumption that the results of using the computing resources by the user (solving user tasks) depreciate in proportion to the time spent in the queue and in service in the CC system. Other functionals, with time constraints, are also possible. This is relevant for modern global real-time information and analytical systems using cloud computing technologies and can be critical for the limited computational resources of CC. For example, the goal of adaptation can be to meet constraints on the efficiency index, given in the form of equations or inequalities. In any case, this formulation of the adaptation task necessitates the implementation in the IA of one or several mixed disciplines of providing services with computing resources to users. It is indicated that the optimization problem is solved by an iterative method using appropriate analytical models of the functioning of the CC. Refs: 4 titles.
Key words: cloud computing, discipline of providing computing resources, adaptation and optimization of service disciplines, efficiency of adaptation, mixed service discipline, mathematical model.
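A hedged sketch of a cost functional of the kind described above, in which the value of each solved task depreciates in proportion to its total sojourn time (waiting in the queue plus service) in the CC system; the per-class depreciation rates and the sample numbers are hypothetical.

```python
def depreciation_cost(tasks):
    # tasks: iterable of (class_weight, wait_time, service_time); the cost of a
    # task grows in proportion to its total time spent in the CC system.
    return sum(c * (wait + service) for c, wait, service in tasks)

example = [(1.0, 0.5, 0.25), (2.0, 0.25, 0.5)]  # two tasks of different priority classes
print(depreciation_cost(example))               # 1.0*0.75 + 2.0*0.75 = 2.25
```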
UDC 004.5
Senchenko V.R., Boychenko O.A. and Boychenko A.V. An investigation of methods and
technologies to integrate an ontological model and relational data. Data Rec., Storage & Processing.
2018. Vol. 20. N 3. P. 91–101. — Ukr.
A study of methods and technological solutions for Data Mapping concerning the integration of ontological and relational data models has been carried out.
The main objective of the investigation is to accelerate and reduce the cost of constructing ontological models for systems that process distributed data in a heterogeneous environment.
At present, the theoretical basis for the integration of various data models (Data Mapping) is the direction that has been defined as ontology-based data integration. The theoretical and practical development of this area is the Ontology-Based Data Access (OBDA) approach, which integrates ontological models presented in the form of RDF graphs with relational data.
A methodology for applying Data Mapping to distributed data processing systems has been developed. As an example, the process of data model consolidation for the Ukrainian State Budget monitoring system is given. The database of the Ukrainian State Budget monitoring system consists of many relational tables, which contain reports of the State Treasury on the implementation of the revenue and expenditure parts of both state and municipal budgets, as well as the regional section. In addition, the database contains data on lending and arrears of budget institutions.
The ontology model connects to data sources through a declarative mapping specification that includes classes and their properties.
The given application converts SPARQL queries into SQL queries to the relational database. The generated SQL query may be executed by the Oracle 11g database driver, which returns the result as a data snapshot. Then, to improve the performance of SPARQL queries, the semantic query optimization method should be used.
An indicative application of the methodology is shown on the example of constructing an ontological model of the Ukrainian State Budget monitoring system in Protégé 5. The results of executing SPARQL queries against the relational data of the budget process under the Oracle 11g Database are demonstrated. Directions of semantic optimization for SPARQL queries that improve the quality of the obtained data are shown.
The proposed methodology allows: integrating data presented by different models (ontological and relational) for knowledge acquisition; overcoming constraints when databases based on outdated data models are merged with modern ontology-oriented systems; eliminating data redundancy in designing knowledge-based systems. Fig.: 9. Refs: 18 titles.
Key words: ontological model, relational model, OWL, SQL, SPARQL, data management, financial control, state budget, Protégé 5.
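A hedged illustration of the ontology-to-relational mapping idea, using rdflib rather than the OBDA toolchain of the paper; the class and property names (ex:BudgetItem, ex:region, ex:amount) and the sample rows are hypothetical.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/budget#")
rows = [("item1", "Kyiv", 120.5), ("item2", "Lviv", 80.0)]  # mock relational rows

g = Graph()
for item_id, region, amount in rows:
    s = EX[item_id]
    g.add((s, RDF.type, EX.BudgetItem))     # each row becomes an individual
    g.add((s, EX.region, Literal(region)))  # columns become data properties
    g.add((s, EX.amount, Literal(amount)))

# A SPARQL query over the mapped data, of the kind OBDA would rewrite into SQL.
q = """
PREFIX ex: <http://example.org/budget#>
SELECT ?item ?amount WHERE {
  ?item a ex:BudgetItem ;
        ex:region "Kyiv" ;
        ex:amount ?amount .
}
"""
for item, amount in g.query(q):
    print(item, amount)
```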
UDC 621.384.3
Sokolovskyi V.S., Karpinets V.V., Yaremchuk Yu.E., Prisyagniy D.P. and Pryimak A.V. Securing
virtual machines with AMD Zen CPU architecture and instruction set. Data Rec., Storage & Processing.
2018. Vol. 20. N 3. P. 102–111. — Ukr.
The development of a virtualization environment security subsystem is demonstrated, using the hardware-accelerated cryptography API of the AMD Zen CPU and its instruction set for security tasks, including but not limited to protection against unauthorized memory access, data leaks, hypervisor breach, external attacks and malware spread via the Internet. The method in question utilizes real-time memory encryption and decryption, with memory bandwidth and computing power sufficient for seamless hypervisor and server operation, virtual machine live migration and secure export. It demonstrates the capabilities of the on-board ARM Cortex A5 cryptography processor core for the mentioned tasks, as well as providing secure asymmetric key exchange that is invisible and inaccessible to any software besides the internal Trusted Platform Module and its inner DRAM memory controller, to guarantee a high level of virtual environment security and sufficient resistance to most active attacks with minimum computation overhead, suitable for most real-life virtualization-based workload scenarios. The example subsystem specifically targets the Microsoft Windows 10 operating system; however, software support for different operating systems (including UNIX-based ones) may already be provided by appropriate vendors, including enterprise-ready solutions such as Cisco, Dell, HP, etc. Fig.: 1. Refs: 11 titles.
Key words: information security, hypervisor, cryptographic processor, memory encryption, AMD Secure Encrypted Virtualization technology, AMD Zen CPU architecture.
UDC 519.8.816
Azarova A.O., Roik A.M., Poplavskiy A.V., Pavlovskiy P.V. and Tkachuk A.P. A method of formalizing the decision-making process based on the theory of threshold elements. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 112–120. — Ukr.
The conceptual principles and the theoretical substantiation of the peculiarities of applying the mathematical apparatus of threshold elements to decision making in complex classification problems are formulated. This makes it possible to significantly simplify the procedure for formalizing a DSS for objects with quantitative evaluation parameters.
A method of formalizing the decision-making process based on the theory of threshold elements for objects with quantitative evaluation parameters is suggested which, unlike existing approaches, in particular the method of linear weighted sums, allows substantiating the weights of the evaluation parameters. This makes it possible to reach a clear and unambiguous decision even when the combinations of evaluation parameters are checked incompletely.
The process of constructing the logical choice function is formalized and an algorithm for the transition from the logical choice function to the threshold function is proposed. Construction of the logical choice function requires the determination of its minimal disjunctive normal form, which led to the need to find the number of simple implicants that this form of the function should contain. This number was determined by the authors' proof of the corresponding theorem.
A corresponding approach to making classification decisions is formulated for the case when functionally meaningless elements, which could theoretically be ignored, cannot be taken into account, and the threshold choice functions do not ensure the attribution of an element to any class.
An algorithm for decision making in the process of applying the method described above is produced. The peculiarity of this algorithm is that it is clearly described and quite simply implemented with the use of modern computer technology. Refs: 12 titles.
Key words: decision making, decision support system, formalization of the process, mathematical method and threshold elements.
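A minimal sketch of the threshold element underlying the method: the element outputs 1 exactly when the weighted sum of its inputs reaches the threshold. The weights and threshold below are hypothetical, not taken from the paper.

```python
def threshold_element(inputs, weights, threshold):
    # Fires (returns 1) iff the weighted sum of the inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Example: a choice function that is satisfied by the first parameter alone
# or by both of the other two parameters together.
print(threshold_element([1, 0, 0], weights=[2, 1, 1], threshold=2))  # 1
print(threshold_element([0, 1, 0], weights=[2, 1, 1], threshold=2))  # 0
print(threshold_element([0, 1, 1], weights=[2, 1, 1], threshold=2))  # 1
```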
UDC 004.942.519.87
Dodonov E.O., Dodonov O.G. and Kuzmychov A.I. Modeling and visualization of generalized
minimum cost flows problems. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 121–130. — Ukr.
Modeling minimum cost flows is, in essence, research on models of communications of any type or principle of operation, natural or artificial, through which network flows are transmitted or must be transmitted in such a way that the total costs of moving energy, funds or resources are the least. So the core of the mathematical and computing instruments of network optimization is the model of the fundamental minimum cost flow (MCF) problem in its various versions, statements and applications. Usually the implementation of these models requires serious effort and costs associated with the use of special software and language tools.
Some examples of solving generalized MCF problems with the accessible technology of spreadsheet optimization modeling are given.
These examples prove the possibility of studying complicated statements of the minimum cost flow problem, in particular in the K-product version, taking into account specific costs (resources, time) in the nodes. This opportunity opens the way for the development of specialized software for transport and logistics services for optimal management solutions to meet customer orders. Fig.: 9. Refs: 8 titles.
Key words: flows in minimal cost networks, one- and multicommodity flows, minimum cost flow
problem, minimal cost network flows, optimization modeling with spreadsheets.
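A hedged illustration of the basic MCF problem, using networkx instead of the spreadsheet solver discussed in the paper; the node names, demands, capacities and unit costs below are made up for the example.

```python
import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-4)   # source supplies 4 units
G.add_node("t", demand=4)    # sink demands 4 units
G.add_edge("s", "a", capacity=3, weight=2)
G.add_edge("s", "b", capacity=3, weight=5)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=3, weight=1)

flow = nx.min_cost_flow(G)           # e.g. {'s': {'a': 3, 'b': 1}, ...}
print(flow)
print(nx.cost_of_flow(G, flow))      # total cost: 3*(2+1) + 1*(5+1) = 15
```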
UDC 519.816
Roik P.D. Consistency index evaluation for estimates of experts, taking into account their competence for group decision making. Data Rec., Storage & Processing. 2018. Vol. 20. N 3. P. 131–138. — Ukr.
The issue of assessing the consistency of expert estimates in group polling, taking into account experts' competence, has been considered. The development of an appropriate method should proceed from a range of particular basic postulates. Namely, it is taken into consideration that expert estimation always assumes the existence of a «true» estimate corresponding to the average value of an array of individual expert estimates. Such an array can be presented on a continuous or a discrete numerical scale limited on both sides. The maximum level of consistency (1.0) can be achieved only if all experts provide the same estimate. The consistency index should not depend on estimate shifts along the axis. For the same set of estimates, increasing the scale leads to an increased consistency index, and vice versa. If a set of estimates has a higher level of consistency than another on a certain scale, it also has greater consistency on any other scale. In case of linear changes (simultaneous proportional increase or decrease) in the scale parameters and the values of all estimates, the consistency index remains unchanged; so the index should be scalable. The higher the level of an expert's relative competence, the higher this expert's impact on the aggregate level of consistency. The article considers two approaches: in the first, the analysis is reduced to calculating the consistency index without taking experts' competence into consideration; in the second, the index is calculated as the normalized value of the sum of distances between expert estimates over all possible estimate pairs, each multiplied by the product of the respective experts' weight coefficients. Simulation modeling has been carried out, and an evaluation has been offered for a threshold consistency value above which aggregation of expert estimates becomes possible. The proposed method has been practically implemented and tested within a system of distributed collection and processing of expert data for decision support systems. Fig.: 2. Refs: 16 titles.
Key words: decision support systems, expert estimates, expert estimates consistency index, spectral approach, consistency threshold, expert competence.
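A hedged sketch of a competence-weighted consistency index of the kind described above: the normalized sum of pairwise distances between estimates, each weighted by the product of the two experts' competence coefficients. The normalization used here (by scale length and by the sum of pair weights) is an assumption for illustration, not the exact formula from the paper.

```python
from itertools import combinations

def consistency_index(estimates, weights, scale_min, scale_max):
    pairs = list(combinations(range(len(estimates)), 2))
    pair_w = [weights[i] * weights[j] for i, j in pairs]
    dists = [abs(estimates[i] - estimates[j]) / (scale_max - scale_min)
             for i, j in pairs]
    disagreement = sum(w * d for w, d in zip(pair_w, dists)) / sum(pair_w)
    return 1.0 - disagreement   # equals 1.0 when all experts give the same estimate

print(consistency_index([7, 7, 7], [0.5, 0.3, 0.2], 1, 10))   # 1.0
print(consistency_index([2, 9, 5], [0.5, 0.3, 0.2], 1, 10))
```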