Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data

Ontology-based knowledge representation models in the context of big data are one way to reduce complexity for data processing across methods of semantic description. This research paper aims at providing an overview of the methods and techniques for efficient management of the ontology-based models...

Bibliographic Details
Date: 2022
Main Author: Novitsky, A.V.
Format: Article
Language: English
Published: Інститут програмних систем НАН України, 2022
Subjects:
Online Access: https://pp.isofts.kiev.ua/index.php/ojs1/article/view/472
Journal Title: Problems in programming

Institution

Problems in programming
id pp_isofts_kiev_ua-article-472
record_format ojs
resource_txt_mv ppisoftskievua/97/ccb5ed34ce3f81b3572ac4cf826de897.pdf
spelling pp_isofts_kiev_ua-article-4722023-01-19T07:12:43Z Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data Методи та технології управління моделями представлення знань, базованих на онтологіях в контексті великих даних Novitsky, A.V. ontology-based model; ontologies; big data; reasoners; representation system; ontologies; shapes constraint language; information validation; ontology-based knowledge representation models UDC 004.6 модель на основі онтології; онтології; великі дані; вивід; система представлення; онтології; мова обмежень форм; перевірка інформації; моделі представлення знань на основі онтологій УДК 004.6 Ontology-based knowledge representation models in the context of big data are one way to reduce complexity for data processing across methods of semantic description. This research paper aims at providing an overview of the methods and techniques for efficient management of the ontology-based models that improve big data systems. For this case, the shapes constraint language (SHACL) for information validation was reviewed as the key method. The knowledge representation systems and reasoners are studied and reviewed in the paper as well. It describes approaches based on ontologies in the context of big data. The proper management of ontology-based knowledge representation models through offered methods and techniques brings improved data integration, big data quality, and business process integration. Problems in programming 2021; 4: 19-25 Онтологічні моделі представлення знань у контексті великих даних є одним із способів зменшити складність обробки даних за допомогою семантичних методів. У статті розглянуто методи і засоби ефективного управління моделями на основі онтологій, які покращують системи великих даних. Для цього випадку мова вираження обмежень форм (SHACL) для перевірки інформації була розглянута як ключовий метод. У статті також досліджуються та розглядаються представлення знань та методи виводу. Належне управління моделями представлення знань на основі онтології за допомогою запропонованих методів і засобів забезпечує покращену інтеграцію даних, якість великих даних та інтеграцію бізнес-процесів. Problems in programming 2021; 4: 19-25  Інститут програмних систем НАН України 2022-02-07 Article Article application/pdf https://pp.isofts.kiev.ua/index.php/ojs1/article/view/472 10.15407/pp2021.04.019 PROBLEMS IN PROGRAMMING; No 4 (2021); 19-25 ПРОБЛЕМЫ ПРОГРАММИРОВАНИЯ; No 4 (2021); 19-25 ПРОБЛЕМИ ПРОГРАМУВАННЯ; No 4 (2021); 19-25 1727-4907 10.15407/pp2021.04 en https://pp.isofts.kiev.ua/index.php/ojs1/article/view/472/476 Copyright (c) 2022 PROBLEMS IN PROGRAMMING
institution Problems in programming
baseUrl_str https://pp.isofts.kiev.ua/index.php/ojs1/oai
datestamp_date 2023-01-19T07:12:43Z
collection OJS
language English
topic ontology-based model
ontologies
big data
reasoners
representation system
ontologies
shapes constraint language
information validation
ontology-based knowledge representation models
UDC 004.6
spellingShingle ontology-based model
ontologies
big data
reasoners
representation system
ontologies
shapes constraint language
information validation
ontology-based knowledge representation models
UDC 004.6
Novitsky, A.V.
Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data
topic_facet ontology-based model
ontologies
big data
reasoners
representation system
ontologies
shapes constraint language
information validation
ontology-based knowledge representation models
UDC 004.6
модель на основі онтології
онтології
великі дані
вивід
система представлення
онтології
мова обмежень форм
перевірка інформації
моделі представлення знань на основі онтологій
УДК 004.6
format Article
author Novitsky, A.V.
author_facet Novitsky, A.V.
author_sort Novitsky, A.V.
title Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data
title_short Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data
title_full Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data
title_fullStr Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data
title_full_unstemmed Methods and techniques for management of ontology-based knowledge representation models in the context of BIG data
title_sort methods and techniques for management of ontology-based knowledge representation models in the context of big data
title_alt Методи та технології управління моделями представлення знань, базованих на онтологіях в контексті великих даних
description Ontology-based knowledge representation models in the context of big data are one way to reduce complexity for data processing across methods of semantic description. This research paper aims at providing an overview of the methods and techniques for efficient management of the ontology-based models that improve big data systems. For this case, the shapes constraint language (SHACL) for information validation was reviewed as the key method. The knowledge representation systems and reasoners are studied and reviewed in the paper as well. It describes approaches based on ontologies in the context of big data. The proper management of ontology-based knowledge representation models through offered methods and techniques brings improved data integration, big data quality, and business process integration. Problems in programming 2021; 4: 19-25
publisher Інститут програмних систем НАН України
publishDate 2022
url https://pp.isofts.kiev.ua/index.php/ojs1/article/view/472
work_keys_str_mv AT novitskyav methodsandtechniquesformanagementofontologybasedknowledgerepresentationmodelsinthecontextofbigdata
AT novitskyav metoditatehnologííupravlânnâmodelâmipredstavlennâznanʹbazovanihnaontologíâhvkontekstívelikihdanih
first_indexed 2025-07-17T09:35:35Z
last_indexed 2025-07-17T09:35:35Z
_version_ 1838499826879168512
fulltext
УДК 004.6
http://doi.org/10.15407/pp2021.04.019

O. Novytskyi

METHODS AND TECHNIQUES FOR MANAGEMENT OF ONTOLOGY-BASED KNOWLEDGE REPRESENTATION MODELS IN THE CONTEXT OF BIG DATA

Ontology-based knowledge representation models in the context of big data are one way to reduce complexity for data processing across methods of semantic description. This research paper aims to provide an overview of the methods and techniques for efficient management of the ontology-based models that improve big data systems. For this case, the shapes constraint language (SHACL) for information validation was reviewed as the key method. The knowledge representation systems and reasoners are studied and reviewed in the paper as well. The author describes approaches based on ontologies in the context of big data. The proper management of ontology-based knowledge representation models through the offered methods and techniques brings improved data integration, big data quality, and business process integration.

Key words: ontology-based model, ontologies, big data, reasoners, representation system, shapes constraint language, information validation, ontology-based knowledge representation models.

© O. Novytskyi, 2021
ISSN 1727-4907. Проблеми програмування. 2021. № 4

Introduction

Big data means complex data sets that cannot be processed adequately by traditional data applications. Big data management is handled by special-purpose resource planning systems called enterprise information systems. These systems represent business processes adequately and drive overall cost-effectiveness [1]. Modern enterprises are focused on an enterprise-wide centralized information system to validate and integrate large amounts of complex data. To capture and represent complex and big data, ontology-based knowledge representation models are used.

One of the factors that impact big data processing is the complexity of understanding the data. Semantic technologies allow data to be recognized automatically. This article explains the approach of using ontologies for big data to ensure a common understanding of information. Ontology-based knowledge representation models make domain assumptions explicit [2]. Querying information in the context of big data becomes accessible for large enterprises, and ontologies bring detailed and meaningful distinctions between relationships, classes, and properties. The paper is devoted to ontology-based modeling and its management in semantic graph databases. Big data quality is improved with the help of ontology-based knowledge representation models and the reasoners that enable consistency and satisfiability checks [3]. The research paper also reviews an alternative to using ontologies to model data: SHACL (shapes constraint language) is overviewed to demonstrate the benefits of this method for information validation in the triplestore and for validating RDF graphs against a set of constraints. The overview of the OWL reasoners and RDF graph storage systems is intended as a guide for big data players (large enterprises, or any structure in the stage of developing a large-scale centralized database) on how to manage ontology-based models and improve data quality with the help of automated reasoning over the information in a semantic graph database.

OWL reasoners for ontologies

The research paper reviews the two main reasoners with a wide range of optimizations that benefit big data improvements. They contain updated algorithms and tableaux algorithms that are native to ontology-based knowledge representation models.

FaCT++ is one of the newest reasoners, designed to implement tableaux algorithms and updated heuristic optimization techniques. The table of characteristics of the FaCT++ reasoner is given below.
Description: A new highly optimized reasoner with tableaux-based SROIQ algorithms
License: LGPL v2
Semantics: OWL DL Classification, OWL EL Classification, OWL DL Consistency, OWL EL Consistency, OWL DL Realization, OWL EL Realization
Table 1. FaCT++ Characteristics. Source: ORE

The FaCT++ reasoner implementation starts with the preprocessing stage. It is applied to the knowledge base, which can be transformed according to the internal representation requirements. FaCT++ then performs classification. With the help of the applied optimizations, the FaCT++ reasoner reduces the quantity of subsumption tests to be performed [4].

The main application of the FaCT++ optimizations is to transform concepts into SNF. The simplified normal form lets users implement negation, conjunction, universal restrictions, and at-most restrictions. The main FaCT++ features for big data optimization are:
(a) Absorption – suitable for rewriting optimization. There are concept and role absorption techniques to take into consideration. Concept absorption is responsible for the elimination of GCIs (general concept inclusions) via concept definition axioms. Role absorption eliminates GCIs in the concept-free mode.
(b) TCE (told cycle elimination) – the technique for text optimization. This cycle is often eliminated together with definitional cycles. The user can undertake TCE and definitional cycles with the help of axiom transformations.
(c) Synonym replacement – this FaCT++ technique aims at extending simplification properties. Synonym replacement improves clash detection at an early stage. The knowledge base is transformed in the context of synonym elimination with the help of axioms.

The FaCT++ reasoner is used for satisfiability checking optimizations. New ordering heuristics are available for the implementation of new optimization methods. There is a special-purpose To-Do list, and the user can force entry assortment with the help of the FaCT++ To-Do algorithm. It is worth noting that the reasoner provides the backjumping optimization; the tree label matters when the dependency set of information items is formed. Boolean constant propagation (BCP) optimization, available in the FaCT++ reasoner, allows users to implement constant propagation [5].
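As a small worked illustration of concept absorption (my example, not taken from the paper): a general concept inclusion whose left-hand side contains an atomic concept $A$ can be folded into $A$'s primitive definition,

    A \sqcap C \sqsubseteq D \quad\Longrightarrow\quad A \sqsubseteq D \sqcup \lnot C

Both axioms are logically equivalent, but after absorption the tableau algorithm adds the disjunction $D \sqcup \lnot C$ only to nodes labelled with $A$, instead of adding $\lnot(A \sqcap C) \sqcup D$ to every node, which is what cuts down the non-deterministic expansion.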
HermiT is a reasoner for ontology-based knowledge representation models. It is used for the identification of subsumption relationships (between classes and other specifications). The reasoner is public and available to users without restrictions. A notable feature of the HermiT reasoner is its new versions with updated reasoning algorithms [5], [7]. The main HermiT characteristics are given in the table below [8].

Description: A conformant reasoner for ontology-based knowledge representation models; HermiT uses direct semantics and is based on hyper-tableaux algorithms
License: LGPL 3.0
Semantics: OWL DL Classification, OWL EL Classification, OWL DL Consistency, OWL EL Consistency, OWL DL Realization, OWL EL Realization
Table 2. HermiT Characteristics

The HermiT reasoner allows users to classify ontology-based knowledge representation models faster. Manual classification often takes hours; the reasoner makes it possible to classify even a big data knowledge base with complex information in minutes.

The HermiT reasoner uses direct semantics for its optimization processes and for the hyper-tableaux algorithm implementation. The latest version of the reasoner is HermiT 1.3.8. Besides the main function of DL-safe rule handling, the new version of the reasoner allows big data players to add new rules directly to the ontology-based models [6].

The set of optimization techniques of the HermiT reasoner is similar to that of FaCT++ described above. A significant feature of the HermiT reasoner is its high-level compliance with DL-safety rules. DL-safety rules will be considered incomplete if:
a) the knowledge base contains property chains in the rule bodies;
b) the knowledge base includes transitivity axioms in the rule bodies;
c) complex properties are used in the rule bodies of the ontology-based model.

The HermiT reasoner is one of the newest reasoners recommended for ontology-based knowledge representation model management in the context of big data. The use of direct semantics and the hyper-tableaux algorithm approach improves the quality of data and simplifies business processes related to ontologies and semantics.

RacerPro is the improved version of the former Racer knowledge representation system. Like the above-described reasoners and other programs suitable for ontology-based knowledge representation model management, RacerPro is used for optimized tableaux algorithm implementation. A very expressive description logic underlies this knowledge representation system.

Description: Racer is a knowledge representation system that implements a highly optimized tableau calculus for a very expressive description logic. It provides reasoning for T-boxes and A-boxes as well
License: –
Semantics: OWL DL Classification, OWL EL Classification, OWL DL Consistency, OWL EL Consistency, OWL DL Realization, OWL EL Realization
Table 3. RacerPro Characteristics

The RacerPro license is BSD 3-clause. This system is relevant for big data projects because it is a stand-alone knowledge representation system for solving the main reasoning problems [9]. The reasoning procedure takes place in the streaming model, which is suitable for complex data processing. Both T-boxes and A-boxes often include issues to solve when it comes to knowledge representation; RacerPro solves these reasoning problems with the help of standard tableaux algorithms and unique inference services (e.g. logical abduction). The architecture of the latest version of the RacerPro system is presented in Figure 1.

Figure 1. RacerPro architecture (the Racer engine with plugins; RDF, RDF-S, OWL Lite and OWL DL support; nRQL, SWRL and SPARQL interfaces; AllegroGraph, OWL API, OWLlink and JRacer/LRacer connectors over TCP/IP).

An additional benefit of the RacerPro reasoning and knowledge representation system is its query language, nRQL. Using the new Racer Query Language provides supplementary assistance when it comes to ontology-based model management:
● attribute values of different individuals;
● improved properties for string attributes;
● negation-as-failure support.
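In practice, reasoners like the ones reviewed above (FaCT++, HermiT, RacerPro) are driven programmatically, typically through the OWL API. As a minimal, hedged sketch of the consistency-checking and classification workflow they provide, written in Python with the owlready2 package (which bundles HermiT; the ontology file name is hypothetical), the calls look roughly like this:

from owlready2 import get_ontology, sync_reasoner, default_world

# Hypothetical local ontology; substitute your own knowledge base.
onto = get_ontology("file:///data/enterprise.owl").load()

with onto:
    # Runs the bundled HermiT reasoner (a Java runtime must be available).
    sync_reasoner()

# Classes whose definitions turned out to be unsatisfiable.
print(list(default_world.inconsistent_classes()))

# The inferred class hierarchy is now materialized on the ontology objects.
for cls in onto.classes():
    print(cls, "is_a", cls.is_a)

The sketch only shows the shape of the workflow; the reasoners discussed in the paper run the same kind of check against much larger knowledge bases.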
Reasoning over an ontology is, as a rule, a complex task, and in real tasks the facts to reason over have to be stored in RDF. Therefore, the next part gives a short review of semantic reasoners with RDF storage.

Snorocket. This is a special-purpose algorithm based on healthcare terminology classifiers. Snorocket is suitable for big data projects related to the clinical, medical, healthcare, and science directions [10]. Snorocket is not a multifunctional solution for ontology-based knowledge representation model management; it is suitable for working with ontologies related to medical data only.

Snorocket is available to users in the extension format. The classifiers of the algorithm allow healthcare representatives to manage semantic data related to medical terminology. Big data projects based on healthcare or medical content, imagery, and other information can benefit from using Snorocket. Nevertheless, this extension, with its implementation of the unique Dresden algorithm, is not suitable for any other knowledge base. The limited ways of application make Snorocket the last of the top systems overviewed in the research paper.

Methods for RDF graph validation. Shapes Constraint Language (SHACL)

RDF is a main part of the Semantic Web. Its simple data model provides powerful expressiveness which can be applied to represent information in any scope. Practical Semantic Web applications require a technology to describe and validate RDF data [11]. One such technology for RDF is SHACL [12], [13], which was developed to model restrictions on data in the form of constraints.

The shapes constraint language (SHACL) is considered an alternative to the traditional ontologies used for data modeling. SHACL is used for RDF graph validation: there is a set of constraints that are applicable to the validation process. SHACL includes shapes that specify metadata according to its resource, and the big data knowledge base is compliant with the shapes constraint language. A special-purpose shape specifies the resource in the context of big data as well; this resource can be the principle of data use, the reason for data use, or the frequency of data use.

The SHACL data validation process is applicable to both unavailable and available data in the triplestore. The shapes constraint language conditions, called shapes, are expressed in the RDF graph format. The main purpose of SHACL data validation is to check information against a range of conditions. Those pieces of data that meet the shape constraints can be viewed as a description of data graphs. It is worth noting that SHACL-generated descriptions based on shape constraint validation of graphs can be used outside of the validation process [12].

This makes SHACL the key method for ontology-based knowledge representation model management. Ready-made descriptions produced by the shape constraint language validation algorithms can be implemented in the context of big data:
● for code generation;
● for data generation.
These descriptions are suitable for code building, which is one more technique beyond the validation process itself. A separate aspect to take into account is the relationship between SHACL and RDFS inferencing: the shapes constraint language includes property entailment to identify the inference specifications. To protect the knowledge base items and ensure a smooth validation process, it is recommended to use only verified RDF resources in SHACL RDF-based technologies [13].
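To make the validation workflow concrete, here is a minimal, hedged sketch using the Python packages rdflib and pyshacl (the file names are hypothetical; the paper does not prescribe a particular implementation):

from rdflib import Graph
from pyshacl import validate

# Hypothetical inputs: the RDF data to be checked and the SHACL shapes graph.
data_graph = Graph().parse("data.ttl", format="turtle")
shapes_graph = Graph().parse("shapes.ttl", format="turtle")

conforms, report_graph, report_text = validate(
    data_graph,
    shacl_graph=shapes_graph,
    inference="rdfs",   # optionally apply RDFS entailment before validating
)
print("Conforms:", conforms)
print(report_text)      # human-readable validation report

The returned report graph is itself RDF, which matches the point above that SHACL output can be reused outside the validation step, for example to drive code or data generation.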
SHACL validation is recommended for big data because, in comparison with the standard ontologies and semantic techniques, it is an efficient way to avoid ontology limitations (a limited set of property constructs). RDF resource validation is suitable for ontology-based knowledge representation models in the context of big data thanks to its shape-generated failure determination and data improvement properties.

Reasoners with built-in RDF store features

The range of special-purpose graph databases that store triples is called RDF databases. It is worth noting that triples in RDF databases are treated as data points represented in the SPO (subject-predicate-object) relationship. All data items are stored in the same format – the triple format. The database receives information and stores it in triple form. The RDF database is suitable for ontology-based knowledge representation management in the context of big data because all the complex information is well organized with the help of triple sets.

One more reason to use an RDF database for ontology-based model management when it comes to big data is the convenience of displaying information as graphs provided by this type of database. To carve the graphs out of the triple database, a query language is used. The functionality and flexibility of the RDF database benefit enterprise-centric knowledge bases and big data projects.

Not every database can be included in the category of triple databases; there is a range of requirements a digital product must meet to be called an RDF database. The main features required of a candidate product are:
a) sufficient data storage is provided;
b) data is recorded as triples;
c) users are allowed to retrieve the data with the help of a query language.
These are the features of an average RDF store and the criteria used below to determine the most efficient triple databases for ontology-based knowledge representation model management.

HyLAR is a special-purpose reasoner for ontology-based knowledge representation models that contains RDF-based libraries. These libraries provide a wide range of functionality for ontology-based model management. The HyLAR reasoner can be considered a supplementary reasoning engine for big data. Its JavaScript RDF libraries (rdfstore.js, a SPARQL parser, and RDF-ext) are used as the triple databases [14].

The HyLAR reasoner is available in three versions for implementation in the knowledge base:
a) an NPM module;
b) a server-based solution;
c) a browser version.

The HyLAR reasoner with its RDF libraries supports business database rules, which is one more reason to use a HyLAR-based database for big data projects. The database processing generated by the HyLAR reasoner and its RDF-based libraries is presented in Fig 2 [14].

Fig 2. HyLAR architecture (a reasoning engine with greedy and incremental rules, an OWL parser and a SPARQL parser; an OWL file and queries go in, while RDF triples and query results are exchanged with the knowledge base).

HyLAR is used as the reasoning engine combined with the OWL and SPARQL parsers. The reasoner produces results in the format of triples, as is well seen in Fig 2. The knowledge base with ready-made available triples can be applied together with the HyLAR reasoning engine for the conversion and creation of enterprise-centralized big data projects with qualitative, checked information.
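Before looking at Jena and RDFox, the triple (subject-predicate-object) model and query-language retrieval that all of these stores build on can be illustrated with a short Python sketch based on rdflib; the namespace and facts are hypothetical:

from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")   # hypothetical namespace
g = Graph()

# Every fact is stored as a subject-predicate-object triple.
g.add((EX.order42, RDF.type, EX.Order))
g.add((EX.order42, EX.placedBy, EX.customer7))
g.add((EX.customer7, EX.name, Literal("ACME Ltd.")))

# Retrieval goes through a query language (SPARQL here).
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?order ?name WHERE {
        ?order a ex:Order ;
               ex:placedBy ?customer .
        ?customer ex:name ?name .
    }
""")
for order, name in results:
    print(order, name)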
Apache Jena. The Jena open-source Java framework includes a special-purpose RDF API. Jena stores information only in the format of RDF triples. The collection of RDF triples forms the general database and is included in the Jena data structure called Model. Jena is optimal for ontology-based knowledge representation model management because the data structure model of this framework easily represents RDF graphs and the relations between them. The database is well structured and easy to navigate, which is required for big and complex data [15]. The way the relationships go in one-direction mode through the triple coding is exemplified in Fig 3 [16].

Fig 3. Jena architecture (Fuseki and application code over HTTP; the RDF, Ontology and SPARQL APIs; parsers for RDF/XML, Turtle, N-Triples and RDFa; an inference API with no reasoner, a built-in reasoner or an external reasoner; a store API covering in-memory, SDB, TDB, custom SQL database and tuple store back ends).

Another benefit of Jena in the context of big data is the availability of both RDF and Ontology APIs. The distinct strengths of the framework with its RDF-based triple collection are the opportunity to build direct relations between graphs (nodes) in the structure, rich-in-functions APIs that provide sufficient management tools for the ontology-based knowledge representation models, and big data orientation through Jena's in-memory structures in combination with intended methods of complex data simplification.

RDFox is a semantic reasoning engine with the functionality of an RDF triple store. It is one of the core systems for big data with its unique conception of shared-memory parallel reasoning [17]. RDFox is notable for memory-economical properties that are suitable for enterprise-centric knowledge bases and big data projects: about 1.5 billion triples can be stored in 50 GB of the RDFox RDF store. The following table presents the main characteristics of the RDFox reasoning engine.

Description: The latest version of the former RDFox semantic reasoning engine, launched in 2021. It contains a triple store that is suitable for knowledge representation purposes
Additional features: Rule reasoner, OWL reasoner, RDFS reasoner
Semantics: RDF, OWL, SPARQL
Table 4. RDFox Characteristics

The ontology-based knowledge representation model management in the context of big data can be undertaken through the RDFox semantic reasoning engine. The key benefit of the system is its triple store with memory-efficient storage. Additionally, big data projects can benefit from RDFox named graphs, Datalog extensions, and incremental update and aggregation.
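Stores of this kind are usually accessed over HTTP: Jena's Fuseki server (Fig 3) and comparable engines expose SPARQL endpoints. Here is a hedged sketch of querying such an endpoint from Python with SPARQLWrapper (the endpoint URL is hypothetical, using Fuseki's default port):

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical local Fuseki dataset; adjust to your own deployment.
sparql = SPARQLWrapper("http://localhost:3030/dataset/query")
sparql.setQuery("""
    SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])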
Pros & cons are presented under the description of the rea- soners and reasoning engines. According to the undertaken research, the ontology-based knowledge representation models in the con- text of big data can be easily managed by the reasoning engines, reasoner extensions, query languages, hyper-tableaux algorithms, SHACL implementation, RDF database us- age. The future prospects of digital transfor- mation and new technique and method de- velopment with a focus on big data are real. The ontology-based knowledge represen- tation models are successfully managed by the digital solutions now. The huge progress in the big data-driven direction is predicted over the coming decade. Reference [1] B. Mouad, «An Evaluation and Compara- tive study of massive RDF Data manage- ment approaches based on Big Data Tech- nologies,» International Journal of Emerg- ing Trends in Engineering Research, т. 7, pp. 48-53, 2019. [2] Y. Sure-Vetter, S. Staab та R. Studer, «Meth- odology for Development and Employment of Ontology Based Knowledge Management Ap- plications.,» ACM SIGMOD Record 3, т. 4, № 31, pp. 18-23, 2002. [3] P. Haase та L. Stojanovic, «Consistent Evolu- tion of OWL Ontologies,» The Semantic Web: Research and Applications, pp. 182-197, 2005. [4] T. Dmitry , «Incremental and Persistent Rea- soning in FaCT++,» в ORE, 2014. [5] T. Dmitry та H. Ian , «FaCT++ description logic reasoner: System description,» в Interna- tional joint conference on automated reason- ing, Berlin, 2006. [6] R. Shearer, B. Motik та I. Horrocks, «HermiT: A Highly-Effi cient OWL Reasoner,» Owled, т. 432, p. 91, 2008. 25 Моделі та засоби систем баз даних і знань [7] Data and knowlege group. University of Ox- ford., «HermiT OWL Reasoner,» [online]. Available: http://www.hermit-reasoner.com/. [Date: 05 2021]. [8] D. Michel , G. Birte , G. Rafael , H. Matthew, J.-R. Ermesto, N. Matentzoglu та P. Bijan , «ORE Live Competition,» 05 2021. [online]. Available: http://dl.kr.org/ore2015/vip.cs.man. ac.uk_8008/reasoners.html. [9] V. Haarslev, «The RacerPro knowledge rep- resentation and reasoning system.,» Semantic Web, т. 3, № 3, pp. 267-277, 2012. [10] M. J. Lawley та C. Bousquet, «Fast clas- sifi cation in Protégé: Snorocket as an OWL 2 EL reasoner.,» в 6th Australasian Ontol- ogy Workshop (IAOA’10). Conferences in Research and Practice in Information Tech- nology, 2010. [11] G. Jose Emilio Labra, «Validating and De- scribing Linked Data Portals using RDF Shape Expressions.,» в LDQ@ SEMANTICS, 2014. [12] J. Corman, J. L. Reutter та O. Savković, «Se- mantics and validation of recursive SHACL,» в International Semantic Web Conference, Cham, 2018. [13] W3C, «Shapes Constraint Language (SHA- CL),» 20 July 2017. [online]. Available: https:// www.w3.org/TR/shacl/. [Date: 05 2021]. [14] M. Terdjimi, M. Lionel та M. Mrissa, «Hylar: Hybrid location-agnostic reasoning.,» в ESWC Developers Workshop 2015, 2015. [15] A. Ameen, K. Ur Rahman Khan та R. B. Padmaja, «Reasoning in semantic web using Jena.,» Computer Engineering and Intelligent Systems, т. 5, № 4, pp. 39-47, 2014. [16] Apache Software Foundation, «Jena archi- tecture overview,» [online]. Available: https:// jena.apache.org/about_jena/architecture.html. [Date: 05 2021]. [17] Oxford Semantic Technologies, «RDFox,» [online]. Available: https://www.oxfordseman- tic.tech/product. [Date: 05 2021]. Received: 27.10.2021 About author: Oleksandr Novytskyi, PhD, Researcher. Number of scientifi c publications in Ukrainian journals – 13. https://orcid.org/0000-0002-9955-7882. 
Affiliation:
Institute of Software Systems of the NAS of Ukraine (Інститут програмних систем НАН України),
40 Akademika Glushkova Avenue.
Phone: 526 5139
E-mail: alex.googl@gmail.com