TY - JOUR
T1 - Logic-Oriented Autoencoders and Granular Logic Autoencoders
T2 - Developing Interpretable Data Representation
AU - Al-Hmouz, Rami
AU - Pedrycz, Witold
AU - Balamash, Abdullah
AU - Morfeq, Ali
PY - 2022/3/1
Y1 - 2022/3/1
N2 - The plethora of ways of data representation and their applications to system modeling is inherently associated with dimensionality reduction. In a nutshell, the result of dimensionality reduction should support efficient ways of constructing ensuing models (classifiers and predictors) as well as an interpretation of the data themselves. Furthermore, there should be a suitable measure quantifying the quality of data positioned in the reduced space. We advocate that what makes the reduced data interpretable goes hand in hand with revealing a logic fabric of the data, suppressing redundancy, and finally arriving at a logic description of data. The anticipation is that the reduced data can be described in the form of logic expressions formed over the original highly dimensional data. Evidently, having these above-stated points in mind, the aim of this article is twofold: 1) to develop a logic-oriented data representation with the aid of autoencoders; and 2) to quantify the quality of results of this dimensionality reduction by incorporating a facet of information granularity. In other words, we argue that the result of dimensionality reduction gives rise to information granules whose level of granularity associates with the quality of processing completed by the autoencoder. In light of the recent surge of architectures of deep learning, the study is focused on the construction and analysis of logic-oriented autoencoders. We propose a two-level architecture composed of the logic-oriented processing units organized in two layers of logic processing units. As data representation provided by the autoencoder is not ideal, we augment the original architecture by granular parameters, which give rise to granular logic-oriented autoencoders. A suite of experiments is also reported.
KW - Autoencoder
KW - deep learning
KW - granular computing
KW - information granules
KW - logic autoencoder
UR - http://www.scopus.com/inward/record.url?scp=85097924201&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097924201&partnerID=8YFLogxK
U2 - 10.1109/tfuzz.2020.3043659
DO - 10.1109/tfuzz.2020.3043659
M3 - Article
AN - SCOPUS:85097924201
SN - 1063-6706
VL - 30
SP - 869
EP - 877
JO - IEEE Transactions on Fuzzy Systems
JF - IEEE Transactions on Fuzzy Systems
IS - 3
ER -