The multitude of data representations and their applications to system modeling is inherently associated with dimensionality reduction. In a nutshell, the result of dimensionality reduction should support efficient construction of ensuing models (classifiers and predictors) as well as an interpretation of the data themselves. Furthermore, there should be a suitable measure quantifying the quality of data positioned in the reduced space. We advocate that what makes the reduced data interpretable goes hand in hand with revealing the logic fabric of the data, suppressing redundancy, and finally arriving at a logic description of the data. The expectation is that the reduced data can be described in the form of logic expressions formed over the original highly dimensional data. With these points in mind, the aim of this article is twofold: 1) to develop a logic-oriented data representation with the aid of autoencoders; and 2) to quantify the quality of the results of this dimensionality reduction by incorporating a facet of information granularity. In other words, we argue that the result of dimensionality reduction gives rise to information granules whose level of granularity is associated with the quality of processing completed by the autoencoder. In light of the recent surge of deep learning architectures, the study focuses on the construction and analysis of logic-oriented autoencoders. We propose a two-level architecture composed of logic-oriented processing units organized in two layers. As the data representation delivered by the autoencoder is not ideal, we augment the original architecture with granular parameters, which gives rise to granular logic-oriented autoencoders. A suite of experiments is also reported.
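The abstract does not specify the exact formulation of the logic processing units, so the following is only a minimal sketch under common assumptions: the encoder layer is built from fuzzy OR neurons (max of min compositions), the decoder layer from fuzzy AND neurons (min of max compositions), and the granular augmentation is illustrated as a hypothetical interval parameter `eps` around the reconstruction whose coverage of the original data quantifies quality. The names `or_neuron`, `and_neuron`, `encode`, `decode`, and `granular_coverage` are illustrative, not from the paper.

```python
import numpy as np

def or_neuron(x, w):
    # OR neuron (assumed max-min composition): s-norm (max) taken over
    # t-norm (min) combinations of inputs and weights; values in [0, 1].
    return np.max(np.minimum(x, w))

def and_neuron(h, v):
    # AND neuron (assumed min-max composition): t-norm (min) taken over
    # s-norm (max) combinations; a weight of 0 lets an input pass through,
    # a weight of 1 masks it out.
    return np.min(np.maximum(h, v))

def encode(x, W):
    # Logic-oriented encoder: one OR neuron per row of W produces the
    # reduced (lower-dimensional) representation of x.
    return np.array([or_neuron(x, w) for w in W])

def decode(h, V):
    # Logic-oriented decoder: one AND neuron per row of V reconstructs
    # each original coordinate from the reduced representation h.
    return np.array([and_neuron(h, v) for v in V])

def granular_coverage(x, x_hat, eps):
    # Hypothetical granular quality measure: wrap the numeric reconstruction
    # in the interval [x_hat - eps, x_hat + eps] and report the fraction of
    # original entries covered by it (higher coverage at lower eps is better).
    return float(np.mean((x >= x_hat - eps) & (x <= x_hat + eps)))

if __name__ == "__main__":
    x = np.array([0.9, 0.1, 0.8, 0.2])                      # original data in [0, 1]
    W = np.array([[1.0, 0.0, 1.0, 0.0],                     # encoder weights (2 hidden units)
                  [0.0, 1.0, 0.0, 1.0]])
    V = np.array([[0.0, 1.0], [1.0, 0.0],                   # decoder weights (4 outputs)
                  [0.0, 1.0], [1.0, 0.0]])
    h = encode(x, W)                                        # reduced representation
    x_hat = decode(h, V)                                    # logic-based reconstruction
    print(h, x_hat, granular_coverage(x, x_hat, 0.15))
```

With the hand-picked weights above, the two hidden units pool the odd- and even-indexed inputs, and the interval of width 2*eps around the reconstruction covers all four original entries; in practice the weights would be optimized and eps selected to balance coverage against specificity of the resulting information granules.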