Abstract

This study introduces a novel non-systematic logical structure, termed B-type Random 2-Satisfiability, which incorporates non-redundant first- and second-order clauses, as well as redundant second-order clauses. The proposed logical rule is implemented in the discrete Hopfield neural network using the Wan Abdullah method, with the corresponding cost function minimized through an exhaustive search algorithm to reduce the inconsistency of the logical rules. The inclusion of redundant literals is intended to enhance the capacity of the model to extract overlapping knowledge. Additionally, the performance of B-type Random 2-Satisfiability with varying clause proportions in the discrete Hopfield neural network is evaluated using various metrics, including learning error, retrieval error, weight error, energy analysis, and similarity analysis. Experimental results indicate that the model demonstrates superior efficiency in synaptic weight management and offers a broader solution space when the numbers of the three clause types are selected randomly.

Highlights
  • A B-type Random 2-Satisfiability (BRAN2SAT) logic is proposed, which is composed of both redundant and non-redundant literals.

  • The BRAN2SAT logic is embedded into a discrete Hopfield neural network using the WA method to enhance the interpretability of the network.

  • The performance of the proposed model is evaluated using various metrics.

  • The results confirmed the superior performance of BRAN2SAT.

1 Introduction

Artificial neural networks (ANNs) are mathematical models inspired by the perceptual processing of the human brain. Composed of interconnected neurons, ANNs adjust synaptic weights in response to external inputs, enabling the generation of accurate outputs for specific tasks (Rosenblatt 1958). Nowadays, ANNs have numerous applications, such as in natural language processing (Raza et al. 2024), autonomous driving (Xu et al. 2024), medical image processing (Lee et al. 2023; Azad et al. 2024), financial forecasting (Mousapour Mamoudan et al. 2023; Lee & Kang 2023), and smart manufacturing (Sakkaravarthi et al. 2024; Farahani et al. 2025). One of the earliest ANNs was the Hopfield neural network (HNN) proposed by Hopfield (Hopfield 1982). This single-layer feedback network lacks hidden layers and features neurons that form loops, allowing outputs to feed back into the input layer. The discrete Hopfield neural network (DHNN), a specific form of HNN, utilizes discrete binary states for neurons and features bidirectional connections through synaptic weights. It is trained with the Hebbian learning rule (Hopfield & Tank 1985), where neurons are iteratively updated until convergence to a near-optimal state. The asynchronous updating process ensures that the DHNN minimizes an energy function, leading to a stable state suitable for real-world problems (Ma et al. 2023; Yu et al. 2024; Wang et al. 2024a; Deng et al. 2024). However, as the number of neurons increases, the DHNN faces challenges such as local minima and limited storage capacity. Therefore, the structure of the DHNN must be optimized and improved. To address the storage capacity limitations of the conventional DHNN (approximately |$0.15N$|, where |$N$| denotes the number of neurons) and ensure stable convergence to the global optimum while storing a large number of patterns, Abdullah (1992) first modelled neurons in the DHNN using symbolic logic with satisfiability (SAT) structures. Synaptic weights are determined by comparing cost functions derived from logical rules with the final energy function, an approach that has proven superior to the traditional Hebbian learning method (Hopfield 1999). Sathasivam (2010) later advanced this work by introducing Horn satisfiability (HORNSAT) into the DHNN. The study implemented neuron relaxation mechanisms to prevent oscillations that could lead to local minima and demonstrated that the proposed HORNSAT logic effectively converges to global minimum energy states. This marked the first evaluation of neuron states based on logical rules within the DHNN, laying the foundation for subsequent research on systematic logical rules. Building on this foundation, Kasihmuddin et al. (2019) and Mansor et al. (2017) subsequently proposed 2-Satisfiability (2SAT) and 3-Satisfiability (3SAT), respectively. The primary difference between these structures lies in the number of literals per clause: 2SAT includes two literals per clause, while 3SAT involves three. Despite the successful implementation of systematic logic in DHNNs, such logic consists solely of clauses of the same order, which leads to the overfitting of neuron states during the retrieval phase. Consequently, the model exhibits a lack of generalizability on real-world data sets.

Recent studies have proposed various non-systematic logics to address these limitations. These logics are composed of clauses that are randomly arranged in different orders, offering greater flexibility than systematic logic. This flexibility enables non-systematic logic to generate a wider diversity of neuron states, enhancing its adaptability to a wide range of real-world problems. Sathasivam et al. (2020) proposed Random 2-Satisfiability (RAN2SAT), combining first- and second-order clauses. Although the quality of neuron states declines as the number of neurons increases, RAN2SAT exhibits greater variations in synaptic weights. Building on this idea, Zamri et al. (2022) proposed a new non-systematic logical rule, weighted Random 2-Satisfiability (|$r$|2SAT). This rule adjusts the logical structure by considering the expected proportion of negated literals (|$r$|) and utilizes a Genetic Algorithm (GA) to generate suitable |$r$|2SAT structures. In DHNN, |$r$|2SAT with weighted ratios of negated literals improves global minimum rates and ensures the correct application of negations, leading to higher neuron permutations. Inspired by these developments, Abdeen et al. (2023) proposed a novel non-systematic logic, termed S-type Random k Satisfiability, which constructs a logical framework based on two given statistical parameters and assigns negative literals within first- and second-order clauses. Reports indicate that when these parameters reach higher values, the network demonstrates significant effectiveness in synaptic weight management and global minimum solution generation. Subsequently, Roslan et al. (2024) further expanded the perspective on non-systematic logic by introducing Conditional Random k Satisfiability, which intentionally omits positive literals in second-order clauses. Studies show that the proposed logical rules, combined with the newly introduced non-monotonic Smish activation function, achieve a diversified set of global minima in DHNN. Non-systematic logical rules in DHNN result in more diverse final neuron states than systematic rules. However, the predetermined combination and proportional distribution of clauses in different orders may constrain the solution spaces. To achieve greater solution diversity, researchers have combined systematic and non-systematic logic to create flexible hybrid logical rules. These hybrid rules randomly generate clauses of different orders, incorporating both systematic and non-systematic characteristics, thereby enhancing logical diversity. Guo et al. (2022) proposed Y-type Random 2-Satisfiability, which randomly assigns first- and second-order clauses. This approach integrates both systematic and non-systematic logic by using a random clause generator, thereby enhancing storage capacity and connection diversity. While systematic logic provides higher storage capacity for DHNN through its clause structures, non-systematic logic offers richer connections through first-order logic, further improving connection diversity. Although these logical rules for DHNN have received significant attention, a common drawback is the lack of redundancy in variables, as all clauses consist of non-redundant literals. Therefore, incorporating redundant literals is necessary. However, introducing redundant literals with opposing values into a logical expression generally results in non-satisfiability logic.

Maximum satisfiability (MAXSAT) is a variant of logical rules that is inherently unsatisfiable. According to Prugel-Bennett & Tayarani-Najaran (2011), MAXSAT aims to maximize the number of satisfiable clauses, although the outcome of MAXSAT logic is consistently False. Kasihmuddin et al. (2018) first introduced non-satisfiability logic into DHNN through maximum 2-satisfiability (MAX2SAT), where only satisfiable clauses are considered in the cost function, as synaptic weights for non-satisfiable clauses are set to zero. This MAX2SAT approach, utilizing exhaustive search (ES), reportedly achieves global minimum energy for smaller neuron sets. Building on this, Someetheram et al. (2022) proposed a new non-satisfiability logic, Random maximum 2-satisfiability (RM2SAT), which combines MAX2SAT and Random 2-Satisfiability. RM2SAT is embedded in DHNN, employing the Election Algorithm during the learning phase to find a consistent interpretation that minimizes the cost function and achieves optimal performance in both learning and testing. While MAXSAT has been successfully applied in DHNN, no existing literature has explored the implementation of satisfiability logic with redundant literals in DHNN. Therefore, it is necessary to consider introducing redundant literals in satisfiability logic. On the one hand, logical rules with redundant literals can represent |$\frac{n}{2}$| neurons in DHNN with |$n$| clauses, simplifying the network structure. In terms of synaptic weights, the synaptic weights between neurons with redundant literal representations change from a single weight value to the sum of two weight values. This adjustment allows the network to modulate the connection strength between neurons more precisely and flexibly, thereby enhancing its adaptability and generalization ability while reducing the risk of becoming trapped in local minima. On the other hand, logical rules with redundant literals require fewer neuron state updates during retrieval, allowing DHNN to reach stable states with fewer iterations, thus improving storage capacity and resistance to interference.

As a follow-up to the previously mentioned literature review, it is worthwhile to investigate the following research questions:

  • What alternative representations of non-systematic logic can effectively simplify the structure of the DHNN by introducing redundant and non-redundant literals?

  • What alternative representations of non-systematic logic can precisely adjust synaptic weights through redundant literals, thereby effectively modelling neuron connections in the DHNN?

  • What experimental design and performance metrics are most suitable for validating the effectiveness of embedding non-systematic logical rules with redundant literals into the DHNN during the learning and retrieval phases?

To answer these questions, B-type Random 2-Satisfiability (BRAN2SAT) is proposed by incorporating redundant second-order clauses, as well as non-redundant second- and first-order clauses. The inclusion of redundant second-order clauses enables more diverse connections with fewer neurons in the DHNN and facilitates control over neuron state updates. This work represents the first application of logical rules with redundant literals in DHNN, demonstrating their potential to enhance network performance. The contributions of this study are as follows:

  • To propose a novel non-systematic logical rule termed BRAN2SAT. This logic consists of redundant second-order clauses combined with non-redundant first- and second-order clauses. This combination preserves the fault tolerance of redundant clauses and reflects the simplicity and efficiency of non-redundant clauses, which achieves equilibrium between logical complexity and reasoning efficiency.

  • To embed the proposed BRAN2SAT into the DHNN, each literal is mapped to the corresponding neuron in the network, forming the DHNN–BR2SAT model. The inconsistencies in BRAN2SAT are then minimized, and the correct synaptic weights of the DHNN are determined by comparing the associated cost function with the energy function. The influence of BRAN2SAT is reflected in the behaviour of the DHNN–BR2SAT through the synaptic weights.

  • To conduct extensive experiments to demonstrate the effectiveness of DHNN–BR2SAT. DHNN–BR2SAT is evaluated by considering different clause orders. The performance of the learning and retrieval phases of DHNN–BR2SAT is comprehensively evaluated in terms of the performance metrics of learning error, synaptic weight error, energy distribution, retrieval error, total variation, and similarity metrics to justify the behaviour of the proposed BRAN2SAT.

The rest of the study is structured as follows. Section 2 provides the motivation. Section 3 presents the overview structure of the novel BRAN2SAT. Section 4 explains the implementation of BRAN2SAT in DHNN. Section 5 displays the experimental setup and performance evaluation metrics employed in the whole simulation experiment. Section 6 explains the results, discusses the behaviour of DHNN combined with BRAN2SAT, and addresses the limitations of the model. Finally, Section 7 presents conclusions and future work.

2 Motivation

2.1 The satisfiability logic in the DHNN lacks redundant literals

A significant limitation of DHNNs lies in their tendency to become trapped in local minima during energy descent, which hinders the ability of the network to recall stored patterns accurately. Embedding logical rules within DHNN enhances interpretability by guiding neuron update paths. Moreover, the final neuron states of the improved DHNN model offer potential solutions for practical optimization problems. Various types of logical rules have been successfully incorporated into DHNN, including systematic logical rules (Kasihmuddin et al. 2019; Mansor et al. 2017), non-systematic logical rules (Sathasivam et al. 2020), and hybrid logical rules (Guo et al. 2022; Gao et al. 2022). However, these rules are based on non-redundant literals, where each literal appears only once in the logical rules. Since each literal represents a neuron that connects to others as well as to itself, when literals appear only once, the associated neurons exhibit similar synaptic weights, lacking sufficient variability. This can lead to overfitting during the DHNN retrieval phase. To address this issue, the embedding of MAXSAT with two redundant variables of opposing values in DHNN achieves optimal performance in the learning and retrieval phases (Kasihmuddin et al. 2018; Someetheram et al. 2022). However, this logic constrains the values and number of redundant variables, limiting its flexibility in regulating synaptic weights. To address these limitations, we propose a satisfiability logic that incorporates redundant literals. This is achieved by randomly generating redundant second-order clauses alongside non-redundant first- and second-order clauses, providing DHNN with more flexible logical rules. This approach provides a novel method for modelling in real-world applications.

2.2 Limited storage capacity of DHNN

The storage capacity of DHNN is relatively limited. Theoretically, the maximum capacity is approximately 15% of the total number of neurons |$N$|, allowing for up to |$0.15N$| patterns to be stored (Hopfield 1982). Exceeding this limit can overload the network, leading to spurious memories or failure to converge correctly. This limitation renders DHNN unsuitable for large-scale pattern storage. To address this issue, Storkey & Valabregue (1999) proposed a weight update method that effectively reduces interference among patterns during training. This algorithm improves upon the Hebbian learning rule, making it better suited to handling larger numbers of patterns and reducing confusion among similar patterns. However, compared to Hebbian learning, the Storkey rule has higher computational complexity and may be less effective under high-noise conditions. Additionally, Okada (1996) focused on sparse coding strategies that store information by activating only a small fraction of neurons, thereby reducing overlap among patterns and significantly enhancing storage capacity and noise resistance. Building on this, we propose embedding a logical rule with redundant variables in DHNN. By introducing redundant variables into the logical rules, the corresponding neurons possess two weights of opposite signs, which suppresses neuron activation, reduces pattern overlap, and facilitates more accurate pattern retrieval. This characteristic enhances the storage capacity and noise resistance of DHNN to a certain extent.

3 B-type Random 2-Satisfiability

BRAN2SAT is a non-systematic satisfiability logical rule with redundant literals, represented in conjunctive normal form (CNF). The logical structure comprises clauses containing two redundant literals and clauses with at most two non-redundant random literals. Literals are combined using the OR operator ( |$ \vee$| ), while clauses are connected using the AND operator ( |$ \wedge$| ). It should be noted that the second-order clauses containing redundant literals are subject to additional restrictions on clause selection (Liberatore 2008). In contrast, no such restrictions apply to clauses with non-redundant random literals. The proposed structure of BRAN2SAT is as follows:

  • A set of |$x$| literals, |$z_1, z_2, \dots , z_x$|, where each takes a value from the set |$\lbrace z_i, \lnot z_i\rbrace$|, representing positive or negative states.

  • A set of |$y$| clauses, |$C_1, C_2, \dots , C_y$|.
    (1)
     
    (2)
    where |$2v$| represents the total number of redundant second-order clauses, |$m$| represents the total number of non-redundant second-order clauses, and |$n$| represents the total number of first-order clauses.
  • The formulation for Redundancy 2-satisfiability is presented as follows (Liberatore 2008):
    (3)
    where |$v \ge 1$|. The clauses of |$P_{{\rm {R2SAT}}}$| are structured based on |$C_i^{\prime }$| and |$C_i^{\prime\prime}$| as follows:
    (4)
     
    (5)
    The literals |$x_i$| and |$y_i$| are redundant in the formulation, and their states are restricted. In the |$P_{{\rm {R2SAT}}}$| formulation, each pair of clauses is constructed by selecting one clause from each of two sets. Specifically, one clause is randomly selected from the set |$C_i^{\prime }$|, which contains the clauses |$(x_i \vee y_i)$| and |$(\lnot x_i \vee \lnot y_i)$|, while the other is randomly chosen from the set |$C_i^{\prime\prime}$|, which contains the clauses |$(\lnot x_i \vee y_i)$| and |$(x_i \vee \lnot y_i)$|.
  • The formulation for Random 2-Satisfiability is given as:
    (6)
    the clauses of |$P_{{\rm {RAN2SAT}}}$| are structured based on |$C_i^{(2)}$| and |$C_i^{(1)}$| as follows:
    (7)
    where the literals |$a_i$|⁠, |$b_i$|⁠, and |$c_i$| are non-redundant, and their states are randomly determined, with |$a_i \in \lbrace a_i, \lnot a_i\rbrace$|⁠, |$b_i \in \lbrace b_i, \lnot b_i\rbrace$|⁠, and |$c_i \in \lbrace c_i, \lnot c_i\rbrace$|⁠. The clauses of |$P_{\text{RAN2SAT}}$| are formed by combining literals from two sets, |$C_i^{(2)}$| and |$C_i^{(1)}$|⁠. For each |$C_i^{(2)}$| clause (⁠|$1 \le i \le m$|⁠), it is the disjunction of |$a_i$| and |$b_i$|⁠, where each may be the literal or its negation. For each |$C_i^{(1)}$| clause (⁠|$1 \le i \le n$|⁠), it is formed by |$c_i$|⁠, which may also be the literal or its negation. Thus, the overall structure of |$P_{\text{RAN2SAT}}$| depends on the combination of these clauses, with the form of each clause determined by the randomly assigned truth values of the literals.
  • The general formulation of |$P_{{\rm {BRAN2SAT}}}$| incorporates Redundancy 2-Satisfiability and Random 2-Satisfiability as follows:
    (8)
     
    (9)
    where |$C_i^{^{\prime }}$| and |$C_i^{\prime\prime}$| define the redundant second-order clauses, while |$C_i^{(2)}$| and |$C_i^{(1)}$| define non-redundant second- and first-order clauses, respectively.

The four different examples for |$P_{{\rm {BRAN2SAT}}}$| are as follows:

(10)
(11)
(12)
(13)

According to Equations 10–13, the bipolar value for each literal of the CNF takes either 1 or −1, representing TRUE and FALSE, respectively (Sathasivam 2010). The logical formula is satisfied (i.e. |$P_{{\rm {BRAN2SAT}}} = 1$|) if at least one literal in each clause is satisfied. In other words, |$P_{{\rm {BRAN2SAT}}} = 1$| requires that all clauses in the formula be satisfied. There are only four possible combinations of redundant second-order clauses in the BRAN2SAT model, implying that these combinations will repeat as the number of redundant second-order clauses |$v$| increases. Unlike RAN2SAT proposed by Sathasivam et al. (2020), in which the clause proportion in Equation 7 is predetermined despite the randomness of the literals, so that the numbers of clauses of each order are not selected randomly, this study incorporates second-order clauses with redundant literals and restricts only the numbers of non-redundant second- and first-order clauses to |$m\ge 1$| and |$n\ge 1$|, with no restriction on the proportion of clauses. In this context, |$P_{{\rm {BRAN2SAT}}}$| provides a more flexible logical rule in terms of clause structure and is more applicable to real-world scenarios.
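To make the clause construction above concrete, the following sketch (written in Python purely for illustration; it is not the implementation used in this study, and helper names such as `build_bran2sat` are hypothetical) generates a small |$P_{{\rm {BRAN2SAT}}}$| instance with |$v$| redundant second-order clause pairs, |$m$| non-redundant second-order clauses, and |$n$| first-order clauses, and checks whether a random bipolar assignment satisfies it.

```python
import random

def build_bran2sat(v, m, n, seed=0):
    """Build a BRAN2SAT-style formula as a list of clauses.

    A literal is a pair (variable_index, sign) with sign = +1 for the
    positive literal and -1 for its negation; a clause is a tuple of literals.
    """
    rng = random.Random(seed)
    clauses, var = [], 0
    # Redundant second-order clauses: for each pair, one clause from
    # C'_i = {(x v y), (~x v ~y)} and one from C''_i = {(~x v y), (x v ~y)},
    # both sharing the same two literals x_i and y_i.
    for _ in range(v):
        x, y = var, var + 1
        var += 2
        clauses.append(rng.choice([((x, +1), (y, +1)), ((x, -1), (y, -1))]))
        clauses.append(rng.choice([((x, -1), (y, +1)), ((x, +1), (y, -1))]))
    # Non-redundant second-order clauses over fresh literals a_i, b_i.
    for _ in range(m):
        a, b = var, var + 1
        var += 2
        clauses.append(((a, rng.choice([+1, -1])), (b, rng.choice([+1, -1]))))
    # Non-redundant first-order clauses over fresh literals c_i.
    for _ in range(n):
        clauses.append(((var, rng.choice([+1, -1])),))
        var += 1
    return clauses, var  # var equals the total number of neurons, 2v + 2m + n

def is_satisfied(clauses, state):
    """The formula holds if every clause has at least one satisfied literal."""
    return all(any(state[i] == s for i, s in clause) for clause in clauses)

clauses, nn = build_bran2sat(v=1, m=2, n=1)
state = [random.choice([1, -1]) for _ in range(nn)]
print(nn, "neurons, satisfied:", is_satisfied(clauses, state))
```

Under this encoding, the total number of neurons is |$2v + 2m + n$|, since the two clauses of each redundant pair share the same pair of literals.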

4 BRAN2SAT in the discrete Hopfield neural network

4.1 Learning phase

The DHNN is a fully interconnected feedback network consisting of a single layer of binary neurons. The neuron state is typically represented by a bipolar value of +1 or −1. The neuron update formula is as follows (Hopfield 1982):

(14)

In Equation 14, |$W_{ij}$| represents the synaptic weight between neurons |$i$| and |$j$|, |$S_i$| denotes the state of unit |$i$|, and |$\theta$| represents the threshold. Notably, |$W_{ij}$| exhibits two key properties: (a) no self-feedback is allowed, as |$W_{ii} = W_{jj} = 0$|, indicating that each neuron cannot provide feedback to itself; and (b) the weights are symmetric, with |$W_{ij} = W_{ji}$|, ensuring that the interaction between two neurons is reciprocal and equal. The DHNN–BR2SAT model consists of two primary phases: the learning phase and the retrieval phase. During the learning phase, the Wan Abdullah (WA) method (Abdullah 1994) is applied, where each literal in |$P_{{\rm {BRAN2SAT}}}$| is mapped to a corresponding neuron in the DHNN, thereby generating the cost function |$E_{P_{{\rm {BRAN2SAT}}}}$|. A consistent interpretation of |$P_{{\rm {BRAN2SAT}}}$| is then obtained to minimize |$E_{P_{{\rm {BRAN2SAT}}}}$| to zero. It is worth noting that the number of unsatisfied clauses in |$P_{{\rm {BRAN2SAT}}}$| is directly proportional to the value of |$E_{P_{{\rm {BRAN2SAT}}}}$| (Kasihmuddin et al. 2017). Equation 15 below presents the cost function for DHNN–BR2SAT based on the WA method:

(15)

where |$v$|, |$m$|, and |$n$| denote the numbers of redundant second-order clause pairs, non-redundant second-order clauses, and first-order clauses, respectively. The formula for |$Q_{ij}$| corresponding to the inconsistent interpretation of |$P_{{\rm {BRAN2SAT}}}$| is as follows:

(16)

where |$X$| represents the set of literals |$\lbrace x_i, y_i, a_i, b_i, c_i\rbrace$|⁠, with |$S_X$| assigned based on the polarity of each literal. The synaptic weights are computed by comparing the coefficients of |$E_{P_{{\rm {BRAN2SAT}}}}$| with the Lyapunov energy function (Equation 21). To ensure the correct synaptic weights for DHNN–BR2SAT, at least one set of neuron states must be identified that satisfies |$P_{{\rm {BRAN2SAT}}}$| (Abdullah 1992). The resulting synaptic weights are then stored in a content addressable memory (CAM). CAM functions by using the stored synaptic weights to guide the network toward optimal neuron states during its evolution. Thus, finding a consistent interpretation of |$P_{{\rm {BRAN2SAT}}}$| is crucial for DHNN to store memory patterns correctly. The probability of achieving a consistent interpretation of |$P_{{\rm {BRAN2SAT}}}$| is given as follows:

(17)

where |$P$| represents the probability, and |$\left(1 - \frac{1}{2^k}\right)$| denotes the probability that a |$k$|-order clause is satisfiable.
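As a rough illustration of the learning phase, the sketch below enumerates bipolar neuron states exhaustively, mirroring the trial-and-error nature of ES, and stops once the number of unsatisfied clauses (which is proportional to |$E_{P_{{\rm {BRAN2SAT}}}}$|) reaches zero. The clause set and function names are illustrative only; the sketch does not reproduce the WA-method weight derivation itself.

```python
from itertools import product

# Illustrative clause set over 5 neurons (indices 0-4): a redundant pair over
# (x0, x1), one non-redundant second-order clause over (x2, x3), and one
# first-order clause over x4.  A literal is (index, sign).
clauses = [((0, +1), (1, +1)), ((0, -1), (1, +1)),   # redundant pair C'_1, C''_1
           ((2, +1), (3, -1)),                       # non-redundant second-order
           ((4, -1),)]                               # first-order

def cost(state):
    """Number of unsatisfied clauses; proportional to E_P_BRAN2SAT."""
    return sum(0 if any(state[i] == s for i, s in c) else 1 for c in clauses)

def exhaustive_search(num_neurons):
    """Trial-and-error enumeration of bipolar states until the cost is zero."""
    for state in product([1, -1], repeat=num_neurons):
        if cost(state) == 0:
            return list(state)   # consistent interpretation found
    return None                  # no satisfying assignment exists

print(exhaustive_search(5))
```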

4.2 Retrieval phase

During the retrieval phase, the neuron states are updated asynchronously. This iterative updating process is governed by Equations 18–20, as provided below:

(18)
(19)
(20)

where the local field |$h_i(t)$| represents the sum of the weighted input signals |$W_{ij} \cdot S_j$| received by neuron |$i$| from all other neurons |$j$|, along with a bias term |$W_i$|. |$h_i(t)$| serves as the input variable in the hyperbolic tangent activation function (HTAF, Rumelhart et al. 1986). |$S_i(t)$| denotes the updated state of neuron |$i$| at time |$t$|. It is important to note that when |$\tanh (h_i) = 0$|, the updated neuron state remains the same as its previous state |$S_i(t-1)$|. This occurs only when |$h_i(t) = 0$|, representing a state of balance where the excitatory and inhibitory signals received by the neuron cancel each other out (Willems 1972).
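A minimal sketch of this asynchronous updating process is given below. Since the bodies of Equations 18–20 are not reproduced here, the sketch follows the verbal description above: each neuron computes its local field, passes it through the HTAF, takes the sign of the result, and keeps its previous state when the activation is exactly zero. The weight matrix and bias values are toy numbers, not WA-derived weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieve(W, bias, state, sweeps=10):
    """Asynchronous retrieval: update one neuron at a time via its local field.

    W     : symmetric weight matrix with zero diagonal (from the learning phase)
    bias  : first-order synaptic weights W_i acting as a bias term
    state : initial bipolar neuron states (+1 / -1)
    """
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            h = W[i] @ state + bias[i]      # local field h_i(t)
            if np.tanh(h) > 0:              # HTAF followed by the sign decision
                state[i] = 1
            elif np.tanh(h) < 0:
                state[i] = -1
            # if tanh(h) == 0 the neuron keeps its previous state S_i(t-1)
    return state

# Toy example with three neurons (weights are illustrative, not WA-derived).
W = np.array([[0.0, 0.5, -0.25], [0.5, 0.0, 0.25], [-0.25, 0.25, 0.0]])
bias = np.array([0.25, 0.0, -0.5])
print(retrieve(W, bias, np.array([1, -1, 1])))
```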

When the DHNN reaches a stable state, the energy function converges to the nearest local minimum. The Lyapunov energy function |$H_{P_{{\rm {BRAN2SAT}}}}$| and the global minimum energy |$H_{P_{{\rm {BRAN2SAT}}}}^{min}$| are given as follows:

(21)
(22)

where |$2v$|⁠, |$m$|⁠, and |$n$| represent the total number of redundant second-order, non-redundant second-, and first-order clauses, respectively.

To determine whether the final neuron state corresponds to the global minimum, the following formula is applied:

(23)

where |$\mathrm{ Tol}$| is a predefined tolerance value that determines the convergence criterion for the network.
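The sketch below illustrates this convergence test under the assumption that Equation 21 takes the standard second-order Lyapunov form |$H = -\frac{1}{2}\sum_i\sum_j W_{ij} S_i S_j - \sum_i W_i S_i$| and that Equation 23 compares |$|H_{P_{{\rm {BRAN2SAT}}}} - H_{P_{{\rm {BRAN2SAT}}}}^{min}|$| against |$\mathrm{Tol}$|; both assumptions follow common practice rather than the equations as printed, and all numerical values are illustrative.

```python
import numpy as np

def lyapunov_energy(W, bias, state):
    """Assumed second-order form of Equation 21:
    H = -1/2 * sum_ij W_ij S_i S_j - sum_i W_i S_i."""
    return -0.5 * state @ W @ state - bias @ state

def is_global_minimum(W, bias, state, h_min, tol=0.001):
    """Assumed form of Equation 23: the final state counts as a global
    minimum solution if its energy is within Tol of H_min."""
    return abs(lyapunov_energy(W, bias, state) - h_min) <= tol

# Toy example (weights, bias, and H_min are illustrative values only).
W = np.array([[0.0, 0.5], [0.5, 0.0]])
bias = np.array([0.25, 0.25])
state = np.array([1, 1])
print(lyapunov_energy(W, bias, state), is_global_minimum(W, bias, state, h_min=-1.0))
```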

The stable state of a neural network refers to the condition in which, after a certain point in time, the neuron states no longer change, i.e. |$ S_i(t+1) = S_i(t)$|⁠.

 
Theorem 4.1.

Let |$N = (W, \theta )$| be a DHNN. Assume that |$N$| operates asynchronously and |$W$| is a symmetric matrix with zero diagonal. Then, the DHNN will always converge to a stable state (Hopfield & Tank 1985).

A stable state exists in the DHNN only if Theorem 4.1 is satisfied. Under this condition, the DHNN evolves in the direction of decreasing energy until a stable state is reached. The following provides proof that the energy function decreases monotonically.

The Lyapunov energy function for neuron |$ i$| at time |$ t$| and |$ t+1$| is given by:

(24)
(25)

The change in energy from time |$t$| to |$t+1$| is given by:

(26)

The energy change from |$t$| to |$t+1$| can be simplified as:

(27)

From Equation 18, we obtain |$\Delta H_{P_{{\rm {BRAN2SAT}}}} \le 0$|⁠. Since neuron |$ i$| represents an arbitrary neuron in the network, and all neurons in the network update their states according to the same rule, the total energy change of the network must be less than or equal to zero, as follows:

(28)
(29)

It follows that the DHNN evolves in the direction of Lyapunov energy function minimization.

Figure 1 presents the schematic diagram of DHNN–BR2SAT. The model incorporates non-redundant first-, second-, and redundant second-order clauses. Algorithm 1 outlines the pseudo-code of DHNN–BR2SAT with an ES algorithm in the learning phase.

Figure 1: Schematic diagram of DHNN–BR2SAT.

5 Experimental setup

This section evaluates the impact of varying numbers of clauses at different orders on the behaviour of DHNN–BR2SAT through simulation experiments. The evaluation covers three aspects: the learning phase, the retrieval phase, and an analysis of total variation and the similarity index. A comprehensive analysis is provided on how different logical orders affect BRAN2SAT performance, along with an evaluation of parameter perturbation.

The experiment was conducted in a Windows 11 Home Edition environment with an i5-1035G4 1.5GHz CPU and 8 GB RAM. MATLAB R2016b was used as the development tool. The detailed parameter settings for the algorithm are provided in Table 1.

Table 1: Parameters for the proposed DHNN–BR2SAT.

Parameter explanation                                      Parameter value
Number of neurons (|$\mathrm{NN}$|)                        |$10 \le \mathrm{NN} \le 50$|
Number of neuron combinations (|$N_{\mathrm{comb}}$|)      100 (Zamri et al. 2020)
Number of learning trials (|$N_{\mathrm{learn}}$|)         100
Current number of learning trials (|$N_l$|)                |$N_l \le N_{\mathrm{learn}}$|
Initialization of neuron states in the learning phase      Random
Threshold (|$\eta$|)                                       0 (Zamri et al. 2022)
Relaxation rate (|$r$|)                                    |$1 \le r \le 5$|
Number of testing trials (|$N_t$|)                         100 (Zamri et al. 2020)
Initialization of neuron states in the testing phase       Random
Tolerance value (|$\mathrm{Tol}$|)                         0.001 (Sathasivam 2010)
Activation function                                        HTAF
Type of selection                                          Random search

To evaluate the efficiency of DHNN–BR2SAT, this section examines the performance of the model across five aspects: learning error analysis and weight analysis during the learning phase, energy analysis and global solution analysis during the retrieval phase, and similarity index analysis. Tables 2–4 present the parameters associated with all evaluation metrics.

Table 2: List of parameters in the learning phase.

Parameter                Remarks
|$f_{N_c}$|              Maximum fitness achieved
|$f_i$|                  Current fitness achieved
|$W_{\mathrm{WA}}$|      Synaptic weight obtained by WA method
|$W_i$|                  Current synaptic weight
|$N_w$|                  Number of weights at a time
|$N_{wc}$|               |$N_{wc} = N_w \cdot N_{\mathrm{comb}}$|
Table 3: List of parameters in the retrieval phase.

Parameter                Remarks
|$H_{\mathrm{min}}$|     Minimum energy value
|$H_f$|                  Final energy
|$N_{G}$|                Number of global minimum solutions
|$N_{L}$|                Number of local minimum solutions
|$N_t$|                  Number of testing trials
|$N_{tc}$|               |$N_{tc} = N_t \cdot N_{\mathrm{comb}}$|
Table 4: Variable specification for similarity index.

Variable     |$S_i^{\mathrm{ideal}}$|     |$S_i$|
|$l$|        1                            1
|$m$|        1                            −1
|$n$|        −1                           1
|$o$|        −1                           −1

While many performance measures have been proposed and applied to evaluate the accuracy of DHNN, no single metric has been universally established as a standard benchmark. This lack of consensus complicates the comparison of different network models. Consequently, various metrics must be employed for performance evaluation, and it is important to observe whether these metrics provide a consistent performance ranking across different DHNNs. Among these, the mean absolute error (MAE, Zheng et al. 2023) is one of the most direct measures of prediction error. It represents the average of the absolute differences between predicted and actual values, with smaller MAE values indicating better model performance. As a linear metric, MAE assigns equal weight to all individual errors, thereby providing a clear representation of the overall prediction error. In contrast, the root mean squared error (RMSE, Fan et al. 2021) is a fundamental statistical indicator for assessing model performance. By squaring errors before averaging, RMSE assigns greater weight to larger errors. Consequently, RMSE is particularly effective in evaluating the performance of the model when handling extreme data. Compared to MAE, RMSE is more sensitive to larger extreme errors. The mean absolute percentage error (MAPE, Wang et al. 2021), which expresses errors as percentages, provides improved interpretability. It is especially suitable for evaluating the relative accuracy of model predictions.

During the learning phase, error is measured using |${\rm {MAE}}_{{\rm {learn}}}$|⁠, |${\rm {RMSE}}_{{\rm {learn}}}$|⁠, and |${\rm {MAPE}}_{{\rm {learn}}}$|⁠. |${\rm {MAE}}_{{\rm {learn}}}$| represents the average error between the current neuron fitness |$f_i$| and the maximum fitness value |$f_{NC}$|⁠. To assess the presence of extreme errors in the learning process, |${\rm {RMSE}}_{{\rm {learn}}}$| is applied, which calculates the RMSE between |$f_i$| and |$f_{NC}$|⁠. Additionally, it is essential to evaluate relative error, which is captured by |${\rm {MAPE}}_{{\rm {learn}}}$|⁠. Together, these three metrics enable a comprehensive evaluation of model performance, considering overall, extreme local, and relative errors from multiple perspectives. The formulas are as follows:

(30)
(31)
(32)

The computation of these three equations primarily involves the maximum fitness and the fitness of the current iteration. In this context, fitness is defined as the number of satisfied clauses in the logical formula. Since achieving optimal synaptic weights requires maximizing the adaptation value based on the given logical rules, the ability to obtain these optimal weights reflects the performance of the learning outcomes of the DHNN–BR2SAT model. To evaluate this, |${\rm {MAE}}_{{\rm {weight}}}$| (Gao et al. 2022) and |${\rm {RMSE}}_{{\rm {weight}}}$| (Gao et al. 2022) are used to measure the error in the learning results, capturing both the average error and the local extreme error of the weights, respectively. The formulas are as follows:

(33)
(34)

where |$W_i$| and |$W_{{\rm {WA}}}$| represent the current weight and the optimal weight of the neuron, respectively. The parameters used in the learning error analysis and synaptic weight analysis are presented in Table 2.
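Because the bodies of Equations 30–34 are not reproduced above, the sketch below assumes the conventional definitions of MAE, RMSE, and MAPE applied to the fitness gap |$f_{N_c} - f_i$| and the weight gap |$W_{{\rm {WA}}} - W_i$|, averaged over the learning trials and stored weights, respectively. Function names and the toy numbers are illustrative only.

```python
import numpy as np

def learning_errors(f_max, f_current):
    """Assumed forms of Equations 30-32: MAE, RMSE, and MAPE between the
    maximum fitness f_Nc and the fitness f_i reached in each learning trial."""
    diff = f_max - np.asarray(f_current, dtype=float)
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    mape = 100.0 * np.mean(np.abs(diff) / f_max)
    return mae, rmse, mape

def weight_errors(w_wa, w_current):
    """Assumed forms of Equations 33-34: MAE and RMSE between the WA-method
    weights W_WA and the weights W_i obtained in the learning phase."""
    diff = np.asarray(w_wa, dtype=float) - np.asarray(w_current, dtype=float)
    return np.mean(np.abs(diff)), np.sqrt(np.mean(diff ** 2))

# Toy numbers: four learning trials on a formula with six clauses (f_Nc = 6).
print(learning_errors(6, [6, 5, 4, 6]))
print(weight_errors([0.25, 0.5, -0.25], [0.25, 0.25, 0.0]))
```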

During the retrieval phase, the retrieval efficiency of DHNN–BR2SAT is evaluated by measuring the energy error of the retrieval process using |${\rm {MAE}}_{{\rm {energy}}}$| (Guo et al. 2022). To further assess the final neuron state upon reaching stability, |${\rm {MAE}}_{{\rm {test}}}$| (Gao et al. 2022) is used to measure the error of the final solution. At the same time, the retrieval capability of DHNN is evaluated using the global minimum ratio (⁠|${\rm {ZM}}$|⁠, Zamri et al. 2022). The formulas are as follows:

(35)
(36)
(37)

where |$|H_{\mathrm{ min}} - H_f|$| represents the absolute difference between the final energy and the minimum energy, and |$N_{tc}$| denotes the total number of iterations for all logical rules. |${\rm {MAE}}_{{\rm {test}}}$| is calculated as the ratio of the number of local minimum solutions to the total number of final solutions, while |${\rm {ZM}}$| represents the ratio of global minimum solutions to the total number of final solutions. The parameters involved in the retrieval phase are provided in Table 3.
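A small sketch of these retrieval metrics, based on the verbal definitions above (and therefore an assumed rather than exact rendering of Equations 35–37), is given below; the energies and counts are toy values.

```python
import numpy as np

def retrieval_metrics(h_final, h_min, n_global, n_local):
    """Assumed forms of Equations 35-37 based on their verbal definitions:
    MAE_energy averages |H_min - H_f| over the N_tc retrievals, MAE_test is
    the fraction of local minimum solutions, and ZM is the global minimum ratio."""
    h_final = np.asarray(h_final, dtype=float)
    h_min = np.asarray(h_min, dtype=float)
    n_tc = len(h_final)                           # total number of retrievals
    mae_energy = np.mean(np.abs(h_min - h_final))
    return mae_energy, n_local / n_tc, n_global / n_tc

# Toy example: five retrievals, four of which reach the global minimum of -3.0.
print(retrieval_metrics([-3.0, -3.0, -2.5, -3.0, -3.0], [-3.0] * 5,
                        n_global=4, n_local=1))
```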

Finally, the quality of each final neuron state is assessed using the similarity index based on the global minimum of the final neuron states. The similarity index reflects the generalization ability of the DHNN–BR2SAT model (Kasihmuddin et al. 2019). The ideal neuron state |$S^{{\rm {ideal}}}_i$| is defined as follows:

(38)

where |$X$| and |$\lnot X$| represent the positive and negative literals in each clause. Table 4 lists the variables that influence the value of the similarity index (Mansor & Sathasivam 2021). It is important to note that only the final neuron states of global minimum solutions are considered in the similarity analysis. The Jaccard index |$S_{{\rm {Jaccard}}}$|, defined as the ratio of the intersection to the union of two sets of solutions, is used to evaluate their similarity (Amer et al. 2021).

(39)

The value of |$S_{{\rm {Jaccard}}}$| ranges from 0 to 1. A value closer to 1 indicates a higher similarity between the two sets of solutions, while a lower value signifies greater differences. Total variation (|${\rm {TV}}$|) measures the change between different final neuron states. It reflects the degree of variation in the solutions by accumulating the differences between neighbouring states, revealing whether the model produces diverse solutions during the retrieval process (Karim et al. 2021):

(40)
(41)

where |$J_i^n$| is used to determine whether two final neuron states are identical: |$J_i^n = 0$| if the two states are the same, and |$J_i^n = 1$| if the two states differ. A higher |${\rm {TV}}$| value indicates that the retrieved global minimum solutions are more distinct from one another, demonstrating greater diversity. By combining these two metrics, the diversity of the final solutions of the model can be comprehensively assessed from the perspectives of both |${\rm {TV}}$| and |$S_{{\rm {Jaccard}}}$|.
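The following sketch illustrates one plausible reading of Equations 39–41: |$S_{{\rm {Jaccard}}}$| is computed from the Table 4 counts as |$l/(l+m+n)$|, and |${\rm {TV}}$| accumulates |$J = 1$| whenever two neighbouring retrieved solutions differ. Both forms are assumptions made for illustration, since the equation bodies are not reproduced here.

```python
import numpy as np

def jaccard(ideal, state):
    """Assumed form of Equation 39 using the Table 4 counts:
    l = both +1, m = ideal +1 / retrieved -1, n = ideal -1 / retrieved +1."""
    ideal, state = np.asarray(ideal), np.asarray(state)
    l = np.sum((ideal == 1) & (state == 1))
    m = np.sum((ideal == 1) & (state == -1))
    n = np.sum((ideal == -1) & (state == 1))
    return l / (l + m + n)

def total_variation(final_states):
    """Assumed form of Equations 40-41: accumulate J = 1 whenever two
    neighbouring retrieved solutions differ (J = 0 when they are identical)."""
    return sum(int(not np.array_equal(a, b))
               for a, b in zip(final_states, final_states[1:]))

print(jaccard([1, 1, -1, 1], [1, -1, -1, 1]))                  # 2 / 3
print(total_variation([[1, 1, -1], [1, -1, -1], [1, 1, -1]]))  # 2
```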

To provide readers with a clearer understanding of the model, the flowchart in Figure 2 illustrates the overall experimental steps of DHNN–BR2SAT. During the learning phase, the ES algorithm (Schuurmans & Southey 2001) is employed to achieve a consistent interpretation of |$P_{{\rm {BRAN2SAT}}}$|⁠, ensuring correct synaptic weights. In the subsequent retrieval phase, random neuron states are input, and the global minimum solution is evolved through the dynamic updating of the network state.

Figure 2: Flowchart of DHNN–BR2SAT.

6 Results and discussion

The purpose of this section is to analyse the effect of the number of redundant second-order clauses |$v$|, non-redundant second-order clauses |$m$|, and first-order clauses |$n$| on the performance of the proposed BRAN2SAT. Since |$v$|, |$m$|, and |$n$| are randomly generated, the clause composition is characterized by the proportion of redundant second-order clauses |$\alpha$| and the proportion of non-redundant second-order clauses |$\beta$|; the three cases are shown in Table 5.

Table 5: Different cases for the BR2SAT model.

Case        Model                              Proportion
Case I      BRAN2SAT                           |$\text{rand}(v, m, n)$|
Case II     BRAN2SAT, |$\alpha \ge 0.5$|       |$\alpha = \frac{v}{v + m + n} \ge 0.5$|
Case III    BRAN2SAT, |$\beta \ge 0.5$|        |$\beta = \frac{m}{v + m + n} \ge 0.5$|

Table 5 presents three different cases for BRAN2SAT. In Case I, the number of clauses for all three types is generated randomly. In Case II, the proportion of redundant second-order clauses (|$\alpha$|) is greater than or equal to 0.5, indicating that redundant clauses represent at least half of the total number of clauses. Case III requires the proportion of non-redundant second-order clauses (|$\beta$|) to be greater than or equal to 0.5, meaning that non-redundant second-order clauses account for at least half of the total clauses. By constructing Cases I–III, the effects of different proportions of redundant and non-redundant clauses on the overall performance of the DHNN–BR2SAT model are systematically investigated. The following sections provide a detailed analysis of the performance of DHNN–BR2SAT under these three cases.
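For illustration only, the sketch below shows one possible way to draw the clause counts |$(v, m, n)$| for Cases I–III consistently with Table 5; the actual generator used in the simulations is not reproduced in this study, so the sampling strategy and bounds here are hypothetical.

```python
import random

def sample_case(case, max_clauses=10, seed=None):
    """Illustrative sampler for the clause counts (v, m, n) in Table 5.

    Case I draws all three counts freely; Cases II and III resample until the
    required proportion (alpha or beta >= 0.5) is met, keeping v, m, n >= 1.
    """
    rng = random.Random(seed)
    while True:
        v, m, n = (rng.randint(1, max_clauses) for _ in range(3))
        total = v + m + n
        if case == "I":
            return v, m, n
        if case == "II" and v / total >= 0.5:    # alpha >= 0.5
            return v, m, n
        if case == "III" and m / total >= 0.5:   # beta >= 0.5
            return v, m, n

for case in ("I", "II", "III"):
    print(case, sample_case(case, seed=42))
```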

6.1 Learning phase

The goal of this phase is to evaluate the learning capability of BRAN2SAT with varying clause proportions within the DHNN framework. During the learning process, these models are embedded into DHNN, and an ES algorithm (Schuurmans & Southey 2001) is employed to achieve a consistent interpretation of BRAN2SAT, thereby establishing the correct synaptic weights for DHNN. Figures 3–5 and Tables 6–8 present the results for |${\rm {MAE}}_{{\rm {learn}}}$|, |${\rm {RMSE}}_{{\rm {learn}}}$|, and |${\rm {MAPE}}_{{\rm {learn}}}$| across different DHNN–BR2SAT models. As observed in Figures 3 and 4, |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| exhibit a linear increase with the number of neurons. This trend is attributed to the dependence of the ES algorithm on a trial-and-error method to achieve consistent interpretations of BRAN2SAT, which is effective only for smaller numbers of neurons. Thus, optimization algorithms incorporating global and local search operators can be introduced during the DHNN–BR2SAT learning phase to enhance learning efficiency, facilitating consistent interpretations of BRAN2SAT (Kilani 2010). Notably, from Tables 6 and 7, |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| values in Case I fluctuate considerably but remain relatively low. This fluctuation is due to the random allocation of the three clause types within the model, where first-order clauses are the least likely to satisfy SAT allocations compared to clauses of other orders (Giráldez-Cru & Levy 2021). Therefore, when the number of randomly assigned first-order clauses in DHNN–BR2SAT is large, the |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| values are relatively high. Conversely, second-order clauses have a higher likelihood of satisfying SAT, leading to lower |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| when their proportion is higher in Case I. The bolded values represent the minimum |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| for the same number of neurons. In Case II, the proportion of redundant second-order clauses exceeds 0.5, which results in a multiplicative increase in the number of redundant second-order clauses as the number of neurons rises. This increase complicates ES retrieval, raising |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| values (Massacci & Marraro 2000). In contrast, Case III exhibits a proportion of non-redundant clauses exceeding 0.5. While the total number of second-order clauses in Case III is lower than in Case II, the presence of some redundant clauses improves solution efficiency (Heule et al. 2019), resulting in relatively low |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| values. Although errors increase with the number of neurons, the ratio of |${\rm {MAE}}_{{\rm {learn}}}$| to |${\rm {RMSE}}_{{\rm {learn}}}$| remains close to 1:1, indicating that the error growth is uniform and controllable, with no significant outliers affecting model performance (Gerrard et al. 2020). According to Figure 5 and Table 8, |${\rm {MAPE}}_{{\rm {learn}}}$| values in Case I fluctuate considerably, peaking at 31.732691 when |$\mathrm{ NN} = 20$|, indicating a higher proportion of unsatisfied clauses relative to the total number of clauses (Aribowo & Muslim 2020). When |$\mathrm{ NN}$| is between 25 and 40, the three models show similar stability, with trends in Cases II and III being closely aligned, primarily due to the higher proportion of second-order clauses.

Figure 3: |${\rm {MAE}}_{{\rm {learn}}}$| for different DHNN–BR2SAT models.

Figure 4: |${\rm {RMSE}}_{{\rm {learn}}}$| for different DHNN–BR2SAT models.

Figure 5: |${\rm {MAPE}}_{{\rm {learn}}}$| for different DHNN–BR2SAT models.

Table 6: Comparison of |${\rm {MAE}}_{{\rm {learn}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II    Case III
10        1.8438    2.2548     2.5785
15        2.5026    2.8258     2.5823
20        4.7599    4.1811     3.6932
25        3.6980    4.7052     4.2056
30        5.4970    5.9893     5.2526
35        4.9980    6.4575     5.7542
40        7.0016    7.7565     6.7696
45        6.5021    8.2334     7.2596
50        7.5108    9.5030     8.7298
Best      1.8438    2.2548     2.5785
Worst     7.5108    9.5030     8.7298
Avg.      4.9238    5.7674     5.2028
Avg rank  1.4444    2.7778     1.7778
Table 7: Comparison of |${\rm {RMSE}}_{{\rm {learn}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II    Case III
10        2.1325    2.5138     2.8634
15        2.8037    3.1551     2.8990
20        5.0393    4.4737     4.0061
25        4.0555    4.9960     4.5252
30        5.8030    6.2739     5.5597
35        5.3398    6.7499     6.0670
40        7.3122    8.0417     7.0842
45        6.8389    8.5233     7.5660
50        7.8369    9.7908     9.0329
Best      2.1325    2.5138     2.8634
Worst     7.8369    9.7908     9.0329
Avg.      5.2402    6.0576     5.5115
Avg rank  1.4444    2.7778     1.7778
Table 8: Comparison of |${\rm {MAPE}}_{{\rm {learn}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I     Case II    Case III
10        26.3399    28.1845    28.6499
15        25.0258    25.6889    25.8234
20        31.7327    27.8742    28.4091
25        26.4140    26.1402    26.2851
30        27.4852    27.2243    27.6451
35        26.3054    25.8300    26.1555
40        26.9291    26.7467    27.0784
45        26.0082    28.5233    25.8750
50        26.8244    26.3972    26.4538
Best      25.0258    25.6889    25.8234
Worst     31.7327    28.5233    28.6499
Avg.      27.0072    26.9566    26.9306

Friedman rank tests are conducted on |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| for all the numbers of neurons. The Chi-square value is |$\chi ^2 = 8.6667$|⁠, with a degree of freedom of |$\mathrm{ df} = 2$|⁠, and |$p < 0.05$|⁠. Therefore, the null hypothesis of no significant difference for the DHNN–BR2SAT is rejected. According to Tables 6 and 7, Case I demonstrates competitive |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$| values, with its average rank surpassing that of the other two models. These results statistically confirm the superiority of Case I in terms of |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$|⁠.
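The reported Friedman statistic can be reproduced directly from the Table 6 columns; the short check below (illustrative, not the authors' analysis script) uses SciPy's friedmanchisquare on the nine |${\rm {MAE}}_{{\rm {learn}}}$| values per case.

```python
from scipy.stats import friedmanchisquare

# MAE_learn values from Table 6 for NN = 10, 15, ..., 50.
case_1 = [1.8438, 2.5026, 4.7599, 3.6980, 5.4970, 4.9980, 7.0016, 6.5021, 7.5108]
case_2 = [2.2548, 2.8258, 4.1811, 4.7052, 5.9893, 6.4575, 7.7565, 8.2334, 9.5030]
case_3 = [2.5785, 2.5823, 3.6932, 4.2056, 5.2526, 5.7542, 6.7696, 7.2596, 8.7298]

stat, p = friedmanchisquare(case_1, case_2, case_3)
print(f"chi-square = {stat:.4f}, p = {p:.4f}")   # chi-square = 8.6667, p < 0.05
```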

Figures 6 and 7 and Tables 9 and 10 illustrate the |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$| for all DHNN–BR2SAT models. The results indicate that Case II has the highest |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$| values. This suggests that when more redundant second-order clauses are present in BRAN2SAT logical rules, DHNN generates more synaptic weights, leading to higher learning errors. However, the additional weight connections enhance information transfer between neurons without significantly increasing network size (Yu et al. 2024). The bolded values in Tables 9 and 10 represent the minimum |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$| for the same number of neurons, with most of these minima observed in Cases I and III. This suggests that Cases I and III perform better, as they contain more non-redundant literals and relatively fewer total literals, leading to fewer synaptic weights. Consequently, DHNN requires fewer learning iterations to minimize the cost function (Mézard et al. 2002). Another notable finding is that the trends of |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$| are similar to those of |${\rm {MAE}}_{{\rm {learn}}}$| and |${\rm {RMSE}}_{{\rm {learn}}}$|. This occurs because, in the DHNN–BR2SAT model, obtaining the correct synaptic weights via the WA method depends on |$E_{P_{{\rm {BRAN2SAT}}}} = 0$| (Chen et al. 2023). Therefore, when |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$| are minimized, DHNN retrieves the optimal final neuron states that correspond to the behaviour of the DHNN–BR2SAT model.

Figure 6: |${\rm {MAE}}_{{\rm {weight}}}$| for different DHNN–BR2SAT models.

Figure 7: |${\rm {RMSE}}_{{\rm {weight}}}$| for different DHNN–BR2SAT models.

Table 9: Comparison of |${\rm {MAE}}_{{\rm {weight}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II    Case III
10        0.0000    0.0000     0.0002
15        0.0005    0.0028     0.0011
20        0.0351    0.0233     0.0172
25        0.0119    0.0369     0.0271
30        0.0471    0.0534     0.0402
35        0.0362    0.0586     0.0463
40        0.0590    0.0693     0.0555
45        0.0612    0.0821     0.0671
50        0.0656    0.0931     0.0856
Best      0.0000    0.0000     0.000186
Worst     0.0656    0.0931     0.0856
Avg.      4.9238    5.7674     5.2028
Avg rank  1.5000    2.7222     1.7778
Table 10:

Comparison of |${\rm {RMSE}}_{{\rm {weight}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II   Case III
10        0.0000    0.0000    0.0079
15        0.0161    0.0959    0.0362
20        1.0967    0.6998    0.5267
25        0.3659    0.9696    0.7095
30        1.2127    1.2797    1.0839
35        0.9469    1.3505    1.1678
40        1.4036    1.5234    1.3303
45        1.3979    1.6341    1.4572
50        1.4736    1.7652    1.6931
Best      0.0000    0.0000    0.0079385
Worst     1.4736    1.7652    1.6931
Avg.      0.8793    1.0354    0.8903
Avg rank  1.5000    2.7222    1.7778

Friedman rank tests are conducted on |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$| across all numbers of neurons. The chi-square value is |$\chi ^2 = 7.6000$| with |$\mathrm{ df} = 2$| and |$p < 0.05$|; therefore, the null hypothesis of no significant difference among the DHNN–BR2SAT cases is rejected. According to Tables 9 and 10, Case I demonstrates competitive |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$| values, with its average rank surpassing those of the other two cases. These results statistically confirm the superiority of Case I in terms of |${\rm {MAE}}_{{\rm {weight}}}$| and |${\rm {RMSE}}_{{\rm {weight}}}$|.
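
As a cross-check, the Friedman statistic quoted above can be reproduced with a standard statistics library. The sketch below simply feeds the three |${\rm {MAE}}_{{\rm {weight}}}$| columns of Table 9 to scipy.stats.friedmanchisquare; treating the test this way is an assumption about the procedure rather than the authors' original code.

from scipy.stats import friedmanchisquare

# MAE_weight columns of Table 9 (Cases I-III, NN = 10, 15, ..., 50).
case_i   = [0.0000, 0.0005, 0.0351, 0.0119, 0.0471, 0.0362, 0.0590, 0.0612, 0.0656]
case_ii  = [0.0000, 0.0028, 0.0233, 0.0369, 0.0534, 0.0586, 0.0693, 0.0821, 0.0931]
case_iii = [0.0002, 0.0011, 0.0172, 0.0271, 0.0402, 0.0463, 0.0555, 0.0671, 0.0856]

stat, p_value = friedmanchisquare(case_i, case_ii, case_iii)
print(stat, p_value)   # chi-square of about 7.6 with df = 2; p < 0.05 rejects H0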

6.2 Retrieval phase

After DHNN–BR2SAT achieves logical consistency during the learning phase, the optimal synaptic weights are obtained using the WA method (Abdullah 1992) and stored in the CAM. The model then enters the retrieval phase, where its task is to restore random input neuron states to their stored states. The network automatically adjusts the neuron states based on the input until it reaches a stable state. However, due to the tendency of DHNN to generate repetitive states, output overfitting can occur. Therefore, it is necessary to evaluate the performance of the retrieval phase of DHNN–BR2SAT by assessing energy error, retrieval error, and the quality of global minima (Zamri et al. 2022). Figures 8–10 and Tables 11–13 present the |${\rm {ZM}}$| values, |${\rm {MAE}}_{{\rm {energy}}}$|, and |${\rm {MAE}}_{{\rm {test}}}$| for different DHNN–BR2SAT models during the retrieval phase. From Table 11, when |$\mathrm{ NN}=10$|, all models achieve |${\rm {ZM}}$| values at or near 1. This indicates that, with a smaller number of neurons, DHNN effectively manages synaptic weights during the learning phase, enabling the storage of accurate patterns for all global minima. As shown in Figure 8, |${\rm {ZM}}$| values for all DHNN–BR2SAT models decrease as the number of neurons increases. As more neurons are added, the prevalence of suboptimal synaptic weights in DHNN grows, reducing its ability to retrieve global minima and affecting its computation of optimal neuron states in the local field (Guo et al. 2022). For BRAN2SAT models with different clause proportions, the rate and extent of the |${\rm {ZM}}$| reduction differ. Compared to the other models, Case I exhibits the largest fluctuations in |${\rm {ZM}}$| values, attributed to the random allocation of its first- and second-order clauses. First-order clauses have a lower probability of achieving a consistent interpretation, and suboptimal synaptic weights may cause the final neuron state to become trapped in local minima. Meanwhile, Cases II and III show similar trends in |${\rm {ZM}}$| values, though Case III maintains relatively higher |${\rm {ZM}}$| values. Higher |${\rm {ZM}}$| values reflect lower |${\rm {MAE}}_{{\rm {learn}}}$| during the learning phase, indicating that optimal synaptic weights promote convergence of neuron states to global minima.
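
For clarity, the Python sketch below illustrates the kind of asynchronous local-field update assumed in this retrieval phase; the variable names (W2 for the second-order synaptic weights, W1 for the first-order weights) and the sweep-based stopping rule are illustrative assumptions rather than the exact implementation used in the experiments.

import numpy as np

def retrieve(W2, W1, state, max_sweeps=100):
    """Asynchronously update bipolar neuron states until no state changes."""
    state = np.array(state, dtype=int)
    for _ in range(max_sweeps):
        changed = False
        for i in np.random.permutation(len(state)):    # random asynchronous order
            local_field = W2[i] @ state + W1[i]         # local field of neuron i
            new_state = 1 if local_field >= 0 else -1   # bipolar activation
            if new_state != state[i]:
                state[i] = new_state
                changed = True
        if not changed:                                  # stable (fixed) state reached
            break
    return state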

Figure 8: |${\rm {ZM}}$| for different DHNN–BR2SAT models.

Figure 9: |${\rm {MAE}}_{{\rm {energy}}}$| for different DHNN–BR2SAT models.

Figure 10: |${\rm {MAE}}_{{\rm {test}}}$| for different DHNN–BR2SAT models.

Table 11:

Comparison of |${\rm {ZM}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II   Case III
10        1.0000    1.0000    0.9900
15        0.9800    0.8905    0.9600
20        0.1700    0.3928    0.5317
25        0.6549    0.2352    0.4200
30        0.1002    0.0617    0.1557
35        0.2306    0.0509    0.1239
40        0.0324    0.0109    0.0619
45        0.0307    0.0117    0.0307
50        0.0180    0.0004    0.0101
Best      1.0000    1.0000    0.9900
Worst     0.0180    0.0004    0.0101
Avg.      0.3574    0.2949    0.3649
Avg rank  1.5556    2.7222    1.7222
Table 12:

Comparison of |${\rm {MAE}}_{{\rm {energy}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II   Case III
10        0.0000    0.0000    0.0012
15        0.0216    0.0683    0.0119
20        2.6168    0.4771    0.3247
25        0.6036    0.6510    0.4685
30        0.7955    1.0925    0.6876
35        0.4334    1.0642    0.7973
40        1.0538    1.2515    0.8717
45        0.8815    1.4521    1.2029
50        0.8761    1.6077    1.5236
Best      0.0000    0.0000    0.0012
Worst     2.6168    1.6077    1.5236
Avg.      0.8091    0.8516    0.6544
Avg rank  1.7222    2.7222    1.5556
Table 13:

Comparison of |${\rm {MAE}}_{{\rm {test}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II   Case III
10        0.0000    0.0000    0.0100
15        0.0200    0.1095    0.0400
20        0.8300    0.6072    0.4683
25        0.3451    0.7648    0.5800
30        0.8998    0.9383    0.8443
35        0.7694    0.9491    0.8761
40        0.9676    0.9891    0.9381
45        0.9693    0.9883    0.9693
50        0.9820    0.9996    0.9899
Best      0.0000    0.0000    0.0100
Worst     0.9820    0.9996    0.9899
Avg.      0.6426    0.7051    0.6351
Avg rank  1.5556    2.7222    1.7222

Further examination of |${\rm {MAE}}_{{\rm {energy}}}$| and |${\rm {MAE}}_{{\rm {test}}}$| of DHNN–BR2SAT shows that a close relationship exists between global minima and final energy (Sathasivam 2010). When neuron states converge to global minima, the absolute difference between the final energy and the minimum energy defined in Equation 23 is minimized. As illustrated in Tables 12 and 13, |${\rm {MAE}}_{{\rm {energy}}}$| and |${\rm {MAE}}_{{\rm {test}}}$| values of zero indicate that all final neuron states effectively converge to global minima. From Figures 9 and 10, as the number of neurons increases, the energy error and the number of local minima increase across all models. This rise is primarily due to the limitations of the learning algorithm of DHNN, resulting in some suboptimal synaptic weights that trap final neuron states in local minimal energy states (Guo et al. 2022). This is further evidenced by the trends in |${\rm {MAE}}_{{\rm {weight}}}$|. Another cause of local minimal energy is suboptimal neuron updates during retrieval, which lead to a higher energy error by generating more unsatisfied clauses. This issue is particularly evident when first-order clauses are embedded in DHNN, as illustrated by the high |${\rm {MAE}}_{{\rm {energy}}}$| of Case I at |$\mathrm{ NN}=20$| in Figure 9. In most cases, Case II displays higher |${\rm {MAE}}_{{\rm {energy}}}$| and |${\rm {MAE}}_{{\rm {test}}}$| values than the other models due to the larger number of clauses in its logical expressions, which produce more suboptimal synaptic weights during the learning phase. In other words, the optimal synaptic weights obtained during the learning phase affect all performance metrics during retrieval, emphasizing the importance of maximizing fitness in the learning phase. Notably, Case III performs better than the other models, suggesting that introducing an appropriate proportion of redundant second-order clauses in BRAN2SAT positively impacts the performance of DHNN.
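
For reference, the energy-error metric discussed here is assumed to take the usual mean-absolute form, |${\rm {MAE}}_{{\rm {energy}}} = \frac{1}{n}\sum _{i=1}^{n} \left| H_{P_{{\rm {BRAN2SAT}}}}^{(i)} - H_{P_{{\rm {BRAN2SAT}}}}^{\min } \right|$|, where |$n$| denotes the number of retrieved solutions, |$H_{P_{{\rm {BRAN2SAT}}}}^{(i)}$| is the final energy of the |$i$|th retrieved state, and |$H_{P_{{\rm {BRAN2SAT}}}}^{\min }$| is the global minimum energy from Equation 23; this form follows the surrounding description and is stated here as an assumption rather than a restatement of the study's original definition.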

Friedman rank tests are conducted on |${\rm {ZM}}$|, |${\rm {MAE}}_{{\rm {test}}}$|, and |${\rm {MAE}}_{{\rm {energy}}}$| across all numbers of neurons. The chi-square value for both |${\rm {ZM}}$| and |${\rm {MAE}}_{{\rm {test}}}$| is |$\chi ^2 = 7.5882$|, while that for |${\rm {MAE}}_{{\rm {energy}}}$| is |$\chi ^2 = 7.3714$|, each with |$\mathrm{ df} = 2$| and |$p < 0.05$|. Thus, the null hypothesis of no significant difference among the proposed DHNN–BR2SAT cases is rejected. According to Tables 11–13, Case I demonstrates competitive |${\rm {ZM}}$| and |${\rm {MAE}}_{{\rm {test}}}$| values, with its average rank surpassing those of the other two cases, while Case III shows competitive |${\rm {MAE}}_{{\rm {energy}}}$| values, with its average rank exceeding those of the other two cases. These results statistically validate the superiority of Case I in terms of |${\rm {ZM}}$| and |${\rm {MAE}}_{{\rm {test}}}$| and of Case III in terms of |${\rm {MAE}}_{{\rm {energy}}}$|.

6.3 Total variation and similarity analysis

Previous metrics primarily assessed the quantity of DHNN–BR2SAT neuron states, but evaluating the quality of the final neuron states is equally essential. This study uses |${\rm {TV}}$| and |$S_{{\rm {Jaccard}}}$| to assess the diversity and quality of the global solutions for each BRAN2SAT model. Both |${\rm {TV}}$| and |$S_{{\rm {Jaccard}}}$| are based on global solution analysis; in other words, when |${\rm {ZM}}$| equals zero, neither |${\rm {TV}}$| nor |$S_{{\rm {Jaccard}}}$| is defined. Higher |${\rm {TV}}$| values indicate greater diversity among final neuron states in the global solutions (Guo et al. 2022). By definition, |$S_{{\rm {Jaccard}}}$| is calculated from the similarity between final neuron states and reference neuron states, and lower |$S_{{\rm {Jaccard}}}$| values reflect more significant differences between the two (Real & Vargas 1996). As shown in Figure 11 and Table 14, the |${\rm {TV}}$| values for the different DHNN–BR2SAT models generally decrease with increasing NN and eventually approach zero. This trend reflects a decline in the ability of each model to retrieve global minima. It suggests that the suboptimal performance of the ES algorithm during the DHNN learning phase leads to an increase in local minima, thereby reducing the number of global minima. Case II shows the lowest |${\rm {TV}}$| values, as it retrieves more local minima due to the suboptimal learning phase associated with its larger number of second-order clauses compared to the other two models. Additionally, Cases I and III exhibit higher |${\rm {TV}}$| values due to the balanced distribution of redundant and non-redundant clauses, which allows for a more diverse set of solutions. However, since the quantities of redundant and non-redundant clauses in Case I are randomly chosen, significant fluctuations are observed in its |${\rm {TV}}$| trend. As shown in Figure 12 and Table 15, |$S_{{\rm {Jaccard}}}$| values for the different DHNN–BR2SAT models remain relatively stable between 0.5 and 0.7. Overall, Cases II and III perform better than Case I. Cases II and III contain more second-order clauses, with each clause including two variables. This structure allows for a greater variety of neuron state combinations that align with global minima, resulting in larger differences between the final neuron states and the benchmark state. In contrast, Case I contains more first-order clauses, each with only one variable, which limits Case I to a single neuron state corresponding to the global minimum energy. Unfortunately, the global solution for Case I matches the benchmark state, indicating a lack of solution diversity. As NN increases, the inefficiency of the ES algorithm causes DHNN to generate more local minima, reducing the diversity of final neuron states. Thus, selecting a BRAN2SAT model with redundant clauses helps enhance model generalization and interpretability, thereby lowering |$S_{{\rm {Jaccard}}}$| values.
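
As a small illustration of how the similarity index can be evaluated, the Python sketch below computes the Jaccard similarity between one final bipolar neuron state and the reference (benchmark) state using the common a/(a + b + c) pairing convention; this convention and the toy vectors are assumptions for illustration, not the study's exact procedure.

import numpy as np

def jaccard_similarity(final_state, reference_state):
    """Jaccard similarity between two bipolar (+1/-1) neuron state vectors."""
    final_state = np.asarray(final_state)
    reference_state = np.asarray(reference_state)
    a = np.sum((final_state == 1) & (reference_state == 1))    # both +1
    b = np.sum((final_state == 1) & (reference_state == -1))   # final +1 only
    c = np.sum((final_state == -1) & (reference_state == 1))   # reference +1 only
    return a / (a + b + c)                                     # undefined if a+b+c == 0

print(jaccard_similarity([1, -1, 1, 1, -1], [1, 1, 1, -1, -1]))  # 2/(2+1+1) = 0.5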

Figure 11: |${\rm {TV}}$| for different DHNN–BR2SAT models.

Figure 12: |$S_{{\rm {Jaccard}}}$| for different DHNN–BR2SAT models.

Table 14:

Comparison of |${\rm {TV}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I       Case II      Case III
10        4153         3191         6030
15        9114         7931         8804
20        1610         3818         5238
25        6542         2350         4188
30        1002         617          1556
35        2306         509          1239
40        324          109          619
45        307          117          307
50        180          4            101
Best      9114         7931         8804
Worst     180          4            101
Avg.      2837.6000    2071.8000    3120.2000
Avg rank  1.6111       2.8889       1.5000
Table 15:

Comparison of |$S_{{\rm {Jaccard}}}$| in different DHNN–BR2SAT models with different numbers of neurons. Bolded values represent the best values for the same number of neurons.

NN        Case I    Case II   Case III
10        0.6689    0.6571    0.6583
15        0.6225    0.6152    0.6157
20        0.6735    0.6437    0.6226
25        0.6045    0.6132    0.6049
30        0.6194    0.6393    0.6023
35        0.6180    0.5878    0.5827
40        0.6136    0.5182    0.6142
45        0.5607    0.5227    0.6052
50        0.5943    0.3643    0.5454
Best      0.5607    0.3643    0.5454
Worst     0.6735    0.6571    0.6583
Avg.      0.6195    0.5735    0.6057
Avg rank  2.4444    1.6667    1.8889

Friedman rank tests are conducted on |${\rm {TV}}$| and |$S_{{\rm {Jaccard}}}$| across all numbers of neurons. The chi-square value for |${\rm {TV}}$| is |$\chi ^2 = 11.0286$| with |$\mathrm{ df} = 2$| and |$p < 0.05$|; therefore, the null hypothesis of no significant difference among the proposed DHNN–BR2SAT cases is rejected. According to Tables 14 and 15, Case I demonstrates competitive |${\rm {TV}}$| values, while Case III attains the best average rank and thus shows superior performance in terms of |${\rm {TV}}$|. In contrast, the Friedman test for |$S_{{\rm {Jaccard}}}$| yields |$p > 0.05$|, suggesting no significant difference among the three DHNN–BR2SAT cases in terms of |$S_{{\rm {Jaccard}}}$|.

6.4 Qualitative analysis of DHNN–BR2SAT

Table 16 presents a qualitative analysis of the DHNN–BR2SAT model under different clause proportions. Based on prior results, it is inferred that Case I achieves optimal performance in both the learning and retrieval stages. Notably, ‘|$\times$|’ indicates poor performance on a given metric, while ‘|$\checkmark$|’ indicates good performance. In summary, Case I exhibits superior learning efficiency and increases the number of global solutions.

Table 16:

Qualitative analysis of different DHNN–BR2SAT models.

Metric                             Case I          Case II         Case III
|${\rm {MAE}}_{{\rm {learn}}}$|    |$\checkmark$|  |$\times$|      |$\times$|
|${\rm {RMSE}}_{{\rm {learn}}}$|   |$\checkmark$|  |$\times$|      |$\times$|
|${\rm {MAE}}_{{\rm {weight}}}$|   |$\checkmark$|  |$\times$|      |$\times$|
|${\rm {RMSE}}_{{\rm {weight}}}$|  |$\checkmark$|  |$\times$|      |$\times$|
|${\rm {ZM}}$|                     |$\checkmark$|  |$\times$|      |$\times$|
|${\rm {MAE}}_{{\rm {energy}}}$|   |$\times$|      |$\times$|      |$\checkmark$|
|${\rm {MAE}}_{{\rm {test}}}$|     |$\checkmark$|  |$\times$|      |$\times$|
|${\rm {TV}}$|                     |$\times$|      |$\times$|      |$\checkmark$|
|$S_{{\rm {Jaccard}}}$|            |$\times$|      |$\checkmark$|  |$\times$|

6.5 The limitation of DHNN–BR2SAT

Through the above analysis, we have gained a comprehensive understanding of the distributions of learning error, synaptic weight error, energy, final solutions, neuron variations, and similarity indices, which has allowed for a systematic evaluation of the overall behaviour of DHNN–BR2SAT. The results indicate that embedding BRAN2SAT within DHNN facilitates obtaining ideal global minima and achieving diverse neuron states. However, based on the no free lunch theorem (Wolpert & Macready 1997), we conclude that no single model is optimal for all scenarios, as each model produces unique outcomes. This study is limited by computational time. The presence of redundant variables doubles the number of clauses, making it challenging for the model to find the correct interpretation during the learning phase. With larger |$\mathrm{ NN}$|, DHNN tends to fall into a trial-and-error process, which further extends the computational time. Additionally, due to the limitations of the ES algorithm, this study restricts the neuron count to no more than 50. Future research could consider employing metaheuristic algorithms, such as GA (Acampora et al. 2023), Particle Swarm Optimization (Felicetti & Wang 2024), and the Black-winged Kite Algorithm (Wang et al. 2024b), as alternatives to ES to enhance computational efficiency.

7 Conclusion

This study introduces a novel logical rule, termed BRAN2SAT, consisting of redundant second-order clauses and non-redundant first- and second-order clauses. The logical rule is implemented into DHNN by minimizing the cost function of the network, forming DHNN–BR2SAT. During the learning phase, the optimal synaptic weights are computed by comparing the cost function with the energy function. In the retrieval phase, the network achieves a stable state by updating fewer neurons. The effectiveness of the DHNN–BR2SAT model is evaluated by varying the proportions of different clause orders, with performance evaluated using various metrics during both the learning and retrieval phases. Experimental results indicate that when the numbers of the three types of clauses are randomly selected, the DHNN–BR2SAT model achieves optimal performance. Notably, this is the first attempt to implement logic containing redundant literals into DHNN, highlighting the potential of this approach to improve model performance when dealing with overlapping data.

Future directions. Metaheuristic algorithms could be employed to optimize the retrieval phase, facilitating the generation of more diverse final solutions. Furthermore, the DHNN–BR2SAT model offers a novel approach to logic mining, with potential applications in practical problems such as sentiment analysis, industrial data classification, and financial investment forecasting. The DHNN–BR2SAT model represents a significant advancement in logic mining, filling the gap in extracting overlapping knowledge and offering substantial potential for applications in various domains.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Author Contributions

Binbin Yang: Methodology, Writing – original draft, Writing – review & editing. Guoxiang Li: Project administration, Data curation. Adila Aida Azahar: Writing – review & editing, Data curation. Mohd Shareduwan Mohd Kasihmuddin: Supervision, Methodology. Yuan Gao: Methodology, Data curation. Suad Abdeen: Writing – review & editing, Methodology. Baorong Yu: Visualization, Writing - review & editing.

Funding

The authors are grateful for the financial support from the National Natural Science Foundation of China (no. 62162006). Additionally, we acknowledge the support by Guangxi Key Laboratory of Big Data in Finance and Economics Project Fund, Guangxi First-class Discipline statistics Construction Project Fund and Guangxi University Young and Middle-aged Teachers’ Research Capacity Enhancement Project (nos 2021KY0907, 2021KY0378, and 2023KY1293) and Guangxi Key Laboratory of Seaward Economic Intelligent System Analysis and Decision-making (no. 2024C013).

Data Availability

No data were used for the research described in the article.

Acknowledgments

The authors would like to thank the anonymous referees for their valuable comments and suggestions.

References

Abdeen S., Kasihmuddin M. S. M., Zamri N. E., Manoharam G., Mansor M. A., Alshehri N. (2023). S-type random k satisfiability logic in discrete Hopfield neural network using probability distribution: performance optimization and analysis. Mathematics, 11, 984–1029.

Abdullah W. A. T. W. (1992). Logic programming on a neural network. International Journal of Intelligent Systems, 7, 513–519.

Abdullah W. A. T. W. (1994). Seeking global minima. Journal of Computational Physics, 110, 320–326.

Acampora G., Chiatto A., Vitiello A. (2023). Genetic algorithms as classical optimizer for the quantum approximate optimization algorithm. Applied Soft Computing, 142, 110296–110307.

Amer A. A., Abdalla H. I., Nguyen L. (2021). Enhancing recommendation systems performance using highly-effective similarity measures. Knowledge-Based Systems, 217, 106842–106860.

Aribowo W., Muslim S. (2020). Long-term electricity load forecasting based on cascade forward backpropagation neural network. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 12, 39–44.

Azad R., Aghdam E. K., Rauland A., Jia Y., Avval A. H., Bozorgpour A., Karimijafarbigloo S., Cohen J., Adeli E., Merhof D. (2024). Medical image segmentation review: the success of U-Net. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(12), 10076–10095.

Chen J., Kasihmuddin M. S. M., Gao Y., Guo Y., Mansor M. A., Romli N., Chen W., Zheng C. (2023). PRO2SAT: Systematic probabilistic satisfiability logic in discrete Hopfield neural network. Advances in Engineering Software, 175, 103355.

Deng Q., Wang C., Lin H. (2024). Chaotic dynamical system of Hopfield neural network influenced by neuron activation threshold and its image encryption. Nonlinear Dynamics, 112, 6629–6646.

Fan J., Zheng J., Wu L., Zhang F. (2021). Estimation of daily maize transpiration using support vector machines, extreme gradient boosting, artificial and deep neural networks models. Agricultural Water Management, 245, 106547–106558.

Farahani M. A., McCormick M. R., Harik R., Wuest T. (2025). Time-series classification in smart manufacturing systems: an experimental evaluation of state-of-the-art machine learning algorithms. Robotics and Computer-Integrated Manufacturing, 91, 102839–102865.

Felicetti M. J., Wang D. (2024). Stochastic configuration networks with particle swarm optimisation search. Information Sciences, 677, 120868–120877.

Gao Y., Guo Y., Romli N. A., Kasihmuddin M. S. M., Chen W., Mansor M. A., Chen J. (2022). GRAN3SAT: creating flexible higher-order logic satisfiability in the discrete Hopfield neural network. Mathematics, 10, 1899–1926.

Gerrard W., Bratholm L. A., Packer M. J., Mulholland A. J., Glowacki D. R., Butts C. P. (2020). IMPRESSION – prediction of NMR parameters for 3-dimensional chemical structures using machine learning with near quantum chemical accuracy. Chemical Science, 11, 508–515.

Giráldez-Cru J., Levy J. (2021). Popularity-similarity random SAT formulas. Artificial Intelligence, 299, 103537–103556.

Guo Y., Kasihmuddin M. S. M., Gao Y., Mansor M. A., Wahab H. A., Zamri N. E., Chen J. (2022). YRAN2SAT: a novel flexible random satisfiability logical rule in discrete Hopfield neural network. Advances in Engineering Software, 171, 103169–103193.

Heule M. J. H., Kiesl B., Biere A. (2019). Encoding redundancy for satisfaction-driven clause learning. In T. Vojnar & L. Zhang (Eds.), Int. Conf. on Tools and Algorithms for the Construction and Analysis of Systems (pp. 41–58). Berlin, Germany: Springer.

Hopfield J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, 2554–2558.

Hopfield J. J. (1999). Brain, neural networks, and computation. Reviews of Modern Physics, 71, S431–S437.

Hopfield J. J., Tank D. W. (1985). "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52, 141–152.

Karim S. A., Zamri N. E., Alway A., Kasihmuddin M. S. M., Ismail A. I. M., Mansor M. A., Hassan N. F. A. (2021). Random satisfiability: a higher-order logical approach in discrete Hopfield neural network. IEEE Access, 9, 50831–50845.

Kasihmuddin M. S. M., Mansor M. A., Basir M. M. F., Sathasivam S. (2019). Discrete mutation Hopfield neural network in propositional satisfiability. Mathematics, 7, 1133–1153.

Kasihmuddin M. S. M., Mansor M. A., Sathasivam S. (2017). Hybrid genetic algorithm in the Hopfield network for logic satisfiability problem. Pertanika Journal of Science & Technology, 25, 139–152.

Kasihmuddin M. S. M., Mansor M. A., Sathasivam S. (2018). Discrete Hopfield neural network in restricted maximum k-satisfiability logic programming. Sains Malaysiana, 47, 1327–1335.

Kilani Y. (2010). Comparing the performance of the genetic and local search algorithms for solving the satisfiability problems. Applied Soft Computing, 10, 198–207.

Lee J., Yeo C., Cheon S.-U., Park J. H., Mun D. (2023). BRepGAT: graph neural network to segment machining feature faces in a B-rep model. Journal of Computational Design and Engineering, 10, 2384–2400.

Lee U., Kang N. (2023). Adaptive neural network ensemble using prediction frequency. Journal of Computational Design and Engineering, 10, 1547–1560.

Liberatore P. (2008). Redundancy in logic II: 2CNF and Horn propositional formulae. Artificial Intelligence, 172, 265–299.

Ma W., Li X., Yu T., Wang Z. (2023). A 4D discrete Hopfield neural network-based image encryption scheme with multiple diffusion modes. Optik, 291, 171387–171403.

Mansor M. A., Kasihmuddin M. S. M., Sathasivam S. (2017). Artificial immune system paradigm in the Hopfield network for 3-satisfiability problem. Pertanika Journal of Science & Technology, 25, 1173–1188.

Mansor M. A., Sathasivam S. (2021). Optimal performance evaluation metrics for satisfiability logic representation in discrete Hopfield neural network. International Journal of Mathematics and Computer Science, 16, 963–976.

Massacci F., Marraro L. (2000). Logical cryptanalysis as a SAT problem. Journal of Automated Reasoning, 24, 165–203.

Mézard M., Parisi G., Zecchina R. (2002). Analytic and algorithmic solution of random satisfiability problems. Science, 297, 812–815.

Mousapour Mamoudan M., Ostadi A., Pourkhodabakhsh N., Fathollahi-Fard A. M., Soleimani F. (2023). Hybrid neural network-based metaheuristics for prediction of financial markets: a case study on global gold market. Journal of Computational Design and Engineering, 10, 1110–1125.

Okada M. (1996). Notions of associative memory and sparse coding. Neural Networks, 9, 1429–1458.

Prugel-Bennett A., Tayarani-Najaran M. H. (2011). Maximum satisfiability: anatomy of the fitness landscape for a hard combinatorial optimization problem. IEEE Transactions on Evolutionary Computation, 16, 319–338.

Raza S., Garg M., Reji D. J., Bashir S. R., Ding C. (2024). Nbias: a natural language processing framework for bias identification in text. Expert Systems with Applications, 237, 121542–121557.

Real R., Vargas J. M. (1996). The probabilistic basis of Jaccard's index of similarity. Systematic Biology, 45, 380–385.

Rosenblatt F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65, 386–391.

Roslan N., Sathasivam S., Azizan F. L. (2024). Conditional random k satisfiability modeling for k = 1, 2 (CRAN2SAT) with non-monotonic Smish activation function in discrete Hopfield neural network. AIMS Mathematics, 9, 3911–3956.

Rumelhart D. E., Hinton G. E., Williams R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533–536.

Sakkaravarthi K., Reddy P. B. A., Sakthi I. (2024). Entropy optimization in Casson tetra-hybrid nanofluid flow over a rotating disk with nonlinear thermal radiation: a Levenberg–Marquardt neural network approach. Journal of Computational Design and Engineering, 11, 333–354.

Sathasivam S. (2010). Upgrading logic programming in Hopfield network. Sains Malaysiana, 39, 115–118.

Sathasivam S., Mansor M. A., Ismail A. I. M., Jamaludin S. Z. M., Kasihmuddin M. S. M., Mamat M. (2020). Novel random k satisfiability for k ≤ 2 in Hopfield neural network. Sains Malaysiana, 49, 2847–2857.

Schuurmans D., Southey F. (2001). Local search characteristics of incomplete SAT procedures. Artificial Intelligence, 132, 121–150.

Someetheram V., Marsani M. F., Kasihmuddin M. S. M., Zamri N. E., Sidik S. S. M., Jamaludin S. Z. M., Mansor M. A. (2022). Random maximum 2 satisfiability logic in discrete Hopfield neural network incorporating improved election algorithm. Mathematics, 10, 4734–4762.

Storkey A. J., Valabregue R. (1999). The basins of attraction of a new Hopfield learning rule. Neural Networks, 12, 869–876.

Wang C., Luo D., Deng Q., Yang G. (2024a). Dynamics analysis and FPGA implementation of discrete memristive cellular neural network with heterogeneous activation functions. Chaos, Solitons & Fractals, 187, 115471–115480.

Wang J., Wang W., Hu X., Qiu L., Zang H. (2024b). Black-winged kite algorithm: a nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artificial Intelligence Review, 57, 98–150.

Wang S., Wang J., Lu H., Zhao W. (2021). A novel combined model for wind speed prediction – combination of linear model, shallow neural networks, and deep learning approaches. Energy, 234, 121275–121291.

Willems J. C. (1972). Dissipative dynamical systems part I: general theory. Archive for Rational Mechanics and Analysis, 45, 321–351.

Wolpert D. H., Macready W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1, 67–82.

Xu Z., Zhang Y., Xie E., Zhao Z., Guo Y., Wong K. Y. K., Li Z., Zhao H. (2024). DriveGPT4: interpretable end-to-end autonomous driving via large language model. IEEE Robotics and Automation Letters, 9, 8186–8193.

Yu F., Xu S., Lin Y., Gracia Y. M., Yao W., Cai S. (2024). Dynamic analysis, image encryption application and FPGA implementation of a discrete memristor-coupled neural network. International Journal of Bifurcation and Chaos, 34, 2450068–2450088.

Zamri N. E., Alway A., Mansor M. A., Kasihmuddin M. S. M., Sathasivam S. (2020). Modified imperialistic competitive algorithm in Hopfield neural network for Boolean three satisfiability logic mining. Pertanika Journal of Science & Technology, 28, 983–1008.

Zamri N. E., Azhar S. A., Mansor M. A., Alway A., Kasihmuddin M. S. M. (2022). Weighted random k satisfiability for k = 1, 2 (r2SAT) in discrete Hopfield neural network. Applied Soft Computing, 126, 109312–109335.

Zheng S., Guo W., Li C., Sun Y., Zhao Q., Lu H., Si Q., Wang H. (2023). Application of machine learning and deep learning methods for hydrated electron rate constant prediction. Environmental Research, 231, 115996–116003.

Author notes

These two authors contributed equally to this work.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.