Dendritic Neural Model
- Abstract:
The dendritic neural model (DNM) is employed to solve
classification problems. Since the DNM is a feed-forward model
whose excitation functions are all continuous and differentiable,
it is trained with the Error Back-propagation (EBP) algorithm,
whose learning rule is derived from the squared error between
the actual outputs and the desired targets. Further, the trained
DNM can be simplified and then transformed into a Logic Circuit
Classifier (LCC), which consists of comparators and logic AND,
OR, and NOT gates. Although the EBP algorithm may occasionally
become trapped in local minima, it converges quickly and delivers
satisfactory optimization performance. A minimal training sketch
follows this entry.
- Code Resource:
Link.
Latest Update Date: 2021.06.07
- Citation:
Junkai Ji, Shangce Gao, Jiujun Cheng,
Zheng Tang, and Yuki Todo.
"An approximate logic neuron model with a dendritic
structure." Neurocomputing 173 (2016): 1775-1783. Link
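
The following is a minimal NumPy sketch of the DNM and its EBP
training, assuming the standard formulation from the paper above:
sigmoid synapses, multiplicative dendritic branches, a summing
membrane, and a sigmoid soma. The class name, the hyperparameters
(steepness k, learning rate, branch count), and the XOR toy task
are illustrative, not the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DNM:
    """Sketch of the DNM: sigmoid synapses, multiplicative
    dendritic branches, a summing membrane, a sigmoid soma."""

    def __init__(self, n_inputs, n_branches, k=5.0):
        self.k = k                       # steepness (illustrative)
        self.w = rng.normal(size=(n_inputs, n_branches))
        self.theta = rng.normal(size=(n_inputs, n_branches))
        self.theta_s = 0.5               # soma threshold (illustrative)

    def forward(self, x):
        # Synaptic layer: one sigmoid per (input, branch) connection
        self.Y = sigmoid(self.k * (self.w * x[:, None] - self.theta))
        # Dendritic layer: each branch multiplies its synaptic outputs
        self.Z = np.prod(self.Y, axis=0)
        # Membrane sums the branches; the soma applies a final sigmoid
        self.O = sigmoid(self.k * (self.Z.sum() - self.theta_s))
        return self.O

    def backward(self, x, target, lr=0.1):
        # EBP: chain rule from E = (O - T)^2 / 2 down to w and theta
        dE_dV = (self.O - target) * self.k * self.O * (1.0 - self.O)
        dZ_dY = self.Z[None, :] / np.clip(self.Y, 1e-12, None)
        dY = self.k * self.Y * (1.0 - self.Y)   # sigmoid derivative
        grad = dE_dV * dZ_dY * dY
        self.w -= lr * grad * x[:, None]        # dY/dw carries x_i
        self.theta += lr * grad                 # dY/dtheta flips the sign

# Toy usage: XOR, a classic task that needs a non-linear boundary.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])
model = DNM(n_inputs=2, n_branches=4)
for _ in range(5000):
    for x, t in zip(X, T):
        model.forward(x)
        model.backward(x, t)
print([round(model.forward(x), 2) for x in X])  # should approach 0, 1, 1, 0
```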
Dendritic Neural Regression
- Abstract:
Dendritic neural regression (DNR) employs a novel weight to
describe the dendrite strength, which significantly enhances
the regression ability of the model. A recently proposed
optimization algorithm named AMSGrad, a variant of Adam, is
utilized to train the DNR and speeds up its convergence during
the optimization procedure. The DNR trained by the AMSGrad
algorithm (ADNR) has demonstrated excellent and stable
performance on a real-world regression problem. A sketch of the
AMSGrad update follows this entry.
- Code Resource:
Link.
Latest Update Date: 2022.06.07
- Citation:
Junkai Ji, Minhui Dong, Qiuzhen Lin,
and Kay Chen Tan. "Noninvasive Cuffless Blood Pressure
Estimation With Dendritic Neural Regression."
IEEE Transactions on Cybernetics, 2022. Accepted.
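
Below is a small, self-contained sketch of the AMSGrad update,
whose key difference from Adam is keeping the running maximum of
the second-moment estimate so the effective per-coordinate step
size never grows. The DNR model itself is not reproduced here; a
toy linear regression stands in for it, and all names and
hyperparameter values are illustrative.

```python
import numpy as np

class AMSGrad:
    """AMSGrad: Adam with a running max over the second moment."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = self.v_hat = None

    def step(self, params, grads):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.v = np.zeros_like(params)
            self.v_hat = np.zeros_like(params)
        self.m = self.b1 * self.m + (1 - self.b1) * grads
        self.v = self.b2 * self.v + (1 - self.b2) * grads ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)  # the AMSGrad twist
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Toy usage: fit y = 2x + 1 by squared error; in ADNR, the same
# update would be applied to the dendritic weights instead.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0 + 0.05 * rng.normal(size=100)
params = np.zeros(2)                     # [slope, intercept]
opt = AMSGrad(lr=0.05)
for _ in range(2000):
    pred = params[0] * X + params[1]
    grads = np.array([np.mean(2 * (pred - y) * X), np.mean(2 * (pred - y))])
    params = opt.step(params, grads)
print(params)                            # approximately [2.0, 1.0]
```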
DNM Trained by the States of Matter Search Algorithm
- Abstract:
Here, the DNM is again applied to classification problems,
but it is trained by a metaheuristic named the states of matter
search (SMS) algorithm. The evolutionary operations of SMS are
based on the physical principle of the thermal-energy motion
ratio, and the whole optimization process is divided into three
phases: the gas state, the liquid state, and the solid state.
Each state has its own operations with a different
exploration-exploitation ratio, so the SMS algorithm can be
regarded as a more global search approach. Empirical evidence
has verified that it provides better training performance than
several state-of-the-art evolutionary algorithms, such as the
Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and
adaptive differential evolution with optional external archive
(JADE). A simplified sketch of the phase schedule follows this
entry.
- Code Resource:
Link.
Latest Update Date: 2021.06.07
- Citation:
Junkai Ji, Shuangbao Song, Yajiao Tang,
Shangce Gao, Zheng Tang, and Yuki Todo.
"Approximate logic neuron model trained by states of matter
search algorithm.
" Knowledge-Based Systems 163 (2019): 120-130. Link
DNM Trained by the Multi-objective Differential Evolution Algorithm
- Abstract:
The architecture of the DNM affects its learning capacity,
generalization capability, and computing time, as well as how
closely the derived LCC approximates it. Thus, a Pareto-based
multiobjective differential evolution (MODE) algorithm is
proposed to optimize the DNM's topology and weights
simultaneously. The mean squared error on the training dataset
and the model complexity are selected as the two objectives of
MODE, which can then generate a concise and accurate LCC from
the DNM for each specific task. A minimal sketch of the
Pareto-based DE step follows this entry.
- Code Resource:
Link.
Latest Update Date: 2022.06.07
- Citation:
Junkai Ji, Yajiao Tang, Lijia Ma, Jianqiang Li,
Qiuzhen Lin, Zheng Tang, and Yuki Todo.
"Accuracy versus simplification in an approximate logic
neural model." IEEE Transactions on
Neural Networks and Learning Systems 32.11 (2020): 5194-5207. Link
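
The following is a minimal sketch of one MODE-style generation:
DE/rand/1/bin variation followed by a Pareto-dominance test. The
full algorithm's nondominated sorting and diversity maintenance
are reduced to that simple test, a toy pair of conflicting
objectives stands in for the paper's training MSE and model
complexity, and all names and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominates(a, b):
    # a Pareto-dominates b: no worse in every objective, better in one
    return np.all(a <= b) and np.any(a < b)

def mode_generation(X, obj, f, f_scale=0.5, cr=0.9):
    """One DE/rand/1/bin generation with dominance-based replacement."""
    pop, dim = X.shape
    for i in range(pop):
        r1, r2, r3 = rng.choice([j for j in range(pop) if j != i],
                                3, replace=False)
        mutant = X[r1] + f_scale * (X[r2] - X[r3])  # DE/rand/1 mutation
        cross = rng.random(dim) < cr                # binomial crossover
        cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
        trial = np.where(cross, mutant, X[i])
        trial_obj = f(trial)
        if dominates(trial_obj, obj[i]):            # trial replaces parent
            X[i], obj[i] = trial, trial_obj
    return X, obj

# Toy usage: two conflicting objectives, summed over the dimensions;
# the survivors drift toward the Pareto front between x = 0 and x = 2.
def f(x):
    return np.array([np.sum(x ** 2), np.sum((x - 2.0) ** 2)])

X = rng.uniform(-5.0, 5.0, size=(40, 3))
obj = np.array([f(x) for x in X])
for _ in range(200):
    X, obj = mode_generation(X, obj, f)
print(obj[:5])
```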