CLC number: TP183

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2010-10-15

Cited: 4

Clicked: 7764

Jian Bao, Yu Chen, Jin-shou Yu. A regeneratable dynamic differential evolution algorithm for neural networks with integer weights[J]. Journal of Zhejiang University Science C, 2010, 11(12): 939-947.

@article{Bao2010RDDE,

title="A regeneratable dynamic differential evolution algorithm for neural networks with integer weights",

author="Jian Bao, Yu Chen, Jin-shou Yu",

journal="Journal of Zhejiang University Science C",

volume="11",

number="12",

pages="939-947",

year="2010",

publisher="Zhejiang University Press & Springer",

doi="10.1631/jzus.C1000137"

}

%0 Journal Article

%T A regeneratable dynamic differential evolution algorithm for neural networks with integer weights

%A Jian Bao

%A Yu Chen

%A Jin-shou Yu

%J Journal of Zhejiang University SCIENCE C

%V 11

%N 12

%P 939-947

%@ 1869-1951

%D 2010

%I Zhejiang University Press & Springer

%DOI 10.1631/jzus.C1000137

TY - JOUR

T1 - A regeneratable dynamic differential evolution algorithm for neural networks with integer weights

A1 - Jian Bao

A1 - Yu Chen

A1 - Jin-shou Yu

JO - Journal of Zhejiang University Science C

VL - 11

IS - 12

SP - 939

EP - 947

SN - 1869-1951

Y1 - 2010

PB - Zhejiang University Press & Springer

DO - 10.1631/jzus.C1000137

ER -

**Abstract:** Neural networks with integer weights are better suited to embedded systems and hardware implementations than those with real weights. However, many learning algorithms proposed for training neural networks with floating-point weights are inefficient or unsuitable for training networks with integer weights. In this paper, a novel regeneratable dynamic differential evolution algorithm (RDDE) is presented; this algorithm trains integer-weight networks efficiently. In comparison with the conventional differential evolution (DE) algorithm, RDDE introduces three new strategies: (1) a regeneratable strategy that ensures further evolution when all individuals have become identical after several iterations and can no longer evolve; in other words, it provides an escape from local minima. (2) A dynamic strategy that speeds up convergence and simplifies the algorithm by updating the population dynamically. (3) A local greedy strategy that improves local search ability when the population approaches the global optimal solution. In comparison with gradient-based algorithms, RDDE does not need gradient information, the lack of which has been the main obstacle to training networks with integer weights. The experimental results show that RDDE trains integer-weight networks more efficiently.
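The three strategies described in the abstract can be sketched as a minimal integer-weight DE loop. This is an illustrative reconstruction from the abstract only, not the authors' implementation: the weight bounds, the control parameters `F` and `CR`, the ±1 greedy step, and all names are assumptions.

```python
import random

def integer_de(fitness, dim, bounds=(-8, 7), pop_size=20,
               F=0.5, CR=0.9, max_iter=200, seed=0):
    """Sketch of DE over integer weights with three illustrative strategies:
    regeneration on population collapse, dynamic (steady-state) replacement,
    and a greedy +/-1 local search around the best individual."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    pop = [[rng.randint(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(max_iter):
        # Regeneratable strategy: if every individual coincides, reseed all
        # but the current best so the search can escape the local minimum.
        if all(x == pop[0] for x in pop):
            keep = min(range(pop_size), key=fit.__getitem__)
            for i in range(pop_size):
                if i != keep:
                    pop[i] = [rng.randint(lo, hi) for _ in range(dim)]
                    fit[i] = fitness(pop[i])
        for i in range(pop_size):
            # DE/rand/1 mutation with binomial crossover, rounded to integers.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)
            trial = [clip(pop[a][j] + round(F * (pop[b][j] - pop[c][j])))
                     if (rng.random() < CR or j == jr) else pop[i][j]
                     for j in range(dim)]
            ft = fitness(trial)
            # Dynamic strategy: replace the individual immediately (steady
            # state) instead of waiting for the whole generation to finish.
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
        # Local greedy strategy: try +/-1 steps on each weight of the best.
        bi = min(range(pop_size), key=fit.__getitem__)
        for j in range(dim):
            for step in (-1, 1):
                cand = list(pop[bi])
                cand[j] = clip(cand[j] + step)
                fc = fitness(cand)
                if fc < fit[bi]:
                    pop[bi], fit[bi] = cand, fc
    bi = min(range(pop_size), key=fit.__getitem__)
    return pop[bi], fit[bi]

# Toy objective: recover a known integer weight vector (stand-in for a
# network training error, which the paper minimizes instead).
target = [3, -2, 5, 0]
best, err = integer_de(
    lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target)), dim=4)
```

On this separable toy objective the greedy ±1 pass alone drives the best individual to the optimum, so `err` reaches 0; in the paper the objective would be the network's training error over a dataset.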


[1]Alibeik, S.A., Nemati, F., Sharif-Bakhtiar, M., 1995. Analog Feedforward Neural Networks with Very Low Precision Weights. IEEE Int. Conf. on Neural Networks, p.90-94.

[2]Anguita, D., Gomes, B.A., 1996. Mixing floating- and fixed-point formats for neural network learning on neuroprocessors. *Microprocess. & Microprogr*., **41**(10):757-769.

[3]Babri, H.A., Chen, Y.Q., Yin, T., 1998. Improving backpropagation learning under limited precision. *Pattern Recogn. Lett*., **19**(11):1007-1016.

[4]Bao, J., Zhou, B., Yan, Y., 2009. A Genetic-Algorithm-Based Weight Discretization Paradigm for Neural Networks. WRI World Conf. on Computer Science and Information Engineering, p.655-659.

[5]Basturk, A., Gunay, E., 2009. Efficient edge detection in digital images using a cellular neural network optimized by differential evolution algorithm. *Exp. Syst. Appl*., **36**(2):2645-2650.

[6]Behan, T., Liao, Z., Zhao, L., Yang, C.T., 2008. Accelerating Integer Neural Networks on Low Cost DSPs. Proc. Int. Conf. on Intelligent Systems, p.1270-1273.

[7]Draghici, S., 2002. On the capabilities of neural networks using limited precision weights. *Neur. Networks*, **15**(3):395-414.

[8]Fukushima, K., Wake, N., 1991. Handwritten alphanumeric character recognition by the neocognitron. *IEEE Trans. Neur. Networks*, **2**(3):355-365.

[9]Hagan, M.T., Menhaj, M.B., 1994. Training feedforward networks with the Marquardt algorithm. *IEEE Trans. Neur. Networks*, **5**(6):989-993.

[10]Holmstrom, L., Koistinen, P., 1992. Using additive noise in back-propagation training. *IEEE Trans. Neur. Networks*, **3**(1):24-38.

[11]Ilonen, J., Kamarainen, J.K., Lampinen, J., 2003. Differential evolution training algorithm for feed-forward neural networks. *Neur. Process. Lett*., **17**(1):93-105.

[12]Kamio, T., Tanaka, S., Morisue, M., 2000. Backpropagation Algorithm for Logic Oriented Neural Networks. Proc. IEEE-INNS-ENNS Int. Joint Conf. on Neural Networks, **2**:123-128.

[13]Khan, A.H., Hines, E.L., 1994. Integer-weight neural nets. *Electron. Lett*., **30**(15):1237-1238.

[14]Khan, A.H., Wilson, R.G., 1996. Integer-Weight Approximation of Continuous-Weight Multilayer Feedforward Nets. IEEE Int. Conf. on Neural Networks, p.392-397.

[15]Marchesi, M., Orlandi, G., Piazza, F., Pollonara, L., Uncini, A., 1990. Multi-layer Perceptrons with Discrete Weights. Int. Joint Conf. on Neural Networks, **2**:623-630.

[16]Nejadgholi, I., Seyyedsalehi, S.A., 2007. Nonlinear normalization of input patterns to speaker variability in speech recognition neural networks. *Neur. Comput. Appl*., **18**(1):45-55.

[17]Phansalkar, V.V., Sastry, P.S., 1994. Analysis of the back-propagation algorithm with momentum. *IEEE Trans. Neur. Networks*, **5**(3):505-506.

[18]Plagianakos, V.P., Vrahatis, M.N., 1999. Neural Network Training with Constrained Integer Weights. Proc. Conf. on Evolutionary Computation, **3**:2007-2013.

[19]Plagianakos, V.P., Vrahatis, M.N., 2002. Parallel evolutionary training algorithms for “hardware-friendly” neural networks. *Nat. Comput*., **1**(2-3):307-322.

[20]Qing, A., 2006. Dynamic differential evolution strategy and applications in electromagnetic inverse scattering problems. *IEEE Trans. Geosci. Remote Sens*., **44**(1):116-125.

[21]Robert, C., Gaudy, J.F., Limoge, A., 2002. Electroencephalogram processing using neural networks. *Clin. Neurophysiol*., **113**(5):694-701.

[22]Rumelhart, D.E., McClelland, J.L., 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations. MIT Press, Cambridge, MA, USA.

[23]Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning representations by back-propagating errors. *Nature*, **323**:533-536.

[24]Slowik, A., Bialko, M., 2008. Training of Artificial Neural Networks Using Differential Evolution Algorithm. Conf. on Human System Interactions, p.60-65.

[25]Storn, R., Price, K., 1997. Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. *J. Glob. Optim*., **11**(4):341-359.

[26]Woodland, P.C., 1989. Weight Limiting, Weight Quantisation and Generalization in Multi-layer Perceptrons. First IEE Int. Conf. on Artificial Neural Networks, p.297-300.

[27]Yan, Y., Zhang, H., Zhou, B., 2008. A New Learning Algorithm for Neural Networks with Integer Weights and Quantized Non-linear Activation Functions. International Federation for Information Processing, **276**:427-431.

Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou
310027, China

Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn

Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE
