A New Formulation for Feedforward Neural Networks
Saman Razavi and Bryan A. Tolson
Speaker: Song Ben

Outline
• Introduction
• New Variables Identifying Neural Networks
• Reformulated Neural Networks (ReNNs)
• Training and Retraining ReNNs
• Comparing ANNs and ReNNs
• Conclusions

Introduction
• Feedforward neural networks, also known as multilayer perceptrons, with one hidden layer have been proven capable of approximating any function with any desired accuracy, provided that the associated conditions are satisfied.
• They are probably the most commonly used neural networks and have been applied to a wide variety of problems, including function approximation, prediction, and simulation, arising from a variety of disciplines such as computer science and engineering.
• One hidden layer is adequate to enable neural networks to approximate any given function. Although networks with more than one hidden layer may require fewer hidden neurons to approximate the same function, single-hidden-layer neural networks are superior to networks with more than one hidden layer at the same level of complexity, mainly because the latter are more prone to falling into poor local minima.

This paper focuses on single-hidden-layer neural networks with one output, which can be expressed through the following equations. For a network with R inputs and n hidden neurons,

    net_i = \sum_{j=1}^{R} Iw_{ij} x_j + Hb_i,  i = 1, ..., n      (1)
    Ho_i = sig(net_i) = 1 / (1 + e^{-net_i})                       (2)
    y = \sum_{i=1}^{n} Hw_i Ho_i + Ob                              (3)

where Iw are the input weights, Hb the hidden-neuron biases, Hw the hidden-to-output weights, and Ob the output bias.

Suppose that there are m input-target sets, {x_k, T_k} for k = 1, ..., m, for network training. Thus, the n × (R + 2) + 1 network variables arranged in Iw, Hw, Hb, and Ob are to be adjusted to minimize an error function, F, such as the sum of squared errors (SSE) between the network outputs, y_k, and the desired targets, T_k:

    F = SSE = \sum_{k=1}^{m} (y_k - T_k)^2                         (4)

New Variables Identifying Neural Networks
• In a single-hidden-layer neural network, each hidden neuron in conjunction with the input nodes represents a function, x → Ho_i as in (1) and (2), forming a sigmoidal line, a sigmoidal plane, or a sigmoidal hyperplane for a one-, two-, or three-or-more-input network, respectively. These sigmoids have an inclined segment in the middle and two horizontal tails on the sides, with a height of unity. The output neuron acts as a collector, taking a biased weighted sum of these sigmoids, as in (3), to form the network response.
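To make equations (1)-(4) concrete, here is a minimal NumPy sketch of the forward pass and the SSE error function. The array names Iw, Hb, Hw, and Ob mirror the notation above; the dimensions and random data are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sig(t):
    # Logistic sigmoid applied by each hidden neuron, eq. (2)
    return 1.0 / (1.0 + np.exp(-t))

def forward(x, Iw, Hb, Hw, Ob):
    # x:  (R,)  input vector
    # Iw: (n,R) input weights; Hb: (n,) hidden biases
    # Hw: (n,)  hidden-to-output weights; Ob: scalar output bias
    net = Iw @ x + Hb     # eq. (1): net input of each hidden neuron
    Ho = sig(net)         # eq. (2): hidden-neuron (sigmoid) outputs
    return Hw @ Ho + Ob   # eq. (3): biased weighted sum at the output

def sse(X, T, Iw, Hb, Hw, Ob):
    # eq. (4): sum of squared errors over the m input-target sets
    y = np.array([forward(x, Iw, Hb, Hw, Ob) for x in X])
    return np.sum((y - T) ** 2)

# Illustrative sizes: R = 2 inputs, n = 3 hidden neurons, m = 5 samples.
rng = np.random.default_rng(0)
R, n, m = 2, 3, 5
Iw = rng.normal(size=(n, R))
Hb = rng.normal(size=n)
Hw = rng.normal(size=n)
Ob = rng.normal()
X = rng.normal(size=(m, R))
T = rng.normal(size=m)
print("n*(R+2)+1 =", n * (R + 2) + 1, "adjustable variables;",
      "SSE =", sse(X, T, Iw, Hb, Hw, Ob))
```

Note how the variable count checks out: Iw contributes n × R entries, Hb and Hw contribute n each, and Ob contributes one, giving the n × (R + 2) + 1 adjustable variables stated above.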