Looking for an English translation about neural networks

Introduction

--------------------------------------------------------------------

"Neural network" has become a fashionable term in technology: many people have heard the phrase, but few really understand what it means. The purpose of this article is to introduce the basics of neural networks, including their general structure, related terminology, types, and applications.

The term "neural network" actually comes from biology, and the correct name for the networks discussed here is "Artificial Neural Networks (ANNs)". In this article, I will use the two terms interchangeably.

A real neural network is made up of anywhere from a handful to billions of cells called neurons (the tiny cells that make up our brains), connected in different ways to form a network. Artificial neural networks attempt to model this biological structure and its operation. Here lies a problem: we do not actually know much about biological neural networks! As a result, architectures vary greatly between different types of artificial neural network, and all we really know is the basic structure of the neuron.

The neuron

--------------------------------------------------------------------

Although it has been established that there are roughly 50 to 500 different kinds of neurons in our brains, most of them are specialized cells built on the basic neuron. A basic neuron consists of synapses, a soma, an axon, and dendrites. Synapses handle the connections between neurons: the cells are not physically joined, but separated by a tiny gap that allows electrical signals to jump from one neuron to the next. These signals are then handed to the soma, which processes them and passes the result along the axon. The axon distributes these signals to the dendrites, and finally the dendrites pass them on to other synapses, continuing the cycle.

Like their biological counterparts, artificial neural networks are built from basic neurons. Each neuron has a specific number of inputs, and each input is assigned a weight, an indicator of how important that piece of information is. The neuron then computes a net value: the sum of all inputs, each multiplied by its weight. Each neuron also has its own critical value (threshold); when the net value exceeds the threshold, the neuron outputs 1, otherwise it outputs 0. Finally, that output is sent on to the other neurons connected to this one, where the remaining calculations continue.
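The computation just described (multiply each input by its weight, sum, and compare against the threshold) can be sketched in a few lines of Python. The function name and the AND-gate weights below are illustrative, not taken from the article:

```python
def neuron_output(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of the inputs exceeds the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0

# With these hand-picked values, the neuron behaves like a two-input AND gate:
weights, threshold = [1.0, 1.0], 1.5
print(neuron_output([1, 1], weights, threshold))  # 1
print(neuron_output([1, 0], weights, threshold))  # 0
```

The interesting question, covered next, is how to find weights and thresholds automatically rather than picking them by hand.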

Learning

--------------------------------------------------------------------

As described above, the heart of the problem is: how should the weights and thresholds be set? There are as many different training methods as there are network types, but some of the better-known ones include back-propagation, the delta rule, and Kohonen training.

Because the architectures differ, the training rules differ too, but most can be divided into two broad categories: supervised and unsupervised. Supervised training rules require a "teacher" to tell them what output should be produced for a given input. The training rule then adjusts all the necessary weights (in a network this is very complex), and the whole process starts over until the network analyzes the data correctly. Supervised training methods include back-propagation and the delta rule. Unsupervised rules need no teacher, because the outputs they produce are evaluated further by the network itself.
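A minimal sketch of supervised learning, in the spirit of the delta rule mentioned above (names and learning rate are my own choices, not the article's): the "teacher" is the target value supplied with each input, and each weight is nudged in proportion to the error it helped cause.

```python
def train_perceptron(samples, lr=0.1, epochs=50):
    """Supervised training: a 'teacher' supplies the target for each input,
    and each weight is adjusted by lr * error * input."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0  # plays the role of the (negated) threshold
    for _ in range(epochs):
        for inputs, target in samples:
            net = sum(x * w for x, w in zip(inputs, weights)) + bias
            output = 1 if net > 0 else 0
            error = target - output          # teacher's correction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach a single neuron the two-input AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
for inputs, target in data:
    out = 1 if sum(x * wi for x, wi in zip(inputs, w)) + b > 0 else 0
    print(inputs, "->", out)  # matches the target after training
```

This is the single-neuron case; in a real multi-layer network, back-propagation does the much harder job of distributing the error across all the hidden-layer weights.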

Architecture

--------------------------------------------------------------------

Of all the terms used with neural networks, "architecture" follows the fewest clear rules; it is the vaguest of them all.

There are simply too many different kinds of network, from simple Boolean networks (Perceptrons), to self-organizing networks (Kohonen), to networks modeling thermodynamic properties (Boltzmann machines)! Yet they all conform to one basic architectural standard.

A network consists of several "layers" of neurons: an input layer, hidden layers, and an output layer. The input layer receives the input and distributes it to the hidden layers (so called because the user cannot see them). The hidden layers perform the required calculations and pass their results to the output layer, where the user can see the final result. To avoid confusion, I will not explore the topic of architecture in more depth here. For more details on the different kinds of neural network, see the Generation5 essays.
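As a concrete (hand-wired, hypothetical) illustration of the input → hidden → output flow, here is a tiny network of the threshold neurons described earlier. Its two hidden units and one output unit together compute XOR, a function no single threshold neuron can compute on its own; all weights here are chosen by hand for illustration:

```python
def fire(inputs, weights, threshold):
    """A single threshold neuron: 1 if the weighted input sum exceeds the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0

def xor_network(x1, x2):
    # Hidden layer: one neuron detects "at least one input on" (OR-like),
    # the other detects "both inputs on" (AND-like).
    h_or  = fire([x1, x2], [1.0, 1.0], 0.5)
    h_and = fire([x1, x2], [1.0, 1.0], 1.5)
    # Output layer: OR but not AND, i.e. exclusive-or.
    return fire([h_or, h_and], [1.0, -1.0], 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```

The hidden layer is exactly the part the user never sees: only the final XOR result comes out of the output layer.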

Although we have discussed neurons, training, and architecture, we have not yet looked at what neural networks actually do.

The Function of ANNs

--------------------------------------------------------------------

Neural networks are designed to work with patterns, and they can be classified as either classification networks or associative networks. A classification network takes a set of numbers and assigns it to a category. For example, the ONR program accepts an image of a digit and outputs which digit it is, and the PPDA32 program accepts a coordinate and classifies it as class A or class B (the classes being determined by the training provided). For a more practical use, see the military radar described in Applications in the Military, which can distinguish a vehicle from a tree.

An associative network accepts one set of numbers and outputs another. For example, the HIR program takes a "dirty" image and outputs the closest image it has learned. Associative mode can also be applied to complex tasks such as signature, face, and fingerprint recognition.
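A minimal sketch of the associative idea (not the HIR program itself, whose internals the article does not describe): store a few binary patterns, and for a noisy input return whichever stored pattern is closest in Hamming distance.

```python
def recall(noisy, stored_patterns):
    """Return the stored pattern with the fewest bit differences from the input."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(stored_patterns, key=lambda p: hamming(noisy, p))

patterns = [
    [1, 1, 1, 0, 0, 0],   # learned pattern "A"
    [0, 0, 0, 1, 1, 1],   # learned pattern "B"
]
dirty = [1, 0, 1, 0, 0, 0]          # pattern "A" with one bit flipped
print(recall(dirty, patterns))      # [1, 1, 1, 0, 0, 0]
```

Real associative networks (Hopfield networks, for instance) achieve this kind of recall through the weights themselves rather than an explicit search, but the input/output behavior is the same: a corrupted pattern goes in, the nearest learned pattern comes out.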

The Ups and Downs of Neural Networks

--------------------------------------------------------------------

Neural networks have many advantages, which is making the field increasingly popular. They are excellent at classification and recognition. They can cope with noisy and abnormal input data, which matters for many systems (such as radar and sonar). Many neural networks imitate biological ones, that is, they model the way the brain works, and they have in turn contributed to the development of neuroscience. They can identify objects as accurately as a human, yet at the speed of a computer! The future is bright, but for now...

Yes, neural networks have some downsides too, usually stemming from a lack of sufficiently powerful hardware. A neural network's power comes from processing information in parallel, handling many pieces of data at the same time, so simulating that parallel processing on a serial machine is very time-consuming.
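The point about parallelism is easy to see when a whole layer is written out: each neuron's net value is an independent dot product that parallel hardware could evaluate simultaneously, while a serial machine must loop over them one at a time. The sketch below (my own schematic, not a benchmark) makes that serial loop explicit:

```python
def layer_outputs(weight_rows, thresholds, inputs):
    """One neuron per weight row. Each row's dot product is independent of the
    others, so in principle they could all be computed at once; a serial
    machine, however, must walk through them one by one."""
    outputs = []
    for row, theta in zip(weight_rows, thresholds):   # the serial bottleneck
        net = sum(w * x for w, x in zip(row, inputs))
        outputs.append(1 if net > theta else 0)
    return outputs

W = [[1.0, 1.0], [1.0, 1.0]]     # two neurons, two inputs each
thetas = [0.5, 1.5]              # OR-like and AND-like thresholds
print(layer_outputs(W, thetas, [1, 0]))  # [1, 0]
```

On genuinely parallel hardware (or with vectorized matrix libraries), all rows of this loop would be evaluated together, which is where biological networks get their speed.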

Another problem with neural networks is that the requirements for building a network for a given problem are poorly defined; there are too many factors to consider: the training algorithm, the architecture, the number of neurons per layer, the number of layers, how the data is represented, and many more. As time grows ever more valuable, most companies cannot afford to develop a neural network over and over until it solves the problem effectively.

Terms used above:

- NN: neural network
- ANNs: artificial neural networks
- neurons
- synapses
- self-organizing networks
- networks modeling thermodynamic properties
