Introduction

Many people have heard the term "neural network," but few really understand what it is. The purpose of this article is to introduce the basics of neural networks: what they do, their general structure, related terminology, their types, and their applications. The term actually comes from biology; what we build should properly be called an "artificial neural network" (ANN), and I will use the two terms interchangeably in this article. A real neural network consists of billions of cells called neurons (the tiny cells that make up our brains), connected in many different ways to form a network. An artificial neural network is an attempt to simulate this biological structure and its operation. There is a catch, though: we still do not know much about how biological neural networks actually work! As a result, the architecture of neural networks varies greatly from type to type; what we do know reliably is the basic structure of a neuron.

Neurons

A basic biological neuron consists of dendrites, a soma, an axon, and synapses. Dendrites receive signals from other neurons and carry them to the soma, which processes them; the result travels as an electrical signal along the axon to the synapses. Synapses are responsible for the connections between neurons: there is no direct physical contact, but a small gap across which the electrical signal jumps to the dendrites of the next neuron, and the cycle continues.

Like their biological counterparts, artificial neural networks also have basic neurons. Each neuron has a certain number of inputs, and each input has an associated weight; the weight is an indicator of how important that input is. The neuron then computes a net value: the sum of all inputs multiplied by their weights. Each neuron also has its own threshold.
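The weighted-sum-and-threshold neuron described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not code from any particular library; the function name and example weights are made up.

```python
def neuron_output(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of the inputs
    exceeds the neuron's threshold; otherwise stay off (return 0)."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0

# Two inputs with example weights 0.3 and 0.4, threshold 0.5:
print(neuron_output([1, 1], [0.3, 0.4], 0.5))  # 0.3 + 0.4 = 0.7 > 0.5, prints 1
print(neuron_output([1, 0], [0.3, 0.4], 0.5))  # 0.3 is not > 0.5, prints 0
```

Everything a neural network computes is built out of many such units wired together.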
When the net value is greater than the threshold, the neuron outputs 1; otherwise it outputs 0. The output is then passed on to the other neurons connected to it, and the rest of the computation continues.

Training

There are many different training methods, just as there are many types of networks. Well-known ones include back propagation, the delta rule, and Kohonen training. Training rules differ between architectures, but most can be divided into two categories: supervised and unsupervised. Supervised training rules require a "teacher" to tell the network what the correct output for a given input should be. The training rule then adjusts all the necessary weights (which can be very complicated in a large network), and the whole process repeats until the network analyzes the data correctly. Back propagation and the delta rule are supervised methods. Unsupervised rules need no teacher, because the outputs they produce are evaluated further by the network itself.

Architecture

There are many kinds of networks, from the simple perceptron to complex Kohonen networks and Boltzmann machines, yet all of them follow the same basic architectural standard. A network consists of several "layers" of neurons: an input layer, one or more hidden layers, and an output layer. The input layer receives the inputs and distributes them to the hidden layers (so called because the user cannot see them). The hidden layers perform the required calculations and pass their results to the output layer, where the user sees the final result. To avoid confusion, I will not go deeper into the topic of architecture here; for more details about the different kinds of neural networks, please refer to the Generation5 essays. Although we have now discussed neurons, training, and architecture, we still have not said what neural networks actually do.
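The supervised training loop described above (teacher supplies the correct answer, weights shift in proportion to the error) can be sketched with a delta-rule-style update on a single threshold neuron. This is an illustrative toy, assuming a made-up learning rate, epoch count, and a tiny hand-written data set (logical AND), not a production training routine.

```python
def train_delta(samples, lr=0.1, epochs=50):
    """Delta-rule-style training of one threshold neuron.
    Each sample is (inputs, target); the 'teacher' is the target value."""
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias (a learnable threshold)
    for _ in range(epochs):
        for inputs, target in samples:
            net = sum(x * wi for x, wi in zip(inputs, w)) + b
            out = 1 if net > 0 else 0
            err = target - out                       # teacher's correction
            w = [wi + lr * err * x for wi, x in zip(w, inputs)]
            b += lr * err
    return w, b

# Teach the neuron logical AND: output 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_delta(data)

def predict(inputs):
    return 1 if sum(x * wi for x, wi in zip(inputs, w)) + b > 0 else 0

print([predict(x) for x, _ in data])  # matches the targets: [0, 0, 0, 1]
```

Because AND is linearly separable, this single neuron converges after a handful of passes; more complicated mappings are exactly why the hidden layers of the next section exist.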
What Artificial Neural Networks Do

A classification network accepts a set of numbers and assigns them to a class. For example, the ONR program accepts an image of a digit and outputs the digit itself, and the PPDA32 program accepts a coordinate and classifies it as class A or class B (the classes being determined by the training provided). For a more practical example, consider military radar systems that can distinguish a vehicle from a tree. An association network accepts one set of numbers and outputs another set. For example, the HIR program accepts a "dirty" image and outputs the closest image it has learned. Association networks can be applied to complex tasks such as signature, face, and fingerprint recognition.

The Ups and Downs of Neural Networks

Neural networks excel at classification and recognition. They can cope with noisy and anomalous input data, which is very important for many systems (such as radar and sonar). Many neural networks imitate biological ones, that is, they imitate the way the brain works, and in turn they have contributed to the development of neuroscience; the goal is a system that distinguishes objects as accurately as a human while retaining the speed of a computer. The future is bright, but for now... yes, neural networks also have shortcomings. These usually stem from the lack of sufficiently powerful hardware. The power of a neural network comes from processing information in parallel, handling many pieces of data at the same time, so simulating that parallelism on a serial computer is very time-consuming. Another problem is that the constraints for building a network for a given problem are under-specified: there are too many factors to consider, including the training algorithm, the architecture, the number of neurons per layer, the number of layers, the data representation, and many others.
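A classification network of the kind described above is, concretely, just the layered architecture from the previous section evaluated front to back: the input layer distributes values to hidden neurons, whose outputs feed the output layer. The sketch below shows such a forward pass with arbitrary example weights and a sigmoid activation; the layer sizes and numbers are illustrative assumptions, not taken from any of the programs named above.

```python
import math

def sigmoid(x):
    """Smooth squashing function mapping any net value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_rows):
    """Evaluate one layer: each row of weights defines one neuron."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)))
            for row in weight_rows]

def classify(inputs, hidden_w, output_w):
    hidden = layer(inputs, hidden_w)   # hidden layer (unseen by the user)
    return layer(hidden, output_w)     # output layer (the visible result)

hidden_w = [[0.5, -0.4], [0.3, 0.8]]   # 2 hidden neurons, 2 inputs each
output_w = [[1.0, -1.0]]               # 1 output neuron, 2 hidden inputs

score = classify([1.0, 0.0], hidden_w, output_w)[0]
print("class A" if score > 0.5 else "class B")
```

With trained rather than hand-picked weights, the output score is read as the network's class decision, exactly as in the A-versus-B example above.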
Consequently, as development time becomes more and more costly, most companies cannot afford the repeated redesign needed to get a neural network to solve a problem effectively.

++++++++

I had never heard of a "grid algorithm" before; the term that has actually spread with Internet technology is grid computing, a newer computing model aimed at complex scientific computation. This model uses the Internet to organize computers scattered across different geographic locations into a "virtual supercomputer": each participating computer is a "node," and the whole computation runs on a "grid" made up of thousands of nodes, which is why the approach is called grid computing. A virtual supercomputer organized this way has two advantages: first, superior data-processing capacity; second, full use of the idle processing power scattered across the Internet. Simply put, a grid integrates an entire network into one enormous supercomputer, enabling comprehensive sharing of computing resources, storage resources, data resources, information resources, knowledge resources, and expert resources.