
Artificial Neural Networks

College: College of Information Technology     Department: Department of Software     Stage: 3
Course instructor: إيمان صالح صكبان الرواشدي       4/2/2011 8:30:38 AM

Application4

Artificial Neural Networks

 

 

1.The Nervous System

 

The human nervous system can be broken down into three stages.

 

 

2.Basic Components of Biological Neurons

 


 

 

4. What are Artificial Neural Networks used for?

 

 

As with the field of AI in general, there are two basic goals for neural network research:

 

Brain modeling: the scientific goal of building models of how real brains work. This can potentially help us understand the nature of human intelligence, formulate better teaching strategies, or devise better remedial actions for brain-damaged patients.

 

Artificial System Building: the engineering goal of building efficient systems for real-world applications. This may make machines more powerful, relieve humans of tedious tasks, and may even improve upon human performance.

These should not be thought of as competing goals. We often use exactly the same networks and techniques for both, and progress is frequently made when the two approaches are allowed to feed into each other. There are fundamental differences, though, e.g. the need for biological plausibility in brain modeling, and the need for computational efficiency in artificial system building.

 

 

5.Why are Artificial Neural Networks worth studying?

 

 

6.Architecture of ANNs

 

1-The Single Layer Feed-forward Network
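As an illustration of the single-layer architecture, here is a minimal Python sketch in which the inputs connect directly to the output neurons, with no hidden layer. The weights, biases, and hard-limit activation below are invented for the example, not taken from the lecture.

```python
# Single-layer feed-forward network: inputs feed straight into the
# output neurons through one weight matrix.

def step(net, threshold=0.0):
    """Hard-limit activation: fire (1) if the net input reaches the threshold."""
    return 1 if net >= threshold else 0

def single_layer_forward(x, W, b):
    """One output per output neuron: f(sum_i w_ji * x_i + b_j)."""
    return [step(sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j)
            for row, b_j in zip(W, b)]

# Two inputs, two output neurons (illustrative values).
W = [[0.5, 0.5],    # weights into output neuron 0
     [1.0, -1.0]]   # weights into output neuron 1
b = [-0.7, 0.0]
print(single_layer_forward([1, 1], W, b))  # -> [1, 1]
```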

 

 

 

 

 

2-Multi Layer Feed-forward Network
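A multi-layer feed-forward network passes the input through one or more hidden layers before the output layer. The sketch below uses a standard textbook 2-2-1 network with large hand-set weights that approximates XOR; these weights are an illustrative choice, not values from the slides.

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def layer(x, W, b):
    """One fully connected layer: sigmoid(W x + b), computed row by row."""
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + bj)
            for row, bj in zip(W, b)]

def mlp_forward(x, layers):
    """Feed the signal forward through each (W, b) layer in turn."""
    for W, b in layers:
        x = layer(x, W, b)
    return x

# Hidden layer computes (roughly) OR and NAND; output combines them as AND.
hidden = ([[20, 20], [-20, -20]], [-10, 30])
output = ([[20, 20]], [-30])
y = mlp_forward([1, 0], [hidden, output])  # close to 1 (XOR of 1 and 0)
```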

 

  

 

 

 

 

 

3-The Recurrent Network

Recurrent networks differ from the feed-forward architecture in that they contain feedback connections.

 

 

 

 

 

 

There could be neurons with self-feedback links; that is, the output of a neuron is fed back into itself as input.
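The effect of such a self-feedback link can be sketched as follows: part of the neuron's input at step t is its own output from step t-1. The input and self-feedback weights below are made up for illustration.

```python
def recurrent_step(x, y_prev, w_in=1.0, w_self=0.5):
    """New output = weighted external input + weighted previous output."""
    return w_in * x + w_self * y_prev

y = 0.0
outputs = []
for x in [1.0, 0.0, 0.0, 0.0]:      # a single input pulse, then silence
    y = recurrent_step(x, y)
    outputs.append(y)
# The pulse keeps echoing through the self-loop: 1.0, 0.5, 0.25, 0.125
```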

 

 

7.Learning in Neural Networks

 

There are many forms of neural networks. Most operate by passing neural ‘activations’ through a network of connected neurons.

 

One of the most powerful features of neural networks is their ability to learn and generalize from a set of training data. They adapt the strengths/weights of the connections between neurons so that the final output activations are correct.

 

 

There are three broad types of learning:

 

1. Supervised Learning (i.e. learning with a teacher)

 

2. Reinforcement learning (i.e. learning with limited feedback)

 

3. Unsupervised learning (i.e. learning with no help)

 

 

8-The McCulloch-Pitts Neuron
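The McCulloch-Pitts neuron can be sketched as a unit with binary inputs, fixed weights, and a hard threshold. The weights and thresholds for AND and OR below are the usual textbook settings, not values quoted from this lecture.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of inputs reaches the threshold."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

# Classic logic gates built from a single McCulloch-Pitts neuron.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0, 0, 0, 1]
print([OR(a, b)  for a in (0, 1) for b in (0, 1)])  # -> [0, 1, 1, 1]
```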

 

9.General Procedure for Building Neural Networks

 

10.Artificial Neuron - Basic Elements

 

         

 

 

 

Activation Functions

 

An activation function f performs a mathematical operation on the neuron's net input to produce its output signal. The activation function is chosen according to the type of problem the network is to solve.
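Which functions this lecture covers is not shown here, but the most common choices can be sketched generically:

```python
import math

def step(net):
    """Hard limiter: binary output, used for threshold units."""
    return 1 if net >= 0 else 0

def sigmoid(net):
    """Smooth squashing function, bounded in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def tanh_act(net):
    """Hyperbolic tangent: bounded in (-1, 1) and zero-centred."""
    return math.tanh(net)
```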

 

 

 

11.NEURAL NETWORK LEARNING RULES

 

Our focus in this section will be on artificial neural network learning rules.

 

A neuron is considered an adaptive element: its weights are modifiable depending on the input signal it receives, its output value, and the associated teacher response. In some cases the teacher signal is not available and no error information can be used, so the neuron modifies its weights based only on the input and/or output; this is unsupervised learning. Under different learning rules, the form of the neuron's activation function may also differ.

 

 

 

 

 

 

11.1.Hebbian Learning Rule
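A minimal sketch of the Hebbian rule: a weight grows when its input and the neuron's output are active together, delta_w_i = eta * y * x_i. The learning rate and data below are illustrative.

```python
def hebbian_update(w, x, eta=0.1):
    """One Hebbian step for a linear neuron: w_i += eta * y * x_i."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # neuron's own output
    return [wi + eta * y * xi for wi, xi in zip(w, x)]

w = [1.0, 0.0]
w = hebbian_update(w, [1.0, 1.0])
# y = 1.0, so both weights grow by eta * y = 0.1: w is now about [1.1, 0.1]
```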

 

11.2.Perceptron Learning Rule
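A sketch of the perceptron learning rule: delta_w = eta * (d - y) * x, where y is the thresholded output and d the teacher signal. Training the AND function, with the bias folded in as a constant input of 1, is an illustrative choice, not from the slides.

```python
def predict(w, x):
    """Threshold unit; x is extended with a constant 1 so w[-1] is the bias."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

def train_perceptron(samples, eta=0.1, epochs=50):
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, d in samples:
            err = d - predict(w, x)            # teacher minus actual output
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    return w

AND = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
w = train_perceptron(AND)
print([predict(w, x) for x, _ in AND])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule finds a correct weight vector.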

 

 

11.3.Delta Learning Rule    
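A sketch of the delta rule for a single sigmoid neuron: delta_w_i = eta * (d - y) * f'(net) * x_i, where f'(net) = y * (1 - y) for the sigmoid. The data and learning rate are illustrative.

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def delta_update(w, x, d, eta=0.5):
    """One delta-rule step; returns the new weights and the current output."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    y = sigmoid(net)
    grad = (d - y) * y * (1.0 - y)             # error * sigmoid derivative
    return [wi + eta * grad * xi for wi, xi in zip(w, x)], y

w = [0.0, 0.0]
x, d = [1.0, 1.0], 1.0
for _ in range(200):
    w, y = delta_update(w, x, d)
# repeated updates drive the output y toward the target d = 1
```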

 

11.4.Widrow-Hoff Learning Rule
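The Widrow-Hoff (LMS) rule can be sketched as the delta rule with a linear (identity) activation: delta_w = eta * (d - w·x) * x. The target function below (d = 2*x1 + 1*x2) is an illustrative choice.

```python
def lms_update(w, x, d, eta=0.1):
    """One LMS step: move the weights against the linear prediction error."""
    err = d - sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * err * xi for wi, xi in zip(w, x)]

# Noise-free samples generated by the weights [2.0, 1.0].
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]
w = [0.0, 0.0]
for _ in range(100):
    for x, d in samples:
        w = lms_update(w, x, d)
# w converges toward [2.0, 1.0], the weights that generated the targets
```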

 

 

11.5. Correlation Learning Rule
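The correlation rule can be sketched as Hebbian learning with the teacher signal d in place of the neuron's own output: delta_w_i = eta * d * x_i. Values below are illustrative.

```python
def correlation_update(w, x, d, eta=1.0):
    """One correlation step: w_i += eta * d * x_i (teacher-driven Hebb)."""
    return [wi + eta * d * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
w = correlation_update(w, [1.0, -1.0], d=1.0)   # w -> [1.0, -1.0]
w = correlation_update(w, [1.0,  1.0], d=-1.0)  # w -> [0.0, -2.0]
```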

 

 

11.6.Winner-take-all Learning Rule
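A sketch of the winner-take-all rule: only the neuron with the strongest response to the input updates, moving its weight vector toward that input, delta_w = eta * (x - w). The two-neuron layer below is illustrative.

```python
def winner_take_all_update(W, x, eta=0.5):
    """Find the neuron with the largest net input and pull it toward x."""
    nets = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    winner = nets.index(max(nets))
    W[winner] = [wi + eta * (xi - wi) for wi, xi in zip(W[winner], x)]
    return winner, W

W = [[1.0, 0.0],   # neuron 0 points along the first axis
     [0.0, 1.0]]   # neuron 1 points along the second axis
winner, W = winner_take_all_update(W, [0.0, 2.0])
# neuron 1 wins (net input 2.0 beats 0.0) and moves toward the input;
# the losing neuron's weights are untouched
```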

 

 

 

 

11.7.Outstar Learning Rule
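A sketch of the outstar rule: the weights fanning out of a node are pulled toward the desired output pattern d, delta_w_j = eta * (d_j - w_j). The pattern and learning rate are illustrative.

```python
def outstar_update(w, d, eta=0.5):
    """One outstar step: move each outgoing weight toward its target."""
    return [wj + eta * (dj - wj) for wj, dj in zip(w, d)]

pattern = [1.0, 0.5, 0.0]
w = [0.0, 0.0, 0.0]
for _ in range(10):
    w = outstar_update(w, pattern)
# after repeated presentations the weights approach the stored pattern
```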

 

 

 

 

 

 

 

11.9 Applications of Neural Networks

 

·        Applications can be grouped into the following categories:

 

·        Clustering:

 

          

 

·        Classification/Pattern recognition:

 

          The task of pattern recognition is to assign an input pattern (like a handwritten symbol) to one of many classes. This category includes algorithmic implementations such as associative memory.

 

·        Function approximation :

 

          The task of function approximation is to find an estimate of an unknown function subject to noise. Various engineering and scientific disciplines require function approximation.

 

·        Prediction Systems:

 

          The task is to forecast future values of time-sequenced data. Prediction has a significant impact on decision-support systems. Prediction differs from function approximation by considering the time factor: the system may be dynamic and may produce different results for the same input data depending on the system state (time).

 

 

 

