Perceptron: An Artificial Neuron for Machines & Robots

Artificial Intelligence & the Robotic Brain are both modeled on the Human Brain: as we come to understand our brain, we try to recreate it artificially. The Human Brain is a very complex structure that is hard to understand, comprehend & restructure, yet it restructures itself. It consists of trillions of connections which help with day-to-day tasks as well as big decisions.

What if this decision-making ability could, in some form, be transferred to Machines, Robots & computers? To make that possible, there must be a smallest building block. That smallest building block is called the Perceptron.

A Perceptron, in simple language, is an artificial neuron. It works much the same way as our neurons do, and its structure is inspired by the biological neuron. Just like a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to send signals out to other neurons, the artificial neuron has a number of input channels, a processing stage, and one output that can fan out to multiple other artificial neurons.


Just as our brain and neurons are, hardware-wise, made up of cells and work with electrical impulses, Memristors are the key hardware component in developing an artificial brain, and neural network algorithms make it work. I already wrote about Memristors in my previous blog (Neural Hardware).

A single unit of a neural network is called a Perceptron; in other words, a Perceptron is a single-layer neural network. It is also called a linear classifier (like a step function: 1 for Yes, 0 for No). It takes multiple inputs and computes a single output. Combined, perceptrons form a network whose hidden layers involve weights, a bias & a net sum.
The perceptron consists of 4 parts:

  1. Input values or One input layer
  2. Weights and Bias
  3. Net sum
  4. Activation Function.


So, how does a perceptron work? It follows a few simple steps:

1. All the inputs x are multiplied by their weights w. Let's call each product k.

2. Add all the multiplied values together; the result is called the Weighted Sum.

3. Apply the correct Activation Function to that Weighted Sum.
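The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation: the weights, bias, and inputs are made-up example values, and a unit step function stands in for the activation function.

```python
def step(z):
    # Step activation: 1 for Yes, 0 for No
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    # Steps 1 & 2: multiply each input by its weight, then sum everything
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step 3: apply the activation function to the weighted sum
    return step(weighted_sum)

# Example with two inputs (illustrative values):
# 1.0*0.6 + 0.5*(-0.4) - 0.1 = 0.3, and step(0.3) = 1
print(perceptron([1.0, 0.5], [0.6, -0.4], bias=-0.1))  # → 1
```

The bias acts like an extra weight on a constant input of 1, shifting the threshold at which the step function flips from 0 to 1.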

Perceptron Use Case

Perceptrons are widely used in prediction models, such as linear classification and regression models.

But if you think about it, it looks as if the perceptron consumes a lot of information for very little output: just a 0 or a 1. How could this ever be useful on its own?


There is indeed a class of problems that a single perceptron can solve. Consider the input vector as the coordinates of a point. For a vector with n elements, this point would live in an n-dimensional space. To make life (and the code below) easier, let's assume a two-dimensional plane, like a sheet of paper.


Further, consider that we draw a number of random points on this plane, and we separate them into two sets by drawing a straight line across the paper:

This line divides the points into two sets, one above and one below the line. (The two sets are then called linearly separable.)

A single perceptron, as bare and simple as it might appear, is able to learn where this line is, and once it has finished learning, it can tell whether a given point is above or below that line.
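Here is a sketch of that learning process using the classic perceptron learning rule. The setup is assumed for illustration: random 2-D points separated by the line y = x (with a small margin around the line so training converges quickly), labeled 1 above the line and 0 below it.

```python
import random

def step(z):
    return 1 if z >= 0 else 0

def predict(point, weights, bias):
    return step(weights[0] * point[0] + weights[1] * point[1] + bias)

def train(points, labels, epochs=50, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for point, label in zip(points, labels):
            # Perceptron learning rule: nudge weights in the direction
            # that reduces the error on each misclassified point
            error = label - predict(point, weights, bias)
            weights[0] += lr * error * point[0]
            weights[1] += lr * error * point[1]
            bias += lr * error
    return weights, bias

# Random points on the plane, separated by the line y = x
random.seed(1)
points = []
while len(points) < 50:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if abs(y - x) > 0.2:  # keep a margin so the two sets are cleanly separable
        points.append((x, y))
labels = [1 if y > x else 0 for x, y in points]

weights, bias = train(points, labels)
accuracy = sum(predict(p, weights, bias) == l
               for p, l in zip(points, labels)) / len(points)
print(f"training accuracy: {accuracy:.2f}")  # → training accuracy: 1.00
```

Because the two sets are linearly separable, the perceptron convergence theorem guarantees that this rule eventually classifies every training point correctly; with data that is not linearly separable, a single perceptron can never reach 100% accuracy, which is its fundamental limitation.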
