SOME OF THE BEST MACHINE LEARNING ALGORITHMS

There is no doubt that machine learning, a sub-field of artificial intelligence, has gained more and more popularity in the past couple of years. With Big Data being one of the hottest trends in the tech industry at the moment, machine learning is incredibly powerful for making predictions or calculated suggestions based on large amounts of data. Some of the most common examples of machine learning are Netflix’s algorithms that make movie suggestions based on movies you have watched in the past, or Amazon’s algorithms that recommend books based on books you have bought before.

Machine learning algorithms can be divided into 3 broad categories — supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is useful in cases where a property (label) is available for a certain dataset (training set), but is missing and needs to be predicted for other instances. Unsupervised learning is useful in cases where the challenge is to discover implicit relationships in a given unlabeled dataset (items are not pre-assigned). Reinforcement learning falls between these 2 extremes — there is some form of feedback available for each predictive step or action, but no precise label or error message. Based on these 3 categories, some of the best and easiest algorithms for solving ML (machine learning) problems are given below:

DECISION TREE-  A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance-event outcomes, resource costs, and utility. Take a look at the image to get a sense of what it looks like.


From a business decision point of view, a decision tree is the minimum number of yes/no questions that one has to ask, to assess the probability of making a correct decision, most of the time. As a method, it allows you to approach the problem in a structured and systematic way to arrive at a logical conclusion.
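
To make this concrete, here is a minimal sketch of fitting a decision tree with scikit-learn (assumed available). The tiny "loan approval" dataset and its feature names are made up purely for illustration; the printed tree shows the learned yes/no questions.

```python
# Minimal decision tree sketch using scikit-learn (assumed available).
# The toy "loan approval" data below is invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [income_in_thousands, has_existing_debt (0/1)]
X = [[25, 1], [40, 1], [60, 0], [80, 0], [30, 0], [90, 1]]
# Labels: 1 = approve, 0 = reject
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Print the learned sequence of yes/no questions as a readable tree
print(export_text(tree, feature_names=["income", "has_debt"]))

# Predict for a new applicant: income 55k, no existing debt
print(tree.predict([[55, 0]]))
```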

ORDINARY LEAST SQUARES REGRESSION-  If you know statistics, you have probably heard of linear regression before. Least squares is a method for performing linear regression. You can think of linear regression as the task of fitting a straight line through a set of points. There are multiple possible strategies to do this, and the “ordinary least squares” strategy goes like this — you draw a line, and then for each of the data points, measure the vertical distance between the point and the line, square it, and add these up; the fitted line would be the one where this sum of squared distances is as small as possible.


Linear refers to the kind of model you are using to fit the data, while least squares refers to the kind of error metric you are minimizing over.
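
Here is a minimal sketch of ordinary least squares on synthetic data, assuming NumPy is available; np.polyfit (and, equivalently, np.linalg.lstsq) finds the slope and intercept that minimize the sum of squared vertical distances described above.

```python
# Ordinary least squares on synthetic points, assuming NumPy is available.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0, 1.5, size=x.shape)  # noisy line y = 3x + 2

# Fit a straight line by minimizing the sum of squared vertical distances
slope, intercept = np.polyfit(x, y, deg=1)
print(f"fitted line: y = {slope:.2f} * x + {intercept:.2f}")

# Same answer via the least-squares solver on the design matrix [x, 1]
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("lstsq solution:", coef)
```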

LOGISTIC REGRESSION-  Logistic regression is a powerful statistical way of modeling a binomial outcome with one or more explanatory variables. It measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative logistic distribution.
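
A minimal sketch of this idea, assuming scikit-learn is available: one synthetic explanatory variable (a credit-score-like number, invented for illustration) and a binary outcome, with predict_proba returning the estimated probabilities from the fitted logistic function.

```python
# Logistic regression sketch with scikit-learn on made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# One explanatory variable (e.g. a credit score) and a binary outcome (default: 0/1)
scores = rng.uniform(300, 850, size=200).reshape(-1, 1)
# Synthetic rule: higher scores make default less likely
prob_default = 1 / (1 + np.exp((scores.ravel() - 600) / 50))
defaults = (rng.uniform(size=200) < prob_default).astype(int)

model = LogisticRegression()
model.fit(scores, defaults)

# Estimated probability of [no default, default] for two new scores
print(model.predict_proba([[550], [700]]))
```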


In general, regressions can be used in real-world applications such as:
  • Credit Scoring
  • Measuring the success rates of marketing campaigns
  • Predicting the revenues of a certain product
  • Predicting whether there is going to be an earthquake on a particular day

CLUSTERING ALGORITHM- Clustering is the task of grouping a set of objects such that objects in the same group (cluster) are more similar to each other than to those in other groups.


Every clustering algorithm is different, and here are a couple of them:
  • Centroid-based algorithms
  • Connectivity-based algorithms
  • Density-based algorithms
  • Probabilistic
  • Dimensionality Reduction
  • Neural networks / Deep Learning
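
As a sketch of the centroid-based family, here is k-means via scikit-learn (assumed available) on two synthetic blobs of 2-D points, invented purely for illustration; the algorithm groups the points around two learned centers.

```python
# Centroid-based clustering sketch: k-means via scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two synthetic blobs of 2-D points, for illustration only
blob_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print("cluster centers:\n", kmeans.cluster_centers_)
print("first five labels:", labels[:5])
```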

INDEPENDENT COMPONENT ANALYSIS-  ICA is a statistical technique for revealing hidden factors that underlie sets of random variables, measurements, or signals. ICA defines a generative model for the observed multivariate data, which is typically given as a large database of samples. In the model, the data variables are assumed to be linear mixtures of some unknown latent variables, and the mixing system is also unknown. The latent variables are assumed to be non-Gaussian and mutually independent, and they are called the independent components of the observed data.


ICA is related to PCA, but it is a much more powerful technique that is capable of finding the underlying factors of sources when classic methods such as PCA fail completely. Its applications include digital images, document databases, economic indicators and psychometric measurements.
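
A minimal sketch of the unmixing idea, assuming scikit-learn is available: two synthetic source signals and a mixing matrix (both invented for illustration) are mixed together, and FastICA recovers estimates of the independent components from the mixtures alone.

```python
# ICA sketch: unmixing two synthetic source signals with FastICA from scikit-learn.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                      # first hidden source (sine wave)
s2 = np.sign(np.sin(3 * t))             # second hidden source (square wave)
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.5], [0.5, 2.0]])  # mixing matrix, unknown in practice
X = S @ A.T                             # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)            # recovered independent components

print("estimated mixing matrix:\n", ica.mixing_)
```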
         
