
By Sheikh Aman

Explanation of the Decision Tree Algorithm in Machine Learning with Python.

Updated: Aug 10


The Decision Tree Algorithm

Open your eyes and look around: from face detection on your smartphone to the product recommendations on Amazon and Flipkart, you will find applications of machine learning and artificial intelligence everywhere.

So, hello and welcome; I am Mr. Machine, and today we will discuss the decision tree algorithm in machine learning.



Why do we choose the decision tree?


Well, the decision tree classifier is very easy to read and understand.

It is one of the few models that are easily interpretable: you can see exactly why the classifier made a particular decision.



Note:
"You can never say outright that an algorithm is good or bad. You have to apply trial and error to every algorithm, and the algorithm that fits your dataset with the highest accuracy is the best algorithm for you."

What is a Decision tree?


A decision tree is a graphical representation of all the possible solutions to a decision based on certain conditions.


You might be wondering why it is called a decision tree.


It is because it starts with a root and then branches off into a number of solutions, just like a tree. The tree keeps growing as the number of possible solutions increases.


Decision tree

Decision Tree Terminologies.


Root node: Represents the entire sample, which further gets divided into two or more homogeneous sets.

Leaf node: A node that cannot be segregated into further nodes.

Splitting: Dividing a node/sub-node into different parts on the basis of some condition.

Branch/Sub-tree: A section of the tree formed by splitting a node.

Pruning: The opposite of splitting; removing unwanted branches from the tree.

Parent/Child node: A node that splits into sub-nodes is a parent node, and the nodes branched from it are known as its child nodes.


How does the Decision Tree Algorithm work? | Basic Idea.


The idea behind any decision tree classifier algorithm is:

  1. With the help of an Attribute Selection Measure (ASM), select the best attribute to split the records on.

  2. Make that attribute a decision node and break the dataset into smaller subsets.

  3. Recursively repeat this process for every child node until one of the following conditions is met (a minimal sketch of this recursion follows the list):

  • Every tuple belongs to the same class (target value).

  • No more attributes remain.

  • No more instances remain.
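To make the recursion concrete, here is a minimal sketch in plain Python. The row format (a list of dicts with a target column) is an assumption for illustration; the attribute-selection function is covered in the next section.

```python
from collections import Counter

def build_tree(rows, attributes, target, select_attribute):
    """Recursive skeleton of a decision tree learner.

    rows: list of dicts, e.g. {"Outlook": "Sunny", ..., "Play": "No"}
    select_attribute: an Attribute Selection Measure (see next section).
    """
    if not rows:
        return None  # no more instances remain
    labels = [row[target] for row in rows]
    # Stop if every tuple has the same class or no attributes remain;
    # the leaf predicts the majority class of the remaining rows.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]
    best = select_attribute(rows, attributes, target)  # step 1
    tree = {best: {}}                                  # step 2: decision node
    for value in set(row[best] for row in rows):       # step 3: recurse
        subset = [row for row in rows if row[best] == value]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = build_tree(subset, remaining, target, select_attribute)
    return tree
```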



How does a tree decide where to split? | Attribute Selection Measures (ASM)


Gini Index: The measure of impurity (or purity) used to build a decision tree in the CART (Classification And Regression Tree) algorithm is the Gini index.

Information Gain: Information gain is the decrease in entropy after a dataset is split on an attribute. Constructing a decision tree is all about finding the attribute that returns the highest information gain (a sketch computing both of these measures follows this list).

Reduction in Variance: Reduction in variance is an algorithm used for continuous target variables (regression problems). The split with the lower variance is selected as the criterion for splitting the population.

Chi-Square: An algorithm to find the statistical significance of the differences between a sub-node and its parent node.
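As a rough sketch of how the first two measures can be computed (plain Python; `rows` uses the same list-of-dicts format as the skeleton above):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy = -sum(p * log2(p)) over the class proportions p."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def gini(labels):
    """Gini index = 1 - sum(p^2) over the class proportions p."""
    total = len(labels)
    return 1.0 - sum((c / total) ** 2 for c in Counter(labels).values())

def information_gain(rows, attr, target):
    """Entropy before the split minus the weighted entropy after it."""
    before = entropy([row[target] for row in rows])
    after = 0.0
    for value in set(row[attr] for row in rows):
        subset = [row[target] for row in rows if row[attr] == value]
        after += len(subset) / len(rows) * entropy(subset)
    return before - after

def select_by_information_gain(rows, attributes, target):
    """An ASM that plugs into build_tree from the sketch above."""
    return max(attributes, key=lambda a: information_gain(rows, a, target))
```

Swapping entropy for gini inside information_gain gives the CART-style impurity criterion instead.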


Some important terms.

Entropy

Entropy is a metric that measures the impurity of a sample.

Impurity

Impurity is the degree of randomness in the data; a completely pure node contains instances of only one class.



For a collection S with positive and negative examples, Entropy(S) = -p(yes) log2 p(yes) - p(no) log2 p(no), where p(yes) and p(no) are the proportions of yes and no instances in S.

What is Information gain?

  • Measures the reduction in entropy.

  • Decides which attribute should be selected as a decision node.

If S is our total collection and A is the attribute we split on, then

Information Gain(S, A) = Entropy(S) - Σv (|Sv| / |S|) × Entropy(Sv)

where Sv is the subset of S for which attribute A has value v; the sum is the weighted average entropy of the subsets.


Let us study all these terms more practically.

Here we will work through the decision tree algorithm by hand and find the best tree using entropy and information gain (as in the ID3 algorithm). We will predict the right conditions to play.

First, let us look at the dataset. It is the classic play-tennis dataset:

Outlook    Temperature  Humidity  Windy  Play
Sunny      Hot          High      False  No
Sunny      Hot          High      True   No
Overcast   Hot          High      False  Yes
Rainy      Mild         High      False  Yes
Rainy      Cool         Normal    False  Yes
Rainy      Cool         Normal    True   No
Overcast   Cool         Normal    True   Yes
Sunny      Mild         High      False  No
Sunny      Cool         Normal    False  Yes
Rainy      Mild         Normal    False  Yes
Sunny      Mild         Normal    True   Yes
Overcast   Mild         High      True   Yes
Overcast   Hot          Normal    False  Yes
Rainy      Mild         High      True   No

Out of 14 instances, we have 9 yes (event) and 5 no (no event).

So, according to the formula:

Entropy(S) = -(9/14) log2(9/14) - (5/14) log2(5/14) ≈ 0.94

Now the question is: which attribute should we select as the root node?

The process is to calculate the entropy, the information (the weighted average entropy after the split) and the information gain for each attribute.

Selecting Outlook as the root node

Just observe the outlook column.

Sunny has 2 yes and 3 no.

Overcast has 4 yes.

Rainy has 3 yes and 2 no.

Calculating the entropy for each value of Outlook:

Entropy(Sunny) = -(2/5) log2(2/5) - (3/5) log2(3/5) ≈ 0.971

Entropy(Overcast) = 0 (the node is pure: all four instances are yes)

Entropy(Rainy) = -(3/5) log2(3/5) - (2/5) log2(2/5) ≈ 0.971

Information(Outlook) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 ≈ 0.693

Information Gain(Outlook) = 0.94 - 0.693 ≈ 0.247
Similarly, we calculate the entropy, information and information gain for every attribute.

And here is the result:

Attribute      Information   Information Gain
Outlook        0.693         0.247
Temperature    0.911         0.029
Humidity       0.788         0.152
Windy          0.892         0.048
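If you load the 14 rows of the table into the list-of-dicts format used earlier, the helper from the ASM sketch reproduces these numbers (assuming `rows` holds the dataset and the column names match the table):

```python
for attr in ["Outlook", "Temperature", "Humidity", "Windy"]:
    print(attr, round(information_gain(rows, attr, "Play"), 3))
# Outlook 0.247, Temperature 0.029, Humidity 0.152, Windy 0.048
```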

Now, we select the attribute with the maximum information gain.

Here, the information gain of Outlook is the greatest, so Outlook becomes the root node.

In this way, you recalculate the entropy, information and information gain for the remaining attributes at each sub-node, and at the end you get the best-structured tree for your problem.

Here is how the complete tree looks after all the calculations; from it you can easily predict when to play.
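A text sketch of the finished tree (the standard result for this dataset):

Outlook?
├── Sunny    → Humidity? → High: No | Normal: Yes
├── Overcast → Yes
└── Rainy    → Windy?    → True: No | False: Yes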


Pruning

Pruning is nothing but cutting nodes off the tree in order to get an optimal solution; it reduces complexity. A pruned version of the tree above could, for example, keep only the branches that lead to yes.
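In scikit-learn you would typically prune either by limiting growth up front (max_depth, min_samples_leaf) or by cost-complexity pruning afterwards. Here is a minimal sketch, using the built-in iris dataset as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Pre-pruning: cap the depth so the tree cannot overgrow.
shallow = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Post-pruning: compute the cost-complexity path and refit with an alpha.
path = DecisionTreeClassifier(random_state=42).cost_complexity_pruning_path(X_train, y_train)
pruned = DecisionTreeClassifier(ccp_alpha=path.ccp_alphas[-2],
                                random_state=42).fit(X_train, y_train)

print(shallow.score(X_test, y_test), pruned.score(X_test, y_test))
```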


Now, you must be thinking that this is a lot of calculation, right?

No need to worry: Python has built-in libraries that will make your work easy. You just have to import the libraries and run the code; there is no need to do the calculations inside the code cell. But the basics above are things you should know about the decision tree classifier: they help you understand the algorithm behind the code and relate those basics to what the code is doing.

So, let's jump to the decision tree algorithm code section.


Code of the Decision Tree Classifier with output | Snippets
Building a Decision Tree Classifier in Scikit-Learn | Python Code


Link to the decision tree dataset
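The original post showed the code as image snippets; here is an equivalent, minimal sketch. The file name play_tennis.csv and the column names (Outlook, Temperature, Humidity, Windy, Play) are assumptions for illustration; adjust them to match the dataset from the link above.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Load the play-tennis dataset (file name assumed; see the link above).
data = pd.read_csv("play_tennis.csv")

# scikit-learn trees need numeric inputs, so label-encode every column.
encoders = {col: LabelEncoder().fit(data[col]) for col in data.columns}
encoded = data.apply(lambda col: encoders[col.name].transform(col))

X = encoded.drop(columns=["Play"])  # Outlook, Temperature, Humidity, Windy
y = encoded["Play"]                 # yes / no

# criterion="entropy" reproduces the information-gain splits computed above.
model = DecisionTreeClassifier(criterion="entropy")
model.fit(X, y)

print(model.score(X, y))  # training accuracy; 1.0 on this tiny dataset
```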
Now, let us visualize the scikit-learn decision tree.
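A minimal way to do this, assuming the model and encoders from the script above, is scikit-learn's plot_tree with matplotlib:

```python
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

plt.figure(figsize=(10, 6))
plot_tree(
    model,                                        # the classifier fitted above
    feature_names=list(X.columns),                # Outlook, Temperature, Humidity, Windy
    class_names=list(encoders["Play"].classes_),  # e.g. ["no", "yes"]
    filled=True,
)
plt.show()
```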

Conclusion


In this tutorial, you have learned what the decision tree algorithm is, how to build a tree by hand, and how to write the code for one. I hope this tutorial has helped you a lot. If you face any problem, go to the contact section and drop your query there; you will get an answer the same day. Give lots of love to this tutorial and share it with anyone who needs it. You can follow me on Twitter.

Thanks.
