Find-S is used to find a maximally specific hypothesis out of all possible hypotheses.
In this blog we are going to see one more example of the Find-S algorithm. We have already discussed the Find-S algorithm; you can read it here. So let's start.
Here is the data set which we are going to use:
| Citations | Size | In Library | Price | Editions | Buy |
|-----------|------|------------|-------|----------|-----|
| SOME | SMALL | NO | AFFORDABLE | MANY | NO |
| MANY | BIG | NO | EXPENSIVE | ONE | YES |
| SOME | BIG | ALWAYS | EXPENSIVE | FEW | NO |
| MANY | MEDIUM | NO | EXPENSIVE | MANY | YES |
| MANY | SMALL | NO | AFFORDABLE | MANY | YES |
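If you want to follow along in code, the table above can be written as a small Python data structure (the name `training_data` and the tuple layout are my own choices, not part of the original post):

```python
# Training data from the table above: each row is the attribute tuple
# (Citations, Size, In Library, Price, Editions) plus the Buy label.
training_data = [
    (("SOME", "SMALL", "NO", "AFFORDABLE", "MANY"), "NO"),
    (("MANY", "BIG", "NO", "EXPENSIVE", "ONE"), "YES"),
    (("SOME", "BIG", "ALWAYS", "EXPENSIVE", "FEW"), "NO"),
    (("MANY", "MEDIUM", "NO", "EXPENSIVE", "MANY"), "YES"),
    (("MANY", "SMALL", "NO", "AFFORDABLE", "MANY"), "YES"),
]
```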
In this training data set we have parameters like Citations, Size, In Library, Price, and Editions. On the basis of these parameters a person decides whether or not to buy the book. So our target attribute is Buy.
In this algorithm our goal is to find the maximally specific hypothesis that can correctly classify our test data.
We start with the most specific hypothesis. Since there are five attributes, we initialize h to:
h0 = (φ, φ, φ, φ, φ)
Now we consider the first training example: x1 = (Some, Small, No, Affordable, Many)
This is a negative training example, so Find-S ignores it and the hypothesis stays the same:
h1 = (φ, φ, φ, φ, φ)
Now we consider the second training example: x2 = (Many, Big, No, Expensive, One)
This is a positive training example. Since every attribute in h1 is still φ, none of the attribute values in h1 covers the corresponding value in x2. Find-S compares each attribute value of the hypothesis with the attribute value of the example: if they match, the hypothesis value is kept; otherwise it is replaced with the next more general constraint.
So each φ in h is replaced by the corresponding attribute value of x2:
h2 = (Many, Big, No, Expensive, One)
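The steps above can be sketched as a short Python function (the name `find_s` and the string markers `"phi"` for the empty constraint and `"?"` for "any value" are my own choices; the training data is repeated here so the sketch is self-contained):

```python
def find_s(examples):
    """Find-S: start from the most specific hypothesis and
    generalize it just enough to cover each positive example."""
    n = len(examples[0][0])
    h = ["phi"] * n  # most specific hypothesis: h0 = (phi, ..., phi)
    for x, label in examples:
        if label != "YES":
            continue  # negative examples are ignored by Find-S
        for i, value in enumerate(x):
            if h[i] == "phi":
                h[i] = value   # first positive example fills each slot
            elif h[i] != value:
                h[i] = "?"     # mismatch -> generalize to "don't care"
    return tuple(h)

# Training data from the table: (Citations, Size, In Library, Price, Editions), Buy
training_data = [
    (("SOME", "SMALL", "NO", "AFFORDABLE", "MANY"), "NO"),
    (("MANY", "BIG", "NO", "EXPENSIVE", "ONE"), "YES"),
    (("SOME", "BIG", "ALWAYS", "EXPENSIVE", "FEW"), "NO"),
    (("MANY", "MEDIUM", "NO", "EXPENSIVE", "MANY"), "YES"),
    (("MANY", "SMALL", "NO", "AFFORDABLE", "MANY"), "YES"),
]

print(find_s(training_data))  # -> ('MANY', '?', 'NO', '?', '?')
```

Running it on the full table continues the trace past h2: the third and fifth rows generalize Size, Editions, and Price to "?", leaving the final hypothesis (Many, ?, No, ?, ?).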