High Accurate and a Variant of k-fold Cross Validation Technique for Predicting the Decision Tree Classifier Accuracy
Creators
- 1. Department of Computer Science, Dravidian University, Kuppam, India
Description
In machine learning, the data used is often a more important determinant of performance than the program logic. Large and moderately sized datasets can yield robust, high classification accuracies, whereas small and very small datasets cannot; in particular, only large training datasets reliably produce robust decision tree classification results. Results obtained from a single training/testing split are not reliable. Cross validation addresses this by training and validating on many random folds of the same dataset: applying the same algorithm to different training/validation pairs is necessary to obtain reliable, statistically sound classification results. The existing k-fold cross validation technique uses such a cross validation plan to overcome the single-split problem and to improve decision tree classification accuracy estimates. This paper proposes a new cross validation technique, called prime fold, which is tested thoroughly and verified on many benchmark UCI machine learning datasets. The experiments show that the prime fold based decision tree classification accuracies are considerably better than those obtained with existing techniques.
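The abstract does not spell out how the prime fold variant is constructed, so as a point of reference the sketch below shows only the conventional k-fold cross validation baseline it builds on, using scikit-learn's DecisionTreeClassifier on the Iris dataset (a UCI benchmark). The dataset choice, k = 10, and the stratified splitter are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of conventional k-fold cross validation for a decision tree,
# the baseline the abstract compares against. Dataset and k are assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # Iris stands in for a UCI benchmark dataset

clf = DecisionTreeClassifier(random_state=0)

# 10 stratified folds: each fold serves once as the validation set while the
# remaining nine folds are used for training, giving ten accuracy estimates.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")

print(f"Per-fold accuracies: {scores.round(3)}")
print(f"Mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Averaging the per-fold accuracies (and reporting their spread) is what makes the estimate more reliable than a single train/test split, which is the motivation the description gives for using cross validation in the first place.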
Files

Name | Size | MD5 |
---|---|---|
C84030110321.pdf | 283.0 kB | md5:01d05e89741dd948ba73b6398cd5df9c |
Additional details
Related works
- Is cited by: Journal article, ISSN 2278-3075
Subjects
- ISSN: 2278-3075
- Retrieval Number: 100.1/ijitee.C84030110321