Scoring in GridSearchCV
Hire the world's top talent on demand or become one of them at Toptal: https://topt.al/25cXVn
--------------------------------------------------
Music by Eric Matyas
https://www.soundimage.org
Track title: Over a Mysterious Island Looping
--
Chapters
00:00 Scoring in GridSearchCV
00:46 Accepted Answer Score 12
01:17 Answer 2 Score 3
02:17 Thank you
--
Full question
https://stackoverflow.com/questions/5253...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #machinelearning #datascience #gridsearch
#avk47
ACCEPTED ANSWER
Score 12
You are basically correct in your assumptions. This parameter dictionary allows the grid search to optimize across each scoring metric and find the best parameters for each score.
However, you can't then have the grid search automatically fit and return the best_estimator_ without choosing which score to use for the refit; instead it will throw the following error:
ValueError: For multi-metric scoring, the parameter refit must be set to a scorer 
key to refit an estimator with the best parameter setting on the whole data and make
the best_* attributes available for that metric. If this is not needed, refit should 
be set to False explicitly. True was passed.
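Below is a minimal sketch of the fix (the estimator, data, and parameter grid are my own illustration, not from the question): with multiple scorers, refit must name the metric used to select best_estimator_, or be set to False.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)

    grid = GridSearchCV(
        SVC(),
        param_grid={"C": [0.1, 1, 10]},
        scoring={"accuracy": "accuracy",
                 "precision": "precision",
                 "recall": "recall"},
        refit="precision",  # select best params by precision, then refit on all data
        cv=5,
    )
    grid.fit(X, y)
    print(grid.best_params_)     # best parameters for the refit metric
    print(grid.best_estimator_)  # available because refit names a scorer key

With refit=False, grid.cv_results_ still reports every metric for every parameter setting; only the best_* attributes and the refitted estimator are unavailable.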
ANSWER 2
Score 3
What is the intent of using these values, i.e. precision, recall, accuracy in scoring?
Just in case your question also includes "What are precision, recall, and accuracy and why are they used?"...
- Accuracy = (number of correct predictions) / (total predictions)
- Precision = (true positives) / (true positives + false positives)
- Recall = (true positives) / (true positives + false negatives)
Here a true positive is a prediction of true that is correct, a false positive is a prediction of true that is incorrect, and a false negative is a prediction of false that is incorrect.
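As a quick illustration of these formulas (the toy labels below are my own, not from the answer), sklearn.metrics provides each of them directly:

    from sklearn.metrics import accuracy_score, precision_score, recall_score

    y_true = [1, 1, 1, 0, 0, 0, 0, 0]
    y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

    # accuracy = 6 correct / 8 total = 0.75
    print(accuracy_score(y_true, y_pred))
    # precision = 2 TP / (2 TP + 1 FP) ~ 0.67
    print(precision_score(y_true, y_pred))
    # recall = 2 TP / (2 TP + 1 FN) ~ 0.67
    print(recall_score(y_true, y_pred))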
Recall and precision are useful metrics when working with unbalanced datasets (i.e., there are many samples with label '0' but far fewer samples with label '1').
Recall and precision also lead to slightly more complex scoring metrics such as F1_score (and Fbeta_score), which are also very useful.
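For instance, using the same toy labels as above (again my own illustration, not from the answer), and noting that F1 is the harmonic mean of precision and recall:

    from sklearn.metrics import f1_score, fbeta_score

    y_true = [1, 1, 1, 0, 0, 0, 0, 0]
    y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

    print(f1_score(y_true, y_pred))               # 2*P*R / (P + R) ~ 0.67
    print(fbeta_score(y_true, y_pred, beta=2.0))  # beta > 1 weights recall more heavily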
Here's a great article explaining how recall and precision work.