Pythonic way to compute sensitivity and specificity
--------------------------------------------------
Hire the world's top talent on demand or become one of them at Toptal: https://topt.al/25cXVn
and get a $2,000 discount on your first invoice
--------------------------------------------------
Music by Eric Matyas
https://www.soundimage.org
Track title: Magic Ocean Looping
--
Chapters
00:00 Pythonic Way To Compute Sensitivity And Specificity
01:12 Accepted Answer Score 6
01:41 Answer 2 Score 2
01:59 Answer 3 Score 1
02:20 Answer 4 Score 0
02:35 Thank you
--
Full question
https://stackoverflow.com/questions/4193...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #numpy
#avk47
ACCEPTED ANSWER
Score 6
Focusing on compactness with NumPy's vectorized ufuncs, broadcasting and array slicing, here's one approach -
# encode each pixel as 0 = TN, 1 = FN, 2 = FP, 3 = TP, then count each code
C = (((mask==255)*2 + (truth==255)).reshape(-1,1) == range(4)).sum(0)
sensitivity, specificity = C[3]/C[1::2].sum(), C[0]/C[::2].sum()
Alternatively, going a bit more NumPythonic, we could get the counts C with np.bincount -
C = np.bincount(((mask==255)*2 + (truth==255)).ravel())
To make sure we get floating-point numbers as the ratios on Python 2, we need to add from __future__ import division at the start (not needed on Python 3).
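For reference, here is a minimal end-to-end sketch of the bincount variant; the sample mask and truth values are made up, and any 0/255 arrays of the same shape work the same way.
import numpy as np

# hypothetical 0/255 binary images of the same shape
mask  = np.array([[255,   0], [255, 255]], dtype=np.uint8)
truth = np.array([[255, 255], [  0, 255]], dtype=np.uint8)

# per-pixel code: 0 = TN, 1 = FN, 2 = FP, 3 = TP
C = np.bincount(((mask==255)*2 + (truth==255)).ravel(), minlength=4)
tn, fn, fp, tp = C
sensitivity, specificity = tp/(tp+fn), tn/(tn+fp)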
ANSWER 2
Score 2
Test for the same shape:
a = np.random.rand(4,4)
b = np.random.rand(4,4)
print(a.shape == b.shape)  # prints True
Then count the four outcome categories:
# assuming you have scaled mask and truth to contain only 0 or 1 (divide by 255)
true_positive = np.sum(mask * truth)
true_negative = len(mask.flat) - np.count_nonzero(mask + truth)
false_positive = np.count_nonzero(mask - truth == 1)
false_negative = np.count_nonzero(truth - mask == 1)
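Put together as a runnable sketch, the two rates then follow from those counts; the sample arrays below are made up and already scaled to 0/1 as the answer assumes.
import numpy as np

mask  = np.array([[1, 0], [1, 1]])   # hypothetical predictions, scaled to 0/1
truth = np.array([[1, 1], [0, 1]])   # hypothetical ground truth, scaled to 0/1

true_positive  = np.sum(mask * truth)
true_negative  = len(mask.flat) - np.count_nonzero(mask + truth)
false_positive = np.count_nonzero(mask - truth == 1)
false_negative = np.count_nonzero(truth - mask == 1)

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)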
ANSWER 3
Score 1
The four arrays can be built and stacked like this:
categories = np.dstack((mask & truth > 0, mask > truth, mask < truth, mask | truth == 0))
then the counts:
tp, fp, fn, tn = categories.sum((0, 1))
finally the results:
sensitivity, specificity = tp/(tp+fn), tn/(tn+fp)
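A self-contained version of the same idea might look like this; the sample 0/255 arrays are made up, and the precedence of &, | and the comparisons already groups the expressions the way the one-liner needs.
import numpy as np

mask  = np.array([[255,   0], [255, 255]], dtype=np.uint8)
truth = np.array([[255, 255], [  0, 255]], dtype=np.uint8)

# one boolean plane per category: TP, FP, FN, TN
categories = np.dstack((mask & truth > 0, mask > truth, mask < truth, mask | truth == 0))
tp, fp, fn, tn = categories.sum((0, 1))

sensitivity, specificity = tp/(tp+fn), tn/(tn+fp)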
ANSWER 4
Score 0
My idea is to use collections.Counter from the standard library.
import collections

# building the pair list (could be shortened to a one-liner list comprehension, if you want)
pair_list = []
for y in range(mask.shape[0]):
    for x in range(mask.shape[1]):
        pair_list.append((mask[y, x], truth[y, x]))

# getting the Counter object
counter = collections.Counter(pair_list)

true_positive = counter.get((255, 255), 0)   # default 0 covers categories that never occur
false_positive = counter.get((255, 0), 0)
false_negative = counter.get((0, 255), 0)
true_negative = counter.get((0, 0), 0)
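For reference, a more compact variant of the same idea, building the pair counts with zip over the raveled arrays and finishing with the two rates; the sample arrays below are made up.
import collections
import numpy as np

mask  = np.array([[255,   0], [255, 255]], dtype=np.uint8)
truth = np.array([[255, 255], [  0, 255]], dtype=np.uint8)

# .tolist() yields plain Python ints, so keys like (255, 255) match exactly
counter = collections.Counter(zip(mask.ravel().tolist(), truth.ravel().tolist()))

tp = counter.get((255, 255), 0)
fp = counter.get((255, 0), 0)
fn = counter.get((0, 255), 0)
tn = counter.get((0, 0), 0)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)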