[James Version] Calibration bias
Noticed that the code would always converge to a local-minimum result that was slightly off. I pulled up Surabhi's original code and this overconfidence did not show up there; her version actually gave pretty nice results, like the paper reported.
// James GA running 10 times on a set of k=3
AE | -1.67227 -0.0061265 -1.50538
RT | -1.68515 -0.0415931 -1.51129 0.104616 0.0566525 -0.20819
#0 | -1.67488 -0.0169397 -1.50817 0.104666 0.0566025 -0.208206
#1 | -1.6743 -0.0151142 -1.50812 0.104665 0.0566026 -0.20817
#2 | -1.67427 -0.0150438 -1.50784 0.104666 0.0566025 -0.208184
#3 | -1.67514 -0.0177074 -1.50811 0.104666 0.0566028 -0.208177
#4 | -1.67502 -0.017345 -1.50809 0.104666 0.0566026 -0.208183
#5 | -1.67452 -0.0158027 -1.50786 0.104665 0.0566025 -0.208178
#6 | -1.67653 -0.0220251 -1.50832 0.104661 0.0566308 -0.208189
#7 | -1.67617 -0.0209453 -1.50839 0.104666 0.0566027 -0.208215
#8 | -1.67564 -0.0192785 -1.50837 0.104666 0.0566029 -0.208177
#9 | -1.67427 -0.0150508 -1.50803 0.104665 0.0566027 -0.208204
EP | -1.67427 -0.0150508 -1.50803 0.104665 0.0566027 -0.208204
// Surabhi GA running 10 times on a set of k=3
AE | -1.66748 -0.0259636 -1.30091
RT | -1.72157 -0.0811713 -1.44921 0.030697 -0.121473 -0.18711
#0 | -1.69412 -0.0119396 -1.46087 -0.00838714 -0.07911 -0.185709
#1 | -1.6971 -0.0144251 -1.4505 -0.00792694 -0.110534 -0.182317
#2 | -1.67948 -0.017627 -1.46256 -0.0192463 -0.0716537 -0.229937
#3 | -1.6899 -0.0142955 -1.46161 -0.0155122 -0.0746916 -0.198319
#4 | -1.69344 -0.0131366 -1.45671 -0.0155376 -0.0893379 -0.188515
#5 | -1.6858 -0.0144514 -1.44562 -0.00427915 -0.126124 -0.20938
#6 | -1.68924 -0.0185381 -1.45134 -0.01437 -0.106472 -0.207014
#7 | -1.69797 -0.0106369 -1.45688 -0.0153039 -0.0878606 -0.173418
#8 | -1.70515 -0.00902027 -1.45315 -0.00687636 -0.101897 -0.153945
#9 | -1.68704 -0.0177302 -1.45533 -0.0178133 -0.0936731 -0.210895
EP | -1.69192 -0.0141801 -1.45546 -0.0125253 -0.0941353 -0.193945
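To put a number on the overconfidence, here is a quick NumPy check over the #0-#9 rows from both tables (a diagnostic sketch, not part of either codebase; the AE/RT/EP rows are left out since their meaning isn't stated here). The per-parameter standard deviation across restarts is consistently smaller for the James runs, i.e. his GA's restarts collapse onto nearly the same point while Surabhi's retain real spread:

```python
import numpy as np

# 10 runs x 6 parameters, copied from the #0-#9 rows of each table above.
james_runs = np.array([
    [-1.67488, -0.0169397, -1.50817, 0.104666, 0.0566025, -0.208206],
    [-1.6743,  -0.0151142, -1.50812, 0.104665, 0.0566026, -0.20817],
    [-1.67427, -0.0150438, -1.50784, 0.104666, 0.0566025, -0.208184],
    [-1.67514, -0.0177074, -1.50811, 0.104666, 0.0566028, -0.208177],
    [-1.67502, -0.017345,  -1.50809, 0.104666, 0.0566026, -0.208183],
    [-1.67452, -0.0158027, -1.50786, 0.104665, 0.0566025, -0.208178],
    [-1.67653, -0.0220251, -1.50832, 0.104661, 0.0566308, -0.208189],
    [-1.67617, -0.0209453, -1.50839, 0.104666, 0.0566027, -0.208215],
    [-1.67564, -0.0192785, -1.50837, 0.104666, 0.0566029, -0.208177],
    [-1.67427, -0.0150508, -1.50803, 0.104665, 0.0566027, -0.208204],
])
surabhi_runs = np.array([
    [-1.69412, -0.0119396,  -1.46087, -0.00838714, -0.07911,   -0.185709],
    [-1.6971,  -0.0144251,  -1.4505,  -0.00792694, -0.110534,  -0.182317],
    [-1.67948, -0.017627,   -1.46256, -0.0192463,  -0.0716537, -0.229937],
    [-1.6899,  -0.0142955,  -1.46161, -0.0155122,  -0.0746916, -0.198319],
    [-1.69344, -0.0131366,  -1.45671, -0.0155376,  -0.0893379, -0.188515],
    [-1.6858,  -0.0144514,  -1.44562, -0.00427915, -0.126124,  -0.20938],
    [-1.68924, -0.0185381,  -1.45134, -0.01437,    -0.106472,  -0.207014],
    [-1.69797, -0.0106369,  -1.45688, -0.0153039,  -0.0878606, -0.173418],
    [-1.70515, -0.00902027, -1.45315, -0.00687636, -0.101897,  -0.153945],
    [-1.68704, -0.0177302,  -1.45533, -0.0178133,  -0.0936731, -0.210895],
])

# Std across restarts, per parameter: a rough measure of how tightly
# the GA's independent runs cluster around one solution.
james_spread = james_runs.std(axis=0)
surabhi_spread = surabhi_runs.std(axis=0)
print("James  :", james_spread)
print("Surabhi:", surabhi_spread)
```

For every parameter the James spread comes out below the Surabhi spread, and for most parameters by orders of magnitude, which is the "always converges to the same slightly-off local minimum" behavior in numeric form.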
Edited by Darren Tsai