
Mixture of art and science

Working with people in machine learning was surprising at first.

I expected, and found, a lot of knowledge of math, statistics, and probability. As an example, I remember discussing the numerical stability of the "Shannon entropy" form versus the variance form.

Going from:

$$
\begin{matrix}
p_{win} = \frac{1}{1 + \exp(\gamma_{opp}-\gamma_{ref})} \\
\mathrm{H} = -p_{win} \times \log_2(p_{win}) - (1-p_{win}) \times \log_2(1-p_{win})
\end{matrix}
$$

to:

$$
\begin{matrix}
\log(variance_{win}) = -\mathrm{log1pexp}(\gamma_{opp}-\gamma_{ref}) - \mathrm{log1pexp}(\gamma_{ref}-\gamma_{opp}) \\
variance_{win} = \exp(\log(variance_{win})) \\
\mathrm{H} = \mathrm{norm}(variance_{win})
\end{matrix}
$$

and taking

$$
\mathrm{log1pexp}(x) = \begin{cases} \log(1+\exp x) & x \leq 40 \\ x & x > 40 \end{cases}
$$
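
To make the difference concrete, here is a small sketch of both computations in Python/NumPy. The function names, the use of NumPy, and the final 4Γ— scaling standing in for `norm()` are my assumptions; only the formulas themselves follow the definitions above.

```python
import numpy as np

def log1pexp(x):
    # Stable log(1 + exp(x)): above the cutoff (40, as in the piecewise
    # definition above) log(1 + exp(x)) is numerically equal to x.
    x = np.asarray(x, dtype=float)
    return np.where(x > 40, x, np.log1p(np.exp(np.minimum(x, 40))))

def entropy_naive(gamma_opp, gamma_ref):
    # Shannon entropy computed directly from p_win; breaks down once the
    # rating gap is large enough that p_win rounds to exactly 0 or 1,
    # because 0 * log2(0) turns into NaN.
    p_win = 1.0 / (1.0 + np.exp(gamma_opp - gamma_ref))
    return -p_win * np.log2(p_win) - (1.0 - p_win) * np.log2(1.0 - p_win)

def uncertainty_variance(gamma_opp, gamma_ref):
    # Variance form: p_win * (1 - p_win), but computed in log space with
    # log1pexp, so extreme gaps only ever underflow smoothly toward 0.
    d = gamma_opp - gamma_ref
    log_variance = -log1pexp(d) - log1pexp(-d)
    variance = np.exp(log_variance)
    # Assumed norm(): scale so an even position (p_win = 0.5, variance = 0.25)
    # maps to 1.0, the same range as the binary entropy.
    return 4.0 * variance
```

For a large rating gap, e.g. `entropy_naive(0.0, 50.0)`, the naive form already returns NaN from the `0 Γ— log2(0)` term, while `uncertainty_variance(0.0, 50.0)` just underflows smoothly toward 0.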

The unexpected part was this:

[Figure: Stockfish tests]