The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function $f$ with finite domain, there is a distribution $\mu$ over the domain of $f$ such that computing $f$ to error $\epsilon$ against inputs drawn from $\mu$ is just as hard as computing $f$ to error $\epsilon$ on worst-case inputs. Notably, however, the distribution $\mu$ depends on the target error level $\epsilon$: the hard distribution that is tight for bounded error might be trivial to solve to small bias, and the hard distribution that is tight for a small bias level might be far from tight for bounded error...
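The principle above is usually stated as an equality between worst-case randomized and distributional complexity. In a standard formulation (the notation $R_\epsilon(f)$ for the worst-case $\epsilon$-error randomized complexity and $D_\epsilon^\mu(f)$ for the $\epsilon$-error deterministic complexity under input distribution $\mu$ is supplied here for illustration, not taken from the abstract):

```latex
% Yao's minimax principle (standard form):
% the best randomized algorithm on worst-case inputs is exactly as
% costly as the best deterministic algorithm on the hardest distribution.
R_\epsilon(f) \;=\; \max_{\mu} \, D_\epsilon^\mu(f)
```

The abstract's point is that the maximizing $\mu$ on the right-hand side may change as $\epsilon$ varies.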