
In PAC learning, sample complexity is defined as follows:

The function $m_\mathcal{H} : (0,1)^2 \rightarrow \mathbb{N}$ determines the sample complexity of learning $\mathcal{H}$: that is, how many examples are required to guarantee a probably approximately correct solution. The sample complexity is a function of the accuracy ($\epsilon$) and confidence ($\delta$) parameters. It also depends on properties of the hypothesis class $\mathcal{H}$ - for example, for a finite class we showed that the sample complexity depends on the log of the size of $\mathcal{H}$.

I am looking for clarification on the following notation:

$m_\mathcal{H} : (0,1)^2 \rightarrow \mathbb{N}$

Ref: Understanding Machine Learning: From Theory to Algorithms, by Shai Shalev-Shwartz and Shai Ben-David.
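
For context, I believe the finite-class result the quoted passage alludes to is the realizable-case bound

$m_\mathcal{H}(\epsilon,\delta) \le \left\lceil \frac{\log(|\mathcal{H}|/\delta)}{\epsilon} \right\rceil,$

so the two arguments are the accuracy and confidence parameters, and the output is a number of training examples.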

It is a function from ordered pairs $(\epsilon,\delta)$ to the natural numbers. In other words, given $\epsilon,\delta \in (0,1)$, $m_{\mathcal{H}}(\epsilon,\delta)$ returns the number of samples sufficient to yield a probably approximately correct classifier. – mathworker21 Aug 25 '22 at 17:37
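
As a rough sketch of how that reads in practice (assuming the finite-class, realizable-case bound above; the class size and the $\epsilon,\delta$ values below are purely illustrative), the notation describes a function that takes $(\epsilon,\delta) \in (0,1)^2$ and returns a natural number:

```python
import math

def sample_complexity_finite(class_size: int, epsilon: float, delta: float) -> int:
    """Upper bound on m_H(epsilon, delta) for a finite hypothesis class
    in the realizable case: ceil(log(|H| / delta) / epsilon)."""
    assert 0 < epsilon < 1 and 0 < delta < 1, "epsilon and delta must lie in (0,1)"
    return math.ceil(math.log(class_size / delta) / epsilon)

# Illustrative values: |H| = 1000, accuracy epsilon = 0.05, confidence delta = 0.05
print(sample_complexity_finite(1000, epsilon=0.05, delta=0.05))  # 199
```

The domain $(0,1)^2$ just says both arguments are strictly between 0 and 1, and the codomain $\mathbb{N}$ says the output is a whole number of training examples.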
