In PAC learning, the sample complexity is defined as follows:
> The function $m_\mathcal{H} : (0,1)^2 \rightarrow \mathbb{N}$ determines the sample complexity of learning $\mathcal{H}$: that is, how many examples are required to guarantee a probably approximately correct solution. The sample complexity is a function of the accuracy ($\epsilon$) and confidence ($\delta$) parameters. It also depends on properties of the hypothesis class $\mathcal{H}$ - for example, for a finite class we showed that the sample complexity depends on log the size of $\mathcal{H}$.
I am looking for clarification on the following notation:
$m_\mathcal{H} : (0,1)^2 \rightarrow \mathbb{N}$
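
To make the signature concrete: $m_\mathcal{H}$ takes a pair $(\epsilon, \delta)$, each in the open interval $(0,1)$, and returns a natural number, namely how many i.i.d. examples suffice for a PAC guarantee. Here is a minimal Python sketch under the finite-class bound the quote alludes to, $m_\mathcal{H}(\epsilon, \delta) \le \lceil \ln(|\mathcal{H}|/\delta)/\epsilon \rceil$; the names `sample_complexity` and `hypothesis_class_size` are just illustrative, and $|\mathcal{H}|$ enters as a fixed property of the class rather than as part of the $(0,1)^2$ domain.

```python
import math

def sample_complexity(epsilon: float, delta: float, hypothesis_class_size: int) -> int:
    """m_H(epsilon, delta) for a fixed finite class H, via the bound
    ceil(ln(|H| / delta) / epsilon) from the realizable finite-class analysis."""
    assert 0 < epsilon < 1 and 0 < delta < 1  # the domain is (0, 1)^2
    # With this many i.i.d. examples, with probability >= 1 - delta the
    # learned (ERM) hypothesis has true error <= epsilon in the realizable setting.
    return math.ceil(math.log(hypothesis_class_size / delta) / epsilon)

# Example: |H| = 1000, epsilon = 0.1, delta = 0.05
print(sample_complexity(0.1, 0.05, 1000))  # 100
```

So for a fixed finite $\mathcal{H}$, the map $(\epsilon, \delta) \mapsto$ `sample_complexity(epsilon, delta, |H|)` is exactly a function from $(0,1)^2$ to $\mathbb{N}$: a smaller $\epsilon$ (stricter accuracy) or smaller $\delta$ (higher confidence) demands more examples.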
Ref: *Understanding Machine Learning: From Theory to Algorithms*, Shai Shalev-Shwartz and Shai Ben-David.