I'm currently studying VC-dimensions. Suppose I have the hypothesis class $H = \{ h : \mathbb{R} \rightarrow \{ 0,1 \} \}$, i.e. the set of all classifiers mapping real numbers to the binary labels 0 or 1. If I'm not mistaken, the VC-dimension of this class would be infinite, right? My intuition is that $H$ can shatter arbitrarily large sets of points: no matter which points are chosen, how many there are, or how they are labelled, $H$ contains a classifier that labels all of them correctly. Please correct me if my reasoning is wrong! Thank you.
Just to add: $\mathcal{X} = \mathbb{R}$, but I think that should be obvious – M. Fire Oct 02 '21 at 17:08
1 Answer
You are exactly correct. Since $H$ contains every function from $\mathbb{R}$ to $\{0,1\}$, any finite set of distinct points can be given any labelling by some $h \in H$ (for instance, the indicator function of the points labelled 1), so every finite set is shattered and $\mathrm{VCdim}(H) = \infty$. This is covered in introductory books such as Understanding Machine Learning and Foundations of Machine Learning.
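To make the shattering argument concrete, here is a minimal Python sketch (my own illustration, not from the question or any textbook) that, for any finite set of distinct real points, builds a hypothesis $h : \mathbb{R} \rightarrow \{0,1\}$ realizing an arbitrary labelling, and then checks all $2^n$ labellings for a small example:

```python
from itertools import product

def make_hypothesis(points, labels):
    """Return h: R -> {0,1} that outputs labels[i] on points[i] and 0 elsewhere.

    Such an h belongs to H = {h : R -> {0,1}} because it is simply a
    function from the reals to {0,1} (here, an indicator-style lookup)."""
    table = dict(zip(points, labels))
    return lambda x: table.get(x, 0)

def is_shattered(points):
    """Check that every labelling of `points` is realized by some h in H."""
    n = len(points)
    for labels in product([0, 1], repeat=n):   # all 2^n labellings
        h = make_hypothesis(points, labels)
        if any(h(x) != y for x, y in zip(points, labels)):
            return False
    return True

# Any finite set of distinct reals is shattered, so VCdim(H) is infinite.
print(is_shattered([-2.5, 0.0, 1.3, 7.0, 42.0]))  # True
```

Since this construction works for every $n$, there is no finite upper bound on the size of a shatterable set, which is exactly what $\mathrm{VCdim}(H) = \infty$ means.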
usul