In your case, richdem uses osgeo.gdal to open the raster, and the result is a NumPy array containing the elevation values.
shasta_dem = rd.LoadGDAL('mydem.tiff')
shasta_dem.shape
(60, 100)
print(shasta_dem)
[[170.52 170.74 170.81 ... 195.55 195.72 195.93]
[169.09 169.09 168.69 ... 196.58 196.61 196.73]
[166.79 166.75 165.95 ... 197. 197.29 197.66]
...
[191.41 191.85 192.57 ... 194.74 194.69 194.13]
[186.95 186.49 187.27 ... 195.88 195.9 195.86]
[184.65 184. 184.44 ... 196.99 197.02 197.02]]
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(12, 8))
plt.imshow(shasta_dem, interpolation='none', cmap='jet')
plt.show()

Here, x and y are simply the array indices (60 rows and 100 columns in my example), because Matplotlib knows nothing about georeferenced rasters (look at Display a georeferenced DEM surface in 3D matplotlib).
The typical geospatial coordinate reference system is defined on a Cartesian plane with the 0,0 origin in the bottom left and X and Y increasing as you go up and to the right. But raster data, coming from its image processing origins, uses a different referencing system to access pixels. We refer to rows and columns with the 0,0 origin in the upper left; rows increase as you move down, while columns increase as you go right (from Python affine transforms).
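That row/column-to-world mapping is exactly what a GDAL geotransform encodes. As a minimal sketch, here is the standard six-coefficient formula applied by hand; the gt values below are illustrative (loosely based on the example DEM's origin), not read from the file:

```python
# Sketch: applying a GDAL-style geotransform by hand.
# gt = (x_origin, pixel_width, row_rotation, y_origin, col_rotation, pixel_height)
# pixel_height (gt[5]) is negative because row indices increase downwards.
# These coefficients are illustrative values, not taken from the DEM above.
gt = (202086.577, 35.0, 0.0, 90534.350, 0.0, -35.0)

def pixel_to_world(gt, col, row):
    """Map a (col, row) pixel index to (x, y) world coordinates."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# The upper-left corner of pixel (0, 0) is the raster origin:
print(pixel_to_world(gt, 0, 0))   # (202086.577, 90534.35)
```

This is why the matplotlib extent has its ymin computed with a negative gt[5]: moving down the rows moves south in world coordinates.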
A solution is to take the geotransform matrix into account:
gt = shasta_dem.geotransform
But richdem does not expose the raster dimensions (see def LoadGDAL(filename, no_data=None) in __init__.py), so you cannot compute the real extent of the DEM for Matplotlib.
Therefore, use GDAL directly:
from osgeo import gdal

ds = gdal.Open('mydem.tiff')
data = ds.ReadAsArray()
gt = ds.GetGeoTransform()
# extent = (xmin, xmax, ymin, ymax), the order expected by imshow;
# gt[5] is negative, so ymin is computed from the top edge gt[3]
extent = (gt[0], gt[0] + ds.RasterXSize * gt[1],
          gt[3] + ds.RasterYSize * gt[5], gt[3])
print(extent)
(202086.57700025057, 205625.41440703123, 88411.04799999669, 90534.35044407193)
fig, ax = plt.subplots(figsize=(12, 8))
img = ax.imshow(data, extent=extent, origin='upper', cmap='jet')
plt.show()
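If you want to sanity-check the extent, it can be recomputed with plain arithmetic from the geotransform and the raster size. The sketch below uses the values printed above; the pixel size (~35.39 map units) is inferred from that extent and the 100x60 shape, so treat it as approximate:

```python
# Recomputing the Matplotlib extent from a geotransform and raster size alone.
# gt is assembled from the example's origin; the pixel size is inferred from
# the printed extent, so it is an approximation, not a value read from the file.
px = 35.3883740678066
gt = (202086.57700025057, px, 0.0, 90534.35044407193, 0.0, -px)
xsize, ysize = 100, 60  # columns, rows (shasta_dem.shape reversed)

left   = gt[0]
right  = gt[0] + xsize * gt[1]
top    = gt[3]
bottom = gt[3] + ysize * gt[5]  # gt[5] < 0, so this moves south of the origin

extent = (left, right, bottom, top)  # (xmin, xmax, ymin, ymax) for imshow
print(extent)
```

With origin='upper', imshow then draws row 0 at the top of the axes, which matches the raster's upper-left origin.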
