My objective is to read a .txt file of URLs that I have stored and then save the contents of each entry in the .txt file as its own .nc file. This is my first time using Git Bash and I think I have the basic loop constructed, but I'm not sure how to curl each URL from the .txt file and write it out under the appropriate name. Below I have included a minimal example:
Input (mytextfile.txt):
https://data.pacificclimate.org/data/downscaled_gcms/tasmax_day_BCCAQv2+ANUSPLIN300_MRI-CGCM3_historical+rcp45_r1i1p1_19500101-21001231.nc.nc?tasmax[0:55114][152:152][290:290]
https://data.pacificclimate.org/data/downscaled_gcms/tasmax_day_BCCAQv2+ANUSPLIN300_GFDL-ESM2G_historical+rcp45_r1i1p1_19500101-21001231.nc.nc?tasmax[0:55114][152:152][290:290]
https://data.pacificclimate.org/data/downscaled_gcms/tasmax_day_BCCAQv2+ANUSPLIN300_HadGEM2-ES_historical+rcp45_r1i1p1_19500101-21001231.nc.nc?tasmax[0:55114][152:152][290:290]
Code:
for url in $(cat mytextfile.txt); do
    curl --globoff "$url" > FileNameGoesHere.nc   # this is the part where I'm stuck on naming
done
Desired output (an individual .nc file for each URL):
tasmax45_MRI-CGCM3.nc
tasmax45_GFDL-ESM2G.nc
tasmax45_HadGEM2-ES.nc
So, as I understand it, the script currently reads each line of my text file, runs curl on it, and writes the result out with >, but I want each output file to get a particular name (something like the sketch below is what I'm picturing).
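Here is a rough sketch of the kind of thing I'm imagining, where the output name is built out of pieces of the URL itself. I'm assuming the model name always sits between ANUSPLIN300_ and _historical and that the scenario number follows rcp, which holds for my three example URLs but maybe not in general:

while IFS= read -r url; do
    [ -z "$url" ] && continue                                                  # skip blank lines
    model=$(echo "$url" | sed -E 's/.*ANUSPLIN300_([^_]+)_historical.*/\1/')   # e.g. MRI-CGCM3
    rcp=$(echo "$url" | sed -E 's/.*rcp([0-9]+)_.*/\1/')                       # e.g. 45
    out="tasmax${rcp}_${model}.nc"                                             # e.g. tasmax45_MRI-CGCM3.nc
    curl --globoff -o "$out" "$url"
done < mytextfile.txt

I'm not sure this is the idiomatic way to pull the name out of the URL, but it at least shows the naming scheme I'm after.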
If I were using Python, something like zip(urls, out_names) is what I would reach for (not sure whether Bash has an equivalent). Additionally, sometimes when I curl these URLs the contents don't download fully (e.g. if the file is 500 kB, it sometimes comes down as 450 kB because of connection errors), so if someone is interested in helping me add some functionality to redownload when that happens, that would be great, but it isn't necessary.
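For the zip-style idea, the closest thing I can picture is keeping a second file of output names and gluing the two files together line by line with paste (mynames.txt here is a hypothetical file whose N-th line is the name for the N-th URL), combined with curl's --retry and --fail options plus an until loop as a crude guard against failed downloads. I'm not certain this actually catches a silently truncated transfer, since curl can exit 0 if the server never announces the full size, but it is the sort of thing I have in mind:

paste mytextfile.txt mynames.txt | while IFS=$'\t' read -r url out; do
    # --fail makes curl return a non-zero exit code on HTTP errors,
    # and --retry re-attempts transient failures; the until loop keeps
    # re-running curl until it finally reports success.
    until curl --globoff --fail --retry 5 -o "$out" "$url"; do
        echo "Download of $out failed, retrying..." >&2
        sleep 5
    done
done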