I had a feature class made up of 1,700,000 polygons. I used GeoPandas to create a GeoDataFrame:
import geopandas as gpd

state = "MD"
state_gdb = r"C:\Projects\Pop_Alloc\{}_Data.gdb".format(state)
join_feat = "{}_Ftprnt_CB_Join".format(state)
bldg_feat_df = gpd.read_file(state_gdb, layer=join_feat)
No problem; it took maybe 5-10 minutes to run. I have another feature class, let's call it 'parcels', with around 2,600,000 polygon features. I tried to do the same thing: make a GeoDataFrame.
parcel_gdb = r"C:\Projects\Pop_Alloc\Parcels_by_state.gdb"
state_parcel = "{}_Parcels_merge".format(state)
parcel_feat_df = gpd.read_file(parcel_gdb, layer=state_parcel)
It has now been running for several hours. Is there a reason for this? Do I simply not have enough memory to create this GeoDataFrame? Is there a way to resolve this issue (generator, chunking)?
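
For concreteness, here is a rough sketch of the sort of chunked read I am imagining (the read_gdb_in_chunks helper and the 250,000 chunk size are made up for illustration; rows=slice(...) is the GeoPandas argument for reading only a window of features):

import fiona
import geopandas as gpd
import pandas as pd

def read_gdb_in_chunks(gdb_path, layer, chunk_size=250_000):
    """Yield one GeoDataFrame per window of features (hypothetical helper)."""
    # Count the features first so we know how many windows to request
    with fiona.open(gdb_path, layer=layer) as src:
        n_features = len(src)
    for start in range(0, n_features, chunk_size):
        # rows=slice(...) tells GeoPandas to read only this range of features
        yield gpd.read_file(gdb_path, layer=layer,
                            rows=slice(start, start + chunk_size))

parcel_gdb = r"C:\Projects\Pop_Alloc\Parcels_by_state.gdb"
state_parcel = "{}_Parcels_merge".format(state)
chunks = read_gdb_in_chunks(parcel_gdb, state_parcel)
parcel_feat_df = pd.concat(chunks, ignore_index=True)

I am not sure whether something like this would actually be faster overall, or whether it would just cap the memory used per chunk and show where the time is going.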