I'm trying to write a Python script that runs a geoalgorithm. What surprises me is the following:
- I first test the algorithm through the QGIS (2.8) interface; in my case, the GRASS interpolator v.surf.idw.
- I see that the solution is sufficiently good with a certain set of parameters.
Then, I run the same algorithm with the same parameters from a Python script. In my case:
out_ras = processing.runalg("grass:v.surf.idw", vl, 12, 2, "field_3", False, "%f , %f, %f, %f " % (xmin, xmax, ymin, ymax), 0.5, -1, 0.001, fileoutput)
where:
- vl is the point vector layer
- field_3 is the field whose values are to be interpolated
- fileoutput is the output raster file
- (xmin, xmax, ymin, ymax) is the extent of my layer
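For reference, the parameter list (and the order runalg expects) can be printed from the QGIS Python console; this just uses processing's own help command for the same algorithm id as above:

import processing
# Print the parameters that grass:v.surf.idw expects, in order
processing.alghelp("grass:v.surf.idw")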
This setting (which works perfectly when launched from the QGIS interface) produces a NoData raster (only one cell). It seems that the algorithm does not recognize the input vector. I've also checked the CRS of the layer (with vl.crs().authid()) and everything looks fine.
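For anyone reproducing this, a minimal sanity check of the input layer from the Python console (assuming vl is the loaded point layer) would be:

# All of these are standard QgsVectorLayer calls
print vl.isValid()            # should be True
print vl.featureCount()       # should be > 0
print vl.crs().authid()       # e.g. 'EPSG:4326'
print vl.extent().toString()  # should match the extent passed to runalg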
Any help? Has anyone seen different behaviour of the SAME algorithm when run from Python through processing rather than from the QGIS UI?
print vl.isValid(), which should print True. Please tell us if that check is OK. – Germán Carrillo Jul 15 '15 at 14:43

import processing
# Define the data source
vl = iface.activeLayer()
if vl.isValid(): print 'Layer is valid!'
fileoutput = r"C:\Users\iacopo.borsi\Desktop\rastout.tif"  # raw string so the backslashes are not treated as escapes
ext = vl.extent()
xmin = ext.xMinimum()
ymin = ext.yMinimum()
xmax = ext.xMaximum()
ymax = ext.yMaximum()
myfield = 'Z'
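A minimal sketch of what the call then looks like with these variables (the extent string without spaces is an assumption, not something tested here; the remaining parameter values are the same as in the question):

extent = "%f,%f,%f,%f" % (xmin, xmax, ymin, ymax)
out_ras = processing.runalg("grass:v.surf.idw", vl, 12, 2, myfield,
                            False, extent, 0.5, -1, 0.001, fileoutput)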
– iaborsi Jul 17 '15 at 08:08

fileInfo = QFileInfo(fileoutput)
baseName = fileInfo.baseName()
rlayer = QgsRasterLayer(fileoutput, baseName)
if rlayer.isValid():
    QgsMapLayerRegistry.instance().addMapLayer(rlayer)
else:
    print 'Raster layer is not valid!'
– iaborsi Jul 17 '15 at 08:08