I have been editing the locations of some points in QGIS. The scale that I am using varies between 1:6000 and 1:12000, depending on the area that I am looking at. I have been seeing some irregularities with some of the points that I have moved: they appear to be in a different location than where I placed them. Is it possible that when I move a point at 1:12000 scale its location is stored at a lower precision than when I move a point at 1:6000 scale? For example, at 1:12000 scale there are n significant figures, but at 1:6000 scale there are n+x significant figures. That is to say, does the number of significant figures used to describe a point's location vary based on the scale the point was created/altered at?
-
For sure the accuracy depends on how closely you have zoomed when digitizing, because the position of the vertex is captured from the screen pixel that is clicked and then converted into geographical units. As usual, a great number of decimals does not mean that the data are accurate. – user30184 May 02 '17 at 14:00
-
I realize the accuracy of my point placement could vary depending on the scale. I am wondering whether or not the precision of the stored spatial data varies depending on the scale. – Kingfisher May 02 '17 at 14:06
-
So, I realize that the accuracy of my input point can vary based on a host of variables. I think you may believe accuracy and precision are the same, but precision is different from accuracy. Precision can be equated to the number of significant figures a data point is measured to, whereas accuracy is how close the measurement is to reality. I am looking for someone who can answer based on technical knowledge of the program or first-hand experience. – Kingfisher May 02 '17 at 15:00
-
Here's a test to demonstrate the problem. At scale 1:12000, create a point via click. Move the cursor, bring it back to the same position, create another point. Do it a third time. Do any of these points have exactly the same coordinate values? (If they do, you got lucky!) – mkennedy May 02 '17 at 18:07
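One way to run this test beyond the rounded values shown in the GUI: a minimal sketch for the QGIS 3 Python console, assuming the freshly digitized points sit in the currently active point layer, that prints each point's coordinates at full double precision so the three clicks can be compared digit by digit.

```python
# Minimal sketch for the QGIS 3 Python console; assumes the active
# layer is the point layer holding the test points.
layer = iface.activeLayer()
for feature in layer.getFeatures():
    point = feature.geometry().asPoint()
    # repr() of a Python float shows every significant digit of the
    # underlying 64-bit double, unlike the rounded display in the GUI.
    print(feature.id(), repr(point.x()), repr(point.y()))
```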
-
Digitizing by hand is mouse movements and click events. It seems that gamers are the best experts in that area, and for example https://www.reddit.com/r/GlobalOffensive/comments/1x2a3l/here_is_how_to_get_the_most_out_of_your_mouse/?st=j27vaqjd&sh=8c4f79ec is interesting reading. – user30184 May 02 '17 at 18:13
-
A typical laptop screen may be 31 cm / 1920 pixels wide, which makes 60 pixels/cm. At 1:6000 scale that means the distance between two adjacent pixels corresponds to about 1 meter on the ground, and at 1:12000 scale about 2 meters. That is the smallest mouse movement that can be observed. The computer can register even sub-pixel movements if the mouse has a high enough DPI, but they can't be shown on screen. – user30184 May 02 '17 at 18:30
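That arithmetic is easy to reproduce; a plain-Python sketch, using the 31 cm / 1920 px screen from the comment as an assumed example rather than anything queried from QGIS:

```python
# Ground distance covered by one screen pixel at a given map scale.
screen_width_m = 0.31           # assumed physical screen width
screen_width_px = 1920          # assumed horizontal resolution
pixel_size_m = screen_width_m / screen_width_px   # ~0.16 mm per pixel

for scale in (6000, 12000):
    ground_per_pixel = pixel_size_m * scale
    print(f"1:{scale}: {ground_per_pixel:.2f} m of ground per pixel")
# 1:6000  -> ~0.97 m, 1:12000 -> ~1.94 m, matching the comment above
```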
-
@Kingfisher did you read my answer? – mgri May 08 '17 at 14:55
-
@mgri I did and I gave you a +1. I didn't accept it because I was looking for more documentation with citation on the precision that could be expected from the program itself. Thank you for your answer. – Kingfisher May 08 '17 at 18:26
-
@Kingfisher oh well, thanks for the feedback! My question was purely about understanding whether I was on the right path, and I appreciate your comment (I will edit the answer if I find something of interest, but a detailed example of what you expect would be great!) – mgri May 08 '17 at 18:36
1 Answer
Short Answer: No, if both accuracy and precision remain the same.
Long Answer: You are confusing precision with accuracy.
According to this page from Wikipedia ([...] are added by me for the sake of clarity):
[...] the general term accuracy is used to describe the closeness of a measurement to the true value [...] and precision is the closeness of agreement among a set of results.
From the definition above, it follows that you can talk about accuracy (and then precision) if you know the true value of the measurement; otherwise you can only talk about precision. See the image below for a clearer understanding:
Having said that, let us assume that you know the true value (in your case, it should be the starting point): the more closely you have zoomed in on it when digitizing, the more accurate your new measurement (i.e. the new point) will be. The degree of precision will depend on how close together the new points are (assuming that they need to be close at the end of digitizing).
Finally, it is not completely clear to me how you digitized the points. I can think of two possibilities:
- you manually moved them by using the Digitizing Tools, literally dragging the features. It is very unlikely that accuracy and precision are maintained this way (see above), so the perceived quality of the result will depend on the map scale at which you are looking at it (the smaller the scale, i.e. the more zoomed out, the better the result will look);
- you manually entered the new coordinates: the number of decimal places that you entered may affect both accuracy and precision, depending on the latitude of the point. This happens because the Earth is not a sphere but is generally approximated as an ellipsoid: you will find a very well-written explanation in this answer to a related question. From a general point of view, the error will be smaller or larger depending on which precision you need (millimeters, centimeters, meters, and so on); see the rough sketch just after this list.
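As a rough illustration of the magnitudes involved, a back-of-envelope sketch assuming a spherical Earth; the circumference constant and the 45°N example latitude are my own illustrative choices, and the linked answer covers the proper ellipsoidal treatment:

```python
import math

EARTH_CIRCUMFERENCE_M = 40_075_000  # approximate equatorial circumference

def meters_per_decimal_place(places, latitude_deg):
    """Ground distance of a change in the last of `places` decimal
    places of a longitude value, at the given latitude."""
    meters_per_degree = (EARTH_CIRCUMFERENCE_M / 360
                         * math.cos(math.radians(latitude_deg)))
    return meters_per_degree * 10 ** -places

for places in (4, 5, 6):
    print(f"{places} decimal places ~ "
          f"{meters_per_decimal_place(places, 45.0):.2f} m at 45°N")
# 4 -> ~7.87 m, 5 -> ~0.79 m, 6 -> ~0.08 m
```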
In conclusion, the quality of the result should only depend on (in no particular order):
- how well you know the position of the original point;
- the way in which you edit the point;
- the accuracy with respect to the position of the original point;
- the precision you require;
- the latitude of the point;
and not on the scale you are using.
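For what it's worth, the reason the stored precision cannot depend on zoom level is that, as far as I know, QGIS holds coordinates in 64-bit IEEE 754 doubles (e.g. in QgsPointXY), which carry the same ~15-17 significant decimal digits however the value was produced. A small Python illustration of that fixed-width storage:

```python
import struct

x = 123456.78901234567   # a coordinate digitized at *any* scale
packed = struct.pack("<d", x)   # a double is always exactly 8 bytes
print(len(packed), "bytes:", packed.hex())
print("round-trips exactly:", struct.unpack("<d", packed)[0] == x)
```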
