
This is related to a previous question I posted, How to determine projection parameters when customizing a projection.

I am trying to quantitatively evaluate scale distortion associated with choosing different projection center, center azimuth, and scale factor values for a Hotine Oblique Mercator (HOM).

1) Is the following method a reasonable approach?

Using the same concept as the spreadsheet whuber created for evaluating Albers scale distortion, create a spreadsheet filled with Snyder's equations for the HOM (ellipsoid formulas, "alternative B", page 74 in "Map Projections – A Working Manual"). The user inputs the chosen ellipsoid parameters (a and e) and values for the "customized" projection parameters (lat/long of the projection center, centerline azimuth, scale factor, and false easting/northing); the remaining projection constants are then calculated automatically. The spreadsheet also contains cells for each lat/long pair across the projection area (in half-degree increments, or whatever increment is desired), and the scale factor and rectified coordinates at each point are recalculated whenever a projection parameter changes.

The scale factor can then be evaluated numerically in two ways: 1) by computing an overall average and range of scale distortion across the projection region, and 2) by importing the point coordinates and their associated scale factors into ArcMap to create a visual picture of how the scale distortion is distributed. Obviously the results are just a sample and will vary depending on how many lat/long locations are evaluated, but does this sound like a valid methodology? The spreadsheet looks like this:

Example of HOM scale error spreadsheet
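For anyone who would rather cross-check the idea in code than in a spreadsheet, here is a minimal sketch of my own (not the spreadsheet itself) that samples point scale factors for an HOM on a lat/long grid using pyproj, whose Proj.get_factors method exposes PROJ's analytic distortion factors. Every projection parameter and the sample grid below are placeholders rather than my project's actual values:

    # Minimal sketch: sample analytic point scale factors for a Hotine Oblique
    # Mercator on a lat/long grid.  All parameter values are placeholders.
    import numpy as np
    from pyproj import Proj

    hom = Proj(
        proj="omerc",    # PROJ's Hotine Oblique Mercator (azimuth form)
        lat_0=45.0,      # latitude of projection center    (placeholder)
        lonc=-120.0,     # longitude of projection center   (placeholder)
        alpha=30.0,      # azimuth of the center line       (placeholder)
        gamma=30.0,      # rectified-grid angle             (placeholder)
        k_0=0.9999,      # scale factor on the center line  (placeholder)
        ellps="GRS80",
    )

    lats = np.arange(44.0, 46.5, 0.5)      # half-degree sample grid (placeholder extent)
    lons = np.arange(-122.0, -117.5, 0.5)

    scales = []
    for lat in lats:
        for lon in lons:
            f = hom.get_factors(lon, lat)
            # The HOM is conformal, so the Tissot ellipse is (numerically) a circle:
            # tissot_semimajor ~ tissot_semiminor ~ Snyder's point scale factor k.
            scales.append(f.tissot_semimajor)

    scales = np.asarray(scales)
    print(f"mean k = {scales.mean():.10f}")
    print(f"min  k = {scales.min():.10f}")
    print(f"max  k = {scales.max():.10f}")
    print(f"range  = {scales.max() - scales.min():.10f}")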

2) I've also been using a distortion analysis tool created by Michael Braymen that calculates scale (and area and angular) distortion for any given projection using an "asterisk analysis" that approximates a Tissot indicatrix.

The tool's Python script can be viewed, or there is also a PowerPoint available that describes the tool. I have modified the script to create 50-meter asterisk lines (i.e., a 50-meter ellipse "radius") instead of the default 5000 m.
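To make the comparison concrete, here is my own rough sketch of the asterisk idea (not Braymen's actual script): at each test point, shoot short geodesic "spokes" of a known length in several azimuths, project both endpoints, and take the ratio of projected length to true length as the scale in that direction. It uses the same placeholder HOM as the sketch above and the 50 m spoke length:

    # Rough sketch of the finite-difference "asterisk" idea; parameters are placeholders.
    import math
    from pyproj import Geod, Proj

    hom = Proj(proj="omerc", lat_0=45.0, lonc=-120.0, alpha=30.0, gamma=30.0,
               k_0=0.9999, ellps="GRS80")   # same placeholder HOM as above
    geod = Geod(ellps="GRS80")              # must match the projection's ellipsoid
    SPOKE_M = 50.0                          # geodesic length of each asterisk spoke

    def asterisk_scales(lon, lat, n_spokes=16):
        """Approximate the point scale in n_spokes directions by finite differences."""
        x0, y0 = hom(lon, lat)              # projected coordinates of the test point
        ratios = []
        for i in range(n_spokes):
            az = i * 360.0 / n_spokes
            # End point of a short geodesic "spoke" in azimuth az.
            lon1, lat1, _ = geod.fwd(lon, lat, az, SPOKE_M)
            x1, y1 = hom(lon1, lat1)
            ratios.append(math.hypot(x1 - x0, y1 - y0) / SPOKE_M)
        return ratios

    # For a conformal projection the ratios should agree in every direction and
    # approach the analytic scale factor as the spoke length shrinks toward zero.
    r = asterisk_scales(-120.0, 45.0)
    print(min(r), max(r))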

When I compare the results from this tool against what is produced by the spreadsheet method in #1 above, the numbers do not agree very well. For example:

Sampling approximately the same number of locations (~400) across the same projection extents yields:

Avg Scale Error using method under #1 above = 0.9997200465 (0.027995%)
Max "compression" scale error, Method 1 = 0.9994254755 (0.057452%)
Max "expansion" scale error, Method 1 = 1.0006580056 (0.065801%)
Range scale error, Method 1 = 0.0012325301 (0.123253%)

Avg Scale Error using method under #2 above = 1.0001550206 (0.015502%)
Max "compression" scale error, Method 2 = 0.9998956844 (0.010432%)
Max "expansion" scale error, Method 2 = 1.0010584928 (0.105849%)
Range scale error, Method 2 = 0.0011628084 (0.116281%)

Can anyone think of a reason the results would be so different? Can I interpret the scale factor at a point (method 1) as the scale distortion of an "infinitely small circle" at that point?

Also, I am aware of the many discussions on creating Tissot's indicatrices, so I don't need to be pointed to those ... unless there is some vetted tool out there that accurately and quantitatively evaluates distortion for user-defined regional (i.e., not global) areas, accepts the HOM, is easily implemented, and is nearly free :)? Actually, assuming the tool used in Method 2 is accurate, it works great for my purposes. The drawback is that it takes about 9 hours to run on my PC for each evaluation.

fbiles
  • -1 - My initial take on this is that you are more likely to get a clear answer if you provide a more focused question. I can see at least 3 or 4 different questions that could be answered: first you ask whether the first method is reasonable, then you ask a series of questions about the second method, below your error results. This site works best with specific, focused questions, because those yield answers that others who encounter the same problem can use as well. Perhaps splitting this into two or more separate questions would help. – Get Spatial Sep 06 '12 at 08:46
  • 1
  • (1) I haven't seen the code, but the algorithms described in the PPT are hugely inefficient. (The "asterisks" can be replaced by a simple formula and the points should be intelligently selected, not random.) The calculation should complete in fractions of a second, not hours! (2) The spreadsheet approach is good in spirit, but as a practical matter it's too easy to make a tiny mistake and hard to identify or correct it. Anyone with the skills to conceive of and make such a spreadsheet should be using more powerful programming tools supporting better development and debugging facilities. – whuber Sep 06 '12 at 14:33
  • @Get Spatial - Thanks for the tip. Indeed, there are 2 main questions: 1) Is the spreadsheet idea a valid method for assessing scale distortion?, and 2) when I try to validate the method against results from another tool that also evaluates scale distortion, the numbers are quite different and I was wondering if anyone had any ideas as to why. Just seemed hard to break the questions up. Perhaps there's a better forum for these types of questions, but I've seen such great responses here from people that really seem to know their stuff, I thought I'd give it a try. – fbiles Sep 06 '12 at 20:30
  • @whuber Ah, if I only had the luxury of more time to try writing a program. Maybe I could try it in R in my spare time. Agreed, it would be more efficient in the long run and could have better features (if I was a faster/better programmer). I was able to verify the formulas were entered correctly by using the same values Snyder uses in his example (p 276), which are shown in grey in col A of the spreadsheet, & comparing the output to his answers. I get what you mean by replacing the asterisks with points (a la your response to my previous question), and maybe I will try to work on that idea. – fbiles Sep 06 '12 at 20:52
  • @whuber ...but I still don't understand why the results between the 2 methods would be that different... – fbiles Sep 06 '12 at 20:53
  • All it takes is a tiny typo in either program... If you have any interest, I would be happy to share a Tissot extension I wrote for ArcView 3.x: if you can run it, at least that could give you another reference for checking the results. – whuber Sep 06 '12 at 21:03
  • @whuber - Ok, I guess I'm a glutton for punishment. Your offer is accepted. The agency I work for still has ArcView 3.3 software available. I can install it on a laptop to check out the extension. – fbiles Sep 06 '12 at 23:11
  • The download page is at http://www.quantdec.com/software/tissot/site.htm. The .avx file is the extension: just copy it into the ArcView extension folder and it will be visible in the File|Extensions menu as "Tissot 1.05". If you want, you can extract the source code, because it's not encrypted. The guts are in a record headed "(Script.125" in a script called "Tissot.Compute". – whuber Sep 06 '12 at 23:20
  • FWIW, I have posted R code here. – whuber Mar 05 '13 at 02:36
  • What does "scale coefficient" mean in terms of a map projection? –  May 20 '13 at 06:30
  • @Jordy It is the ratio between infinitesimal distances shown on the map (at a particular point in a particular direction) and the true distance according to the map's nominal scale. Because we are treating infinitesimal distances, the variation of this ratio around each point describes an ellipse: that is the Tissot indicatrix. Often just one or two scale coefficients are reported: one along the meridian and another perpendicular to it. These do not usually suffice to determine the ellipse; a direction of maximum scale distortion is also needed. – whuber May 20 '13 at 16:15
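Note for later readers: the standard relations behind that comment (they appear in Snyder's general discussion of scale variation and angular distortion) are, with h the scale along the meridian, k the scale along the parallel, and θ' the angle at which the projected meridian and parallel intersect,

    a + b = \sqrt{h^2 + k^2 + 2hk\sin\theta'}
    a - b = \sqrt{h^2 + k^2 - 2hk\sin\theta'}

where a and b are the Tissot semi-axes. These fix the size and shape of the indicatrix, while its orientation (the direction of maximum distortion) is separate information. For a conformal projection such as the HOM, θ' = 90° and h = k, so a = b = k and the indicatrix is a circle.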

1 Answer

We used a similar spreadsheet method developed by NGS to evaluate projection distortion statewide. We mapped the points and gave them a symbology classification that made it easy to evaluate the distortion visually. I do not understand the algorithms well enough to comment on the methods; however, we successfully created a low-distortion projection using this type of analysis.

MistySkye
  • Thanks for the input. Where could I find more information about the 'spreadsheet method developed by NGS'? Is there a tool, paper, guidelines, or something available to download? – fbiles May 24 '21 at 03:29
  • @fbiles - check out the readme file from NGS on their design process here: https://geodesy.noaa.gov/pub/SPCS/MapData/_ReadMe.pdf; data are available here: https://geodesy.noaa.gov/pub/SPCS/MapData/ZoneDesignData/ – JamiRae Aug 20 '21 at 14:14