Astrometry.net techniques and tips

Here are tips for each stage of the solve-field process, through conversion to per-pixel azimuth / elevation given a known image location and time.

Solving field

Noisy images, including typical DSLR images of the night sky, can have too many (false) sources detected in step 1. This can be observed in the *-objs.png files generated early in the solve-field processing chain. A reasonable goal is about 100 detected sources. The default source count limit is 1000, but that is far too many for a practical solution time (or indeed, any solution at all). Adjusting the --sigma parameter is a useful way to compensate for noisy images. An image that looks high-SNR at a glance may, upon closer inspection (e.g. a 3D intensity plot), reveal many potential false source detections. DSLR images especially should use --downsample 2 or --downsample 4. Typically, two of the first lines you see upon running solve-field should be like:

Downsampling by 2...
simplexy: found 129 sources.

Looking at the *-objs.png file should quickly reveal whether mostly stars are highlighted. Debris, clouds, reflections, etc. that cause more than several false detections can drive failure to calibrate. In the gallery, there are images with a large planetary body in view from a satellite, and other false detections, that still solve. But in general, too much clutter in the image makes solving more difficult.

Once a match exceeds odds of about 1.2e6 (e^14), i.e. log-odds 14, solve-field attempts to enhance the match. When the odds exceed the default threshold of about 5e8 (e^20), i.e. log-odds 20, solve-field declares the image field solved. If the image solves, one of the output lines will be like:

log-odds ratio 35.9538 (4.11658e+15), 31 match, 0 conflict, 70 distractors, 123 index.
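As a sanity check, the odds and log-odds values that solve-field prints are related by exp():

```python
import math

# solve-field prints both the odds ratio and its natural log (log-odds)
log_odds_hint = 14    # odds above which solve-field tries to enhance a match
log_odds_solve = 20   # default log-odds to declare the field solved

print(f"hint odds  ~ {math.exp(log_odds_hint):.3g}")   # about 1.2e6
print(f"solve odds ~ {math.exp(log_odds_solve):.3g}")  # about 4.85e8
```

The log-odds 35.95 in the sample line corresponds to odds around 4e15, far beyond the solve threshold.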

One of the biggest improvements in solution time, from impossibly long to say 10 seconds or less, is to set a minimum image field width with the -L parameter. Astrometry.net is a blind solver, so it doesn’t know whether your image came from the Hubble Space Telescope or a cell phone pointed at the night sky. That is an extremely wide range of field of view (FOV) to cover. Setting an obvious lower limit on your image FOV can speed solution time by a factor of 20 or more. Don’t worry about fine adjustment to -L; being within 25-50% is more than adequate. So if I think my lens/camera setup gives a 10 degree FOV, I’ll set -L 5.
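Putting these options together, a complete invocation could look like the following sketch; the image filename and FOV estimate are hypothetical:

```python
# Sketch: assemble a solve-field command line for a hypothetical DSLR image
fov_deg = 10.0             # rough estimate of the lens/camera field of view
lower_limit = fov_deg / 2  # -L within 25-50% of the true FOV is adequate

cmd = ["solve-field",
       "--downsample", "2",     # smooth noise before source extraction
       "-L", str(lower_limit),  # minimum field width in degrees
       "IMG_1234.jpg"]          # hypothetical filename
print(" ".join(cmd))
```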

The *-indx.png shows good and bad sources. The *-ngc.png shows constellations and star names. This is readily confirmed with Stellarium should there be doubt.

In short:

  • --sigma and --downsample help reduce extraneous sources – aim for a little over 100 detected sources and check visually that most of them are stars
  • -L greatly speeds solution, particularly for DSLR, auroral camera, etc. imagery
  • Astrometry.net is made for tangent-plane images, but extensions exist to calibrate all-sky images.
  • Distortion of even a prosumer lens may be too much for solve-field to handle over the entire image. Try cropping the image to a region of interest, saving as .png, and running solve-field on that.
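The cropping suggestion in the last bullet can be prototyped by computing a centered crop box before handing the region to solve-field; this helper and its numbers are illustrative only:

```python
# Sketch: a centered crop keeping a fraction of each dimension, to iterate
# until distortion at the edges is acceptable. Frame size is hypothetical.
def center_crop_box(width, height, keep=0.6):
    """Return (left, top, right, bottom) pixel box for a centered crop."""
    w2, h2 = int(width * keep), int(height * keep)
    left, top = (width - w2) // 2, (height - h2) // 2
    return (left, top, left + w2, top + h2)

# e.g. a 24 Mpixel DSLR frame, keeping the central 60% of each axis
print(center_crop_box(6000, 4000))  # (1200, 800, 4800, 3200)
```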

Field accuracy

  1. Find a suitable image crop that registers with low enough error at the edges. The wider the optical field of view, the smaller the crop around the image center must be. Otherwise, the center of the image will register well, but the error can grow unacceptably large (> 1 degree az/el) at the edges. This is where one has to visually inspect the image at each step (checking RA/DEC accuracy before converting to az/el) and iterate the cropping. Very large DSLR images (several megapixels) benefit from downsampling with solve-field --downsample 2 or so to smooth out the noise. When the FOV is too large (and you didn’t crop enough off the edges), solve-field will simply fail. When a crop is good, solve-field solves in a few seconds on a modest laptop.

  2. Post-process with astrometry_azel in Python, which wrangles the data into a format acceptable to AstroPy for coordinate conversion to azimuth, elevation. This is the step where knowing the time and position of the photograph is vital. Seconds of time and tens of meters of position offset aren’t as important for wide field-of-view (>30 degree) images; time and position errors grow increasingly important as the field of view decreases.

  3. Visually verify with Stellarium (noting the time zone, which is clearly displayed in the desktop program but not currently in the web Stellarium service). Especially verify azimuth and elevation, which is where the accumulated error will be worst.
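To see why the time accuracy in step 2 matters, note that Earth rotates 360 degrees in roughly 86400 seconds, about 15 arcseconds per second of time; a rough sketch:

```python
# Earth rotates 360 deg / ~86400 s ~= 15 arcsec per second of time,
# so a clock error maps directly into a pointing (azimuth/RA) error.
def clock_error_arcsec(dt_seconds):
    return 15.0 * dt_seconds

# a 1 s timestamp error: negligible for a >30 degree FOV image,
# increasingly significant as the field of view narrows
print(clock_error_arcsec(1.0), "arcsec")
```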

Matlab exit return code for CI

Continuous integration (CI) systems generally rely on an integer return code to detect success (== 0) or failure (!= 0). The error() function of Matlab / GNU Octave yields a non-zero exit status that works well with CI systems. To stay compatible with Matlab < R2019a, for example for a customer that needs outdated Matlab, we run Matlab CI tests using the script pair below.


The following Matlab function is called by the Python script further below.

function matlab_runner()
% for non-interactive use only (from system terminal)
% avoids command line quote escape issues
% fprintf() and exit() to be compatible with Matlab < R2019a

r = runtests;

if isempty(r)
  fprintf(2, 'no tests were discovered\n')
  exit(1)
end

if any(cell2mat({r.Failed}))
  fprintf(2, 'Failed with Matlab %s\n', version)
  exit(1)
end

exit(0)

end


This Python script calls matlab_runner.m. We use Python since it manages the command line much better than Matlab. Edit the variable wanted_matlab to test the required Matlab versions.

The exact method for switching Matlab versions may be different on your CI system.

#!/usr/bin/env python3
"""
Tests several versions of Matlab using Lmod module (for Linux)
relies on tests/version_runner.m
"module" requires shell=True
both by Michael Hirsch June 2020
"""
import subprocess
import sys
import platform

if platform.system() != "Linux":
    raise SystemExit("This script is for Linux only")

# the tests take several minutes, so we didn't test every possible version
wanted_matlab = ['2017a', '2020a']

failed = 0

for w in wanted_matlab:
    k = f"matlab/{w}"
    ret = subprocess.run(f"module avail {k}", stderr=subprocess.PIPE, universal_newlines=True, shell=True)
    if k not in ret.stderr:
        print(f"SKIP: {k} not available", file=sys.stderr)
        continue

    mod_cmd = f"module load {k}"
    if int(w[:4]) < 2019:
        bat = "matlab -nodesktop -nosplash -r"
    else:
        bat = "matlab -batch"

    ret = subprocess.run(mod_cmd + " && " + bat + " version_runner", universal_newlines=True, shell=True, cwd='tests')
    if ret.returncode != 0:
        failed += 1

if failed == 0:
    print("OK:", wanted_matlab)
else:
    print(failed, "Matlab versions failed", file=sys.stderr)
    raise SystemExit(1)


Matlab R2019a added the -batch command line option, which makes error() give return code 1. matlab -batch is so much more robust than matlab -r that users should generally switch to commands like:

matlab -batch test_myscript
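On the CI side, the logic reduces to inspecting the integer return code. As a sketch, here a Python one-liner stands in for the matlab -batch command:

```python
import subprocess
import sys

# Sketch: how a CI wrapper detects failure purely from the return code.
# The Python one-liner stands in for "matlab -batch test_myscript".
ret = subprocess.run([sys.executable, "-c", "raise SystemExit(1)"])
print("failure detected" if ret.returncode != 0 else "success")
```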

Fix Gfortran stack to static warning

GCC / Gfortran 10 brought new warnings for arrays too big for the current stack settings, which may cause unexpected behavior. The warning is triggered by code like:

real :: big2(1000,1000)

Warning: Array ‘big2’ at (1) is larger than limit set by ‘-fmax-stack-var-size=’, moved from stack to static storage. This makes the procedure unsafe when called recursively, or concurrently from multiple threads. Consider using ‘-frecursive’, or increase the ‘-fmax-stack-var-size=’ limit, or change the code to use an ALLOCATABLE array. [-Wsurprising]


This is generally a valid warning when one has declared arrays, as above, that are too large for the stack. Simply making the procedure recursive may lead to segfaults.

Correct the example above like:

real, allocatable :: big2(:,:)


For multiple arrays of the same shape do like:

integer, parameter :: M=1000, N=2000, P=500

real, allocatable, dimension(:,:,:) :: w, x, y, z

allocate(x(M,N,P))
allocate(w, y, z, mold=x)


As with the Intel Fortran heap-arrays command-line option, there can be a speed penalty from moving large arrays off the stack into heap memory.

Install latest GFortran 10 on Linux

Newer versions of compilers generally have more useful and detailed warning messages. Recent GCC versions have been steadily improving Fortran 2018 support. As with any compiler, newer versions of Gfortran may require rebuilding other libraries linked with the Fortran compiler if the ABI presented by libgfortran changes. On Linux, one can switch Gfortran versions with update-alternatives. If experiencing errors getting any version of gfortran installed in Ubuntu, try:

add-apt-repository universe

Ubuntu PPA

The latest GCC / Gfortran for Ubuntu is available from the Ubuntu-test PPA. Add Ubuntu-test PPA by:

add-apt-repository ppa:ubuntu-toolchain-r/test

apt update

Install the most recent Gfortran (similarly for gcc-10, g++-10) by:

apt install gfortran-10

Switch between compiler versions with update-alternatives.

  • Windows: Install latest Gfortran
  • MacOS: get latest gfortran by brew install gcc

Astrometry.net setup and usage tips

Astrometry.net is easy to use on Linux and Windows. On Windows it runs under Windows Subsystem for Linux.


After install, get the star index files as described below.

Linux / Windows Subsystem for Linux

  1. Download/install the pre-compiled binary code

    apt install astrometry.net
  2. Install the star map data necessary to solve images.

    apt install astrometry-data-2mass-08-19 astrometry-data-tycho2-10-19


Using Homebrew:

brew install astrometry-net


The major steps in achieving a useful WCS starfield polynomial fit are:

  1. source extraction (identifying star position in image frame)

  2. quad asterism hashing, including tolerance for star position wiggling due to noise.

  3. Match hashes against a star catalog index (at least 3 catalog choices are available).

  4. Bayesian decision process, find extremely high probability solution or reject.
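Step 2 can be illustrated with a toy geometric hash: map the two most-separated stars of a 4-star quad to fixed points and record the other two stars' positions in that frame, giving a code invariant to translation, rotation, and scale. This sketch uses a complex-plane form (A -> 0, B -> 1); the real solver's code differs in detail:

```python
import itertools
import math

def quad_hash(stars):
    """stars: four (x, y) pixel positions -> 4-tuple hash code."""
    # pick the two most widely separated stars A, B of the quad
    ia, ib = max(itertools.combinations(range(4), 2),
                 key=lambda p: math.dist(stars[p[0]], stars[p[1]]))
    a, b = complex(*stars[ia]), complex(*stars[ib])
    code = []
    for i in range(4):
        if i not in (ia, ib):
            # similarity transform sending A -> 0, B -> 1
            h = (complex(*stars[i]) - a) / (b - a)
            code.extend((h.real, h.imag))
    return tuple(code)

# hypothetical pixel positions of four detected stars
quad = [(10.0, 10.0), (110.0, 10.0), (40.0, 30.0), (80.0, 60.0)]
print(quad_hash(quad))
```

Because the code is invariant to camera rotation and plate scale, it can be looked up directly in a precomputed index of catalog quads.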

We have written a separate article on tips and techniques.


[Optional] compile

This is normally not necessary, unless you want to customize/optimize.


apt install libcairo2-dev libnetpbm10-dev netpbm libpng12-dev libjpeg-dev zlib1g-dev swig libcfitsio-dev

Install: if you have Anaconda/Miniconda Python as your default, Astrometry.net will use it (or whatever your default Python is). Astrometry.net is Python 3 compatible back to at least version 0.70.


tar xf *.gz


make py
make extra

make install INSTALL_DIR=~/

Add to ~/.bashrc

export PATH="$PATH:$HOME/"

Do not use ~ in the PATH or you’ll get the error:

cannot find executable astrometry-engine

Uncomment inparallel in ~/ (or /etc/astrometry.cfg)

Copy the star index files with


If it can’t find the index file, be sure ~/ contains:

add_path /home/username/astrometry/data

~ or $HOME will NOT work!

Reference paper

Program giving azimuth/elevation for each pixel of your sky image

Alternative: online image scaling

Update Logitech Unifying firmware

Eavesdropping / injection vulnerabilities allow an unencrypted wireless mouse connection to be used as a keyboard by attackers to inject unwanted keystrokes, possibly taking over your PC. Force pairing allows unauthorized input to your PC. Logitech device firmware has distinct per-OS update procedures.


Unifying software is used to update firmware and pair receivers with mice and keyboards. In the Logitech Unifying software, click Advanced → Update Firmware.


Since May 2017, the Linux “fwupd” utility has supported updating Logitech Unifying receivers. Ubuntu, Fedora and other modern Linux distros will raise a prompt to update Logitech receiver firmware, which is a seamless quick process.

Check firmware version and pair devices to the Logitech Unifying receiver with

apt install solaar

The fwupd update support is thanks to Richard Hughes, a senior developer at Red Hat.

Check firmware version

List all recognized devices, including firmware versions where applicable:

fwupdmgr get-devices


Xvfb makes fake X11 for CI

Continuous integration for programs that plot or need a display can be tricky, since in many cases the CI doesn’t have an X11 display server. Workarounds include:

  • Pytest conditional tests that detect CI via an environment variable, avoiding plot generation entirely. This can reduce code coverage.
  • Generate plots using the Xvfb dummy X11 display server. This maintains code coverage, and may allow dumping plots to disk for further checks.


This method uses X server virtual framebuffer (Xvfb) on continuous integration services.

GitHub Actions

Add the following to “.github/workflows/ci.yml”; assuming the project uses PyTest, the xvfb-action runs the given command under Xvfb:

- name: Run headless test
  uses: GabrielBB/xvfb-action@v1.2
  with:
    run: pytest


Travis-CI supports Xvfb by adding to project “.travis.yml”:

services: xvfb

Detect CI inside Python

Pytest handles conditional tests well.

import os
import pytest

CI = os.environ.get('CI', '').lower() == 'true'

@pytest.mark.skipif(CI, reason="no plots for CI")
def test_myfun():
    from matplotlib.pyplot import figure, show

    fg = figure()
    fg.gca().plot([1, 2, 3])
    show()


CI Environment variables

These CI services and more set the environment variable CI=true as a de facto standard for easy CI detection.
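A minimal detection helper based on that convention (a sketch, not from any particular project):

```python
import os

def in_ci() -> bool:
    # GitHub Actions, Travis-CI, GitLab CI, etc. set CI=true
    return os.environ.get("CI", "").lower() == "true"

print(in_ci())
```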

One-step build/install CMake

CMake ≥ 3.15 is strongly recommended in general for more robust and easy syntax.

Compile/Install CMake

This will get you the latest release of CMake. For Linux and Mac, admin/sudo is NOT required.


There is an unofficial CMake package on PyPI:

pip install cmake

CMake major versions

  • 3.18: CMake Profiler cmake -B build --profiling-output=perf.json --profiling-format=google-trace
  • 3.17: Ninja Multi-Config generator, --debug-find to see what find_package() is trying to do,
  • 3.16: Precompiled headers, unity builds, many advanced project features
  • 3.15: CMAKE_GENERATOR environment variable works like -G option, enhanced Python interpreter finding
  • 3.14: check_fortran_source_runs(), better FetchContent
  • 3.13: ctest --progress, better Matlab compiler support, lots of new linking options, fixes to Fortran submodule bugs, cmake -B build incantation, target_sources() with absolute path
  • 3.12: transitive library specification (out of same directory), full Fortran Submodule support
  • 3.11: specify targets initially w/o sources
  • 3.10: added Fortran Flang (LLVM) compiler, extensive MPI features added
  • 3.9: further C# and Cuda support (originally added in CMake 3.8)
  • 3.8: Initial Cuda support
  • 3.7: version comparison operators such as LESS_EQUAL / GREATER_EQUAL, initial Fortran submodule support
  • 3.6: better OpenBLAS support
  • 3.5: Enhanced FindBoost target with auto Boost prereqs
  • 3.4: Limit CPU usage when using ctest -j parallel tests
  • 3.3: List operations such as IN_LIST