The Non-Negative Least-Squares (NNLS) algorithm should be considered as a possible addition to the HESSI suite of imaging programs. The original design of the algorithm was by C. L. Lawson and R. J. Hanson (``Solving Least Squares Problems'', Prentice-Hall, Englewood Cliffs, NJ, 1974). It performs admirably in mapping at the VLA and other radio interferometers, and has some advantages over both CLEAN and MEM. Here is what NRAO's Alan Bridle says about it:

NNLS distinguishes itself on bright, compact sources that neither CLEAN nor MEM can process adequately. Briggs showed that on such sources, both CLEAN and MEM produce artifacts that resemble calibration errors and that limit dynamic range. NNLS has no difficulty imaging such sources. It also has no difficulty with sharp edges, such as those of planets or of strong shocks, and can be very advantageous in producing models for self-calibration for both types of sources. Briggs (1995) showed that NNLS deconvolution can reach the thermal noise limit in VLBA images for which CLEAN produces demonstrably worse solutions. NNLS is therefore a powerful deconvolution algorithm for making high dynamic range images of compact sources for which strong finite support constraints are applicable. Briggs' thesis (1995) is available over the web:

http://www.aoc.nrao.edu/ftp/dissertations/dbriggs/diss.html

See in particular chapter 4, where Briggs points out the power of "compact support" in deconvolution, which is essentially the constraint that the map vanish outside a specified domain. After trying out SVD (singular value decomposition) as a deconvolution method, he goes on to say (page 179) that SVD is vastly inferior to NNLS, both in computational requirements and in quality of solution. Finally, on page 181, he gets down to explicit use of NNLS, where he says that it excels on sources that are too compact for MEM but too extended for CLEAN.

NNLS solves the constrained problem:

    Minimize || E x - f ||  subject to  G x >= h,

where E is an m2 x n matrix and f is the m2-element data vector. In general, G is an m x n constraint matrix, but in Briggs' case G is the identity matrix and h is the zero vector. An interesting property of NNLS is that, although it is solved iteratively, the iteration always converges and terminates, as Lawson and Hanson (1974) show. No cutoff in iteration is required. Sometimes the algorithm might run too long and have to be terminated early, but the solution will still be "fairly good", since the solution improves smoothly with iteration. Noise, as expected, increases the number of iterations required to reach the solution.
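For readers who want to experiment with the problem above before trying the Fortran/C/IDL versions linked below, SciPy's `scipy.optimize.nnls` implements the Lawson-Hanson active-set algorithm directly. The small system here is a synthetic stand-in (an assumed toy "beam" matrix E and a non-negative "sky" x), not real visibility data:

```python
import numpy as np
from scipy.optimize import nnls

# Minimize ||E x - f|| subject to x >= 0 (G = identity, h = 0),
# via SciPy's implementation of the Lawson-Hanson algorithm.
# E is an illustrative toy matrix, not a real instrument response.
E = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.2],
              [0.0, 0.2, 1.0],
              [0.1, 0.1, 0.1]])
x_true = np.array([2.0, 0.0, 1.0])   # a non-negative "sky" model
f = E @ x_true                        # noiseless "data" vector

x, rnorm = nnls(E, f)
print(x)       # solution: every component is >= 0
print(rnorm)   # residual norm ||E x - f||, ~0 for this exact system
```

Because the data here are noiseless and the unconstrained least-squares solution is already non-negative, the iteration terminates quickly at the exact answer; adding noise to `f` would increase the iteration count, as noted above.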

Tim Cornwell (also at NRAO) expounds on NNLS: cornwell.ps

There is also an algorithm that sets both upper and lower bounds (maximum brightness and zero brightness, say), called BVLS, available at: http://lib.stat.cmu.edu/general/bvls
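A bounded-variable variant is also available in SciPy as `scipy.optimize.lsq_linear` with `method='bvls'`. The bounds below (0 for zero brightness, 3.0 for an assumed maximum brightness) are purely illustrative, as is the toy matrix:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Bounded-variable least squares: minimize ||E x - f|| with
# elementwise bounds lb <= x <= ub. SciPy's method='bvls' is a
# dense BVLS solver; E, f, and the bounds here are toy values.
E = np.array([[1.0, 0.2],
              [0.2, 1.0],
              [0.1, 0.1]])
f = np.array([2.5, 1.0, 0.4])

res = lsq_linear(E, f, bounds=(0.0, 3.0), method='bvls')
print(res.x)  # solution with every component held inside [0, 3]
```

Setting the bounds to `(0.0, np.inf)` recovers the plain NNLS constraint.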

- Fortran-77 version of Non-Negative Least Squares
- Fortran-90 version of Non-Negative Least Squares
- C version of Non-Negative Least Squares
- Preliminary (2/6/02) IDL version of Non-Negative Least Squares
- Matlab version of Non-Negative Least Squares

Ed Schmahl Last modified: Wed Feb 6 15:47:02 EST 2002