Uncrewed underwater vehicles equipped with high-resolution synthetic aperture sonar (SAS) imaging payloads can survey large areas of seafloor rapidly. This facilitates the characterization and monitoring of the vast underwater dumpsites of unexploded ordnance that exist throughout the world's oceans. However, such surveys produce an overwhelming amount of detailed imagery, motivating automated object recognition using machine learning. Machine learning performance depends on the volume and diversity of training data, which in this domain is relatively scarce compared with fields such as optical image recognition. A complementary means of augmenting training is synthetic data generated by simulation.

Technical Approach

This project has developed a new method for simulating raw SAS echo data (as opposed to already-formed images) that is orders of magnitude faster than commonly used point-diffraction methods. Whereas those methods take on the order of hours to produce the data for a single object image snippet (i.e., a 5 x 5 meter patch at centimeter resolution), the new approach achieves this on the order of seconds. It combines Fourier wavefield formation and propagation with a highly optimized optical rendering engine (Blender).
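The core idea of Fourier wavefield propagation can be illustrated with the angular spectrum method, in which a complex wavefield is advanced through the medium by a single FFT, a frequency-domain phase transfer function, and an inverse FFT. The sketch below is a minimal, generic illustration of that technique, not the project's actual implementation; the function name and the grid, wavelength, and distance values are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, dz):
    """Propagate a 2-D complex wavefield a distance dz using the
    angular spectrum (Fourier) method.

    field      -- complex 2-D array sampled on a uniform grid
    dx         -- grid spacing in meters (same in both axes)
    wavelength -- acoustic wavelength in meters
    dz         -- propagation distance in meters
    """
    ny, nx = field.shape
    # Spatial-frequency grids corresponding to the FFT of the field
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * FX
    ky = 2.0 * np.pi * FY
    # Axial wavenumber; negative values correspond to evanescent waves
    kz_sq = k**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    # Transfer function: phase advance for propagating components,
    # evanescent components are discarded
    H = np.exp(1j * kz * dz)
    H[kz_sq < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because each propagation step costs only two FFTs and an elementwise multiply, the wavefield can be stepped across the full scene far faster than summing contributions from millions of individual point diffractors.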

It has been shown to produce data and data products of quantifiably similar quality, capturing the important coherent wave physics (including diffraction, speckle, aspect dependence, and layover) as well as the effects of the SAS processing chain (including image focusing errors and artifacts).

This enables synthetic data to augment real data, supporting more robust and advanced machine learning.
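One simple way such augmentation is typically applied is to blend a controlled fraction of simulated snippets into the real training set. The helper below is a hypothetical sketch of that idea; the function name, parameters, and the 50% synthetic share are illustrative assumptions, not the project's training recipe.

```python
import random

def mix_training_sets(real, synthetic, synthetic_fraction=0.5, seed=0):
    """Blend real and simulated training samples into one shuffled set.

    synthetic_fraction -- desired share of simulated samples in the
    final mix (0 <= fraction < 1). Draws as many synthetic samples as
    needed, capped at the number available.
    """
    rng = random.Random(seed)
    # Number of synthetic samples needed to reach the target fraction
    n_syn = int(len(real) * synthetic_fraction / (1.0 - synthetic_fraction))
    n_syn = min(n_syn, len(synthetic))
    mixed = list(real) + rng.sample(list(synthetic), n_syn)
    rng.shuffle(mixed)
    return mixed
```

In practice the synthetic fraction is a tuning knob: too little adds no diversity, while too much can bias the classifier toward simulation artifacts.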