Aperture photometry is a core method for estimating the visual magnitudes of stars and satellites, and is essential in Space Domain Awareness (SDA) for tasks such as collision avoidance. Traditional methods use fixed aperture shapes, which limits their accuracy and adaptability, and conventional aperture photometry is further constrained by predefined equations, making it error-prone and sensitive to image conditions. To overcome these limitations, we first introduce an approach that defines pixel-specific regions for the aperture and annulus, and then propose a learned photometry pipeline that combines aperture photometry with machine learning. Our approach is effective for both stars and satellites across diverse image conditions. We rigorously evaluated it on three datasets, including a custom synthetic dataset and real imagery, achieving a 1.44% error in star visual magnitude estimation and a 0.64% error in satellite visual magnitude estimation.
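To make the baseline concrete, the following is a minimal sketch of conventional circular-aperture photometry, the fixed-shape method the abstract contrasts with: sum the pixels inside an aperture, estimate the per-pixel sky level from the median of a surrounding annulus, subtract, and convert the net flux to a magnitude. All function names, radii, and the zero point here are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def aperture_photometry(image, cx, cy, r_ap=4.0, r_in=7.0, r_out=10.0, zero_point=25.0):
    """Classical circular-aperture photometry: sum source counts inside the
    aperture, subtract the sky background estimated from an annulus, and
    convert the net flux to an instrumental magnitude."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - cx, yy - cy)          # distance of each pixel from the source center
    aperture = r <= r_ap                    # fixed circular aperture mask
    annulus = (r >= r_in) & (r <= r_out)    # background annulus mask
    background = np.median(image[annulus])  # robust per-pixel sky estimate
    net_flux = image[aperture].sum() - background * aperture.sum()
    return zero_point - 2.5 * np.log10(net_flux)

# Example: a synthetic Gaussian star of total flux ~1000 counts on a flat sky
yy, xx = np.indices((64, 64))
sigma = 1.5
star = 1000.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
image = star + 5.0  # constant sky background of 5 counts/pixel
mag = aperture_photometry(image, 32, 32)
```

Note how every choice (aperture radius, annulus placement, the median sky estimator) is hard-coded; a learned pipeline can instead adapt these per pixel and per image condition.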
The application of machine learning to a task often necessitates the production of synthetic training data. Some tasks involve rare but important scenarios that may not yet have been observed; others are difficult to collect or annotate in large volumes. These difficulties are particularly acute in computer vision applications to scientific imagery, in which human annotation is complicated by noise, ambiguity, and interpretation. One such application is the detection of resident space objects (RSOs) in electro-optical images for space domain awareness (SDA). In many cases, the mislabeling of RSOs by an imperfect annotator (human or machine) can be detrimental to machine learning model performance, especially when the signal-to-noise ratio (SNR) is near or below human detection levels. In this work we introduce SatSim, a modular electro-optical synthetic data generation engine designed to procedurally generate representative, annotated synthetic electro-optical imagery of remote space scenes. SatSim enables rapid generation of synthetic data through Graphics Processing Unit (GPU) acceleration with TensorFlow. This paper discusses the use of SatSim to enhance machine learning approaches and reports the performance of models trained with real data, synthetic data, and real data augmented with synthetic RSOs. In addition, we explore using SatSim to evaluate current state-of-the-art RSO detection algorithms with new sensors (such as all-sky and event-based) and rare but critically important scenarios (such as satellite breakups and collisions) for which limited real data are available.
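To illustrate the general idea of procedurally generating annotated electro-optical frames, here is a toy renderer: Gaussian-PSF stars, an optional RSO drawn as a short streak, then Poisson shot noise and Gaussian read noise, returning the frame together with its annotation. This is a hedged sketch of the concept only; the function names, parameters, and rendering model are illustrative assumptions and bear no relation to SatSim's actual API.

```python
import numpy as np

rng = np.random.default_rng(42)

def render_scene(shape=(128, 128), n_stars=20, rso=None, read_noise=5.0, sky=100.0):
    """Procedurally render a toy electro-optical frame: a random Gaussian-PSF
    star field, an optional streaking RSO, Poisson shot noise, and Gaussian
    read noise. Returns the image and the RSO streak-endpoint annotation."""
    img = np.full(shape, sky)
    yy, xx = np.indices(shape)

    def add_psf(x, y, flux, sigma=1.2):
        # Deposit a normalized 2-D Gaussian point-spread function
        img[:] += flux * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

    for _ in range(n_stars):  # random background star field
        add_psf(rng.uniform(0, shape[1]), rng.uniform(0, shape[0]), rng.uniform(200, 5000))

    annotation = None
    if rso is not None:  # RSO rendered as a streak: flux spread along its motion
        x0, y0, dx, dy, flux = rso
        for t in np.linspace(0.0, 1.0, 20):
            add_psf(x0 + t * dx, y0 + t * dy, flux / 20)
        annotation = (x0, y0, x0 + dx, y0 + dy)

    # Shot noise on the accumulated counts plus detector read noise
    img[:] = rng.poisson(img) + rng.normal(0.0, read_noise, shape)
    return img, annotation

frame, endpoints = render_scene(rso=(40.0, 60.0, 15.0, 5.0, 3000.0))
```

Because the scene is generated rather than observed, the annotation is exact by construction, which is precisely what makes synthetic data attractive when the SNR is too low for reliable human labeling.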