Collecting exposures from three telescope systems means managing a lot of data. But when one of those systems is a 4-scope array, managing exposures requires a plan.
Last season, with three telescopes running independently, we relied on a completely local data setup. Each telescope was hard-wired to its own laptop in our control room. Exposures were stored on the laptops, and we used free Windows-based sync software to move the files to an on-site network attached storage (NAS) drive. Another copy of the sync software pulled the files from the NAS to our Monster processing machine. The setup worked, but mornings were painful: we had to wait while the files synced through the system, and we weren’t confident the software could sync live during image capture.
When we added three more telescopes in the off-season, turning our RH300 into a 4-scope RH300 array, we decided we needed a better plan. We could potentially collect hundreds of 50 MB+ files each night. In preparation for full remote operation, we changed our computer hardware setup, putting an Intel NUC mini computer on each telescope. All exposures would be stored locally on the NUCs. We also upgraded our on-site network to fiber optic, providing a much broader pipeline for moving large amounts of data. Our syncing solution changed from Windows freeware to Google Backup and Sync, Google’s free cross-platform software for pushing data to, and pulling data from, Google Drive. We have an unlimited-space Google Drive account, making this an attractive solution for storing and syncing huge amounts of data.
Google Backup and Sync is installed on each NUC, as well as on the Monster. As each exposure is taken, it is immediately uploaded into an organized Images in Progress directory on Google Drive. Each file is then mirrored to the Monster in real time, maintaining the directory structure. By the end of each imaging session, all files have already synced throughout the system. We also have a Mac Pro in Bangkok, synced to Google Drive, housing another mirror of our data. This allows us to process both in Samphran and in Bangkok.
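As a concrete sketch, the mirrored tree might look something like this (the scope, object, and file names here are hypothetical, not our actual naming scheme):

```
Google Drive/
└── Images in Progress/
    ├── Scope1/
    │   └── M42/
    │       ├── M42_L_0001.fit
    │       └── M42_L_0002.fit
    ├── Scope2/
    │   └── M42/
    │       └── ...
    ├── Scope3/
    └── Scope4/
```

Because every machine syncs the same Google Drive folder, this identical tree appears on each NUC, on the Monster, and on the Mac Pro in Bangkok.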
To take advantage of drive speed on the Monster, images for objects currently being shot and processed are synced to a 280 GB PCIe NVMe SSD. Processing of these images is done on a 1 TB M.2 NVMe SSD. After an object has been processed, the exposures are moved into a Completed Images directory on Google Drive, which automatically pulls them off the NVMe SSD. The Completed Images directory is then synced to a 5 TB SATA drive on the Monster for long-term local storage. We aren’t worried about hard drive failures, as all of the data exists in the Google Cloud.
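The "move on Google Drive, and the sync client cleans up the fast local drive" step amounts to a single folder move. A minimal sketch, with illustrative directory and file names rather than our actual ones:

```shell
# Illustrative paths only -- our real layout uses per-scope subfolders.
SRC="Images in Progress/M42"
DST="Completed Images"

# Simulate a night's captures (for illustration).
mkdir -p "$SRC"
touch "$SRC/M42_L_0001.fit" "$SRC/M42_R_0001.fit"

# Move the finished object out of Images in Progress. Backup and Sync
# then removes the mirrored copy from the NVMe working drive and the
# Completed Images sync deposits it on the 5 TB SATA archive drive.
mkdir -p "$DST"
mv "$SRC" "$DST/"
```

In practice this is just a drag-and-drop in the Google Drive folder; the sync clients on every machine do the rest.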
For anyone interested, here are the specs for our Monster processing machine:
PixInsight, our primary image processing software, takes advantage of all CPU cores. To speed up PixInsight processing, we create a 30 GB RAM disk holding 8 swap file directories. A 9th swap file directory, for failover, sits on one of the NVMe SSDs.
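On Windows a RAM disk is created with a third-party utility; on a Linux box the equivalent layout could be sketched as follows (mount point and directory names are hypothetical, and this is an equivalent setup, not our actual one):

```shell
# Create a 30 GB RAM disk (tmpfs) -- Linux sketch of the same idea.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=30g tmpfs /mnt/ramdisk

# Eight swap file directories on the RAM disk for PixInsight,
# plus a ninth failover directory on an NVMe SSD.
for i in 1 2 3 4 5 6 7 8; do
    mkdir -p "/mnt/ramdisk/pi-swap$i"
done
mkdir -p /mnt/nvme/pi-swap-failover
```

The resulting paths are then entered as swap storage directories in PixInsight's global preferences, so its intermediate files land in RAM rather than on disk.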