Authors:
(1) Luyuan Peng, Acoustic Research Laboratory, National University of Singapore;
(2) Hari Vishnu, Acoustic Research Laboratory, National University of Singapore;
(3) Mandar Chitre, Acoustic Research Laboratory, National University of Singapore;
(4) Yuen Min Too, Acoustic Research Laboratory, National University of Singapore;
(5) Bharath Kalyan, Acoustic Research Laboratory, National University of Singapore;
(6) Rajat Mishra, Acoustic Research Laboratory, National University of Singapore.
IV. Experiments, Acknowledgment, and References
To train and test our model, we used one dataset collected from an underwater robotics simulator [8] (Fig. 3) and two datasets collected from a tank (Fig. 2). In the simulator dataset, we operated the ROV to inspect a vertical pipe, following a spiral motion around it. The total spatial extent covered by the ROV during the inspection was about 2×4×2 m, and we collected 14,400 image-pose pairs.
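As a rough illustration of the spiral inspection motion described above, the following sketch generates waypoints circling a vertical pipe while ascending. The radius, height, number of turns, and waypoint count are illustrative assumptions, not values reported in the paper.

```python
import math

def spiral_waypoints(radius=1.0, height=4.0, turns=4, n=100):
    """Generate n waypoints spiraling up around a vertical pipe at the origin.

    Each waypoint is (x, y, z, yaw), with yaw chosen so the vehicle faces
    the pipe centerline. All parameter values are illustrative assumptions.
    """
    pts = []
    for i in range(n):
        frac = i / (n - 1)                   # progress along the spiral, 0..1
        theta = 2 * math.pi * turns * frac   # angular position around the pipe
        x = radius * math.cos(theta)
        y = radius * math.sin(theta)
        z = height * frac                    # ascend linearly with progress
        yaw = math.atan2(-y, -x)             # heading toward the pipe axis
        pts.append((x, y, z, yaw))
    return pts
```

In a simulator, each waypoint would be paired with the image rendered at that pose to form the image-pose training samples.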
In the first tank dataset, we operated the ROV along a lawnmower path with translations only (Fig. 2) and minimal rotations, collecting 3,437 data samples. In the second tank dataset, the ROV primarily performed rotation maneuvers at 5 selected points, yielding 4,977 data samples. We augmented the left-camera dataset with the right-camera data, exploiting the known geometry of the stereo camera placement to provide more training data; this augmentation improved performance. The total spatial extent covered in the tank datasets was 0.4×0.6×0.2 m.
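The stereo augmentation above relies on the right camera's pose being a fixed rigid transform of the left camera's pose. A minimal sketch of that idea, assuming a hypothetical 0.1 m baseline along the left camera's x-axis (the actual rig calibration is not given in the paper):

```python
import numpy as np

# Hypothetical baseline: right camera offset 0.1 m along the left camera's
# x-axis. The real stereo rig geometry is an assumption here.
BASELINE = np.array([0.1, 0.0, 0.0])

def right_camera_pose(R_left, t_left):
    """Derive the right-camera world pose from the left-camera pose.

    R_left: 3x3 camera-to-world rotation; t_left: camera position in world.
    The right camera shares the left camera's orientation and is offset by
    the baseline expressed in the left camera's frame.
    """
    t_right = t_left + R_left @ BASELINE
    return R_left.copy(), t_right

# Example: left camera at the origin, yawed 90 degrees about z.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.zeros(3)
R_r, t_r = right_camera_pose(R, t)  # right camera sits at [0, 0.1, 0]
```

Pairing each right-camera image with the pose computed this way doubles the number of image-pose samples without additional data collection.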