The Pre-Event

On April 3rd, a little over a month before the submeeting, the Pre-Event brought together researchers in underwater robotics and photogrammetry for a day of presentations of their scientific work and of discussions in preparation for the May experiments. This event, organized by the TIRF team (Image Processing and Pattern Recognition, https://www.lre.epita.fr/image/), took place at the EPITA campus in Paris and online.

Click on the title image to access the video on the Submeeting YouTube channel.

David Nakath (Dr., GEOMAR, Kiel University, Germany)

« Ground Truth for Physically Based Underwater Vision »

Underwater cameras operate directly in a scattering medium. Hence, computer vision algorithms face unique challenges: (i) the refraction of light entails geometric distortions when light rays traverse the air-glass-water interfaces, and (ii) the images are radiometrically distorted by depth-dependent attenuation and scattering effects. Such effects can be precisely simulated with Monte-Carlo rendering techniques, enabling the validation, testing and training of underwater computer-vision algorithms. Recently, it has even become possible to directly optimize the underlying parameters using differentiable raytracing-based analysis-by-synthesis approaches. However, to evaluate such an approach, the properties of the camera, the lights and the water have to be known. Hence, I propose to capture sequential underwater imagery with a calibrated camera and light while continuously measuring the physical attenuation and scattering properties of the water. This will establish a unique dataset that will help to disentangle radiometric distortions and fully exploit the opportunities of the aforementioned approaches.
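
For readers less familiar with these radiometric effects, the following minimal Python sketch illustrates a classic simplified image-formation model (Beer-Lambert attenuation plus a veiling-light backscatter term) that is often used to reason about depth-dependent attenuation and scattering. The coefficients are illustrative assumptions, and this single-scattering approximation is far simpler than the Monte-Carlo and differentiable-raytracing approaches discussed in the talk.

```python
import numpy as np

def underwater_image(clear_rgb, distance_m,
                     attenuation=(0.35, 0.12, 0.08),    # per-channel beta_c [1/m], illustrative values
                     veiling_light=(0.05, 0.25, 0.30)): # per-channel backscatter color, illustrative
    """Simplified single-scattering image formation:
    I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
    where J_c is the in-air radiance and d the camera-to-scene distance."""
    beta = np.asarray(attenuation).reshape(1, 1, 3)
    B = np.asarray(veiling_light).reshape(1, 1, 3)
    d = np.asarray(distance_m)[..., None]           # H x W x 1 range map
    transmission = np.exp(-beta * d)
    return clear_rgb * transmission + B * (1.0 - transmission)

# Toy usage: a flat grey scene observed through 1 to 10 m of water.
scene = np.full((64, 64, 3), 0.6)
ranges = np.linspace(1.0, 10.0, 64)[None, :].repeat(64, axis=0)
observed = underwater_image(scene, ranges)
```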

Yvan Petillot (Pr., Heriot Watt University, United Kingdom)

« Underwater robotics research at Heriot-Watt »

The Ocean Systems Laboratory at Heriot-Watt University has been involved in underwater robotics research for many years. We initially focused on vehicle design, but since the early 2000s we have dedicated ourselves to advancing autonomy. We have developed methods for underwater exploration using AUVs and ROVs that rely on live sensor processing to understand the environment and adapt the mission, both to keep the vehicle safe (obstacle avoidance) and to ensure that the correct data is gathered. More recently, we have been interested in autonomous subsea inspection and manipulation for offshore assets, with a particular focus on offshore renewables. In our talk, we will describe our historical achievements and current developments in this field, from sensor processing and mapping to navigation, control and mission planning. We will also briefly present three projects we are involved in where these technologies will be applied.

Vincent Hugel (Pr., Cosmer, Toulon University, France)

« Research on underwater robotics at the COSMER laboratory »

Since 2016, the COSMER laboratory has been involved in robotics activities mainly focused on cable management. The concept of a robot chain was introduced to deal with the exploration of confined environments. First, local perception of the cable that links two robots was investigated in order to estimate the shape of the cable; to this end, the catenary model was used for weighted cables. The shape of the cable was then exploited to obtain the relative positioning between the vehicles, which can be used for proprioceptive control inside the robot chain. Current research focuses on the control of chained robots with fixed-length cables, and on the dynamic modeling of varying-length cables between a USV and an ROV. Other research activities have been carried out on the design of realistic fluid simulation through Smoothed Particle Hydrodynamics, the management of swarms of gliders or underwater robots in tight formation, and the localization of underwater vehicles relative to the seabed.
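
As a brief illustration of the catenary model mentioned above, here is a minimal Python sketch that recovers the catenary parameter from the cable length and the horizontal spacing between two vehicles, then samples the cable profile. The bisection solver, default bracket and same-depth assumption are illustrative choices, not the COSMER implementation.

```python
import numpy as np

def catenary_parameter(span, cable_length, a_lo=0.1, a_hi=1e3, iters=100):
    """Solve 2*a*sinh(span/(2*a)) = cable_length for the catenary parameter a
    (simple bisection; assumes both attachment points are at the same depth).
    The bracket [a_lo, a_hi] may need widening for very slack or very taut cables."""
    assert cable_length > span, "a hanging cable must be longer than its span"
    arc_length = lambda a: 2.0 * a * np.sinh(span / (2.0 * a))
    for _ in range(iters):
        a_mid = 0.5 * (a_lo + a_hi)
        if arc_length(a_mid) > cable_length:   # arc length decreases as a grows
            a_lo = a_mid
        else:
            a_hi = a_mid
    return 0.5 * (a_lo + a_hi)

def catenary_shape(span, cable_length, n=50):
    """Sample the cable profile between the two vehicles (height above its lowest point)."""
    a = catenary_parameter(span, cable_length)
    x = np.linspace(-span / 2.0, span / 2.0, n)
    return x, a * (np.cosh(x / a) - 1.0)

# Toy usage: 7 m of cable between two robots 5 m apart -> maximum sag of the cable.
x, y = catenary_shape(5.0, 7.0)
max_sag = y.max() - y.min()
```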

Olivier Chocron (Dr., LBMS, ENIB, France)

« Autonomous Mobility of Underwater Robots for the Maintenance of MRE Systems »

The fundamental objective is to provide future underwater robots with the ability to adapt their motion to their task and their environment, in order to carry out complex inspection and maintenance tasks on submerged MRE (Marine Renewable Energy) structures. These tasks require the autonomous robot (AUV) to move precisely, possibly following predetermined trajectories, in order to avoid collisions with the marine environment or with the MRE underwater infrastructures that it must approach as closely as possible. From the standpoint of global adaptation, the environment includes the robot itself: adaptation must also apply to the specificities of the robot, whether intrinsic (due to its design limitations) or circumstantial (internal/external failures or disturbances). The underlying paradigm is that the resources of the AUV are limited: the number and power of its thrusters, its sensors and their range, as well as its onboard computing capabilities and energy. A task-oriented design and control methodology addressing this autonomous mobility problem is proposed here. It includes the dynamic reconfiguration of the propulsion system as well as its control through a hybridization of proven methods (model-based nonlinear control) and artificial intelligence (artificial evolution and machine learning).
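
To give a concrete flavor of what reconfiguring the propulsion can mean at the control-allocation level, the sketch below maps a desired body wrench to individual thruster commands through a thruster configuration matrix, and shows that losing a thruster amounts to dropping a column of that matrix. This is a generic least-squares allocation example with an invented geometry, not the hybrid evolutionary/learning methodology proposed in the talk.

```python
import numpy as np

def allocate_thrust(tau_desired, thruster_matrix, max_thrust=40.0):
    """Least-squares thrust allocation: solve B @ u ≈ tau for u, then saturate.
    tau_desired     : desired body wrench (e.g. [Fx, Fy, Mz] for a planar AUV)
    thruster_matrix : B, one column per thruster (its contribution to the wrench)"""
    u, *_ = np.linalg.lstsq(thruster_matrix, tau_desired, rcond=None)
    return np.clip(u, -max_thrust, max_thrust)

# Planar example with 4 thrusters in an X configuration (illustrative geometry).
B = np.array([[ 0.7,  0.7,  0.7,  0.7],    # surge contribution
              [ 0.7, -0.7, -0.7,  0.7],    # sway contribution
              [ 0.3, -0.3,  0.3, -0.3]])   # yaw moment contribution
tau = np.array([20.0, 0.0, 2.0])           # desired surge force and a small yaw moment

u_nominal = allocate_thrust(tau, B)
u_degraded = allocate_thrust(tau, B[:, [0, 1, 3]])   # thruster 3 lost: drop its column
```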

Baptiste Pelletier (PhD Student, LIRMM, DECISIO, ISAE-ONERA, France)

« Formal Skills-Based Software Architecture for Autonomous Underwater Robotics »

Work on autonomy in robotics often focuses on the low-level functional layer (hardware reliability, resilience of sensor data, etc.) or on the high-level decision-making layer (navigation algorithms, planning, fault management, etc.). However, there is a little-explored layer in between, the executive layer, which must provide behaviors that the decision-making layer can use, built on the elements available in the functional layer. We will see how a skill-based architecture can help to formalize this intermediate layer, its usefulness from a system verification and validation point of view, the comfort it brings to the end user, and finally its application to underwater robotics in various scenarios.
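
To make the idea of an executive skill layer more concrete, here is a minimal, hypothetical sketch of a skill exposed to the decision layer as a contract with named preconditions, postconditions and a bounded result, i.e. something a verification tool can check and a planner can reason about. The names and structure are illustrative assumptions, not the architecture presented in the talk.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict


class SkillResult(Enum):
    SUCCESS = auto()
    FAILURE = auto()
    PRECONDITION_VIOLATED = auto()


@dataclass
class Skill:
    """An executive-layer behavior offered to the decision-making layer."""
    name: str
    preconditions: Dict[str, Callable[[dict], bool]]   # named checks on the world state
    postconditions: Dict[str, Callable[[dict], bool]]
    execute: Callable[[dict], dict]                     # wraps the functional-layer calls

    def invoke(self, state: dict) -> SkillResult:
        if not all(check(state) for check in self.preconditions.values()):
            return SkillResult.PRECONDITION_VIOLATED
        state.update(self.execute(state))
        ok = all(check(state) for check in self.postconditions.values())
        return SkillResult.SUCCESS if ok else SkillResult.FAILURE


# Hypothetical example: a "go_to_depth" skill for an underwater vehicle.
go_to_depth = Skill(
    name="go_to_depth",
    preconditions={"thrusters_ok": lambda s: s.get("thrusters_ok", False)},
    postconditions={"at_target": lambda s: abs(s["depth"] - s["target_depth"]) < 0.5},
    execute=lambda s: {"depth": s["target_depth"]},     # stands in for the real controller
)

print(go_to_depth.invoke({"thrusters_ok": True, "depth": 0.0, "target_depth": 10.0}))
```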

Erica Nocerino (Dr., Dipartimento SUS, Sassari University, Italy) and Fabio Menna (Dr., 3DOM, FBK Trento, Italy)

« Bridging photogrammetry, computer vision and robotics for underwater mapping and monitoring »

In this presentation, we will give an overview of our research efforts over the last decade aimed at bridging specific aspects of sister optical-based techniques, namely photogrammetry, computer vision and robotics, for mapping and monitoring the underwater environment. The driving focus has been to integrate the rigor of photogrammetric methods, which ensure that quality targets are met, with the innovative and automated approaches developed within computer vision, nowadays artificial intelligence, and robotics. Relevant projects will showcase how this integration, both in terms of software development and hardware prototypes, has brought significant benefits to underwater surveying practices.

Dimitrios Skarlatos (Pr., CEG, Cyprus University of Technology, Cyprus)

« Steps in Improving 3D Reconstruction in Submerged Worlds »

Improving 3D reconstruction in the submerged world is a fascinating challenge that combines technological innovation and scientific understanding. In this brief presentation, the motivation for and need for underwater 3D reconstruction will be established within the scope of the archaeological excavation of the Mazotos shipwreck site. The challenges that the underwater environment poses, along with the inherent restrictions due to water properties, illumination conditions and depth, will be briefly described. The advancements will focus on the development of a low-cost ROV able to perform data acquisition for 3D reconstruction. The main requirements for such an ROV will be presented, along with the design decisions taken to fulfill them. The second part of the presentation will be devoted to a novel machine learning approach to restore color, without any external information, at depths where artificial lighting is necessary. Instead of using artificial data for ML training, a novel approach has been developed that derives training data from the 3D reconstruction dataset itself. This makes the proposed methodology self-adaptive to environmental conditions and to the camera-light combination. It is also versatile enough to be applied to archival data, just like the data used so far for its development.
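
One plausible reading of deriving training data from the 3D reconstruction dataset itself is to pair observations of the same reconstructed surface point seen at different camera-to-point distances, letting the closest (least attenuated) observation supervise the restoration of the farther ones. The sketch below only illustrates that general idea under strong simplifying assumptions; the actual method presented in the talk may differ.

```python
import numpy as np

def training_pairs_from_reconstruction(per_point_observations):
    """Build (degraded_rgb, reference_rgb, range) samples from multi-view observations.

    per_point_observations : list of observation lists, one per reconstructed 3D
    point; each observation is an (rgb, camera_to_point_distance_m) tuple taken
    from the images used for the 3D reconstruction.
    The closest (least attenuated) observation of each point serves as reference."""
    samples = []
    for obs in per_point_observations:
        obs_sorted = sorted(obs, key=lambda o: o[1])          # sort by distance
        ref_rgb = np.asarray(obs_sorted[0][0], dtype=float)   # closest view as target
        for rgb, dist in obs_sorted[1:]:
            samples.append((np.asarray(rgb, dtype=float), ref_rgb, dist))
    return samples

# Toy usage: one point seen at 1.5 m and 4.0 m.
samples = training_pairs_from_reconstruction(
    [[((0.55, 0.50, 0.45), 1.5), ((0.20, 0.35, 0.40), 4.0)]])
```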

Nesrine Karboua (PhD Student, LS2N, Nantes University, France)

« Simulator comparison for underwater robotics »

As the exploration and monitoring of underwater environments grow in importance for science, industry and conservation, underwater robotics is positioning itself as a key sector of research and development. Simulators provide an essential platform for testing, refining and validating underwater robotic technologies in a simulated setting before their application in real-world conditions. This presentation focuses on an in-depth benchmarking analysis of key simulators such as Gazebo, Stonefish, UUV Simulator, HoloOcean, URSim/UWRoboticsSimulator and MARUS, examining their maintainability and their ability to accurately simulate complex marine environments. By examining each simulator's origin, how actively it is maintained, and its effectiveness in modeling the physics and rendering of underwater environments, this study aims to help researchers and developers select the simulator best suited to their specific projects.

Thanh Phuong Nguyen (Dr., LIS, Toulon University, France)

« Toward an effective mobile deep vision based on neural network compression »

Nowadays, deep learning is prevalent in many fields of computer science and related areas such as computer vision, speech recognition, natural language processing, robotics, etc. Deep models generally have large memory footprints and are often computationally expensive during the inference phase. In order to deploy a deep vision model on embedded devices with limited computing resources, the model should be lightweight, both energy- and compute-efficient, yet still performant. This talk presents our recent efforts to compress a pre-trained deep convolutional neural network without significantly impacting its performance.
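
As a toy illustration of what compressing a network can involve, the sketch below applies two classic post-training techniques to a single weight matrix: magnitude pruning (zeroing the smallest weights) and uniform 8-bit quantization. It is a generic NumPy example, not the compression method developed by the speaker.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (post-training pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_uint8(weights):
    """Uniform affine 8-bit quantization; returns codes plus (scale, offset)."""
    w_min, w_max = weights.min(), weights.max()
    scale = max((w_max - w_min) / 255.0, 1e-12)
    codes = np.round((weights - w_min) / scale).astype(np.uint8)
    return codes, scale, w_min

def dequantize(codes, scale, w_min):
    """Map 8-bit codes back to approximate float weights."""
    return codes.astype(np.float32) * scale + w_min

# Toy usage on a random "layer": prune half the weights, then quantize the rest.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.5)
codes, scale, w_min = quantize_uint8(w_pruned)
reconstruction_error = np.abs(dequantize(codes, scale, w_min) - w_pruned).mean()
```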

Henrique Fagundes Gasparoto (Autorob, ISEN Brest, France)

« Reconfigurable Underwater Robots »

Loïca Avanthey and Laurent Beaudoin (Dr., SEAL, LRE, EPITA, France)

« Micro-Geodesic Ground Truth Networks and Datasets: Feedback on the 2022 Edition »

(Click on the image to view the presentation)