This work focuses on vision-based long-term localization under challenging conditions. In recent years, visual SLAM (Simultaneous Localization And Mapping) has proven to be a very powerful algorithm for the real-time localization of robotic systems in a variety of applications. It uses images from one or more of the robot's on-board cameras to simultaneously estimate the trajectory of the cameras (i.e., the trajectory of the robot) and the structure of the scene in which it operates. SLAM is a central topic in the robotics research field, and its applications are becoming increasingly numerous. Indeed, SLAM applications now cover a wide range of fields, from smart robot vacuum cleaners [Hasan 2014], to the exploration of dangerous areas [Holmquist 2017, Weidner 2017, Vanicek 2018, Appapogu 2019] (e.g., natural disaster areas, military combat zones, caves, underwater environments, etc.), to the mapping of environments inaccessible to humans (e.g., NASA's Mars Exploration Rover 2003 [Di 2008] and Mars 2020 missions, China's Chang'E-3 [Wang 2014] and Chang'E-4 [Liu 2020] missions). Figure 1.1 presents some examples of SLAM applications in various fields.