Profile

M.Sc. Francis Engelmann
Room 127
Phone: +49 241 80 20760
Fax: +49 241 80 22731
Email: engelmann@vision.rwth-aachen.de

I am always looking for motivated students: if you have solid programming skills and want to work hard on exciting projects, please contact me!


Publications


Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds
Francis Engelmann, Theodora Kontogianni, Alexander Hermans, Bastian Leibe
IEEE International Conference on Computer Vision (ICCV'17) 3DRMS Workshop

Deep learning approaches have made tremendous progress in the field of semantic segmentation over the past few years. However, most current approaches operate in the 2D image space. Direct semantic segmentation of unstructured 3D point clouds is still an open research problem. The recently proposed PointNet architecture presents an interesting step ahead in that it can operate on unstructured point clouds, achieving decent segmentation results. However, it subdivides the input points into a grid of blocks and processes each such block individually. In this paper, we investigate how such an architecture can be extended to incorporate larger-scale spatial context. We build upon PointNet and propose two extensions that enlarge the receptive field over the 3D scene. We evaluate the proposed strategies on challenging indoor and outdoor datasets and show improved results in both scenarios.
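The block-wise processing mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's code: the function name, the block size, and the choice to partition only over x/y (keeping full height per block) are assumptions for illustration.

```python
import numpy as np

def partition_into_blocks(points, block_size=1.0):
    """Assign each 3D point to a grid block of side `block_size` in the
    ground plane, mimicking the block-wise processing described above.
    `points` is an (N, 3) array; returns a dict mapping (ix, iy) block
    indices to arrays of the points falling inside that block."""
    indices = np.floor(points[:, :2] / block_size).astype(int)
    blocks = {}
    for idx, pt in zip(map(tuple, indices), points):
        blocks.setdefault(idx, []).append(pt)
    return {k: np.asarray(v) for k, v in blocks.items()}

# Example: random points in a 4 m x 4 m x 2 m volume
pts = np.random.rand(1000, 3) * [4.0, 4.0, 2.0]
blocks = partition_into_blocks(pts, block_size=1.0)
```

Because each block is processed independently in the original PointNet pipeline, a point near a block border never sees points just across the boundary; the paper's extensions enlarge the receptive field to counter exactly this.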

BibTeX:
@inproceedings{3dsemseg_ICCVW17,
  author    = {Francis Engelmann and Theodora Kontogianni and Alexander Hermans and Bastian Leibe},
  title     = {Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds},
  booktitle = {{IEEE} International Conference on Computer Vision, 3DRMS Workshop, {ICCV}},
  year      = {2017}
}





Keyframe-Based Visual-Inertial Online SLAM with Relocalization
Anton Kasyanov, Francis Engelmann, Jörg Stückler, Bastian Leibe
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'17)

Complementing images with inertial measurements has become one of the most popular approaches to achieve highly accurate and robust real-time camera pose tracking. In this paper, we present a keyframe-based approach to visual-inertial simultaneous localization and mapping (SLAM) for monocular and stereo cameras. Our method is based on a real-time capable visual-inertial odometry method that provides locally consistent trajectory and map estimates. We achieve global consistency in the estimate through online loop-closing and non-linear optimization. Furthermore, our approach supports relocalization in a map that has been previously obtained and allows for continued SLAM operation. We evaluate our approach in terms of accuracy, relocalization capability and run-time efficiency on public benchmark datasets and on newly recorded sequences. We demonstrate state-of-the-art performance of our approach compared to a visual-inertial odometry method in recovering the trajectory of the camera.
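The effect of the online loop-closing and non-linear optimization mentioned in the abstract can be sketched with a deliberately tiny toy: 1D poses, scalar odometry, and plain gradient descent on squared residuals. The real system optimizes 6-DoF keyframe poses with IMU factors; everything here (function name, 1D state, solver) is a simplifying assumption for illustration only.

```python
import numpy as np

def optimize_pose_graph(odometry, loop_closures, n_iters=100, lr=0.1):
    """Toy 1D pose-graph optimization. `odometry` holds relative
    measurements between consecutive poses; `loop_closures` is a list of
    (i, j, measured_offset) constraints. Gradient descent on the sum of
    squared residuals stands in for the non-linear optimization."""
    poses = np.concatenate([[0.0], np.cumsum(odometry)])  # dead-reckoned init
    for _ in range(n_iters):
        grad = np.zeros_like(poses)
        for i, d in enumerate(odometry):          # odometry residuals
            r = (poses[i + 1] - poses[i]) - d
            grad[i + 1] += 2 * r
            grad[i] -= 2 * r
        for i, j, d in loop_closures:             # loop-closure residuals
            r = (poses[j] - poses[i]) - d
            grad[j] += 2 * r
            grad[i] -= 2 * r
        grad[0] = 0.0                             # anchor the first pose
        poses -= lr * grad
    return poses

odometry = [1.0, 1.0, 1.0]   # drifting forward measurements
loops = [(0, 3, 2.7)]        # loop closure: total motion was actually 2.7
poses = optimize_pose_graph(odometry, loops)
# the drift is spread evenly over the cycle, pulling poses[3] from 3.0 toward ~2.78
```

The loop-closure constraint distributes the accumulated odometry error over the whole trajectory, which is the mechanism behind the global consistency claimed in the abstract.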

BibTeX:
@inproceedings{Kasyanov2017_VISLAM,
  title     = {{Keyframe-Based Visual-Inertial Online SLAM with Relocalization}},
  author    = {Anton Kasyanov and Francis Engelmann and J\"org St\"uckler and Bastian Leibe},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems {(IROS)}},
  year      = {2017}
}





SAMP: Shape and Motion Priors for 4D Vehicle Reconstruction
Francis Engelmann, Jörg Stückler, Bastian Leibe
IEEE Winter Conference on Applications of Computer Vision (WACV'17)

Inferring the pose and shape of vehicles in 3D from a movable platform remains a challenging task due to the projective sensing principle of cameras, difficult surface properties, e.g., reflections or transparency, and illumination changes between images. In this paper, we propose to use 3D shape and motion priors to regularize the estimation of the trajectory and the shape of vehicles in sequences of stereo images. We represent shapes by 3D signed distance functions and embed them in a low-dimensional manifold. Our optimization method allows for imposing a common shape across all image observations along an object track. We employ a motion model to regularize the trajectory to plausible object motions. We evaluate our method on the KITTI dataset and show state-of-the-art results in terms of shape reconstruction and pose estimation accuracy.
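A 3D signed distance function, the shape representation referred to in the abstract, is easiest to see for an analytic shape. The paper embeds learned vehicle SDFs in a low-dimensional manifold; the sphere below is purely an illustrative stand-in.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside the surface,
    zero on it, positive outside. Vehicle shapes in the paper are
    represented by (learned, discretized) functions of this kind."""
    return np.linalg.norm(points - center, axis=-1) - radius

p = np.array([[0.0, 0.0, 0.0],   # center        -> inside
              [2.0, 0.0, 0.0],   # 2 units away  -> outside
              [1.0, 0.0, 0.0]])  # on the surface
d = sphere_sdf(p, center=np.zeros(3), radius=1.0)
# d == [-1.0, 1.0, 0.0]
```

Because the zero level set is the object surface, an SDF supports both rendering-style queries and smooth gradient-based alignment against stereo depth measurements.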

BibTeX:
@inproceedings{EngelmannWACV17_samp,
  author    = {Francis Engelmann and J{\"{o}}rg St{\"{u}}ckler and Bastian Leibe},
  title     = {{SAMP:} Shape and Motion Priors for 4D Vehicle Reconstruction},
  booktitle = {{IEEE} Winter Conference on Applications of Computer Vision, {WACV}},
  year      = {2017}
}





Joint Object Pose Estimation and Shape Reconstruction in Urban Street Scenes Using 3D Shape Priors
Francis Engelmann, Jörg Stückler, Bastian Leibe
German Conference on Pattern Recognition (GCPR'16) Oral

Estimating the pose and 3D shape of a large variety of instances within an object class from stereo images is a challenging problem, especially in realistic conditions such as urban street scenes. We propose a novel approach that uses a compact shape manifold to represent the shapes within an object class for object segmentation, pose and shape estimation. Our method first detects objects and estimates their pose coarsely in the stereo images using a state-of-the-art 3D object detection method. An energy minimization method then aligns shape and pose concurrently with the stereo reconstruction of the object. In experiments, we evaluate our approach for detection, pose and shape estimation of cars in real stereo images of urban street scenes. We demonstrate that our shape manifold alignment method yields improved results over the initial stereo reconstruction and object detection method in depth and pose accuracy.
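One common way to build such a compact shape manifold is a linear (PCA-style) embedding of flattened shape representations; a low-dimensional code then parameterizes a full shape. This is a generic sketch under that assumption, not the paper's implementation, and all names here are hypothetical.

```python
import numpy as np

def build_shape_manifold(shapes, n_dims=5):
    """Embed flattened shape representations (e.g. voxelized SDFs, one
    per row) into a low-dimensional linear manifold via SVD/PCA.
    Returns the mean shape and the top principal directions."""
    mean = shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_dims]

def encode(shape, mean, basis):
    """Project a shape onto the manifold, yielding a compact code."""
    return basis @ (shape - mean)

def decode(code, mean, basis):
    """Reconstruct a full shape from its low-dimensional code."""
    return mean + basis.T @ code

# Demo: synthetic "shapes" that truly live on a 2D linear manifold
rng = rng = np.random.default_rng(0)
B = rng.normal(size=(2, 32))
shapes = rng.normal(size=(10, 2)) @ B
mean, basis = build_shape_manifold(shapes, n_dims=2)
```

During energy minimization, optimizing over the few entries of the code (rather than every voxel of the shape) is what keeps joint pose-and-shape alignment tractable and regularized toward class-typical shapes.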

BibTeX:
@inproceedings{EngelmannGCPR16_shapepriors,
  title     = {Joint Object Pose Estimation and Shape Reconstruction in Urban Street Scenes Using {3D} Shape Priors},
  author    = {Francis Engelmann and J\"org St\"uckler and Bastian Leibe},
  booktitle = {Proc. of the German Conference on Pattern Recognition (GCPR)},
  year      = {2016}
}





Multi-Scale Object Candidates for Generic Object Tracking in Street Scenes
Aljoša Ošep, Alexander Hermans, Francis Engelmann, Dirk Klostermann, Markus Mathias, Bastian Leibe
IEEE International Conference on Robotics and Automation (ICRA'16)

Most vision based systems for object tracking in urban environments focus on a limited number of important object categories such as cars or pedestrians, for which powerful detectors are available. However, practical driving scenarios contain many additional objects of interest, for which suitable detectors either do not yet exist or would be cumbersome to obtain. In this paper we propose a more general tracking approach which does not follow the often used tracking-by-detection principle. Instead, we investigate how far we can get by tracking unknown, generic objects in challenging street scenes. As such, we do not restrict ourselves to only tracking the most common categories, but are able to handle a large variety of static and moving objects. We evaluate our approach on the KITTI dataset and show competitive results for the annotated classes, even though we are not restricted to them.

BibTeX:
@inproceedings{Osep16ICRA,
  title     = {Multi-Scale Object Candidates for Generic Object Tracking in Street Scenes},
  author    = {O\v{s}ep, Aljo\v{s}a and Hermans, Alexander and Engelmann, Francis and Klostermann, Dirk and Mathias, Markus and Leibe, Bastian},
  booktitle = {{IEEE} International Conference on Robotics and Automation {(ICRA)}},
  year      = {2016}
}



