Opportunities
- Details
- Category: Open Subjects
Cobots, or collaborative robots, are gaining their place in industry and are expected to reach our homes. We are starting to see assembly cells where robots and humans work together on assembly tasks. There are, however, risks, currently related more to human error than to robot failure. These risks can have different severity levels, from a badly assembled product to a human being hurt through operator inattention.
Frequently, attention is lost for just a fraction of a second, and by the time the human detects the error it is too late to press the safety-stop button.
The objective of this work is to develop a virtual scenario for studying the interaction issues between robots and humans in collaborative environments, e.g. the handover of objects. Speed, visibility and other factors will be analysed to understand which are adequate for correct and efficient collaboration. This work will benefit from the active collaboration between IS3L-ISR and ProActionLab, and will be complemented by the development of a Brain-Computer Interface designed for safe human-robot interaction.
The work is expected to include the development of a Gazebo-based environment containing a Kinova Gen3 robot (and possibly a Baxter) controlled by ROS modules. These modules will be designed so that they can reproduce the same scenarios and situations with the real physical counterparts of these robots, available in our labs.
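One of the interaction factors to be studied, speed, can be coupled to the robot's distance from the human. A minimal sketch of such a speed-scaling rule, where the distances and limits are illustrative assumptions rather than project parameters:

```python
def scaled_speed(distance_m, v_max=1.0, stop_dist=0.3, slow_dist=1.2):
    """Scale the robot's commanded speed by its distance to the human.

    Below `stop_dist` the robot halts; above `slow_dist` it runs at full
    speed; in between, speed grows linearly with distance. All values
    here are illustrative placeholders, not project parameters.
    """
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= slow_dist:
        return v_max
    return v_max * (distance_m - stop_dist) / (slow_dist - stop_dist)
```

In the Gazebo scenario, a ROS control module could evaluate such a rule in its loop, with the human-robot distance estimated from the simulated sensors.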
- Details
- Category: Open Subjects
Coadvisor: Bruno Patrão
Virtual and Augmented Reality are increasingly used in therapeutic applications: physical rehabilitation, post-stroke rehabilitation, vestibular rehabilitation, and psychological therapies, all of which have already been addressed by our group.
It is this last context that the current proposal targets. As psychologists frequently struggle to find stimulation scenarios for different conditions, this project aims to create the basis for a configurable VR scenario designed to induce a particular set of emotional states.
- Details
- Category: Open Subjects
Scope
This PhD is part of a project funded by industry – the Portuguese Mint and Official Printing Office – whose objective is to extract information from faces that can be used as security elements in ID cards, passports, airport gates, e-gates and other security documents or portals.
Objectives
The main objective of this PhD project is to study and develop new methods to extract information from human face images in order to build a unique descriptor of each face. Faces are used in several applications to identify humans and to govern the interaction between humans and machines.
One of the main problems in building large databases of human faces is that the validation process can become intractable even for medium-sized databases. Compact description vectors of faces can therefore be the best solution for real-time or near real-time applications, since they represent a trade-off between validation accuracy and decision time, which is of key importance in industrial and human-machine systems.
Most of the existing solutions are pixel-based. Some paths to explore are based on the semantic content of images, from which useful information describing the human face can be extracted. The simplest examples include describing the colour of the skin, eyes and eyebrows, the shape of the nose, and other features through semantic interpretation and extraction. Such data can accelerate the validation process, achieving considerable speed-ups relative to pixel-based techniques. It is one of the main objectives of this work to explore this trade-off between pixel-based and semantic-based approaches.
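With compact descriptors, validation reduces to fast vector comparisons. A minimal numpy sketch, assuming faces are already encoded as fixed-length descriptor vectors (the encoding itself, pixel-based or semantic, is the subject of the thesis; the descriptor size and threshold are illustrative):

```python
import numpy as np

def validate(query, database, threshold=0.8):
    """Match a face descriptor against a database by cosine similarity.

    `query` is a 1-D descriptor; `database` is an (n, d) array of stored
    descriptors. Returns the index of the best match, or None if no
    entry exceeds the similarity threshold. Threshold and dimensions
    are illustrative assumptions.
    """
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                      # one dot product per stored face
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```

The decision cost is one dot product per stored face, independently of image resolution, which is where the speed-up over pixel-based comparison comes from.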
In terms of utility, the approach will provide the necessary support for querying databases using natural language, or for generating such descriptions of a person from a photo or a robot's onboard camera and having the robot describe that person through speech.
Additionally, human faces express emotions all the time, and facial expressions can be strong enough to disturb the validation process. It is also an objective of this project to identify emotions in human faces and to produce a descriptor that is agnostic to these emotions, so that the validation process can match two faces even when they express different feelings or emotions.
Applications
- Face identification for security documents
- Human identification at airport e-gates and other portals
- Face identification for smartphones and mobile devices
- Human-Robot Interaction
Advisors: Prof. Paulo Menezes and Prof. Nuno Gonçalves
- Details
- Category: Open Subjects
Objective: To develop a quality control system that uses computer vision to verify the physical parameters of the plates of certain bicycle-chain models.
SRAMPORT, a company of the Chicago-based SRAM group, produces high-quality bicycle chains.
A chain is composed of outer plates, inner plates, rollers and pins. The plates are produced in presses by a stamping process. The physical parameters of the produced plates are currently checked manually.
The aim of this work is to develop a system capable of measuring the physical characteristics of several plates spread on a base; after the measurement, the system will accept or reject each sample plate according to the model's parameters and tolerances.
In a second phase, the system may evolve towards continuous measurement and integration into the production process.
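The accept/reject step described above can be sketched as a tolerance check per measured parameter. The parameter names and values below are illustrative placeholders, not SRAMPORT's actual specifications:

```python
def inspect_plate(measured, nominal, tolerance):
    """Accept or reject one plate against the model's nominal parameters.

    `measured`, `nominal` and `tolerance` are dicts keyed by parameter
    name (e.g. plate length or hole diameter, in mm). Returns a pass
    flag and the dict of out-of-tolerance measurements. Names and
    values are illustrative, not real chain specifications.
    """
    failures = {k: measured[k]
                for k, nom in nominal.items()
                if abs(measured[k] - nom) > tolerance[k]}
    return (len(failures) == 0, failures)
```

In the envisaged system, the `measured` values would come from the vision pipeline (segmentation of each plate on the base followed by metric measurement), while `nominal` and `tolerance` would come from the model's specification.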
Workplace: ISR-UC and SRAMPORT
Supervision: Prof. Paulo Menezes and Eng. Paulo Silva (SRAMPORT)
- Details
- Category: Open Subjects
Most people have watched video-mapping shows, where pictures and movies are projected using building structures as a screen.
We know that pointing a projector at a wall requires adjusting its orientation or correcting the "trapezoidal" (keystone) distortion, a projective transformation that appears when the optical axis is not perpendicular to the wall plane. When projecting onto non-planar surfaces, the distortion effects can render the projected image hard to understand, depending on the relative positions of the projector, the "screen" surface(s) and the viewer.
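For a planar screen, the keystone correction amounts to a homography between the projector's image plane and the wall plane. A minimal numpy sketch of estimating it from four point correspondences (e.g. obtained with the Kinect), using the direct linear transform:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping four source points to four
    destination points (direct linear transform via SVD). This covers
    the planar case only; non-planar surfaces need a per-surface model,
    e.g. built from the Kinect depth map."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)           # null-space vector of the system
    return h / h[2, 2]

def warp_point(h, x, y):
    """Apply the homography to one point (homogeneous coordinates)."""
    p = h @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Pre-warping the projected image with the inverse of this transform makes it appear undistorted on the wall from the calibrated viewpoint.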
The objective of this work is to develop an interactive system composed of a Kinect device and a video projector that will be used to project onto a table, a slanted wall, or even the floor near a robot. Using new low-consumption LED-based video projectors, it is possible to embed the projector, together with a Kinect-based device, on a mobile robot, which may then use this system to display information on the most appropriate nearby surface.
There are indeed several interesting possibilities for using this system, such as (1) projecting over a set of objects to modify their appearance, or to create a scenario for a game based on the manipulation of those objects, or (2) industrial contexts.
Although this work will aim at developing a game-based rehabilitation tool for stroke patients or elderly people, it will be closely related to an industrial application in the context of an ongoing research project.
Remarks:
This work will benefit from an ongoing project and from a collaboration with LAAS-CNRS, a large research institution in Toulouse, France.
Depending on the pace of the project, the student may be invited to do an internship at LAAS-CNRS, co-advised by Prof. Frédéric Lerasle; the working language can be either English or French, depending on the student's fluency in these languages.
Workplace: ISR-Coimbra / LAAS-CNRS in Toulouse, France

- Details
- Category: Open Subjects
Deep learning-based techniques are at the centre of attention of companies, researchers and even the general public. Applications in the recognition of people, places and objects have demonstrated the power of these techniques, achieving unprecedented success rates.
One interesting application of neural networks is so-called "style transfer": given an input picture or illustration and an exemplar from a specific painter, the system produces a version of the input picture that closely resembles that painter's works.
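In the classic neural formulation of style transfer, the "style" of the exemplar is captured by Gram matrices of CNN feature maps, and the output is optimised to bring its Gram matrices close to the exemplar's. A numpy sketch of that style term, assuming the feature maps come from a pretrained network (here they are just arrays):

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-by-channel correlations, which capture style statistics
    while discarding spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (h * w)

def style_loss(generated, exemplar):
    """Mean squared distance between the two Gram matrices."""
    g1, g2 = gram(generated), gram(exemplar)
    return float(np.mean((g1 - g2) ** 2))
```

Minimising this loss (together with a content term on the input picture's features) is what drives the output towards the painter's style.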
Read more: deepSTAIl: Style Transfer for Artificial Illustrations
- Details
- Category: Open Subjects
Medical ultrasound is a diagnostic technique based on the application of ultrasound to obtain views of internal body structures and organs. Among its advantages are low cost, the absence of ionising radiation, and real-time imaging. Learning it implies developing the capability to interpret images that correspond to 2D scans of the inside of the body.

Being based on the emission of mechanical waves and the detection of their reflections, the technique is sensitive to the elements the waves encounter along their travel paths. It is therefore normal that artefacts result from reflections or other sources; the experienced physician discards them through a careful choice of probe scan movements and by varying the probe's orientation.

In this project we intend to create a system that includes a haptic device (Phantom or …) to simulate the positioning of the sonograph probe, and an HMD to visualise both the (virtual) patient and the sonograph display.
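The echoes the simulator must reproduce arise because each tissue interface reflects part of the wave. At normal incidence, the reflected intensity fraction follows from the acoustic impedances of the two media; a minimal sketch, with textbook impedance values as illustrative assumptions rather than calibrated data:

```python
def reflection_coefficient(z1, z2):
    """Fraction of incident ultrasound intensity reflected at a planar
    interface between media of acoustic impedances z1 and z2, at
    normal incidence."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Approximate acoustic impedances in MRayl (illustrative textbook values).
Z = {"soft_tissue": 1.63, "bone": 7.8, "air": 0.0004}
```

The near-total reflection at tissue-air interfaces, for instance, is what produces the shadowing artefacts that trainees must learn to recognise and avoid by repositioning the probe.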
- Details
- Category: Open Subjects
Manipulation of microscopic objects is gaining relevance as techniques evolve and bring the possibility of dealing with smaller and smaller objects. In biology, for example, the possibility of manipulating individual living cells, embryos and stem cells, and of performing genetic therapies, is of utmost importance. Performing an injection into a cell is a task that requires extensive practice, in particular because it is performed at a scale where the magnitudes of the involved forces are very hard to measure, and only a 2D-like visualisation of the task is possible through a microscope. Given this, it is quite common for cells to burst due to imprecise manipulation, which in turn results from the lack of reliable feedback that would allow the operator to sense the task properly.

In this work we intend to develop a system for manipulating cells or other microscopic objects, based on the generation of haptic forces derived from visual cues extracted from the manipulated objects' deformations.
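Since the micro-scale forces cannot be measured directly, the haptic loop has to infer them from what the camera sees. A minimal sketch of that mapping, assuming a linear-elastic (Hookean) model where the rendered force is proportional to the visually measured membrane indentation; the stiffness and amplification values are placeholders, not measured cell properties:

```python
def rendered_force(indentation_um, stiffness_n_per_m=0.02, gain=1e6):
    """Map a visually measured membrane indentation (micrometres) to a
    haptic force (newtons). A Hookean model estimates the micro-scale
    contact force, which is then amplified so the operator can feel it;
    stiffness and gain are illustrative placeholders."""
    indentation_m = indentation_um * 1e-6
    raw = stiffness_n_per_m * indentation_m   # estimated micro-scale force
    return raw * gain                         # amplified for the haptic device
```

In the full system, `indentation_um` would be extracted by the vision module from the tracked deformation of the cell contour under the injection pipette.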
Read more: Haptic Interaction for Simulated Micromanipulation