Robots are all around us. Only some of them look like us, and many don't look anything like what we would expect a robot to look like. We might encounter robots every day without even noticing. In this section, we'll introduce several sectors where robots are utilised to great benefit. Robots are applied in many more domains; however, we believe the examples below will help you understand the capabilities and possibilities of robots.
Consumer robotics and household robots
The vision of the 80s is here: we have domestic robots in our homes, including robotic mops, vacuum cleaners, window cleaners, pool cleaners, and lawnmowers. These robots are designed to make our lives easier by doing household chores for us. Some humanoid robots also belong here, as they are not only for entertainment but can also work around the house for us. Examples of household robots include:
Humanoid robot with a voice assistant
One of the most famous humanoid robots is the Ubtech Lynx robot, empowered by Amazon Alexa. Alexa is a virtual voice assistant created by Amazon; with it, smart gadgets can be controlled using natural language. You can also engage in small talk and chat about the weather, news, and recipes. Ubtech Lynx is a humanoid robot with a built-in voice assistant. It uses facial recognition to recognise people and call them by name, and it can give instructions for practising yoga and other sports activities. The robot can also manage our calendar and remind us to join a team meeting or to answer important emails.
Robotic pool cleaner
Pool owners know that pool cleaning can be an all-day activity, as the pool is generally located outside the house and is exposed to the weather. There are several manual devices for cleaning the pool as a cheaper option (brushes), but a time- and energy-saving solution is the automatic pool cleaner. The robotic pool cleaner goes under the water and uses its tracked undercarriage to run along the bottom and walls of the pool, cleaning the entire area with the help of sensors and brushes.
Robotic vacuum cleaner
Robot vacuum cleaners have sensors, lasers, an internet connection and a built-in computer. Today's robotic vacuum cleaners are autonomous machines – most of the time, they need no human supervision or instructions to clean the house, and they can charge themselves when the battery runs low. A smart robot vacuum cleaner first explores our home and automatically creates a floorplan in its built-in computer. From then on, it remembers which parts of the home it has cleaned and when. Compared to traditional hoovers, robot vacuum cleaners have several advantages: they work autonomously, they can go under furniture, they need less storage space, and they represent a trendy and smart lifestyle as well. Today, robot vacuum cleaners cost about the same as a mid-range hoover (from approx. 200 EUR).
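The floorplan idea can be sketched in a few lines of code. This is an illustrative toy model – the grid, cell markers and row-by-row sweep are all our own simplifications, not a real vacuum robot's software: the home becomes an occupancy grid, and the robot marks cells as cleaned while tracking its coverage.

```python
# Toy occupancy grid for a robot vacuum (hypothetical, for illustration).
FREE, OBSTACLE, CLEANED = ".", "#", "c"

def make_floorplan(rows):
    """Parse a list of strings into a mutable grid."""
    return [list(row) for row in rows]

def clean_row_by_row(grid):
    """Naive 'lawnmower' coverage: sweep every row, skip obstacle
    cells (furniture, walls) and mark free cells as cleaned."""
    for row in grid:
        for x, cell in enumerate(row):
            if cell == FREE:
                row[x] = CLEANED
    return grid

def coverage(grid):
    """Fraction of reachable (non-obstacle) cells already cleaned."""
    cells = [c for row in grid for c in row]
    reachable = [c for c in cells if c != OBSTACLE]
    return sum(1 for c in reachable if c == CLEANED) / len(reachable)

plan = make_floorplan([
    "....#",
    "..#..",
    ".....",
])
clean_row_by_row(plan)
print(coverage(plan))  # every free cell cleaned -> 1.0
```

Real robots, of course, build this map from sensor data and plan paths around obstacles rather than sweeping blindly.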
Social robots
Social robots are created to communicate with us and to entertain us. They look partly or entirely like humans, meaning these robots are mostly humanoids. A social robot can be as simple as a single monitor with eyes and a mouth that mimics a human face, while advanced social humanoids have a body, facial expressions and gestures similar to a human's. To a limited extent, humanoids are also able to recognise and analyse human social behaviour and respond accordingly, using computer vision, speech recognition and synthesis, and natural language processing techniques. Social robots are sometimes covered with an elastic surface similar to human skin, while in other cases they have a plastic shell. Under the surface there are many elements, including the servo motors that move the robot's parts, as discussed previously.
Currently one of the most famous humanoids is Sophia, Hanson Robotics's humanoid creation. Sophia is a lifelike robot torso with hands and a head (lately even with legs) designed to look like the famous actress Audrey Hepburn. Sophia can respond to questions and engage in small talk. She has 50 facial expressions, including joy, confusion, sorrow, and curiosity. She was created for research, education and entertainment purposes.
One commercially used humanoid is Softbank Robotics’ Pepper, which looks more like a robot, with a human-like structure. Pepper is able to identify people, recognise emotions and communicate with natural language to a limited extent.
Please note that all social robots, even ones with a perfect, human-like appearance and behaviour, are still a million light-years away from artificial general intelligence (AGI). AGI means that a machine with complex and advanced algorithms is:
able to act like it has natural intelligence
able to make decisions according to the environment
able to adapt to a new environment if the environment changes
The kind of intelligent, thinking and feeling robots that are in sci-fi movies do not exist. With current technologies, it’s not possible to create such intelligence – even if an existing robot looks very smart sometimes.
Healthcare (medical robots)
Robots are also present in hospitals and support medical facilities in several ways. We can find humanoids in some modern hospitals as social robots, or even doing some of the repetitive tasks usually performed by nurses. In addition, other types of robots perform functional tasks in surgery, helping the doctors to achieve the highest precision possible.
In critical situations, robot nurses can protect humans from viruses and infections. They are also handy in case of a significant increase in workforce need, such as in a pandemic. Robots can clean entire hospitals, wards, and rooms where infected patients are recovering. This is very important because it means nurses can concentrate on their important tasks with patients. Robots can also safely screen incoming patients for infection, guiding those with viral symptoms to a separate area where doctors can see them safely.
Surgical robots appeared in the mid-1980s. They are used in specialties such as cardiology, gynaecology, urology, and thoracic surgery, which generally call for minimally invasive techniques – the surgical procedure is performed through only small incisions in the body. With the help of robots, operations can be carried out through these small incisions with high precision, reducing the risk of infection. The method is also less tiring for the doctor, as it enables a sitting position – standing in the same position next to the patient throughout the procedure is no longer needed.
These medical robots are not humanoid; their appearance is closer to industrial robots. Surgical robots have a mechanical body augmented with robotic arms controlled by the doctors. The robotic arms have endoscopes (a long, thin, flexible tube with a camera and light source) which display a high-resolution image on the console for the surgeon.
Due to the structure of the machine, there are some disadvantages as well. The first is the price – as they must meet very high quality standards, these robots are extremely expensive. Furthermore, as doctors don't touch the patient – it's the robot that has physical contact with the body – there is no haptic feedback. In many cases, nurses stand next to the patient during surgery, supervising the patient and the robot, while the main "actor", the surgeon, controls the machine from a console a short distance away. This means that doctors have to learn how to operate these machines, even if they already know how to perform a specific surgery.
Robotic prostheses are controlled by a high-level integration of machine and human. In this case, the human-robot interaction is realised mostly by the small movements of muscles. Depending on the technology, these muscle movements are detected with electrical or push sensors (non-invasive or invasive), and the prosthesis moves accordingly. Advanced signal processing and AI technologies help the "translation" of the muscle signals to the movement of the robotic leg or arm. The "brain" (the embedded computer) of these robotic prostheses must be tiny, light and energy efficient to make it as convenient as possible to wear it for an extended period – which limits the complexity of the applied technologies.
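The signal path from muscle to prosthesis can be illustrated with a toy sketch. All values and thresholds here are hypothetical: a raw EMG-like signal is rectified and smoothed into an "envelope", and a grip command is triggered whenever the envelope crosses a threshold. Real controllers use far more sophisticated signal processing and machine learning than this.

```python
# Toy EMG-to-grip pipeline (hypothetical values, not a real
# prosthesis controller).
def envelope(signal, window=3):
    """Rectify the signal and apply a moving average to estimate
    how strongly the muscle is contracting."""
    rectified = [abs(s) for s in signal]
    out = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def grip_commands(signal, threshold=0.5):
    """Emit 'close' while the muscle is active, 'open' otherwise."""
    return ["close" if e > threshold else "open" for e in envelope(signal)]

# A burst of muscle activity in the middle of an otherwise quiet signal:
raw = [0.05, -0.04, 0.9, -0.8, 0.85, -0.9, 0.06, -0.05]
print(grip_commands(raw))
```

The smoothing step matters: raw EMG oscillates around zero, so thresholding it directly would make the hand flutter open and closed many times per second.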
Transportation (autonomous vehicles)
Autonomous robot vehicles can drive without any human interaction. The most actively researched area is self-driving cars. While driving, the onboard computer analyses the environment with advanced artificial intelligence (AI) methods, based on the data received from the vehicle's many sensors. Self-driving car development started in the 1990s.
Several levels of vehicle automation are defined as follows by the Society of Automotive Engineers (SAE):
Level 0: these are not cars from the future but from the present! Level 0 means that the vehicle's automation is limited to warnings and instantaneous assistance if needed. Such features include the emergency brake and blind spot and lane departure warnings.
Level 1: the car is controlled by a human, and the driver is assisted by automation – in this case, steering or acceleration support. Adaptive cruise control and lane assist are examples of such features. Adaptive cruise control means that the driver sets a speed and the vehicle maintains a safe distance from the car ahead, slowing down and speeding back up as needed. Lane assist steers the car back to the middle of the lane if the car is about to leave the lane without the turn signals being used. At Level 1, only one functionality is automated.
Level 2: this is almost the same as Level 1, but multiple functionalities are automated. Some solutions might look like a higher level of automation, however, this is the level we can meet on the roads at the time of writing (early 2021). Many large companies have excellent solutions at this level, like Tesla Autopilot, Mercedes-Benz Drive Pilot, Nissan ProPilot Assist 2 and Volvo's dedicated solution.
Level 3: steering, acceleration and braking at this level can be controlled by the vehicle's automation system. The car's driving system monitors the environment and takes action accordingly. The human driver still has to keep an eye on the road and be ready to take back control if needed. These features can be used only if the AI algorithm considers that it is safe to take over the controls. One approach to introducing Level 3 automation is to allow the AI to drive the car only in slow traffic conditions, such as in traffic jams, to lower the cognitive stress of the driver – helping to keep them relaxed. We are close to having Level 3 vehicles, so it is possible that when you are reading this you are already driving such a car.
Level 4: The entire driving process is taken over by the car's driving system. The system can analyse more complex situations, like the sudden appearance of objects on the road, and is able to handle those situations. That means the driver can relax and engage in other activities like writing emails or reading a book – especially on controlled-access highways. At Level 4, the driver is still able to take over the controls if they decide to. The Waymo test car is an example of this level.
Level 5: this level is the final goal of autonomous driving. The car is ready to drive itself without any human intervention. The level of automation is maximised. No steering wheel, pedals or brakes are included. The vehicle is able to make the best and safest decision in all conditions. It is able to recognise traffic signs, detect pedestrians, predict the behaviour of other vehicles on the road and avoid collisions, and it is able to avoid a dangerous situation even in the most extreme conditions.
In short, Levels 0, 1 and 2 need humans to stay alert, monitor the environment and control the vehicle. Automated support is given by the vehicle's self-driving system, but constant human supervision is required. Levels 3, 4 and 5 mark a new era of automation – a true revolution in robotics that will have a significant effect on society. At Level 3, the driver must still intervene when the system requests it, but otherwise the vehicle drives itself. The biggest change in automotive development starts with Level 4, where the driver is no longer needed to drive the car. And we are getting closer to introducing this level on public roads every day!
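The levels described above can be condensed into a small lookup table. This is a paraphrase of the descriptions in this text, sketched for illustration – not the official SAE J3016 wording:

```python
# SAE automation levels, paraphrased from the text above (illustrative).
SAE_LEVELS = {
    0: ("Warnings and momentary assistance only",
        "Human drives and monitors at all times"),
    1: ("One assisted function (e.g. adaptive cruise control)",
        "Human drives and monitors at all times"),
    2: ("Multiple assisted functions working together",
        "Human drives and monitors at all times"),
    3: ("Car drives itself in limited conditions (e.g. traffic jams)",
        "Human must take back control when requested"),
    4: ("Car handles the entire driving process",
        "Human may take over, but does not have to"),
    5: ("Full automation, no steering wheel or pedals",
        "No human driver needed at all"),
}

def driver_must_supervise(level):
    """Levels 0-2 require constant human supervision; 3+ do not."""
    return level <= 2

print(driver_must_supervise(2))  # True
print(driver_must_supervise(4))  # False
```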
Manufacturing (industrial robots)
Industrial robots work mostly in production. They can work 24/7, performing programmed operations which are generally sequential, recurrent and monotonous. Industrial robots have two main parts: the body, which contains the controlling unit, and the arm or arms, which perform fine-grained operations. Industrial robots can be operated by humans or by computers. The task of the controlling unit is to give instructions to the arm based on the commands of the operator or of a computer application. The robot arm, also called a manipulator, is able to exert a massive amount of force. It is also possible to equip industrial robots with sensors, so their status and the parameters of the environment can be tracked. Based on these data, early signs of failure in manufacturing can be detected, and maintenance can be scheduled before the robot stops working. Industrial robots are an extremely important part of our society, as this type of robot builds most of our electronic devices.
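The sensor-based maintenance idea mentioned above can be sketched as a simple threshold check. The baseline, tolerance and temperature values are invented for illustration – real predictive-maintenance systems use far richer models:

```python
# Toy predictive-maintenance check on motor temperature
# (hypothetical thresholds, for illustration only).
def needs_maintenance(readings, baseline=60.0, tolerance=10.0):
    """Flag the robot for maintenance when the average of the most
    recent temperature readings drifts too far above the expected
    baseline - a crude stand-in for early-failure detection."""
    recent = readings[-5:]
    return sum(recent) / len(recent) > baseline + tolerance

healthy = [58.2, 59.1, 60.4, 59.8, 60.0]      # degrees Celsius
overheating = [61.0, 66.5, 72.3, 78.9, 84.1]  # steadily climbing

print(needs_maintenance(healthy))      # False
print(needs_maintenance(overheating))  # True
```

The benefit is exactly what the text describes: the robot can be serviced on a planned schedule instead of stopping the production line unexpectedly.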
There are several types of industrial robots. These can be categorised based on their appearance and how they are used:
Articulated robots are often utilised in manufacturing. Generally, an articulated robot is a robotic arm with two or more rotary joints, also referred to as "axes" (the plural of "axis"). These axes are organised in a chain, each supporting the next joint along the robotic arm. The robot's body is attached to the ground, a wall or the ceiling, and the first joint is generally part of the body. The remaining joints then follow, depending on the purpose of the robot.
Six-axis robots are able to perform a wide variety of movements and can be applied to many industrial tasks. In this case, the robot can rotate on six different axes. The first axis, located at the base of the robot, lets the robot turn left and right. The second axis, located above the first, lets the arm move forwards and backwards. The third axis lifts the upper arm, allowing it to reach up and even behind the robot's body, while axes 4 and 5 allow smaller movements towards the end of the robot's arm. Axis 6 is the wrist of the robot, which can turn 360 degrees in both directions.
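The way a chain of rotary joints determines where the arm ends up can be illustrated with planar forward kinematics. This two-joint, two-dimensional sketch is a simplification of the six-axis arms described above:

```python
# Planar forward kinematics for a chain of rotary joints
# (simplified 2D sketch of an articulated arm).
import math

def forward_kinematics(lengths, angles):
    """Given link lengths and joint angles (radians, each relative
    to the previous link), return the (x, y) position of the arm's
    tip. Each joint adds its rotation to the chain."""
    x = y = 0.0
    total_angle = 0.0
    for length, angle in zip(lengths, angles):
        total_angle += angle
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# Two links of length 1, fully stretched along the x-axis:
print(forward_kinematics([1.0, 1.0], [0.0, 0.0]))        # (2.0, 0.0)
# Same arm with the "elbow" bent 90 degrees:
print(forward_kinematics([1.0, 1.0], [0.0, math.pi / 2]))
```

A real six-axis robot does the same computation in three dimensions, and usually also the much harder inverse problem: finding the joint angles that place the tool at a desired position.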
SCARA (Selective Compliance Articulated Robot Arm) robots are mainly used for smaller pick-and-place operations. They have several degrees of freedom, but mostly in the same plane. SCARA robots generally work with three axes, though four- and six-axis versions also exist. They are fast, with incredible acceleration, and at the end of each movement the robot stops at a precise position to pick or place an object. A robot arm like this can accurately pick and place approximately 120 elements per minute, though this value may vary depending on the application.
Parallel or delta robots, also called spiders due to their appearance, have three arms working in parallel, connected underneath the body of the robot. The body is mounted above the workplace. Deltas are used for pick-and-place operations, just like SCARA robots.
Cartesian robots or linear robots have rectangular arms moving straight along the three main axes of the Cartesian coordinate system. The robot's manipulator is on an overhead system to move the robot along the axes across a large workplace, also called a working envelope. This robot is used for pick-and-place applications, and it can handle big and heavy elements such as full boxes of metal parts. It is easily and highly configurable for many tasks.
Cylindrical robots have a rotational axis at the base. The next motor defines the height of the arm, and the arm's reach is set by a third motor. Cylindrical robots usually have a compact design, which enables them to be used for coating, tending, spot welding and assembly.
Polar robots are also called spherical robots. The robot base sits at the centre of a 'sphere', and the arm can reach any of the 'polar' coordinates by rotating along two axes and extending the arm. Polar robots are used for injection moulding or welding, for example.
Collaborative robots (cobots) are designed for humans and machines working together. They bridge the gap between the two, as cobots and humans can work safely with each other in the same work area. Cobots were invented by J. Edward Colgate and Michael Peshkin in 1996 as "a device and method for direct physical interaction between a person and a computer-controlled manipulator."
All these types of industrial robots are a fantastic help in manufacturing. They can replace people in carrying out repetitive – and sometimes dangerous – operations that would be too monotonous and tiring. In addition, the performance these robots offer often exceeds a human's: they work with consistent precision and quality, and quantity can be measured automatically in many cases. As industrial machines operate 24/7, productivity can be maximised.
Regarding the cost of an industrial robot, investment costs have to be paid when acquiring the robot. There are also certain maintenance and operational costs. When adding these three types of expenses, it is still usually cheaper for a manufacturing company to have robots instead of human workers across the whole process. Furthermore, as these industrial robots can perform many tasks, humans can be assigned to fields where they can create more added value for the company. The company needs people to contribute in the manufacturing process and to operate and maintain machines, to supervise processes and subprocesses, to intervene in ineffective operations, to report on performance and to ensure business continuity.
Agriculture (agricultural robots)
Industrial robots that work on or over fields and farms are called agricultural robots. These robots support food production. There are several types – the most popular are probably harvesting robots. Agricultural robots do repetitive and monotonous tasks, just like industrial robots. The two main tasks of harvesting robots are picking and placing. Common functions of other agricultural robots are seeding, weeding, pruning, phenotyping and thinning. These tasks are much harder to perform than they sound.
Harvesting robots
A simple picking action starts with locating the plant to harvest (such as a fruit or vegetable) with the help of a camera. The robot must also check whether the crop is ripe enough to be harvested, and the robotic arm must take care not to damage the crop while picking it. In addition, robots must cope with varied weather conditions: they have to be mobile enough to cross muddy ground and pick crops carefully even when it is windy, and they must withstand hot and cold weather as well as UV radiation. Some robots are fitted with solar panels, which help them operate in a 100% eco-friendly manner without any pollutant emissions. Using robotics in agriculture could be a revolution that helps decrease food waste worldwide.
Farm robots utilise advanced technology to pick vegetables in fields and greenhouses. Robots use a complex algorithm to check if the vegetable is ripe enough to be harvested. The determination is established with the help of a camera and LED light system to scan the right location first. Algorithms analyse the colours of the vegetable to determine if it is ripe enough. Next, the robot has to define the exact location of the vegetable on the plant to know the right moves to make to cut it with a small knife. The end of the process is to put the crop into an assembly basket. As even the same plant can have different vegetable shapes and sizes, this process requires advanced AI technologies – similar to the ones that are used in self-driving cars.
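The colour-analysis step can be illustrated with a toy ripeness check. The RGB values and the red/green ratio threshold below are invented for illustration – real harvesting robots use calibrated cameras and trained vision models rather than a single hand-picked rule:

```python
# Toy colour-based ripeness check for a tomato-style crop
# (hypothetical threshold, for illustration only).
def is_ripe(avg_rgb, red_ratio=1.5):
    """avg_rgb is the fruit's average (R, G, B) colour, 0-255.
    The crop is judged ripe when red clearly dominates green."""
    r, g, _b = avg_rgb
    return r / max(g, 1) >= red_ratio

print(is_ripe((200, 60, 40)))   # deep red -> True
print(is_ripe((90, 160, 50)))   # still green -> False
```

Averaging the colour over the detected fruit region (rather than checking single pixels) makes the decision robust to highlights and shadows – one reason the LED lighting mentioned above matters.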
Weeding robots
Several robots on the market can weed large areas. Human intervention is limited to setting up the program on the robot; it then autonomously carries out all the weeding work in the field. Weeding robots are equipped with a GPS sensor to obtain the machine's exact location, and with cameras to locate vegetables and distinguish them from weeds. Once the weeds are identified, the weeding robot cuts them into pieces. Nowadays weeding robots are eco-friendly, running 100% on electricity.
Other robots in agriculture
Sowing seeds over large areas is a really challenging task, and agricultural robots are a great help in replacing the manual effort in this process as well. A vehicle moves around the field sowing seeds into the ground, weeding along the way if necessary. Existing solutions can already weed up to 20 hectares per season, for example. Other agricultural robots include mowing, spraying, peeling, cleaning, sorting and packing robots.
There is a real need for robots working in agriculture. Manual work is not popular among the younger generation, who usually look more towards office work and want more flexibility and independence. This is not compatible with manual labour in the fields, where nature dictates the conditions and workers need to adapt. Agricultural robots could help meet farmers' great need for labour in the fields. Without a doubt, agricultural robots are needed just as much as industrial ones.
Military robots
One of the most futuristic parts of military technology is robotics. Military robots are used to perform military tasks, which can be classed as prevention or intervention. Bomb detection and disposal robots are for prevention. These robots are generally small and lightweight with low energy consumption, and they can replace humans in extremely dangerous situations. They typically have a high-resolution camera and a robotic arm that can be precisely controlled by humans.
Another military application involves drones. These drones are semi-automatic – when moving from A to B, computer programs control the machine. When the drone arrives at the combat zone, a human or group of humans takes control.
Rescue robots
Rescue robots are created to save lives in extreme situations and disasters. They are used in areas where human intervention is dangerous or impossible – for example, in earthquakes, floods, hurricanes, or fire emergencies. Rescue robots can enter emergency areas to find people in trouble and indicate to the rescue team the exact location of a trapped or missing person. Several robots are also able to carry medication if the situation requires it.
Rescue robots are a significant help to rescue teams. The team can stay further away from the dangerous situation, using the robots to focus on extended areas. Robots are replaceable, resistant to severe weather conditions, don't get injured like people can, don't need much time to rest (they just need to be charged) and have consistent performance.
Land rescue robots
The type of rescue robot depends on the scenario. The rescue team uses small, remotely controlled robots if the emergency area is small, and they use robust machines to explore disaster sites where debris lifting is necessary. Robots can even go to dangerous places with a high level of radiation to measure it and clean the debris.
Water rescue robots
Rescue in water is also possible with robots. The robot is navigated through the water and a drowning person can grab it and be pulled back to land. The machine is controlled remotely from the land by the lifeguard team. This life-saving process can be automated – the human control could be replaced by a computer with advanced AI technologies. In this case, the robot must have sensors that help to detect humans and obstacles (like boats and ferries) in the water. This would allow the robot to slow down automatically if a human is found and also avoid other objects, all without any human intervention.
Air rescue robots
Robot help from the air is also possible. In this case, drones are used to save lives. Several drones are used in mountain rescues, but they are also a great help in exploring above water. Drones can also be used to explore emergency areas and can carry weight on their bodies to transport medical supplies or life vests. The drone has a remote control and a screen on it, which is operated by humans. In contrast with a rescue aeroplane, drones can easily navigate in narrow areas, like woods or canyons, and are able to navigate closer to the ground.
Space robots
Robots can also be used for exploration and observation purposes. One main application domain is space exploration. The reasons to send robots instead of humans are similar to rescue: robots are replaceable and can outperform humans in several ways – they tolerate extreme weather conditions and high levels of radiation, and they can complete tasks that would be dangerous or impossible for humans. Researchers are working on robots for observation purposes, and also on humanoids to replace astronauts in the future.
Robots for observation collect a huge amount of data in the form of measurements, pictures and videos. These robots can also carry samples back to Earth, such as rocks, dust or other materials found in space. Space robots have to be as lightweight as possible to minimise the energy needed to transport them into space. Once there, weight becomes less of a constraint: on the surface of other planets, gravity is often much weaker than on Earth, so even large robots can move using less energy than they would need here. Still, every gram launched into space is very costly.
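The gravity argument can be made concrete with the weight formula, weight = mass × g. Surface gravity is roughly 9.81 m/s² on Earth and 3.71 m/s² on Mars, so the same rover presses down with well under half the force on Mars. The mass used below is roughly that of the Curiosity rover:

```python
# Worked example: the same rover "weighs" far less on Mars.
EARTH_G = 9.81  # surface gravity, m/s^2
MARS_G = 3.71   # surface gravity on Mars, m/s^2

def weight_newtons(mass_kg, g):
    """Weight is the force gravity exerts on a mass: W = m * g."""
    return mass_kg * g

rover_mass = 899.0  # kg, roughly the mass of the Curiosity rover
print(round(weight_newtons(rover_mass, EARTH_G)))  # ~8819 N on Earth
print(round(weight_newtons(rover_mass, MARS_G)))   # ~3335 N on Mars
```

Note that the rover's mass is unchanged – it is the force pressing it into the ground (and hence the energy needed to move it) that drops on Mars.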
Mars Exploration Rovers (MERs)
Mars rovers are probably the most famous space robots. The first, named Sojourner, was launched in 1997 as part of the Mars Pathfinder mission. It was followed by the twin Mars Exploration Rovers (MERs), Spirit and Opportunity, launched in 2003; about half a year and hundreds of millions of kilometres later, both successfully landed on Mars. The fourth rover, Curiosity (the Mars Science Laboratory mission), was launched in 2011, and its scientific objectives according to NASA are:
Determine whether life ever arose on Mars
Characterise the climate of Mars
Characterise the geology of Mars
Prepare for human exploration
These robots are brilliant examples of engineering and science. There are so many tasks that such a robot has to fulfil, including landing, navigation, adaptation to the environment, traversing difficult terrain and communicating in space. They must also be resistant to extreme cold and heat, have a robust drive system and low energy consumption, and use solar panels as an energy supply – as well as meeting many other challenges. In addition, failure costs a tremendous amount of money. Fortunately, all MERs fulfilled their original mission objectives. Spirit and Opportunity spent more than six and 14 "Earth-years" on Mars respectively, and Curiosity is still active at the time of writing (December 2020).
Micro rovers are good examples of lightweight robots used to explore space. A micro rover weighs approximately two kilos and is as big as a medium-sized book. This small robot is designed to collect geochemical data to explore the surface of planets. The rover is equipped with a tiny camera allowing it to send data and to analyse the surface visually to determine if it is rock, dust or sand. Regarding the structure of the machine, there is no battery on it for weight-saving purposes – two wires provide power from a larger machine.
Humanoid robots have been created to help or replace astronauts. The goal is to utilise them in dangerous situations. Humanoid astronauts have climbing programs, can use handrails, can operate outside of the space station without oxygen and can complete tasks given by the crew. As space can be a lonely place, crew members might prefer machines that are more like a human and seem like an additional member of the crew.