Researchers in Japan have created itty-bitty unmanned aerial vehicle (UAV) bees as a substitute for the declining population of biological, pollinating bees. It was bound to happen because, after all, natural drones are in fact male bees. The bees that pollinate are female, but never mind the semantics.
The manually controlled robotic bee is just 40 mm wide with a mass of 15 g. Attached to its underside is a patch of horsehair coated in a sticky gel, which lets the robo-bee pick up pollen from one flower and deposit it on the next. But here's the rub: these robo-bees cannot see and wouldn't know a flower from an umbrella, so for now they must be flown by hand. Compared to biological bees, which can pollinate some 5,000 flowers a day, the robo-bees cannot get the job done faster or cheaper than the real thing. But what if robo-bees could see?
A different group of researchers, at the University of Florida, is hoping to answer that question. They are working to develop a micro-sized light detection and ranging (LIDAR) device with a mass of about 0.06 g that can be mounted on a tiny aerial robot, giving it not only the ability to see but depth perception as well, something that comes in handy when you are trying to land on a flower swaying in the breeze. This gift of sight makes it possible to take the human out of the flight-control equation and, therefore, have a swarm of robots doing what robots do, which is picking up the slack.
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Isaac Asimov’s First Law of Robotics
People have been developing robots for hundreds of years, but General Motors was the first to employ one in an industrial setting, back in 1961. That first robot took over the dangerous job of welding parts onto auto bodies, and robots have been taking over other repetitive and hazardous jobs ever since. The robot itself had to be caged, however, as it was just as dangerous to the humans around it.
Then LIDAR came along, making robots not only faster and more precise but also safer to work alongside. When a LIDAR-enabled robot detects a human getting too close, it slows down or shuts down completely.
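The underlying logic is simple distance thresholding. Here is a minimal sketch, assuming the LIDAR scan arrives as a list of range readings in meters; the zone boundaries and function name are illustrative, not taken from any real industrial controller.

```python
# Illustrative LIDAR safety governor: zone sizes and names are
# hypothetical, not specs from any particular industrial robot.

SLOW_ZONE_M = 2.0   # something detected inside this range: reduce speed
STOP_ZONE_M = 0.5   # something detected inside this range: halt entirely

def safety_speed_factor(scan_ranges_m):
    """Map the closest LIDAR return to a speed multiplier (0.0 to 1.0)."""
    nearest = min(scan_ranges_m)
    if nearest <= STOP_ZONE_M:
        return 0.0                      # complete shutdown
    if nearest <= SLOW_ZONE_M:
        # scale speed linearly between the stop and slow boundaries
        return (nearest - STOP_ZONE_M) / (SLOW_ZONE_M - STOP_ZONE_M)
    return 1.0                          # full speed, nothing nearby

# Example: a person standing 1.25 m away halves the robot's speed
print(safety_speed_factor([6.2, 4.8, 1.25, 3.9]))  # 0.5
```

In a real cell, the slowdown curve, zone sizes, and the job of telling a human from a pallet are all tuned to the specific robot and the applicable safety standard.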
LIDAR also opened the door to more efficient picking and palletizing robots in the warehouse. Because these robots can see, they no longer need to be confined to a track to ensure they do not run over anyone. They are far more efficient: if something is in their path, they can simply maneuver around it. They can also work all night in the dark, saving on the electricity bill.
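That "maneuver around it" behavior can be sketched in a few lines. This hypothetical example assumes a 360-degree scan with one range reading per degree: if the forward arc is blocked, the robot steers toward the most open bearing.

```python
# Hypothetical obstacle-avoidance sketch for a warehouse robot.
# Scan format (one range reading in meters per degree) is an assumption.

def pick_heading_deg(scan_ranges_m, clearance_m=1.0):
    """Return 0 (straight ahead) if the path is clear, else the most open bearing."""
    ahead = scan_ranges_m[-10:] + scan_ranges_m[:10]   # +/- 10 degrees
    if min(ahead) > clearance_m:
        return 0
    return max(range(len(scan_ranges_m)), key=lambda d: scan_ranges_m[d])

# Mock scan: an open aisle with a pallet blocking the forward arc
scan = [10.0] * 360
for deg in list(range(0, 10)) + list(range(350, 360)):
    scan[deg] = 0.6                     # pallet returns, dead ahead

print(pick_heading_deg(scan))  # 10: swing toward the nearest open bearing
```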
“A robot must obey orders given it by human beings except where such orders would conflict with the First Law.”
Isaac Asimov’s Second Law of Robotics
Once we had robots that could see, we could start programming them to perform riskier, more labor-intensive, or more time-consuming assignments. In the event of a natural disaster or accident, UAVs and unmanned ground vehicle (UGV) robots equipped with LIDAR can be programmed to enter and map an entire structure, so first responders know exactly what they are up against before stepping into a building. These robots can also be deployed at the site of dangerous events, like a chemical explosion or nuclear meltdown, to survey damage when the air quality is considered unsafe for humans.
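The core of that mapping step is converting each range-and-bearing return into a point in the building's coordinate frame. Below is a minimal 2D sketch, assuming the robot knows its own pose; the grid size and scan format are assumptions for illustration.

```python
import math

# Minimal 2D occupancy-mapping sketch: turn LIDAR returns into occupied
# grid cells. Grid resolution and scan format are illustrative assumptions.

CELL_M = 0.25  # each grid cell covers 25 cm

def mark_hits(occupied, pose_xy_theta, scan):
    """Convert (bearing_rad, range_m) returns into occupied grid cells."""
    x, y, theta = pose_xy_theta
    for bearing, dist in scan:
        hx = x + dist * math.cos(theta + bearing)   # hit point, world frame
        hy = y + dist * math.sin(theta + bearing)
        occupied.add((int(hx // CELL_M), int(hy // CELL_M)))

occupied_cells = set()
# Robot at the origin facing +x; two returns: a wall 4 m ahead, one 3 m left
mark_hits(occupied_cells, (0.0, 0.0, 0.0),
          [(0.0, 4.0), (math.pi / 2, 3.0)])
print(sorted(occupied_cells))  # [(0, 12), (16, 0)]: wall cells in grid units
```

Accumulate enough scans from enough poses and the occupied cells trace out the floor plan first responders need.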
Recently, the startup Doxel created a system that uses LIDAR-equipped UAVs to aid in construction-site project management. As the robots map their way through the site, they can tell whether the project is on schedule based on what they see as complete. They can also catch potentially expensive mistakes, such as a waste pipe's slope being off by an inch or two. The contractor immediately receives a notification about the problem, so the error can be fixed before the floors are poured.
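A check like that reduces to comparing a measured slope against a specification. Here is a hypothetical version using the common quarter-inch-per-foot rule of thumb for drain pipe; the tolerance and the numbers in the example are made up for illustration.

```python
# Hypothetical slope check in the spirit of the Doxel example: compare a
# pipe's as-built slope (measured from the point cloud) against spec.

REQUIRED_SLOPE = 0.25 / 12   # 1/4 inch of drop per foot of run
TOLERANCE = 0.005            # allowable deviation, as a ratio

def slope_ok(drop_in, run_ft):
    """True if the measured slope is within tolerance of the requirement."""
    measured = (drop_in / 12) / run_ft   # convert drop to feet, then ratio
    return abs(measured - REQUIRED_SLOPE) <= TOLERANCE

# A 20-foot run that drops only 3 inches instead of the required 5
print(slope_ok(3.0, 20.0))   # False: flag it before the slab is poured
print(slope_ok(5.0, 20.0))   # True: on spec
```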
“A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Isaac Asimov’s Third Law of Robotics
The potential of LIDAR-equipped robots is enormous, but currently available technology limits us. If we attach more sensors and cameras to a UAV, it becomes heavier and consumes more power. If it consumes more power, it needs a bigger battery, which makes it heavier still. This vicious circle usually results in insufficient flight time.
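A back-of-the-envelope calculation shows why the circle is vicious: endurance is roughly usable battery energy divided by hover power, and added payload raises the power draw faster than a bigger battery adds energy. All of the numbers below are illustrative assumptions, not specs for any real drone.

```python
# Rough endurance model: minutes of flight from battery energy (watt-hours)
# and average hover power draw (watts). Numbers are illustrative only.

def flight_minutes(battery_wh, hover_w):
    """Usable battery energy divided by hover power, in minutes."""
    return battery_wh / hover_w * 60

base = flight_minutes(battery_wh=20, hover_w=60)      # bare drone: ~20 min
# Bolt on sensors plus a bigger battery: both add mass, so hover power
# climbs faster than the extra energy can pay for.
loaded = flight_minutes(battery_wh=30, hover_w=110)   # loaded drone: ~16 min
print(round(base), round(loaded))  # 20 16: more battery, less airtime
```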
LIDAR also has trouble with atmospheric conditions such as smoke, rain, and fog, making it less than ideal for monitoring things like forest fires and hurricane damage in real time. Weather is one reason Elon Musk says he is not relying on LIDAR for Tesla's self-driving cars. The high cost is another. Many analysts say Musk is wrong, but he did manage to launch the world's most powerful rocket into space, so maybe his opinion is worth noting.
LIDAR's potential is evident. However, the robots will likely fall short unless the technology develops to allow longer flight times when they matter most and to prevent the wasted effort that comes from lousy data collected in bad weather.
Once robo-bees are equipped with LIDAR, does that solve anything, or is it just a solution looking for a problem? Many believe our time and money are better spent saving the real bees we have left. However, research often leads to unexpected places. Maybe these robots will never replace biological bees, but imagine what could be done with a swarm of these tiny autonomous UAVs.
Forget the UAV entirely and just imagine the possibilities for micro-LIDAR itself. It could be used in medical devices, wearables, smartphones, and tablets. What could you do faster, safer, and more cost-effectively if you put LIDAR in the hands of every person?