To appreciate why MIT's introduction of RFID tags to robotics matters, it is worth first considering the limitations of the alternative: machine vision (MV). We'll start with the restrictions of MV, then move on to how TurboTrack answers the issues that plague traditional approaches to robot guidance, particularly in manufacturing.
Image courtesy of Bigstock.
The Limitations of Machine Vision
While, as mentioned above, MV is considered the industry best practice for robot guidance, it is still let down by pitfalls in object recognition, discussed below.
First, though, consider the seeming simplicity of programming a robotic arm to pick up a blue ball. If the software has been supplied with well-lit photographs of that ball, the robot's image recognition capabilities should easily enable a successful operation, provided the target object is sitting on a clear, plain surface.
But now picture the target ball in a ball pit, surrounded by identical blue twins, and consider the challenges at play. The MV system must now recognise the target ball while it is camouflaged, overshadowed, partially obstructed, and mildly misshapen (due to slight compression from the balls immediately surrounding it), among other confounding factors.
Together, these confounding factors relate to three particularly large obstacles that hinder MV: visual background interference, lighting issues, and general distortion. Clearly, there are currently too many problematic variables in image recognition to build a foolproof MV system, especially for robot guidance in manufacturing. One particular pitfall (discussed in the next section) is that, by relying on a photo as its frame of reference, MV cannot recognise a target object that is even slightly obstructed. In other words, MV requires a perfect line of sight: a potentially fatal flaw on a cluttered assembly line.
Image courtesy of Bigstock.
Perhaps the ideal machine would have its movements determined not just by what its camera 'sees', but also by what its sensors 'hear' nearby. On that second point, MIT have answered the question that arises here: what if the target object, no matter how camouflaged, overshadowed, obstructed or misshapen, could still be localised? The answer comes in the form of the aforementioned RFID reader-and-tag tracking system: TurboTrack.
Where MIT’s TurboTrack Comes In
Having recently created TurboTrack, a tracking system based on paper-like RFID tags, MIT have this year finished testing it, already attracting the interest of other technology organisations.
One major USP of TurboTrack is that it isn't limited by the above-mentioned 'fatal flaw' in machine vision, wherein even a slight obstruction between an MV camera and the target object prevents recognition. TurboTrack sidesteps this issue because, rather than relying on cameras and image recognition systems for robot guidance, MIT's solution mostly boils down to the decades-old interaction between an RFID reader and its tag (a process known as interrogation).
In one test of TurboTrack, the MIT researchers attached an RFID tag to a cap and another to a bottle. This enabled one robotic arm to locate the cap and place it onto the bottle, which was held by another robotic arm. Image courtesy of MIT News.
Such reliance on radio frequency not only circumvents MV's need for a perfect line of sight; thanks to the long wavelengths involved in RF communication, the signals can even penetrate solid obstacles, including walls, when they stand in the way. Returning to the earlier robot guidance discussion, this means robotic arms will be able to track their target objects not merely in spite of a cluttered assembly line, but regardless of it.
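To put the wavelength point in perspective, a quick back-of-the-envelope comparison (an illustration only; the specific 915 MHz frequency is an assumption based on the common UHF RFID band, not a figure from MIT's paper) shows just how much longer RFID waves are than the visible light an MV camera depends on:

```python
# Wavelength comparison: UHF RFID vs. visible light.
# Longer wavelengths can diffract around and penetrate obstacles
# that completely block line-of-sight systems like machine vision.
C = 299_792_458  # speed of light in m/s


def wavelength(freq_hz: float) -> float:
    """Return the wavelength in metres for a given frequency in Hz."""
    return C / freq_hz


uhf_rfid = wavelength(915e6)   # UHF RFID band (US): roughly 0.33 m
visible = wavelength(5.45e14)  # green light: roughly 550 nm

print(f"UHF RFID wavelength: {uhf_rfid:.2f} m")
print(f"Visible light wavelength: {visible * 1e9:.0f} nm")
```

The RFID wavelength is around 600,000 times longer, which is why RF signals pass through clutter that renders a camera useless.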
That said, RF interrogation alone is not precise enough to fully pinpoint the location of a target object. This is where the accuracy of MIT's breakthrough comes into its own: the otherwise long-established technology needed a boost, which is why the researchers introduced what they deemed a 'helper' component to the reader-and-tag system.
This added component sends out a wideband, multiple-frequency signal (based on the wireless communication scheme known as orthogonal frequency-division multiplexing, or OFDM), essentially 'widening the net'. The signal is broad enough that the wireless communication no longer comes down to the reader-tag RF exchange alone; instead, the helper's dispersed waves reach the surrounding objects, and all of their rebounded signals are then used to calculate 'time of flight' (ToF), particularly in relation to the binary-coded signals from the RFID tag itself.
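The core ToF idea can be sketched in a few lines. This is a minimal illustration of the principle, assuming we can timestamp the signal's departure and return; it is not MIT's implementation:

```python
# Minimal sketch of a time-of-flight (ToF) distance estimate.
# A signal travels reader -> tag -> reader, so the measured
# round-trip time is halved before converting to distance.
C = 299_792_458  # speed of light in m/s


def tof_distance(t_transmit: float, t_receive: float) -> float:
    """Estimate one-way distance (m) from a round-trip ToF measurement (s)."""
    round_trip = t_receive - t_transmit
    return (round_trip / 2) * C


# A tag ~3 m away produces a round trip of roughly 20 nanoseconds:
d = tof_distance(0.0, 20e-9)
print(f"estimated distance: {d:.3f} m")  # ≈ 2.998 m
```

The nanosecond scale of these timings is exactly why a single narrowband reader-tag exchange struggles with precision, and why the helper's wideband signal (and the algorithm discussed next) is needed.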
Image courtesy of Bigstock.
And while such a ToF measurement (calculating an object's position from the time a signal takes to travel from transmitter to receiver) gave the system a general idea of the target's location, MIT still needed higher precision. To achieve it, they introduced a 'space-time super-resolution algorithm' that factors in all of the received data. The result is that TurboTrack is accurate to the sub-centimetre level, and the machines that use the system can track their target object within 7.5 milliseconds.
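To see how combining multiple ToF-derived distances pins down a position, consider classic multilateration. The sketch below illustrates the principle with a simple 2-D least-squares-style solution from three known 'anchor' positions; it is a textbook technique, not MIT's space-time super-resolution algorithm, and the anchor coordinates are hypothetical:

```python
# Hedged sketch: locating a tag in 2-D from distances to three anchors
# at known positions (basic multilateration, not MIT's algorithm).
import math


def multilaterate(anchors, distances):
    """Solve for (x, y) from three anchors via linearised range equations.

    Subtracting the first anchor's range equation from the other two
    yields a 2x2 linear system, solved here with Cramer's rule:
      2*(xi - x0)*x + 2*(yi - y0)*y
        = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = distances
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)


anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
tag = (1.0, 2.0)
dists = [math.dist(a, tag) for a in anchors]
print(multilaterate(anchors, dists))  # ≈ (1.0, 2.0)
```

In practice, noisy timings make each distance estimate fuzzy; MIT's super-resolution algorithm refines many such noisy measurements over space and time to reach sub-centimetre accuracy.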
Fundamentally, to quote University of Texas computer science professor Lili Qiu, the researchers have introduced “a framework for RF localisation that ... has [the] potential to support [MIT’s] target applications, such as robotic assembly and nanodrones”.
The said paper-like RFID tag attached to a nanodrone. Image courtesy of MIT Media Lab.
Indeed, given its level of accuracy and its low latency, TurboTrack is highly applicable to moving targets as well. So the technology is not just set to revolutionise manufacturing, as drawn on earlier in the robotic arm discussion: the RFID solution even has drone-based search-and-rescue potential, because the technology is precise and responsive enough that quadcopters can detect not only an obstructed object or person, but each other's flight paths too.
With this in mind, MIT's TurboTrack is a tracking system so accurate that both robot guidance in manufacturing and drone localisation may no longer be marred by the obstacles that have long plagued machine vision systems.