Invisible machine-readable labels that identify and track objects

If you download music online, you often get metadata embedded in the digital file that might tell you the song's name, genre, featured artists, composer, and producer. Similarly, if you open a digital photo, you can find metadata that may include the time, date, and location where the photo was taken. That led Mustafa Doga Dogan to wonder whether engineers could do something similar for physical objects. "That way," he mused, "we could inform ourselves faster and more reliably while strolling through a store, museum, or library."

The idea, at first, was a bit abstract for Dogan, a fourth-year doctoral student in MIT's Department of Electrical Engineering and Computer Science. But his thinking solidified in late 2020 when he heard about a new smartphone model with a camera that utilizes the infrared (IR) range of the electromagnetic spectrum, which the naked eye cannot perceive. IR light also has a unique ability to see through certain materials that are opaque to visible light. It occurred to Dogan that this feature, in particular, might be useful.

The concept he has since developed, while working with colleagues at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and a research scientist at Facebook, is called InfraredTags. Unlike the standard barcodes affixed to products, which can be removed, detached, or otherwise become unreadable over time, these labels are unobtrusive (because they are invisible) and far more durable, given that they are embedded within the interior of objects fabricated on standard 3D printers.

Last year, Dogan spent a few months trying to find a suitable variety of plastic that IR light can pass through. It had to come in the form of a filament spool designed for 3D printers. After an extensive search, he came across customized plastic filaments made by a small German company that looked promising. He then used a spectrophotometer in an MIT materials science lab to analyze a sample, finding that it was opaque to visible light but transparent or translucent to IR light: exactly the properties he was seeking.

The next step was to experiment with label-making techniques on the printer. One option was to produce the code by carving tiny air gaps (proxies for zeros and ones) into a layer of plastic. Another option, assuming an available printer can handle it, is to use two types of plastic: one that transmits IR light and one, on which the code is inscribed, that is opaque. The dual-material approach is preferred where possible because it provides clearer contrast and can therefore be read more easily with an IR camera.
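To make the air-gap variant concrete, here is a minimal Python sketch of how a binary code pattern could be mapped to subtractive geometry for a print layer. This is not the team's actual fabrication pipeline; the cell size, gap depth, and the OpenSCAD layer() module being subtracted from are all illustrative assumptions.

import numpy as np

def gaps_to_openscad(bits, cell=1.0, depth=0.5):
    # Emit OpenSCAD code that carves an air gap (a small cube) for every
    # 1-bit in the pattern; 0-bits stay solid plastic.
    # bits: 2D array of 0s and 1s; cell and depth are in millimeters.
    cubes = [
        f"translate([{c * cell}, {r * cell}, 0]) cube([{cell}, {cell}, {depth}]);"
        for (r, c), bit in np.ndenumerate(bits) if bit
    ]
    # Assumes a user-defined layer() module describing the solid layer.
    return "difference() { layer(); union() { " + " ".join(cubes) + " } }"

# A toy 4x4 pattern; a real tag would come from a QR or ArUco generator.
pattern = np.array([[1, 0, 0, 1],
                    [0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [1, 0, 0, 1]])
print(gaps_to_openscad(pattern))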

The labels themselves could consist of familiar barcodes, which present information in a linear, one-dimensional format. Two-dimensional options, such as square QR codes (commonly used, for instance, on return labels) and so-called ArUco (fiducial) markers, can potentially pack more information into the same area. The MIT team has developed a software "user interface" that specifies exactly what the tag should look like and where it should appear within a particular object. Several tags can, in fact, be placed in the same object, making the information easier to access in case the view from certain angles is obstructed.
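For readers curious what such a marker looks like in code, OpenCV's aruco module (a common tool for working with these markers, though not necessarily the one the team used) can generate the bitmap that a fabrication pipeline would then embed:

import cv2

# ArUco markers come from fixed dictionaries; DICT_4X4_50 holds fifty
# 4x4-bit patterns, each identified by an integer ID.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

# Render marker ID 7 as a 200x200-pixel black-and-white bitmap.
# (On OpenCV versions before 4.7, use cv2.aruco.drawMarker instead.)
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
cv2.imwrite("aruco_id7.png", marker)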

“InfraredTags are a really smart, useful, and accessible approach to embedding information into objects,” comments Fraser Anderson, a principal investigator at the Autodesk Technology Center in Toronto, Ontario. “I can easily imagine a future where you could point a standard camera at any object and it would give you information about that object – where it was made, the materials used, or repair instructions – and you wouldn’t even have to look for a barcode.”

Dogan and his collaborators have created several prototypes along these lines, including cups with barcodes engraved inside the container walls, beneath a 1-millimeter plastic shell, that are readable by IR cameras. They have also made a prototype Wi-Fi router with invisible tags that reveal the network name or password, depending on the angle from which it is viewed. And they have built an inexpensive, wheel-shaped video game controller that is completely passive, with no electronic components at all; it simply has an ArUco marker inside. A player turns the wheel, clockwise or counterclockwise, and an inexpensive ($20) IR camera can then determine its orientation in space.
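As a rough illustration of how such a passive controller could work, the sketch below reads frames from the IR camera (assumed here to appear as an ordinary video device at index 0) and recovers the wheel's rotation from the detected marker corners, using OpenCV 4.7+'s ArUco detector. The details are assumptions, not the team's published code.

import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # the IR camera, exposed as a normal video device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        # corners[0][0] holds the marker's four corners, clockwise from
        # top-left; the angle of the top edge gives the wheel's rotation.
        tl, tr = corners[0][0][0], corners[0][0][1]
        angle = np.degrees(np.arctan2(tr[1] - tl[1], tr[0] - tl[0]))
        print(f"wheel angle: {angle:.1f} degrees")
cap.release()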

In the future, if tags like these become widespread, people could use their cellphones to turn lights on and off, control the volume of a speaker, or regulate the temperature of a thermostat. Dogan and his colleagues are investigating the possibility of adding IR cameras to augmented reality headsets. He imagines walking through a supermarket someday, wearing such a headset and instantly getting information about the products around him: how many calories are in a single serving, and what recipes could it be used in?

Kaan Akşit, associate professor of computer science at University College London, sees great potential for this technology. “The labeling and tagging industry is a big part of our daily lives,” says Akşit. “Everything from the goods we buy in grocery stores to the parts that need replacing in our devices (e.g., batteries, circuits, computers, auto parts) needs to be identified and tracked correctly. Doga’s work addresses these issues by providing an invisible tagging system that is largely protected from the vagaries of time.” And as futuristic notions like the metaverse become part of our reality, adds Akşit, “Doga’s tagging and tracking mechanism can help us bring digital copies of items with us as we explore three-dimensional virtual environments.”

The paper, “InfraredTags: Embedding Invisible AR Markers and Barcodes into Objects Using Low-Cost Infrared-Based 3D Printing and Imaging Tools,” is being presented this spring at the ACM CHI Conference on Human Factors in Computing Systems in New Orleans and will be published in the conference proceedings.

Dogan’s co-authors on the paper are Ahmad Taka, Michael Lu, Yunyi Zhu, Akshat Kumar, and Stefanie Mueller of MIT CSAIL; and Aakar Gupta of Facebook Reality Labs in Redmond, Washington.

This work was supported by an Alfred P. Sloan Foundation Fellowship. Dynamsoft Corp. provided a free software license which facilitated this research.