AI2-THOR Celebrates Five Years by Gaining an Arm Called ManipulaTHOR

By CIOReview | Thursday, May 6, 2021

The Allen Institute for AI introduces object manipulation in a robotics testing scenario.

FREMONT, CA: The Allen Institute for AI (AI2) announces the 3.0 release of its embodied AI framework, AI2-THOR, which adds active object manipulation to its testing capabilities. ManipulaTHOR, a virtual agent equipped with a highly articulated robot arm with swiveling joints, brings a more human-like approach to interacting with objects.

AI2-THOR is the first testing framework to study the problem of object manipulation across more than 100 visually rich, physics-enabled rooms. By enabling the training and evaluation of generalized capabilities in manipulation models, ManipulaTHOR allows for much faster training in complex environments than real-world methods, while also being far safer and more cost-effective.
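In practice, researchers drive the simulator through AI2-THOR's Python API. The lines below are a minimal sketch, assuming the ai2thor Python package and its arm-agent mode; the scene name, actions, and parameters shown are illustrative and may differ across releases.

# Minimal sketch (assumed API): running an arm-equipped agent in AI2-THOR.
# Action and parameter names may vary by ai2thor release.
from ai2thor.controller import Controller

# Start a simulated kitchen scene with the arm-based agent.
controller = Controller(agentMode="arm", scene="FloorPlan1", gridSize=0.25)

# Step the agent forward, then move the arm's wrist to a point in front of it.
controller.step(action="MoveAhead")
event = controller.step(action="MoveArm", position=dict(x=0.0, y=0.5, z=0.4))

# Attempt to pick up whatever object lies within the gripper's reach,
# then check whether the action succeeded in the returned metadata.
event = controller.step(action="PickupObject")
print(event.metadata["lastActionSuccess"])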

Imagine a robot being able to navigate a kitchen, open a refrigerator, and pull out a can of soda. This is one of the biggest and yet often overlooked challenges in robotics, and AI2-THOR is the first to establish a benchmark for the task of moving objects to several locations in virtual rooms, enabling reproducibility and the measurement of progress.

Despite being an established research area in robotics, the visual reasoning aspect of object manipulation remains one of the biggest challenges researchers face: robots struggle to correctly perceive, navigate, act, and interact with objects in the world. AI2-THOR addresses this problem with complex simulated testing environments that researchers can use to train robots for eventual tasks in the real world.

With embodied AI through AI2-THOR, the landscape has changed for the better. AI2-THOR allows researchers to efficiently devise solutions to the object manipulation problem and to other challenges associated with robotics testing.

AI2-THOR has enabled research on a range of tasks, such as navigation, instruction following, multi-agent collaboration, and reasoning about whether an object can be opened. The emergence of AI2-THOR lets researchers and scientists push the limits of embodied AI. Alongside the 3.0 release, the team is hosting the RoboTHOR Challenge 2021 in conjunction with the Embodied AI Workshop at the Conference on Computer Vision and Pattern Recognition (CVPR). AI2's challenges cover RoboTHOR object navigation, ALFRED, and Room Rearrangement.