More Efficient In-Context Learning with GLaM
Simulating matter on the quantum scale with AI
General and Scalable Parallelization for Neural Networks
Language Modelling at Scale: Gopher, Ethical considerations, and Retrieval
Engineers teach AI to navigate ocean with minimal energy
These tiny liquid robots never run out of juice as long as they have food
Improving Vision Transformer Efficiency and Accuracy by Learning to Tokenize
Posted by Michael Ryoo, Research Scientist, Robotics at Google, and Anurag Arnab, Research Scientist, Google Research. Transformer models consistently obtain state-of-the-art results in computer vision tasks, including object detection and video classification. In contrast to standard convolutional approaches that process…
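The title's "learning to tokenize" refers to replacing a fixed patch grid with a small set of tokens pooled from the feature map via learned spatial attention. A minimal NumPy sketch of that idea (the function name, shapes, and the single linear projection standing in for the learned attention module are illustrative assumptions, not the post's exact implementation):

```python
import numpy as np

def token_learner(feature_map, attn_weights):
    """Pool an H x W x C feature map into S learned tokens.

    feature_map: (H, W, C) array of patch/pixel features.
    attn_weights: (C, S) projection producing S spatial attention maps
                  (a stand-in for a small learned conv/MLP).
    Returns: (S, C) array of tokens, each a weighted spatial average.
    """
    H, W, C = feature_map.shape
    flat = feature_map.reshape(H * W, C)            # (HW, C)
    logits = flat @ attn_weights                    # (HW, S) attention logits
    attn = np.exp(logits - logits.max(axis=0, keepdims=True))
    attn /= attn.sum(axis=0, keepdims=True)         # softmax over spatial positions
    tokens = attn.T @ flat                          # (S, C) weighted pooling
    return tokens

# Example: reduce a 14x14 grid of 64-dim features to 8 tokens.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((14, 14, 64))
w = rng.standard_normal((64, 8)) * 0.1
print(token_learner(fmap, w).shape)  # (8, 64)
```

Because downstream transformer layers then attend over only S tokens instead of the full H x W grid, the cost of self-attention drops accordingly, which is the efficiency gain the headline alludes to.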
Google at NeurIPS 2021
