A Scalable Approach for Partially Local Federated Learning
Training Machine Learning Models More Efficiently with Dataset Distillation
Interpretable Deep Learning for Time Series Forecasting
A Fast WordPiece Tokenization System
More Efficient In-Context Learning with GLaM
General and Scalable Parallelization for Neural Networks
Improving Vision Transformer Efficiency and Accuracy by Learning to Tokenize
Posted by Michael Ryoo, Research Scientist, Robotics at Google, and Anurag Arnab, Research Scientist, Google Research

Transformer models consistently obtain state-of-the-art results in computer vision tasks, including object detection and video classification. In contrast to standard convolutional approaches that process…
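For context, the fixed-grid tokenization used by standard Vision Transformers (the baseline that learned tokenization aims to improve on) can be sketched as follows. This is an illustrative NumPy sketch, not code from the post; `patch_tokenize` is a hypothetical helper name.

```python
import numpy as np

def patch_tokenize(image, patch_size):
    """Split an image of shape (H, W, C) into a sequence of flattened patch tokens.

    This mirrors the rigid, uniform-grid tokenization of a standard ViT:
    every patch becomes one token, regardless of image content.
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    # Reshape into a grid of non-overlapping patches.
    patches = image.reshape(h // patch_size, patch_size,
                            w // patch_size, patch_size, c)
    # Reorder so each patch is contiguous: (H/p, W/p, p, p, c).
    patches = patches.transpose(0, 2, 1, 3, 4)
    # Flatten each patch into a single token vector.
    return patches.reshape(-1, patch_size * patch_size * c)

# A 224x224 RGB image with 16x16 patches yields 14*14 = 196 tokens of size 768.
tokens = patch_tokenize(np.zeros((224, 224, 3)), 16)
print(tokens.shape)  # (196, 768)
```

The point of contrast: this grid produces a fixed number of tokens independent of image content, whereas a learned tokenizer can allocate a much smaller, adaptive set of tokens.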