Ritvik Rastogi

Oct 17, 2024

A new family of convolutional networks that achieves faster training and better parameter efficiency than previous models through neural architecture search and scaling; progressive learning further improves accuracy on various datasets while training up to 11x faster.
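
Progressive learning grows the image size together with the regularization strength over the course of training. A minimal sketch of that idea in Python follows; the stage count, image-size range, and dropout range here are illustrative assumptions, not the paper's exact schedule.

```python
# Hypothetical progressive-learning schedule: image size and regularization
# (dropout, as one example) are interpolated linearly from "easy" to "hard" stages.
num_stages = 4
min_size, max_size = 128, 300          # assumed image-size range
min_dropout, max_dropout = 0.1, 0.3    # assumed regularization range

for stage in range(num_stages):
    frac = stage / (num_stages - 1)
    image_size = int(min_size + frac * (max_size - min_size))
    dropout = min_dropout + frac * (max_dropout - min_dropout)
    # train_for_a_few_epochs(model, image_size=image_size, dropout=dropout)  # placeholder
    print(f"stage {stage}: image_size={image_size}, dropout={dropout:.2f}")
```
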
Features a universally efficient architecture design, including the Universal Inverted Bottleneck (UIB) search block, a Mobile MQA attention block, and an optimized neural architecture search recipe, enabling high accuracy and efficiency across mobile devices and accelerators.
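
The UIB block is, roughly, an inverted bottleneck with two optional depthwise convolutions: one before the pointwise expansion and one inside the expanded space. Below is a minimal PyTorch sketch under that assumption; the class name, defaults, and layer choices are illustrative, not the exact searched configuration.

```python
import torch
import torch.nn as nn

class UIBSketch(nn.Module):
    """Illustrative Universal Inverted Bottleneck: an inverted bottleneck with
    two optional depthwise convs (before expansion and within the expanded space)."""
    def __init__(self, in_ch, out_ch, expand=4, start_dw=True, mid_dw=True, k=3):
        super().__init__()
        mid_ch = in_ch * expand
        layers = []
        if start_dw:  # optional depthwise conv before the expansion
            layers += [nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False),
                       nn.BatchNorm2d(in_ch)]
        layers += [nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU()]
        if mid_dw:    # optional depthwise conv inside the expanded representation
            layers += [nn.Conv2d(mid_ch, mid_ch, k, padding=k // 2, groups=mid_ch, bias=False),
                       nn.BatchNorm2d(mid_ch), nn.ReLU()]
        layers += [nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)
        self.residual = in_ch == out_ch

    def forward(self, x):
        out = self.block(x)
        return x + out if self.residual else out

# Toggling start_dw / mid_dw yields the different block variants the search can pick from.
y = UIBSketch(32, 32)(torch.randn(1, 32, 56, 56))
```
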
Incorporates a fully convolutional masked autoencoder (FCMAE) framework and a Global Response Normalization (GRN) layer, boosting performance across multiple benchmarks.
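
Of these, the GRN layer is simple to state: it computes a per-channel global L2 norm, normalizes that norm across channels, and uses the result to recalibrate the features, with a learnable affine and a residual connection. A minimal channels-last PyTorch sketch of that formulation:

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization for channels-last tensors of shape (N, H, W, C)."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)      # per-channel global L2 norm
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)   # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x           # learnable affine plus residual
```
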
A pure ConvNet model, evolved from a standard ResNet design, that competes with Transformers in accuracy and scalability.
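
The core ConvNeXt block applies a large-kernel depthwise convolution, then LayerNorm and an inverted-bottleneck MLP, around a residual connection. A simplified PyTorch sketch of that block (layer scale and stochastic depth omitted for brevity):

```python
import torch
import torch.nn as nn

class ConvNeXtBlockSketch(nn.Module):
    """Simplified ConvNeXt block: 7x7 depthwise conv -> LayerNorm -> 1x1 expand -> GELU -> 1x1 project."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # pointwise convs expressed as Linear in channels-last
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x):                        # x: (N, C, H, W)
        shortcut = x
        x = self.dwconv(x).permute(0, 2, 3, 1)   # to channels-last for norm and MLP
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        return shortcut + x.permute(0, 3, 1, 2)  # back to channels-first, residual add
```
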
An improved class of Normalizer-Free ResNets that match the accuracy of batch-normalized networks, train faster, and introduce an adaptive gradient clipping technique to overcome the instabilities associated with deep, unnormalized ResNets.
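
Adaptive gradient clipping scales a gradient down whenever its norm is large relative to the norm of the corresponding parameter, unit by unit. A minimal sketch of that rule; the clip factor and epsilon values here are illustrative defaults, not prescribed settings.

```python
import torch

def unitwise_norm(t):
    # Scalar norm for biases/scalars, per-output-unit norm for weight matrices/filters.
    if t.ndim <= 1:
        return t.norm()
    return t.norm(dim=tuple(range(1, t.ndim)), keepdim=True)

def adaptive_gradient_clip(parameters, clip_factor=0.01, eps=1e-3):
    """Clip each gradient so its unit-wise norm is at most clip_factor * parameter norm."""
    for p in parameters:
        if p.grad is None:
            continue
        p_norm = unitwise_norm(p.detach()).clamp_(min=eps)
        g_norm = unitwise_norm(p.grad.detach())
        max_norm = p_norm * clip_factor
        scale = max_norm / g_norm.clamp(min=1e-6)
        p.grad.detach().mul_(torch.where(g_norm > max_norm, scale, torch.ones_like(scale)))

# Usage: call between loss.backward() and optimizer.step(), e.g.
# adaptive_gradient_clip(model.parameters())
```
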
Covers: LeNet, AlexNet, VGG, Inception Net, Inception Net v2 / Inception Net v3, ResNet, Inception Net v4 / Inception ResNet, DenseNet, Xception, ResNeXt, MobileNet V1, MobileNet V2, MobileNet V3, EfficientNet.