Pinned
I truly appreciate your kind words. It's an honor to be acknowledged by the author, and I'm really…
Jan 31
Pinned
Thanks for the appreciation. It's surreal for me to be acknowledged by the author themselves.
Feb 1, 2024
Papers Explained 407: Should We Still Pretrain Encoders with Masked Language Modeling?
While encoder pretraining has traditionally relied on Masked Language Modeling (MLM), recent evidence suggests that decoder models…
3d ago
Papers Explained 406: Answer Matching
This paper argues that multiple-choice benchmarks, traditionally used for evaluating language models, suffer from a critical flaw: they…
4d ago
Papers Explained 405: Universal Tokenizer
Pretraining massively multilingual Large Language Models (LLMs) for many languages at once is challenging due to limited model capacity…
5d ago
Papers Explained 404: Pangea
Pangea is a multilingual multimodal LLM trained on PangeaIns, a diverse 6M instruction dataset spanning 39 languages. PangeaIns features…
6d ago
Papers Explained 403: Crosslingual Reasoning through Test-Time Scaling
This work investigates how much test-time compute can improve the multilingual reasoning abilities of English-centric RLMs. Research questions…
Jul 7
Papers Explained 402: MVTamperBench
MVTamperBench is a benchmark that systematically evaluates MLLM robustness against five prevalent tampering techniques: rotation, masking…
Jul 4
Papers Explained 401: Prometheus-Vision
Inspired by the approach of evaluating LMs with LMs, this work proposes to evaluate VLMs with VLMs. For this purpose, a new feedback…
Jul 3