Pinned (Jan 31)
I truly appreciate your kind words. It’s an honor to be acknowledged by the author, and I’m really…
Pinned (Feb 1, 2024)
Thanks for the appreciation. It’s surreal for me to be acknowledged by the author himself.
Papers Explained 394: OpenThoughts (12h ago)
The goal of the OpenThoughts project is to create open-source datasets for training reasoning models. The OpenThoughts2-1M dataset led to…
Papers Explained 393: Gemini 2.5 (1d ago)
The Gemini 2.X model family, including Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, and Gemini 2.0 Flash-Lite, represents Google’s…
Papers Explained 392: Hard Negative Mining for Domain-Specific Retrieval (4d ago)
This paper addresses the challenge of retrieving accurate, domain-specific information in enterprise search systems by dynamically…
Papers Explained 391: Adaptive Reasoning Model (5d ago)
Adaptive Reasoning Model (ARM) is a reasoning model capable of adaptively selecting appropriate reasoning formats based on the task at…
Papers Explained 390: Perplexity-based Importance Refinement (PIR) (6d ago)
PIR (Perplexity-based Importance Refinement) is a framework that quantitatively evaluates the importance of each reasoning step based on…
Papers Explained 389: short-m@k (Jun 17)
This work challenges the assumption that longer thinking chains yield better reasoning capabilities. It first demonstrates that…
Papers Explained 388: Magistral (Jun 16)
Magistral is Mistral AI’s first reasoning model, designed for domain-specific, transparent, and multilingual reasoning. It comes in two…
Papers Explained 387: Sarvam-Translate (Jun 13)
Sarvam-Translate is trained by fine-tuning Gemma3-4B-IT. It supports 22 Indian languages, including Hindi, Bengali, Marathi, Telugu, Tamil…