Author: AI Pods
-
Unlocking customized AI solutions with Retrieval Augmented Generation (RAG)
Major companies are rapidly adopting Generative AI and Large Language Models (LLMs), enabling their employees to get familiar with the technology & offload some of their mundane tasks, be it drafting emails, summarizing documents, creating presentations or writing Excel formulas. But have you ever wondered how you can tailor these models specifically for your company’s needs?…
-
Embedding: The language of AI World (2/2)
Introduction to Embedding: In the intricate realm of language understanding, each word has its own meaning, context, and nuance. To help AI models grasp the semantic meaning of language, we use a technique called embedding. The essence of Embedding: Embedding is all about breaking down something into its essential features. And at the heart of it…
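As a rough, hands-on companion to this excerpt, here is a minimal sketch of turning sentences into embedding vectors and comparing them with cosine similarity. The sentence-transformers library and the all-MiniLM-L6-v2 model are illustrative assumptions, not choices made in the article.

```python
# Minimal sketch: sentences -> embedding vectors -> similarity scores.
# Assumes the sentence-transformers library (pip install sentence-transformers);
# the model name is an illustrative choice, not one taken from the article.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The cat sat on the mat.",
    "A kitten is resting on a rug.",
    "Quarterly revenue grew by 12%.",
]

# Each sentence becomes a fixed-length vector capturing its essential features.
vectors = model.encode(sentences)

def cosine_similarity(a, b):
    # Higher values mean the two sentences are closer in meaning.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors[0], vectors[1]))  # similar meaning -> high score
print(cosine_similarity(vectors[0], vectors[2]))  # unrelated topic -> low score
```

Sentences with similar meaning end up close together in the vector space, which is exactly the property the article builds on.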
-
Attention: The Magic Behind Large Language Models (Part 2)
(4-5 min read) Introduction & Recap: In the previous article, we explored transformers & the concept of attention, the driving force behind the effectiveness of large language models (LLMs) like ChatGPT, using a real-life analogy. To recap, the attention mechanism helps models understand the context of the input by focusing on specific parts and assigning weight…
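To make the recap concrete, here is a minimal NumPy sketch of scaled dot-product attention: each query scores every key, the scores become weights, and the output is a weighted sum of the values. The shapes and values below are illustrative assumptions, not details from the article.

```python
# Minimal sketch of scaled dot-product attention in plain NumPy.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query scores every key; the scores become weights that say how much
    # attention to pay to each position, and the output is a weighted sum of values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

# Toy example: 4 token positions, model dimension 8 (illustrative only).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

output, weights = attention(Q, K, V)
print(weights.round(2))  # each row sums to 1: attention paid to each position
```

Each row of the weight matrix sums to 1, i.e. it describes how much attention one position pays to every other position.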
-
Attention: The Magic Behind Large Language Models (Part 1)
(4-5 min read) Introduction & why this article: We have all been encountering Large Language Models (LLMs) or Generative AI in recent times. This is the foundational technology upon which the revolutionary ChatGPT, Claude-3, Llama-2/3, Mistral (all text-to-text) & even the recent Sora (a text-to-video model), to name a few, are based. With multiple breakthroughs…
