AI Glossary
50 essential AI & machine learning terms explained clearly, from neural networks to RAG, LLMs, and beyond.
Artificial Intelligence (Core Concepts)
The simulation of human intelligence by machines, enabling computers to learn, reason, and make decisions.
AI Agent (Applications)
An AI system that can perceive its environment, make decisions, use tools, and take autonomous actions to achieve a goal.
Attention Mechanism (Models & Architecture)
A neural network technique that lets a model focus on the most relevant parts of an input when producing an output.
Activation Function (Models & Architecture)
A mathematical function applied inside neural networks that introduces nonlinearity and lets models learn complex patterns.
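As a sketch, here are two of the most common activation functions implemented in plain Python; these are the standard mathematical definitions, not tied to any particular library.

```python
import math

def relu(x: float) -> float:
    # ReLU: passes positive values through unchanged, zeroes out negatives
    return max(0.0, x)

def sigmoid(x: float) -> float:
    # Sigmoid: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(sigmoid(0.0))            # 0.5
```

Without such nonlinear functions between layers, a stack of linear layers would collapse into a single linear transformation and could not model complex patterns.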
AI Alignment (Safety & Alignment)
The effort to make AI systems behave in ways that are helpful, safe, and consistent with human goals and values.
Context Window (Inference & Generation)
The maximum amount of text (measured in tokens) that an AI model can process in a single interaction.
CLIP (Models & Architecture)
A vision-language model that learns shared representations of images and text so they can be compared in the same embedding space.
Computer Vision (Core Concepts)
The field of AI focused on enabling machines to interpret and understand images and video.
Deep Learning (Core Concepts)
A subset of machine learning using multi-layered neural networks to learn complex patterns from large datasets.
Diffusion Model (Models & Architecture)
An AI model that generates images or other media by learning to reverse a gradual noise-adding process.
Fine-tuning (Training)
Training a pre-trained model further on a smaller, task-specific dataset to adapt it for a particular use case.
Function Calling (Applications)
A capability that lets language models request structured tool or API calls instead of only generating plain text.
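A rough sketch of the idea: the application declares a tool, and the model responds with a structured call rather than prose. The exact schema varies by provider, so the shapes below (including the hypothetical `get_weather` tool) are illustrative only.

```python
import json

# A hypothetical tool definition exposed to the model; field names are
# illustrative and vary between providers.
get_weather_tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Instead of plain text, the model emits a structured call like this,
# which the application executes and feeds back as a tool result.
model_output = {"tool": "get_weather", "arguments": {"city": "Oslo"}}
print(json.dumps(model_output))
```

The key point is that the model never runs the function itself; it only produces a machine-readable request that the host application validates and executes.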
Few-Shot Learning (Applications)
A prompting or training approach where a model is shown a small number of examples before handling a new task.
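A minimal few-shot prompt can be built by pairing a handful of labeled examples with the new case. The `Review:`/`Sentiment:` format below is a common convention, not a requirement; any consistent pattern the model can imitate works.

```python
# Two labeled examples, then the unlabeled case the model should complete.
examples = [
    ("The movie was wonderful", "positive"),
    ("I want my money back", "negative"),
]

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += "Review: An instant classic\nSentiment:"

print(prompt)
```

Ending the prompt mid-pattern (after `Sentiment:`) nudges the model to continue the established format with just the label.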
Generative AI (Core Concepts)
AI that can create new content (text, images, audio, video, and code) rather than just classifying or predicting.
Gradient Descent (Training)
An optimization method that updates model parameters in the direction that most reduces prediction error.
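The update rule can be shown on a one-dimensional toy problem: minimizing f(x) = (x - 3)^2, whose derivative is 2(x - 3). The learning rate and iteration count below are arbitrary illustrative choices.

```python
# Minimize f(x) = (x - 3)^2 by repeatedly stepping against the gradient.
def grad(x: float) -> float:
    # Derivative of (x - 3)^2
    return 2.0 * (x - 3.0)

x = 0.0    # starting guess
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)  # step opposite the gradient

print(round(x, 4))  # converges toward the minimum at x = 3
```

Real training does exactly this, but over millions or billions of parameters at once, with the gradient of the loss computed by backpropagation.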
Inference (Inference & Generation)
The process of using a trained AI model to generate predictions, classifications, or responses on new input data.
In-Context Learning (Applications)
The ability of a model to learn patterns from instructions and examples provided inside the current prompt without updating its weights.
Instruction Tuning (Training)
A fine-tuning approach where a model is trained on many instruction-and-response examples to improve its ability to follow user requests.
Large Language Model (Models & Architecture)
A massive AI model trained on text data that can generate, summarize, translate, and reason about language.
Loss Function (Training)
A mathematical measure of how wrong a model's predictions are during training.
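One standard example is mean squared error, shown here in plain Python; the definition is the textbook one, and the sample numbers are arbitrary.

```python
def mse(predictions: list[float], targets: list[float]) -> float:
    # Mean squared error: average of the squared differences
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(mse([2.5, 0.0], [3.0, 0.0]))  # 0.125
print(mse([1.0, 2.0], [1.0, 2.0]))  # 0.0 when predictions are perfect
```

Training amounts to adjusting the model's parameters (typically via gradient descent) so that this number shrinks.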
LoRA (Training)
Low-Rank Adaptation: a parameter-efficient fine-tuning method that updates a small set of low-rank matrices instead of the full model.
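The savings come from replacing a dense d x d weight update with two thin factors B (d x r) and A (r x d), where the rank r is much smaller than d. The dimensions below are illustrative (4096 is a typical hidden size; r = 8 is a commonly used rank), not taken from any specific model.

```python
# Compare trainable-parameter counts for a dense update vs. a LoRA update.
d, r = 4096, 8  # illustrative hidden size and LoRA rank

full_update_params = d * d       # parameters in a dense d x d update
lora_params = d * r + r * d      # parameters in B (d x r) and A (r x d)

print(full_update_params, lora_params)
print(f"reduction: {full_update_params / lora_params:.0f}x")
```

Because only B and A are trained while the base weights stay frozen, a fine-tune touches a tiny fraction of the parameters per adapted layer.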
Machine Learning (Core Concepts)
A subset of AI where systems learn from data to improve performance without being explicitly programmed.
Multimodal AI (Core Concepts)
AI systems that can process and generate multiple types of data, such as text, images, audio, and video, in a unified model.
Multi-Head Attention (Models & Architecture)
A transformer technique that runs multiple attention operations in parallel so the model can capture different kinds of relationships at once.
Model Weights (Models & Architecture)
The learned parameter values in a neural network that determine how input signals are transformed into outputs.
Prompt Engineering (Applications)
The practice of crafting inputs to AI models to elicit better, more accurate, or more useful outputs.
Parameters (Models & Architecture)
The learned numerical values inside a neural network that store what the model has learned from training data.
Prompt Chaining (Applications)
A workflow pattern where multiple prompts are linked together so the output of one step becomes the input to the next.
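A two-step chain can be sketched as follows. The `call_model` function here is a hypothetical stand-in for a real LLM API call; it just echoes its prompt so the example runs without any external service.

```python
# Hypothetical stand-in for a real LLM API call; echoes so the example runs.
def call_model(prompt: str) -> str:
    return f"<model response to: {prompt!r}>"

# Step 1: summarize a document.
summary = call_model("Summarize this report: ...")

# Step 2: the output of step 1 becomes part of the next prompt.
questions = call_model(f"Write three follow-up questions about: {summary}")

print(questions)
```

Splitting a task into chained steps like this often yields better results than one monolithic prompt, and lets each step be inspected or retried independently.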
Pre-trained Model (Training)
A model that has already been trained on broad data and can then be adapted or used for downstream tasks.
QLoRA (Training)
A LoRA-based fine-tuning method that combines low-rank adapters with quantized base models to reduce memory requirements even further.
Quantization (Inference & Generation)
A technique that reduces model size and inference cost by storing weights and activations with lower numerical precision.
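A toy version of 8-bit quantization makes the trade-off concrete: floats in [-1, 1] are mapped to integers in [-127, 127] and back, at the cost of a small, bounded rounding error. Real schemes add per-tensor or per-channel scales, so treat this as a sketch of the principle only.

```python
# Toy symmetric 8-bit quantization over the range [-1, 1].
scale = 127.0

def quantize(x: float) -> int:
    # Store the value as a small integer
    return round(x * scale)

def dequantize(q: int) -> float:
    # Recover an approximation of the original float
    return q / scale

weights = [0.5, -0.25, 0.8137]
restored = [dequantize(quantize(w)) for w in weights]

max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err)  # bounded by half a quantization step (about 0.0039)
```

Each value now fits in one byte instead of four (for float32), which is the source of the memory and bandwidth savings.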
Retrieval-Augmented Generation (Applications)
A technique that enhances LLM outputs by first retrieving relevant documents from an external knowledge base before generating a response.
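The retrieve-then-generate flow can be sketched minimally. The word-overlap scoring and the two toy documents below are stand-ins for a real vector store with embedding search; only the overall shape of the pipeline is the point.

```python
# Minimal retrieve-then-generate sketch with toy documents.
docs = [
    "The Eiffel Tower is in Paris and is 330 metres tall.",
    "Photosynthesis converts sunlight into chemical energy.",
]

def retrieve(query: str, documents: list[str]) -> str:
    # Toy relevance score: count shared lowercase words.
    # Real systems compare embedding vectors instead.
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

query = "How tall is the Eiffel Tower?"
context = retrieve(query, docs)

# The retrieved passage is prepended to the prompt before generation,
# grounding the model's answer in external knowledge.
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

Because the model sees the retrieved passage at generation time, it can answer from up-to-date or private data without being retrained.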
RLHF (Training)
Reinforcement Learning from Human Feedback: a training technique that aligns AI models with human preferences using human ratings.
Self-Attention (Models & Architecture)
A form of attention where each token in a sequence looks at every other token in the same sequence to build context-aware representations.
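The mechanism can be sketched in plain Python on three tiny token vectors. For clarity the learned query/key/value projections are omitted (treated as identity), so this shows only the core scaled dot-product step.

```python
import math

# Three toy token vectors of dimension 2; real models use learned
# query/key/value projections, omitted here for clarity.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x: list[list[float]]) -> list[list[float]]:
    d = len(x[0])
    out = []
    for q in x:  # each token acts as a query...
        # ...scoring its similarity against every token (the keys),
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in x]
        weights = softmax(scores)
        # ...then averaging the value vectors by those weights.
        out.append([sum(w * v[j] for w, v in zip(weights, x))
                    for j in range(d)])
    return out

context_vectors = self_attention(tokens)
print(context_vectors)
```

Each output row is a context-aware blend of all three inputs, weighted by how strongly that token attends to each of the others.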
System Prompt (Applications)
A high-priority instruction that sets the role, behavior, constraints, and goals for an AI model within an application.
Semantic Search (Data)
A search approach that finds results based on meaning and intent rather than exact keyword matches.
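The core operation is comparing embedding vectors, typically by cosine similarity. The tiny hand-made vectors below are stand-ins for real model embeddings; they are chosen so that the semantically related sentence wins despite sharing no keywords with the query.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product of the vectors divided by
    # the product of their lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-made stand-ins for real embeddings.
corpus = {
    "a cat sat on the mat": [0.9, 0.1, 0.0],
    "stock prices fell today": [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "kitten on a rug"

best = max(corpus, key=lambda text: cosine(corpus[text], query_vec))
print(best)  # the cat sentence, despite zero keyword overlap
```

Keyword search would find nothing for "kitten on a rug" here; the embedding comparison surfaces the related sentence anyway.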
Transformer (Models & Architecture)
The neural network architecture behind most modern AI; it uses attention mechanisms to process sequences in parallel.
Tokenization (Language & Text)
The process of splitting text into smaller units (tokens) that a language model can process.
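A naive tokenizer that splits on words and punctuation shows the idea; production models instead use learned subword schemes such as byte-pair encoding (BPE), so this is only a sketch of the concept.

```python
import re

def tokenize(text: str) -> list[str]:
    # Split into runs of word characters or single punctuation marks.
    # Real tokenizers use learned subword vocabularies, not a regex.
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Don't panic!"))  # ['Don', "'", 't', 'panic', '!']
```

Note that even this toy splitter breaks "Don't" into three tokens; subword tokenizers make similar (but learned) choices, which is why token counts rarely equal word counts.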
Temperature (Inference & Generation)
A parameter that controls the randomness of an AI model's outputs: lower values make responses more deterministic, while higher values make them more varied and creative.
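Mechanically, temperature divides the model's logits before the softmax that produces next-token probabilities. The sketch below uses arbitrary toy logits to show how a low temperature sharpens the distribution and a high one flattens it.

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    # Dividing logits by the temperature reshapes the distribution:
    # low T sharpens it, high T flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                        # toy next-token scores
cold = softmax_with_temperature(logits, 0.2)    # near-deterministic
hot = softmax_with_temperature(logits, 2.0)     # closer to uniform

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At very low temperature the top token takes nearly all the probability mass, which is why low-temperature sampling approaches greedy decoding.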
Token (Language & Text)
The basic unit of text processed by a language model, often representing a word, subword, punctuation mark, or symbol.
Text-to-Image (Applications)
AI generation that creates images from natural language prompts.
Text-to-Video (Applications)
AI generation that creates video clips from natural language prompts.