AI / ML Terms
Patterns, models, and agents rewriting what software can do
Machine learning and AI are reshaping what software can do and how it is built. This category covers model architectures, training concepts, inference patterns, vector databases, RAG pipelines, agent frameworks, and the terminology you need to work intelligently alongside — or build on top of — modern AI systems. Understanding these concepts is increasingly a core developer skill.
AI Prompt Versioning
The practice of treating prompts as versioned artifacts — tracking changes, correlating outputs to prompt revisions, and enabling rollback when quality regresses.
1d ago
ai_ml intermediate
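A minimal in-memory sketch of the idea; the `PromptRegistry` class and its methods are hypothetical, not any particular library's API:

```python
import hashlib

# Minimal prompt registry: each revision is stored immutably with a
# content hash, so outputs can be correlated to the exact prompt text
# that produced them, and bad revisions can be rolled back.
class PromptRegistry:
    def __init__(self):
        self._versions = {}  # name -> list of (hash, text)

    def register(self, name, text):
        digest = hashlib.sha256(text.encode()).hexdigest()[:12]
        self._versions.setdefault(name, []).append((digest, text))
        return digest

    def latest(self, name):
        return self._versions[name][-1]

    def rollback(self, name):
        # Drop the newest revision when quality regresses.
        self._versions[name].pop()
        return self._versions[name][-1]

registry = PromptRegistry()
v1 = registry.register("summarize", "Summarize the text in one sentence.")
v2 = registry.register("summarize", "Summarize the text in two sentences.")
assert registry.latest("summarize")[0] == v2
assert registry.rollback("summarize")[0] == v1
```

In production the same idea is usually backed by version control or a prompt-management service rather than process memory.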
AI Model Quantization
Compressing neural network weights and activations to lower-precision formats (int8, int4, fp8) to shrink memory and accelerate inference.
3d ago
ai_ml advanced
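A toy symmetric int8 round-trip in NumPy shows the core trade-off; the helper names are illustrative:

```python
import numpy as np

# Symmetric int8 quantization: store int8 values plus one float scale,
# then reconstruct approximately at inference time. Storage shrinks 4x
# versus float32; reconstruction error is bounded by about scale/2.
def quantize_int8(w):
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 2.4], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
assert q.dtype == np.int8
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Real quantization schemes add per-channel scales, zero points, and calibration data, but the round-to-grid idea is the same.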
AI Synthetic Data Generation
Using generative models to produce artificial training, testing, or augmentation data that mimics the statistical properties of real datasets without exposing originals.
3d ago
ai_ml intermediate
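As a deliberately simple stand-in for a generative model, fitting a per-column Gaussian and sampling from it already illustrates the goal: matching statistical properties without copying any original row:

```python
import numpy as np

# Toy tabular generator: estimate per-column mean/std from the "real"
# data, then sample fresh synthetic rows from that fitted distribution.
# Real systems use GANs, diffusion models, or LLMs instead of a Gaussian.
rng = np.random.default_rng(0)
real = rng.normal(loc=[50.0, 3.0], scale=[10.0, 0.5], size=(10_000, 2))

mu, sigma = real.mean(axis=0), real.std(axis=0)
synthetic = rng.normal(mu, sigma, size=(10_000, 2))

# Statistics are preserved to within sampling noise.
assert np.allclose(synthetic.mean(axis=0), mu, atol=0.5)
assert np.allclose(synthetic.std(axis=0), sigma, atol=0.5)
```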
Constitutional AI (CAI)
Anthropic's training methodology where models critique and revise their own outputs against a set of written principles, reducing reliance on human labellers for alignment.
2w ago
ai_ml advanced
Mixture of Experts (MoE)
Neural network architecture where a gating network routes each token to a small subset of specialist 'expert' sub-networks, enabling huge total parameter counts at moderate per-token compute cost.
2w ago
ai_ml advanced
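A toy top-2 gate over eight random "experts" sketches the routing idea; all shapes and names here are illustrative, not a real architecture:

```python
import numpy as np

# Toy MoE layer: the gate scores all experts per token, but only the
# top-2 experts actually run, so per-token compute stays small even
# though total parameters scale with the number of experts.
rng = np.random.default_rng(0)
n_experts, d, k = 8, 16, 2
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_forward(x):                      # x: (d,) one token
    logits = x @ gate_w
    top = np.argsort(logits)[-k:]        # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the selected k
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top

x = rng.normal(size=d)
y, used = moe_forward(x)
assert y.shape == (d,) and len(used) == k   # only 2 of 8 experts ran
```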
Prompt Caching
API feature where a static prompt prefix (system instructions, large context) is cached server-side, dramatically reducing cost and latency on repeated calls that share the prefix.
2w ago
ai_ml intermediate
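The mechanism can be sketched locally with a dictionary keyed by a hash of the prefix; real providers do this server-side inside the model's attention cache, and every name below is hypothetical:

```python
import hashlib

# Conceptual sketch: the expensive prefix processing runs once per
# unique prefix; later calls that share it reuse the cached state and
# only pay for the new suffix.
cache = {}
calls = {"expensive": 0}

def process_prefix(prefix):
    calls["expensive"] += 1              # stands in for prefill compute
    return f"<state for {len(prefix)} chars>"

def answer(prefix, suffix):
    key = hashlib.sha256(prefix.encode()).hexdigest()
    if key not in cache:
        cache[key] = process_prefix(prefix)
    return f"{cache[key]} + {suffix}"

system = "You are a helpful assistant. " * 200   # large static prefix
answer(system, "Question 1")
answer(system, "Question 2")
assert calls["expensive"] == 1   # the prefix was only processed once
```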
Reasoning Models & Test-Time Compute
A class of LLMs trained to allocate extra inference-time compute to internal reasoning before answering, achieving large gains on math, code, and logic at the cost of latency and tokens.
2w ago
ai_ml intermediate
RLHF — Reinforcement Learning from Human Feedback
Post-training method in which human preference rankings are used to train a reward model, whose scores then guide reinforcement-learning fine-tuning of the LLM toward outputs people prefer.
2w ago
ai_ml advanced
Diffusion Models
A class of generative models that learn to reverse a gradual noising process — starting from pure noise and iteratively denoising into coherent images, audio, or video; the core technique behind Stable Diffusion, Midjourney, and DALL·E 3.
4w ago
ai_ml advanced
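The forward (noising) half of the process is a closed-form formula that can be written directly; the schedule values below follow the common DDPM setup and are illustrative:

```python
import numpy as np

# DDPM forward process: x_t = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*eps.
# Training teaches a network to predict eps, so sampling can run this
# corruption in reverse, from pure noise back to data.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal fraction

x0 = rng.normal(size=(8, 8))             # stand-in for an image
eps = rng.normal(size=x0.shape)          # the noise to be predicted

def noise(x0, t):
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1 - a) * eps

# Early steps barely change the data; by t = T-1 almost no signal is left.
assert np.allclose(noise(x0, 0), x0, atol=0.1)
assert alpha_bar[-1] < 1e-4
```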
AI Alignment
The research and engineering discipline of ensuring AI systems pursue goals that are consistent with human values, intentions, and safety — not just stated objectives.
2mo ago
ai_ml advanced
Indirect Prompt Injection
An adversarial technique where malicious instructions are injected into an LLM's context window — via user input, retrieved documents, or tool results — to hijack the model's behaviour.
2mo ago
ai_ml advanced
AI Governance
The policies, processes, and organisational structures that ensure AI systems are developed, deployed, and monitored responsibly — covering accountability, fairness, transparency, and compliance.
2mo ago
ai_ml advanced
LLM Guardrails
Runtime constraints and safety filters applied around LLM calls to detect, block, or rewrite inputs and outputs that are harmful, off-topic, or policy-violating.
2mo ago
ai_ml intermediate
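A minimal output filter sketches one layer of a guardrail stack; the patterns are illustrative, and production systems layer classifiers and policy models on top of simple filters like this:

```python
import re

# Minimal guardrail: scan text against blocked patterns and either pass
# it through unchanged or block it entirely.
BLOCKED = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # SSN-like number
    re.compile(r"ignore (all )?previous instructions", re.I),
]

def guard(text):
    for pattern in BLOCKED:
        if pattern.search(text):
            return "[blocked by guardrail]"
    return text

assert guard("The capital of France is Paris.") == "The capital of France is Paris."
assert guard("My SSN is 123-45-6789") == "[blocked by guardrail]"
```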
LLM Observability
The practice of monitoring, tracing, and evaluating LLM-powered systems in production — covering latency, token costs, prompt drift, output quality, and failure modes.
2mo ago
ai_ml intermediate
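The simplest possible instrumentation is a wrapper that records latency and token counts per call; `fake_model` is a hypothetical stand-in for a real API call, and whitespace splitting is a crude proxy for tokenisation:

```python
import functools
import time

# Record latency and approximate token counts for every model call so
# cost and quality regressions can be tracked over time.
METRICS = []

def observed(fn):
    @functools.wraps(fn)
    def wrapper(prompt, **kw):
        start = time.perf_counter()
        reply = fn(prompt, **kw)
        METRICS.append({
            "latency_s": time.perf_counter() - start,
            "prompt_tokens": len(prompt.split()),    # crude token proxy
            "output_tokens": len(reply.split()),
        })
        return reply
    return wrapper

@observed
def fake_model(prompt):
    return "a short canned reply"    # stands in for a real API call

fake_model("summarize this document please")
assert METRICS[0]["prompt_tokens"] == 4
assert METRICS[0]["output_tokens"] == 4
```

Real deployments emit these records to a tracing backend instead of a list.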
Knowledge Distillation
A compression technique where a smaller 'student' model is trained to mimic the outputs of a larger 'teacher' model, achieving comparable performance at a fraction of the inference cost.
2mo ago
ai_ml advanced
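The heart of the method is a temperature-softened KL divergence between teacher and student output distributions, sketched here on raw logits:

```python
import numpy as np

# Distillation loss: soften both logit vectors with a temperature T,
# then penalise KL(teacher || student) so the student copies the
# teacher's full output distribution, not just its top label.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=2.0):
    p = softmax(teacher_logits / T)      # soft teacher targets
    q = softmax(student_logits / T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])
good_student = np.array([3.9, 1.1, 0.4])   # close to the teacher
bad_student = np.array([0.5, 4.0, 1.0])    # confident and wrong
assert distill_loss(good_student, teacher) < distill_loss(bad_student, teacher)
```

In practice this term is combined with the ordinary cross-entropy loss on ground-truth labels.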
Temperature & Sampling in LLMs
Parameters that control the randomness and diversity of LLM output — temperature scales token probabilities, while top-p and top-k limit the candidate pool before sampling.
2mo ago
ai_ml intermediate
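The three knobs can be demonstrated on a toy logit vector; the helper function below is illustrative, not any provider's API:

```python
import numpy as np

# Reshape a next-token distribution with temperature, top-k, and top-p
# before sampling from it.
def next_token_probs(logits, temperature=1.0, top_k=None, top_p=None):
    z = np.asarray(logits, dtype=float) / temperature   # scale logits
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    if top_k is not None:                   # keep the k most likely tokens
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)
    if top_p is not None:                   # smallest set with mass >= p
        order = np.argsort(probs)[::-1]
        csum = np.cumsum(probs[order])
        keep = order[: np.searchsorted(csum, top_p) + 1]
        masked = np.zeros_like(probs)
        masked[keep] = probs[keep]
        probs = masked
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5, -1.0]
cold = next_token_probs(logits, temperature=0.1)   # near-greedy
hot = next_token_probs(logits, temperature=2.0)    # flatter, more diverse
assert cold[0] > hot[0]
assert np.count_nonzero(next_token_probs(logits, top_k=2)) == 2
```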
Multimodal Models
AI models that process and generate across multiple input or output modalities — text, images, audio, and video — within a single unified architecture.
2mo ago
ai_ml intermediate
Prompt Injection
An attack where crafted user input overrides or hijacks an LLM's system instructions, causing it to ignore its intended behaviour and follow attacker-supplied commands instead.
CWE-74 OWASP LLM01:2025
2mo ago
ai_ml advanced
AI Agent Pattern
An LLM-powered system that takes multi-step actions autonomously — calling tools, reading results, and deciding next steps in a loop until a goal is achieved.
2mo ago
ai_ml advanced
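The loop can be sketched with a stubbed `llm()` standing in for a real model call; every name here is hypothetical, and real frameworks add planning, memory, and retries around the same core:

```python
# Skeletal agent loop: the model picks a tool, the runtime executes it,
# and the result is fed back in until the model decides it is done.
def calculator(expression):
    return str(eval(expression, {"__builtins__": {}}))  # toy tool

TOOLS = {"calculator": calculator}

def llm(history):
    # Stand-in for a real model call: request a tool once, then finish.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "calculator", "args": {"expression": "6 * 7"}}
    return {"answer": f"The result is {history[-1]['content']}"}

def run_agent(goal, max_steps=5):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = llm(history)
        if "answer" in action:                        # goal achieved
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": result})

assert run_agent("What is 6 * 7?") == "The result is 42"
```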
Chain-of-Thought Prompting
A prompting technique that instructs an LLM to show its reasoning step-by-step before giving a final answer, significantly improving accuracy on complex tasks.
2mo ago
ai_ml beginner
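The technique is purely a change to the prompt text; the zero-shot variant just appends a reasoning cue:

```python
# Zero-shot chain-of-thought: the same question, asked two ways.
# The second phrasing elicits intermediate reasoning steps before the
# final answer, which typically improves accuracy on multi-step problems.
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

direct_prompt = f"Q: {question}\nA:"
cot_prompt = f"Q: {question}\nA: Let's think step by step."

assert cot_prompt.endswith("Let's think step by step.")
```

Few-shot variants instead prepend worked examples whose answers spell out their reasoning.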