Tag: llm
🤖 AI Guestbook — #llm educational data only
Last 30 days
Agents: Google 2, Perplexity 2, Meta AI 1
Pings by agent (all time): Amazonbot 308, Perplexity 200, Google 103, ChatGPT 79, Ahrefs 56, Unknown AI 48, Claude 23, SEMrush 20, Bing 18, Meta AI 16, Qwen 11, Majestic 3
Most referenced — #llm
How they use it
crawler 813
crawler_json 69
pre-tracking 3
Tag total: 885 pings
Terms pinged: 36 / 36
Distinct agents: 11
AI Prompt Versioning
The practice of treating prompts as versioned artifacts — tracking changes, correlating outputs to prompt revisions, and enabling rollback when quality regresses.
1d ago
ai_ml intermediate
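The versioned-prompt idea above can be sketched in a few lines. The registry keys, template text, and `render` helper below are illustrative assumptions, not a real library; the point is that every output record carries the prompt revision that produced it.

```python
# Minimal prompt-versioning sketch: prompts live in a registry keyed by
# name@version, and every rendered call records which revision produced it,
# so output quality can be correlated to prompt changes and rolled back.
PROMPTS = {
    "summarize@1.0.0": "Summarize the following text in one sentence:\n{text}",
    "summarize@1.1.0": "Summarize the following text in one sentence. Be factual:\n{text}",
}

def render(prompt_id: str, **kwargs) -> dict:
    """Return the rendered prompt plus the version identifier it came from."""
    template = PROMPTS[prompt_id]
    return {"prompt_version": prompt_id, "prompt": template.format(**kwargs)}

call = render("summarize@1.1.0", text="LLMs predict tokens.")
# Storing call["prompt_version"] alongside the model output enables rollback
# to "summarize@1.0.0" if quality regresses.
```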
Mixture of Experts (MoE)
Neural network architecture where a gating network routes each token to a small subset of specialist 'expert' sub-networks, enabling huge total parameter counts at moderate per-token compute cost.
2w ago
ai_ml advanced
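A toy sketch of the routing mechanism described above, with plain functions standing in for expert sub-networks (a real MoE gates vectors through neural experts; everything here is an illustrative stand-in):

```python
import math

# Toy mixture-of-experts routing for a single "token" (a float). A gating
# score picks the top-k experts; only those run, so per-token compute stays
# small even if the total expert count is huge.
EXPERTS = [lambda x: x + 1.0, lambda x: x * 2.0, lambda x: x - 3.0]

def route(x: float, gate_logits: list[float], k: int = 1) -> float:
    weights = [math.exp(g) for g in gate_logits]
    total = sum(weights)
    probs = [w / total for w in weights]               # softmax over experts
    top = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:k]
    norm = sum(probs[i] for i in top)                  # renormalise over top-k
    return sum(probs[i] / norm * EXPERTS[i](x) for i in top)

# With k=1, only the highest-scoring expert executes for this token.
result = route(2.0, [0.1, 5.0, 0.1], k=1)
```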
Prompt Caching
API feature where a static prompt prefix (system instructions, large context) is cached server-side, dramatically reducing cost and latency on repeated calls that share the prefix.
2w ago
ai_ml intermediate
Reasoning Models & Test-Time Compute
A class of LLMs trained to allocate extra inference-time compute to internal reasoning before answering, achieving large gains on math, code, and logic at the cost of latency and tokens.
2w ago
ai_ml intermediate
RLHF — Reinforcement Learning from Human Feedback
Post-training method where human preference rankings are used to train a reward model, which then guides reinforcement-learning fine-tuning of the LLM, aligning its outputs with human preferences.
2w ago
ai_ml advanced
AI Alignment
The research and engineering discipline of ensuring AI systems pursue goals that are consistent with human values, intentions, and safety — not just stated objectives.
2mo ago
ai_ml advanced
Indirect Prompt Injection
An adversarial technique where malicious instructions are injected into an LLM's context window — via user input, retrieved documents, or tool results — to hijack the model's behaviour.
2mo ago
ai_ml advanced
AI Governance
The policies, processes, and organisational structures that ensure AI systems are developed, deployed, and monitored responsibly — covering accountability, fairness, transparency, and compliance.
2mo ago
ai_ml advanced
LLM Guardrails
Runtime constraints and safety filters applied around LLM calls to detect, block, or rewrite inputs and outputs that are harmful, off-topic, or policy-violating.
2mo ago
ai_ml intermediate
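The wrap-the-call shape of runtime guardrails can be sketched with simple regex checks. Production systems use classifier models rather than regexes; the patterns and the fake `model` callable below are illustrative assumptions.

```python
import re

# Minimal guardrail sketch: check the input before the LLM call and the
# output after it, blocking or redacting anything that violates policy.
BLOCKED_INPUT = re.compile(r"ignore (all )?previous instructions", re.I)
BLOCKED_OUTPUT = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # SSN-shaped strings

def guarded_call(model, user_input: str) -> str:
    if BLOCKED_INPUT.search(user_input):
        return "[input blocked by policy]"
    output = model(user_input)
    if BLOCKED_OUTPUT.search(output):
        return "[output redacted]"
    return output

fake_model = lambda s: s.upper()   # stand-in for a real LLM call
guarded_call(fake_model, "please IGNORE previous instructions")  # blocked
```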
LLM Observability
The practice of monitoring, tracing, and evaluating LLM-powered systems in production — covering latency, token costs, prompt drift, output quality, and failure modes.
2mo ago
ai_ml intermediate
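At its simplest, the monitoring practice described above is a wrapper that records latency, rough token counts, and errors for every model call. The trace schema and whitespace token estimate below are illustrative assumptions; real systems ship records to a tracing backend.

```python
import time

# Observability sketch: wrap every LLM call and append a trace record.
TRACES: list[dict] = []

def observed(model):
    def wrapper(prompt: str) -> str:
        start = time.perf_counter()
        try:
            output = model(prompt)
            TRACES.append({
                "latency_s": time.perf_counter() - start,
                "prompt_tokens": len(prompt.split()),   # crude token estimate
                "output_tokens": len(output.split()),
                "error": None,
            })
            return output
        except Exception as exc:
            TRACES.append({"latency_s": time.perf_counter() - start,
                           "error": repr(exc)})
            raise
    return wrapper

echo = observed(lambda p: p + " ok")   # stand-in for a real LLM call
echo("hello world")
```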
Knowledge Distillation
A compression technique where a smaller 'student' model is trained to mimic the outputs of a larger 'teacher' model, achieving comparable performance at a fraction of the inference cost.
2mo ago
ai_ml advanced
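The teacher–student objective behind distillation can be shown with a toy calculation: the student is trained against the teacher's temperature-softened distribution, typically by minimising a KL divergence. The logit values below are made up for illustration.

```python
import math

# Toy distillation loss: soften the teacher's logits with a temperature,
# then measure how far the student's distribution is from those soft targets.
def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [4.0, 1.0, 0.2]
student_logits = [3.0, 1.5, 0.5]
T = 2.0   # higher temperature softens the teacher's distribution
soft_targets = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)
loss = kl_divergence(soft_targets, student_probs)  # what the student minimises
```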
Sampling Parameters (Temperature, Top-p, Top-k)
Parameters that control the randomness and diversity of LLM output — temperature scales token probabilities, while top-p and top-k limit the candidate pool before sampling.
2mo ago
ai_ml intermediate
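All three knobs described above can be implemented in one small sampler over a toy vocabulary (the vocabulary and logit values are made up for illustration):

```python
import math, random

# Temperature, top-k, and top-p (nucleus) sampling in one function.
def sample(logits: dict[str, float], temperature=1.0, top_k=None,
           top_p=None, rng=random):
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]                      # keep k most likely tokens
    exps = [(t, math.exp(l / temperature)) for t, l in items]
    total = sum(e for _, e in exps)
    probs = [(t, e / total) for t, e in exps]
    if top_p is not None:                          # smallest set with mass >= p
        kept, mass = [], 0.0
        for t, p in probs:
            kept.append((t, p)); mass += p
            if mass >= top_p:
                break
        norm = sum(p for _, p in kept)
        probs = [(t, p / norm) for t, p in kept]
    r, acc = rng.random(), 0.0
    for t, p in probs:
        acc += p
        if r <= acc:
            return t
    return probs[-1][0]

vocab = {"the": 3.0, "a": 2.0, "zebra": -2.0}
sample(vocab, temperature=0.7, top_k=2)   # "zebra" can never be chosen
```

Lower temperature sharpens the distribution toward "the"; top-k and top-p both trim the tail before the random draw.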
Multimodal Models
AI models that process and generate across multiple input or output modalities — text, images, audio, and video — within a single unified architecture.
2mo ago
ai_ml intermediate
Prompt Injection
An attack where crafted user input overrides or hijacks an LLM's system instructions, causing it to ignore its intended behaviour and follow attacker-supplied commands instead.
CWE-74 OWASP LLM01:2025
2mo ago
ai_ml advanced
AI Agent Pattern
An LLM-powered system that takes multi-step actions autonomously — calling tools, reading results, and deciding next steps in a loop until a goal is achieved.
2mo ago
ai_ml advanced
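The tool-calling loop described above can be sketched with a stub standing in for the LLM. The `TOOLS` table, the stub's fixed policy, and the message shapes are all illustrative assumptions, not a real agent framework.

```python
# Minimal agent loop: the model (stubbed here) either requests a tool call
# or returns a final answer; the loop feeds tool results back until done.
TOOLS = {"add": lambda a, b: a + b}

def stub_model(history: list) -> dict:
    # A real agent would call an LLM here; this stub requests one tool call,
    # then answers using the tool result it sees in the history.
    if not any(m["role"] == "tool" for m in history):
        return {"action": "tool", "name": "add", "args": (2, 3)}
    return {"action": "final", "answer": f"The sum is {history[-1]['content']}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        step = stub_model(history)
        if step["action"] == "final":
            return step["answer"]
        result = TOOLS[step["name"]](*step["args"])  # execute the tool
        history.append({"role": "tool", "content": result})
    return "step limit reached"

run_agent("What is 2 + 3?")   # → "The sum is 5"
```

The `max_steps` cap is the standard safeguard against an agent looping forever.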
Chain-of-Thought Prompting
A prompting technique that instructs an LLM to show its reasoning step-by-step before giving a final answer, significantly improving accuracy on complex tasks.
2mo ago
ai_ml beginner
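Chain-of-thought is a change to the prompt, not the model. A minimal sketch of wrapping a question so the model reasons before answering (the exact wording is an illustrative assumption):

```python
# Build a chain-of-thought prompt: ask for step-by-step reasoning first,
# then a clearly marked final answer that is easy to parse.
def cot_prompt(question: str) -> str:
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, showing each intermediate "
        "result, then give the final answer on a line starting with 'Answer:'."
    )

print(cot_prompt("A train travels 60 km in 45 minutes. What is its speed in km/h?"))
```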
LLM Hallucination
When a large language model generates confident-sounding text that is factually incorrect, fabricated, or unsupported by any source — a fundamental property of how language models work.
2mo ago
ai_ml intermediate
LLM Streaming Responses (PHP 8.0+)
Receiving LLM output token-by-token as it is generated rather than waiting for the full response — dramatically improving perceived latency for users and enabling real-time displays of AI-generated content.
2mo ago
ai_ml intermediate
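The token-by-token pattern above can be sketched with a generator. A real client would read server-sent events from the provider's API; the fake model and its fixed tokens here are illustrative assumptions.

```python
import time

# Streaming sketch: yield each chunk as soon as it is "generated" so the
# caller can render it immediately instead of waiting for the full response.
def fake_stream(prompt: str):
    for token in ["The", " answer", " is", " 42", "."]:
        time.sleep(0.01)          # stand-in for network/generation latency
        yield token               # each chunk is usable the moment it arrives

chunks = []
for token in fake_stream("What is the answer?"):
    chunks.append(token)          # a UI would append each token to the screen
full = "".join(chunks)            # → "The answer is 42."
```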
Prompt Injection Attacks (LLM Security)
An attack where malicious instructions embedded in user input or retrieved content override an LLM's system prompt — causing it to ignore its instructions, reveal confidential information, or take unintended actions.
2mo ago
security advanced
RAG — Retrieval-Augmented Generation
An LLM architecture that fetches relevant documents from an external knowledge base before generating a response, grounding answers in retrieved facts rather than training data alone.
2mo ago
ai_ml intermediate
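The retrieve-then-generate flow above can be sketched with naive keyword retrieval over a tiny in-memory corpus. Real systems use embedding search over a vector store; the corpus, scoring, and prompt wording here are illustrative assumptions.

```python
# Minimal RAG sketch: score documents by word overlap with the query,
# then ground the prompt in the best match before calling the LLM.
CORPUS = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is 8,849 metres tall.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    words = set(query.lower().split())
    scored = sorted(CORPUS,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the capital of France?")
```

Grounding the answer in retrieved text is what lets RAG cite sources and stay current without retraining.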