AI Prompt Versioning
The practice of treating prompts as versioned artifacts — tracking changes, correlating outputs to prompt revisions, and enabling rollback when quality regresses.
1d ago ai_ml intermediate
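A minimal Python sketch of the idea, assuming an in-memory registry (a real setup would back this with git or a database); `PromptRegistry` and its methods are illustrative names, not a real library:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class PromptRegistry:
    """Toy in-memory prompt version store with rollback."""
    versions: dict = field(default_factory=dict)  # name -> list of (digest, text)

    def register(self, name: str, text: str) -> str:
        digest = hashlib.sha256(text.encode()).hexdigest()[:8]
        self.versions.setdefault(name, []).append((digest, text))
        return digest  # correlate outputs to this revision id

    def latest(self, name: str) -> tuple:
        return self.versions[name][-1]

    def rollback(self, name: str) -> tuple:
        self.versions[name].pop()        # discard the regressed revision
        return self.versions[name][-1]

reg = PromptRegistry()
v1 = reg.register("summarize", "Summarize the text in one sentence.")
v2 = reg.register("summarize", "Summarize the text in three bullet points.")
assert reg.latest("summarize")[0] == v2
digest, text = reg.rollback("summarize")  # quality regressed: back to v1
assert digest == v1
```

Logging the returned digest alongside each model output is what makes "which prompt produced this?" answerable later.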
Mixture of Experts (MoE)
Neural network architecture where a gating network routes each token to a small subset of specialist 'expert' sub-networks, enabling huge total parameter counts at moderate per-token compute cost.
2w ago ai_ml advanced
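The routing step can be sketched in pure Python: a linear gate scores each expert, only the top-k experts run, and their outputs are mixed by renormalised gate probabilities. The experts here are trivial stand-in functions, not real sub-networks:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, gate_weights, experts, k=2):
    """Route one token vector to its top-k experts and mix their outputs."""
    scores = [sum(w * x for w, x in zip(row, token)) for row in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)     # renormalise over the chosen experts
    return sum(probs[i] / norm * experts[i](token) for i in top)

experts = [lambda t: sum(t) * 1.0, lambda t: sum(t) * 2.0,
           lambda t: sum(t) * 3.0, lambda t: sum(t) * 4.0]
gate = [[0.1, 0.0], [0.9, 0.0], [0.0, 0.2], [0.0, 0.8]]
out = moe_forward([1.0, 1.0], gate, experts, k=2)
assert 4.0 < out < 8.0   # mixture of experts 1 and 3 only; 2 of 4 run
```

Only k of the experts execute per token, which is exactly how total parameters can grow without per-token compute growing with them.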
Prompt Caching
API feature where a static prompt prefix (system instructions, large context) is cached server-side, dramatically reducing cost and latency on repeated calls that share the prefix.
2w ago ai_ml intermediate
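A toy model of the mechanism (the real cache lives server-side and stores attention KV state, not word counts — `PrefixCache` here is purely illustrative):

```python
import hashlib

class PrefixCache:
    """Toy prompt-prefix cache: the expensive processing of a static
    prefix happens once, then every call sharing it reuses the result."""
    def __init__(self):
        self.store = {}
        self.misses = 0

    def process_prefix(self, prefix: str) -> int:
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key not in self.store:
            self.misses += 1                        # full-price processing
            self.store[key] = len(prefix.split())   # stand-in for a KV cache
        return self.store[key]

cache = PrefixCache()
system = "You are a helpful assistant. " * 50       # large static prefix
for question in ["What is MoE?", "What is RAG?", "What is RLHF?"]:
    cached_tokens = cache.process_prefix(system)
assert cache.misses == 1   # processed once, reused for the other two calls
```

The practical consequence: keep the stable part of the prompt (system instructions, large documents) byte-identical across calls, and put the varying part at the end, so the prefix actually hits the cache.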
Reasoning Models & Test-Time Compute
A class of LLMs trained to allocate extra inference-time compute to internal reasoning before answering, achieving large gains on math, code, and logic at the cost of latency and tokens.
2w ago ai_ml intermediate
RLHF — Reinforcement Learning from Human Feedback
Post-training method where human preference rankings train a reward model that fine-tunes an LLM via reinforcement learning, aligning outputs with human preferences.
2w ago ai_ml advanced
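The reward-model half can be illustrated with a Bradley–Terry-style update, the pairwise-preference objective commonly used for this step. This toy keeps one scalar reward per response rather than a real model, so it only shows the direction of the learning signal:

```python
import math

def bt_grad(r_chosen, r_rejected):
    """Bradley-Terry gradient: larger when the model still ranks the
    rejected response close to (or above) the chosen one."""
    p = 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))
    return 1.0 - p

rewards = {"A": 0.0, "B": 0.0}         # toy "reward model": one score per response
preferences = [("A", "B")] * 20        # humans consistently prefer A over B
lr = 0.5
for chosen, rejected in preferences:
    g = bt_grad(rewards[chosen], rewards[rejected])
    rewards[chosen] += lr * g
    rewards[rejected] -= lr * g

assert rewards["A"] > rewards["B"]     # preference data separated the scores
```

In full RLHF this learned reward then scores LLM samples inside an RL loop (classically PPO), which is the part this sketch omits.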
AI Alignment
The research and engineering discipline of ensuring AI systems pursue goals that are consistent with human values, intentions, and safety — not just stated objectives.
2mo ago ai_ml advanced
AI Context Poisoning
An adversarial technique where malicious instructions are injected into an LLM's context window — via user input, retrieved documents, or tool results — to hijack the model's behaviour.
2mo ago ai_ml advanced
AI Governance
The policies, processes, and organisational structures that ensure AI systems are developed, deployed, and monitored responsibly — covering accountability, fairness, transparency, and compliance.
2mo ago ai_ml advanced
AI Guardrails
Runtime constraints and safety filters applied around LLM calls to detect, block, or rewrite inputs and outputs that are harmful, off-topic, or policy-violating.
2mo ago ai_ml intermediate
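A minimal sketch of the wrapper pattern, assuming a simple regex blocklist (production guardrails typically combine classifiers, allow-lists, and policy engines — the patterns and `fake_model` below are illustrative):

```python
import re

BLOCKLIST = [re.compile(p, re.I) for p in
             [r"ignore (all|previous) instructions", r"\bssn\b"]]

def guarded_call(prompt: str, model) -> str:
    """Apply naive input and output guardrails around a model call."""
    if any(p.search(prompt) for p in BLOCKLIST):
        return "[blocked: policy violation in input]"
    out = model(prompt)
    if any(p.search(out) for p in BLOCKLIST):      # screen the output too
        return "[redacted: policy violation in output]"
    return out

fake_model = lambda p: f"echo: {p}"
assert guarded_call("What is RAG?", fake_model).startswith("echo:")
assert guarded_call("Please ignore previous instructions", fake_model).startswith("[blocked")
```

Note that both directions are checked: guardrails that only screen inputs miss harmful content the model generates on its own.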
AI Observability
The practice of monitoring, tracing, and evaluating LLM-powered systems in production — covering latency, token costs, prompt drift, output quality, and failure modes.
2mo ago ai_ml intermediate
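The tracing half of this is often just a decorator around the model call. A stdlib-only sketch (whitespace splitting is a crude token proxy, and `fake_llm` is a stand-in for a real client):

```python
import functools
import time

TRACES = []   # in production this would go to a tracing backend

def traced(fn):
    """Record latency and rough token counts for each LLM call."""
    @functools.wraps(fn)
    def wrapper(prompt, **kw):
        start = time.perf_counter()
        out = fn(prompt, **kw)
        TRACES.append({
            "latency_s": time.perf_counter() - start,
            "prompt_tokens": len(prompt.split()),   # crude whitespace proxy
            "output_tokens": len(out.split()),
        })
        return out
    return wrapper

@traced
def fake_llm(prompt: str) -> str:
    return "a short canned answer"

fake_llm("explain retrieval augmented generation")
assert TRACES[0]["prompt_tokens"] == 4 and TRACES[0]["output_tokens"] == 4
```

From records like these, token costs, latency percentiles, and drift in output length all become queryable.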
Knowledge Distillation
A compression technique where a smaller 'student' model is trained to mimic the outputs of a larger 'teacher' model, achieving comparable performance at a fraction of the inference cost.
2mo ago ai_ml advanced
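The standard training signal is a KL divergence between temperature-softened teacher and student distributions. A self-contained sketch of that loss (real distillation backpropagates it through the student; here we only evaluate it):

```python
import math

def softmax(logits, T=1.0):
    m = max(logits)
    exps = [math.exp((x - m) / T) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    Softening (T > 1) exposes the teacher's 'dark knowledge' about
    which wrong classes are nearly right."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher    = [4.0, 1.0, 0.5]
aligned    = [3.8, 1.1, 0.4]   # student close to the teacher
misaligned = [0.5, 4.0, 1.0]   # student disagrees with the teacher
assert distill_kl(teacher, aligned) < distill_kl(teacher, misaligned)
```

Minimising this loss over many examples is what transfers the teacher's behaviour into the smaller student.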
LLM Temperature &amp; Sampling Strategies
Parameters that control the randomness and diversity of LLM output — temperature scales token probabilities, while top-p and top-k limit the candidate pool before sampling.
2mo ago ai_ml intermediate
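All three knobs fit in one small sampler. A pure-Python sketch over a toy vocabulary of four tokens (real implementations work on full logit tensors, but the order of operations — scale, truncate, renormalise, sample — is the same):

```python
import math
import random

def sample(logits, temperature=1.0, top_k=None, top_p=None, rng=random):
    """Temperature-scale logits, truncate with top-k/top-p, then sample."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    # rank tokens by probability, then truncate the candidate pool
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if top_k is not None:
        order = order[:top_k]
    if top_p is not None:
        kept, cum = [], 0.0
        for i in order:
            kept.append(i)
            cum += probs[i]
            if cum >= top_p:     # smallest set covering top_p of the mass
                break
        order = kept
    norm = sum(probs[i] for i in order)
    r, cum = rng.random() * norm, 0.0
    for i in order:
        cum += probs[i]
        if r <= cum:
            return i
    return order[-1]

logits = [5.0, 2.0, 1.0, 0.1]
assert sample(logits, temperature=0.01) == 0   # near-zero temp is near-greedy
assert sample(logits, top_k=1) == 0            # top-k=1 is exactly greedy
```

Low temperature sharpens the distribution toward the argmax; top-k and top-p instead cut unlikely tokens out of the pool before sampling, which is why they combine well.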
Multimodal AI
AI models that process and generate across multiple input or output modalities — text, images, audio, and video — within a single unified architecture.
2mo ago ai_ml intermediate
Prompt Injection Attack
An attack where crafted user input overrides or hijacks an LLM's system instructions, causing it to ignore its intended behaviour and follow attacker-supplied commands instead.
CWE-74 OWASP LLM01:2025
2mo ago ai_ml advanced
AI Agent Pattern
An LLM-powered system that takes multi-step actions autonomously — calling tools, reading results, and deciding next steps in a loop until a goal is achieved.
2mo ago ai_ml advanced
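The loop itself is small. A sketch with a scripted stand-in for the model (`scripted_llm` returns canned actions so the example runs offline; a real agent would parse tool calls from model output):

```python
def run_agent(goal, llm, tools, max_steps=5):
    """Minimal agent loop: the model picks a tool, we run it, feed the
    observation back into the history, and stop when it answers."""
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        name, arg = llm(history)          # e.g. ("calc", "2+3") or ("answer", "5")
        if name == "answer":
            return arg
        observation = tools[name](arg)
        history.append(f"{name}({arg}) -> {observation}")
    return None                           # step budget exhausted

def scripted_llm(history):
    """Stand-in for the model: use the tool once, then report its result."""
    if len(history) == 1:
        return ("calc", "2+3")
    return ("answer", history[-1].split("-> ")[1])

tools = {"calc": lambda expr: str(eval(expr, {"__builtins__": {}}))}
assert run_agent("add 2 and 3", scripted_llm, tools) == "5"
```

The `max_steps` budget matters in practice: without it, an agent that never emits a final answer loops (and bills) forever.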
Chain-of-Thought Prompting
A prompting technique that instructs an LLM to show its reasoning step-by-step before giving a final answer, significantly improving accuracy on complex tasks.
2mo ago ai_ml beginner
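The technique is purely prompt construction. A sketch of a one-shot chain-of-thought template (the worked example and wording are illustrative; any domain-appropriate exemplar works):

```python
def cot_prompt(question: str) -> str:
    """Wrap a question with a worked example and a step-by-step cue,
    so the model emits reasoning before its final answer."""
    example = (
        "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
        "Reasoning: 12 pens is 4 groups of 3; 4 x $2 = $8.\n"
        "Answer: $8\n"
    )
    return example + f"Q: {question}\nLet's think step by step.\nReasoning:"

p = cot_prompt("A train travels 60 km in 40 minutes. What is its speed in km/h?")
assert "step by step" in p and p.rstrip().endswith("Reasoning:")
```

Ending the prompt at "Reasoning:" nudges the model to continue with its working rather than jumping straight to an answer.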
LLM Hallucination
When a large language model generates confident-sounding text that is factually incorrect, fabricated, or unsupported by any source — a fundamental property of how language models work.
2mo ago ai_ml intermediate
LLM Streaming Responses PHP 8.0+
Receiving LLM output token-by-token as it is generated rather than waiting for the full response — dramatically improving perceived latency for users and enabling real-time displays of AI-generated content.
2mo ago ai_ml intermediate
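The consumer side of streaming, sketched in Python for brevity (the entry targets PHP, but the pattern is identical): render after every chunk instead of after the full response. `fake_llm_stream` stands in for a real streaming client that would yield server-sent-event chunks:

```python
def stream_tokens(token_iter, render):
    """Consume tokens as they arrive and render incrementally,
    instead of waiting for the complete response."""
    buffer = []
    for token in token_iter:
        buffer.append(token)
        render("".join(buffer))   # update the UI after every chunk
    return "".join(buffer)

def fake_llm_stream():
    for tok in ["Hel", "lo, ", "wor", "ld!"]:
        yield tok                 # a real client yields network chunks here

frames = []
final = stream_tokens(fake_llm_stream(), frames.append)
assert final == "Hello, world!"
assert frames[0] == "Hel" and len(frames) == 4
```

The first `render` call fires as soon as the first chunk lands, which is where the perceived-latency win comes from.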
Prompt Injection Attacks (LLM Security)
An attack where malicious instructions embedded in user input or retrieved content override an LLM's system prompt — causing it to ignore its instructions, reveal confidential information, or take unintended actions.
2mo ago security advanced
RAG — Retrieval-Augmented Generation
An LLM architecture that fetches relevant documents from an external knowledge base before generating a response, grounding answers in retrieved facts rather than training data alone.
2mo ago ai_ml intermediate
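The pipeline in miniature: retrieve the most relevant documents, stuff them into the prompt, then generate. This sketch uses naive word overlap as the retriever (real systems use embedding similarity) and an echoing stand-in for the model:

```python
def retrieve(query, docs, k=2):
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_answer(query, docs, llm):
    """Ground the model's answer in retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

docs = [
    "MoE routes tokens to expert sub-networks.",
    "RAG grounds answers in retrieved documents.",
    "Distillation trains a student to mimic a teacher.",
]
echo_llm = lambda prompt: prompt           # stand-in: return the grounded prompt
out = rag_answer("what is RAG", docs, echo_llm)
assert "retrieved documents" in out        # the relevant doc was injected
```

Because the model sees the retrieved text in its context, its answer can cite facts that postdate or fall outside its training data.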