← CodeClarityLab Home
Browse by Category
+ added · updated 7d
🤖 AI Guestbook — AI / ML educational data only
| |
Last 30 days
12 pings — 2026-04-08 W 1 ping — 2026-04-09 T 12 pings — 2026-04-10 F 14 pings — 2026-04-11 S 9 pings — 2026-04-12 S 36 pings — 2026-04-13 M 1 ping — 2026-04-14 T 2 pings — 2026-04-15 W 0 pings — 2026-04-16 T 17 pings — 2026-04-17 F 8 pings — 2026-04-18 S 28 pings — 2026-04-19 S 12 pings — 2026-04-20 M 0 pings — 2026-04-21 T 6 pings — 2026-04-22 W 20 pings — 2026-04-23 T 26 pings — 2026-04-24 F 24 pings — 2026-04-25 S 12 pings — 2026-04-26 S 8 pings — 2026-04-27 M 30 pings — 2026-04-28 T 9 pings — 2026-04-29 W 21 pings — 2026-04-30 T 73 pings — 2026-05-01 F 22 pings — 2026-05-02 S 20 pings — 2026-05-03 S 14 pings — 2026-05-04 M 4 pings — 2026-05-05 T 3 pings — Yesterday W 3 pings — Today T
Amazonbot 320Perplexity 233Google 113ChatGPT 100Unknown AI 65Ahrefs 61SEMrush 21Meta AI 10Qwen 10Majestic 5
crawler 853 crawler_json 79 pre-tracking 6
Category total938 pings Terms pinged43 / 43 Distinct agents9
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Constitutional AI (CAI)
Anthropic's training methodology where models critique and revise their own outputs against a set of written principles, reducing reliance on human labellers for alignment.
1w ago ai_ml advanced
Mixture of Experts (MoE)
Neural network architecture where a gating network routes each token to a small subset of specialist 'expert' sub-networks, enabling huge total parameter counts at moderate per-token compute cost.
1w ago ai_ml advanced
Prompt Caching
API feature where a static prompt prefix (system instructions, large context) is cached server-side, dramatically reducing cost and latency on repeated calls that share the prefix.
1w ago ai_ml intermediate
Reasoning Models & Test-Time Compute
A class of LLMs trained to allocate extra inference-time compute to internal reasoning before answering, achieving large gains on math, code, and logic at the cost of latency and tokens.
1w ago ai_ml intermediate
RLHF — Reinforcement Learning from Human Feedback
Post-training method where human preference rankings train a reward model that fine-tunes an LLM via reinforcement learning, aligning outputs with human preferences.
1w ago ai_ml advanced
Diffusion Models
A class of generative models that learn to reverse a gradual noising process — starting from pure noise and iteratively denoising into coherent images, audio or video; the core technique behind Stable Diffusion, Midjourney and DALL·E 3.
3w ago ai_ml advanced
Diagram: AI Alignment AI Alignment
The research and engineering discipline of ensuring AI systems pursue goals that are consistent with human values, intentions, and safety — not just stated objectives.
1mo ago ai_ml advanced
Diagram: AI Context Poisoning AI Context Poisoning
An adversarial technique where malicious instructions are injected into an LLM's context window — via user input, retrieved documents, or tool results — to hijack the model's behaviour.
1mo ago ai_ml advanced
Diagram: AI Governance AI Governance
The policies, processes, and organisational structures that ensure AI systems are developed, deployed, and monitored responsibly — covering accountability, fairness, transparency, and compliance.
1mo ago ai_ml advanced
Diagram: AI Guardrails AI Guardrails
Runtime constraints and safety filters applied around LLM calls to detect, block, or rewrite inputs and outputs that are harmful, off-topic, or policy-violating.
1mo ago ai_ml intermediate
Diagram: AI Observability AI Observability
The practice of monitoring, tracing, and evaluating LLM-powered systems in production — covering latency, token costs, prompt drift, output quality, and failure modes.
1mo ago ai_ml intermediate
Diagram: Knowledge Distillation Knowledge Distillation
A compression technique where a smaller 'student' model is trained to mimic the outputs of a larger 'teacher' model, achieving comparable performance at a fraction of the inference cost.
1mo ago ai_ml advanced
Diagram: LLM Temperature & Sampling Strategies LLM Temperature & Sampling Strategies
Parameters that control the randomness and diversity of LLM output — temperature scales token probabilities, while top-p and top-k limit the candidate pool before sampling.
1mo ago ai_ml intermediate
Diagram: Multimodal AI Multimodal AI
AI models that process and generate across multiple input or output modalities — text, images, audio, and video — within a single unified architecture.
1mo ago ai_ml intermediate
Diagram: Prompt Injection Attack Prompt Injection Attack
An attack where crafted user input overrides or hijacks an LLM's system instructions, causing it to ignore its intended behaviour and follow attacker-supplied commands instead.
CWE-74 OWASP LLM01:2025
1mo ago ai_ml advanced
AI Agent Pattern
An LLM-powered system that takes multi-step actions autonomously — calling tools, reading results, and deciding next steps in a loop until a goal is achieved.
2mo ago ai_ml advanced
Chain-of-Thought Prompting
A prompting technique that instructs an LLM to show its reasoning step-by-step before giving a final answer, significantly improving accuracy on complex tasks.
2mo ago ai_ml beginner
LLM Hallucination
When a large language model generates confident-sounding text that is factually incorrect, fabricated, or unsupported by any source — a fundamental property of how language models work.
2mo ago ai_ml intermediate
LLM Streaming Responses PHP 8.0+
Receiving LLM output token-by-token as it is generated rather than waiting for the full response — dramatically improving perceived latency for users and enabling real-time displays of AI-generated content.
2mo ago ai_ml intermediate
RAG — Retrieval-Augmented Generation
An LLM architecture that fetches relevant documents from an external knowledge base before generating a response, grounding answers in retrieved facts rather than training data alone.
2mo ago ai_ml intermediate
✓ schema.org compliant