
HTTP in Python — requests & httpx

python · Python 3.7+ · Beginner

Also Known As

requests library · httpx · Python HTTP · urllib

TL;DR

requests is the standard sync HTTP library; httpx adds async support, HTTP/2, and a similar API — both far more ergonomic than urllib.

Explanation

requests: synchronous HTTP with a clean API — Session for connection pooling, automatic JSON encoding/decoding, a timeout parameter (always set it), and retries via HTTPAdapter. httpx: a requests-compatible API plus async support (async with httpx.AsyncClient()), HTTP/2, and better streaming.

Key practices: always set timeout; use a Session for multiple requests to the same host (connection reuse); call response.raise_for_status() rather than checking status codes manually; and never disable SSL verification (verify=False is insecure).
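
The automatic JSON encoding mentioned above can be seen without touching the network, by preparing a request locally instead of sending it (the URL here is only a placeholder):

```python
import requests

# Build — but do not send — a POST whose body is JSON-encoded automatically.
req = requests.Request(
    'POST',
    'https://api.example.com/items',   # placeholder URL, never contacted
    json={'name': 'widget', 'qty': 3},
).prepare()

print(req.headers['Content-Type'])  # application/json
print(req.body)                     # the dict, serialized to JSON bytes
```

Passing json= sets the Content-Type header and serializes the body in one step; with urllib you would do both by hand.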

Common Misconception

"urllib is equivalent to requests" — in reality, urllib requires manual encoding, header management, and error handling; requests provides a dramatically simpler API with the same power.
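
A small illustration of the encoding difference (the search URL is hypothetical, and the requests object is only prepared, never sent):

```python
from urllib.parse import urlencode

import requests

params = {'q': 'python http', 'page': 2}

# urllib: you build and encode the query string yourself.
manual_url = 'https://api.example.com/search?' + urlencode(params)

# requests: pass params= and the encoding happens for you.
auto_url = requests.Request(
    'GET', 'https://api.example.com/search', params=params
).prepare().url

print(manual_url)
print(auto_url)  # same URL, without the manual encoding step
```

For a single GET this is a minor convenience; across headers, JSON bodies, redirects, and error handling the gap widens considerably.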

Why It Matters

HTTP without a timeout is a reliability disaster — a single slow upstream server can hang the entire process indefinitely; always setting timeout is the difference between a resilient service and a fragile one.
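
A sketch of the failure mode, using a deliberately slow local test server so no real network is involved:

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(2)              # simulate a slow upstream
        try:
            self.send_response(200)
            self.end_headers()
        except (BrokenPipeError, ConnectionResetError):
            pass                   # client already gave up

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(('127.0.0.1', 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f'http://127.0.0.1:{server.server_port}/'

timed_out = False
try:
    # Without timeout= this call would block for the full 2 s — or forever,
    # if the server never answered at all. With it, we fail fast.
    requests.get(url, timeout=0.5)
except requests.exceptions.Timeout:
    timed_out = True
    print('gave up after 0.5 s instead of hanging')
finally:
    server.shutdown()
```

A timeout exception is recoverable — retry, fall back, or surface an error; a silently hung process is not.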

Common Mistakes

  • No timeout parameter — process hangs indefinitely on slow servers.
  • Not using Session for multiple requests — creates a new TCP connection per request.
  • verify=False to bypass SSL — disables certificate verification, enabling MITM.
  • Not calling raise_for_status() — 400/500 responses silently succeed without it.
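
The raise_for_status() point can be demonstrated offline by constructing a Response object by hand (normally requests builds it for you from the wire):

```python
import requests

resp = requests.Response()
resp.status_code = 503           # pretend the server answered 503
resp.url = 'https://api.example.com/data'

print(resp.ok)                   # False — but nothing has raised yet

caught = False
try:
    resp.raise_for_status()      # this is what turns 4xx/5xx into an error
except requests.exceptions.HTTPError as exc:
    caught = True
    print('caught:', exc)
```

Without that call, resp.json() on a 500 error page "succeeds" and hands you garbage (or raises a confusing decode error far from the real cause).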

Code Examples

✗ Vulnerable
import requests
# No timeout — hangs forever if server is slow:
response = requests.get('https://api.example.com/data')
# No error check — 500 treated as success:
data = response.json()

# New connection every request:
for url in urls:
    requests.get(url)  # New TCP connection each time
✓ Fixed
import asyncio

import httpx
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Session with retry, plus an explicit timeout on each call:
session = requests.Session()
retry = Retry(total=3, backoff_factor=0.3, status_forcelist=[500, 502, 503])
session.mount('https://', HTTPAdapter(max_retries=retry))

response = session.get('https://api.example.com/data', timeout=(3.05, 27))
response.raise_for_status()  # Raises HTTPError for 4xx/5xx
data = response.json()

# Async with httpx (await only works inside an async function):
async def fetch(url):
    async with httpx.AsyncClient(timeout=30) as client:
        response = await client.get(url)
        response.raise_for_status()
        return response.json()

data = asyncio.run(fetch('https://api.example.com/data'))

Added 16 Mar 2026
Edited 22 Mar 2026
DEV INTEL Tools & Severity
🟡 Medium ⚙ Fix effort: Low
⚡ Quick Fix
Use httpx instead of requests for new Python code — it has the same API but supports async, HTTP/2, and proper timeout defaults; always set explicit timeouts on all HTTP calls
📦 Applies To
python · 3.7 · web · cli
🔍 Detection Hints
requests.get() without timeout parameter; no retry logic for transient failures; blocking HTTP call in async context
Auto-detectable: ✓ Yes pylint ruff semgrep
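
One way such a check could work — a minimal, illustrative AST scan for requests.<verb>(...) calls that pass no timeout= keyword (real linters like ruff or semgrep are far more thorough; the sample source below is made up):

```python
import ast

SOURCE = '''
import requests
a = requests.get("https://example.com/ok", timeout=5)
b = requests.get("https://example.com/bad")
'''

def calls_missing_timeout(source):
    """Return line numbers of requests.<verb>(...) calls without timeout=."""
    verbs = {'get', 'post', 'put', 'patch', 'delete', 'head', 'request'}
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr in verbs
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == 'requests'
                and not any(kw.arg == 'timeout' for kw in node.keywords)):
            flagged.append(node.lineno)
    return flagged

print(calls_missing_timeout(SOURCE))  # lines of the calls missing timeout=
```

This only catches direct `requests.get(...)`-style calls, not calls through a Session variable or an aliased import — which is exactly why production linters do real scope analysis.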
CWE-400 (Uncontrolled Resource Consumption)
