Every Python application interacting with external systems faces a critical risk: what happens when those systems don't respond? Without proper timeout handling, your application can grind to a halt, wasting resources and potentially crashing under load.
Properly implemented timeouts act as a crucial safety valve for your application. They prevent cascading failures, protect system resources, and maintain responsiveness even when external dependencies falter. Without them, you're essentially operating without a safety net in production.
In this guide, we'll walk through implementing effective timeout strategies for Python applications.
Let's dive into building more reliable Python applications with proper timeout handling!
Why you need timeouts in Python
Python applications regularly interact with external resources like web APIs, databases, and file systems. Each interaction introduces potential delays that could impact your application's performance.
Without timeouts, requests could hang indefinitely, consuming resources and potentially crashing your application. When you set time limits, you ensure responsiveness and avoid resource bottlenecks.
Types of Python timeouts
There are several categories of timeouts in Python applications:
HTTP request timeouts: These prevent your application from hanging when requesting external web services.
Database operation timeouts: These ensure database queries don't hold connections indefinitely.
Socket and network timeouts: These control how long network-related operations can take.
Function execution timeouts: These limit how long a particular function or operation can run.
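The lowest-level of these is the raw socket timeout. A minimal sketch using a locally connected socket pair (so no real network is needed):

```python
import socket

# A locally connected pair of sockets stands in for a real client/server
server_sock, client_sock = socket.socketpair()

# Any blocking operation on this socket may now wait at most 1 second
client_sock.settimeout(1.0)

timed_out = False
try:
    client_sock.recv(1024)  # nothing is ever sent, so this times out
except socket.timeout:
    timed_out = True
    print("Socket operation timed out")
finally:
    client_sock.close()
    server_sock.close()
```

Higher-level libraries like requests and redis-py build their timeout options on top of this same socket-level mechanism.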
Let's now explore how to implement these timeouts in your Python code.
HTTP request timeouts
Making HTTP requests to external services is a common operation that needs timeout handling. Python offers several libraries for making HTTP requests, with requests being the most popular.
Using timeouts with the requests library
The requests library doesn't set a default timeout, meaning your requests could hang indefinitely. Always specify a timeout explicitly:
import requests
from requests.exceptions import Timeout, RequestException

try:
    # Set a 5-second timeout for the request
    response = requests.get('https://api.example.com/data', timeout=5)
    data = response.json()
except Timeout:
    print("The request timed out")
except RequestException as e:
    print(f"Request error: {e}")
The timeout parameter can be either a single value (applied to both connection and read operations) or a tuple for more granular control:
# Set separate connect (3s) and read (10s) timeouts
response = requests.get('https://api.example.com/data', timeout=(3, 10))
For consistent timeout handling across your application, attach a default timeout to a session. Note that requests.Session has no timeout attribute that it honors, so setting session.timeout silently does nothing; instead, mount a transport adapter that supplies a default:

import requests
from requests.adapters import HTTPAdapter

class TimeoutAdapter(HTTPAdapter):
    def send(self, request, **kwargs):
        # Apply a default (connect, read) timeout unless one was passed explicitly
        if kwargs.get('timeout') is None:
            kwargs['timeout'] = (3, 10)
        return super().send(request, **kwargs)

session = requests.Session()
session.mount('https://', TimeoutAdapter())
session.mount('http://', TimeoutAdapter())

# All requests made with this session will use these timeouts
response = session.get('https://api.example.com/data')
When a timeout occurs, the requests library raises a Timeout exception, which you can catch and handle appropriately.
Using timeouts with urllib3 and urllib
For lower-level HTTP clients, timeout configuration differs slightly:
# Using urllib3
import urllib3

http = urllib3.PoolManager(timeout=5.0)
response = http.request('GET', 'https://api.example.com/data')

# Using the standard library urllib
import urllib.request
import urllib.error
import socket

try:
    response = urllib.request.urlopen('https://api.example.com/data', timeout=5)
    data = response.read()
except socket.timeout:
    print("Request timed out")
except urllib.error.URLError as e:
    print(f"URL error: {e}")
Using timeouts with aiohttp (for async code)
For asynchronous HTTP requests, aiohttp
provides timeout controls:
import aiohttp
import asyncio

async def fetch_data():
    timeout = aiohttp.ClientTimeout(total=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get('https://api.example.com/data') as response:
            return await response.json()

# Run the async function
asyncio.run(fetch_data())
You can configure more specific timeouts with aiohttp:
timeout = aiohttp.ClientTimeout(
    total=60,      # Total request timeout
    connect=5,     # Connection timeout
    sock_read=30   # Socket read timeout
)
Creating a utility for HTTP requests with retries
To make your HTTP requests more robust, combine timeouts with retry logic:
import requests
import time
from requests.exceptions import Timeout, RequestException

def fetch_with_timeout(url, timeout=5, retries=3, backoff_factor=0.3):
    """Fetch data with timeout and exponential backoff retry."""
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except Timeout:
            if attempt == retries - 1:
                return None
            wait_time = backoff_factor * (2 ** attempt)
            print(f"Request timed out. Retrying in {wait_time:.2f}s...")
            time.sleep(wait_time)
        except RequestException as e:
            print(f"Request error: {e}")
            if attempt == retries - 1:
                return None
            wait_time = backoff_factor * (2 ** attempt)
            time.sleep(wait_time)
    return None
This approach handles timeouts gracefully and implements exponential backoff for retries, making your application more resilient to temporary network issues.
Database operation timeouts
Database operations present particularly insidious timeout risks. Unlike HTTP requests, which usually complete quickly or fail fast, database queries can lock up your application silently, consuming connection pool resources while appearing to function normally.
Database timeouts come in several flavors:
- Connection timeouts: How long to wait when establishing a connection
- Query/statement timeouts: Maximum duration for a single query execution
- Idle timeouts: How long to keep unused connections open
- Network socket timeouts: Low-level timeout for network operations
Each timeout serves a specific purpose in your defense strategy against database performance issues. Let's examine how to set these timeouts for popular Python database libraries.
Timeouts with SQLAlchemy
SQLAlchemy, Python's most popular ORM, doesn't directly manage timeouts. Instead, it passes timeout parameters to the underlying database drivers. For PostgreSQL with psycopg2, here's how to set both connection and query timeouts:
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# Set connection and query timeouts (PostgreSQL example)
engine = create_engine(
    'postgresql://user:password@localhost/mydb',
    connect_args={
        'connect_timeout': 5,  # 5 seconds to establish connection
        'options': '-c statement_timeout=30000'  # 30 seconds for query execution (in milliseconds)
    }
)

Session = sessionmaker(bind=engine)
session = Session()

try:
    # This query will time out after 30 seconds
    result = session.execute(text('SELECT pg_sleep(60)'))  # Artificially slow query
except Exception as e:
    print(f"Query error: {e}")
    # PostgreSQL will raise "canceling statement due to statement timeout"
finally:
    session.close()
The connect_timeout parameter controls how long the driver waits to establish the initial connection. The statement_timeout (set via options) limits how long any individual query can run before the database server terminates it.
For MySQL, the parameters differ:
engine = create_engine(
    'mysql+pymysql://user:password@localhost/mydb',
    connect_args={
        'connect_timeout': 5,  # Connection timeout in seconds
        'read_timeout': 30,    # How long to wait for query results, in seconds
    }
)
Timeouts with psycopg2 (PostgreSQL)
When working directly with PostgreSQL via psycopg2, you have more granular control over timeouts:
import psycopg2

try:
    # Set connection timeout to 5 seconds
    conn = psycopg2.connect(
        "dbname=mydatabase user=myuser password=mypassword host=localhost",
        connect_timeout=5  # How long to wait for connection establishment
    )
    cursor = conn.cursor()

    # Set statement timeout to 10 seconds for this session
    cursor.execute("SET statement_timeout TO 10000")  # In milliseconds

    # This query will be terminated by PostgreSQL after 10 seconds
    cursor.execute("SELECT pg_sleep(20)")  # Simulating a slow query
except psycopg2.OperationalError as e:
    # Handles connection timeouts and other operational issues
    print(f"Database connection error: {e}")
except psycopg2.Error as e:
    # Handles query timeouts and other query-related errors
    print(f"Query error: {e}")
finally:
    if 'conn' in locals() and conn:
        conn.close()
The PostgreSQL statement_timeout setting doesn't just return an error: it actively cancels the query on the server side, freeing up resources. This is crucial for preventing long-running queries from consuming database resources.
You can also set a timeout for individual operations:
# Set timeout for a specific query to 5 seconds
with conn.cursor() as cur:
    cur.execute("SET statement_timeout TO 5000")  # 5 seconds
    cur.execute("SELECT * FROM large_table WHERE complex_condition")

    # Reset to default for subsequent operations
    cur.execute("SET statement_timeout TO 30000")  # Back to 30 seconds
Timeouts with pymongo (MongoDB)
MongoDB connections have several distinct timeout settings that control different aspects of the connection lifecycle:
import pymongo

# Set comprehensive timeout configuration
client = pymongo.MongoClient(
    "mongodb://localhost:27017/",
    serverSelectionTimeoutMS=5000,  # How long to wait for server selection
    connectTimeoutMS=5000,          # How long to wait for TCP connection
    socketTimeoutMS=10000,          # How long to wait for socket operations
    maxIdleTimeMS=60000             # How long to keep idle connections
)

try:
    # Force immediate connection to verify it works
    # Without this, connection is lazy and might not happen until first operation
    client.admin.command('ping')

    db = client["mydatabase"]
    collection = db["mycollection"]

    # This operation will time out after 10 seconds if the socket is idle
    result = collection.find_one({"field": "value"})
except pymongo.errors.ServerSelectionTimeoutError:
    # Occurs when no MongoDB server is available within serverSelectionTimeoutMS
    print("Timed out selecting a MongoDB server - check if MongoDB is running")
except pymongo.errors.ConnectionFailure:
    # Occurs when connection can't be established within connectTimeoutMS
    print("Failed to connect to MongoDB server")
except pymongo.errors.PyMongoError as e:
    # Other MongoDB errors, including socketTimeoutMS
    print(f"MongoDB error: {e}")
finally:
    client.close()
Understanding the differences between these MongoDB timeout settings is critical:
- serverSelectionTimeoutMS controls how long the driver searches for an appropriate server in the replica set
- connectTimeoutMS limits the time for establishing a TCP connection
- socketTimeoutMS governs how long socket operations (sends/receives) can take
- maxIdleTimeMS determines how long unused connections remain in the pool
Unlike some databases, MongoDB doesn't have a server-side query timeout by default. For long-running operations, use the max_time_ms parameter:
# This query will timeout after 5 seconds on the server side
result = collection.find({"complex": "query"}).max_time_ms(5000)
# For aggregations
collection.aggregate([...pipeline...], maxTimeMS=5000)
Timeouts with Redis (redis-py)
Redis connections also need timeout configuration to prevent hanging:
import redis

# Configure Redis with timeouts
r = redis.Redis(
    host='localhost',
    port=6379,
    socket_connect_timeout=5,  # Timeout for establishing connection
    socket_timeout=10,         # Timeout for socket operations
    health_check_interval=30   # How often to check connection health
)

try:
    # This will time out after 10 seconds if the operation takes too long
    result = r.get('my_key')
except redis.exceptions.TimeoutError:
    print("Redis operation timed out")
except redis.RedisError as e:
    print(f"Redis error: {e}")
When you carefully configure these database timeouts, you can prevent resource exhaustion and ensure your application remains responsive even when database performance degrades.
Timeouts in asynchronous programming
Asynchronous programming in Python introduces a new paradigm for handling timeouts that's more elegant and maintainable than synchronous approaches. When working with async code, timeouts become first-class citizens in the workflow rather than awkward add-ons.
Understanding asyncio's timeout model
Unlike synchronous code where timeouts typically throw exceptions that terminate execution, asyncio's timeout mechanism properly cancels coroutines. This means resources get properly released, and your application can continue processing other tasks without accumulating zombie coroutines.
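A small sketch of this cancellation behavior (the sleep stands in for real work; the events list just records the order in which things happen):

```python
import asyncio

events = []

async def guarded_operation():
    try:
        await asyncio.sleep(10)  # stands in for a long-running operation
    except asyncio.CancelledError:
        # Runs when wait_for() cancels the coroutine on timeout:
        # close files, release locks, etc. here
        events.append("cancelled")
        raise  # re-raise so the cancellation propagates correctly

async def main():
    try:
        await asyncio.wait_for(guarded_operation(), timeout=0.1)
    except asyncio.TimeoutError:
        events.append("timed out")

asyncio.run(main())
print(events)  # the coroutine is cancelled before the timeout is reported
```

Note that the coroutine's CancelledError handler runs before wait_for() reports the TimeoutError, which is what guarantees cleanup happens promptly.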
The core timeout functions in asyncio are designed to work seamlessly with Python's async/await syntax:
- asyncio.wait_for(): The primary tool for timing out coroutines
- asyncio.timeout() (Python 3.11+): A context manager approach for timeouts
- Custom timeout patterns using asyncio.create_task() and asyncio.shield()
Using asyncio.wait_for()
The wait_for() function is the simplest way to add timeouts to async operations:
import asyncio

async def slow_operation():
    await asyncio.sleep(10)  # Simulating a slow network call
    return "Operation completed successfully"

async def main():
    try:
        # Automatically cancels the coroutine after 5 seconds
        result = await asyncio.wait_for(slow_operation(), timeout=5)
        print(result)
    except asyncio.TimeoutError:
        print("The operation timed out - it took longer than 5 seconds")

asyncio.run(main())
What makes wait_for() powerful is that it doesn't just passively wait for the timeout to expire; it actively cancels the underlying coroutine. This means system resources associated with the operation get released immediately rather than continuing to run in the background.
Behind the scenes, wait_for() wraps your coroutine in a task and monitors its progress. When the timeout expires, it sends a cancellation signal to the task, allowing the Python runtime to clean up resources properly.
Timeout context manager (Python 3.11+)
In newer Python versions, you can use the asyncio.timeout() context manager for more flexible timeout handling:
import asyncio

async def fetch_data():
    await asyncio.sleep(3)  # Simulating network request
    return "Data received"

async def process_data():
    await asyncio.sleep(2)  # Simulating processing
    return "Processing complete"

async def main():
    try:
        # Wrap the entire operation in a 6-second timeout
        async with asyncio.timeout(6):
            data = await fetch_data()
            result = await process_data()
            print(f"{data}, {result}")
    except TimeoutError:
        print("The combined operations timed out")

asyncio.run(main())
This approach is particularly useful when you need to apply a timeout to a sequence of async operations as a group rather than individually. The timeout applies to everything inside the context manager block, creating a cleaner way to express complex timeout logic.
Creating a timeout decorator for async functions
A reusable decorator pattern makes timeout handling consistent across your codebase:
import asyncio
import functools

def async_timeout(seconds):
    """Add a timeout to any async function."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            try:
                return await asyncio.wait_for(
                    func(*args, **kwargs),
                    timeout=seconds
                )
            except asyncio.TimeoutError:
                # You can customize the error handling here
                raise TimeoutError(f"Function '{func.__name__}' timed out after {seconds} seconds")
        return wrapper
    return decorator

# Usage
@async_timeout(2.5)
async def api_call(endpoint):
    await asyncio.sleep(3)  # This will time out
    return f"Data from {endpoint}"

# Later, inside an async function:
try:
    result = await api_call("/users")
except TimeoutError as e:
    print(e)  # Outputs: Function 'api_call' timed out after 2.5 seconds
This approach lets you define timeouts at the function definition level, making the timeout behavior a documented part of the function's contract rather than a hidden detail in the implementation.
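For ordinary synchronous functions there is no direct equivalent, but a similar effect can be sketched with concurrent.futures (the helper name here is illustrative; note that the worker thread is not killed on timeout, only abandoned):

```python
import concurrent.futures
import time

def call_with_timeout(func, timeout, *args, **kwargs):
    """Run a blocking function in a worker thread; stop waiting after `timeout` seconds."""
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = executor.submit(func, *args, **kwargs)
    try:
        return future.result(timeout=timeout)
    finally:
        # Don't block waiting for the (possibly still running) worker thread
        executor.shutdown(wait=False)

def slow():
    time.sleep(0.5)
    return "done"

try:
    call_with_timeout(slow, timeout=0.1)
except concurrent.futures.TimeoutError:
    print("function call timed out")
```

Because Python threads can't be forcibly terminated, this pattern limits how long you wait, not how long the work runs; that is another reason the async cancellation model is cleaner.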
Working with multiple operations under timeout
Sometimes you need to run multiple operations concurrently with a global timeout. The asyncio.gather()
function combined with wait_for()
handles this elegantly:
import asyncio

async def fetch_user(user_id):
    await asyncio.sleep(1)  # Simulating API call
    return f"User {user_id}"

async def fetch_profile(user_id):
    await asyncio.sleep(2)  # Simulating API call
    return f"Profile for user {user_id}"

async def fetch_user_data(user_id):
    # Run both operations concurrently with a 3-second total timeout
    try:
        user, profile = await asyncio.wait_for(
            asyncio.gather(
                fetch_user(user_id),
                fetch_profile(user_id)
            ),
            timeout=3
        )
        return {"user": user, "profile": profile}
    except asyncio.TimeoutError:
        return {"error": "Fetching user data timed out"}
This pattern is instrumental in API gateway scenarios where you're aggregating data from multiple backend services and need to enforce a total response time limit.
For a more sophisticated approach, you can implement timeouts with fallback values:
import asyncio

async def fetch_with_fallback(coroutine, timeout, default_value):
    """Run a coroutine with a timeout and return a default value if it times out."""
    try:
        return await asyncio.wait_for(coroutine, timeout=timeout)
    except asyncio.TimeoutError:
        return default_value

async def main():
    # Try to get fresh data, but fall back to cached data if it takes too long
    result = await fetch_with_fallback(
        fetch_fresh_data(),  # Might be slow
        timeout=1.5,
        default_value=get_cached_data()  # Guaranteed to be fast
    )
    print(f"Got result: {result}")
This pattern is particularly valuable for maintaining responsiveness in user-facing applications, where returning slightly stale data is preferable to making the user wait.
How to choose a timeout value
Selecting appropriate timeout values is as important as implementing timeouts correctly. Here are key considerations for choosing effective timeout durations:
Measure real-world performance
Base your timeout decisions on actual performance data:
- Use application performance monitoring (APM) tools to track response times
- Log timing information for critical operations
- Calculate percentiles (p95, p99) of operation durations
- A common guideline is to set timeouts at 2-3x the p99 response time
Consider user experience
For user-facing operations, your timeout strategy must align with user expectations:
- Interactive web requests: Users typically expect responses in 1-2 seconds
- Background operations: Can have longer timeouts since they don't directly impact users
- Critical operations: May need longer timeouts with appropriate user feedback
Account for network conditions
Adjust timeouts based on the network context of your application. Internal network calls within the same data center can use shorter timeouts (1-2s) as they typically have low latency and high reliability. Internet-facing calls need longer timeouts (5-10s) for network congestion and routing issues. Due to variable connectivity conditions, mobile network connections often require even longer timeouts (10-30s).
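One lightweight way to encode these tiers in code (the numbers are the rough guidelines above, not measured values; tune them against real data):

```python
# (connect, read) timeout pairs in seconds per network context,
# following the rough tiers above
TIMEOUT_TIERS = {
    "internal": (1, 2),    # same data center: low latency, high reliability
    "internet": (3, 10),   # internet-facing calls: allow for congestion
    "mobile": (5, 30),     # mobile clients: variable connectivity
}

def timeout_for(context):
    # Unknown contexts fall back to the most conservative tier
    return TIMEOUT_TIERS.get(context, TIMEOUT_TIERS["mobile"])

print(timeout_for("internal"))
```

Centralizing the values like this keeps timeout policy in one place instead of scattered as magic numbers across call sites.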
Balance resource utilization
Timeout settings directly impact resource management:
- Too short: Operations fail unnecessarily, creating poor user experiences
- Too long: Resources stay tied up, reducing concurrency
- Find a balance that maximizes successful completions while preventing resource exhaustion
Remember that timeout settings should be regularly reviewed and adjusted based on real-world performance data and changing requirements.
Final thoughts
This guide covered essential timeout techniques for HTTP requests, database operations, and function execution, along with strategies for selecting optimal timeout values.
You now know how to implement effective timeout mechanisms in your Python applications, keeping them responsive, resilient, and reliable despite network fluctuations, slow external services, or unexpected errors.
Thanks for reading and happy coding!