What's New in Python 3.14
After years of work, Python 3.14 is finally here with big changes that affect how you write and run Python code. This release brings powerful new features like deferred annotation evaluation, built-in support for subinterpreters, and template string literals. Python 3.14 is a major step forward, adding new ideas while still keeping compatibility with older code in mind.
The Python team has also made the free-threaded build officially supported, opening the door to a new way of handling concurrency in Python.
Before looking at what’s next, let’s go through the main updates in Python 3.14 and how they can improve your projects.
Let's dive in!
Deferred evaluation of annotations
Python 3.14 introduces a fundamental change to how type annotations behave by deferring their evaluation until actually needed. This improvement eliminates the performance overhead of eagerly evaluating annotations and removes the need for string quotes around forward references.
Before Python 3.14, annotations were evaluated eagerly when a function or class was defined, so a forward reference to a not-yet-defined name raised a NameError unless you quoted it or enabled the __future__ import:
# Python 3.13 and earlier
def process_user(user: User) -> dict:  # NameError: name 'User' is not defined
    return {"name": user.name}

class User:
    name: str

# Workarounds: quote the annotation as "User", or put
# from __future__ import annotations at the top of the module
With Python 3.14, annotations are stored in special __annotate__ functions and evaluated only when you actually inspect them. The runtime cost of defining annotations is now minimal:
# Python 3.14
def process_user(user: User) -> dict:
    return {"name": user.name}

class User:
    name: str
If User hasn't been defined yet, Python won't throw an error during function definition. The annotation is only evaluated when you explicitly request it through the annotationlib module. The new module provides three formats for retrieving annotations: VALUE (evaluates to runtime values), FORWARDREF (replaces undefined names with markers), and STRING (returns annotations as strings). This gives you precise control over when and how annotations are processed, dramatically improving startup performance for heavily annotated codebases.
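Here's a quick sketch of those formats in action; the version guard keeps it inert on releases before 3.14, and the function is just illustrative:

```python
import sys

# annotationlib ships only with Python 3.14+, so guard the whole sketch.
if sys.version_info >= (3, 14):
    from annotationlib import Format, get_annotations

    def process_user(user: User) -> dict:  # User is never defined anywhere
        return {"name": user.name}

    # STRING returns the raw annotation text without evaluating any names
    print(get_annotations(process_user, format=Format.STRING))

    # FORWARDREF swaps the undefined name for a ForwardRef marker
    print(get_annotations(process_user, format=Format.FORWARDREF))
```

Requesting VALUE format here would raise a NameError, since User never gets defined; the other two formats succeed regardless.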
Native support for multiple interpreters
Python 3.14 brings a capability that existed in CPython for over two decades into the mainstream by adding the concurrent.interpreters module. This feature exposes subinterpreters to Python code, enabling true multi-core parallelism without the limitations of the Global Interpreter Lock.
One transformative aspect is enabling genuine parallel execution on multiple CPU cores. In previous Python versions, the GIL prevented CPU-bound Python code from using multiple cores effectively:
# Traditional threading (limited by GIL)
import threading

def cpu_intensive_work(data):
    result = heavy_computation(data)  # heavy_computation defined elsewhere
    return result

threads = [threading.Thread(target=cpu_intensive_work, args=(d,))
           for d in datasets]
In Python 3.14, subinterpreters provide process-like isolation with thread-like efficiency:
# Python 3.14 with subinterpreters
from concurrent import interpreters

# Create a new subinterpreter
interp = interpreters.create()

def cpu_intensive_work(data):
    result = heavy_computation(data)
    return result

# Execute work in the subinterpreter
result = interp.call(cpu_intensive_work, dataset)

# Or run in a new thread for true parallelism
thread = interp.call_in_thread(cpu_intensive_work, dataset)
thread.join()
For a higher-level pool-based interface similar to ProcessPoolExecutor, use InterpreterPoolExecutor:
# Python 3.14 high-level API
from concurrent.futures import InterpreterPoolExecutor

def cpu_intensive_work(data):
    return heavy_computation(data)

with InterpreterPoolExecutor() as executor:
    results = list(executor.map(cpu_intensive_work, datasets))
Each subinterpreter operates independently with its own GIL, allowing CPU-intensive operations to run genuinely in parallel across multiple cores.
This architecture merges the isolation advantages of multiprocessing (distinct memory spaces, separate state) with the efficiency benefits of threading (minimal overhead, rapid initialization).
Consider subinterpreters as delivering process-level safety combined with thread-level performance, making Python viable for CPU-bound tasks that were previously challenging to parallelize effectively.
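As a minimal, version-guarded sketch of the low-level API (the computation is illustrative):

```python
import sys

# concurrent.interpreters is new in Python 3.14; earlier versions skip this.
if sys.version_info >= (3, 14):
    from concurrent import interpreters

    interp = interpreters.create()
    # Each interpreter gets its own isolated __main__ namespace and its own GIL,
    # so this assignment is invisible to the main interpreter.
    interp.exec("total = sum(i * i for i in range(1_000)); print(total)")
    interp.close()
```

Because the namespaces are isolated, passing data between interpreters goes through explicit channels such as call() arguments and return values rather than shared globals.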
Template string literals for safer string processing
Python 3.14 introduces template strings (t-strings), a new mechanism for custom string processing that addresses security concerns with string interpolation. Unlike f-strings that immediately evaluate and concatenate values, t-strings return structured objects that separate static text from dynamic content.
One significant use case is preventing injection attacks in SQL queries and HTML output. In previous Python versions, string formatting could inadvertently expose your application to security risks:
# Risky SQL construction
user_input = request.get('username')
query = f"SELECT * FROM users WHERE name = '{user_input}'"
db.execute(query) # Vulnerable to SQL injection
With Python 3.14's t-strings, you can build processing functions that properly escape user input. Template strings preserve the distinction between your code and user data:
# Python 3.14 template strings
from string.templatelib import Interpolation

def sql_safe(template):
    parts = []
    for part in template:
        if isinstance(part, Interpolation):
            # Properly escape user input
            parts.append(escape_sql(str(part.value)))
        else:
            parts.append(part)
    return ''.join(parts)

# escape_sql and request stand in for your escaping helper and web framework
username = request.get('username')
query = sql_safe(t"SELECT * FROM users WHERE name = '{username}'")
The t-string syntax looks familiar (using a t prefix instead of f), but the behavior differs fundamentally. Instead of producing a string immediately, it creates a Template object containing both static segments and Interpolation instances. This separation allows processing functions to treat user input differently from code-defined strings, making it straightforward to implement proper escaping, validation, or transformation before final string construction.
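Here's a self-contained sketch of that processing model. The quote-doubling escape is a deliberately naive stand-in for a real SQL escaper, and the literal is built with eval() only so the snippet still parses on pre-3.14 interpreters:

```python
import sys

# t-strings are new 3.14 syntax, so older parsers reject the literal outright.
if sys.version_info >= (3, 14):
    from string.templatelib import Interpolation, Template

    def render(template: Template) -> str:
        parts = []
        for part in template:  # static strings and Interpolations interleave
            if isinstance(part, Interpolation):
                # Naive escaping: double any single quote in the user value
                parts.append(str(part.value).replace("'", "''"))
            else:
                parts.append(part)
        return "".join(parts)

    name = "O'Brien"
    query = eval('t"SELECT * FROM users WHERE name = \'{name}\'"')
    print(render(query))
```

On 3.14 you would write the t-string literal directly; the point is that the dangerous quote in O'Brien is escaped by the processor, not trusted as code.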
Enhanced asyncio introspection capabilities
Python 3.14 delivers powerful new tools for understanding and debugging asynchronous code through built-in asyncio introspection. These capabilities address a common pain point: diagnosing stuck or misbehaving async applications in production environments.
The new command-line interface lets you inspect running Python processes to see exactly what async tasks are doing. Previously, debugging async code required adding extensive logging or attaching a full debugger:
# Difficult to debug in Python 3.13
async def process_orders():
    async with asyncio.TaskGroup() as group:
        for order in orders:
            group.create_task(process_single_order(order))
Python 3.14 provides two inspection commands that work on running processes without modifying your code. Use python -m asyncio ps PID to see a flat listing of all active tasks:
python -m asyncio ps 12345
This displays a detailed table showing task IDs, names, coroutine call stacks, and which tasks are waiting on others. For hierarchical visualization, use the pstree command to see how tasks relate to each other:
python -m asyncio pstree 12345
The tree view reveals the complete async call graph, making it immediately clear where execution is blocked. If task upload_file is waiting on compress_data, which is waiting on read_chunks, you'll see the entire chain. This visibility is invaluable when debugging production issues or optimizing task coordination. The introspection works by accessing internal asyncio state safely, without interrupting your application's execution.
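The CLI tools inspect a separate process, but you can sketch the same idea in-process with asyncio.all_tasks(); the task names and coroutine structure here are illustrative:

```python
import asyncio

async def read_chunks():
    await asyncio.sleep(0.01)

async def compress_data():
    await read_chunks()

async def main():
    # Named tasks show up under these names in ps/pstree output
    task = asyncio.create_task(compress_data(), name="upload_file")
    # all_tasks() is the in-process counterpart to the new CLI introspection
    names = {t.get_name() for t in asyncio.all_tasks()}
    await task
    return names

task_names = asyncio.run(main())
print(task_names)
```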
Safe external debugger interface
Python 3.14 introduces a zero-overhead debugging interface through PEP 768, allowing debuggers and profilers to attach to running Python processes without stopping or restarting them. This capability is crucial for investigating issues in production environments where restarts aren't feasible.
The traditional approach to debugging required either planning ahead with built-in debugging hooks or accepting downtime. With long-running services or critical applications, neither option was ideal:
# Old approach: debugging had to be planned
import pdb

def critical_service():
    # Must add breakpoints before problems occur
    pdb.set_trace()
    process_data()
Python 3.14's sys.remote_exec() function enables dynamic code injection into running processes at safe execution points. The debugger attaches without modifying the interpreter's execution path:
# Python 3.14 remote debugging
import sys
from tempfile import NamedTemporaryFile

# Create debugging script (debugger stands in for your inspection tooling)
with NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:
    script_path = f.name
    f.write('import debugger; debugger.inspect_state()')

# Execute in running process
sys.remote_exec(target_pid, script_path)
The interface includes multiple security controls: environment variables to disable it entirely, command-line options for fine-grained control, and compile-time flags for maximum security in sensitive deployments. Safe execution points ensure the debugger code runs only when it won't corrupt internal state or cause race conditions. This makes investigating memory leaks, performance bottlenecks, or stuck threads possible without service interruption.
Improved error messages and developer experience
Python 3.14 significantly enhances error messages to help developers identify and fix mistakes faster. The interpreter now provides contextual suggestions and clearer explanations when syntax errors occur.
When you mistype a Python keyword, the interpreter recognizes common mistakes and offers corrections. In previous versions, a simple typo resulted in a generic syntax error:
# Python 3.13
whille True:
    pass
# SyntaxError: invalid syntax
Python 3.14 analyzes the mistake and suggests the correct keyword. The error message actively helps you fix the problem:
# Python 3.14
whille True:
    pass
# SyntaxError: invalid syntax. Did you mean 'while'?
The improvements extend to structural errors as well. Placing an elif after an else block now produces a specific error message explaining why it's wrong, rather than a vague syntax error. Similarly, using statements instead of expressions in ternary operations generates targeted guidance about what's expected. String handling errors have also improved: if you forget to close a quote, Python suggests the misplaced text might be intended as part of the string rather than separate code.
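You can observe the message programmatically with compile(), which surfaces the same SyntaxError the interpreter would print; the "Did you mean" hint appears on 3.14, while older interpreters report a more generic error:

```python
# compile() raises the SyntaxError instead of printing it, so we can catch it.
try:
    compile("whille True:\n    pass", "<example>", "exec")
    message = None
except SyntaxError as exc:
    message = exc.msg

print(message)
```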
Incremental garbage collection for better performance
Python 3.14 restructures garbage collection to use an incremental approach, dramatically reducing pause times for applications with large memory footprints. This change addresses a long-standing issue where garbage collection could cause noticeable latency spikes.
The new system replaces Python's traditional three-generation collector with a two-generation model optimized for incremental collection. Instead of scanning the entire heap when collecting old objects, the collector processes the old generation in small increments:
# Garbage collection behavior changes
import gc
# Python 3.13: Three generations, stop-the-world collection
gc.collect(2) # Collects generation 2 (old objects)
# Python 3.14: Two generations, incremental collection
gc.collect(1) # Performs one increment of collection
For applications managing hundreds of megabytes or gigabytes of data, this translates to an order of magnitude improvement in pause times. A collection that might have frozen your application for 100ms now completes in 10ms by breaking the work across multiple increments. The young generation still uses fast collection since most objects die young, while the old generation gets scanned incrementally to avoid long pauses.
The API for configuring the collector has changed accordingly. The gc.get_threshold() function still returns three values for compatibility, but the third value is now always zero. The first value controls young collection frequency, while the second determines how aggressively the old generation is scanned. These changes happen transparently: your code continues working while benefiting from reduced latency.
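A quick way to see the compatibility shape of the API (on 3.14 the third value is always zero, while older versions report a real generation-2 threshold):

```python
import gc

# Still a three-item tuple on every version, for backward compatibility
young, old_rate, third = gc.get_threshold()
print(young, old_rate, third)
```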
Syntax highlighting in the default REPL
Python 3.14 brings syntax highlighting to the default interactive shell, making the REPL more readable and reducing cognitive load during exploratory programming. This feature activates automatically when Python detects a color-capable terminal.
The implementation uses the 4-bit VGA standard ANSI color codes for maximum compatibility across different terminal emulators. Keywords appear in one color, strings in another, and so forth, matching what developers expect from their code editors:
# Python REPL with syntax highlighting
>>> def calculate_total(items):
...     return sum(item.price for item in items)
Colors distinguish different syntactic elements instantly, making it easier to spot typos or structural issues while typing. The highlighting updates in real time as you write code, providing immediate visual feedback. If syntax highlighting interferes with your workflow, disable it by setting the PYTHON_BASIC_REPL environment variable or through other color control variables.
The shell also gained import autocompletion, meaning typing import co and pressing Tab suggests modules starting with those letters. Similarly, from concurrent import i suggests submodules beginning with i. This works through runtime introspection of available modules, making package discovery more intuitive. The combination of syntax highlighting and autocompletion significantly enhances the interactive programming experience.
Stricter handling of path route matching
Python 3.14 tightens route parameter behavior in web frameworks and path matching libraries by upgrading the underlying path-to-regexp implementation from version 0.x to 8.x. These changes improve security and eliminate ambiguous pattern matching.
One critical improvement addresses Regular Expression Denial of Service (ReDoS) vulnerabilities. Previous versions allowed inline regex patterns that could cause exponential backtracking:
# Vulnerable pattern (removed in Python 3.14)
@app.route('/user/:id(\\d+)')
def get_user(id):
    return f"User {id}"
Python 3.14 removes support for these sub-expression patterns entirely. For complex validation, use dedicated input validation libraries after extracting parameters. This separates concerns and prevents regex complexity from affecting routing performance.
Wildcard handling has also changed to require explicit naming. Where * previously matched anything implicitly, you now specify capture behavior explicitly:
# Python 3.13 implicit wildcard
@app.route('/user*')
# Python 3.14 explicit wildcard
@app.route('/user(.*)')
Optional parameters use a new syntax with curly braces instead of the question mark suffix. This makes optionality visually distinct and prevents parser confusion:
# Python 3.13 optional parameter
@app.route('/user/:id?')
# Python 3.14 optional parameter
@app.route('/user{/:id}')
The changes include additional reserved characters and requirements for naming all parameters explicitly. These restrictions make routing code more predictable and maintainable while closing security vulnerabilities.
Free-threaded Python becomes officially supported
Python 3.14 marks the transition of free-threaded Python from experimental to officially supported, implementing phase II of PEP 703's roadmap. This build option removes the Global Interpreter Lock, enabling true multi-threaded parallelism for CPU-bound code.
The implementation includes substantial performance improvements over the initial Python 3.13 release. Free-threaded mode now enables the specializing adaptive interpreter (PEP 659), incorporating optimizations that were previously disabled. The performance penalty for single-threaded code has dropped from 40% to just 5-10%, making free-threaded builds viable for more use cases.
To support concurrent execution safely, Python 3.14 introduces context-aware warnings through the -X context_aware_warnings flag, which defaults to enabled in free-threaded builds. This ensures warning filters work predictably when multiple threads or async tasks use catch_warnings simultaneously. A new thread_inherit_context flag makes threads inherit the calling context's Context(), allowing warning filters and decimal settings to propagate to child threads naturally.
Building extensions for free-threaded Python requires specifying the Py_GIL_DISABLED preprocessor variable explicitly on Windows. The setting used at compile time can be verified through sysconfig.get_config_var(). While the free-threaded build is now officially supported and won't be removed without proper deprecation, the decision to make it the default remains future work depending on ecosystem readiness and performance characteristics.
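If you need to know at runtime whether you're on a free-threaded build, this sketch combines the compile-time flag with the 3.13+ runtime check (the getattr fallback assumes a GIL on older versions):

```python
import sys
import sysconfig

# 1 on free-threaded builds, 0 or None on standard builds
build_flag = sysconfig.get_config_var("Py_GIL_DISABLED")

# Runtime check: the GIL can be re-enabled even on a free-threaded build,
# e.g. by an extension module that doesn't declare free-threading support.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()

print(f"free-threaded build: {bool(build_flag)}, GIL enabled: {gil_enabled}")
```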
Zstandard compression support
Python 3.14 adds comprehensive support for the Zstandard compression format through a new compression.zstd module. Zstandard offers superior compression ratios and speed compared to older algorithms, making it ideal for modern applications handling large data volumes.
The new module provides APIs consistent with existing compression modules like lzma and bz2, simplifying adoption for developers familiar with Python's compression interfaces:
# Python 3.14 Zstandard compression
from compression import zstd
import math
# Compress data
data = str(math.pi).encode() * 20
compressed = zstd.compress(data)
ratio = len(compressed) / len(data)
print(f"Compression ratio: {ratio:.2%}")
Integration extends beyond the core module: tarfile, zipfile, and shutil all gained Zstandard support for reading and writing compressed archives. This unified support means you can use Zstandard anywhere Python handles compressed data. The Windows builds include zlib-ng as the implementation, providing better performance across all compression levels while maintaining compatibility with standard zlib.
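Here's a round-trip sketch; the import guard keeps it harmless on interpreters older than 3.14:

```python
# compression.zstd exists only on Python 3.14+
try:
    from compression import zstd
except ImportError:
    zstd = None

payload = b"Zstandard compresses repetitive payloads well. " * 50

if zstd is not None:
    packed = zstd.compress(payload)
    # Decompression must reproduce the input exactly
    assert zstd.decompress(packed) == payload
    print(f"{len(payload)} bytes -> {len(packed)} bytes")
```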
The compression package also reorganizes Python's compression modules, providing compression.lzma, compression.bz2, compression.gzip, and compression.zlib as the preferred import paths. Existing module names remain available without deprecation, ensuring backward compatibility. Any future deprecation won't occur sooner than five years after Python 3.14's release, giving ample migration time.
Enhanced pathlib operations
Python 3.14 expands pathlib with methods for recursive copying and moving of files and directories, eliminating the need to drop down to shutil for these common operations.
The new methods follow intuitive naming patterns that distinguish between operations. copy() and move() work with explicit destination paths, while copy_into() and move_into() place items into destination directories:
# Python 3.14 pathlib copy and move operations
from pathlib import Path
source = Path('/data/logs')
backup = Path('/backup/logs')
# Copy directory tree to explicit destination
source.copy(backup)
# Copy into existing directory
source.copy_into(Path('/backup'))
# Move to new location
source.move(Path('/archive/old_logs'))
These operations handle complex scenarios correctly, including permission preservation, symlink handling, and error recovery. The methods integrate seamlessly with pathlib's existing API, maintaining the fluent programming style that makes pathlib attractive.
Another addition is the info attribute, which provides access to cached file type information. When you iterate a directory with iterdir(), Python now captures file type data during the directory scan itself, avoiding redundant system calls when you subsequently check whether items are files, directories, or symlinks. This optimization significantly improves performance when working with large directory structures.
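A small sketch of the copy methods using feature detection, so it simply does nothing on versions without them:

```python
import tempfile
from pathlib import Path

# Path.copy_into() is new in 3.14; hasattr() keeps this a no-op elsewhere
copied_text = None
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "logs"
    src.mkdir()
    (src / "a.txt").write_text("hello")
    backup = Path(tmp) / "backup"
    backup.mkdir()
    if hasattr(Path, "copy_into"):
        copied = src.copy_into(backup)  # returns the path of the new copy
        copied_text = (copied / "a.txt").read_text()

print(copied_text)
```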
Removed and deprecated features
Python 3.14 removes several long-deprecated features and deprecates others to streamline the language and standard library. If you're upgrading, reviewing these changes ensures a smooth transition.
asyncio.get_event_loop() no longer creates loops implicitly
The asyncio.get_event_loop() function now raises RuntimeError if no event loop exists, ending years of confusing implicit loop creation. This change forces explicit loop management, eliminating subtle bugs from automatic loop creation:
# Python 3.13 (created loop implicitly)
loop = asyncio.get_event_loop()
loop.run_until_complete(async_function())
# Python 3.14 (use asyncio.run instead)
asyncio.run(async_function())
For the common pattern of running async code, simply use asyncio.run(). If you need more control, such as running multiple async functions with blocking code between them, use asyncio.Runner:
# Python 3.14 Runner for complex scenarios
with asyncio.Runner() as runner:
    runner.run(operation_one())
    blocking_code()
    runner.run(operation_two())
multiprocessing changes default start method
On Unix platforms except macOS, multiprocessing now defaults to the forkserver start method instead of fork. This change improves stability and safety with modern multi-threaded applications:
# Python 3.14 explicit fork if needed
import multiprocessing as mp

if __name__ == '__main__':
    ctx = mp.get_context('fork')  # Explicitly request fork
    with ctx.Pool() as pool:
        results = pool.map(worker_function, data)
The fork method copies all memory state to child processes, including thread state, which can cause deadlocks and corruption. The forkserver method launches a clean server process that then forks workers, avoiding these issues. Most code works unchanged with forkserver, but if you rely on inherited state or non-picklable objects, you'll need adjustments.
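To see what your platform defaults to, and to opt into a specific method per-context rather than globally (no worker processes are spawned here):

```python
import multiprocessing as mp

# The platform default: "forkserver" on most Unixes in 3.14, "spawn" on
# macOS and Windows, "fork" on older Unix versions of Python
method = mp.get_start_method()
print(method)

# Contexts request a method locally without touching the global default
ctx = mp.get_context("spawn")  # "spawn" is available on every platform
print(ctx.get_start_method())
```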
types.UnionType and typing.Union unification
Python 3.14 unifies old-style unions (Union[int, str]) and new-style unions (int | str) into a single type. Both spellings now produce types.UnionType instances that compare equal:
# Python 3.14 union unification
from typing import Union
# These are now equivalent
old_style = Union[int, str]
new_style = int | str
# Same representation
repr(old_style) # 'int | str'
repr(new_style) # 'int | str'
# Same type
type(old_style) # <class 'types.UnionType'>
type(new_style) # <class 'types.UnionType'>
This unification simplifies introspection but removes union caching: repeatedly creating the same union returns distinct objects, so use == for comparison instead of is. The change primarily affects code doing deep type introspection; typical usage remains unaffected.
Should you upgrade to Python 3.14?
If you're using Python 3.12 or newer, upgrading to Python 3.14 provides significant benefits with manageable migration effort. The performance improvements, enhanced debugging capabilities, and modernized annotation handling make it a compelling upgrade for most projects.
Before upgrading, review the official migration guide and run your comprehensive test suite to catch potential issues. Pay special attention to code using annotations, asyncio event loops, or multiprocessing, as these areas saw substantial changes.
Verify your Python version meets the minimum requirement by running:
python --version
Python 3.14.0
Using pyenv or a similar version manager simplifies testing upgrades without affecting your system Python:
pyenv install 3.14.0
pyenv shell 3.14.0
Once ready, update your project dependencies and run your test suite. Address any deprecation warnings that appear, as features triggering warnings today will likely be removed in Python 3.15 or 3.16. Update routing patterns if using web frameworks, replace deprecated asyncio patterns, and verify multiprocessing code works with the new default start method.
Final thoughts
Python 3.14 is a big update that brings in features developers have been waiting on for years. Deferred annotations make type hints easier to use, subinterpreters allow real parallel work, and template strings give a safe and flexible way to handle text. On top of that, the new officially supported free-threaded build opens the door to a whole new way of writing Python programs that can run tasks at the same time.
If you want to learn more or see how these changes might affect your code, check out the official guides: the Python 3.14 Release Notes and the full Python 3.14 Documentation.
Happy coding!