
An Introduction to Python Subprocess

Stanley Ulili
Updated on April 8, 2025

Python's subprocess module lets you run other programs directly from your Python code. It replaced older approaches like os.system and os.spawn* and works consistently across Windows, macOS, and Linux. Many developers rely on subprocess for DevOps tools, system utilities, and application wrappers because it's reliable and handles input/output streams well.

This article will show you how to use subprocess effectively in your Python applications. You'll learn to run external commands, communicate with other programs, and properly handle errors.

Prerequisites

Before starting this tutorial, you should have:

  • Basic Python programming knowledge
  • Python 3.8 or newer installed
  • Familiarity with basic command-line concepts

Getting started with subprocess

To get the most out of this tutorial, let's create a new Python project to try out the concepts we'll discuss.

Start by creating a new directory for the project and navigate to it:

 
mkdir python-subprocess && cd python-subprocess

The subprocess module provides several functions for creating and interacting with subprocesses, with run() and Popen() being the most commonly used.

Let's start with the simplest example using the high-level run() function, which was introduced in Python 3.5.

Create a new file called app.py in the project directory:

app.py
import subprocess

result = subprocess.run(['echo', 'Hello, subprocess!'])
print(f"Return code: {result.returncode}")

Run the file using the following command:

 
python app.py

You'll see the following output:

Output
Hello, subprocess!
Return code: 0

This example shows how subprocess.run() executes a command and returns a CompletedProcess object that includes the command's exit code. This return code is useful for understanding the outcome of the command—especially with tools like grep (a short example follows the list below), where:

  • 0 means a match was found,
  • 1 means no match was found,
  • 2 or higher indicates an error occurred.
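
For instance, here is a minimal sketch of branching on that return code. The file name app.log is hypothetical; substitute any file you want to search:

import subprocess

# grep exits with 0 when it finds a match, 1 when it doesn't, and 2+ on errors.
# "app.log" is a hypothetical file used only for illustration.
result = subprocess.run(['grep', 'error', 'app.log'])

if result.returncode == 0:
    print("Match found")
elif result.returncode == 1:
    print("No match found")
else:
    print(f"grep failed with code {result.returncode}")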

With the basics in place, let’s look at how to actually capture the output of a command.

Capturing output from subprocesses

In many cases, you won’t just want to run a command—you’ll want to capture and use its output in your Python code. For example, maybe you're listing files, checking a process status, or parsing command-line output for further logic.

Python’s subprocess.run() makes this easy with the capture_output parameter, which tells Python to store the command’s stdout and stderr so you can access them directly:

app.py
import subprocess

result = subprocess.run(['ls', '-la'], capture_output=True, text=True)
print("Command output:")
print(result.stdout)


This code runs the ls -la command to list files in detail. Setting capture_output=True grabs both standard output and error, while text=True converts them to strings instead of bytes. You can then access the output through result.stdout.

Run it:

 
python app.py

You'll see something like:

Output
Command output:
total 8
drwxr-xr-x@ 3 stanley  group   96 Apr  8 15:34 .
drwxr-xr-x@ 4 stanley  group  128 Apr  8 15:33 ..
-rw-r--r--@ 1 stanley  group  136 Apr  8 15:44 app.py

When you run this, you'll see the output of the ls -la command captured in the stdout attribute of the CompletedProcess object. The text=True parameter ensures that the output is decoded to a string instead of being returned as bytes.
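
If you leave out text=True, the captured streams come back as bytes objects instead, and you have to decode them yourself. A quick sketch of the difference:

import subprocess

# Without text=True, stdout and stderr are bytes, not strings.
result = subprocess.run(['echo', 'raw output'], capture_output=True)
print(result.stdout)           # b'raw output\n'
print(result.stdout.decode())  # decoded to a regular string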

Now that you’ve seen how to run commands and capture their output, let’s explore how to handle cases where those commands fail.

Handling command errors

External commands can fail, and your Python code should handle these failures gracefully. By default, subprocess.run() doesn't raise an exception if the command returns a non-zero exit code.

To change this behavior, use the check parameter:

app.py
import subprocess

try:
    result = subprocess.run(['ls', '/nonexistent'], check=True, capture_output=True, text=True)
except subprocess.CalledProcessError as e:
    print(f"Command failed with return code {e.returncode}")
    print(f"Error output: {e.stderr}")

In this code, check=True tells Python to raise an exception if the command fails. Since /nonexistent doesn’t exist, ls triggers a CalledProcessError, which is caught and handled to show the return code and error message.

Run the file:

 
python app.py

When the above code runs, the ls command will fail because the directory /nonexistent doesn't exist. This will raise a subprocess.CalledProcessError exception that we catch and handle:

Output
Command failed with return code 1
Error output: ls: /nonexistent: No such file or directory

This pattern gives you a clean way to catch and handle command failures gracefully.
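
If raising an exception feels too heavy for your use case, you can skip check=True and inspect returncode yourself. A minimal sketch of that alternative:

import subprocess

result = subprocess.run(['ls', '/nonexistent'], capture_output=True, text=True)

if result.returncode != 0:
    # Handle the failure without raising an exception.
    print(f"Command failed with return code {result.returncode}")
    print(f"Error output: {result.stderr}")
else:
    print(result.stdout)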

Providing input to subprocesses

Some commands expect input from standard input (stdin)—for example, tools like grep, sort, or cat. With subprocess.run(), you can pass input directly from your Python code using the input parameter.

This is useful when you want to avoid writing to a temporary file or when the data you want to process is already in memory.

Here’s a simple example:

app.py
import subprocess

text_to_process = "Hello, world!\nThis is a test."
result = subprocess.run(
    ["grep", "test"], input=text_to_process, capture_output=True, text=True
)
print("Matching lines:")
print(result.stdout)

This code passes a string to grep via standard input using the input parameter. It filters the lines and returns only those that match "test".

When text=True, the input must be a string—making it easy to work with in-memory data without writing to files.

Run the file:

 
python app.py

You'll see the following output:

Output
Matching lines:
This is a test.
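
As a counterpart to the string-based input above, if you omit text=True the input parameter expects bytes, and the output comes back as bytes too. A minimal sketch:

import subprocess

# Without text=True, input must be bytes and stdout is returned as bytes.
data = b"Hello, world!\nThis is a test.\n"
result = subprocess.run(['grep', 'test'], input=data, capture_output=True)
print(result.stdout.decode())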

Now that you’ve seen how to pass input and capture output, let’s look at how you can structure the commands themselves.

Shell commands vs. command lists

The subprocess module supports two ways of specifying commands: as a list of arguments or as a shell command string. Let's compare the two approaches.

Update app.py with the following code:

app.py
import subprocess

# Using a command list (recommended)
print("Using command list:")
result1 = subprocess.run(['echo', 'Hello, world!'], capture_output=True, text=True)
print(result1.stdout)

# Using a shell command (with shell=True)
print("\nUsing shell command:")
result2 = subprocess.run('echo Hello, world!', shell=True, capture_output=True, text=True)
print(result2.stdout)

This code shows two ways to run a command with subprocess. The first uses a list of arguments, which is the preferred method—it’s safer and avoids shell interpretation. The second uses a single string and sets shell=True, which runs the command through the system shell.

While both produce the same result here, shell=True can introduce security risks if the command includes user input. The command list approach is safer because it treats each argument literally, preventing shell injection.

Run the file:

 
python app.py

You'll see the following output:

Output
Using command list:
Hello, world!


Using shell command:
Hello, world!

Both methods work, but as mentioned earlier, the command list approach is safer and avoids potential security issues—especially when working with user input.

To see why this matters, let’s look at a common mistake.

Create a new file called security_risk.py and add the following code:

security_risk.py
import subprocess

# DANGEROUS: Never do this with user input
user_input = 'file.txt; echo SECURITY BREACH'
print("Running dangerous command with shell=True:")
result = subprocess.run(f'cat {user_input}', shell=True, capture_output=True, text=True)
print("Command output:")
print(result.stdout)

This code shows a classic shell injection vulnerability. The intention is to display the contents of a file, but because the command is passed as a string with shell=True, the semicolon is interpreted as a command separator—and both cat and echo are executed.

Run the file:

 
python security_risk.py

You'll see output similar to:

Output
Running dangerous command with shell=True:
Command output:
SECURITY BREACH

Even though the file doesn’t exist, the second command (echo SECURITY BREACH) still runs. That’s because the shell interprets the semicolon as a command separator. In a real-world scenario, an attacker could use this to execute harmful commands on your system.

To prevent this, use a safer approach—pass arguments as a list instead of a shell string:

Create a file called safe_approach.py and add the following:

safe_approach.py
import subprocess

# SAFE: Use this approach, especially with user input
user_input = 'file.txt'
print("Running safe command with argument list:")
try:
    result = subprocess.run(['cat', user_input], capture_output=True, text=True)
    print("Command output:")
    print(result.stdout if result.stdout else "(No output - file probably doesn't exist)")
except Exception as e:
    print(f"Error: {e}")

In this code, you're using a list to pass the command and its arguments, which avoids shell interpretation. Even if the input includes special characters, they’ll be treated as plain text rather than executable commands.

Run the file:

 
python safe_approach.py

You might see:

Output
Running safe command with argument list:
Command output:
(No output - file probably doesn't exist)


This approach is much safer—especially when working with user input—because it prevents shell injection by keeping the command arguments isolated and literal.
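
If you genuinely need shell features such as pipes or redirection with a user-supplied value, one option is to escape that value with the standard library's shlex.quote before building the command string. This is a hedged sketch, not a replacement for the argument-list approach, which remains the safer default:

import shlex
import subprocess

user_input = 'file.txt; echo SECURITY BREACH'

# shlex.quote escapes the value so the shell treats it as one literal argument.
safe_arg = shlex.quote(user_input)
result = subprocess.run(f'cat {safe_arg}', shell=True, capture_output=True, text=True)
print(result.stderr)  # cat reports a missing file; the injected echo never runs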

Using Popen for advanced process control

While subprocess.run() is convenient for most use cases, the subprocess.Popen class provides more control over process execution. It allows you to:

  • Start a process without waiting for it to complete
  • Communicate with a process while it's running
  • Control input and output streams independently
  • Manage process timeouts and signals

Here's a basic example of using Popen. Update the app.py file with:

app.py
import subprocess
import time

print("Starting a process...")
# Start a process
process = subprocess.Popen(
    [
        "python",
        "-c",
        'import time; print("Hello from a subprocess!"); time.sleep(2); print("Subprocess finished!")',
    ]
)

print("Process started, now we can do other work...")
# Do other work here while process runs
for i in range(3):
    print(f"Main program: doing work {i+1}/3")
    time.sleep(0.5)

# Wait for the process to complete
print("Waiting for subprocess to finish...")
process.wait()
print(f"Process completed with return code: {process.returncode}")

In this code, you start a subprocess that runs a small inline Python script. While it sleeps for two seconds, the main program continues doing its own work. Only after that do we wait for the subprocess to finish using process.wait().

Run the file:

 
python app.py

You'll see output like:

Output
Starting a process...
Hello from a subprocess!
Process started, now we can do other work...
Main program: doing work 1/3
Main program: doing work 2/3
Main program: doing work 3/3
Waiting for subprocess to finish...
Subprocess finished!
Process completed with return code: 0

This example shows how Popen gives you more flexibility than run(). The subprocess begins running immediately, and your main Python program continues executing in parallel.

This is useful for non-blocking tasks like launching a background service, monitoring logs, or running multiple processes simultaneously.

Once your main code finishes, process.wait() pauses execution until the subprocess completes. You can also access the return code afterward to confirm that everything ran successfully.
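
Popen also pairs well with communicate(), which sends optional input, collects the output streams, and accepts a timeout so a runaway subprocess can't hang your program indefinitely. Here's a minimal sketch, assuming a short-lived inline script:

import subprocess

process = subprocess.Popen(
    ['python', '-c', 'import time; print("working..."); time.sleep(1)'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)

try:
    # Wait up to 5 seconds for the process to finish and collect its output.
    stdout, stderr = process.communicate(timeout=5)
    print(stdout)
except subprocess.TimeoutExpired:
    # The process took too long: kill it, then collect whatever it produced.
    process.kill()
    stdout, stderr = process.communicate()
    print("Process timed out and was killed")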

Environment variables and working directories

In some cases, you may want to run a subprocess with a custom environment or from a specific directory. The subprocess.run() function supports this through the env and cwd parameters.

Update your app.py file with the following:

app.py
import subprocess
import os

# Create a custom environment
env = os.environ.copy()  # Start with the current environment
env['CUSTOM_VAR'] = 'value'  # Add or modify variables

# Run a command with the custom environment
result = subprocess.run('echo $CUSTOM_VAR', shell=True, env=env, capture_output=True, text=True)
print(result.stdout)

# Specify a working directory
result = subprocess.run(['ls', '-la'], cwd='/tmp', capture_output=True, text=True)
print(result.stdout)

Here’s what’s happening:

  • env defines a custom set of environment variables passed to the subprocess. In this example, we add CUSTOM_VAR and print it using echo.
  • cwd changes the working directory for the subprocess. Here, we list the contents of /tmp.

Run the script:

 
python app.py
Output
value

total 8
drwxrwxrwt  12 root     wheel  384 Apr  8 15:09 .
drwxr-xr-x   6 root     wheel  192 Apr  8 07:32 ..
srwxrwxrwx@  1 stanley  wheel    0 Apr  8 16:15 .s.PGSQL.5432
-rw-------@  1 stanley  wheel   57 Apr  8 16:15 .s.PGSQL.5432.lock
-rw-r--r--@  1 stanley  wheel    0 Apr  8 08:15 MozillaUpdateLock-31210A081F86E80E
srwx------   1 root     wheel    0 Apr  8 07:32 SERVERENGINE_SOCKETMANAGER_2025-04-08T05:32:56Z_554
srwxr-xr-x   1 stanley  wheel    0 Apr  8 13:48 com.adobe.acrobat.rna.0.1f5.DC
srwxr-xr-x   1 stanley  wheel    0 Apr  8 13:48 com.adobe.acrobat.rna.12588.1f5
-rw-------   1 stanley  wheel    0 Apr  8 13:48 com.adobe.acrobat.rna.AcroCefBrowserLock.DC
drwx------   3 stanley  wheel   96 Apr  8 07:32 com.apple.launchd.94kuOS1ebp
srwx------@  1 stanley  wheel    0 Apr  8 07:32 mongodb-27017.sock
drwxr-xr-x   2 root     wheel   64 Apr  8 07:32 powerlog

You'll see the value of CUSTOM_VAR printed first (the shell expands $CUSTOM_VAR at runtime), followed by a directory listing from /tmp.

This approach is useful when isolating subprocesses, adjusting environment settings, or running tools that depend on a specific working directory.
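
If you want the subprocess to see only the variables you specify, rather than inheriting everything from the parent process, you can pass a small dictionary instead of a copy of os.environ. A hedged sketch for a typical Unix-like system (the PATH value here is an assumption; adjust it for your platform):

import subprocess

# A deliberately minimal environment: only PATH and one custom variable.
minimal_env = {
    'PATH': '/usr/bin:/bin',
    'CUSTOM_VAR': 'value',
}

result = subprocess.run('echo $CUSTOM_VAR', shell=True, env=minimal_env,
                        capture_output=True, text=True)
print(result.stdout)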

Final thoughts

In this guide, you learned how to use Python’s subprocess module to run external commands, capture output, handle errors, pass input, and manage processes with precision. From simple run() calls to advanced use of Popen, you now have the tools to integrate Python with any command-line utility.

Whether you're building dev tools, automating tasks, or running system commands, subprocess gives you the control and flexibility you need—just remember to handle user input safely and avoid unnecessary use of shell=True.

If you’re ready to go further:

  • Try async process handling with asyncio.create_subprocess_exec() (a short sketch follows this list)
  • Look into alternatives like sh or plumbum for higher-level APIs
  • Add logging or timeouts to make subprocess usage more robust
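
As a starting point for the first item above, here's a minimal, hedged sketch of running a command asynchronously without blocking the event loop:

import asyncio

async def main():
    # Start the process asynchronously and capture its standard output.
    proc = await asyncio.create_subprocess_exec(
        'echo', 'Hello from asyncio!',
        stdout=asyncio.subprocess.PIPE,
    )
    stdout, _ = await proc.communicate()
    print(stdout.decode().strip())
    print(f"Return code: {proc.returncode}")

asyncio.run(main())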

Thanks for following along—happy scripting!
