
MinIO: S3-compatible object storage that runs locally

Stanley Ulili
Updated on April 12, 2026

MinIO is an open-source object storage server that implements the Amazon S3 API. It runs in a single Docker container, accepts connections from any S3-compatible SDK or CLI tool, and serves a browser-based console on a separate port. Code written against boto3, the AWS SDKs, or any other S3 client connects to MinIO by changing only the endpoint URL and credentials.

Graphic illustrating the three primary challenges MinIO addresses: unpredictable cloud bills, data distance causing latency, and heavy workloads like AI and ML

Why run object storage locally

Cloud object storage bills are not limited to stored bytes: they also include egress fees, per-request charges, and data transfer costs. During development, CI/CD pipelines and repeated test runs accumulate real costs. A local MinIO instance eliminates these entirely.

Network round-trips to a remote S3 bucket add latency to every storage operation. On a laptop with files stored in a distant region, this makes storage-heavy development feel slow. MinIO reduces that latency to near-zero.

For AI and machine learning workloads that read large datasets repeatedly during training, remote storage becomes a throughput bottleneck. Local high-performance storage removes that constraint.

Running MinIO with Docker

Starting the container

 
docker run -p 9000:9000 -p 9001:9001 \
  --name minio-dev \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin" \
  minio/minio server /data --console-address ":9001"

Port 9000 serves the S3 API endpoint and port 9001 serves the web console. MINIO_ROOT_USER and MINIO_ROOT_PASSWORD set the access key and secret key for the root user. The --console-address flag pins the web console to port 9001; without it, MinIO picks a random console port on each start, which would not match the -p 9001:9001 mapping. Note that this command keeps objects in /data inside the container, so they are lost when the container is removed; mount a volume (for example -v minio-data:/data) to persist them.

Docker Desktop interface showing the minio-dev container in a running state
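MinIO also exposes an unauthenticated liveness probe at /minio/health/live on the API port, which is handy for readiness checks in scripts and CI. A small Python sketch, assuming the container above is running on localhost:

```python
import urllib.request

# MinIO's liveness endpoint lives on the S3 API port (9000), not the console port.
HEALTH_PATH = "/minio/health/live"


def health_url(endpoint: str) -> str:
    # Normalize a trailing slash so the path joins cleanly.
    return endpoint.rstrip("/") + HEALTH_PATH


def is_live(endpoint: str = "http://localhost:9000", timeout: float = 2.0) -> bool:
    # A 200 response means the server process is up and serving requests.
    try:
        with urllib.request.urlopen(health_url(endpoint), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout: treat as not live.
        return False


if __name__ == "__main__":
    print("MinIO is live" if is_live() else "MinIO is not reachable")
```

Because the probe requires no credentials, it works before any alias or bucket has been configured.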

Installing the MinIO client

MinIO's command-line client mc provides commands for managing buckets and objects:

On macOS with Homebrew:

 
brew install minio/stable/mc

On Linux:

 
wget https://dl.min.io/client/mc/release/linux-amd64/mc
 
chmod +x mc
 
sudo mv mc /usr/local/bin/mc

On Windows with Scoop:

 
scoop install mc

Configuring the client

An alias maps a friendly name to a server's address and credentials:

 
mc alias set mylocal http://localhost:9000 minioadmin minioadmin

Terminal showing the successful mc alias set command confirming the connection to the local MinIO instance

All subsequent mc commands use mylocal to reference this server.

Basic operations

Creating a bucket

 
mc mb mylocal/demo-bucket

Uploading files

Object storage has no real directories. Path segments in the destination, such as text/ and data/ below, become key prefixes (virtual folders) and do not need to be created first:

 
mc cp demo-file.txt mylocal/demo-bucket/text/
 
mc cp test.json mylocal/demo-bucket/data/

Listing objects

 
mc ls --recursive mylocal/demo-bucket
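The folders that mc ls --recursive and the console display are not directories at all: object keys are flat strings, and anything before a / is just a shared prefix. A minimal illustration in plain Python, no server required:

```python
# Object keys are flat strings; "folders" are just shared prefixes.
keys = [
    "text/demo-file.txt",
    "data/test.json",
]


def top_level_prefixes(keys: list[str]) -> list[str]:
    # Reproduce what the console renders as folders: the segment
    # before the first "/" in each key.
    return sorted({key.split("/", 1)[0] for key in keys if "/" in key})


print(top_level_prefixes(keys))  # ['data', 'text']
```

This is why deleting the last object under a prefix makes the "folder" disappear: there was never a folder object to begin with.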

Web console

The console at http://localhost:9001 provides the same operations through a browser interface. After logging in with the configured credentials, the Object Browser shows all buckets and their contents with drag-and-drop upload support and inline file preview.

MinIO Object Browser interface displaying the demo-bucket and its internal folder structure with data, images, and text folders

Integrating with Python using boto3

MinIO's S3 API compatibility means the standard AWS SDK works against it unchanged. The only difference from an AWS S3 connection is the endpoint_url parameter.

Install boto3:

 
pip install boto3

main.py
import boto3
import os

ENDPOINT_URL = 'http://localhost:9000'
ACCESS_KEY = 'minioadmin'
SECRET_KEY = 'minioadmin'
BUCKET_NAME = 'demo-bucket'
REGION_NAME = 'us-east-1'  # any string is valid for MinIO

s3_client = boto3.client(
    's3',
    endpoint_url=ENDPOINT_URL,
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=REGION_NAME
)

files_to_upload = {
    'demo-file.txt': 'text/demo-file.txt',
    'test.json': 'data/test.json',
}

for local_file, object_name in files_to_upload.items():
    if os.path.exists(local_file):
        s3_client.upload_file(local_file, BUCKET_NAME, object_name)
        print(f"Uploaded {local_file} to {BUCKET_NAME}/{object_name}")

response = s3_client.list_objects_v2(Bucket=BUCKET_NAME)
if 'Contents' in response:
    for obj in response['Contents']:
        print(f"{obj['Key']} ({obj['Size']} bytes)")

Python script in a code editor highlighting the import boto3 line demonstrating use of the standard AWS SDK to connect to MinIO

 
Run the script:

python main.py

To point the same code at AWS S3 in production, remove endpoint_url and replace the credentials with real AWS keys. No other changes are needed.
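One way to make that switch automatic is to derive the client configuration from the environment. The variable names below (MINIO_ENDPOINT, S3_ACCESS_KEY, and so on) are illustrative choices for this sketch, not a MinIO or AWS convention:

```python
def s3_client_kwargs(env: dict) -> dict:
    """Build keyword arguments for boto3.client('s3', ...)."""
    kwargs = {
        "aws_access_key_id": env.get("S3_ACCESS_KEY", "minioadmin"),
        "aws_secret_access_key": env.get("S3_SECRET_KEY", "minioadmin"),
        "region_name": env.get("S3_REGION", "us-east-1"),
    }
    # Only development sets MINIO_ENDPOINT; when the key is absent,
    # boto3 resolves the real AWS S3 endpoint on its own.
    endpoint = env.get("MINIO_ENDPOINT")
    if endpoint:
        kwargs["endpoint_url"] = endpoint
    return kwargs


# Local development: endpoint_url points at MinIO.
dev = s3_client_kwargs({"MINIO_ENDPOINT": "http://localhost:9000"})

# Production: no MINIO_ENDPOINT, so no endpoint_url is set.
prod = s3_client_kwargs({"S3_ACCESS_KEY": "real-key", "S3_SECRET_KEY": "real-secret"})
```

In real code you would call s3_client_kwargs(dict(os.environ)) and pass the result to boto3.client('s3', ...), keeping a single code path for both environments.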

Final thoughts

MinIO is most useful as a local development environment for applications that use S3. It removes cloud costs and network latency from the development loop and requires no code changes when switching to production S3. The web console and mc CLI cover most management needs without requiring a full AWS setup.

For large-scale production deployments, MinIO's commercial offerings add features like erasure coding, replication, and enterprise support. The open-source server is appropriate for development, testing, and smaller self-hosted deployments.

Documentation is available at min.io/docs.


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.