# PostgREST: Turning a PostgreSQL Database Directly into a REST API

[PostgREST](https://postgrest.org/) is a **standalone web server that connects to a PostgreSQL database, introspects its schema, and generates a complete REST API automatically**. A table named `todos` becomes available at `/todos` immediately, with `GET`, `POST`, `PATCH`, and `DELETE` operations mapped to the corresponding SQL statements. There is no controller layer, no ORM, and no custom route definitions.

PostgREST powers the data layer of [Supabase](https://supabase.com/) and is used in production at scale.

## The problem with traditional API development

Adding a single new field to a database table typically requires changes in multiple places: the column definition in the database, the model or entity in application code, the validation layer, the controller, and the API documentation. The database already knows the constraints, but they get redefined in application code. This duplication creates maintenance overhead and a fragmented source of truth. PostgREST addresses this by making the database schema the only definition that matters.

## How PostgREST works

On startup, PostgREST reads the database schema and maps each table and view to a URL path. It translates HTTP requests directly into SQL statements. URL parameters provide filtering, sorting, and field selection that map to `WHERE`, `ORDER BY`, and column selection clauses respectively.
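As a sketch of this translation (the exact SQL PostgREST emits differs in details such as JSON aggregation), a filtered, sorted request corresponds roughly to:

```sql
-- Approximate SQL for:
--   GET /todos?select=title&completed=eq.true&order=created_at.desc
SELECT title
FROM todos
WHERE completed = true
ORDER BY created_at DESC;
```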

Authorization uses PostgreSQL's native role system and Row Level Security (RLS) rather than application middleware. Every request runs under a configured database role, and RLS policies defined in SQL determine which rows are visible or writable. This means access rules are enforced by the database for every query regardless of where it originates, whether from PostgREST, a direct database connection, or an analytics tool.

## Setting up a to-do API

The following example builds a complete to-do API with three Docker services: PostgreSQL, PostgREST, and Swagger UI for documentation.

### Project files

Create a directory with two files: `docker-compose.yml` and `init.sql`.

### Docker Compose configuration

```yaml
[label docker-compose.yml]
services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres_pass
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d app"]
      interval: 5s
      timeout: 5s
      retries: 10

  postgrest:
    image: postgrest/postgrest:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      PGRST_DB_URI: postgresql://postgres:postgres_pass@db:5432/app
      PGRST_DB_SCHEMAS: public
      PGRST_DB_ANON_ROLE: web_anon
      PGRST_OPENAPI_SERVER_PROXY_URI: http://localhost:3000
    depends_on:
      db:
        condition: service_healthy

  swagger:
    image: swaggerapi/swagger-ui
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      API_URL: http://localhost:3000/
    depends_on:
      - postgrest

volumes:
  pgdata:
```

![docker-compose.yml file in a code editor showing the three services defined](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/6742d164-e860-405e-e0e2-ec11a245bc00/md1x =1280x720)

Key configuration points:

- The `./init.sql` volume mount places the schema file in `docker-entrypoint-initdb.d/`, which the PostgreSQL image executes automatically on first start.
- The `healthcheck` on the `db` service ensures `postgrest` does not start until the database is ready to accept connections.
- `PGRST_DB_ANON_ROLE: web_anon` tells PostgREST which database role to use for unauthenticated requests. This role is created in `init.sql`.
- `PGRST_DB_URI` uses `db` as the hostname because Docker Compose provides internal DNS resolution between services on the same network.
- Swagger UI fetches the OpenAPI specification from `http://localhost:3000/` to render interactive documentation at `http://localhost:8080`. The fetch happens from the browser, which is why a `localhost` URL works here.

### Database schema and security policy

```sql
[label init.sql]
CREATE TABLE IF NOT EXISTS todos (
  id BIGSERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  completed BOOLEAN NOT NULL DEFAULT FALSE,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Role for unauthenticated API requests
CREATE ROLE web_anon NOLOGIN;

GRANT USAGE ON SCHEMA public TO web_anon;
GRANT SELECT, INSERT, UPDATE, DELETE ON todos TO web_anon;
GRANT USAGE, SELECT ON SEQUENCE todos_id_seq TO web_anon;

-- Enable Row Level Security
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

-- Permissive policy for demonstration purposes
CREATE POLICY "anon_full_access" ON todos
  FOR ALL
  TO web_anon
  USING (true)
  WITH CHECK (true);

INSERT INTO todos (title, completed) VALUES
  ('Buy groceries', false),
  ('Walk the dog', true),
  ('Finish video script', true),
  ('Record the video', false);
```

![The CREATE POLICY statement highlighted within the init.sql file](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/607373ef-290d-482c-1959-a2934429a900/lg1x =1280x720)

The `web_anon` role is created with `NOLOGIN`, meaning it cannot be used to log in to the database directly. It exists only as a role PostgREST assumes when handling unauthenticated requests.
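Conceptually, PostgREST connects with the credentials from `PGRST_DB_URI` and then drops to the anonymous role for each request, so every query runs with only `web_anon`'s privileges:

```sql
-- Conceptual sketch of what PostgREST does per unauthenticated request
BEGIN;
SET LOCAL ROLE web_anon;  -- applies only within this transaction
SELECT * FROM todos;      -- the translated query runs as web_anon
COMMIT;
```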

`ALTER TABLE todos ENABLE ROW LEVEL SECURITY` activates RLS on the table. Once enabled, all access is denied by default until a policy explicitly permits it. The `anon_full_access` policy uses `USING (true)` to make all rows readable and `WITH CHECK (true)` to allow all writes. This is appropriate for a public demo; real applications use more restrictive conditions.
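As a sketch of a more restrictive policy, assuming a hypothetical `user_id TEXT` column and JWT-authenticated requests (neither exists in this demo schema), the row owner can be compared against a claim PostgREST exposes via `current_setting`:

```sql
-- Hypothetical owner-only policy. Assumes a user_id TEXT column and
-- a JWT-authenticated role; neither is part of the demo schema.
CREATE POLICY "owner_only" ON todos
  FOR ALL
  TO web_user
  USING (user_id = current_setting('request.jwt.claims', true)::json ->> 'sub')
  WITH CHECK (user_id = current_setting('request.jwt.claims', true)::json ->> 'sub');
```

With a policy like this, the same `GET /todos` request returns different rows for different authenticated users, with no filtering logic outside the database.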

### Starting the stack

```command
docker-compose up -d
```

![Docker Desktop showing the three containers db, postgrest, and swagger-ui running successfully](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/d55436e2-00e1-4530-e20e-e61a51a30e00/public =1280x720)

All three services start within a minute. PostgREST connects to the database and begins serving the API at `http://localhost:3000`.

## API operations

### Reading data

```command
curl http://localhost:3000/todos
```

![Terminal showing formatted JSON output from the GET /todos API call](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cd2e48af-b45a-4add-27b7-6fa1468edd00/orig =1280x720)

PostgREST returns a JSON array of every row the current role is permitted to see under its RLS policies.

### Filtering and sorting

URL parameters translate directly to SQL clauses:

```command
curl "http://localhost:3000/todos?completed=eq.true"
```

```command
curl "http://localhost:3000/todos?select=title&order=created_at.desc"
```

The `eq` operator maps to SQL `=`. PostgREST supports a full range of operators including `neq`, `lt`, `gt`, `like`, `ilike`, `in`, and `is`.

### Creating a record

```command
curl -X POST http://localhost:3000/todos \
     -H "Content-Type: application/json" \
     -d '{"title": "Learn PostgREST", "completed": false}'
```

### Updating a record

```command
curl -X PATCH "http://localhost:3000/todos?id=eq.1" \
     -H "Content-Type: application/json" \
     -d '{"completed": true}'
```

### Deleting a record

```command
curl -X DELETE "http://localhost:3000/todos?id=eq.4"
```

### Swagger UI

The interactive documentation at `http://localhost:8080` shows all generated endpoints, accepted parameters, and request and response schemas. It is generated from the OpenAPI specification PostgREST exposes at its root, and provides a "Try it out" interface for running requests from the browser.

![Swagger UI main page showing the automatically generated endpoints for /todos including GET, POST, DELETE, and PATCH](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/62439dbb-32af-436c-4e92-5b11ed4ccb00/md1x =1280x720)

## When PostgREST fits well

PostgREST is well-suited to data-centric applications where the primary operations are reading and writing structured data: prototypes, internal tools, admin panels, and backend-for-frontend layers for single-page applications.

It is less suited to workflows involving complex multi-step business logic, such as payment processing, third-party API orchestration, or operations that span multiple systems. For those cases, the typical pattern is a hybrid: PostgREST handles direct data access while a thin microservice handles the complex workflows.

Complex RLS policies evaluated at query time can increase CPU load on the database server, so indexing and query planning matter more than in a setup where filtering happens in application code.
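For instance, if most requests filter on `completed` as in the examples above, a plain B-tree index keeps that filter cheap as the table grows; this is ordinary PostgreSQL tuning rather than anything PostgREST-specific:

```sql
-- Support the common ?completed=eq.true filter
CREATE INDEX IF NOT EXISTS todos_completed_idx ON todos (completed);

-- Inspect the plan the translated query would get
EXPLAIN SELECT * FROM todos WHERE completed = true;
```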

## Final thoughts

PostgREST **demonstrates that a significant portion of what gets written as "backend code" is actually just translation between HTTP and SQL**. By letting PostgreSQL handle that translation directly, the surface area for bugs shrinks, security rules live in one place, and the time from schema to working API collapses to minutes.

Documentation and a full reference for the query language, authentication, and RLS patterns are available at [postgrest.org/en/stable](https://postgrest.org/en/stable/).