Getting Started with PostgreSQL in Go using PGX
PostgreSQL is a powerful, open-source relational database system with a strong reputation for reliability, feature robustness, and performance.
When building applications in Go that need to interact with PostgreSQL, you need a reliable and efficient driver. This is where PGX comes in - a pure Go driver and toolkit for PostgreSQL.
This guide will walk you through using the PGX driver to build a simple task manager application, exploring everything from basic connections to advanced query techniques.
Let's get started!
Introduction to PGX
PGX is a PostgreSQL driver and toolkit for Go. Unlike other database drivers, PGX is specifically designed for PostgreSQL, providing native support for PostgreSQL data types and features that aren't available in database/sql drivers.
Key advantages of PGX include:
- Performance: PGX is designed to be faster than the standard database/sql interface
- Native PostgreSQL features: Support for notifications, the COPY protocol, and more
- Rich type support: Native handling of PostgreSQL-specific types
- Connection pooling: Built-in connection pooling with the pgxpool package
PGX can be used directly through its own interface or as a driver for Go's standard library database/sql package. For most applications requiring PostgreSQL-specific features, the direct PGX interface provides more functionality and better performance.
Setting Up the PostgreSQL server
Before we begin coding with PGX, you need to ensure that a PostgreSQL database server is up and running. You can do so with Docker:
docker run \
--rm \
--name postgres \
--env POSTGRES_PASSWORD=admin \
--env POSTGRES_DB=gotodo \
--volume pg-data:/var/lib/postgresql/data \
--publish 5432:5432 \
postgres:bookworm
This command starts a container named postgres that maps the internal PostgreSQL port 5432 to your localhost's port 5432. The --rm flag ensures the container is automatically removed once stopped. Following the PostgreSQL documentation's recommendations, we've set admin as the password for the default PostgreSQL user (postgres).
Setting up the project
Let's create a directory structure for our project and initialize it with Go modules:
mkdir gotodo && cd gotodo
go mod init github.com/<yourusername>/gotodo
We'll now add PGX to our project with:
go get github.com/jackc/pgx/v5
go get github.com/jackc/pgx/v5/pgxpool
We'll also need a CLI framework to build our task manager:
go get github.com/urfave/cli/v2
The CLI package will help us create a command-line interface similar to Git, with subcommands like add, list, etc.
Creating the database schema
Before writing Go code, let's set up our PostgreSQL database schema. This structure will define how our todo app data is stored.
Run the command below to connect to your PostgreSQL database with psql:
docker exec -it postgres psql -U postgres
Then switch to the gotodo database:
\c gotodo
Now, let's create a table for our tasks:
CREATE TABLE tasks (
  id SERIAL PRIMARY KEY,
  text TEXT NOT NULL,
  completed BOOLEAN NOT NULL DEFAULT FALSE,
  created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
CREATE TABLE
This schema includes:
- An auto-incrementing ID
- The task description text
- A boolean to track completion status
- Timestamps for creation and updates
With our database structure in place, we can now write Go code to interact with it. You may exit the psql interface by typing \q.
Connecting to PostgreSQL with PGX
Our first step in the Go application is establishing a connection to PostgreSQL. PGX provides multiple ways to connect, but we'll use connection pooling for better performance.
Let's create a main.go
file with our basic structure:
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/urfave/cli/v2"
)

var pool *pgxpool.Pool
var ctx = context.Background()

// Task represents a task in our task manager
type Task struct {
	ID        int       `json:"id"`
	Text      string    `json:"text"`
	Completed bool      `json:"completed"`
	CreatedAt time.Time `json:"created_at"`
	UpdatedAt time.Time `json:"updated_at"`
}

func init() {
	// Initialize the connection pool
	var err error
	pool, err = pgxpool.New(ctx, "postgres://<username>:<password>@localhost:5432/gotodo")
	if err != nil {
		log.Fatal("Unable to connect to database:", err)
	}

	// Verify the connection
	if err := pool.Ping(ctx); err != nil {
		log.Fatal("Unable to ping database:", err)
	}

	fmt.Println("Connected to PostgreSQL database!")
}

func main() {
	app := &cli.App{
		Name:  "gotodo",
		Usage: "A simple CLI program to manage your tasks",
		Commands: []*cli.Command{
			// We'll add commands here
		},
	}

	err := app.Run(os.Args)
	if err != nil {
		log.Fatal(err)
	}
}
Let's break down this initial code:
- We import the necessary packages, including pgxpool for connection pooling.
- We define global variables for our connection pool and context.
- We create a Task struct that maps to our database table.
- In the init() function, we establish a connection to PostgreSQL.
- We set up the basic CLI application structure.
Make sure to replace <username> and <password> with your actual PostgreSQL credentials (postgres and admin in this case). The connection string follows this format: postgres://<username>:<password>@host:port/database.
The pgxpool package creates a pool of connections, which is more efficient than opening a new connection for each operation. We also use pool.Ping() to verify that our connection is working.
To see this in action, run the command below:
go run main.go
Connected to PostgreSQL database!
NAME:
gotodo - A simple CLI program to manage your tasks
USAGE:
gotodo [global options] command [command options]
COMMANDS:
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--help, -h show help
The output will show that the connection was successful, along with some help text since we've not implemented a default command for the CLI.
Implementing CRUD operations
Now that we have our connection established, let's implement the basic CRUD operations for our task manager. We'll start by creating functions to interact with the database, then connect them to our CLI commands.
Creating Tasks
First, let's implement a function to create a new task:
func createTask(text string) error {
	sql := `
		INSERT INTO tasks (text, completed)
		VALUES ($1, $2)
		RETURNING id
	`

	var id int
	err := pool.QueryRow(ctx, sql, text, false).Scan(&id)
	if err != nil {
		return fmt.Errorf("error creating task: %w", err)
	}

	fmt.Printf("Created task with ID: %d\n", id)
	return nil
}
This function:
- Defines an SQL query with placeholders ($1, $2) for parameters
- Executes the query with QueryRow(), passing our context and parameters
- Scans the returned ID into a variable
- Returns any error, or nil on success
PGX uses numeric placeholders ($1, $2, etc.) for parameters, matching native PostgreSQL syntax. Because parameter values are sent to the server separately from the SQL text rather than being interpolated into it, parameterized queries protect against SQL injection.
The RETURNING id clause in our SQL asks PostgreSQL to return the ID of the newly created task, which we then capture with Scan().
Reading all tasks
Next, let's implement a function to read all tasks from the database:
func getAllTasks() ([]Task, error) {
	sql := `
		SELECT id, text, completed, created_at, updated_at
		FROM tasks
		ORDER BY created_at DESC
	`

	rows, err := pool.Query(ctx, sql)
	if err != nil {
		return nil, fmt.Errorf("error querying tasks: %w", err)
	}
	defer rows.Close()

	var tasks []Task
	for rows.Next() {
		var task Task
		err := rows.Scan(
			&task.ID,
			&task.Text,
			&task.Completed,
			&task.CreatedAt,
			&task.UpdatedAt,
		)
		if err != nil {
			return nil, fmt.Errorf("error scanning task row: %w", err)
		}
		tasks = append(tasks, task)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating task rows: %w", err)
	}

	return tasks, nil
}
This function demonstrates several important patterns when working with PGX:
- We use pool.Query() to execute a SELECT statement that returns multiple rows
- We use defer rows.Close() to ensure resources are cleaned up when the function exits
- We iterate through the results with rows.Next() and use rows.Scan() to populate our Task struct
- We check for errors after iteration with rows.Err()
The order of columns in your Scan() call must match the order of columns in your SELECT statement; Scan() does no mapping by column name.
Updating tasks
Next, let's implement a function to update a task's completion status:
func completeTask(id int) error {
	sql := `
		UPDATE tasks
		SET completed = true, updated_at = NOW()
		WHERE id = $1
	`

	commandTag, err := pool.Exec(ctx, sql, id)
	if err != nil {
		return fmt.Errorf("error completing task: %w", err)
	}

	if commandTag.RowsAffected() == 0 {
		return fmt.Errorf("no task found with id %d", id)
	}

	return nil
}
For update operations:
- We use pool.Exec() since we don't need the query to return any rows
- We get a commandTag that tells us how many rows were affected
- We check if any rows were affected to determine whether the update succeeded
The commandTag.RowsAffected() method is particularly useful for determining whether the WHERE clause matched any rows, letting us provide better error messages.
Deleting tasks
Next, let's implement task deletion:
func deleteTask(id int) error {
	sql := `DELETE FROM tasks WHERE id = $1`

	commandTag, err := pool.Exec(ctx, sql, id)
	if err != nil {
		return fmt.Errorf("error deleting task: %w", err)
	}

	if commandTag.RowsAffected() == 0 {
		return fmt.Errorf("no task found with id %d", id)
	}

	return nil
}
This function follows the same pattern as our update function:
- We execute a DELETE statement with pool.Exec()
- We check if any rows were affected
- We return an appropriate error message if no matching task was found
Filtering Tasks
Let's add two more functions to filter tasks by their completion status:
func getPendingTasks() ([]Task, error) {
	sql := `
		SELECT id, text, completed, created_at, updated_at
		FROM tasks
		WHERE completed = false
		ORDER BY created_at DESC
	`

	rows, err := pool.Query(ctx, sql)
	if err != nil {
		return nil, fmt.Errorf("error querying pending tasks: %w", err)
	}
	defer rows.Close()

	var tasks []Task
	for rows.Next() {
		var task Task
		err := rows.Scan(
			&task.ID,
			&task.Text,
			&task.Completed,
			&task.CreatedAt,
			&task.UpdatedAt,
		)
		if err != nil {
			return nil, fmt.Errorf("error scanning task row: %w", err)
		}
		tasks = append(tasks, task)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating task rows: %w", err)
	}

	return tasks, nil
}

func getCompletedTasks() ([]Task, error) {
	sql := `
		SELECT id, text, completed, created_at, updated_at
		FROM tasks
		WHERE completed = true
		ORDER BY created_at DESC
	`

	rows, err := pool.Query(ctx, sql)
	if err != nil {
		return nil, fmt.Errorf("error querying completed tasks: %w", err)
	}
	defer rows.Close()

	var tasks []Task
	for rows.Next() {
		var task Task
		err := rows.Scan(
			&task.ID,
			&task.Text,
			&task.Completed,
			&task.CreatedAt,
			&task.UpdatedAt,
		)
		if err != nil {
			return nil, fmt.Errorf("error scanning task row: %w", err)
		}
		tasks = append(tasks, task)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating task rows: %w", err)
	}

	return tasks, nil
}
These functions follow the same pattern as getAllTasks(), but include a WHERE clause to filter by completion status.
In a more complex application, we might want to refactor this code to avoid duplication, for example by creating a single function that accepts filter parameters.
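One sketch of that refactor: build the SQL and argument list in a small helper, then hand both to pool.Query(ctx, sql, args...). The helper below (buildTaskQuery is our own name, not a PGX API) treats a nil filter as "all tasks":

```go
package main

import "fmt"

// buildTaskQuery assembles the query for a single getTasks-style
// function: a nil filter means "all tasks", otherwise we filter by
// completion status
func buildTaskQuery(completed *bool) (string, []any) {
	sql := "SELECT id, text, completed, created_at, updated_at FROM tasks"
	var args []any
	if completed != nil {
		sql += " WHERE completed = $1"
		args = append(args, *completed)
	}
	sql += " ORDER BY created_at DESC"
	return sql, args
}

func main() {
	done := true
	sql, args := buildTaskQuery(&done)
	fmt.Println(sql)
	fmt.Println(args)
}
```

A getTasks(completed *bool) function could then run this query and share one scan loop, replacing all three list functions above.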
Implementing CLI commands
Now that we have our database operations defined, let's connect them to our CLI interface. First, let's create a helper function to display tasks:
func printTasks(tasks []Task) {
	if len(tasks) == 0 {
		fmt.Println("No tasks found")
		return
	}

	for _, task := range tasks {
		status := "[ ]"
		if task.Completed {
			status = "[✓]"
		}
		fmt.Printf("%d. %s %s (Created: %s)\n",
			task.ID,
			status,
			task.Text,
			task.CreatedAt.Format("2006-01-02 15:04:05"))
	}
}
This function formats and prints our tasks in a human-readable way, using checkboxes to indicate completion status.
Now, let's update our main() function to add CLI commands that use our database functions:
func main() {
	app := &cli.App{
		Name:  "gotodo",
		Usage: "A simple CLI program to manage your tasks",
		Action: func(c *cli.Context) error {
			// Default action - show pending tasks
			tasks, err := getPendingTasks()
			if err != nil {
				return err
			}
			fmt.Println("Pending Tasks:")
			printTasks(tasks)
			return nil
		},
		Commands: []*cli.Command{
			{
				Name:    "add",
				Aliases: []string{"a"},
				Usage:   "Add a new task",
				Action: func(c *cli.Context) error {
					text := c.Args().First()
					if text == "" {
						return fmt.Errorf("task text cannot be empty")
					}
					return createTask(text)
				},
			},
			{
				Name:    "all",
				Aliases: []string{"l"},
				Usage:   "List all tasks",
				Action: func(c *cli.Context) error {
					tasks, err := getAllTasks()
					if err != nil {
						return err
					}
					fmt.Println("All Tasks:")
					printTasks(tasks)
					return nil
				},
			},
			{
				Name:    "done",
				Aliases: []string{"d"},
				Usage:   "Mark a task as completed",
				Action: func(c *cli.Context) error {
					idStr := c.Args().First()
					if idStr == "" {
						return fmt.Errorf("task ID required")
					}
					var id int
					if _, err := fmt.Sscanf(idStr, "%d", &id); err != nil {
						return fmt.Errorf("invalid task ID: %s", idStr)
					}
					return completeTask(id)
				},
			},
			{
				Name:    "rm",
				Aliases: []string{"r"},
				Usage:   "Remove a task",
				Action: func(c *cli.Context) error {
					idStr := c.Args().First()
					if idStr == "" {
						return fmt.Errorf("task ID required")
					}
					var id int
					if _, err := fmt.Sscanf(idStr, "%d", &id); err != nil {
						return fmt.Errorf("invalid task ID: %s", idStr)
					}
					return deleteTask(id)
				},
			},
			{
				Name:    "completed",
				Aliases: []string{"c"},
				Usage:   "Show completed tasks",
				Action: func(c *cli.Context) error {
					tasks, err := getCompletedTasks()
					if err != nil {
						return err
					}
					fmt.Println("Completed Tasks:")
					printTasks(tasks)
					return nil
				},
			},
		},
	}

	err := app.Run(os.Args)
	if err != nil {
		log.Fatal(err)
	}
}
Our CLI application now has:
- A default action that shows pending tasks when no subcommand is specified
- An "add" command to create new tasks
- An "all" command to list all tasks
- A "done" command to mark tasks as completed
- A "rm" command to delete tasks
- A "completed" command to show only completed tasks
Each command links to the appropriate database function, with error handling and user input validation.
Testing the application
Now that we've implemented our task manager's core functionality, let's test it to ensure everything works as expected. In this section, we'll build, run, and test our application with various commands to verify our database interactions are working correctly.
Building the application
First, let's compile our application (go build already produces an executable binary, so no separate chmod step is needed):
go build -o gotodo
If the build succeeds, you'll have an executable file named gotodo in your current directory. If there are any compilation errors, review the previous sections to ensure your code matches the examples.
Running the application
Now, let's run the application without any arguments to see the default output (pending tasks):
./gotodo
Since we haven't added any tasks yet, you should see:
Pending Tasks:
No tasks found
Testing task creation
Let's create a few tasks:
./gotodo add "Learn more about pgx"
./gotodo add "Implement a web interface for the task manager"
./gotodo add "Write tests for the application"
After each command, you should see a confirmation message like:
Created task with ID: 1
Now, run the application again without arguments to see your pending tasks:
./gotodo
You should see something like:
Pending Tasks:
1. [ ] Learn more about pgx (Created: 2023-04-15 14:30:45)
2. [ ] Implement a web interface for the task manager (Created: 2023-04-15 14:31:02)
3. [ ] Write tests for the application (Created: 2023-04-15 14:31:15)
Testing task completion
Let's mark a task as completed:
./gotodo done 1
This should complete the task with ID 1. Now, let's check our pending tasks again:
./gotodo
You should see only the uncompleted tasks:
Pending Tasks:
2. [ ] Implement a web interface for the task manager (Created: 2023-04-15 14:31:02)
3. [ ] Write tests for the application (Created: 2023-04-15 14:31:15)
To verify that our completion logic worked, let's check the completed tasks:
./gotodo completed
You should see:
Completed Tasks:
1. [✓] Learn more about pgx (Created: 2023-04-15 14:30:45)
Testing all tasks listing
Let's list all tasks to see both pending and completed tasks:
./gotodo all
You should see:
All Tasks:
1. [✓] Learn more about pgx (Created: 2023-04-15 14:30:45)
2. [ ] Implement a web interface for the task manager (Created: 2023-04-15 14:31:02)
3. [ ] Write tests for the application (Created: 2023-04-15 14:31:15)
Testing task deletion
Finally, let's test removing a task:
./gotodo rm 3
Now, check all tasks:
./gotodo all
You should see:
All Tasks:
1. [✓] Learn more about pgx (Created: 2023-04-15 14:30:45)
2. [ ] Implement a web interface for the task manager (Created: 2023-04-15 14:31:02)
Our task manager application is now fully functional! We've verified that we can add, list, complete, and delete tasks using our PostgreSQL backend.
In the following sections, we'll consider some other features of the PGX library for working with PostgreSQL.
Working with transactions
One of the powerful features of PostgreSQL and PGX is transaction support. Transactions allow you to group multiple operations together so they either all succeed or all fail, maintaining data consistency.
Let's implement a function that uses a transaction to archive all tasks:
func moveAllTasksToArchive() error {
	// Start a transaction
	tx, err := pool.Begin(ctx)
	if err != nil {
		return fmt.Errorf("error starting transaction: %w", err)
	}

	// Ensure the transaction is rolled back if it's never committed
	defer tx.Rollback(ctx)

	// Copy tasks to the archive table
	_, err = tx.Exec(ctx, `
		INSERT INTO tasks_archive (id, text, completed, created_at, updated_at)
		SELECT id, text, completed, created_at, updated_at FROM tasks
	`)
	if err != nil {
		return fmt.Errorf("error copying tasks to archive: %w", err)
	}

	// Delete all tasks
	_, err = tx.Exec(ctx, "DELETE FROM tasks")
	if err != nil {
		return fmt.Errorf("error deleting tasks: %w", err)
	}

	// Commit the transaction
	if err = tx.Commit(ctx); err != nil {
		return fmt.Errorf("error committing transaction: %w", err)
	}

	return nil
}
This function demonstrates several important patterns for working with transactions:
- We start a transaction with pool.Begin()
- We use defer tx.Rollback(ctx) to ensure the transaction is rolled back if it's never committed (rolling back an already-committed transaction is a harmless no-op)
- We execute multiple operations within the transaction
- If all operations succeed, we commit the transaction with tx.Commit(ctx)
Transactions are essential for maintaining data consistency in operations that require multiple changes to the database. PGX makes transaction management straightforward with its API.
Note: For this function to work, you need a tasks_archive table with a structure matching your tasks table.
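One convenient way to create that table is PostgreSQL's LIKE clause, which copies the column definitions from tasks (a sketch; trim INCLUDING ALL if you don't want constraints and indexes copied too):

```sql
CREATE TABLE tasks_archive (LIKE tasks INCLUDING ALL);
```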
Prepared statements for better performance
For queries that are executed frequently, prepared statements can improve performance by avoiding the overhead of parsing and planning the query each time.
Here's how to use prepared statements with PGX:
func getTasksByStatus(completed bool) ([]Task, error) {
	// Prepared statements are scoped to a single connection, so acquire
	// one from the pool rather than preparing on the pool itself
	conn, err := pool.Acquire(ctx)
	if err != nil {
		return nil, fmt.Errorf("error acquiring connection: %w", err)
	}
	defer conn.Release()

	// Create a named prepared statement on this connection
	_, err = conn.Conn().Prepare(ctx, "get_tasks_by_status", `
		SELECT id, text, completed, created_at, updated_at
		FROM tasks
		WHERE completed = $1
		ORDER BY created_at DESC
	`)
	if err != nil {
		return nil, fmt.Errorf("error preparing statement: %w", err)
	}

	// Execute the prepared statement by name
	rows, err := conn.Query(ctx, "get_tasks_by_status", completed)
	if err != nil {
		return nil, fmt.Errorf("error executing prepared statement: %w", err)
	}
	defer rows.Close()

	var tasks []Task
	for rows.Next() {
		var task Task
		err := rows.Scan(
			&task.ID,
			&task.Text,
			&task.Completed,
			&task.CreatedAt,
			&task.UpdatedAt,
		)
		if err != nil {
			return nil, fmt.Errorf("error scanning task row: %w", err)
		}
		tasks = append(tasks, task)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating task rows: %w", err)
	}

	return tasks, nil
}
With prepared statements:
- We acquire a dedicated connection with pool.Acquire(), since prepared statements live on a single connection rather than on the pool
- We use Prepare() to create a named prepared statement ("get_tasks_by_status")
- We execute the statement by passing its name to Query() along with the parameters
- PostgreSQL can reuse the query plan, potentially improving performance
Note that in its default mode, pgx already prepares and caches statements automatically behind the scenes, so explicit preparation is mainly useful when you want control over statement names and lifetimes.
Batch operations
For inserting or updating multiple records at once, PGX provides a batch API that can improve performance:
func createTasks(texts []string) error {
	// Begin a transaction
	tx, err := pool.Begin(ctx)
	if err != nil {
		return fmt.Errorf("error starting transaction: %w", err)
	}
	defer tx.Rollback(ctx)

	// Create a batch
	batch := &pgx.Batch{}

	// Add multiple insert commands to the batch
	for _, text := range texts {
		batch.Queue("INSERT INTO tasks (text, completed) VALUES ($1, $2)", text, false)
	}

	// Send the batch
	results := tx.SendBatch(ctx, batch)

	// Check for errors in the results
	for i := 0; i < len(texts); i++ {
		if _, err := results.Exec(); err != nil {
			results.Close()
			return fmt.Errorf("error executing batch command %d: %w", i, err)
		}
	}

	// The batch results must be closed before the transaction can be
	// used again, so close them before committing
	if err := results.Close(); err != nil {
		return fmt.Errorf("error closing batch results: %w", err)
	}

	// Commit the transaction
	if err = tx.Commit(ctx); err != nil {
		return fmt.Errorf("error committing transaction: %w", err)
	}

	return nil
}
The batch API provides several benefits:
- It reduces network round-trips by sending multiple commands at once
- When combined with a transaction, it ensures all operations succeed or fail together
- It can significantly improve performance for bulk operations
Note that you need to import the github.com/jackc/pgx/v5 package to use the pgx.Batch type.
Working with PostgreSQL-specific types
One of the advantages of PGX is its support for PostgreSQL-specific types like arrays, JSON, and more. Let's look at examples of working with these types:
Working with JSON
PostgreSQL has excellent support for JSON and JSONB data types. Here's how to work with them in PGX:
type TaskWithMetadata struct {
	ID        int                    `json:"id"`
	Text      string                 `json:"text"`
	Completed bool                   `json:"completed"`
	CreatedAt time.Time              `json:"created_at"`
	UpdatedAt time.Time              `json:"updated_at"`
	Metadata  map[string]interface{} `json:"metadata"`
}

func createTaskWithMetadata(text string, metadata map[string]interface{}) error {
	sql := `
		INSERT INTO tasks_with_metadata (text, completed, metadata)
		VALUES ($1, $2, $3)
		RETURNING id
	`

	var id int
	err := pool.QueryRow(ctx, sql, text, false, metadata).Scan(&id)
	if err != nil {
		return fmt.Errorf("error creating task with metadata: %w", err)
	}

	fmt.Printf("Created task with ID: %d\n", id)
	return nil
}
PGX automatically handles conversion between Go types and PostgreSQL JSON types:
- Go maps and structs can be used directly as parameters for JSON columns
- When reading JSON columns, PGX can scan them into appropriate Go types
- For more complex use cases, you can use the pgtype package for custom conversions
For this function to work, you would need a table with a JSON or JSONB column:
CREATE TABLE tasks_with_metadata (
  id SERIAL PRIMARY KEY,
  text TEXT NOT NULL,
  completed BOOLEAN NOT NULL DEFAULT FALSE,
  created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
  metadata JSONB
);
Working with arrays
PostgreSQL has native support for array types, and PGX makes it easy to work with them:
func getTasksByTags(tags []string) ([]Task, error) {
	sql := `
		SELECT id, text, completed, created_at, updated_at
		FROM tasks_with_tags
		WHERE tags @> $1
		ORDER BY created_at DESC
	`

	rows, err := pool.Query(ctx, sql, tags)
	if err != nil {
		return nil, fmt.Errorf("error querying tasks by tags: %w", err)
	}
	defer rows.Close()

	var tasks []Task
	for rows.Next() {
		var task Task
		err := rows.Scan(
			&task.ID,
			&task.Text,
			&task.Completed,
			&task.CreatedAt,
			&task.UpdatedAt,
		)
		if err != nil {
			return nil, fmt.Errorf("error scanning task row: %w", err)
		}
		tasks = append(tasks, task)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating task rows: %w", err)
	}

	return tasks, nil
}
This function demonstrates:
- Using Go slices as parameters for PostgreSQL array columns
- Using the @> operator to check whether an array contains certain elements
- PGX's automatic handling of array type conversions
For this function to work, you would need a table with an array column:
CREATE TABLE tasks_with_tags (
  id SERIAL PRIMARY KEY,
  text TEXT NOT NULL,
  completed BOOLEAN NOT NULL DEFAULT FALSE,
  created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
  tags TEXT[]
);
Error handling and connection management
Proper error handling and connection management are crucial for production applications. PGX provides several features to help:
func initDB() (*pgxpool.Pool, error) {
	// Parse the connection string into a pool configuration
	config, err := pgxpool.ParseConfig("postgres://username:password@localhost:5432/gotodo")
	if err != nil {
		return nil, fmt.Errorf("error parsing config: %w", err)
	}

	// Set pool limits
	config.MaxConns = 10
	config.MinConns = 2
	config.MaxConnLifetime = time.Hour
	config.MaxConnIdleTime = 30 * time.Minute

	// Create the pool
	pool, err := pgxpool.NewWithConfig(context.Background(), config)
	if err != nil {
		return nil, fmt.Errorf("error creating connection pool: %w", err)
	}

	// Verify the connection
	if err := pool.Ping(context.Background()); err != nil {
		pool.Close()
		return nil, fmt.Errorf("error connecting to database: %w", err)
	}

	return pool, nil
}
This function demonstrates best practices for connection management:
- Using pgxpool.ParseConfig() to parse the connection string into a configuration
- Setting pool limits to manage resource usage
- Verifying the connection with pool.Ping()
- Proper error handling and resource cleanup
Connection pooling is essential for production applications:
- MaxConns limits the maximum number of connections to avoid overwhelming the database
- MinConns sets a minimum number of idle connections to reduce connection latency
- MaxConnLifetime ensures connections are periodically refreshed
- MaxConnIdleTime ensures idle connections don't consume resources indefinitely
For error handling, PGX provides detailed error types that can help you diagnose problems:
// Requires the "errors" package and github.com/jackc/pgx/v5/pgconn
if err != nil {
	var pgErr *pgconn.PgError
	if errors.As(err, &pgErr) {
		fmt.Printf("PostgreSQL error: %s (Code: %s)\n", pgErr.Message, pgErr.Code)

		// Handle specific error codes
		switch pgErr.Code {
		case "23505": // unique_violation
			return fmt.Errorf("task already exists: %w", err)
		case "23503": // foreign_key_violation
			return fmt.Errorf("referenced record does not exist: %w", err)
		default:
			return fmt.Errorf("database error: %w", err)
		}
	}
	return fmt.Errorf("unexpected error: %w", err)
}
This code demonstrates:
- Using errors.As() to check whether an error is a PostgreSQL error
- Accessing the error message and SQLSTATE code
- Handling specific error codes with appropriate messages
Final thoughts
This guide has demonstrated how to build PostgreSQL-backed Go applications using PGX. We covered many essential patterns from basic CRUD operations to transactions and prepared statements, providing a foundation for robust database applications.
The techniques demonstrated here extend beyond simple task managers to any PostgreSQL-backed Go application. Understanding these core database interaction principles equips you to build performant, maintainable applications that leverage PostgreSQL's power combined with Go's efficiency.
Remember that query optimization and database design remain crucial for application performance. Take time to explore PostgreSQL's capabilities to maximize your database system's potential.
Thanks for reading!