
Getting Started with Load Testing: A Beginner's Guide

Ayooluwa Isaiah
Updated on April 7, 2025

Load testing is a critical practice in application development that simulates real-world user traffic to evaluate system performance under expected or stressed conditions.

By generating artificial but realistic usage patterns, load testing helps teams identify bottlenecks, determine system capacity limits, and ensure applications can handle anticipated user loads without degrading performance or crashing.

This guide explores comprehensive load testing strategies for websites and web applications, with practical implementation advice to help you design effective tests that yield actionable insights.

Fundamental load testing approaches

Before diving into specific testing techniques, it's important to understand several key perspectives that shape your load testing strategy.

Backend vs. frontend performance testing

Performance testing can be divided into two primary categories based on what part of the application stack you're evaluating:

  • Frontend performance testing focuses on the user interface level, measuring how quickly page elements appear and become interactive in the browser. This approach examines the entire round-trip experience from a user perspective, including page rendering times, client-side script execution, and visual feedback.

Frontend metrics typically include:

  • First Contentful Paint (FCP)
  • Time to Interactive (TTI)
  • Cumulative Layout Shift (CLS)
  • First Input Delay (FID)

While frontend testing excels at revealing issues in the user experience, it has limitations. It requires fully integrated environments, can be resource-intensive to scale, and only indicates problems without necessarily identifying their root causes in the underlying architecture.

  • Backend performance testing targets the application servers and infrastructure, measuring how they process requests, handle database operations, and deliver assets. This approach helps identify bottlenecks in server-side code, database queries, and other infrastructure components.

Backend testing captures metrics like:

  • API response times
  • Server processing time
  • Database query performance
  • Resource utilization (CPU, memory, network)

Unlike frontend testing, backend performance evaluation can often begin earlier in the development cycle and typically requires fewer resources to execute at scale, making it more suitable for high-volume load tests.

The two approaches are complementary: frontend testing provides direct insight into the user experience, while backend testing reveals how system performance degrades as concurrent user counts increase.

For comprehensive results, both approaches should be incorporated into your testing strategy, though teams with limited resources may need to prioritize based on their specific concerns.
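As a point of reference, the frontend metrics listed above can be observed directly in the browser with the standard PerformanceObserver API. Browser-based load testing tools typically collect them for you; the sketch below is only meant to make the metrics concrete (TTI is omitted because it isn't directly observable this way).

frontend-metrics-sketch.js
// Minimal sketch: reading Core Web Vitals-style metrics in page context.

// First Contentful Paint (FCP)
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime.toFixed(0), 'ms');
    }
  }
}).observe({ type: 'paint', buffered: true });

// Cumulative Layout Shift (CLS): sum of unexpected layout shift scores
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls.toFixed(3));
}).observe({ type: 'layout-shift', buffered: true });

// First Input Delay (FID): time from first interaction to its handler starting
new PerformanceObserver((list) => {
  const first = list.getEntries()[0];
  if (first) {
    console.log('FID:', (first.processingStart - first.startTime).toFixed(1), 'ms');
  }
}).observe({ type: 'first-input', buffered: true });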

Protocol-based vs. browser-based vs. hybrid testing

Your testing approach determines what tools and methods you'll use to generate load:

  • Protocol-based load testing operates at the network request level, simulating the HTTP requests a browser would make without actually rendering the responses. This approach is efficient, allowing a single machine to simulate thousands of virtual users. However, it doesn't account for client-side rendering or JavaScript execution.

  • Browser-based load testing uses actual browser instances to interact with your application the way real users would. This approach executes JavaScript, renders pages, and allows interaction with UI elements, providing comprehensive frontend metrics. However, each virtual user requires significantly more resources, limiting the scale of browser-based tests.

  • Hybrid load testing combines both approaches, using protocol-based testing to generate the majority of the load and browser-based testing for a smaller subset of users. This gives you the scale benefits of protocol testing while still collecting critical frontend performance metrics.

Component testing vs. end-to-end testing

Another key consideration is the scope of your test:

  • Component testing focuses on isolated parts of your application, such as specific API endpoints, services, or functions. These targeted tests help identify performance issues in critical components and are typically faster to execute and debug.

  • End-to-end testing replicates complete user journeys across your application, validating performance across the entire stack. While these tests provide a more comprehensive view, they can be more complex to troubleshoot when issues arise.
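To make the distinction concrete, here is a minimal sketch of a component-level test that exercises a single, hypothetical search endpoint in isolation, following the same conventions as the hypothetical framework used in the examples later in this guide. The endpoint URL, virtual user counts, and thresholds are illustrative; an end-to-end test would instead chain a full user journey, as shown in the following sections.

component-test.js
import http from 'loadtest-library/http';
import { check, sleep } from 'loadtest-library/utils';

export const options = {
  scenarios: {
    search_endpoint: {
      executor: 'constant-vus',
      vus: 20,
      duration: '2m',
      exec: 'searchEndpoint'
    }
  },
  thresholds: {
    // The endpoint is judged on its own, not as part of a journey
    'http_req_duration': ['p95<200'],
    'http_req_failed': ['rate<0.01']
  }
};

export function searchEndpoint() {
  // Exercise one API endpoint in isolation rather than a full user journey
  const response = http.get('https://example-shop.com/api/search?q=laptop');

  check(response, {
    'search returned 200': (r) => r.status === 200,
    'response has a body': (r) => r.body.length > 0
  });

  sleep(1);
}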

Creating effective load testing scripts

Let's examine how to develop load testing scripts using different approaches.

Protocol-based script creation

Protocol-based scripts simulate HTTP requests directly without using a browser. Here's a simple example using a hypothetical JavaScript-based load testing framework:

protocol-test.js
import http from 'loadtest-library/http';
import { sleep, check } from 'loadtest-library/utils';

export function browseProductCatalog() {
  // Define common headers
  const headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/96.0.4664.110',
    'Accept-Language': 'en-US,en;q=0.9',
    'Cache-Control': 'no-cache'
  };

  // Step 1: Visit homepage
  let response = http.get('https://example-shop.com/', { headers });

  check(response, {
    'homepage loaded successfully': (r) => r.status === 200,
    'homepage contains expected content': (r) => r.body.includes('Welcome to Example Shop')
  });

  // Simulate user thinking time
  sleep(Math.random() * 3 + 2); // Random delay between 2-5 seconds

  // Step 2: Navigate to product category page
  response = http.get('https://example-shop.com/category/electronics', { headers });

  // Extract product IDs for later use (correlation)
  const productIdRegex = /product-(\d+)/g;
  const productIds = [];
  let match;

  while ((match = productIdRegex.exec(response.body)) !== null) {
    productIds.push(match[1]);
  }

  check(response, {
    'category page loaded': (r) => r.status === 200,
    'products found': () => productIds.length > 0
  });

  sleep(Math.random() * 2 + 1); // Random delay between 1-3 seconds

  // Step 3: View a product detail page
  if (productIds.length > 0) {
    const randomProduct = productIds[Math.floor(Math.random() * productIds.length)];
    response = http.get(`https://example-shop.com/product/${randomProduct}`, { headers });

    check(response, {
      'product page loaded': (r) => r.status === 200,
      'product details displayed': (r) => r.body.includes('Add to Cart')
    });
  }

  sleep(Math.random() * 4 + 3); // Random delay between 3-7 seconds
}

This script simulates a user browsing an e-commerce site by:

  1. Visiting the homepage
  2. Navigating to a product category
  3. Viewing a specific product

Key elements in this script include:

  • Request headers that mimic a real browser
  • Response validation using checks
  • Dynamic data extraction (correlation) to use product IDs from previous responses
  • Realistic think times between actions using randomized sleep intervals

Browser-based script development

Browser-based scripts interact with page elements similarly to how real users would. Here's an example using a browser automation approach:

browser-test.js
import { browser } from 'loadtest-library/browser';
import { sleep } from 'loadtest-library/utils';

export async function browseProductCatalog() {
  // Launch a browser instance
  const page = await browser.newPage();

  try {
    // Step 1: Navigate to homepage
    await page.goto('https://example-shop.com/');

    // Verify homepage loaded correctly
    await page.waitForSelector('.hero-banner');

    // Take screenshot for debugging
    await page.screenshot({ path: 'screenshots/01_homepage.png' });

    // Simulate user thinking
    await sleep(Math.random() * 3 + 2);

    // Step 2: Click on Electronics category
    const categoryLink = await page.locator('a[href="/category/electronics"]');
    await categoryLink.click();

    // Verify category page loaded
    await page.waitForSelector('.product-grid');
    await page.screenshot({ path: 'screenshots/02_category_page.png' });

    await sleep(Math.random() * 2 + 1);

    // Step 3: Select a random product
    const productCards = await page.locator('.product-card');
    const count = await productCards.count();

    if (count > 0) {
      const randomIndex = Math.floor(Math.random() * count);
      await productCards.nth(randomIndex).click();

      // Verify product page loaded
      await page.waitForSelector('.add-to-cart-button');
      await page.screenshot({ path: 'screenshots/03_product_page.png' });
    }

    await sleep(Math.random() * 4 + 3);

  } finally {
    // Always close the browser to release resources
    await page.close();
  }
}

This script performs the same user journey as the protocol-based example but using browser interactions:

  1. It launches an actual browser instance
  2. Navigates to pages by URL or by clicking elements
  3. Waits for specific elements to appear before proceeding
  4. Takes screenshots at each step to aid debugging

Browser-based scripts provide more realistic testing but consume more resources per virtual user.

Building hybrid testing solutions

A hybrid approach lets you combine protocol and browser testing for optimal results. The following example demonstrates how to structure a test that uses both methods:

hybrid-test.js
import { protocolBrowseProducts } from './protocol-functions.js';
import { browserBrowseProducts } from './browser-functions.js';

// Re-export the journey functions so the scenario `exec` names below can resolve them
export { protocolBrowseProducts, browserBrowseProducts };

export const options = {
  scenarios: {
    // Main load using protocol-based testing
    protocol_users: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '2m', target: 100 },  // Ramp up to 100 VUs
        { duration: '5m', target: 100 },  // Stay at 100 VUs
        { duration: '2m', target: 0 }     // Ramp down to 0 VUs
      ],
      exec: 'protocolBrowseProducts'
    },

    // Smaller set of browser-based users for frontend metrics
    browser_users: {
      executor: 'constant-vus',
      vus: 5,
      duration: '9m',
      exec: 'browserBrowseProducts',
      options: {
        browser: {
          type: 'chromium'
        }
      }
    }
  },
  thresholds: {
    'http_req_duration{scenario:protocol_users}': ['p95<500'],
    'browser_page_load{scenario:browser_users}': ['p95<3000']
  }
};

This hybrid approach:

  1. Defines two scenarios using different executors
  2. Generates the majority of load (100 VUs) using efficient protocol-based testing
  3. Supplements with a smaller number of browser-based users (5 VUs) for frontend metrics
  4. Sets appropriate thresholds for each testing type

Ensuring script realism

For load tests to provide accurate insights, they must realistically simulate user behavior. Consider these factors:

User journey recording

Starting with recordings of real user sessions helps create authentic test scripts. Many tools can capture browser interactions and convert them to load testing scripts, though these typically require refinement before use.
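For example, if you export a recording as a HAR file (most browser developer tools can do this), a rough starting script can be generated by replaying the recorded requests. The sketch below uses the hypothetical framework from this guide and a hypothetical file named recorded-session.har; a real conversion would still need correlation, realistic think times, and cleanup added by hand.

har-replay.js
import http from 'loadtest-library/http';
import { sleep } from 'loadtest-library/utils';

// Load a recorded browser session exported as a HAR file (hypothetical path)
const har = JSON.parse(open('./recordings/recorded-session.har'));

export function replayRecordedJourney() {
  for (const entry of har.log.entries) {
    const { method, url, headers } = entry.request;

    // Replay only GET requests to our own domain; recorded POST bodies,
    // tokens, and third-party calls need manual correlation and filtering
    if (method !== 'GET' || !url.includes('example-shop.com')) continue;

    // Convert the HAR header array into a name/value map
    const headerMap = {};
    for (const h of headers) headerMap[h.name] = h.value;

    http.get(url, { headers: headerMap });
    sleep(1); // Placeholder think time; replace with realistic pacing
  }
}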

Data correlation techniques

Web applications often generate dynamic data that must be extracted from one response and used in subsequent requests. Common examples include:

  • Session IDs
  • CSRF tokens
  • Product IDs
  • Form validation codes

Without proper correlation, scripts will fail when they attempt to use invalid or expired values. Here's a simple example:

correlation-example.js
// Extract a CSRF token from the login form (loginPageResponse is the
// response from a prior GET of the login page)
const csrfRegex = /name="csrf_token" value="([^"]+)"/;
const match = csrfRegex.exec(loginPageResponse.body);
const csrfToken = match ? match[1] : '';

// Use the token in the subsequent form submission
const formData = {
  username: 'testuser',
  password: 'password123',
  csrf_token: csrfToken
};

http.post('https://example.com/login', formData, { headers });

Resource handling

Modern websites request numerous resources when loading a page. Consider how to handle:

Static resources: Determine whether to include or exclude images, stylesheets, and JavaScript files. Include them when measuring complete user experience; exclude them when testing backend-specific performance.

Third-party requests: Avoid load testing third-party services you don't own. Most load testing tools allow you to filter out requests to external domains:

resource-filter.js
import http from 'loadtest-library/http';

export function setup() {
  return {
    // Block requests to third-party services
    blockDomains: [
      'google-analytics.com',
      'facebook.net',
      'doubleclick.net',
      'hotjar.com'
    ]
  };
}

export function handleRequest(request, ctx) {
  const url = new URL(request.url);

  if (ctx.blockDomains.some(domain => url.hostname.includes(domain))) {
    // Return empty response instead of sending request
    return {
      status: 200,
      body: ''
    };
  }

  // Process normal request
  return http.request(request);
}

Concurrent requests

Browsers typically download resources in parallel. Your testing scripts should mimic this behavior using batched requests:

concurrent-requests.js
// Send multiple requests concurrently
const responses = http.batch([
  ['GET', 'https://example.com/page.html', { headers }],
  ['GET', 'https://example.com/styles.css', { headers }],
  ['GET', 'https://example.com/script.js', { headers }],
  ['GET', 'https://example.com/image.jpg', { headers }]
]);

// Process all responses
responses.forEach((response, index) => {
  check(response, {
    [`resource ${index} loaded`]: (r) => r.status === 200
  });
});

Think time and pacing

Real users don't interact with websites at constant intervals. Include varied delays between actions:

think-time.js
// Normal distribution around 5 seconds (between 3-7 seconds typically)
function normalThinkTime(mean = 5, stdDev = 1) {
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();

  const z = Math.sqrt(-2.0 * Math.log(u)) * Math.cos(2.0 * Math.PI * v);
  return Math.max(1, Math.round(mean + z * stdDev));
}

// Use variable think time between actions
sleep(normalThinkTime());

Test data management

Using varied test data improves realism. Create data files with different user credentials, search terms, or product selections:

test-data.js
import { SharedArray } from 'loadtest-library/data';

// Load test data from a JSON file
const users = new SharedArray('users', function() {
  return JSON.parse(open('./test-data/users.json'));
});

export function userJourney() {
  // Select a random user from the data set
  const user = users[Math.floor(Math.random() * users.length)];

  // Use the user data in the test
  const loginPayload = {
    email: user.email,
    password: user.password
  };

  // Continue with the test using this user's data
  http.post('https://example.com/login', loginPayload, { headers });
}

Building a reusable testing framework

Investing time in creating a structured, reusable testing framework pays dividends as your testing needs grow:

Organizing with tags and groups

Use tags and groups to categorize requests and make metrics more meaningful:

tags-groups.js
import http from 'loadtest-library/http';
import { group } from 'loadtest-library/utils';

// Tag requests by feature area
const params = {
  tags: {
    feature: 'checkout',
    page: 'payment'
  }
};

// Group related requests
export function checkoutProcess() {
  group('View Cart', function() {
    http.get('https://example.com/cart', { headers, tags: { page: 'cart' } });
  });

  group('Checkout Form', function() {
    http.get('https://example.com/checkout', { headers, tags: { page: 'checkout' } });
  });

  group('Payment', function() {
    http.post('https://example.com/payment', paymentData, {
      headers,
      tags: { page: 'payment', type: 'transaction' }
    });
  });
}

Tags and groups make it easier to:

  • Filter metrics by feature, page, or request type
  • Create focused thresholds for critical functionality
  • Generate more meaningful reports

Implementing scenarios

Scenarios allow you to model different types of user behavior within a single test:

scenarios.js
export const options = {
  scenarios: {
    // Browsing users who just look at products
    browsers: {
      executor: 'ramping-arrival-rate',
      startRate: 0,
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 100,
      stages: [
        { target: 10, duration: '5m' },
        { target: 10, duration: '10m' },
        { target: 0, duration: '5m' }
      ],
      exec: 'browseProducts'
    },

    // Shoppers who complete purchases
    shoppers: {
      executor: 'constant-arrival-rate',
      rate: 3,
      timeUnit: '1m',
      duration: '20m',
      preAllocatedVUs: 20,
      maxVUs: 50,
      exec: 'completePurchase'
    }
  }
};

This approach creates a more realistic traffic mix by:

  • Simulating different user personas with distinct behavior patterns
  • Applying appropriate load models for each user type
  • Controlling the ratio between different activities

Script modularization

Break complex tests into reusable modules:

modules/auth.js
import http from 'loadtest-library/http';

// extractCsrfToken() and the shared `headers` object are assumed to be
// defined elsewhere in this module
export function login(username, password) {
  // Get login page to extract CSRF token
  const loginPage = http.get('https://example.com/login');
  const csrfToken = extractCsrfToken(loginPage.body);

  // Submit login form
  return http.post('https://example.com/login', {
    username,
    password,
    csrf_token: csrfToken
  }, { headers });
}

export function logout() {
  return http.get('https://example.com/logout');
}

modules/products.js
import http from 'loadtest-library/http';

export function browseCategory(categoryId) {
  return http.get(`https://example.com/category/${categoryId}`);
}

export function viewProduct(productId) {
  return http.get(`https://example.com/product/${productId}`);
}

export function addToCart(productId, quantity = 1) {
  return http.post('https://example.com/cart/add', {
    product_id: productId,
    quantity: quantity
  }, { headers });
}

e-commerce-test.js
import { login, logout } from './modules/auth.js';
import { browseCategory, viewProduct, addToCart } from './modules/products.js';
import { checkout } from './modules/checkout.js';

// extractProductIds() and pickRandom() are assumed helper functions defined elsewhere
export function shoppingJourney() {
  // Log in as test user
  login('testuser', 'password123');

  // Browse electronics category
  const categoryResponse = browseCategory('electronics');
  const productIds = extractProductIds(categoryResponse.body);

  // View random product
  const randomProductId = pickRandom(productIds);
  viewProduct(randomProductId);

  // Add to cart and checkout
  addToCart(randomProductId, 1);
  checkout();

  // Log out
  logout();
}

This modular approach:

  • Improves code reusability and maintainability
  • Makes tests easier to understand and debug
  • Allows faster development of new test scenarios

CI pipeline integration

Integrating load tests into your continuous integration pipeline ensures performance is tested consistently:

ci-pipeline.yml
performance_test:
  stage: test
  script:
    - npm install
    - npm run load-test -- --out results.json
  artifacts:
    paths:
      - results.json
    reports:
      performance: results.json
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
    - if: $CI_COMMIT_TAG
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual

Testing environment considerations

Where and how you run your tests significantly impacts the results and their relevance to real-world performance.

Pre-production vs. production testing

Pre-production testing helps identify issues early before they affect users. This approach:

  • Allows more aggressive testing without impacting customers
  • Enables early detection of performance regressions
  • Supports iterative performance improvements

However, pre-production environments may differ from production in ways that affect test results:

  • Different infrastructure or scaling
  • Missing or simulated integrations
  • Artificial data sets

Production testing provides the most accurate results but carries more risk. To minimize disruption:

  • Run tests during off-peak hours
  • Ramp up load progressively so you can abort before causing issues
  • Start with read-only operations before testing writes
  • Implement feature flags to control test exposure
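The ramp-up and abort guidance above can be encoded directly in the test configuration. The sketch below uses the hypothetical framework from earlier examples and assumes it supports abort-on-fail thresholds; the stage sizes, durations, and limits are illustrative.

production-safe-ramp.js
export const options = {
  scenarios: {
    cautious_ramp: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '5m', target: 10 },   // Start very small
        { duration: '5m', target: 25 },   // Step up gradually
        { duration: '10m', target: 50 },  // Hold at the target level
        { duration: '5m', target: 0 }     // Ramp back down
      ],
      exec: 'browseProducts'
    }
  },
  thresholds: {
    // Abort the whole run early if the system shows signs of distress
    'http_req_failed': [{ threshold: 'rate<0.02', abortOnFail: true }],
    'http_req_duration': [{ threshold: 'p95<1000', abortOnFail: true }]
  }
};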

A comprehensive strategy typically includes both approaches:

  1. Regular pre-production testing for early detection
  2. Periodic production testing to validate real-world performance

Load generator location

Where you generate load from affects network latency, routing, and other factors:

On-premises load generators work well for:

  • Internal applications
  • Early development testing
  • Applications where most users are on the corporate network

Cloud-based load generators better simulate:

  • Geographically distributed users
  • Public-facing applications
  • Realistic network conditions

For global applications, distribute load generators across regions proportional to your user base:

distributed-load.js
export const options = {
  ext: {
    loadimpact: {
      distribution: {
        'amazon:us:ashburn': { loadZone: 'amazon:us:ashburn', percent: 60 },
        'amazon:eu:frankfurt': { loadZone: 'amazon:eu:frankfurt', percent: 30 },
        'amazon:ap:singapore': { loadZone: 'amazon:ap:singapore', percent: 10 }
      }
    }
  }
};

Performance metrics and thresholds

Establishing clear performance thresholds provides objective pass/fail criteria:

thresholds.js
export const options = {
  thresholds: {
    // Protocol-level metrics
    'http_req_duration': ['p95<500', 'p99<1000'],
    'http_req_duration{staticAsset:true}': ['p95<100'],
    'http_req_duration{page:checkout}': ['p95<300'],

    // Error rates
    'http_req_failed': ['rate<0.01'],  // Less than 1% error rate

    // Browser-level metrics
    'browser_dom_content_loaded': ['p95<2000'],
    'browser_first_contentful_paint': ['p95<1500'],

    // Custom business metrics
    'checkout_completion_rate': ['value>0.95']  // 95% checkout completion
  }
};

Effective thresholds:

  • Align with business requirements and user expectations
  • Provide different criteria for different types of requests
  • Consider both average and percentile-based measurements
  • Include error rates and custom business metrics

Analyzing test results

Raw performance data becomes actionable when properly analyzed:

Time-series visualization

Plot metrics over time to identify:

  • When performance degraded
  • Whether degradation correlates with user count
  • If specific scenarios caused problems
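If your tool exports raw samples rather than ready-made charts, a simple way to build a time series is to bucket samples into fixed windows and summarize each window. A minimal sketch, assuming an array of { timestamp, value } response-time samples exported from the test:

time-series.js
// Group raw response-time samples into fixed windows (e.g. 10 seconds)
// and compute an average per window so trends over time become visible.
function bucketByWindow(samples, windowMs = 10000) {
  const buckets = new Map();

  for (const { timestamp, value } of samples) {
    const windowStart = Math.floor(timestamp / windowMs) * windowMs;
    if (!buckets.has(windowStart)) buckets.set(windowStart, []);
    buckets.get(windowStart).push(value);
  }

  // Return one point per window: [windowStart, average response time]
  return [...buckets.entries()]
    .sort((a, b) => a[0] - b[0])
    .map(([start, values]) => [start, values.reduce((s, v) => s + v, 0) / values.length]);
}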

Distribution analysis

Examine response time distributions to understand:

  • Typical performance (median)
  • Worst-case scenarios (95th/99th percentiles)
  • Outliers that may indicate specific issues
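Percentiles are straightforward to compute from raw response times if your tool doesn't report them directly. A minimal sketch using the nearest-rank method, assuming an array of response times in milliseconds:

percentiles.js
// Nearest-rank percentile over an array of response times (in ms)
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: summarize a set of measured response times
const durations = [120, 135, 150, 180, 210, 250, 320, 400, 650, 1200];
console.log('p50:', percentile(durations, 50), 'ms');
console.log('p95:', percentile(durations, 95), 'ms');
console.log('p99:', percentile(durations, 99), 'ms');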

Error analysis

Categorize and count errors to prioritize fixes:

  • HTTP status codes (4xx, 5xx)
  • Response validation failures
  • Connection problems
  • Browser render issues

error-analysis.js
// During the test, track errors by type
const errorCounters = new Map();

export function handleResponse(response) {
  if (response.status >= 400) {
    const errorType = response.status >= 500 ? 'Server Error' : 'Client Error';
    const errorKey = `${errorType}: ${response.status}`;

    const currentCount = errorCounters.get(errorKey) || 0;
    errorCounters.set(errorKey, currentCount + 1);
  }
}

// At the end of the test, summarize errors
export function handleSummary(data) {
  console.log('Error Summary:');
  for (const [errorType, count] of errorCounters.entries()) {
    console.log(`${errorType}: ${count} occurrences`);
  }
}

Final thoughts

Load testing is an essential practice for ensuring web applications perform reliably under real-world conditions. By combining protocol-based and browser-based approaches, you can create comprehensive tests that evaluate both frontend and backend performance while maintaining reasonable resource requirements.

The most effective load testing strategy is tailored to your specific application needs, incorporating realistic user scenarios, appropriate test environments, and clear performance thresholds.

By implementing the techniques described in this guide, you'll be able to identify performance bottlenecks before they impact users and deliver a consistently responsive experience regardless of traffic volume.

Remember that load testing isn't a one-time activity but an ongoing practice that should evolve alongside your application. Regular testing throughout the development lifecycle helps catch performance regressions early and ensures your system continues to meet performance expectations as it grows.
