
Node.js Fetch API: Complete Tutorial with Examples

published 4 days ago
by Robert Wilson

Key Takeaways

  • Node.js Fetch API is now stable and included by default since version 18.0.0, offering a native solution for HTTP requests without external dependencies
  • The API provides a modern, promise-based interface for making HTTP requests, with built-in support for JSON, streams, and request cancellation
  • Understanding error handling patterns is crucial - the API clearly distinguishes between network failures and HTTP status codes
  • Performance is optimized through the underlying Undici implementation, often outperforming traditional alternatives
  • The same API works in both browser and Node.js environments, enabling better code portability

Getting Started with Node.js Fetch API

Prerequisites

Before diving into the examples, ensure you have:

  • Node.js version 18.0.0 or higher installed (check with node -v)
  • Basic understanding of async/await and Promises in JavaScript
  • A code editor of your choice

Basic Syntax

The Fetch API follows a simple, intuitive syntax:

const response = await fetch(url, options); // the options argument is optional
const data = await response.json();

Making Your First Request

Let's start with a basic GET request:

async function getUser() {
  try {
    const response = await fetch('https://api.github.com/users/octocat');
    
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Failed to fetch:', error);
  }
}

getUser();

Working with Different HTTP Methods

GET Requests

// Basic GET request
const usersResponse = await fetch('https://api.example.com/users');
const users = await usersResponse.json();

// GET request with query parameters
const username = 'john';
const userResponse = await fetch(`https://api.example.com/users?name=${encodeURIComponent(username)}`);
const user = await userResponse.json();
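For multiple or dynamic query parameters, the built-in URLSearchParams class (available in both Node.js and browsers) handles encoding for you. The base URL below is illustrative:

```javascript
// URLSearchParams encodes each value, so manual encodeURIComponent calls aren't needed
const params = new URLSearchParams({ name: 'john doe', role: 'admin' });
const url = `https://api.example.com/users?${params.toString()}`;
console.log(url); // https://api.example.com/users?name=john+doe&role=admin
```

Note that URLSearchParams uses form encoding, so spaces become `+` rather than `%20`; servers accept both.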

POST Requests

const createUser = async (userData) => {
  const response = await fetch('https://api.example.com/users', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(userData)
  });
  
  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status}`);
  }
  
  return await response.json();
};

// Usage
const newUser = await createUser({
  name: 'John Doe',
  email: '[email protected]'
});

PUT and PATCH Requests

// PUT request
const updateUser = async (userId, userData) => {
  const response = await fetch(`https://api.example.com/users/${userId}`, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(userData)
  });
  
  return await response.json();
};

// PATCH request
const partialUpdate = async (userId, updates) => {
  const response = await fetch(`https://api.example.com/users/${userId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(updates)
  });
  
  return await response.json();
};

Advanced Features

Working with Headers

Understanding how to properly work with HTTP headers is crucial for effective API interactions:

const response = await fetch('https://api.example.com/data', {
  headers: {
    'Authorization': 'Bearer your-token-here',
    'Accept': 'application/json',
    'X-Custom-Header': 'custom-value'
  }
});

// Reading response headers
const contentType = response.headers.get('content-type');
const customHeader = response.headers.get('x-custom-header');

// Iterating over headers
for (const [key, value] of response.headers) {
  console.log(`${key}: ${value}`);
}

Handling Streams

// Download large file as stream
import { createWriteStream } from 'node:fs';

const downloadFile = async (url, targetFile) => {
  const response = await fetch(url);
  
  if (!response.ok) {
    throw new Error(`Download failed with status: ${response.status}`);
  }
  
  const fileStream = createWriteStream(targetFile);
  
  // response.body is a web ReadableStream, which is async-iterable in Node.js
  for await (const chunk of response.body) {
    fileStream.write(chunk);
  }
  
  fileStream.end();
};

// Upload file as stream (e.g. fs.createReadStream)
const uploadFile = async (url, fileStream) => {
  const response = await fetch(url, {
    method: 'POST',
    body: fileStream,
    duplex: 'half' // required whenever the request body is a stream
  });
  
  return response.ok;
};

Error Handling Best Practices

Implementing robust error handling is crucial for reliable applications:

Comprehensive Error Handler

class FetchError extends Error {
  constructor(message, status, data) {
    super(message);
    this.name = 'FetchError';
    this.status = status;
    this.data = data;
  }
}

async function safeFetch(url, options = {}) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), options.timeout || 5000);

  try {
    const response = await fetch(url, {
      ...options,
      signal: controller.signal,
    });

    clearTimeout(timeoutId);

    if (!response.ok) {
      const errorData = await response.json().catch(() => null);
      throw new FetchError(
        `HTTP error! status: ${response.status}`,
        response.status,
        errorData
      );
    }

    return await response.json();
  } catch (error) {
    clearTimeout(timeoutId);
    
    if (error instanceof FetchError) {
      throw error;
    }

    if (error.name === 'AbortError') {
      throw new FetchError('Request timed out', 408, null);
    }

    throw new FetchError('Network error', 0, error.message);
  }
}

Retry Mechanism

Implementing a proper retry mechanism can significantly improve your application's reliability:

async function fetchWithRetry(url, options = {}, maxRetries = 3) {
  let lastError;
  
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await safeFetch(url, options);
    } catch (error) {
      lastError = error;
      
      // Don't retry client errors, except 408, which safeFetch uses for timeouts
      if (error.status && error.status !== 408 && error.status < 500) {
        throw error;
      }
      
      // Exponential backoff
      await new Promise(r => setTimeout(r, Math.pow(2, i) * 1000));
    }
  }
  
  throw lastError;
}

Performance Optimization

Connection Pooling

import { Agent } from 'undici';

const agent = new Agent({
  keepAliveTimeout: 60000,
  keepAliveMaxTimeout: 60000,
  connections: 10
});

const response = await fetch('https://api.example.com', {
  dispatcher: agent
});

Request Batching

async function batchRequests(urls, batchSize = 5) {
  const results = [];
  
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    const promises = batch.map(url => safeFetch(url));
    
    const batchResults = await Promise.allSettled(promises);
    results.push(...batchResults);
  }
  
  return results;
}

Migrating from Other HTTP Clients

From Axios

// Axios
const response = await axios.get(url, {
  headers: { Authorization: 'Bearer token' }
});
const data = response.data;

// Fetch
const response = await fetch(url, {
  headers: { Authorization: 'Bearer token' }
});
const data = await response.json();

// Axios
axios.post(url, data, {
  headers: { 'Content-Type': 'application/json' }
});

// Fetch
fetch(url, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(data)
});

Security Considerations

  • Always validate SSL certificates in production environments
  • Set appropriate request timeouts to prevent DoS attacks
  • Validate response content types before parsing
  • Use environment variables for sensitive data like API keys
  • Implement rate limiting for outgoing requests
  • Be cautious with automatic redirects
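Two of these practices can be sketched in a few lines. The helper name, URL handling, and the 5-second limit below are illustrative choices, not prescriptions:

```javascript
// Validate the content type before parsing, instead of trusting the server
async function parseJsonSafely(response) {
  const contentType = response.headers.get('content-type') ?? '';
  if (!contentType.includes('application/json')) {
    throw new Error(`Expected JSON but received: ${contentType || 'no content-type'}`);
  }
  return response.json();
}

// AbortSignal.timeout (Node.js 17.3+) bounds the request without manual timers
const fetchJson = (url) =>
  fetch(url, { signal: AbortSignal.timeout(5000) }).then(parseJsonSafely);
```

This fails fast on an HTML error page served with a 200 status, a common failure mode when fetching behind proxies or captive portals.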

Developer Experiences

Technical discussions across various platforms reveal interesting insights about how the Node.js Fetch API is being received in real-world applications. One particularly notable aspect that developers appreciate is the API's clear separation between network-level and application-level errors. Engineers point out that this design choice aligns well with HTTP's architectural principles - a 404 or 500 response represents a successful HTTP transaction, even if it indicates an application-level error.
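The distinction is easy to demonstrate against a throwaway local server (the server and addresses below are purely illustrative):

```javascript
import { createServer } from 'node:http';
import { once } from 'node:events';

// Minimal local server that answers every request with 404
const server = createServer((req, res) => {
  res.writeHead(404, { 'content-type': 'application/json' });
  res.end(JSON.stringify({ error: 'not found' }));
});
server.listen(0);
await once(server, 'listening');
const { port } = server.address();

// The 404 resolves normally: the HTTP transaction itself succeeded
const response = await fetch(`http://127.0.0.1:${port}/missing`);
console.log(response.ok, response.status); // false 404
await response.json(); // consume the body to free the connection

// Only network-level failures reject the promise
let networkError;
try {
  await fetch('http://127.0.0.1:1/'); // nothing listens on port 1
} catch (error) {
  networkError = error; // TypeError: fetch failed
}
console.log(networkError instanceof TypeError);

server.closeAllConnections?.();
server.close();
```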

The developer community particularly values the API's simplicity and composability. Several senior engineers highlight how Fetch's function-based approach makes it easy to wrap and extend, unlike more complex class-based HTTP clients. This simplicity also translates into better testing capabilities, as mocking a single function is straightforward. However, some teams note that this minimalist approach means writing additional error handling code, particularly for those transitioning from libraries like Axios that automatically reject promises on non-2xx status codes.
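That composability can be sketched with plain higher-order functions; the wrapper names, base URL, and token below are invented for illustration:

```javascript
// Each wrapper takes a fetch-like function and returns an enhanced one
const withAuth = (fetchFn, token) => (url, options = {}) =>
  fetchFn(url, {
    ...options,
    headers: { ...options.headers, Authorization: `Bearer ${token}` },
  });

const withBaseUrl = (fetchFn, base) => (path, options) =>
  fetchFn(new URL(path, base).toString(), options);

// Compose behaviors without modifying the underlying client
const apiFetch = withAuth(withBaseUrl(fetch, 'https://api.example.com'), 'example-token');
```

In tests, the same wrappers can be applied to a stub function instead of the real fetch, which is exactly the mocking benefit engineers describe.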

Another significant theme in community discussions is the API's role in code portability between browser and Node.js environments. Development teams report fewer configuration issues and reduced dependency management challenges when using the native Fetch API. This is especially valuable for full-stack applications and libraries that need to work in both environments. However, some developers point out that differences still exist, particularly around features like CORS and cookie handling, which need to be carefully considered in cross-environment code.

The integration with Node's streaming capabilities has also received positive attention from the community, particularly for handling large files and real-time data. However, developers working on more complex applications note that additional utilities are often needed for common tasks like request timeouts, retries, and advanced error handling, leading many teams to build their own utility wrappers around the basic Fetch API. These capabilities make it particularly useful for tasks like web scraping and data processing.

Conclusion

The Node.js Fetch API represents a significant step forward in standardizing HTTP requests across JavaScript environments. Its native implementation in Node.js 18+ provides excellent performance through the Undici engine while maintaining a familiar Promise-based interface that developers know from browser environments.

While some developers may initially find its error handling approach different from other HTTP clients, the clear separation between network and application errors aligns well with HTTP principles and promotes better error handling practices. The API's simplicity and extensibility make it an excellent choice for both simple scripts and complex applications.

Robert Wilson
Senior Content Manager
Robert brings 6 years of digital storytelling experience to his role as Senior Content Manager. He's crafted strategies for both Fortune 500 companies and startups. When not working, Robert enjoys hiking the PNW trails and cooking. He holds a Master's in Digital Communication from University of Washington and is passionate about mentoring new content creators.