Efficient S3 File Uploads: Speed & Large File Handling in NestJS


Uploading files efficiently to S3 isn't just about getting data from point A to point B; it's about doing it fast, reliably, and at scale. Whether you're handling 5MB images or 5GB videos, the right approach makes all the difference.



🚀 The Core Strategy: Direct-to-S3 Uploads

Never route files through your server. This is the #1 performance killer.



❌ The Wrong Way (Slow & Resource-Heavy)

Client → Your Server → S3

Problems: server bottleneck, memory spikes, timeouts, and a hard ceiling on scalability.



✅ The Right Way (Fast & Scalable)

Client → S3 (directly)

Your Server → Generates presigned URL only

Benefits: Maximum speed, no server memory issues, infinite scalability




📦 Implementation: Basic Presigned URL Upload



S3 Service Setup

// src/s3/s3.service.ts
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class S3Service {
  private readonly s3Client: S3Client;
  private readonly bucketName: string;

  constructor(private configService: ConfigService) {
    this.s3Client = new S3Client({
      region: this.configService.get('AWS_REGION'),
      credentials: {
        accessKeyId: this.configService.get('AWS_ACCESS_KEY_ID'),
        secretAccessKey: this.configService.get('AWS_SECRET_ACCESS_KEY'),
      },
      // Performance optimizations
      maxAttempts: 3,
      requestHandler: {
        connectionTimeout: 5000,
        socketTimeout: 5000,
      },
    });
    this.bucketName = this.configService.get('AWS_BUCKET_NAME');
  }

  async generatePresignedUrl(
    filename: string,
    contentType: string,
    fileSize: number,
  ): Promise<{ uploadUrl: string; key: string }> {
    const key = `uploads/${uuidv4()}/${filename}`;

    const command = new PutObjectCommand({
      Bucket: this.bucketName,
      Key: key,
      ContentType: contentType,
      ContentLength: fileSize,
    });

    // Short expiration for security, adequate for upload speed
    const uploadUrl = await getSignedUrl(this.s3Client, command, {
      expiresIn: 900, // 15 minutes
    });

    return { uploadUrl, key };
  }
}
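The ConfigService lookups above assume four environment variables are set. A sample .env with placeholder values:

```shell
# .env (all values are placeholders; use your own credentials)
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_BUCKET_NAME=my-upload-bucket
```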



Upload Controller

// src/upload/upload.controller.ts
import { Controller, Post, Body, UseGuards } from '@nestjs/common';
import { UploadService } from './upload.service';
import { JwtAuthGuard } from '../auth/jwt-auth.guard';

@Controller('api/upload')
@UseGuards(JwtAuthGuard)
export class UploadController {
  constructor(private readonly uploadService: UploadService) {}

  @Post('initiate')
  async initiateUpload(@Body() body: {
    filename: string;
    contentType: string;
    fileSize: number;
  }) {
    return await this.uploadService.initiateUpload(
      body.filename,
      body.contentType,
      body.fileSize,
    );
  }
}
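The controller injects an UploadService that isn't shown above. A minimal sketch of what it might look like; the 100MB cap, the allowed-type whitelist, and plain Error in place of Nest's BadRequestException are all assumptions. In a real app, decorate the class with @Injectable() and let Nest inject S3Service.

```typescript
// src/upload/upload.service.ts (hypothetical sketch)

interface PresignResult {
  uploadUrl: string;
  key: string;
}

// Structural stand-in for the S3Service.generatePresignedUrl shown earlier
interface PresignService {
  generatePresignedUrl(
    filename: string,
    contentType: string,
    fileSize: number,
  ): Promise<PresignResult>;
}

const MAX_SINGLE_UPLOAD = 100 * 1024 * 1024; // larger files should go multipart
const ALLOWED_TYPES = new Set(['image/png', 'image/jpeg', 'video/mp4']);

export class UploadService {
  constructor(private readonly s3Service: PresignService) {}

  async initiateUpload(
    filename: string,
    contentType: string,
    fileSize: number,
  ): Promise<PresignResult> {
    // Validate before signing: a presigned URL is a capability, so never
    // hand one out for a request you wouldn't accept yourself
    if (fileSize <= 0 || fileSize > MAX_SINGLE_UPLOAD) {
      throw new Error('File size out of range; use the multipart endpoint');
    }
    if (!ALLOWED_TYPES.has(contentType)) {
      throw new Error('Unsupported content type');
    }
    return this.s3Service.generatePresignedUrl(filename, contentType, fileSize);
  }
}
```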



Client-Side Upload with Progress

async function uploadFileToS3(file) {
  // Step 1: Get presigned URL
  const response = await fetch('/api/upload/initiate', {
    method: 'POST',
    headers: { 
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`,
    },
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type,
      fileSize: file.size,
    }),
  });

  const { uploadUrl, key } = await response.json();

  // Step 2: Upload directly to S3 with progress tracking
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();

    xhr.upload.addEventListener('progress', (e) => {
      if (e.lengthComputable) {
        const percentage = Math.round((e.loaded / e.total) * 100);
        updateProgressBar(percentage);
      }
    });

    xhr.addEventListener('load', () => {
      if (xhr.status === 200) {
        resolve(key);
      } else {
        reject(new Error('Upload failed'));
      }
    });

    xhr.addEventListener('error', () => reject(new Error('Upload failed')));
    xhr.addEventListener('abort', () => reject(new Error('Upload cancelled')));

    xhr.open('PUT', uploadUrl);
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.send(file);
  });
}




🚄 Multipart Upload: For Large Files (100MB+)

For files over 100MB, use S3’s multipart upload. This provides:

  • Faster uploads: Parallel part uploads
  • Resumable uploads: Retry individual failed parts
  • Better reliability: Network issues don’t kill the entire upload
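To make the reliability gain concrete, here's the part math for a 2GB file split into the 10MB chunks used below:

```typescript
// Part math for a 2 GiB file with 10 MiB chunks: each part uploads
// (and retries) independently, so one flaky chunk never restarts
// the whole transfer.
const fileSize = 2 * 1024 ** 3;   // 2 GiB in bytes
const chunkSize = 10 * 1024 ** 2; // 10 MiB in bytes
const numParts = Math.ceil(fileSize / chunkSize);
console.log(numParts); // 205
```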



Multipart Upload Service

// src/s3/multipart.service.ts
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  AbortMultipartUploadCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class MultipartService {
  private readonly s3Client: S3Client;
  private readonly bucketName: string;
  // Optimal chunk size for network efficiency
  private readonly CHUNK_SIZE = 10 * 1024 * 1024; // 10MB

  constructor(private configService: ConfigService) {
    this.s3Client = new S3Client({
      region: this.configService.get('AWS_REGION'),
      credentials: {
        accessKeyId: this.configService.get('AWS_ACCESS_KEY_ID'),
        secretAccessKey: this.configService.get('AWS_SECRET_ACCESS_KEY'),
      },
    });
    this.bucketName = this.configService.get('AWS_BUCKET_NAME');
  }

  async initiateMultipartUpload(
    filename: string,
    contentType: string,
  ) {
    const key = `uploads/${uuidv4()}/${filename}`;

    const command = new CreateMultipartUploadCommand({
      Bucket: this.bucketName,
      Key: key,
      ContentType: contentType,
    });

    const response = await this.s3Client.send(command);

    return {
      uploadId: response.UploadId,
      key: key,
      chunkSize: this.CHUNK_SIZE,
    };
  }

  async getPresignedPartUrl(
    key: string,
    uploadId: string,
    partNumber: number,
  ): Promise<string> {
    const command = new UploadPartCommand({
      Bucket: this.bucketName,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber,
    });

    // Longer expiration for large file parts
    return await getSignedUrl(this.s3Client, command, { 
      expiresIn: 3600 // 1 hour
    });
  }

  async completeMultipartUpload(
    key: string,
    uploadId: string,
    parts: Array<{ PartNumber: number; ETag: string }>,
  ) {
    const command = new CompleteMultipartUploadCommand({
      Bucket: this.bucketName,
      Key: key,
      UploadId: uploadId,
      MultipartUpload: { 
        Parts: parts.sort((a, b) => a.PartNumber - b.PartNumber) 
      },
    });

    return await this.s3Client.send(command);
  }

  async abortMultipartUpload(key: string, uploadId: string) {
    const command = new AbortMultipartUploadCommand({
      Bucket: this.bucketName,
      Key: key,
      UploadId: uploadId,
    });

    await this.s3Client.send(command);
  }
}



Multipart Controller

// src/upload/multipart.controller.ts
import { Controller, Post, Body, UseGuards } from '@nestjs/common';
import { MultipartService } from '../s3/multipart.service';
import { JwtAuthGuard } from '../auth/jwt-auth.guard';

@Controller('api/upload/multipart')
@UseGuards(JwtAuthGuard)
export class MultipartController {
  constructor(private readonly multipartService: MultipartService) {}

  @Post('initiate')
  async initiateMultipart(
    @Body() body: { filename: string; contentType: string; fileSize: number },
  ) {
    return await this.multipartService.initiateMultipartUpload(
      body.filename,
      body.contentType,
    );
  }

  @Post('part-url')
  async getPartUrl(
    @Body() body: { key: string; uploadId: string; partNumber: number },
  ) {
    const url = await this.multipartService.getPresignedPartUrl(
      body.key,
      body.uploadId,
      body.partNumber,
    );
    return { url };
  }

  @Post('complete')
  async completeMultipart(
    @Body() body: {
      key: string;
      uploadId: string;
      parts: Array<{ PartNumber: number; ETag: string }>;
    },
  ) {
    return await this.multipartService.completeMultipartUpload(
      body.key,
      body.uploadId,
      body.parts,
    );
  }

  @Post('abort')
  async abortMultipart(@Body() body: { key: string; uploadId: string }) {
    await this.multipartService.abortMultipartUpload(body.key, body.uploadId);
    return { success: true };
  }
}



Client-Side Multipart Upload with Parallel Parts

class MultipartUploader {
  constructor(file, options = {}) {
    this.file = file;
    this.chunkSize = options.chunkSize || 10 * 1024 * 1024; // 10MB
    this.maxConcurrent = options.maxConcurrent || 3; // Upload 3 parts simultaneously
    this.onProgress = options.onProgress || (() => {});
    this.uploadedBytes = 0;
  }

  async upload() {
    // Step 1: Initiate multipart upload
    const initResponse = await fetch('/api/upload/multipart/initiate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${token}`,
      },
      body: JSON.stringify({
        filename: this.file.name,
        contentType: this.file.type,
        fileSize: this.file.size,
      }),
    });

    const { uploadId, key, chunkSize } = await initResponse.json();
    this.chunkSize = chunkSize;

    // Step 2: Calculate parts
    const numParts = Math.ceil(this.file.size / this.chunkSize);
    const parts = [];

    // Step 3: Upload parts in parallel (with concurrency limit)
    const uploadQueue = [];
    for (let partNumber = 1; partNumber <= numParts; partNumber++) {
      uploadQueue.push(partNumber);
    }

    const completedParts = [];
    while (uploadQueue.length > 0) {
      // Take up to maxConcurrent parts
      const batch = uploadQueue.splice(0, this.maxConcurrent);

      const batchPromises = batch.map(partNumber => 
        this.uploadPart(key, uploadId, partNumber)
      );

      const batchResults = await Promise.all(batchPromises);
      completedParts.push(...batchResults);
    }

    // Step 4: Complete multipart upload
    await fetch('/api/upload/multipart/complete', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${token}`,
      },
      body: JSON.stringify({
        key,
        uploadId,
        parts: completedParts,
      }),
    });

    return key;
  }

  async uploadPart(key, uploadId, partNumber) {
    // Get presigned URL for this part
    const urlResponse = await fetch('/api/upload/multipart/part-url', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${token}`,
      },
      body: JSON.stringify({ key, uploadId, partNumber }),
    });

    const { url } = await urlResponse.json();

    // Extract chunk
    const start = (partNumber - 1) * this.chunkSize;
    const end = Math.min(start + this.chunkSize, this.file.size);
    const chunk = this.file.slice(start, end);

    // Upload chunk
    return new Promise((resolve, reject) => {
      const xhr = new XMLHttpRequest();

      xhr.upload.addEventListener('progress', (e) => {
        if (e.lengthComputable) {
          // e.loaded is cumulative *per part*, so track bytes per part
          // instead of summing every progress event (which over-counts)
          this.partProgress = this.partProgress || {};
          this.partProgress[partNumber] = e.loaded;
          const uploadedBytes = Object.values(this.partProgress)
            .reduce((sum, bytes) => sum + bytes, 0);
          this.onProgress((uploadedBytes / this.file.size) * 100);
        }
      });

      xhr.addEventListener('load', () => {
        if (xhr.status === 200) {
          // Reading the ETag from JavaScript requires ExposeHeaders: ["ETag"]
          // in the bucket's CORS configuration
          const etag = xhr.getResponseHeader('ETag');
          if (!etag) {
            reject(new Error(`Missing ETag for part ${partNumber} (check bucket CORS ExposeHeaders)`));
            return;
          }
          resolve({
            PartNumber: partNumber,
            ETag: etag.replace(/"/g, ''),
          });
        } else {
          reject(new Error(`Part ${partNumber} upload failed`));
        }
      });

      xhr.addEventListener('error', () => 
        reject(new Error(`Part ${partNumber} upload failed`))
      );

      xhr.open('PUT', url);
      xhr.send(chunk);
    });
  }
}

// Usage
const uploader = new MultipartUploader(file, {
  maxConcurrent: 5, // Upload 5 parts at once for faster speed
  onProgress: (percentage) => {
    console.log(`Upload progress: ${percentage.toFixed(2)}%`);
    updateProgressBar(percentage);
  },
});

await uploader.upload();
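None of this works from a browser unless the bucket's CORS policy allows the PUT and exposes the ETag header. A minimal configuration via the AWS CLI (bucket name and origin are placeholders):

```shell
# Bucket CORS for browser uploads. AllowedMethods must include PUT, and
# ExposeHeaders must include ETag or JavaScript cannot read the part
# ETags needed to complete a multipart upload.
aws s3api put-bucket-cors \
  --bucket my-upload-bucket \
  --cors-configuration '{
    "CORSRules": [{
      "AllowedOrigins": ["https://app.example.com"],
      "AllowedMethods": ["PUT"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag"]
    }]
  }'
```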




⚡ Performance Optimizations



1. S3 Transfer Acceleration

Enable S3 Transfer Acceleration for up to 50–500% faster uploads over long distances:

// In S3Service constructor
this.s3Client = new S3Client({
  region: this.configService.get('AWS_REGION'),
  credentials: { /* ... */ },
  useAccelerateEndpoint: true, // Enable Transfer Acceleration
});

Setup: Enable in S3 bucket settings → Properties → Transfer Acceleration
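The same toggle can also be flipped from the AWS CLI (bucket name is a placeholder):

```shell
# Enable Transfer Acceleration on an existing bucket
aws s3api put-bucket-accelerate-configuration \
  --bucket my-upload-bucket \
  --accelerate-configuration Status=Enabled

# Verify the setting took effect
aws s3api get-bucket-accelerate-configuration --bucket my-upload-bucket
```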



2. Optimal Chunk Sizes

File Size      Recommended Chunk Size   Reason
< 100MB        Single upload            Overhead not worth it
100MB – 1GB    10MB chunks              Balance speed/reliability
1GB – 5GB      25MB chunks              Fewer API calls
> 5GB          100MB chunks             Maximum efficiency

/**
 * Calculates the optimal S3 multipart upload chunk size (in bytes)
 * based on the total file size to balance speed, reliability, and API efficiency.
 */
calculateOptimalChunkSize(fileSize: number): number {
  if (fileSize < 100 * 1024 * 1024) return fileSize; // Single upload
  if (fileSize < 1024 * 1024 * 1024) return 10 * 1024 * 1024; // 10MB
  if (fileSize < 5 * 1024 * 1024 * 1024) return 25 * 1024 * 1024; // 25MB
  return 100 * 1024 * 1024; // 100MB
}

Tips:

  • For unstable networks → smaller chunks (5–10MB) for easier retries
  • For high-speed connections → larger chunks (25–100MB) for better throughput
  • AWS caps multipart uploads at 10,000 parts, so the chunk size must be at least fileSize / 10,000
  • Combine with parallel uploads (e.g., Promise.allSettled()) to fully utilize bandwidth
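A small helper, hypothetical and not part of the services above, that enforces the 10,000-part cap by raising the chunk size when needed:

```typescript
// Raise the chunk size whenever the requested one would exceed
// S3's 10,000-part limit for a multipart upload.
function clampChunkSize(fileSize: number, desiredChunkSize: number): number {
  const MAX_PARTS = 10_000;
  const minChunkSize = Math.ceil(fileSize / MAX_PARTS);
  return Math.max(desiredChunkSize, minChunkSize);
}

// A 200 GiB file cannot use 10 MiB chunks (that would need 20,480 parts),
// so the helper bumps the chunk size to roughly 20.5 MiB.
console.log(clampChunkSize(200 * 1024 ** 3, 10 * 1024 ** 2)); // 21474837
```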



3. Parallel Upload Configuration

const OPTIMAL_CONCURRENCY = {
  // Based on network speed
  slow: 2,      // < 5 Mbps
  medium: 3,    // 5-50 Mbps
  fast: 5,      // 50-100 Mbps
  veryFast: 8,  // > 100 Mbps
};

// Auto-detect network speed
async function detectNetworkSpeed() {
  const start = Date.now();
  const response = await fetch('https://your-cdn.com/test-1mb.bin');
  await response.blob();
  const duration = (Date.now() - start) / 1000; // seconds
  const speedMbps = (1 * 8) / duration; // 1MB = 8Mb

  if (speedMbps < 5) return OPTIMAL_CONCURRENCY.slow;
  if (speedMbps < 50) return OPTIMAL_CONCURRENCY.medium;
  if (speedMbps < 100) return OPTIMAL_CONCURRENCY.fast;
  return OPTIMAL_CONCURRENCY.veryFast;
}



4. Connection Pooling & Keep-Alive

// src/s3/s3.service.ts
import { NodeHttpHandler } from '@smithy/node-http-handler';
import { Agent } from 'https';

constructor(private configService: ConfigService) {
  const agent = new Agent({
    keepAlive: true,
    maxSockets: 50, // Allow multiple concurrent connections
    keepAliveMsecs: 1000,
  });

  this.s3Client = new S3Client({
    region: this.configService.get('AWS_REGION'),
    credentials: { /* ... */ },
    requestHandler: new NodeHttpHandler({
      httpsAgent: agent,
      connectionTimeout: 5000,
      socketTimeout: 5000,
    }),
  });
}




📊 Monitoring Upload Performance

// src/upload/upload.service.ts
import { Injectable, Logger } from '@nestjs/common';

@Injectable()
export class UploadService {
  private readonly logger = new Logger(UploadService.name);

  async trackUploadMetrics(
    key: string,
    fileSize: number, // bytes
    duration: number, // seconds
  ) {
    // bytes to megabits, divided by seconds
    const speedMbps = (fileSize * 8) / (duration * 1024 * 1024);

    // Log to monitoring service (DataDog, CloudWatch, etc.)
    this.logger.log({
      event: 'upload_completed',
      key,
      fileSize,
      duration,
      speedMbps,
      timestamp: new Date(),
    });

    // Alert if speed is below threshold
    // (alertService is an app-specific injected service, not shown here)
    if (speedMbps < 1) {
      this.alertService.warn('Slow upload detected', {
        key,
        speedMbps,
      });
    }
  }
}




🛡️ Handling Upload Failures Gracefully



Retry Logic with Exponential Backoff

async function uploadPartWithRetry(chunk, url, maxRetries = 3) {
  let lastError;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await uploadChunk(chunk, url);
    } catch (error) {
      lastError = error;

      if (attempt < maxRetries) {
        // Exponential backoff: 1s after the first failure, 2s after the second
        const delay = Math.pow(2, attempt - 1) * 1000;
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
  }

  throw lastError;
}



Resume Failed Uploads

class ResumableUploader extends MultipartUploader {
  constructor(file, options = {}) {
    super(file, options);
    this.uploadState = this.loadUploadState() || {
      uploadId: null,
      key: null,
      completedParts: [],
    };
  }

  saveUploadState() {
    localStorage.setItem(
      `upload_${this.file.name}`,
      JSON.stringify(this.uploadState)
    );
  }

  loadUploadState() {
    const saved = localStorage.getItem(`upload_${this.file.name}`);
    return saved ? JSON.parse(saved) : null;
  }

  async upload() {
    // Resume existing upload if available
    if (this.uploadState.uploadId) {
      return await this.resumeUpload();
    }

    // Start new upload
    return await super.upload();
  }

  async resumeUpload() {
    const { uploadId, key, completedParts } = this.uploadState;
    const completedPartNumbers = new Set(
      completedParts.map(p => p.PartNumber)
    );

    // Upload only the parts that didn't finish last time
    const numParts = Math.ceil(this.file.size / this.chunkSize);
    const parts = [...completedParts];

    for (let partNumber = 1; partNumber <= numParts; partNumber++) {
      if (completedPartNumbers.has(partNumber)) continue;

      const part = await this.uploadPart(key, uploadId, partNumber);
      parts.push(part);

      // Persist progress so a crash mid-upload stays resumable
      this.uploadState.completedParts = parts;
      this.saveUploadState();
    }

    // Complete the upload with all parts, previously and newly uploaded
    await fetch('/api/upload/multipart/complete', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${token}`,
      },
      body: JSON.stringify({ key, uploadId, parts }),
    });

    localStorage.removeItem(`upload_${this.file.name}`);
    return key;
  }
}




📈 Speed Benchmarks

Approximate best-case times and speeds for uploading a 1GB file:

Method                              Time       Speed      Notes
Through server                      4-6 min    ~2.5 MB/s  Bottleneck
Direct presigned URL                1.5-2 min  ~8 MB/s    Good
Multipart (3 parts)                 45-60 sec  ~17 MB/s   Better
Multipart (5 parts) + Acceleration  30-40 sec  ~25 MB/s   Best



✅ Production Checklist

  • ✅ Direct-to-S3 uploads implemented
  • ✅ Multipart upload for files > 100MB
  • ✅ Parallel part uploads (3-5 concurrent)
  • ✅ S3 Transfer Acceleration enabled
  • ✅ Optimal chunk sizes configured
  • ✅ Connection pooling enabled
  • ✅ Progress tracking implemented
  • ✅ Retry logic with exponential backoff
  • ✅ Resume capability for failed uploads
  • ✅ Upload speed monitoring
  • ✅ S3 lifecycle policies for cleanup
  • ✅ CloudFront CDN for download speed
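On the lifecycle-policy item: abandoned multipart uploads keep their parts (and their storage costs) until explicitly removed. A rule like this one cleans them up automatically; bucket name, prefix, and the 7-day window are placeholders:

```shell
# Auto-abort any multipart upload left incomplete for 7 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-upload-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "uploads/" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }]
  }'
```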



🎯 Key Takeaways

  1. Never route files through your server – Use presigned URLs
  2. Use multipart uploads for large files (> 100MB)
  3. Upload parts in parallel – 3-5 concurrent uploads optimal
  4. Enable S3 Transfer Acceleration – Massive speed boost for global users
  5. Implement retry logic – Network issues happen
  6. Monitor upload speeds – Alert on degraded performance
  7. Optimize chunk sizes – Bigger files need bigger chunks

The difference between a slow upload system and a fast one often comes down to these fundamentals. Get them right, and your users will notice.


