Using FileReader.readAsDataURL() on 30MB film scans causes base64 bloat. Each image balloons into a 40MB+ string in memory, and with 36 images that’s well over 1GB of memory pressure before the browser even starts decoding them. The result: blank cells in the exported canvas, with no errors in sight. Switching to URL.createObjectURL() combined with image resizing cut memory usage by ~95%, and all 36 images now export correctly.
I was building a Contact Sheet Generator app for film photographers. You upload your scanned negatives, arrange them in a grid matching film formats (35mm, 120mm), and export a PNG image styled like a traditional darkroom contact sheet. Pretty straightforward, right?
Then I tested it with my actual film scans.
These weren’t typical smartphone photos. We’re talking high-resolution film scans at 10-30MB per image. For a 35mm contact sheet, that’s 36 images at once.
Here’s what happened: some cells in the exported contact sheet came back completely blank, with no errors anywhere in the console.
I was stumped. 🤔
Let me show you the original code:
```typescript
const handleFileSelect = useCallback(async (index: number, file: File) => {
  const reader = new FileReader()
  reader.onload = e => {
    const img = new Image()
    img.onload = () => {
      const isPortrait = img.height > img.width
      const rotate = isPortrait
      setCellData(prev =>
        new Map(prev).set(index, {
          src: e.target?.result as string, // full base64 string stored in state
          rotate,
        })
      )
    }
    img.src = e.target?.result as string
  }
  reader.readAsDataURL(file)
}, [])
```
See the issue? FileReader.readAsDataURL() was reading each entire file into memory as a base64 data URL (roughly a third larger than the original file), and that string then sat in React state for as long as the cell was filled.
With 36 images: 30MB × 36 = ~1GB of base64 strings just sitting in memory.
But wait, it gets worse. When the browser decodes those base64 strings to display the images, each one expands to ~60-90MB of raw image data. So during processing, we’re looking at 2-3GB of memory just for the images.
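The arithmetic behind that bloat is easy to sketch. These are illustrative numbers, not code from the app: base64 encodes every 3 bytes of input as 4 output characters, so a data URL is about a third larger than the file it encodes.

```typescript
// Base64 maps every 3 input bytes to 4 output characters,
// so an encoded file is ~33% larger than the original.
const base64Chars = (bytes: number): number => Math.ceil(bytes / 3) * 4

const MB = 1024 * 1024
const perImage = base64Chars(30 * MB) / MB // a 30MB scan as a data URL
const total = perImage * 36                // a full 36-frame contact sheet

console.log(perImage) // → 40 (MB of characters per image)
console.log(total)    // → 1440 (MB for all 36 frames)
```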
The browser was simply running out of memory and silently failing to load some images: hence the blank cells.
The first fix was obvious — resize the images before storing them. Film scans don’t need to be massive for a contact sheet. I added a resize step:
```typescript
const MAX_DIMENSION = 2000

const resizeImage = (file: File): Promise<string> => {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = e => {
      const img = new Image()
      img.onload = () => {
        // Calculate new dimensions (max 2000px on the long edge)
        let width = img.width
        let height = img.height
        if (width > MAX_DIMENSION || height > MAX_DIMENSION) {
          if (width > height) {
            height = (height * MAX_DIMENSION) / width
            width = MAX_DIMENSION
          } else {
            width = (width * MAX_DIMENSION) / height
            height = MAX_DIMENSION
          }
        }
        // Create canvas and resize
        const canvas = document.createElement("canvas")
        canvas.width = width
        canvas.height = height
        const ctx = canvas.getContext("2d")
        if (!ctx) {
          reject(new Error("Could not get 2D canvas context"))
          return
        }
        ctx.drawImage(img, 0, 0, width, height)
        resolve(canvas.toDataURL("image/jpeg", 0.9)) // ← 1-2MB base64
      }
      img.src = e.target?.result as string // ← still a 30MB base64, temporarily
    }
    reader.onerror = () => reject(new Error("Failed to read file"))
    reader.readAsDataURL(file) // ← still creates the 30MB base64
  })
}
```
This helped! Now we’re storing ~1-2MB per image instead of 30MB. Total: 36-72MB instead of 1GB.
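The dimension math in that snippet is pure arithmetic, so it can be pulled out into a standalone helper and sanity-checked. This is my own extraction of the same logic, assuming the 2000px cap from above:

```typescript
const MAX_DIMENSION = 2000

// Scale (width, height) down so the longest edge is at most `max`,
// preserving aspect ratio; images already within bounds pass through.
const fitWithin = (width: number, height: number, max = MAX_DIMENSION) => {
  if (width <= max && height <= max) return { width, height }
  return width > height
    ? { width: max, height: (height * max) / width }
    : { width: (width * max) / height, height: max }
}

console.log(fitWithin(6000, 4000)) // a large landscape scan, capped to 2000 wide
console.log(fitWithin(1200, 900))  // already small enough: unchanged
```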
But there was still a problem — we were briefly creating that 30MB base64 string during processing. With rapid uploads or very large files, this could still cause issues.
Then I discovered URL.createObjectURL(). Instead of loading the entire file into memory as a base64 string, it creates a blob URL: just a lightweight pointer to the underlying file.
```typescript
const resizeImage = (file: File): Promise<string> => {
  return new Promise((resolve, reject) => {
    // Create a blob URL pointer (~100 bytes, not 30MB!)
    const imageUrl = URL.createObjectURL(file)
    const img = new Image()
    img.onload = () => {
      // ... same canvas resize logic as before ...
      // Clean up the blob URL to free memory
      URL.revokeObjectURL(imageUrl)
      resolve(canvas.toDataURL("image/jpeg", 0.9))
    }
    img.onerror = () => {
      URL.revokeObjectURL(imageUrl)
      reject(new Error("Failed to load image"))
    }
    // Streams directly from the file, no 30MB base64 string
    img.src = imageUrl
  })
}
```
With readAsDataURL(), the browser had to hold the whole encoded file in memory. With URL.createObjectURL(), it streams the image data directly from the file and decodes it on demand, and URL.revokeObjectURL() releases the reference afterwards. No massive base64 string, no memory spike.
Before: ~1GB of base64 strings in state, and 2-3GB of decoded image data during processing.
After: ~36-72MB of resized JPEGs, with only a tiny blob URL alive while each image is processed.
That’s a ~95% reduction in memory usage! 🎉
With URL.createObjectURL(), you must call URL.revokeObjectURL() when done:
```typescript
// Success path
URL.revokeObjectURL(imageUrl)
resolve(canvas.toDataURL("image/jpeg", 0.9))

// Error path
img.onerror = () => {
  URL.revokeObjectURL(imageUrl)
  reject(new Error("Failed to load image"))
}
```
Forget this, and you’ve got a memory leak on your hands.
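One way to make the cleanup hard to forget is to wrap the blob URL's lifetime in a try/finally helper. This is my own sketch (`withObjectURL` is not from the project's code), but the pattern guarantees revocation on both paths:

```typescript
// Create a blob URL, hand it to `work`, and always revoke it afterwards,
// whether `work` resolves or throws.
async function withObjectURL<T>(
  blob: Blob,
  work: (url: string) => Promise<T>
): Promise<T> {
  const url = URL.createObjectURL(blob) // tiny pointer, not the file contents
  try {
    return await work(url)
  } finally {
    URL.revokeObjectURL(url) // freed on success and on error alike
  }
}
```

With this, the whole img.onload / img.onerror dance runs inside `work`, and the explicit revoke calls disappear from both branches.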
I chose MAX_DIMENSION = 2000px with 90% JPEG quality: plenty of resolution for a contact sheet cell, while keeping each stored image down to ~1-2MB. You could go lower or higher depending on your use case.
I also added error logging for debugging:

```typescript
img.onerror = () => {
  console.error(`Failed to load resized image at index ${index}`)
}
```
After implementing both optimizations, memory usage dropped by ~95%, and all 36 images exported correctly: no more blank cells.
1. Know your tools: FileReader.readAsDataURL() is convenient, but URL.createObjectURL() is more memory-efficient for large files.
2. Resize early: Don’t store full-resolution images unless you need them. Resize on upload, not on export.
3. Profile memory: Browser DevTools Memory tab is your friend. Seeing that 1GB spike made the problem obvious.
4. Clean up manually: When you create blob URLs, remember to revoke them.
You can check out the project post and live demo here: