If you just need one, use our image metadata stripper.
Why this matters
Photos often leak more than the pixels. GPS coordinates can expose where the image was taken. Device model and software tags can identify what created it. Timestamps can anchor it in time. Many image cleaning sites ask you to upload the file to remove that data. Now the photo and the hidden data are both sitting on someone else's server before anything is cleaned.
If a metadata remover needs a server, you have traded one leak for another. A local architecture fixes the problem at the right layer.
Be precise about the claim. A browser re-encode is the right way to remove the obvious location, device, and timestamp data embedded in common image formats. That does not make it a forensic-grade file scrubber.
The local architecture
- Use the browser file picker or drag and drop so the image stays local.
- Parse EXIF or related metadata directly from the file in memory with a local parser such as exifr.
- Render the decoded image into a canvas.
- Export a fresh blob from that canvas. The cleaned file keeps the rendered pixels but drops the original metadata payload from the source file.
- Where the browser supports it, export back to the same format instead of forcing everything into PNG.
- Offer the cleaned file for download without any upload step, storage layer, or analytics event.
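The same-format export step can be sketched as a small pure function. Browsers are only required to support PNG encoding in `canvas.toBlob`; JPEG and WebP encoders are common but not guaranteed. The names `pickOutputType` and `CANVAS_EXPORT_TYPES` are illustrative, not part of any browser API:

```javascript
// Export formats commonly supported by canvas encoders. PNG is the
// only type browsers are required to support, so it is the fallback.
const CANVAS_EXPORT_TYPES = new Set(['image/jpeg', 'image/png', 'image/webp']);

// Keep the source format where possible; otherwise fall back to PNG.
function pickOutputType(sourceType) {
  return CANVAS_EXPORT_TYPES.has(sourceType) ? sourceType : 'image/png';
}
```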
Minimal implementation
The example below uses a local copy of exifr to inspect metadata, then uses canvas re-encoding to remove it from the cleaned copy. No API calls. No upload fallback. No CDN.
<input type="file" id="image" accept="image/jpeg,image/png,image/webp" />
<img id="preview" alt="" />
<button id="clean" disabled>Strip metadata</button>
<script src="./exifr-lite.umd.js"></script>
<script>
  const imageInput = document.getElementById('image');
  const preview = document.getElementById('preview');
  const cleanButton = document.getElementById('clean');
  let sourceFile;

  imageInput.addEventListener('change', async (event) => {
    sourceFile = event.target.files[0];
    if (!sourceFile) return;
    // Read metadata from the in-memory file; nothing leaves the device.
    const tags = await exifr.parse(sourceFile, {
      exif: true,
      gps: true,
      tiff: true,
      xmp: true,
      mergeOutput: true
    });
    console.log('Local metadata summary:', tags);
    preview.src = URL.createObjectURL(sourceFile);
    cleanButton.disabled = false;
  });

  cleanButton.addEventListener('click', async () => {
    const img = new Image();
    img.src = URL.createObjectURL(sourceFile);
    await img.decode();
    // Re-encoding through a canvas keeps the rendered pixels and drops
    // the metadata payload carried by the source file.
    const canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    URL.revokeObjectURL(img.src);
    // Export in the source format where possible; toBlob falls back to
    // PNG for unsupported types, so the extension fallback matches.
    const outputType = sourceFile.type || 'image/png';
    const extension =
      { 'image/jpeg': '.jpg', 'image/webp': '.webp' }[outputType] || '.png';
    canvas.toBlob((blob) => {
      const a = document.createElement('a');
      a.download = 'image-clean' + extension;
      a.href = URL.createObjectURL(blob);
      a.click();
    }, outputType, 0.92);
  });
</script>
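To check the claim rather than assert it, a cleaned JPEG can be scanned locally for a leftover EXIF APP1 segment before the scan data begins. This is a rough sanity check on JPEG bytes only, not a forensic audit, and `hasExifSegment` is an illustrative helper rather than part of exifr:

```javascript
// Rough check: walk the JPEG marker segments before the scan data and
// look for an APP1 (0xFFE1) segment whose payload starts with "Exif".
// A freshly re-encoded canvas export should report false.
function hasExifSegment(bytes) {
  // JPEG files start with the SOI marker 0xFFD8.
  if (bytes.length < 4 || bytes[0] !== 0xff || bytes[1] !== 0xd8) return false;
  let offset = 2;
  while (offset + 4 <= bytes.length && bytes[offset] === 0xff) {
    const marker = bytes[offset + 1];
    // SOS (0xDA) starts entropy-coded data; no metadata segments follow.
    if (marker === 0xda) break;
    const length = (bytes[offset + 2] << 8) | bytes[offset + 3];
    if (marker === 0xe1) {
      // An EXIF APP1 payload begins with the "Exif" signature.
      const sig = String.fromCharCode(...bytes.slice(offset + 4, offset + 8));
      if (sig === 'Exif') return true;
    }
    offset += 2 + length;
  }
  return false;
}
```

In the browser, the cleaned blob's bytes are available via `new Uint8Array(await blob.arrayBuffer())`.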
What to avoid
- CDN script loads. Even if the tool runs locally after load, pulling a parser from a CDN still leaks the visit and page context.
- Server-side cleaning. If the image leaves the device first, the privacy claim is already broken.
- Overclaiming. Say what the tool actually does. It removes the original metadata from the cleaned copy. Do not pretend it is a universal forensic scrubber.
- Telemetry. Analytics and session logging on a privacy utility are an own goal.
- Feature creep. A metadata stripper should strip metadata. It does not need to become an image lab.
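The no-network stance above can be enforced rather than promised. A Content-Security-Policy along these lines makes the browser itself block any fetch, beacon, or CDN load. This is an illustrative policy, assuming the inline script is moved into a local file so that 'self' covers it; adjust it to the page's actual assets:

```http
Content-Security-Policy: default-src 'none'; script-src 'self'; img-src 'self' blob:; style-src 'self'
```

The `blob:` source in `img-src` is needed because the preview and download both use object URLs.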
If a task can run locally, it should run locally. Image metadata stripping is one of the clearest examples. Do not ask users to trust a server for a job that never needed one.