Why I stopped uploading client photos to image compressors
An opinion piece on why browser-side image compression has quietly become the right default for client-bound work, where cloud compressors still earn their keep, and the honest limitations of staying local.
A few years ago I had a 4 GB folder of unpublished product shots from a client who later turned out to have a fairly aggressive in-house counsel. The brief was simple: optimize the images for the launch site and email me the sizes by Friday. I dragged the folder into the well-known cloud compressor that everyone in the design world knew, hit batch process, and went to lunch.
The compressed images came back. The launch went well. Months later, someone from the client's legal team asked, very politely, where exactly those images had been processed and whether the platform had retained any copies. I had to read terms of service for an hour to give them a half-confident answer. The exact wording was something like "uploaded files may be cached for up to 24 hours for service operation," which is fine, except 24 hours is a long time when the photos in question are under a strict pre-launch embargo.
Nothing bad happened. The client did not break out the lawyers, the compressor did not get hacked, the files did not surface on a torrent site. But the entire experience was the kind of thing that makes you ask: why did I upload those at all? They were already on my laptop. The compression algorithm runs on a CPU, and I have a CPU.
I have been compressing images locally ever since. This is the case for that habit, an honest acknowledgement of where it still falls short, and the rough rules I have settled on for when local-first compression is and is not the right call.

The sales pitch is also true
The standard argument for browser-side or local image compression is privacy. That argument is real. When the compression runs on your machine via WebAssembly, the file never crosses the network. You do not need to read terms of service. You do not need to ask "what happens if this service gets breached." You do not need to assume that "we delete your files immediately" is exactly true. The file is on your disk before you start and on your disk when you finish; the bytes never leave.
For client work bound by NDAs, embargo agreements, or basic professional courtesy, that property changes the math.
The other arguments that get less attention but matter more in practice:
No file size cap. Cloud compressors either gate large files behind paid tiers or reject them outright. The free tier of one well-known service caps individual uploads at 5 MB; a 4 GB folder of phone photos would not even fit. Browser-side compression is bounded only by your RAM, which on a modern laptop comfortably handles batches in the hundreds of files.
No queue. When the compressor is in your browser tab, you are the only customer. Cloud services queue uploads behind other users during peak hours, which is invisible until you are watching a progress bar inch forward at 3 PM Pacific.
No round-trip recompression. Some cloud services apply their own re-encoding stack on top of the format conversion you asked for. They claim to be "smart" about this. They are sometimes wrong, especially with PNG inputs that they decide to convert to "optimized" lossy WebP without asking. Local compression does exactly what you tell it to do.
Bandwidth. Uploading a 200 MB folder of camera raw files to a compression service and downloading the compressed versions back is, depending on your home connection, a five-to-twenty-minute operation that does nothing visible except move bytes around. The local pipeline finishes in the time it takes to load the WebAssembly module.
Where the cloud still earns its keep
I want to argue this honestly. The cloud-compression default is wrong for a meaningful slice of work, but it is not wrong for everything. There are three cases where I still use a cloud service or recommend one.
Continuous CDN integration. If your build pipeline uploads images to a CDN that auto-optimizes on serve (Cloudflare Images, Cloudinary, ImageKit, the Fastly Image Optimizer, Vercel's image route), the per-image optimization is happening at edge time anyway. There is no "staging step" to do locally. The CDN is doing what a desktop tool used to do, with the side effect of also handling responsive variants and CDN delivery. For sites that ship dozens of new images a week, this is genuinely the right architecture.
Very large one-off jobs. If you have a 50 GB archive of historical photos to convert from JPEG to AVIF, doing that in a browser tab is the wrong tool. The right tool is a server-side pipeline you spin up, run for an hour, and tear down. There are services that do this well; there are also dozen-line scripts that do it on a rented EC2 instance for less money (a sketch of that kind of script follows after these three cases).
Workflows where the client controls the toolchain. If the client says "use this service, here are the credentials," that is the service. Picking your own tool because it has a better privacy story is a fight you will lose at the Friday deadline.
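On that second case, to make "dozen-line script" concrete: here is roughly the shape of the thing, a minimal sketch assuming Node and the sharp library (my choice for illustration, not a recommendation of any specific stack), with the directory names and the quality setting as placeholders.

```ts
// Batch JPEG -> AVIF conversion sketch. Assumes: Node 18+, an ESM script,
// and `npm i sharp`. Run it on the rented instance, then tear the instance down.
import { readdir } from "node:fs/promises";
import { join, parse } from "node:path";
import sharp from "sharp";

const srcDir = "./archive";   // hypothetical input folder of JPEGs
const outDir = "./avif-out";  // hypothetical output folder (create it first)

const files = (await readdir(srcDir)).filter((f) => /\.jpe?g$/i.test(f));

for (const file of files) {
  const out = join(outDir, `${parse(file).name}.avif`);
  // quality 50 is a starting point, not a recommendation
  await sharp(join(srcDir, file)).avif({ quality: 50 }).toFile(out);
  console.log(`done: ${out}`);
}
```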
For everything else (single-photo tweaks, ten-to-fifty image batches, pre-launch confidential material, anything where the optimization is a one-time operation that does not need to be in a build pipeline), local-first is the sensible default in 2026. The reason it is the default now and was not the default five years ago is WebAssembly. The browser became a usable place to run real image codecs, with libavif, libwebp, mozjpeg, libjxl, and oxipng all available as WASM ports that produce byte-identical output to their native equivalents.
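For a sense of what that looks like in practice, here is a minimal sketch of a single-image conversion using the jSquash packages. I am assuming the decode/encode API those packages expose in the browser; exact option names vary by codec and version, so treat this as illustrative.

```ts
// JPEG -> WebP entirely inside the tab; the bytes never leave the machine.
// Assumes: a bundler that can load the @jsquash/* WASM packages.
import { decode as decodeJpeg } from "@jsquash/jpeg";
import { encode as encodeWebp } from "@jsquash/webp";

async function jpegToWebp(file: File): Promise<Blob> {
  const jpegBytes = await file.arrayBuffer();
  const pixels = await decodeJpeg(jpegBytes);                // raw RGBA pixel data
  const webpBytes = await encodeWebp(pixels, { quality: 75 });
  return new Blob([webpBytes], { type: "image/webp" });
}
```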
The honest limitations of staying local
I do not want to oversell this. Browser-side compression has real limitations and I have hit each of them.
EXIF metadata is not stripped automatically. The compressor on this site re-encodes the image data but does not currently remove EXIF metadata (camera model, GPS coordinates, capture timestamp). For most uses, that does not matter. For uploading client photos, real-estate listings, or anything where location metadata could be sensitive, you need a separate metadata-stripping step. There are tools for this; I usually run images through ExifTool on the command line as a second pass when it matters. This is a real gap. If you assumed "local-first" was equivalent to "privacy-safe" without that step, please correct the assumption.
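For completeness, the second pass itself is a single ExifTool invocation; a small Node wrapper around it (a sketch, assuming exiftool is installed and on PATH) looks like this.

```ts
// Strip all metadata in place. -all= removes every tag; -overwrite_original
// skips the backup copies ExifTool writes by default.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

export async function stripMetadata(paths: string[]): Promise<void> {
  await run("exiftool", ["-all=", "-overwrite_original", ...paths]);
}
```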
RAM is a real cap. A browser tab is allocated something between 1 and 4 GB of memory depending on the browser and the platform. Decoding and re-encoding a high-resolution photo can briefly hold the full-resolution pixel buffer plus working memory, which adds up fast. Batching more than about 200 high-resolution photos in a single browser tab will eventually trigger an out-of-memory error or a tab crash. The fix is to batch in chunks of 50 to 100 and download each chunk before starting the next.
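The workaround is boring but worth writing down. Here is a sketch, where compressOne() and downloadChunk() are hypothetical stand-ins for whatever your tool of choice provides; the point is that each chunk's pixel buffers can be garbage-collected before the next chunk starts.

```ts
// Process files in chunks so the tab never holds more than ~50 images' worth
// of decoded pixel buffers at once; flush each chunk to disk before continuing.
async function compressInChunks(
  files: File[],
  compressOne: (f: File) => Promise<Blob>,         // hypothetical per-file encoder
  downloadChunk: (blobs: Blob[]) => Promise<void>, // hypothetical "zip and save"
  chunkSize = 50,
): Promise<void> {
  for (let i = 0; i < files.length; i += chunkSize) {
    const chunk = files.slice(i, i + chunkSize);
    const compressed: Blob[] = [];
    for (const file of chunk) {
      compressed.push(await compressOne(file)); // one image in memory at a time
    }
    await downloadChunk(compressed);
  }
}
```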
The encoder you get is the encoder you get. Local tools call a specific build of libavif, libwebp, mozjpeg, etc. Each tool's results depend on the build version. Cloud services sometimes update encoder builds more frequently. The difference is usually small (a few percent at most) but if you are doing serious benchmarking, the encoder version matters and you should check.
No automatic CDN delivery. Local compression produces a file. That file then has to get to your CDN, your S3 bucket, your origin server. This is a one-line script for most architectures, but it is a step. If you wanted "drop image, get URL, done," cloud is still the path of least friction.
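The step is small, but it exists. Here is a sketch of the S3 variant, assuming the AWS SDK v3, with the region, bucket name, and key as placeholders.

```ts
// Push a compressed file to the bucket the CDN serves from.
// Assumes: `npm i @aws-sdk/client-s3` and credentials from the environment.
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

export async function publish(localPath: string, key: string): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-site-assets",   // hypothetical bucket
      Key: key,                   // e.g. "images/launch/hero.avif"
      Body: await readFile(localPath),
      ContentType: "image/avif",
    }),
  );
}
```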
No webhook or pipeline integration. If you need an image-optimization step inside a serverless function, a queue worker, or a CI pipeline, browser-side does not help. The serverless equivalent is to run the same WASM modules on the server side (which is increasingly common; the libraries that power Squoosh-derived browser tools also run on Node and Deno), but that is its own project.
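If you do go down that road, the shape of it is roughly the browser code moved server-side. A rough sketch, assuming the @jsquash/jpeg and @jsquash/avif packages, with the caveat that some versions need the WASM module instantiated manually on Node, so check the package docs before copying this.

```ts
// JPEG -> AVIF on the server, using the same WASM codecs the browser tools use.
import { readFile, writeFile } from "node:fs/promises";
import { decode } from "@jsquash/jpeg";
import { encode } from "@jsquash/avif";

export async function jpegToAvif(inPath: string, outPath: string): Promise<void> {
  const buf = await readFile(inPath);
  // Copy into a standalone ArrayBuffer; Node Buffers can be views into a pool.
  const bytes = buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);
  const pixels = await decode(bytes);
  const avifBytes = await encode(pixels); // default encoder settings
  await writeFile(outPath, Buffer.from(avifBytes));
}
```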
What the rules look like in practice
After enough cycles of "should I run this locally or upload it," I have settled on a small set of rules. Yours may differ.
- Anything covered by an NDA, embargo, or confidentiality clause: local, always.
- Anything containing minors, named individuals in private settings, or recognizable private spaces: local, always, with EXIF strip as a separate step.
- Anything in a real-estate listing, since the location metadata can be sensitive: local, with EXIF strip.
- Anything for a pre-launch product page where the client has not announced yet: local.
- Routine optimization of images already public on the web: cloud is fine.
- Production CDN-integrated optimization for a high-traffic site: cloud, on a CDN with appropriate contracts.
- One-off massive batch jobs (50 GB+): server-side script on a rented compute instance.
- Everything else (most of the work): local, because the friction of uploading to and downloading from a service is higher than running a browser tab.
The rules above sound paranoid until you have been on the wrong side of an information-security review. Then they sound reasonable.
The toolchain question
For local compression in a browser, there are roughly three families of tools.
Squoosh and Squoosh-derived. Squoosh is the original Google Chrome Labs reference app. It still works. Development on the official app has slowed considerably; the CLI was deprecated. The codecs themselves (libavif, libwebp, mozjpeg, libjxl, oxipng) are now packaged as the jSquash library, which is what most newer tools are built on. The app itself is focused on single images.
jSquash-derived batch tools. The compressor on this site is one of these. So are Squish from Addy Osmani's GitHub, BulkImagePro, and a handful of others. They share the same underlying codec WASM modules but are built around batch workflows. Pick whichever has the UI you prefer.
Native desktop apps. ImageOptim on macOS is the long-time favorite, with a Mac-native batch interface and integration with system Shortcuts. Fast, free, and the right call if you compress images on a Mac frequently enough to want a desktop tool. The encoder choice is roughly the same as the browser-based options.
I default to a browser tool because the same tab works on any machine I sit down at, including a borrowed laptop or a desktop in a different office. The desktop apps are objectively a bit faster and more convenient for daily use; the browser is more portable.
What changed in five years
The reason this opinion is even possible is technological. In 2020, browser-based image compression was a curiosity. The codecs available in WebAssembly were limited, the encoding speeds were poor, and most production work needed a server-side pipeline. By 2026, all five major modern image codecs are available as performant WASM modules, the browser's WebAssembly runtime has gotten substantially faster, and the overall encode times for in-browser compression are within a small constant factor of native.
That single technical shift is what turned local-first from "possible if you really insist" to "actively preferable for most work." Cloud compression services were the right answer when the browser could not do this. They are still the right answer for some categories of work (CDN integration, very large batches, third-party platforms that own the toolchain). For everyone else, the case for uploading client photos to a third-party compressor in 2026 is weak enough that the default should flip.
The piece I have been holding back
I will be honest about one thing. There is a small but non-zero chance that I am being overly conservative. The cloud compression services I am suspicious of are mostly run by reputable companies, the breaches have been few, and "your image got cached for 24 hours" is a real but quantitatively small risk. The pragmatic counter-argument, which I respect, is that the time spent worrying about this is greater than the expected harm.
That argument is fine, and it is correct for many people. It stopped being correct for me the day a client asked, politely, where their photos had been processed and I had to read terms of service to find out. The cost of the next conversation like that one is high enough that the upfront cost of "always local" is worth it.
If you handle client material professionally, I think you should adopt the same default. If you mostly compress your own travel photos, the cloud default is fine and you should not change anything. The line between those two cases is something only you can draw.
For the work I do, the line ended up being closer to "always local" than "sometimes local," and the in-browser tools are good enough now that the choice does not cost me anything. The compressor on this site is the one I use for that work; it runs the same codecs that Squoosh exposes and handles batches up to the practical RAM limit. No upload, no queue, no terms of service to read.
