Why Output Size Can Increase: Deep Compression and B/pixel
When a converted file ends up larger than the source, PicShift explains why using practical rules. This page defines those rules and shows how B/pixel is computed.
How PicShift identifies deep compression
PicShift treats very low source data density as a signal of deep compression. It uses the source file size and the decoded dimensions, then checks whether B/pixel falls below a threshold.
- Pixel count is read from decoded dimensions (width × height).
- B/pixel is computed as original bytes divided by pixel count.
- If B/pixel is under the threshold, the source is treated as deeply compressed for explanation logic.
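The steps above can be sketched in a few lines. The threshold value used here (0.1 B/pixel) is an illustrative assumption, not PicShift's actual setting, and the function name is hypothetical:

```python
# Sketch of the deep-compression heuristic described above.
# The 0.1 B/pixel threshold is an illustrative assumption.

def is_deeply_compressed(original_file_size_bytes: int,
                         width: int, height: int,
                         threshold: float = 0.1) -> bool:
    """Return True when source data density is below the threshold."""
    pixel_count = width * height          # from decoded dimensions
    if pixel_count == 0:
        return False                      # guard against missing dimensions
    bytes_per_pixel = original_file_size_bytes / pixel_count
    return bytes_per_pixel < threshold

# The 350 KB, 3000 x 2000 example from this page (~0.06 B/pixel):
print(is_deeply_compressed(350 * 1024, 3000, 2000))  # True
```

A guard for a zero pixel count keeps the check safe when decoded dimensions are unavailable.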
B/pixel formula
B/pixel = originalFileSizeBytes / (width × height)
Example: for an image with dimensions 3000 × 2000 and a file size of 350 KB, substitute into the formula: (350 × 1024) / (3000 × 2000) ≈ 0.0597, which is about 0.06 B/pixel.
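The worked example can be verified directly:

```python
# Reproduce the worked example: 350 KB source, 3000 x 2000 decoded dimensions.
original_file_size_bytes = 350 * 1024            # 358,400 bytes
width, height = 3000, 2000

b_per_pixel = original_file_size_bytes / (width * height)
print(round(b_per_pixel, 4))  # 0.0597
```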
Why larger output can still be expected
- Converting from lossy or highly compressed formats to less efficient settings can increase size.
- Very small source B/pixel usually means little room for further size reduction at similar visual quality.
- Small images expose fixed codec overhead (headers, metadata) more clearly, especially with heavier modern encoders.
Scope and boundaries
- This is an explanation heuristic, not a strict predictor for every single image.
- Different content types (photo, screenshot, graphics) can shift final results.
- Different quality levels and format targets can change trends.
FAQ
Does PicShift run a second full encode pass to judge size increase?
No. The explanation uses lightweight metadata and format rules, not a second full trial-encode, so the UI remains responsive.
How is B/pixel calculated?
B/pixel is source bytes divided by decoded pixel count: originalFileSizeBytes / (width × height).
Is deep compression detection an exact quality score?
No. It is a practical heuristic for explanation quality, not a visual quality metric and not a strict codec-level score.
Last updated: 2026-03-07