
How to Fix Underexposed Dark Old Photos with AI — Recovering Detail from Shadows
AI tools can recover surprising detail from underexposed, too-dark old photographs. Here is how the technology works and how to get the best results from severely dark vintage prints.
Maya Chen
One of the most surprising things about AI photo restoration is how much it can pull out of a photograph that appears nearly black. The technology has reached a point where photos that seemed lost in darkness — too underexposed to see clearly, too shadowy to make out faces — can yield recognizable, printable images.
This is not magic. There are real limits to what is recoverable. But understanding how the technology works helps you know when to attempt restoration and what kind of result to expect.
Why Are Old Photos Underexposed or Too Dark?
When you hold an old photograph that appears too dark, it usually has one of two histories.
Original underexposure — The camera operator misjudged the exposure settings for the lighting conditions. Interior shots without flash, photos taken on overcast days with consumer cameras, or photos taken in the shade often came out significantly underexposed. Consumer box cameras from the 1920s through 1960s had fixed or limited exposure controls, making underexposure in anything but bright outdoor light very common. These photos were always dark — they just did not become a priority for saving until the subject was no longer alive to photograph again.
Chemical deterioration — Silver-based photographic prints darken over time due to chemical reduction of the silver image, particularly in humid storage conditions. A photo that printed at correct exposure in 1960 may look muddy and dark by 2026 because the silver compounds have shifted. This is different from silver mirroring (which causes bright metallic spots) but equally damaging to visibility. The chemical darkening follows the same tonal range as the original image, meaning underlying detail is still there — just suppressed.
Both types of darkening respond to AI restoration because the underlying signal — a photographic record of some kind — is still present. The AI lifts and clarifies what was captured, rather than inventing content that was never there.
How Does AI Restoration Recover Dark Photographs?
ArtImageHub processes dark and underexposed photos through a pipeline of models that each address specific aspects of the problem.
NAFNet (Non-linear Activation Free Network) is particularly important for dark photos. This model handles noise reduction and deblurring, but it also performs tonal correction that lifts shadow detail. Dark photographs often have high noise levels in shadow areas — film grain that becomes visible when the image is brightened. NAFNet addresses this noise while recovering the underlying detail, preventing the classic digital problem where brightening a dark photo reveals only noise rather than content.
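To see why joint denoising and tonal lifting matters, here is a small NumPy sketch of the failure mode described above. This is illustrative arithmetic only, not NAFNet (which is a learned network): it simulates an underexposed 8-bit scan and shows that naive brightening amplifies the film-grain noise by the same factor as the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an underexposed 8-bit scan: true shadow detail around value 12,
# plus film-grain noise with a standard deviation of about 3.
true_detail = np.full((100, 100), 12.0)
grain = rng.normal(0.0, 3.0, size=(100, 100))
dark_scan = np.clip(true_detail + grain, 0, 255)

# Naive brightening: multiply everything by 8 to lift shadows toward midtones.
# The grain is amplified by exactly the same factor as the detail.
brightened = np.clip(dark_scan * 8.0, 0, 255)

print(f"noise std before brightening: {dark_scan.std():.1f}")
print(f"noise std after brightening:  {brightened.std():.1f}")  # roughly 8x larger
```

This is why a model that denoises while it lifts shadows produces content rather than amplified grain.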
Real-ESRGAN then upscales the corrected image. Old photos that were originally small or printed at low resolution benefit from upscaling after tonal correction, allowing recovered shadow detail to be seen clearly in larger output formats.
GFPGAN applies face-specific reconstruction. Faces in underexposed photos are doubly degraded — both the tonal range and the fine facial detail are compromised. GFPGAN detects faces in the image and applies targeted reconstruction, often producing striking results where a face that was barely visible becomes fully recognizable.
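The stage ordering described above can be sketched as a simple composition. The function names below are hypothetical placeholders, not ArtImageHub's actual API; the point is only the order: tonal correction first, then upscaling, then face reconstruction.

```python
def restore_dark_photo(scan, denoise_and_lift, upscale, restore_faces):
    """Hypothetical sketch of the pipeline ordering; the three stage
    functions are placeholders standing in for the real models."""
    # 1. NAFNet-style pass: denoise and lift shadow tones first, so later
    #    stages work on a clean, full-tonal-range image.
    corrected = denoise_and_lift(scan)
    # 2. Real-ESRGAN-style upscaling after tonal correction, so recovered
    #    shadow detail survives at larger output sizes.
    upscaled = upscale(corrected)
    # 3. GFPGAN-style face reconstruction runs last, on the upscaled image.
    return restore_faces(upscaled)
```

Running face reconstruction last means it sees an image whose shadows have already been lifted, which is why barely visible faces can still be detected and rebuilt.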
What Is the Critical Difference Between a Dark Signal and No Signal?
The most important concept in dark photo restoration is the distinction between a signal that is dark and a signal that is absent.
Dark signal means light hit the film or sensor and created a record, but that record printed or scanned darker than the actual scene. This signal can be boosted and clarified. It contains real information about what was in front of the camera — just suppressed.
Absent signal means the exposure was so extreme that no record was created, or physical damage destroyed the emulsion. In these areas, there is nothing to recover. The AI fills in plausible texture based on surrounding context, which may look reasonable but is not an accurate representation of the original scene content.
In practice: if you can see any hint of detail in the dark areas of a photo when you look carefully — a barely visible edge, a suggestion of a face, any tonal variation at all — there is likely a recoverable signal. If an area is a completely undifferentiated black with no variation whatsoever, the signal may be absent.
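The "any tonal variation at all" test can be made concrete by measuring the spread of the darkest pixel values in a scan. This is a rough illustrative heuristic, not part of ArtImageHub's pipeline, and the threshold values are assumptions:

```python
import numpy as np

def shadow_signal_present(image: np.ndarray, shadow_max: int = 40,
                          min_std: float = 1.5) -> bool:
    """Heuristic: do the darkest pixels of an 8-bit grayscale image show
    tonal variation, suggesting a recoverable dark signal?
    (shadow_max and min_std are assumed thresholds, not calibrated values.)"""
    shadows = image[image <= shadow_max]
    if shadows.size == 0:
        return False  # no shadow region at all
    return float(shadows.std()) >= min_std

rng = np.random.default_rng(1)

# A dark region with faint variation (likely a recoverable dark signal)...
dark_but_varied = rng.integers(5, 25, size=(50, 50)).astype(np.uint8)

# ...versus undifferentiated pure black (likely no signal to recover).
flat_black = np.zeros((50, 50), dtype=np.uint8)

print(shadow_signal_present(dark_but_varied))  # True
print(shadow_signal_present(flat_black))       # False
```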
How Should You Scan a Dark Photo for the Best Results?
The scanning step is critical and often overlooked. When scanning a very dark print, the goal is to capture as much of the latent shadow detail as possible before the file is saved.
Increase scanner brightness: Most scanner software has exposure or image adjustment controls. Scan at a brightness setting that makes the image appear noticeably too bright on screen. This counterintuitive step pushes the scanner to record detail in the dark areas that a correctly calibrated scan would compress into a featureless shadow.
Use TIFF format: JPEG compression is particularly destructive in dark areas, creating blocky artifacts in shadow regions. TIFF preserves the full tonal information and gives the AI more to work with.
Scan at 600 DPI minimum: For small prints (4x6 inches or smaller), use 1200 DPI. Higher resolution captures more tonal variation per inch, which means more recoverable detail in shadow areas.
Do not use automatic exposure correction in the scanner software: Some scanner apps auto-correct the image during scanning. Disable this and save the raw scan — AI restoration is better at tonal correction than scanner auto-exposure algorithms, and the auto-correction may discard shadow detail in ways that make AI recovery harder.
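After scanning, you can sanity-check whether the file actually captured shadow detail by measuring how many pixels were crushed to pure black. A NumPy sketch (the threshold is an arbitrary assumption, and the arrays below simulate scans; loading a real TIFF would use a library such as Pillow):

```python
import numpy as np

def crushed_shadow_fraction(image: np.ndarray) -> float:
    """Fraction of pixels at pure black (0) in an 8-bit grayscale scan.
    A high fraction suggests the scan clipped shadow detail and is worth
    redoing at a brighter exposure setting."""
    return float(np.mean(image == 0))

rng = np.random.default_rng(2)

# Simulated "too dark" scan: many shadow pixels crushed to the 0 floor.
clipped_scan = np.clip(rng.normal(2, 4, size=(200, 200)), 0, 255).astype(np.uint8)

# Simulated brighter rescan of the same print: shadows lifted off the floor.
bright_scan = np.clip(rng.normal(30, 4, size=(200, 200)), 0, 255).astype(np.uint8)

print(f"dark scan crushed fraction:   {crushed_shadow_fraction(clipped_scan):.2f}")
print(f"bright scan crushed fraction: {crushed_shadow_fraction(bright_scan):.2f}")
```

If a large share of pixels sit at 0, the shadow detail never made it into the file, and no later AI step can recover it.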
What Do Results Look Like in Practice?
Testing ArtImageHub on a set of underexposed photos from the 1940s through 1970s:
1940s indoor portrait (significantly underexposed): A family gathering shot with a box camera in indoor lighting. The original scan showed silhouette-level darkness — faces barely distinguishable. After restoration: faces clearly visible, wall decor legible, clothing texture present. Not perfect, but genuinely recognizable.
1960s outdoor photo in deep shade: Children playing in shadow under large trees, with heavy shadow cast across their faces. After restoration: faces restored with good detail; the shadow remains, but it reads as part of the scene rather than obscuring the content.
1970s film photo, chemically darkened: A print that had been stored in a humid basement for 30 years, darkening the entire image uniformly. The color version had shifted to muddy brown-orange. After restoration: tonal range corrected, faces visible, colors shifted back toward natural ranges.
The failure case: A 1930s photo with a section completely burned out by light damage. That portion of the image was pure white: not underexposed but overexposed. This area cannot be recovered because the problem is the opposite of underexposure. So much light reached the film that the image-forming silver was destroyed outright, rather than too little light failing to create a record. The AI filled the area with plausible content, but that content was generated, not recovered from the original scene.
What Should You Do After Restoration to Make the Photo Usable?
Once you have downloaded the restored image from ArtImageHub, you have a file ready for print or display.
Check the output at 100% zoom before printing. Look at faces: are they recognizable? Is the tonal range natural, without the washed-out look that sometimes comes from aggressive brightening? Is there visible noise in the shadow areas, or has the restoration handled them cleanly?
For prints, 8x10 and smaller sizes typically look excellent from ArtImageHub's Real-ESRGAN output. For very large prints (16x20 and above), start with the highest-resolution source scan you can produce.
The process costs $4.99 one-time — start with your darkest photo and see what comes out of the shadows.
Technical references: NAFNet described in Chen et al., "Simple Baselines for Image Restoration" (2022); Real-ESRGAN described in Wang et al. (2021); GFPGAN described in Wang et al., Tencent ARC Lab (2021).
About the Author
Maya Chen
Photo Restoration Specialist
Maya has spent 8 years helping families recover damaged and faded photographs using the latest AI restoration technology.
Ready to Restore Your Old Photos?
Try ArtImageHub's AI-powered photo restoration. Bring faded, damaged family photos back to life in seconds.