The fact that detectors can capture only part of the information in a diffraction pattern (the intensities, but not the phases) is called the phase problem. Phase retrieval is exactly what it sounds like: retrieving the information that was carried in the phase of the diffraction pattern. Before we look at how this is done, however, I want to drive home how ridiculous a problem this is.
Let’s say you’re using a computer with 64-bit numerical precision. That means it can reliably perform calculations out to 15 significant digits, so the number of possible values for the phase of each pixel is[1] $2\pi\times 10^{15}$. Now let’s say you have an 8 MP detector, which means there are about 8 million pixels, each with its own unknown phase. Since the pixels are independent, the per-pixel counts multiply, which puts the total number of diffraction patterns that look identical to the detector at

$$ N_\mathrm{possible}=\left(2\pi\times10^{15}\right)^{8\times10^{6}}\approx10^{1.3\times10^{8}}. $$
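That exponent is easier to check than to print. Here is a quick back-of-the-envelope sketch in Python that just redoes the arithmetic above; the count itself is far too large to store directly, so it works with the base-10 logarithm instead.

```python
import math

# Distinguishable phase values per pixel, using the rough estimate above
# (phase spans 2*pi, resolved to about 15 significant digits; see note [1]).
values_per_pixel = 2 * math.pi * 1e15

# An 8 MP detector: roughly 8 million pixels, each with its own unknown phase.
n_pixels = 8e6

# The full count is values_per_pixel ** n_pixels, so take the log base 10.
log10_total = n_pixels * math.log10(values_per_pixel)

print(f"N_possible is about 10^{log10_total:,.0f}")
```

The exponent comes out around $1.26\times10^{8}$, so $N_\mathrm{possible}$ is a number with over a hundred million digits.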
One of those patterns will give you the correct object; the rest are imposters. In other words, you’re looking for a needle in a really, really big haystack. Fortunately, needles have different properties from hay. Needles are magnetic. Needles don’t burn. Needles don’t float. That means there are certain things you can do to the haystack to systematically isolate the needle.
In the same way, your correct diffraction pattern has certain properties that are not shared by the imposters. These properties become constraints, which allow you to quickly and systematically rule out patterns that don’t fit. How this is actually done varies with the exact type of experiment, but the key principle in any phase retrieval is this:
Use what you do know to reconstruct what you don’t know.
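To make that principle concrete, here is a minimal sketch of one classic way it plays out: an error-reduction loop in the Gerchberg-Saxton/Fienup style, which bounces between a Fourier-domain constraint (the measured magnitudes) and object-domain constraints (a known support and non-negativity). The simulated object, the 64×64 grid, and the support box are illustration choices of mine, not anything established above.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated "measurement" ----------------------------------------------
# A small non-negative object confined to a known support region.
n = 64
support = np.zeros((n, n), dtype=bool)
support[24:40, 24:40] = True
obj = np.where(support, rng.random((n, n)), 0.0)

# The detector records only magnitudes (intensities), never phases.
measured_magnitude = np.abs(np.fft.fft2(obj))

# --- Error-reduction loop ---------------------------------------------------
# Alternate between the two things we *do* know:
#   Fourier domain: the measured magnitudes
#   object domain:  the support and non-negativity
estimate = rng.random((n, n)) * support
for _ in range(500):
    F = np.fft.fft2(estimate)
    F = measured_magnitude * np.exp(1j * np.angle(F))  # impose known magnitudes
    estimate = np.fft.ifft2(F).real                    # back to object space
    estimate = np.clip(estimate, 0.0, None) * support  # impose known support

residual = np.linalg.norm(np.abs(np.fft.fft2(estimate)) - measured_magnitude)
print(f"Fourier-magnitude residual after 500 iterations: {residual:.3e}")
```

Plain error reduction like this tends to stagnate on real data, which is why variants such as hybrid input-output are usually preferred in practice, but the alternation between what you know in each domain is the same idea.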
Notes
- ↑ This is a gross oversimplification; 64-bit precision is more nuanced than just dividing the number line into tiny pieces. But it’s true enough for this argument.