A step-by-step tutorial (with example Matlab functions) on how to set up your own processing code is available here.
AppSFDI is a Windows tool for processing SFDI images at spatial frequencies of 0 and 0.2 (1/mm). It is intended to help you validate your own processing code.
Processing SFDI images can be broadly broken down into 4 parts:
- Demodulation
- Calibration
- Model inversion
- Chromophore fitting
Demodulation is the process by which the envelope of the AC images is determined. The most widely used method requires 3 images at each spatial frequency, with phases of 0, 120, and 240 degrees. Demodulation converts the 3 striped images into a single AC image with no stripes. Any stripes remaining after demodulation mean that something has gone wrong. The most common culprits are harmonics in the sinewave, which produce stripes at twice the projected spatial frequency, and changes in illumination intensity between the 3 images, which produce stripes at the projected frequency itself. A stripe-free demodulation is a good indication that the imaging system is working well.
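The standard 3-phase demodulation can be sketched as follows (a minimal numpy sketch; the function name and array shapes are illustrative):

```python
import numpy as np

def demodulate(i0, i120, i240):
    """Recover the AC envelope from three phase-shifted images.

    i0, i120, i240: images captured with sinusoidal illumination at
    phase offsets of 0, 120, and 240 degrees (matching shapes).
    """
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i0 - i120) ** 2 + (i120 - i240) ** 2 + (i240 - i0) ** 2
    )

# Synthetic check: a pattern with known DC offset and AC amplitude
phase = np.linspace(0.0, 4.0 * np.pi, 64).reshape(8, 8)
dc, ac_true = 0.5, 0.2
imgs = [dc + ac_true * np.cos(phase + k * 2.0 * np.pi / 3.0) for k in range(3)]
ac = demodulate(*imgs)  # every pixel recovers the AC amplitude
```

Note that the pairwise differences cancel the DC term, which is why a clean demodulation removes the stripes entirely.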
Demodulation is often the most difficult part of building an SFDI system: a successful demodulation has no stripes in it. Stripes in the demodulated image can be caused by harmonics in the projected pattern, which in turn can come from nonlinearities in the DMD response induced by pre-processing. With the suggested DMD, running the system in video mode activates pre-processing that can distort the sinewave patterns. Switching the DMD to pattern sequence mode, with the video port as the source, disables pre-processing and removes the harmonics from the sinewave.
Calibration is where the Instrument Response Function (IRF) of the system is removed. This requires a calibration phantom of known optical properties. The phantom is imaged and demodulated to yield diffuse reflectance (Rd) in the arbitrary unit of camera counts. Using the known optical properties, the theoretical MTF can be calculated and the expected diffuse reflectance predicted. Dividing the theoretical diffuse reflectance by the raw data yields a conversion factor from counts to Rd. Because the IRF is both spatially and spectrally dependent, a separate factor is calculated for each pixel of the image at each wavelength.
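In code, the calibration step reduces to a per-pixel ratio (a minimal sketch; the function and variable names are illustrative):

```python
import numpy as np

def calibrate(ac_sample, ac_phantom, rd_phantom_model):
    """Convert demodulated sample counts to calibrated diffuse reflectance.

    ac_sample: demodulated sample image (camera counts)
    ac_phantom: demodulated image of the calibration phantom (camera counts)
    rd_phantom_model: model-predicted Rd of the phantom at this
        wavelength and spatial frequency (scalar or per-pixel array)
    """
    # counts-to-Rd conversion factor, computed per pixel
    factor = rd_phantom_model / ac_phantom
    return ac_sample * factor

# Toy check: a sample identical to the phantom must return the model Rd
ac_ref = np.full((4, 4), 1200.0)       # counts
rd = calibrate(ac_ref, ac_ref, 0.55)   # every pixel equals 0.55
```

This is repeated per wavelength and per spatial frequency, since the conversion factor differs for each.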
The model inversion phase is where the Rd values at different spatial frequencies are converted to optical properties (absorption and scattering). The first step in the process is to generate a look-up table of diffuse reflectance values for the two spatial frequencies as a function of absorption and scattering. At each pixel of the image, there is a unique combination of absorption and scattering coefficients that will yield the measured diffuse reflectance. Interpolation of the look-up table is used to find that combination.
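A look-up-table inversion can be sketched as below. The forward model here is a made-up monotonic placeholder, not a real diffusion or Monte Carlo model, and a nearest-neighbour search stands in for the interpolation used in practice:

```python
import numpy as np

def toy_forward(mua, mus, fx):
    # Placeholder forward model (NOT a real diffusion model); real
    # pipelines tabulate diffusion or Monte Carlo Rd predictions here.
    return mus / (mus + mua + 5.0 * fx)

# Build the look-up table over a grid of absorption and scattering values
mua_grid = np.linspace(0.005, 0.05, 50)
mus_grid = np.linspace(0.5, 2.0, 50)
MUA, MUS = np.meshgrid(mua_grid, mus_grid, indexing="ij")
lut = np.stack([toy_forward(MUA, MUS, 0.0),    # Rd at fx = 0
                toy_forward(MUA, MUS, 0.2)],   # Rd at fx = 0.2 (1/mm)
               axis=-1)

def invert(rd_dc, rd_ac):
    """Find the (mua, mus) pair whose predicted Rd best matches a pixel."""
    err = (lut[..., 0] - rd_dc) ** 2 + (lut[..., 1] - rd_ac) ** 2
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return MUA[i, j], MUS[i, j]

# Synthetic check: a measurement generated from the grid is recovered exactly
rd0 = toy_forward(mua_grid[10], mus_grid[20], 0.0)
rd1 = toy_forward(mua_grid[10], mus_grid[20], 0.2)
mua_hat, mus_hat = invert(rd0, rd1)
```

The two spatial frequencies carry complementary information (DC is most sensitive to absorption, AC to scattering), which is what makes the pair uniquely invertible.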
The chromophore fitting phase uses only the absorption maps. The prerequisite here is knowing the extinction coefficients of the pure chromophores you expect to see. Fitting a linear combination of the extinction spectra to the absorption maps yields the concentration of each chromophore at each pixel. You need at least as many wavelengths as chromophores you expect to see. The best fit is typically found in the least-squares sense using standard linear algebra techniques.
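The least-squares fit at one pixel can be sketched as follows. The extinction matrix below uses made-up numbers for illustration, not real chromophore values:

```python
import numpy as np

# Hypothetical extinction matrix E (rows: wavelengths, columns: chromophores).
# With 3 wavelengths and 2 chromophores, the system is overdetermined.
E = np.array([[2.0, 0.5],
              [1.0, 1.5],
              [0.3, 2.5]])

def fit_chromophores(mua):
    """Least-squares fit of chromophore concentrations at one pixel.

    mua: absorption coefficients at each wavelength (length = rows of E),
    modelled as mua = E @ c for concentration vector c.
    """
    c, *_ = np.linalg.lstsq(E, mua, rcond=None)
    return c

# Synthetic check: absorption built from known concentrations is recovered
c_true = np.array([0.8, 0.3])
mua = E @ c_true
c_fit = fit_chromophores(mua)
```

Running this per pixel (or vectorised over the whole image) turns the absorption maps into concentration maps.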