Extracting Super-resolution Details Directly from a Diffraction-Blurred Image or Part of Its Frequency Spectrum

It is usually believed that the low frequency part of a signal's Fourier spectrum represents its profile, while the high frequency part represents its details. Conventional light microscopes filter out the high frequency part of an image signal, so the details of the sample (the object being imaged) cannot be seen in the blurred image. However, we find that under a certain "resolvable condition", a signal's low frequency and high frequency parts do not merely represent the profile and the details, respectively. In fact, either of them alone contains the full information (both profile and details) of the sample's structure. Therefore, for samples with spatial frequencies beyond the diffraction-limit, even if the image's high frequency part is filtered out by the microscope, it is still possible to extract the full information from the low frequency part. On the basis of these findings, we propose the technique of Deconvolution Super-resolution (DeSu-re), which includes two methods. One method extracts the full information of the sample's structure directly from the diffraction-blurred image, while the other extracts it directly from part of the observed image's spectrum (e.g., the low frequency part). Both theoretical analysis and simulation experiments support these findings, and also verify the effectiveness of the proposed methods.


Introduction
When an object (sample) is imaged by a conventional light microscope, the result is not an ideal image which shows structural details clearly. Instead, it is equivalent to the ideal image convolved with a Point Spread Function (PSF), whose central part is called the Airy disk. Therefore, even if a point is infinitely small, its image is an Airy-disk-shaped spot rather than an ideal point. In 1873, Ernst Abbe found the diffraction-limit: for two points with a distance less than about half of visible light's wavelength, i.e., about 200~300 nm, their images overlap with each other and cannot be resolved. Samples' structures smaller than the diffraction-limit were not resolvable with such microscopes until super-resolution techniques emerged. These techniques are divided into two categories [1]. The first category uses structured illumination to image the sample several times, and then processes the resulting images to get a super-resolution image. Representative techniques include STED [2], RESOLFT [3], SIM [4], NL-SIM [5], etc. The second category manages to turn on individual molecules at different times, i.e., separates them in time, and then also reconstructs a super-resolution image. Representative techniques include STORM [6], PALM [7], PAINT [8], etc. Besides, a technique named MINFLUX [9] combines the advantages of the two categories; it can localize individual molecules with ultra-high precision. There is also a different technique named Expansion Microscopy (ExM) [10], which expands samples physically to resolve structures that are not directly resolvable. In techniques such as STED, PALM, STORM, Confocal, etc., luminous points are distant from one another (or there may be only one luminous point at a time). As a result, their images (almost) do not overlap, and the locations and light intensities of these points can be extracted from the blurred image. Inspired by these techniques, we find a condition relevant to their imaging conditions.
Under the proposed condition, structures (both between points and within points) smaller than the diffraction-limit can be extracted directly from the blurred image, even if the points within the structures are imaged at the same time. From the viewpoint of frequency, the image's high frequency part is filtered out by the microscope. But the structures' full information (including both profile and details) can still be recovered from the low frequency part under the proposed condition. In the following sections, some basic knowledge is introduced first, and then two methods are proposed to extract the full information in the space domain and the frequency domain, respectively.

Basic knowledge
First of all, an appropriate model should be chosen to represent the images. In this study, we adopt a classic model widely used in the field of Digital Image Processing [11]. An image is divided into several uniform grids, each grid is treated as a pixel, and its light intensity is called a pixel value. As a result, the image is represented as a matrix. The matrix (digital image signal) is an approximation of the physical image at a given sampling rate and quantization accuracy. The higher the sampling rate, the more pixels there are; the higher the quantization accuracy, the more accurate the pixel values. The structure information of samples, which is what people care about, is carried in the corresponding digital image signals. The core of this study is based on a usually ignored phenomenon: information can be carried in the same signal in a variety of ways. There is a common opinion in the field of Digital Image Processing: a sample's profile information, which changes slowly in space, corresponds to the low frequency part of its image's Fourier spectrum, while its detail information, which changes fast in space, corresponds to the high frequency part. This opinion is correct for a normal imaging system, because each pixel value corresponds directly to a grid in the image area; thereby, the spatial structure information is carried directly in the pixel values. However, the situation might be different if the information is carried indirectly. Strictly speaking, both high frequency and low frequency components are concepts attached to signals rather than to information. They do demonstrate fast-changing or slow-changing forms in the space domain. But they do not necessarily correspond to the profile or the details of a sample if the information is not carried directly.
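As a small numerical illustration of this common opinion (the 1D signal below is an illustrative assumption, not data from this study): removing the high frequency components of a signal's spectrum smooths away its fast-changing details while keeping the slowly varying profile.

```python
import numpy as np

# A 1D signal with fast-changing details (illustrative values).
signal = np.array([0, 0, 9, 1, 8, 2, 7, 0, 0, 0, 0, 0], dtype=float)
spectrum = np.fft.fft(signal)

# Keep only a few low frequency components (and their conjugates),
# i.e., apply an ideal low pass filter in the frequency domain.
lowpass = spectrum.copy()
lowpass[3:-2] = 0
blurred = np.fft.ifft(lowpass).real

# The fast-changing details are flattened: successive differences of the
# blurred signal span a smaller range than those of the original signal.
print(np.ptp(np.diff(blurred)) < np.ptp(np.diff(signal)))  # True
```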
Here are some simplified examples about information and its carriers. Example 1: if two physical points are used to carry information, their number could represent the integer value "2", or their distance could represent a real value such as 16.3625940683957262. In this example, the information carriers are physical objects. In more cases, observed signals are used to carry information. Example 2: in a Single-Molecule-Localization microscope, the observed image of individual molecules is blurred, and the pixel values do not show the molecules' detailed structure directly. But what people care about are the molecules' locations and light intensities, which are carried by the pixels indirectly. Such information can be extracted, with methods such as data fitting, when the microscope's Point Spread Function (PSF) is known. In both of these examples, prior knowledge plays a key role and determines how the information is carried in the signals. In example 1, it tells whether the needed information is carried in the number or the distance of the two points. In example 2, it provides the template required for data fitting, i.e., the PSF. Besides these examples, there are further studies [12,13] on how information is carried in signals in indirect or implicit ways. We find that under a certain condition, observed images always carry the full information of a sample's structure, whether they are sharp or diffraction-blurred. Therefore, this condition is termed the "resolvable condition" here, and it has two aspects. The first aspect is named isolated lighting (or separated lighting) here. It means that the Region of Interest (ROI) in the sample's image is affected only by its own structure and lighting, and is independent of the rest of the sample and the whole surrounding. For example, only one small area of the sample is illuminated, or only one molecule is turned on, while the rest of the sample and the surrounding are either totally dark or have no light collected by the microscope.
In practice, an ROI is treated as fulfilling isolated lighting as long as the effect of the rest of the sample and the surrounding is negligible. For example, all the other light sources are far enough away from the ROI, just as in some super-resolution techniques. Such a condition is not difficult to implement with existing techniques. But it actually provides very strong prior knowledge, because it gives infinitely many pixel values outside the ROI, i.e., zeros. The second aspect is named compact ROI here, which means that the ROI should be smaller than the diffraction-limit. More about these aspects will be explained in the following sections. The pixels of sharp images carry the full information directly, including both profile and details. Blurred images carry only the profile information directly, but they also carry the full information indirectly under the "resolvable condition". Such a situation of "one carrier, two types of information" is somewhat similar to example 1 above. Different ways of carrying information lead to different methods of extraction. Extracting the full information from a sharp image only requires reading the pixel values directly. But more steps might be required to extract the full information from a blurred image, e.g., solving a system of equations. The basis of this study is the aforementioned image model, whether in the space domain or the frequency domain. Therefore, the task of information extraction is translated into the calculation of unknown pixel values, i.e., matrix elements. The rough locations of the unknown pixels in the images should be estimated first, and this can be done using existing techniques such as Single-Molecule-Localization. In the following sections, two methods will be described, for the space domain and the frequency domain respectively.
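Example 2 above can be sketched as follows; the Gaussian PSF shape, the grid-search fit, and all numeric values are illustrative assumptions rather than this study's setup. A point source's location and brightness are carried indirectly in the blurred observation, and are recovered by fitting the known PSF template.

```python
import numpy as np

# Known PSF template (an illustrative 1D Gaussian).
def psf(x, center, sigma=2.0):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

x = np.arange(64)
true_center, true_intensity = 30.4, 7.5
observed = true_intensity * psf(x, true_center)     # blurred observed signal

# Grid search over candidate centers; for each, the best-fit intensity is
# the least squares amplitude, and the best center minimizes the residual.
best_res, est_center, est_intensity = np.inf, None, None
for c in np.arange(0.0, 64.0, 0.1):
    t = psf(x, c)
    amp = (observed @ t) / (t @ t)                  # least squares amplitude
    res = np.linalg.norm(observed - amp * t)
    if res < best_res:
        best_res, est_center, est_intensity = res, c, amp

print(round(est_center, 1), round(est_intensity, 2))   # 30.4 7.5
```

The location is recovered with sub-pixel precision even though no single pixel shows it directly, which is the sense in which the information is "carried indirectly".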

Method for spatial domain
The effect of diffraction on imaging is usually modeled as the convolution of the PSF with the ideal image. It diffuses the light intensity of each pixel to other pixels, and thereby lowers the diversity of pixel values, i.e., blurs the image. For convenience, we first explain this procedure on 1D signals. Analogous to the aforementioned image model, a 1D range is also divided into several uniform segments, and each segment is represented by a value. Let's take the simple signal shown in Fig. 1 as an example. It has two adjacent values which are greater than zero, while all the other values are zeros. This is an analogy to the situation in which there are only two point light sources. Fig. 1(a) shows the 1D signal before convolution, which is named the "ideal signal" here. The two values in dashed lines need to be figured out. Fig. 1(b) shows the situation during convolution, where the convolution kernel is already known. It is equivalent to the PSF of a 2D imaging system, and is called the Impulse Response Function (IRF) here. Fig. 1(c) shows the resulting signal after convolution. It is also already known, and is named the "observed signal" here. The result of the convolution, i.e., the observed signal, looks like the IRF. It is much smoother than the ideal signal, which comprises two impulses. For image signals, being smoother usually means being more blurred, with details that are harder to identify. However, detail information can be recovered from the observed signal under the "resolvable condition". For the 1D situation in Fig. 1, the ROI is the region containing the two non-zero values. The condition means that the observed signal's values in the ROI depend only on the ideal signal's values in the ROI and on the values of the IRF. The other values of the ideal signal do not affect the result of the convolution because they are all zeros. In this case, there is a mathematical relationship among the ideal signal, the IRF and the observed signal. Therefore, the two unknown values in Fig. 1 can be figured out from the known IRF and observed signal. Denote: 1. The ideal signal's unknown values are x1 and x2 at the left and the right, respectively; 2. The amplitude, i.e., the central value of the IRF is h0; 3. The IRF has value h1 at the location of x1 when its center is at the location of x2; therefore, the value h1 · x2 is shown by the left dark diamond in Fig. 1(b); 4. The IRF has value h2 at the location of x2 when its center is at the location of x1; therefore, the value h2 · x1 is shown by the right dark diamond in Fig. 1(b); 5. The observed signal has values y1 and y2 at the locations of x1 and x2, respectively. Since the observed signal is the convolution of the ideal signal and the IRF, we get the following system of equations:

y1 = h0 · x1 + h1 · x2,  y2 = h2 · x1 + h0 · x2    (1)

The solution of the system of equations is:

x1 = (h0 · y1 − h1 · y2) / (h0² − h1 · h2),  x2 = (h0 · y2 − h2 · y1) / (h0² − h1 · h2)    (2)

When h0 = 1 and the IRF is symmetrical, i.e., h1 = h2 = h, formula (2) becomes:

x1 = (y1 − h · y2) / (1 − h²),  x2 = (y2 − h · y1) / (1 − h²)    (3)

Substituting the known values of the observed signal and the IRF into formula (3) yields the two unknown values. In other words, the ideal signal is recovered from the observed signal and the IRF. It can be seen that only part of the observed signal and the IRF is used. In fact, it is not necessary to use the values y1 and y2 located at the positions of x1 and x2; the method also works if values at other locations are chosen from the observed signal. In practice, it may help to relieve the effect of observation errors if more values are used to build an overdetermined system of equations. Now we extend the procedure to 2D signals and describe the situation of imaging, as shown in Fig. 2. Fig. 2(a) shows the image under ideal conditions, i.e., without the effect of diffraction. Such an image is named the "ideal image" in this case. Fig. 2(b) shows the PSF of the imaging system, which is already known. Only the major part of the PSF is shown in the figure; the remaining part is very far from the center. Fig. 2(c) shows the image observed by the imaging system, which is named the "observed image".
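The 1D two-point recovery described above can be sketched numerically. The IRF and the two signal values here are illustrative assumptions; a symmetric IRF with central value h0 = 1 is used, so formula (3) applies.

```python
import numpy as np

# Illustrative symmetric IRF: h0 = 1 at the center, h = 0.5 beside it.
irf = np.array([0.1, 0.5, 1.0, 0.5, 0.1])
ideal = np.zeros(11)
ideal[5], ideal[6] = 6.7, 8.9                 # the two unknowns x1, x2
observed = np.convolve(ideal, irf, mode="same")

y1, y2 = observed[5], observed[6]             # observed values at x1, x2
h = 0.5
# Formula (3): x1 = (y1 - h*y2)/(1 - h^2), x2 = (y2 - h*y1)/(1 - h^2)
x1 = (y1 - h * y2) / (1 - h ** 2)
x2 = (y2 - h * y1) / (1 - h ** 2)
print(round(x1, 6), round(x2, 6))             # 6.7 8.9
```

Only the two observed values at the unknowns' own locations are used; the outer IRF values (0.1) do not reach those locations and therefore do not enter formula (3).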
Owing to diffraction, it is the convolution of the ideal image and the PSF. It can be seen that all the pixels in the ideal image equal zero except those in a rectangular ROI. Therefore, the condition of isolated lighting is fulfilled. The PSF's Fourier spectrum is an ideal low pass filter, and therefore the PSF extends indefinitely in the space domain. But the PSF's energy or light intensity is mainly concentrated in the central area. The observed image is badly blurred, and looks similar to the PSF. It is very difficult to see any details in the observed image, especially in the ROI, which is shown as a dotted rectangle. In fact, the condition of isolated lighting does not restrict the ROI's shape. Even an ROI with many disconnected and irregular areas is acceptable, as long as its rough location can be estimated in the observed image. However, one easy way is to find a rectangle that covers all the areas, and then treat the rectangle as the ROI. In order to decrease the complexity of calculation, the rectangle should be as small as possible. Denote: 1. Treat the ideal image's ROI as an image named f(i, j), where i = 1, 2, ⋯, M and j = 1, 2, ⋯, N; 2. Denote the PSF as an image h(x, y), and set a coordinate system with its origin at the PSF's center. Thereby, the pixel value at the PSF's center is h(0, 0), and both x and y belong to (−∞, +∞); 3. Treat the observed image's ROI as an image named g(i, j), where i = 1, 2, ⋯, M and j = 1, 2, ⋯, N. Let's take a rotationally symmetrical PSF as an example; other PSFs can be handled similarly. The observed image is the convolution of the ideal image with the PSF. Specifically, let the PSF overlap the ideal image, aligning the PSF's center with each pixel in the ideal image's ROI in turn. Then, multiply each pixel in the ideal image's ROI by its corresponding PSF pixel, and accumulate all the results. The accumulated value is the observed image's pixel value at the corresponding location. The above procedure can be implemented concisely with a program.
In mathematics, this can be expressed with a system of linear equations as follows:

A · x = b    (4)

where A is a matrix with a size of (M · N) × (M · N); its elements are values of the PSF, with the element relating the unknown pixel f(k, l) to the observed pixel g(i, j) being h(k − i, l − j). Then x is a matrix (vector) with a size of (M · N) × 1; it is actually a sequence of all the pixels in the ideal image's ROI, arranged row by row from top to bottom. And b is also a matrix (vector) with a size of (M · N) × 1; it is actually a sequence of all the pixels in the observed image's ROI, arranged row by row from top to bottom. The above A is determined by the PSF, and b is determined by the observed image. In other words, both A and b are already known, and x is actually the rearrangement of the ideal image's unknown pixels. Therefore, the ideal image can be obtained by solving formula (4). In the above procedure, only the pixels in the observed image's ROI are adopted. Actually, if the other pixels of the observed image are also used, an overdetermined system including more equations can be built. In practice, that may help improve the method's noise resistance. In this case, the unknowns are still the ideal image's pixels in the ROI, because all the other pixels are known to be zeros under the condition of isolated lighting. Many classic or cutting-edge methods can be adopted for solving the above system of equations. Its solvability can be explained as follows. On the one hand, the system of equations should obviously have at least one solution, i.e., the ideal image itself. On the other hand, the PSF is physically the image of an ideal point, and thereby its light intensity should be no smaller than zero at any location. In the model described in section 2, the PSF should be a 2D image signal (matrix) with no values (elements) smaller than zero. Furthermore, the ROI is smaller than the diffraction-limit according to the second aspect of the "resolvable condition". As a result, only the central part of the PSF (the Airy disk) affects the convolution results within the ROI.
The other part of the PSF only affects the convolution results outside the ROI. For normal light microscopes, the values in the PSF's central part can be treated as always greater than zero. Therefore, the elements of matrix A are all positive. In addition, the elements of vector x, i.e., the ideal image's pixels, are all non-negative as well; thereby A · x = 0 holds only when x = 0. In other words, the system of homogeneous linear equations A · x = 0 has only the zero solution. According to the properties of systems of linear equations, the corresponding system of nonhomogeneous linear equations A · x = b has exactly one solution [14] (i.e., the ideal image).
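The spatial-domain system A · x = b can be sketched as follows. To keep the sketch self-contained, a strictly positive Gaussian-shaped PSF and an illustrative 3 × 3 ROI are assumed; the method itself uses the microscope's actual PSF.

```python
import numpy as np

rng = np.random.default_rng(0)
M = N = 3                                     # ROI size (illustrative)
psf_size = 21
yy, xx = np.mgrid[-psf_size//2+1:psf_size//2+1, -psf_size//2+1:psf_size//2+1]
psf = np.exp(-(xx**2 + yy**2) / 8.0)          # smooth, strictly positive PSF

ideal_roi = rng.uniform(1, 256, (M, N))       # unknown pixels to recover

# A[(i,j), (k,l)] = PSF value relating unknown pixel (k,l) to observation (i,j)
ctr = psf_size // 2
A = np.array([[psf[ctr + i - k, ctr + j - l]
               for k in range(M) for l in range(N)]
              for i in range(M) for j in range(N)])
b = A @ ideal_roi.ravel()                     # simulated observed ROI pixels

x = np.linalg.solve(A, b)                     # recover the ideal ROI
print(np.allclose(x, ideal_roi.ravel()))      # True
```

In practice, rows for observed pixels outside the ROI could be appended to A and b to form the overdetermined system mentioned above, solved by least squares (e.g., `np.linalg.lstsq`).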
In the above procedure, the ideal image is figured out by building and solving a system of equations. This seems unreasonable from the frequency-domain point of view, because the high frequency part of the image signal cannot pass through the imaging system at all. However, we find that the full information of a sample's spatial structure can be recovered even with only a small number of low frequency components. In the following section, we describe another method, which figures out all the ideal image's pixels in the ROI from low frequency components.

Method for frequency domain
Convolution caused by diffraction is equivalent to filtering the ideal image with an ideal low pass filter. Assume that the low pass filter's amplitude is 1, without loss of generality. First, let's describe the method on a simple 1D signal, as shown in Fig. 3. Fig. 3(a) shows an ideal signal. It has only two unknown values, which are adjacent, while all the other values are zeros. Therefore, the condition of isolated lighting is fulfilled. Fig. 3(b) shows the ideal signal's Fourier spectrum. After low pass filtering, all the high frequency components are removed. Therefore, the known spectrum includes only low frequency components, and is named the "observed spectrum" here. For example, the leftmost two values in the spectrum are preserved and treated as the observed spectrum. In the 1D case, the low pass filter is equivalent to the IRF's Fourier spectrum, which is usually called the System Transfer Function (STF). In this case, there is a mathematical relationship between the ideal signal and the observed spectrum. Assume that: 1. The length of the ideal signal is L, and L ≥ 2; 2. The ideal signal's unknown values are located at p and q respectively, where both p and q are integers within [0, L − 1]. Arbitrarily choose two values F(u1) and F(u2) from the observed spectrum, where both u1 and u2 are integers. The formula of the 1D discrete Fourier transform is as follows [15]:

F(u) = Σ_{n=0..L−1} f(n) · e^(−j2π·u·n/L)    (5)

Since all the ideal signal's values are zeros except f(p) and f(q), the above formula becomes:

F(u) = f(p) · e^(−j2π·u·p/L) + f(q) · e^(−j2π·u·q/L)    (6)

Let u equal u1 and u2 respectively, substitute them into formula (6), and we get:

F(u1) = f(p) · e^(−j2π·u1·p/L) + f(q) · e^(−j2π·u1·q/L)
F(u2) = f(p) · e^(−j2π·u2·p/L) + f(q) · e^(−j2π·u2·q/L)    (7)

Solve the above system of linear equations, and we get:

f(p) = [F(u1) · e^(−j2π·u2·q/L) − F(u2) · e^(−j2π·u1·q/L)] / D    (8)
f(q) = [F(u2) · e^(−j2π·u1·p/L) − F(u1) · e^(−j2π·u2·p/L)] / D    (9)

where D = e^(−j2π·u1·p/L) · e^(−j2π·u2·q/L) − e^(−j2π·u2·p/L) · e^(−j2π·u1·q/L). In a numerical example, substituting the concrete spectrum values into formulas (8) and (9) gives f(p) = 6.7 and f(q) = 8.9. In other words, the ideal signal is recovered from the observed (low frequency) spectrum. It can be seen that only two frequency components of the observed spectrum are used. Actually, they could be chosen from the low frequency part, the high frequency part, or even an arbitrary part. The method works as long as a system of equations is built and a unique solution is obtained. In practice, it may help to relieve the effect of errors if more frequency components are used to build an overdetermined system of equations. Now we extend the procedure to 2D signals, and describe the situation of imaging. In this case, the STF is called the Optical Transfer Function (OTF), which is equivalent to the PSF's Fourier spectrum, as shown in Fig. 4. Fig. 4(a) shows an ideal 2D signal without filtering, which is named the "ideal image" here. Fig. 4(b) shows the ideal image's spectrum of the 2D discrete Fourier transform. After filtering by a low pass filter, the spectrum's components outside the dotted circle are all removed. Thereby, the known spectrum includes only the components inside the circle, and is named the "observed spectrum". Please note that the spectrum is shown conventionally in Fig. 4(b), i.e., with the low frequency components in the center. But it is still handled in its original format in the following procedure, i.e., with the high frequency components in the center. In this case, the condition of isolated lighting is fulfilled, and there is a mathematical relationship between the ideal image and the observed spectrum. Assume that: 1.
The ideal image is f(x, y), where x = 1, 2, ⋯, M and y = 1, 2, ⋯, N, and M and N are the numbers of the image's rows and columns, respectively; x and y start from (1, 1) at the image's top-left pixel; 2. The size of the ideal image's ROI is P × Q, and the row number and the column number of the ROI's top-left pixel are r and c, respectively; 3. The ideal image's full spectrum is F(u, v), where u = 1, 2, ⋯, M and v = 1, 2, ⋯, N; 4. Choose a rectangular area from the observed spectrum, with a size of P × Q; the row offset and the column offset of its top-left component are u0 and v0, respectively; for example, u0 = v0 = 0. The formula of the 2D discrete Fourier transform is as follows [11]:

F(u, v) = Σ_{x=1..M} Σ_{y=1..N} f(x, y) · e^(−j2π(u·x/M + v·y/N))    (10)

Since all the ideal image's pixels are zeros except those within the ROI, the above formula becomes:

F(u, v) = Σ_{x=r..r+P−1} Σ_{y=c..c+Q−1} f(x, y) · e^(−j2π(u·x/M + v·y/N))    (11)

Substitute the chosen components of the observed spectrum into formula (11), and we get a system of equations E · x = F, where E is a matrix with a size of (P · Q) × (P · Q), whose elements are the basis values e^(−j2π(u·x/M + v·y/N)); x is a matrix (vector) with a size of (P · Q) × 1, a sequence of all the pixels in the ideal image's ROI, arranged row by row from top to bottom; F is also a matrix (vector) with a size of (P · Q) × 1, a sequence of the chosen components of the observed spectrum, arranged row by row from top to bottom. Similar to the method for the spatial domain, vector x can be obtained by solving E · x = F, and then the ideal image can be obtained by rearranging x. Although a rectangular area is chosen from the observed spectrum here, other shapes or even randomly chosen components are also allowed in this method. The method works as long as a system of equations is built and a unique solution is obtained. In practice, it may help to relieve the effect of errors if more frequency components are used to build an overdetermined system of equations. It can be seen that a larger ROI can be recovered if more spectrum components are preserved after filtering. In an extreme case, the ROI is as large as the ideal image.
Since all the spectrum components are known in that case, the ideal image can be obtained directly by inverse Fourier transform or inverse filtering. Actually, this is the classic way of recovering a full image from its full Fourier spectrum. This study finds a further possibility: recovering an ROI image, with full details, from part of its Fourier spectrum (e.g., including only low frequencies). That means even low frequency components carry the full details of a sample's spatial structure, which seems inconsistent with the usual opinion. After excluding several explanations, we believe that the reason is the "integrity of spectrum", i.e., different frequency components are tightly related. Let's take the 2D case as an example. As can be seen from the formula of the 2D discrete Fourier transform, the spectrum is actually the accumulation of the products of each pixel f(x, y) with its corresponding basis function in the frequency domain. Each product is as follows:

f(x, y) · e^(−j2π(u·x/M + v·y/N))    (12)

This is a function that includes all frequency components, and its amplitude is determined by the corresponding pixel value f(x, y). When the pixel value varies, the function's values change accordingly at every frequency, by the same percentage. In other words, each pixel value is carried on the amplitude of its corresponding basis function, and the function's value carries its corresponding pixel value at each frequency (from the lowest to the highest). Taking an extreme situation as an example, there is only one function (product) in the image's spectrum when there is only one pixel in the image's ROI. In other words, the spectrum is the product of the only pixel with its corresponding basis function. When M, N and the pixel's location (x, y) are known, the basis function 1 · e^(−j2π(u·x/M + v·y/N)) is also known. Therefore, pick the observed spectrum's value at an arbitrarily selected frequency, divide it by the basis function's value at the same frequency, and the result is the unknown pixel value f(x, y). Actually, the selected frequency could even be the zero frequency, i.e., the DC component, in this case. This situation is similar to that in which an individual molecule's light intensity is extracted using Single-Molecule-Localization techniques. When there are more unknown pixels, they can be figured out by building and solving a system of equations, as shown by the aforementioned procedure. It can be seen that the full frequency spectrum is relatively redundant when the ROI is smaller than the ideal image. In this case, there is no need to recover the full spectrum, as conventional deconvolution techniques do, for the recovery of the ideal image. Besides, please note that the results of ideal low pass filtering extend indefinitely in the spatial domain, in both the 1D and 2D cases. If we want to get the spectrum accurately by Fourier transform, the observed signal should be fully (or at least sufficiently, in practice) collected in the spatial domain.
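The 2D frequency-domain procedure, including the single-pixel extreme case, can be sketched as follows. The image size, ROI location, and pixel values are illustrative assumptions; `numpy.fft.fft2` follows the same DFT convention as the formulas above, up to the zero-based index origin.

```python
import numpy as np

rng = np.random.default_rng(1)
M = N = 32                                    # full image size (illustrative)
r, c, P, Q = 10, 12, 3, 3                     # ROI top-left corner and size

ideal = np.zeros((M, N))
ideal[r:r+P, c:c+Q] = rng.uniform(1, 256, (P, Q))
spectrum = np.fft.fft2(ideal)

# Build the linear system from a low frequency block of the spectrum only.
freqs = [(u, v) for u in range(P) for v in range(Q)]
E = np.array([[np.exp(-2j * np.pi * (u * x / M + v * y / N))
               for x in range(r, r + P) for y in range(c, c + Q)]
              for (u, v) in freqs])
F = np.array([spectrum[u, v] for (u, v) in freqs])

roi = np.linalg.solve(E, F).real.reshape(P, Q)        # recovered ROI pixels
print(np.allclose(roi, ideal[r:r+P, c:c+Q]))          # True

# Extreme case: one unknown pixel at a known location can be read off from
# a single component, e.g., the DC component (whose basis value is 1).
single = np.zeros((M, N)); single[5, 7] = 12.3
dc = np.fft.fft2(single)[0, 0]
print(round(dc.real, 6))                              # 12.3
```

Note that only P · Q low frequency components are used to recover P · Q unknown pixels, in line with the "integrity of spectrum" argument above.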

a) Experiment for spatial domain
In theory, the Airy-disk-shaped PSF extends indefinitely. But according to the analysis in section 3, the convolution result in the ROI is affected only by the PSF's central area when the PSF is large enough. Therefore, a PSF of size 161 × 161 is adopted in this experiment, and its major part is shown in Fig. 5(a). The PSF's Fourier spectrum is shown in Fig. 5(b); it is an ideal low pass filter, and all the components more distant than 2 pixels from the center are removed. For each ROI size, 20 random tests are performed, and the resulting averaged errors are shown in Fig. 6, where the lateral axis shows the ROI's size, from 2 × 2 to 20 × 20 pixels, and the vertical axis shows the averaged error, i.e., the average of all the 20 testing errors for each size. The testing error is defined as ‖x − x′‖ / (M · N), which represents the mean error between the recovered pixels and their corresponding pixels in the ideal image. In the formula, x′ represents the vector rearranged from the ideal image's pixels in the ROI. All the averaged errors are also shown in Table 1, where 6.0E-08 means 6.0 × 10⁻⁸ and the other values are read similarly. It can be seen from Table 1 that the averaged errors are very tiny for sizes 2 × 2 and 3 × 3. Given that the maximal pixel value is 256, the recovered results can be treated as almost the same as the corresponding ideal images. That verifies the effectiveness of the method: the spatial resolution is increased by 3 times in each dimension if each pixel is resolved into 3 × 3 pixels with full details. For sizes 4 × 4 to 20 × 20, the averaged errors are larger, and it is hard to judge whether the method still works in these cases. Therefore, we substitute the vector of the ideal image, i.e., x′, into formula (4), and then calculate the difference value ‖A · x′ − b‖ / (M · N). By checking the difference value, we can see how well the ideal images fulfill the corresponding system of equations. For each ROI size, the difference values are averaged over all the 20 random tests. The results are named Averaged Differences (ADs), as shown in Table 2.
In addition, the standard deviations of the difference values are less than 1E-16 for all the above sizes. According to these results, the system of equations A · x = b models the imaging procedure accurately enough. That suggests that the large errors in Table 1 are not caused by the method's principle. More accurate results could be obtained if more effective approaches are used to solve the system of equations, and this could be done in future studies. Therefore, the method's effectiveness is also proved indirectly for sizes 4 × 4 to 20 × 20. In principle, this method could achieve unlimited resolutions.

b) Experiment for frequency domain
In this experiment, the observed image's frequency spectrum is zero except for its low frequency part. The ideal image's unknown pixels are figured out using the method described in section 4, and the resulting averaged errors are shown in Fig. 7. Similar to the experiment for the spatial domain, the lateral axis shows the ROI's size, and the vertical axis shows the averaged errors. All the averaged errors are also shown in Table 3. As can be seen from Table 3, the averaged errors are very tiny for sizes 2 × 2 to 5 × 5. Therefore, the recovered results can be treated as almost the same as the corresponding ideal images. That verifies the effectiveness of the method: the spatial resolution is increased by 5 times in each dimension if each pixel is resolved into 5 × 5 pixels with full details. For sizes 6 × 6 to 20 × 20, the Averaged Differences (ADs) are also calculated, as shown in Table 4. In addition, the standard deviations of the difference values are less than 1E-12 for all the above sizes. According to these results, the system of equations E · x = F models the imaging procedure accurately enough. More accurate results could be obtained if more effective approaches are used to solve the system of equations, and this could be done in future studies. Therefore, the method's effectiveness is also proved indirectly for sizes 6 × 6 to 20 × 20.
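The evaluation protocol used above can be sketched as follows. To keep the sketch self-contained, a Gaussian PSF is substituted for the experiment's ideal-low-pass PSF, and the sizes and trial count are illustrative; the two reported metrics mirror the averaged error ‖x − x′‖ / (M · N) and the averaged difference ‖A · x′ − b‖ / (M · N).

```python
import numpy as np

rng = np.random.default_rng(3)

def psf_val(dx, dy):
    # Illustrative strictly positive PSF (not the experiment's 161x161 one).
    return np.exp(-(dx * dx + dy * dy) / 8.0)

def trial(size):
    x_true = rng.uniform(0, 256, size * size)        # random ideal ROI
    A = np.array([[psf_val(i - k, j - l)
                   for k in range(size) for l in range(size)]
                  for i in range(size) for j in range(size)])
    b = A @ x_true                                   # simulated observation
    x = np.linalg.solve(A, b)                        # recovered ROI
    err = np.linalg.norm(x - x_true) / size ** 2     # averaged-error metric
    diff = np.linalg.norm(A @ x_true - b) / size ** 2  # averaged-difference metric
    return err, diff

results = {}
for size in (2, 3, 4):
    errs, diffs = zip(*(trial(size) for _ in range(20)))
    results[size] = (np.mean(errs), np.mean(diffs))
    print(size, results[size])
```

In this noiseless simulation the difference metric is trivially near zero by construction; in the paper's experiments it instead quantifies how accurately the linear system models the imaging procedure.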

Conclusion
Existing super-resolution techniques have not only broken the barrier of the diffraction-limit, but also significantly improved the resolution of microscopic images. In some techniques such as STED, PALM, STORM, etc., adjacent points are illuminated at different times. As a result, their locations and light intensities can be determined even if the images are blurred due to diffraction. We generalize this condition and add another constraint (i.e., ROIs should be smaller than the diffraction-limit), and find that the resulting condition is a "resolvable condition". Under such a condition, a sample's structure (including both profile and details) can be extracted directly from a blurred image, even if its size is much smaller than the diffraction-limit and its points are imaged together. In other words, the information in the ROI is not affected by the diffraction-limit, and can be resolved directly under this condition. Then, a technique is proposed based on image deconvolution and the above findings. Image deconvolution is usually employed for the recovery of degraded images, e.g., relieving the effect of defocused light. But the proposed technique can extract sharp images directly from diffraction-blurred images, and obtain the full details of samples' spatial structures, under the "resolvable condition". Therefore, this technique is named "Deconvolution Super-resolution (DeSu-re)". In principle, it could be used for resolving points in infinitely close proximity, and could thereby achieve unlimited resolutions. The simulation experiments directly prove 2~3 times of resolution improvement for the spatial method, and 2~5 times for the frequency method. In addition, they also indirectly prove 2~20 times of improvement for both methods. One future direction is to achieve higher resolution and verify the effectiveness in practice.
There are still some practical difficulties, especially the strong effect of distortions of the observed image and the PSF (e.g., noise) on the results. Therefore, it is also an important future direction to obtain practical results as close to the simulation results as possible. With the development of imaging devices and the improvement of the signal-to-noise ratio, the accuracy of the methods would also improve accordingly.