A Dynamically Adaptive Imaging Algorithm for Wavelet Encoded MRI
Lawrence P. Panych and Ferenc A. Jolesz
Harvard Medical School and Brigham and Women's Hospital
75 Francis Street
Boston, Massachusetts 02115
Running Title: Adaptive Wavelet Encoded MRI
Address Correspondence to:
Lawrence P. Panych
Department of Radiology
Brigham and Women's Hospital
75 Francis Street
Boston, Massachusetts 02115
Tel. (617) 278-0615
Fax (617) 278-0610
Published: MRM 32:738-748 (1994).
A new adaptive algorithm based on wavelet encoded MRI is presented
for application in dynamic imaging. This algorithm is adaptive because
the strategy for updating image data in the dynamic series of images is
determined by processing of the most recently acquired data. The spatially
selective multi-resolution properties of the wavelet transform are exploited
to selectively update only those regions of the field-of-view where change
is actually occurring. A theoretical imaging model is presented to motivate
use of the adaptive algorithm and simulation results using both artificial
and experimental wavelet encoded data are presented.
Key words: dynamic MRI; adaptive MRI; spatial encoding; wavelet encoding.
ACKNOWLEDGEMENTS Work performed with support from NIH CA45743-06.
In this paper, we outline a unique new approach to dynamic MR imaging based on wavelet transform encoding. This approach, which we call dynamically adaptive MRI, is motivated by the need for more efficient encoding methods for imaging dynamic, time-varying processes. Dynamic MRI differs from standard MRI in that a sequence of time-ordered images is obtained by continually updating image data. Dynamic MRI has been applied to investigating changes following bolus injection of contrast agents, studying physiological motion, imaging cerebral function, and monitoring some surgical procedures. To date, dynamic studies have employed either echo-planar, low flip-angle gradient-echo, or RARE techniques with temporal resolution ranging from about 1 image per minute to 5 images per second. Recently, `keyhole' techniques have been introduced for dynamic imaging applications. These techniques are a modification of basic Fourier encoding and are based on the assumption that, in certain dynamic studies, updating high spatial frequency data is redundant. Using keyhole techniques, images can be produced at a faster rate with no sacrifice in resolution.
We are investigating alternative approaches to reducing redundancy in data acquisition through the use of adaptive methods. Adaptive methods are distinguished as those where the image data acquisition strategy is modified dynamically depending on information obtained after processing the most recently acquired data. Our hypothesis is that significant improvement in temporal resolution, without loss of image quality, can be obtained in dynamic MRI if an adaptive method is used to minimize redundancy in image encoding and data acquisition.
In this paper, we will show that, when using wavelet encoded MRI, we can adaptively locate, and then finely resolve, only those regions of the field-of-view where change is occurring. This is because wavelet encoding has the unique capability to `selectively resolve' images, in the sense of including high resolution detail only within certain regions-of-interest. For example, Figure 1 (left side) shows a wavelet encoded image of a kiwifruit that we first published in earlier work. To the right of this image is another image reconstructed after approximately 2/3 of the original wavelet data values were set to 0. Resolution in the second image is spatially variable because wavelet coefficients encoding high resolution detail were not included for all parts of the field-of-view. In Fourier imaging, if high resolution information is removed, the entire image is affected. Therefore, it is not possible in standard Fourier encoding to selectively resolve an image as is done in the wavelet case.
Figure 1: Reconstruction of wavelet encoded image using all data (left side image) and using only 1/3 of the data (right side image) to demonstrate spatially selective resolution of the wavelet transform. Data was obtained by wavelet encoding in the horizontal direction and frequency encoding in the vertical direction.
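This selective-resolution behavior can be sketched numerically. The fragment below is a minimal 1-D illustration that uses a Haar transform as a stand-in for the Battle-Lemarie basis used in our experiments; the 64-point signal, the feature positions, and the choice of which coefficients to discard are all hypothetical.

```python
# Minimal sketch of `selective resolution': discard fine-scale wavelet
# coefficients over one half of the field-of-view and reconstruct.
# (Haar basis used for transparency; signal and region are hypothetical.)
import numpy as np

def haar_dwt(x, levels):
    """Decompose x into (coarse approximation, [coarsest ... finest detail])."""
    coeffs = []
    a = x.astype(float)
    for _ in range(levels):
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2.0))  # detail at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)             # coarser approximation
    return a, coeffs[::-1]                                 # coarsest detail first

def haar_idwt(a, coeffs):
    """Rebuild the signal by adding detail from coarsest to finest scale."""
    for d in coeffs:
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2.0)
        up[1::2] = (a - d) / np.sqrt(2.0)
        a = up
    return a

x = np.zeros(64)
x[10:13] = 1.0                       # feature in the left half
x[40:43] = 1.0                       # feature in the right half
a, det = haar_dwt(x, 3)

# Zero the finest-scale detail over the left half only (each finest Haar
# coefficient covers 2 samples, so the first 16 cover samples 0..31):
fine = det[-1].copy()
fine[:16] = 0.0
y = haar_idwt(a, det[:-1] + [fine])
```

Zeroing the finest-scale coefficients over the left half blurs only the left feature; the right half of the field-of-view is reconstructed exactly, which is the spatially variable resolution illustrated in Figure 1.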
Exploiting this selective resolution capability of wavelet encoded MRI, we have developed a wavelet-based dynamically adaptive imaging algorithm. In the next section, we briefly describe the method of wavelet encoding. Next, we develop a multi-resolution imaging model based on the unique characteristics of wavelet encoded data from which we derive the wavelet-based adaptive imaging algorithm. In later sections, we present our simulation and experimental results when applying this new adaptive imaging algorithm.
Adopting an approach similar to that first proposed by Weaver and Healy, we have implemented wavelet encoded MRI. This was accomplished using a modified line scanning sequence similar to that shown in Figure 2. With this pulse sequence, one dimension was frequency encoded as usual and the second dimension was wavelet encoded by exciting a unique wavelet-shaped RF profile on each of a series of excitations.
Figure 2: Pulse sequence for wavelet transform encoding.
Figure 3: A wavelet basis is constructed by scaling the basic wavelet. The top row of plots shows Battle-Lemarie wavelets at three scales. At each scale the wavelets are translated in increments proportional to their width, as shown in the bottom row of plots for scale j. In this figure, only every 16th translate is plotted and the positions of all other translates are shown with tick marks.
The wavelet functions that define the RF excitation profiles form an
orthogonal basis set. All wavelet functions in the set have the same basic
shape but have different widths corresponding to different levels of resolution
or scale. Unlike the Fourier basis functions, wavelets are spatially localized
and, therefore, to cover the full field-of-view, wavelet profiles must
also be generated at different locations across the field-of-view. Figure
3 demonstrates how a set of wavelets
for encoding is constructed by scaling (in factors of 2) the `basic' wavelet,
and then translating each of the scaled wavelets in steps proportional
to their widths. The set of these scaled and translated wavelets, $\{\psi_{j,k}(x)\}$, is defined in Eq. 1,

$\psi_{j,k}(x) = 2^{-j/2}\,\psi(2^{-j}x - k)$,   [1]

where $\psi(x)$ is the basic wavelet, j indexes the scale and k the translation.
A discrete wavelet transform of a function f(x) comprises coefficients $d_{j,k}$ as defined in Eq. 2,

$d_{j,k} = \int f(x)\,\psi_{j,k}(x)\,dx$,   [2]

which, according to the multi-resolution description of Mallat, encode detail at different levels of resolution or scale (index j) within different portions of the field-of-view (index k).
The detail is defined as the information linking the approximation of a function at one resolution with its approximation at a resolution which is twice as fine. Resolution is defined such that, at the resolution $2^{-j}$, a function is approximated by evenly spaced samples separated by $2^{j}$ units. Note that, by the convention adopted here, a higher index j corresponds to a poorer (coarser) resolution.
If the function is encoded to J scales or levels of resolution, then the discrete wavelet transform includes the sets $\{d_{j,k}\}$, where $j = 1, \ldots, J$. Each set contains $N/2^{j}$ values, where N is the total number of coefficients in the discrete transform. Note that the finest level of resolution (j=1) contains the most detail. In addition to these sets of detail coefficients, the discrete wavelet transform of a function must also include a final set of $N/2^{J}$ coefficients which represents the discrete approximation of the function at the coarsest scale, J. Because the wavelet basis is orthogonal, the total number of wavelet coefficients necessary to encode a field-of-view is no more than the required number of Fourier coefficients.
In our implementation, we used the Battle-Lemarie wavelet basis, encoding at J=3 levels of resolution. According to the multi-resolution interpretation of a wavelet transform decomposition, this wavelet encoded data represents (a) a set of very coarse resolution data plus (b) detail data at each of the 3 additional levels of resolution. To reconstruct a high resolution image, the detail information is added in stages to the coarse resolution data. The coarsest detail is added first, followed by the next coarsest detail and then the fine detail. The wavelet reconstructions are very efficient and are comparable in cost, for example, to the fast Fourier transform (FFT).
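The staged reconstruction just described can be sketched as follows. For transparency this sketch uses the Haar wavelet rather than the Battle-Lemarie basis, and the 64-point test signal is hypothetical.

```python
# Sketch of staged multi-resolution reconstruction: start from the coarse
# approximation and add detail one level at a time, doubling the number of
# samples at each stage.  (Haar stand-in; test signal is hypothetical.)
import numpy as np

def haar_analysis(x, J):
    """Return (coarse approximation, {j: detail at scale j}), j=1 finest."""
    a = x.astype(float)
    details = {}
    for j in range(1, J + 1):
        details[j] = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a, details

def refine(a, d):
    """One synthesis step: approximation + detail -> twice-as-fine approximation."""
    up = np.empty(2 * a.size)
    up[0::2] = (a + d) / np.sqrt(2.0)
    up[1::2] = (a - d) / np.sqrt(2.0)
    return up

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
a, details = haar_analysis(x, 3)      # 8 approximation + 8 + 16 + 32 detail values

stages = [a]                          # add detail coarsest-first, as in the text
for j in (3, 2, 1):
    stages.append(refine(stages[-1], details[j]))
```

For a 64-point signal and J = 3 the decomposition holds 8 approximation values plus 8 + 16 + 32 detail values, and each synthesis stage doubles the number of samples until the full-resolution signal is recovered.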
An important consequence of the fact that wavelets are spatially localized is that the multi-scale detail information contained in each wavelet coefficient pertains to a specific section of the field-of-view. It is because of this unique combination of the spatial and the scale selectivity of the wavelet transform that it becomes possible to selectively resolve images when wavelet encoding. In the next sections, we describe a model based on the special features of wavelet encoding from which we derive a dynamically adaptive imaging algorithm.
Multi-resolution transforms have been applied extensively in image processing to solve problems such as template matching, edge detection and data compression, and these ideas are exploited in the development of our wavelet-based adaptive imaging algorithm. The basic idea is that features can be detected by examining information in representations of the image at multiple scales, starting at the coarsest resolution and continuing until the finest resolution is reached. Coarser resolution representations (images) are obtained by low-pass filtering of the original data and sampling the result at a rate proportional to the filter bandwidth. Detection of a feature in one area of a low resolution image narrows the search at the next finer level.
As a simple example of the use of multi-resolution processing, consider the problem of searching a 256x256 image to locate the one pixel that has an intensity significantly greater than 0. The image might represent, for example, the output from image processing feature enhancement filters. Assume that, as a result of the prior processing, there also exists 128x128, 64x64, 32x32, 16x16, 8x8, 4x4, and 2x2 images, each of which is a progressively crude approximation of the original 256x256 image. Assume, for example, that each pixel in the 128x128 image is a sum of the pixel intensities within 2x2 blocks of the original image. Each pixel in the 64x64 image is a sum of 4x4 blocks of pixels in the original image, and so on.
As an alternative to an exhaustive search of every pixel in the 256x256 image to locate the pixel of interest, we can search the cruder resolution images beginning with the 2x2 image. Location of the pixel of significant intensity in the 2x2 image allows us to restrict the search of pixels in the 4x4 image to the 4 pixels of one quadrant. Searching the pixels in the selected quadrant of the 4x4 image will then allow us to restrict the search of the 8x8 image, and so on. In each image only 4 pixels need to be searched. Thus, by using the coarse resolution images, a total of only 32 pixels is searched compared to as many as 256x256 pixels if the original image was searched directly. Highly localized features can be detected in this way without exhaustive searching of the entire high resolution image. This is a 2D analogue of the bisection method  used to find the roots of a polynomial.
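The coarse-to-fine search described above can be sketched directly. This is an illustrative implementation under the stated assumptions (a single significant pixel, pyramid levels formed by 2x2 block sums); the image size and pixel location are hypothetical.

```python
# Sketch of the pyramid search: locate one "hot" pixel in a 256x256 image
# by examining only 4 pixels per pyramid level.
import numpy as np

N = 256
img = np.zeros((N, N))
img[137, 201] = 1.0                   # the one pixel of significant intensity

# Build the pyramid by summing 2x2 blocks, as in the text: 128x128 ... 2x2.
pyramid = [img]
while pyramid[-1].shape[0] > 2:
    p = pyramid[-1]
    pyramid.append(p[0::2, 0::2] + p[0::2, 1::2] + p[1::2, 0::2] + p[1::2, 1::2])
pyramid = pyramid[::-1]               # coarsest (2x2) level first

r = c = 0
examined = 0
for level in pyramid:
    # Only the 2x2 block selected by the coarser level needs to be searched.
    block = level[2 * r:2 * r + 2, 2 * c:2 * c + 2]
    dr, dc = np.unravel_index(np.argmax(block), block.shape)
    r, c = 2 * r + dr, 2 * c + dc     # position within the current level
    examined += block.size
```

Only 4 pixels are examined at each of the 8 pyramid levels, so 32 pixels are searched in total, compared to as many as 256x256 for a direct search.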
The orthogonal wavelet transform is perfectly suited to this sort of application because it extracts the multi-resolution detail information from an image without any redundancy in the representation . For features, such as edges, that generally contain detail at all levels of resolution, a detection algorithm need only consist of tagging those areas where significant energy exists in the wavelet transform coefficients and searching through the coefficients from coarsest to finest resolution in order to locate the edges. If the edge has significant detail at each level of resolution, then there is an exact analogy with the previous one-pixel example and the search strategy is guaranteed to find the edge (just as the bisection method will find the root of a polynomial). This idea has been exploited with success for compression of image data  and is the essence of the adaptive algorithm using wavelet encoding that is presented here.
In order to motivate our wavelet-based
adaptive algorithm, a model of the structure of image information is presented.
In this model it is assumed that all useful information in an image can
be decomposed on a hierarchical binary tree structure of detail at multiple
levels of resolution. The model is described by the following points.
Figure 4: Multi-resolution decomposition of the field-of-view.
Figure 5: Hierarchical tree model of the detail in a MR image.
As an example to demonstrate the above ideas, Figure 6 shows a plot of a 64-point 1-dimensional image, s(n), along with its detail coefficients in a 3-level wavelet decomposition.
Figure 6: A 64-point 1-dimensional image, s(n), and its detail from a 3-level Haar wavelet decomposition. The decomposition consists of 32 high resolution detail coefficients ($d_{1,k}$), 16 medium resolution coefficients ($d_{2,k}$) and 8 coarse resolution detail coefficients ($d_{3,k}$). Note that the wavelet decomposition is completed by the addition of an 8-point coarse approximation (not shown) of s(n). The detail coefficient at the bottom left is considered insignificant by our multi-resolution tree model even though it produces the large spike on the left side of s(n).
Assume that all the detail coefficients marked with an ``*'' surpass a preset threshold. The detail coefficient at the bottom right of Figure 6 is marked with an ``*'' and passes our test of significance. It is significant, however, not only because its energy is above the significance threshold, but also because the energy of the corresponding detail coefficients at all coarser resolutions in the multi-resolution decomposition is also above the significance threshold. The energy of the detail coefficient at the bottom left also surpasses the preset threshold; it is not, however, considered significant detail because there is no corresponding significant detail at the next coarsest level.
Removing this coefficient will eliminate the large spike on the left side of s(n). This may, or may not, cause a loss of information and points out the importance of justifying (1) the validity of the assumption of the multi-resolution tree model and (2) a theory for setting the thresholds. With reference to the first point, we are not aware of, and do not attempt to give, a rigorous theoretical justification of the multi-resolution model. Our justification is based somewhat more loosely on the experience of others in the signal processing community who have successfully applied multi-resolution approaches for edge and singularity detection. Mallat, for example, demonstrates that singularities such as edges are distinguished from noise by how the energy in the singularity propagates from scale to scale. The model is presented here more as an intuitively appealing hypothesis that must be justified by experimental results. As to the second point, a method for setting thresholds on theoretical grounds will be given in the next section.
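The tree model's significance test can be sketched as a small recursion: a detail coefficient is kept only if its own energy and the energy of every coarser ancestor on its branch exceed the threshold. The coefficient values and threshold below are hypothetical; the isolated fine-scale spike mimics the bottom-left coefficient discussed above.

```python
# Sketch of the multi-resolution significance test: a coefficient is
# significant only if its whole branch up to the coarsest scale is.
# (Values and threshold are hypothetical.)
import numpy as np

# Hypothetical 3-level detail sets for a 64-point signal, as in Figure 6.
d = {1: np.zeros(32), 2: np.zeros(16), 3: np.zeros(8)}
d[3][7] = 4.0
d[2][15] = 3.0
d[1][31] = 2.0          # a fully significant branch (bottom-right of Figure 6)
d[1][0] = 5.0           # a large but `orphaned' fine-scale spike (bottom-left)

THRESHOLD = 1.0

def significant(j, k):
    """d[j][k] is significant iff it and all coarser ancestors pass the test."""
    if abs(d[j][k]) < THRESHOLD:
        return False
    return j == 3 or significant(j + 1, k // 2)

kept = {j: [k for k in range(d[j].size) if significant(j, k)] for j in d}
```

The large fine-scale coefficient at k = 0 is discarded because its parent at the next coarsest level is below threshold, exactly the situation illustrated in Figure 6.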
The multi-resolution tree model of the previous section presents an idealized view of the structure of image data. In this section, the model is extended to include a prescription for including noisy measurements of the wavelet coefficients and determining significance thresholds. It draws on the theory of stochastic estimation  and is inspired by the work of Chou  who developed a stochastic model and optimal filtering algorithms for multi-scale wavelet data.
Given the goal of avoiding the acquisition of redundant data in a dynamic imaging series, a reasonable first assumption is that, in the absence of evidence to the contrary, there is no change in an arbitrary detail coefficient, D, from one time, t, to a short time, $\Delta t$, later. We formalize this assumption with Eq. 3,

$D(t + \Delta t) = D(t) + v(t)$,   [3]

where v(t) is a zero-mean random sequence that represents the uncertainty of the assumption. (v is usually referred to as the `model noise' and its energy is a measure of the probability that change actually occurs.) A measurement of D can be made at any time and the measurement, written as Z, can be represented as the sum of the `true' value plus some measurement noise, w, as expressed in Eq. 4,

$Z(t) = D(t) + w(t)$.   [4]
Now, given the measurement, Z, of D at time $t + \Delta t$ plus an estimate of D at time t, we require an updated estimate of D for the time $t + \Delta t$, which is calculated using Eq. 5,

$\hat{D}(t + \Delta t) = \hat{D}(t) + K\,[Z - \hat{D}(t)]$.   [5]
K is a gain factor between 0 and 1 defined in order to weight the measurement according to our confidence in it. If confidence in the measurement is high then K should be set close to 1. On the other hand, if the measurements are known to be very noisy, then K should be close to 0.
K should reflect the balance between the uncertainty of the assumptions (represented mathematically here by the model noise in Eq. 3) and the measurement noise (Eq. 4). This is given in Eq. 6,

$K = E[v^2] \,/\, (E[v^2] + E[w^2])$,   [6]

where E[x] means `expectation value of x'. Eq. 6 can be derived by minimizing the mean square error of the estimate and assuming that the model noise dominates accumulated errors.
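As a concrete sketch of the Eq. 5 update with the Eq. 6 gain, consider a single detail coefficient tracked over repeated noisy measurements. The true coefficient value and the noise energies below are hypothetical.

```python
# Scalar sketch of the estimate update (Eq. 5) with the gain of Eq. 6.
# (Coefficient value and noise energies are hypothetical.)
import numpy as np

def update(d_est, z, Ev2, Ew2):
    """One update step: gain balances model noise against measurement noise."""
    K = Ev2 / (Ev2 + Ew2)
    return d_est + K * (z - d_est)

rng = np.random.default_rng(1)
true_d, Ev2, Ew2 = 2.0, 1.0, 0.25     # hypothetical detail value, noise energies
d, ds = 0.0, []
for _ in range(50):
    z = true_d + np.sqrt(Ew2) * rng.standard_normal()   # noisy measurement
    d = update(d, z, V2 := Ev2, Ew2)
    ds.append(d)
```

With dominant model noise ($E[v^2] \gg E[w^2]$) the gain approaches 1 and the estimate follows each new measurement; with dominant measurement noise the gain approaches 0 and the old estimate is retained, as described in the text.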
The measurement noise energy, $E[w^2]$, can be estimated experimentally without difficulty. We do not know $E[v^2]$, however, so we estimate it from the change in the estimate of the detail at coarser resolution. In the absence of any other information, a reasonable measure of the probability that change has occurred is the amount of change estimated from measurements at coarser resolution.
For example, consider one of the intervals of the multi-resolution decomposition of the field-of-view, as defined in the previous section. On this interval we define a new gain as given in Eq. 7, in which the proportionality constant is set arbitrarily to 1. This gain factor is analogous to the K defined previously except that we assign it to a specific interval in the multi-resolution decomposition of the field-of-view, and it is used in place of K in Eq. 5 to update estimates of the finer resolution detail coefficients that are assigned to the two intervals directly below.
Equation 8 gives the relation for updating an estimate of the detail coefficient at time $t + \Delta t$ given an estimate at time t and a gain factor computed from the coarser resolution data. A similar equation can be written for the neighboring detail coefficient. We have high confidence in the measurements if the gain is close to 1. If the gain is close to 0, then measurements at the finer resolution will not be heavily weighted. In the adaptive MR imaging algorithm described in the next section we will use this gain to define a threshold in order to decide if the finer resolution detail coefficients should be acquired.
The goal of dynamically adaptive imaging is to speed up the imaging process by a selective data acquisition strategy that attempts to restrict new acquisitions to only those regions where change is occurring. In the dynamic imaging algorithm to be described here, it is assumed that some prior estimate (at time t) of the wavelet detail coefficients already exists. New coefficient data (at time $t + \Delta t$) is then acquired, starting with the coarser resolution detail coefficients. The change in the estimates of these detail coefficients at coarse resolution is used to decide if higher resolution data should be acquired. The following is a detailed description of the dynamic imaging algorithm based on a J-level wavelet encoding.
This algorithm implements a top-down left-to-right ordering of data acquisitions along the tree structure shown in Figure 5 using the function MOVE-DOWN-TREE, which recursively calls itself twice. The first time the function calls itself is to follow the left branch of the tree and the second time is to follow the right branch. Note that there are three conditions that can cause a RETURN in the function, thereby ending the downward movement on the tree. Downward movement ends when: (1) Both branches are explored; (2) The bottom of the tree is reached (j=0); or (3) The energy of change in the detail at the coarser resolution is below the threshold. If the last condition were removed, this would be a simple binary search algorithm. The last condition is responsible for `pruning' the tree to avoid acquisition of `insignificant' image detail.
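The recursion can be sketched as below. This is a toy, noiseless version: MOVE-DOWN-TREE is written as move_down_tree, acquire() stands in for an actual wavelet-encoded measurement, and the coefficient values and threshold are hypothetical. Change in a coarse coefficient opens (or prunes) the two branches beneath it, as described above.

```python
# Toy sketch of MOVE-DOWN-TREE: acquire coarse coefficients, recurse to the
# two finer-scale children only where significant change is detected.
# (Coefficients, threshold and the noiseless acquire() are hypothetical.)
import numpy as np

J, N = 3, 64
# `True' detail coefficients: change confined to one branch (k = 0).
true_detail = {j: np.zeros(N >> j) for j in range(1, J + 1)}
for j in range(1, J + 1):
    true_detail[j][0] = 5.0

estimate = {j: np.zeros(N >> j) for j in range(1, J + 1)}
acquired = []
THRESHOLD = 1.0

def acquire(j, k):
    """Stand-in for a wavelet-encoded measurement (noiseless here)."""
    acquired.append((j, k))
    return true_detail[j][k]

def move_down_tree(j, k):
    """Acquire d[j][k]; follow the left then right branch if change is large."""
    z = acquire(j, k)
    change = abs(z - estimate[j][k])
    estimate[j][k] = z
    if j == 1:
        return                      # bottom of the tree reached
    if change < THRESHOLD:
        return                      # prune: no significant change below
    move_down_tree(j - 1, 2 * k)    # left branch
    move_down_tree(j - 1, 2 * k + 1)  # right branch

for k in range(N >> J):             # one tree per coarsest-level interval
    move_down_tree(J, k)
```

In this example only 12 of the 56 detail coefficients are acquired: all 8 at the coarsest level, plus the 4 that lie beneath the single coarse coefficient where change occurred.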
A unique characteristic of the signal-to-noise ratio for wavelet encoding complements this adaptive algorithm. In wavelet encoding, the volume of spins excited is dependent on the level of resolution of the detail coefficients being encoded because the wavelet excitation profiles have variable widths. If the maximum flip angle of each wavelet encode is constant, then the signal-to-noise ratio also varies with the level of resolution . Signal-to-noise ratios are highest for the coarsest detail coefficients in wavelet encoding. This is extremely advantageous for the adaptive algorithm because the prediction of whether or not detail is significant is based on estimates at coarser resolution.
Note that this algorithm is described for encoding along one dimension only. This seems reasonable since the implementation of wavelet MR encoding is done only along one dimension. However, the fact that the other dimension is frequency encoded has an important implication for the practical application of this algorithm in its present form. Because one dimension is frequency encoded, we never actually encode only a single wavelet coefficient. After exciting with one wavelet-shaped profile and Fourier transforming the echo data, an entire line of wavelet encoded values is obtained. Thus, the decision about whether or not to make further measurements at higher resolution should not be made on the basis of a single encoded value but must consider the change in detail within the entire line of encoded values. In our simulation and experimental work discussed in the next sections, for simplicity, we have ignored this fact and applied the algorithm independently to each line of data in the wavelet encoded direction. Our work is, therefore, not strictly correct in terms of simulating a wavelet-based adaptive imaging scenario. However, we believe that it is sufficient to demonstrate the important principles and that further development of the algorithm for a more exact representation of reality is a practical matter of limited interest.
A simulation was designed to test the adaptive algorithm under dynamic imaging conditions and compare results to those obtained by Fourier and wavelet encoding. We simulated 2D image encoding where one direction (the horizontal) is frequency encoded and the other direction (the vertical) is either phase or wavelet encoded. Motion is in the vertical direction only, thus, all artifact due to motion appears only in the phase or wavelet encoding direction.
An ideal, 128x128, noise-free MR density distribution is shown in Figure 7a. As each of 128 wavelet or Fourier coefficients is acquired in order to resolve the density function, the thin rectangle's position is changed by 1 pixel (total field-of-view = 128 pixels), where the direction is determined by a random function. During the 128 data acquisition intervals (each of duration $\Delta t$), the rectangle drifts across a large part of the field-of-view. Figure 7b shows the position of the rectangle at the end of the acquisition period ($t = 128\Delta t$).
Figure 7: (a) A grey level representation of a simulated time-varying MR density at the beginning and (b) at the end of data acquisition (128 seconds later). The position of the thin rectangle changes by 1 pixel in each period $\Delta t$, where the direction of motion is random. (c) Fourier encoded image reconstructed from data obtained by simulating random motion during imaging. (d) Wavelet encoded image reconstructed from data obtained by simulating random motion during imaging.
We simulated the effect of the motion of the thin rectangle on a phase encoded image as follows. Assume that frequency encoded data (128 points) from each phase encoding step (128 steps total) is Fourier transformed and stored in the rows of a 128x128 data matrix, S. To simulate the acquisition of this data for the nth phase encode, the MR density distribution for time $n\Delta t$ is Fourier transformed along each of its columns and the nth row of the result is inserted into S. Note that, because of the motion of the rectangular shape, the data is not the same in each time interval and, therefore, data for the nth encode is time dependent. The simulated image is obtained by computing $F^{-1}S$, where $F^{-1}$ is the inverse Fourier transform matrix.
The effect of the motion of the thin rectangle on a wavelet encoded image is simulated in a similar manner. To simulate the acquisition of data for the nth wavelet encode, the MR density distribution for time $n\Delta t$ is wavelet transformed along each of its columns and the nth row of the result is inserted into a data matrix, S. The simulated image is obtained by computing $W^{-1}S$, where $W^{-1}$ is the inverse wavelet transform matrix.
A simulation of adaptive imaging was also done using the artificial density data. As explained in Section 4, the adaptive algorithm for dynamic imaging starts with an initial estimate of all wavelet coefficients and tries to update only those where change is occurring. In the simulation, the initial estimate was set to 0. As in the non-adaptive simulation, each encoding step consisted of taking values from the artificial density data for the corresponding time interval. However, the decision of which set of data to acquire in each encoding time interval was under the control of the adaptive algorithm. After each 128 time intervals, the same data were used again (in reverse time order) to extend the length of the simulation.
As discussed previously, the adaptive algorithm chooses which data values should be encoded based on a prediction of where change occurs. Thus, in general, each new image update will be done with fewer than 128 values. Each time the algorithm completes processing of the tree, a new image estimate is obtained. Because a different number of encodes is in general used for each update, the time interval between successive images also varies.
The adaptive algorithm discussed in Sec.4 was applied to experimental wavelet encoded image data. A phantom was wavelet encoded to 3 levels of resolution using the method described in our previous work . The phantom consisted of 2 syringes, a large one and a small one, lying next to each other and containing NiCl-doped water.
Figure 8 shows reconstructed wavelet images of the phantom (wavelet encoding in the vertical direction) with the syringes shifted slightly with respect to each other. We wished to demonstrate that the adaptive algorithm can be applied to actual image data collected when a change occurs within the field-of-view. A complete demonstration should be under dynamic conditions where image data are collected in the presence of continuous sample motion. At this time, however, the capability to apply the algorithm in real time has not been implemented.
Figure 8: (a) Wavelet encoded image of a phantom consisting of 2 syringes containing NiCl-doped water. Wavelet encoding is in the vertical direction (128 encodes). (b) Wavelet encoded image of the same phantom with the syringes moved slightly with respect to each other. Data for both images were obtained under static conditions.
Here we simulate the situation where we have used wavelet encoding to produce an image when the field-of-view is static throughout data acquisition. However, at time t an instantaneous change occurs, moving the smaller syringe from its original position in Figure 8a to the position shown in Figure 8b. All data collected for a new image estimate is assumed to be collected after this change occurs.
The wavelet data used to produce Figure 8a was first processed by the adaptive algorithm to remove insignificant detail. We call this the image estimate at time t. An updated estimate of the wavelet coefficients of an image at time $t + \Delta t$ was then obtained by further application of the adaptive algorithm. The change in the detail coefficients at the coarser resolutions was used to determine if new measurements at higher resolution should be made. If it was determined by the adaptive algorithm that measurements should be made, the measurements were simulated by taking the appropriate data from the wavelet coefficients of the image in Figure 8b.
Results of simulating Fourier and non-adaptive wavelet encoding in a dynamic imaging situation are shown in Figures 7c and 7d. The Fourier encoded image is almost completely obscured by artifact due to the motion. There is also noticeable artifact in the wavelet encoded image, but what is significant in this case is that the artifact is confined to the region where the motion occurs and has no noticeable effect on the static structure at the bottom of the image.
Figure 9 shows the last 4 images from the simulation of continuous adaptive imaging. These images are similar to the non-adaptive wavelet encoded image shown in Figure 7d in the sense that the artifact is confined to the location where the motion occurs.
Figure 9: Simulation of 4 images reconstructed from data selected by the adaptive algorithm. During data acquisition, the thin rectangle changes position by 1 pixel (in a random direction) after each period and drifts back and forth between the two positions shown in Figures 7a and 7b over 256 periods. On average, 44 encoding steps (requiring 44 periods) were used to update each image, compared to the 128 encoding steps needed to acquire data for the simulated images in Figures 7c and 7d.
Except for the right-most image in the series, the adaptive images have less artifact than Figure 7d, and the different positions of the rectangle are apparent. There is considerable variability in the artifact of each image. The exact form of the artifact cannot generally be predicted since the wavelet coefficients which are chosen to be updated are selected by the adaptive algorithm according to its dynamic predictions about the location of change. On average, 44 coefficients had to be updated for each new image, so that data for about 3 of the adaptive images could be acquired in the time taken to acquire a complete set of 128 coefficients.
Figure 10 shows a comparison of the high resolution (j=1) detail in the image both before and after processing by the algorithm. There is a significant reduction of the noise in the detail image due to processing. Detail is left virtually untouched near edges of the phantom indicating that the algorithm has predicted these to be regions of significant detail.
Figure 10: (a) High resolution detail from the wavelet encoded image in Figure 8a before processing by the adaptive algorithm and (b) after processing.
Figure 11a shows the reconstruction of an image simulating the dynamic update for time $t + \Delta t$. Just under 1/2 of the coefficients were selected to be updated by the algorithm. Figure 11b shows the difference image of 11a and 8b. The image estimate is updated by the algorithm without any significant loss of information, as demonstrated by the difference image. This result demonstrates that, given actual wavelet encoded MRI data, the algorithm works in the sense that it can find the set of data that should be used for an image update.
Figure 11: (a) Wavelet encoded image obtained by starting with the wavelet encoded data used to reconstruct the image in Figure 8a and then updating with data selected from the wavelet coefficients of Figure 8b by using the dynamically adaptive algorithm. (b) Difference image of (a) and Figure 8b.
In this paper we have introduced a dynamically adaptive imaging algorithm based on wavelet transform encoding. In earlier work  we have shown that wavelet encoding of MRI images is feasible and (apart from signal-to-noise ratio considerations) the quality of such images compares favorably with phase encoded images. In this work we explore an application of wavelet encoding in MRI that exploits the spatially selective multi-resolution properties of the wavelet transform. It is this feature which makes selective resolution of images possible and further motivates the development of adaptive acquisition strategies.
Our adaptive acquisition algorithm is based on the idea of using the change in image data at coarse resolution as a predictor of change at finer resolutions. Savings in imaging time (by a factor of 3 or more) may be obtained when dense high resolution data is not updated in a dynamic imaging series because it is determined that such updating would be redundant. This differs from the keyhole type approach, which limits the acquisition of higher resolution data based on a blanket assumption that updating of all high resolution data is redundant. Our assumption that changes in high resolution detail can be predicted from coarser resolution data is not rigorously justified. However, it is similar to assumptions applied successfully in other applications (e.g. image data compression), it has intuitive appeal, and it has been shown to be successful in our simulation experiments.
The imaging method proposed here is of particular value when changes occurring during dynamic imaging are localized within a relatively small section of the field-of-view. When changes are scattered rather than contiguous, but still confined to a small part of the field-of-view, the method can still be of considerable benefit. To handle these cases, however, we are investigating the use of encoding functions built from linear combinations of the standard wavelets in order to obtain better signal-to-noise ratios.
In dynamic imaging, there is a benefit to using the locally supported standard wavelets for encoding because, if changes are spatially localized, artifact may be significantly reduced compared with Fourier imaging. This paper has shown an example of such a benefit when an adaptive data acquisition strategy is used. However, when there is little or no change, either phase encoding or wavelet encoding using linear combinations of wavelets is preferable in order to maximize signal-to-noise ratios. Therefore, further development of the adaptive method described here is now focused on interleaving encoding using standard wavelets with encoding using linear combinations of these functions. With this approach we aim to combine the superior artifact suppression of standard wavelet encoding with the higher signal-to-noise ratios that can be obtained using combinations of wavelets (e.g., wavelet packets, whose signal-to-noise properties are discussed elsewhere).
The choice of which wavelet basis to use is an issue not addressed in this paper. All of our experimental work has been done using the Battle-Lemarié basis. These basis functions are not well localized spatially (compared, for example, with the Haar and Daubechies bases). They are, however, convenient for MRI implementation because they require much shorter RF pulses than the less smooth wavelet functions. These issues are discussed in detail elsewhere.
The computational requirements of the algorithm are not severe, requiring, on average, only a few floating point operations per image pixel update. We have not implemented the algorithm for real-time imaging. Under real-time conditions, even though the computation is light, proper synchronization of data acquisition with computation may require a dedicated external processor communicating with the imaging system. If real-time reconstruction of images is desirable, this poses no more difficulty than is encountered in Fourier transform MRI: fast algorithms for the wavelet transform require a number of floating point operations that scales only linearly with the number of samples.
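As a rough illustration of this linear scaling (a sketch under our own conventions, not the implementation used in this work), the fragment below performs a multi-level Haar transform while counting floating point operations; for N samples the count comes to 4N - 4, i.e. a small constant number of operations per pixel, independent of image size:

```python
import numpy as np

def fwt_haar(x):
    """Multi-level Haar wavelet transform with a running count of
    floating point operations (adds, subtracts, and scalings)."""
    c = np.asarray(x, dtype=float)
    details, flops = [], 0
    while len(c) > 1:
        d = (c[0::2] - c[1::2]) / np.sqrt(2.0)   # detail coefficients
        c = (c[0::2] + c[1::2]) / np.sqrt(2.0)   # coarse coefficients
        flops += 4 * len(d)   # 1 add/subtract + 1 scaling per output value
        details.append(d)
    return c, details, flops

# Operations per sample stay bounded (just under 4) as N grows.
for n in (64, 256, 1024):
    _, _, flops = fwt_haar(np.zeros(n))
    print(n, flops, flops / n)
```

Because the transform is orthonormal, it also preserves signal energy, which provides a simple correctness check on such an implementation.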
In conclusion, we have investigated a new wavelet-based approach for dynamic imaging. Our approach is radically different from standard encoding methods because the data acquisition strategy is based on predictions made using previously acquired data. The purpose of this adaptive approach is to reduce redundancy in dynamic imaging applications. In spite of recent advances such as the development of various fast and ultrafast data acquisition methods, imaging time will remain a major concern, especially as new dynamic applications for MRI are explored. As the maximum MR data acquisition rate of the fast imaging techniques is approached, methods such as ours that aim to minimize redundancy in data acquisition are likely to grow in importance.
LIST OF SYMBOLS