I've done it!
I'm now convinced that I have an algorithm that can evaluate oscillatory integrals in n dimensions
$$\int_{-\infty}^\infty\,g(\mathbf{x})\cos(f(\mathbf{x}))\,d\mathbf{x}$$
as fast as it can calculate the equivalent non-oscillatory integrals
$$\int_{-\infty}^\infty\,g(\mathbf{x})e^{-f(\mathbf{x})}\,d\mathbf{x}$$
Replacing cos by sin in order to compute the imaginary part doesn't slow the algorithm down.
A slight problem so far is that it doesn't evaluate the non-oscillatory integral particularly quickly, because, unlike for example the VEGAS algorithm, it doesn't yet have adaptive gridding.
I've tested this using my previously mentioned 7-D path integral:
$$I=\int_{-\infty}^\infty e^{i((x_1-1)^2+(x_2-x_1)^2+(x_3-x_2)^2+(x_4-x_3)^2+(x_5-x_4)^2+(x_6-x_5)^2+(x_7-x_6)^2+(1-x_7)^2+\sum_{n=1}^7 p(x,n))}\,dx_1\,dx_2\cdots dx_7$$
I began by subtracting off the value of ##\mathbf{x}## at which ##f## attains its minimum (i.e. shifting the origin to the minimiser), but this is probably an unnecessary operation.
Define, on the surface of a unit sphere in n dimensions, the endpoints of ##m\gg n## unit vectors, using an algorithm that spaces these endpoints ##\mathbf{x}_1## to ##\mathbf{x}_m## as far apart from each other as possible. To do this I'm using additive quasirandom numbers. Define the starting vector from the square roots of the primes,
$$\mathbf{v}_1=(\sqrt{2},\sqrt{3},\sqrt{5},\sqrt{7},\sqrt{11},\sqrt{13},\sqrt{17},...)$$
Map it back onto the n-D hypercube using ##\mathbf{v}_1:=\mod(\mathbf{v}_1,1)##
and iterate
$$\mathbf{v}_j=\mod(\mathbf{v}_{j-1}+\mathbf{v}_1,1)$$
Map onto the n-D cube ##[-1,1]^n## using ##\mathbf{x}=2\mathbf{v}-1## and reject those values of ##\mathbf{x}## outside the unit n-sphere, keeping ##|\mathbf{x}|\le 1##. For numerical reasons I also reject values of ##\mathbf{x}## whose norm is too close to zero, keeping ##|\mathbf{x}|\ge 0.001##. Then I map the result onto the surface of the unit n-sphere using ##\mathbf{x}:=\mathbf{x}/|\mathbf{x}|##. This gives a set of ##m## uniformly distributed vectors on the surface of an n-sphere.
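For anyone who wants to try this, here's a minimal Python sketch of the direction-generation step (the function name and the choice of ##m## are mine; the prime-square-root start vector and both rejection rules are exactly as described above):

```python
import numpy as np

def sphere_points(n, m):
    """Generate m quasi-uniform unit vectors on the (n-1)-sphere using an
    additive quasirandom sequence started from square roots of primes."""
    # First n primes -> starting vector (sqrt 2, sqrt 3, sqrt 5, ...)
    primes = []
    k = 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    v1 = np.sqrt(np.array(primes, dtype=float)) % 1.0  # map into the hypercube
    v = v1.copy()
    points = []
    while len(points) < m:
        v = (v + v1) % 1.0              # additive quasirandom iterate
        x = 2.0 * v - 1.0               # map onto [-1, 1]^n
        r = np.linalg.norm(x)
        if 0.001 <= r <= 1.0:           # reject outside ball / too near origin
            points.append(x / r)        # project onto the unit sphere
    return np.array(points)
```

The rejection step keeps only about 4% of candidates in 7-D (the volume ratio of ball to cube shrinks quickly with dimension), but the iterates are cheap so this is not a bottleneck.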
Quasirandom numbers have two slight advantages over polyhedron methods. One is that with a polyhedron there's a risk of a false result due to alignment of the symmetries of the problem with one of the symmetries of the polyhedron. The second is that quasirandom numbers work almost equally well with any arbitrary number of vectors whereas with polyhedron methods it's much more difficult to achieve that.
I don't use a weighted sum, just a normal Riemann sum over these vectors. If adaptive gridding is used then a weighted sum would be needed. In mathematics this is a sum of 1-D integrals. Let's write this using the symbol ##y## for radial distance along the direction defined by ##\mathbf{x}_j##, so ##g(\mathbf{x})=g(y\mathbf{x}_j)## and ##f(\mathbf{x})=f(y\mathbf{x}_j)##. ##S_n## is the surface area of a unit n-ball. For example, ##S_7=16\pi^3/15##. For arbitrary ##n##,
$$S_n=2\pi^{n/2}/\Gamma(n/2)$$
The desired answer is given by
$$\frac{S_n}{m}\sum_{j=1}^m{\int_0^\infty{y^{n-1}g(y)\cos(f(y))\,dy}}$$
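As a quick sanity check on this decomposition (my check, not part of the algorithm): for a spherically symmetric non-oscillatory case such as ##g=e^{-y^2}##, ##f=0##, every direction contributes the same radial integral, so the sum over ##m## directions collapses to ##S_n## times one radial integral and should reproduce ##\int_{\mathbb{R}^7}e^{-|\mathbf{x}|^2}\,d\mathbf{x}=\pi^{7/2}##:

```python
import math
import numpy as np

n = 7
S_n = 2 * math.pi ** (n / 2) / math.gamma(n / 2)  # surface area of unit n-ball

# Radial integral  int_0^inf y^(n-1) e^{-y^2} dy, truncated at y = 10
# and evaluated with 64-point Gauss-Legendre quadrature.
nodes, weights = np.polynomial.legendre.leggauss(64)
a, b = 0.0, 10.0
y = 0.5 * (b - a) * nodes + 0.5 * (b + a)
radial = 0.5 * (b - a) * np.sum(weights * y ** (n - 1) * np.exp(-y ** 2))

# Spherical symmetry: (S_n / m) * m * radial = S_n * radial.
result = S_n * radial
exact = math.pi ** (n / 2)   # Gaussian integral over R^7
```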
I split this into three parts: a near-zero part ##f(y)<\pi## that is evaluated directly, a constant part at ##f=\pi## that comes from integrating the ##f\ge\pi## portion by parts, and an integral over ##f>\pi## that comes from that same integration by parts. For my 7-D path integral, integrating by parts four times suppresses the oscillations beautifully.
Then, along each direction (reusing ##I## for the 1-D radial integral),
$$I=I_1+I_2+I_3$$
$$I=\int_0^\infty{y^{n-1}g(y)\cos(f(y))\,dy}$$
$$I_1=\int_0^{f^{-1}(\pi)}y^{n-1}g(y)\cos(f(y))\,dy$$
Values of ##f^{-1}()## are found using Newton's method.
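A sketch of that inversion step, assuming the radial profile is monotone increasing for ##y>0## (the example profile ##f(y)=y^2+y^4## and all names here are mine, for illustration only):

```python
import math

def f_inverse(f_target, f, df_dy, y0=1.0, tol=1e-12, max_iter=50):
    """Solve f(y) = f_target for y > 0 by Newton's method, given the radial
    profile f along one direction and its analytic derivative df_dy."""
    y = y0
    for _ in range(max_iter):
        step = (f(y) - f_target) / df_dy(y)
        y -= step
        if abs(step) < tol:
            break
    return y

# Hypothetical radial profile along some direction vector:
f = lambda y: y ** 2 + y ** 4
df = lambda y: 2 * y + 4 * y ** 3
y_pi = f_inverse(math.pi, f, df)   # the upper limit f^{-1}(pi) for I_1
```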
$$I_2=-y^{(0)}\sin(\pi)-y^{(1)}\cos(\pi)+y^{(2)}\sin(\pi)+y^{(3)}\cos(\pi)=y^{(1)}-y^{(3)}$$
This doesn't work for every possible choice of ##g()##, but does work for most choices of ##g()## that are non-oscillatory. The third component of the integral is
$$I_3=\int_\pi^\infty y^{(4)}\cos(f)df$$
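For the record, the integration by parts that generates ##I_2## and ##I_3## can be written out explicitly. Using ##f## as the integration variable, noting ##dy^{(k)}/df=y^\prime\,dy^{(k)}/dy=y^{(k+1)}## (with ##y^{(k)}## as defined below), and assuming the ##y^{(k)}## vanish as ##f\to\infty##, one step gives
$$\int_\pi^\infty y^{(0)}\cos(f)\,df=\Big[y^{(0)}\sin(f)\Big]_\pi^\infty-\int_\pi^\infty y^{(1)}\sin(f)\,df$$
and after four steps
$$\int_\pi^\infty y^{(0)}\cos(f)\,df=-y^{(0)}\sin(\pi)-y^{(1)}\cos(\pi)+y^{(2)}\sin(\pi)+y^{(3)}\cos(\pi)+\int_\pi^\infty y^{(4)}\cos(f)\,df$$
whose boundary terms are exactly ##I_2## and whose remaining integral is ##I_3##.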
For best numerics, differentiate ##f()## analytically and define ##y^\prime=1/(df/dy)##. Then iteratively define
$$y^{(0)}=y^{n-1}g(y)y^\prime$$
$$y^{(k+1)}=y^\prime\frac{d}{dy}y^{(k)}$$
##y^{(0)}## is evaluated analytically and ##y^{(k)}## for ##k\ge 1## can be found analytically or numerically. I find ##y^{(k)}## for ##k\ge 1## numerically by first expanding the iterative formula using divided differences, which gives ##y^{(k)}## directly from ##y^{(0)}##. For example
$$y^{(1)}=\frac{y^\prime_0}{\Delta y}(y^{(0)}_{0.5}-y^{(0)}_{-0.5})$$
$$y^{(3)}=\frac{y^\prime_0}{(\Delta y)^3}(y^{(0)}_{1.5}y^\prime_1 y^\prime_{0.5}-y^{(0)}_{0.5}(y^\prime_1 y^\prime_{0.5}+y^\prime_{0.5} y^\prime_{0}+y^\prime_0 y^\prime_{-0.5})+y^{(0)}_{-0.5}(y^\prime_{0.5} y^\prime_{0}+y^\prime_0 y^\prime_{-0.5}+y^\prime_{-0.5}y^\prime_{-1})-y^{(0)}_{-1.5}y^\prime_{-1}y^\prime_{-0.5})$$
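Here's a small numerical check of the divided-difference ##y^{(1)}## formula against the analytic derivative, for a made-up radial profile (##f=y^2## so ##y^\prime=1/(2y)##, ##g=e^{-y}##, ##n=7## are my choices for illustration, not from the algorithm itself):

```python
import math

n = 7
yprime = lambda y: 1.0 / (2.0 * y)                          # y' = 1/(df/dy)
y0fun = lambda y: y ** (n - 1) * math.exp(-y) * yprime(y)   # y^(0)

def y1_divided(y, dy):
    # y^(1) ~ (y'_0 / dy) * (y^(0)_{0.5} - y^(0)_{-0.5}), the formula above
    return yprime(y) / dy * (y0fun(y + 0.5 * dy) - y0fun(y - 0.5 * dy))

def y1_analytic(y):
    # y^(1) = y' d/dy [y^6 e^{-y} / (2y)] = y' d/dy [y^5 e^{-y} / 2]
    return yprime(y) * 0.5 * (5.0 * y ** 4 - y ** 5) * math.exp(-y)
```

With ##\Delta y=\pi/200## the two agree to far better than the ##10^{-5}## accuracy quoted later for the full integral.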
The formula for ##y^{(4)}## is twice as long as the formula for ##y^{(3)}##; reply here if you want the full version.
The choice of ##\Delta y## is a balance: too large and errors appear due to function nonlinearity, too small and roundoff errors are amplified. I use ##\Delta y=\pi/200## in the 7-D integral.
The subscript notation is shorthand for the following. Given a real number ##f##, which comes from the integration point in ##I_3##, define ##y_0=f^{-1}(f)##. Then for any subscript ##a##, ##y_a=y_0+a\Delta y##, and ##y^\prime_a## is ##1/(df/dy)## with ##df/dy## evaluated analytically at ##y=y_a##; likewise ##y^{(0)}_a=y_a^{n-1}g(y_a)y^\prime_a##. I haven't yet said how to evaluate ##df/dy## analytically: from the definition of ##f(\mathbf{x})##, form the gradient vector ##df/d\mathbf{x}##, substitute ##\mathbf{x}=y\mathbf{x}_j##, and take the dot product with ##\mathbf{x}_j##. If not handled carefully, calculating ##f^{-1}()## could also be time consuming, but the algorithm can be, and is, set up so that this is called only once per vector in evaluating ##I_1## and once per integration point per vector in evaluating ##I_3##.
To evaluate ##I_3##, replace ##\infty## with some multiple of ##\pi##. ##3\pi## gives errors of order ##10^{-5}## on my 7-D path integral. I suggest ##4\pi##, so use three intervals.
$$I_3=\sum_{j=1}^3\int_{j\pi}^{(j+1)\pi} y^{(4)}\cos(f)df$$
Across each interval for ##I_1## and ##I_3## I use a single application of the 10-point Gauss-Legendre quadrature (see Press et al., "Numerical Recipes", for details), which has a maximum error of about ##10^{-9}##. So each evaluation of ##I_3## requires ##10\times 3=30## integration points. Overall, I get
$$\int_{-\infty}^\infty\cos\left((x_1-1)^2+(1-x_7)^2+\sum_{k=1}^6(x_{k+1}-x_k)^2+\sum_{k=1}^7(x_k^2+x_k^4)\right)dx_1dx_2dx_3dx_4dx_5dx_6dx_7=-0.627$$
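The per-interval quadrature for ##I_3## can be sketched like this (the fourth iterated derivative ##y^{(4)}## is supplied by the caller as a function of ##f##; the exponential test integrand in the comment is just a stand-in with a known antiderivative):

```python
import numpy as np

# 10-point Gauss-Legendre nodes and weights on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(10)

def I3(y4, n_intervals=3):
    """Approximate I_3 = sum_j int_{j pi}^{(j+1) pi} y^(4)(f) cos(f) df
    using one 10-point Gauss-Legendre rule per interval of width pi."""
    total = 0.0
    for j in range(1, n_intervals + 1):
        a, b = j * np.pi, (j + 1) * np.pi
        f = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map nodes to [a, b]
        total += 0.5 * (b - a) * np.sum(weights * y4(f) * np.cos(f))
    return total

# Example: y4 = e^{-f} has the exact integral [e^{-f}(sin f - cos f)/2]
# over [pi, 4 pi], which the rule reproduces to high accuracy.
```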
For the imaginary part, the three components of the integral are
$$I=I_1+I_2+I_3$$
$$I_1=\int_0^{f^{-1}(\pi/2)}y^{n-1}g(y)\sin(f(y))\,dy$$
$$I_2=y^{(1)}-y^{(3)}$$
$$I_3=\sum_{j=1}^3\int_{(j-1/2)\pi}^{(j+1/2)\pi} y^{(4)}\sin(f)df$$
The integration limits are chosen to help suppress oscillations because, for all ##j##,
$$ \int_{j\pi}^{(j+1)\pi} \cos(f)df=\int_{(j-1/2)\pi}^{(j+1/2)\pi} \sin(f)df=0$$
The amount of oscillation suppression obtained by choosing these limits is roughly equivalent to one step of integration by parts.
The most computationally expensive part of the algorithm is ##I_3##, and this accounts for only 4% of the total value in my test case, so it can be evaluated using fewer vectors than are used for ##I_1## and ##I_2##. One tenth of the number of vectors works well.
---------------
An outline for a tentative possible method for grid adaption is as follows. (Here I use the words "point" and "vector" interchangeably because a point on the surface of an n-sphere defines a normalised vector from the origin to that point.) From the cumulative distribution of evaluated integrals ##I_1+I_2## (not ##I_3##) along each vector, attract points to where the slope of the cumulative distribution is largest or repel points from where the slope is smallest. The process of attraction or repulsion is as follows. Given an attractor as a normalised vector ##\mathbf{a}##, and using ##\mathbf{x}=2\mathbf{v}-1## as above for random points inside the n-sphere (no need to normalise yet, but it also works for other point generation methods where the vectors are normalised), map ##\mathbf{x}## using ##\mathbf{x}:=\mathbf{x}+\alpha(\mathbf{x}\cdot\mathbf{a})\mathbf{a}## followed by normalisation. When ##\alpha>0## this is an attraction towards both ##\mathbf{a}## and its antipodal point ##-\mathbf{a}## of maximum strength ##(1+\alpha)^{n-1}## in n-D. When ##-1<\alpha<0## it is a repulsion away from ##\mathbf{a}## and its antipodal point of maximum strength ##(1+\alpha)^{1-n}##.
This is a very long way from being a working algorithm. Issues still to be considered include how to avoid distorting the cumulative distribution once mapping is enacted, how many attraction/repulsion points should be used and with what values of ##\alpha##, and how to keep track of the strength of attraction/repulsion for every vector (for ##I_3## as well) along which the integral is calculated. Once that's done, the weighting for the Riemann sum is trivial to calculate, it's identical to the net strength of repulsion.
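A sketch of just the mapping step (names are mine; none of the open bookkeeping issues above are addressed):

```python
import numpy as np

def attract(x, a, alpha):
    """Pull points toward (alpha > 0) or push away from (-1 < alpha < 0)
    the attractor direction a and its antipode, then renormalise.
    x: (m, n) array of points on or in the n-sphere; a: unit n-vector."""
    a = a / np.linalg.norm(a)
    x = x + alpha * (x @ a)[:, None] * a    # x := x + alpha (x . a) a
    return x / np.linalg.norm(x, axis=1, keepdims=True)
```

With ##\alpha>0## the mapped points cluster around ##\pm\mathbf{a}##: the component along ##\mathbf{a}## is scaled by ##1+\alpha## before renormalisation, so ##|\mathbf{x}\cdot\mathbf{a}|## increases for every point not on the equator.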