Most power-generating schemes based on accelerating ions are impractical. Look at the numbers: you need to supply on the order of 50 keV to each nucleus you want to fuse (D-T), and even then there's only a tiny probability that it will actually fuse and release the 17.6 MeV of the D-T reaction. So to get net power output from acceleration, you need at least one fusion for every ~350 nuclei accelerated (17.6 MeV / 50 keV). This is nowhere near achievable - Coulomb scattering dominates, so the high-energy nuclei scatter far more often than they fuse, and dump their energy into heating up the colder ions and electrons. Essentially any scheme based on acceleration has this problem; most of the energy goes into collisional heating, and you end up many orders of magnitude short of net power output. (See the rough arithmetic sketched below.)
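A minimal sketch of that breakeven arithmetic. The 50 keV and 17.6 MeV figures are from the argument above; the "achieved" fusion probability per beam ion is an assumed, illustrative order-of-magnitude value for a beam-on-cold-target setup, not measured data.

```python
# Breakeven arithmetic for accelerator-driven ("beam-target") D-T fusion.
# 50 keV and 17.6 MeV come from the post; p_actual is a rough assumption.

E_in_keV    = 50.0       # energy invested per accelerated nucleus
E_yield_keV = 17.6e3     # energy released per D-T fusion, 17.6 MeV in keV

# Minimum fusion probability per accelerated ion needed just to break even:
p_breakeven = E_in_keV / E_yield_keV
print(f"breakeven: >= 1 fusion per {1 / p_breakeven:.0f} accelerated ions")

# Assumed order-of-magnitude probability that a beam ion actually fuses
# before Coulomb collisions dump its energy into the cold target
# (hypothetical illustrative value):
p_actual = 1e-6

print(f"shortfall: ~{p_breakeven / p_actual:.0e}x below breakeven")
```

With these assumed numbers the shortfall comes out to a few thousand, i.e. several orders of magnitude, which is the point of the paragraph above.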
The essential point here is that there's a huge temperature difference: 50 keV ions in a roughly room-temperature (~25-50 meV) thermal environment. Heating losses matter far less if the whole environment is at the same ~50 keV temperature - that's the principle of "hot" fusion such as magnetic-confinement fusion: the whole plasma is heated up, and thermal losses are limited to those from collisions with the walls of the container, which the magnetic field tries to minimize (in practice other loss channels are more important in these systems). Inertial-confinement fusion does the same thing: lasers suddenly heat up a solid fuel pellet, and the inertia of the surrounding material keeps it together long enough for fusion to occur (sort of like a miniature nuclear bomb).
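To make that "huge temperature difference" concrete, here's a quick unit check, assuming only the standard conversion T = E / k_B between particle energy and temperature:

```python
# How hot is a plasma whose particles average ~50 keV, and how far is that
# from a room-temperature (~25 meV) target?

k_B_eV_per_K = 8.617e-5            # Boltzmann constant in eV/K

T_hot_K  = 50e3  / k_B_eV_per_K    # 50 keV  -> kelvin
T_cold_K = 25e-3 / k_B_eV_per_K    # 25 meV  -> kelvin (roughly room temperature)

print(f"50 keV ~ {T_hot_K:.1e} K")           # ~6e8 K
print(f"25 meV ~ {T_cold_K:.0f} K")          # ~290 K
print(f"ratio  ~ {T_hot_K / T_cold_K:.0e}")  # ~2e6
```

So a "hot" fusion plasma is about six orders of magnitude hotter than the cold target in a beam scheme, which is why the beam ions thermalize instead of fusing.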
It's a basic principle: "hot" fusion can give net power output, "cold" fusion can't.
There was a chapter in a rather old nuclear physics book I looked at once where the author discussed this at length and did some very insightful order-of-magnitude estimation, with cross sections and Maxwell distributions and such, to get upper bounds on the energy output of "cold" fusion schemes like accelerating ions at a target. Unfortunately I don't remember who wrote it - maybe Astronuc knows?