B Why Does My Vacuum Leak Rate Change at 25 Microns of Mercury?

AI Thread Summary
The discussion focuses on a high school student's question about a changing leak rate in their vacuum system at 25 microns of mercury, observed during a leak test of an inertial electrostatic confinement fusion reactor. The student notes that below 25 microns the leak rate follows a quadratic curve, while above this pressure it becomes constant, suggesting a possible transition between free molecular flow and viscous flow. Participants emphasize the importance of knowing the gas composition and the implications of the leak rate for the project's viability, particularly contamination of the deuterium. The student clarifies that a vacuum of 10⁻⁷ Torr is the target for the experiment, even though the recorded leak rate is higher than ideal. The discussion concludes with considerations about the pressure gauge's accuracy and its potential impact on the observed leak rate behavior.
kubaanglin
Hello Physics Forums,

After about one year of research and construction, I have nearly finished building a functioning inertial electrostatic confinement fusion reactor. Just to be clear, I do not wish to discuss the dangerous activities that are involved with my project as I know such topics are prohibited on this forum. My question is purely related to a recent leak test I recorded.

For reference, here is my reactor generating a 7 kV plasma at around 60 microns of mercury:

[Image: 20160406_221656.jpg]

[Image: 20160406_222241.jpg]

Here is the plasma at a lower pressure of 10 microns of mercury:
[Image: photo_2016_03_31_13_49.jpg]


For the leak test, I first evacuated the entire vacuum system, using only the mechanical pump, to a reading of 0 microns of mercury. The pressure was read by an electronic gauge that is accurate to within one micron. Then I closed the gate valve and started my stopwatch. As air began leaking back into the main chamber, I recorded the pressure every 30 seconds. Here is the data I recorded:

[Image: 2nd_vacuum_leak_test.jpg]


I have repeated this test multiple times and can verify that the results are consistent. When the pressure is below 25 microns, the leak rate follows a quadratic curve; when the pressure in the chamber is above 25 microns, the leak rate suddenly slows and becomes constant. Could 25 microns be the transition point between free molecular flow and viscous flow within my reactor? I asked my AP Physics teacher, but he was not certain this was the cause. I don't think outgassing is responsible, though I suppose it is a possibility.
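
As a rough plausibility check, the mean free path of air at 25 microns can be compared with the chamber size. Below is a minimal sketch of that estimate; the room-temperature air, the molecular diameter, and the 15 cm characteristic chamber dimension are assumptions supplied for illustration, not values from this thread.

```python
import math

# Rough Knudsen-number estimate at 25 microns of mercury (25 mTorr).
# Assumed values: air at ~293 K, effective molecular diameter 3.7e-10 m,
# characteristic chamber dimension 0.15 m.
k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 293.0                    # temperature, K (assumed)
d = 3.7e-10                  # effective diameter of an air molecule, m (typical value)
P = 25e-3 * 133.322          # 25 mTorr converted to Pa
L = 0.15                     # characteristic chamber dimension, m (assumed)

# Mean free path and Knudsen number
mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * P)
Kn = mfp / L

print(f"mean free path ~ {mfp * 1e3:.1f} mm, Kn ~ {Kn:.3f}")
# Commonly Kn > ~0.5 is treated as molecular flow and Kn < ~0.01 as viscous
# (continuum) flow, with transitional flow in between.
```

With those assumptions the mean free path comes out near 2 mm and the Knudsen number near 0.01, i.e. around the low end of the transitional regime, so a flow-regime change somewhere in this pressure range is not implausible.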

I would greatly appreciate any ideas that might explain this data.

I wasn't sure which "thread level" to tag this thread, so I put "high school" as I am a high school student. If this is wrong, please let me know.

Thanks,
Kuba
 
So your leak test involves watching the pressure increase over time after you stop evacuating? How do you intend to find the leak?
 
All vacuum systems have some leak rate, as there is no such thing as a perfectly sealed vacuum chamber (due to real and virtual leaks). The leak rate I recorded from my vacuum chamber is extremely low and more than suitable for my project. I am asking why the leak rate's relationship to time suddenly changes at 25 microns. I am not looking to find the leak; that would be not only impossible but also pointless given the tolerances required for my project to work.
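
For reference, a rate-of-rise test like this yields a total gas load Q = V·dP/dt rather than a located leak. A minimal sketch of that calculation, assuming a 10 L chamber volume (not stated in the thread) and using the ~3.745×10⁻⁵ Torr/s rise rate quoted later in this thread:

```python
# Rate-of-rise estimate of the total gas load: Q = V * dP/dt.
# The rise rate is the value quoted later in this thread; the 10 L
# chamber volume is an assumption for illustration only.
V = 10.0            # chamber volume, liters (assumed)
dP_dt = 3.745e-5    # pressure rise rate in the linear region, Torr/s (from the thread)

Q = V * dP_dt       # gas load, Torr·L/s
print(f"Q ~ {Q:.2e} Torr·L/s")
# Note: a rate-of-rise test lumps real leaks, virtual leaks, and outgassing together.
```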
 
Is there any way to determine the composition of the gas you are getting?

That would be an extremely valuable clue as to where the gas is coming from, which in turn would help explain why you are getting the curve that you are.
 
The gas flowing into the chamber is just the air in my room, so mostly nitrogen and oxygen. I did not introduce deuterium into the chamber prior to the test.
 
Are you using a two stage mechanical pump?
 
Then why, if the mechanical pump can provide a vacuum to your specs, do you need the diffusion pump that I see in your pic?
 
The mechanical pump can pump down to about 10⁻⁴ Torr. For fusion to occur, the contents of the chamber must be only deuterium. Therefore, I must use a diffusion pump to get the chamber down to about 10⁻⁷ Torr before refilling it with deuterium.
 
  • #10
kubaanglin said:
I am not looking to find the leak, as that would not only be impossible but also pointless considering the tolerances required for my project to work.

But what is your tolerance? You need 10⁻⁷ Torr, but your leak rate is 3.745×10⁻⁵ Torr/sec, more than two orders of magnitude greater per second than the lowest pressure at which you introduce the deuterium. How can you justify that leak rate?

kubaanglin said:
I am not looking to find the leak, as that would not only be impossible but also pointless considering the tolerances required for my project to work.
So, considering the rate of contamination of the deuterium, and given that you cannot or will not control your leak, how do you justify the validity of your experiment?

Forget about the "anomalous" behavior of the leak rate; concentrate on ensuring your deuterium is not unduly contaminated.
 
  • #11
Getting the chamber down to 10⁻⁷ Torr is a bit of overkill; it's just that my diffusion pump happens to be capable of that. The mechanical pump can only get the pressure to just above the level it needs to be at, so a diffusion pump is required. The deuterium will be leaked into the chamber while the gate valve is barely open, in order to sustain a dynamic equilibrium. Refilling the chamber to about 10 microns happens quickly, so the leak rate is not significant during that step. Once the dynamic equilibrium is stable, the leak rate will not over-contaminate the deuterium. The deuterium can tolerate some contaminants, just very few.
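
One way to make the dynamic-equilibrium argument concrete is to compare the deuterium throughput needed to hold the operating pressure against the leak load; the steady-state air fraction is roughly Q_leak / (Q_leak + Q_D2). A sketch with illustrative numbers (the pumping speed and the leak load below are assumptions, not measurements from this thread):

```python
# Steady-state contamination estimate for flowing-gas operation.
# All numbers are assumptions for illustration, not values from the thread.
P_run = 10e-3     # operating pressure, Torr (~10 microns, as mentioned above)
S_eff = 1.0       # effective pumping speed at the throttled valve, L/s (assumed)
Q_leak = 3.7e-4   # air leak/outgassing load, Torr·L/s (illustrative)

Q_D2 = P_run * S_eff                     # deuterium throughput holding the operating pressure
air_fraction = Q_leak / (Q_leak + Q_D2)  # approximate steady-state air fraction
print(f"approximate steady-state air fraction: {air_fraction:.1%}")
```

With these particular numbers the air fraction comes out around a few percent; whether that is acceptable depends on how much contamination the experiment can actually tolerate.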
 
  • #12
By what principle does your pressure gauge measure pressure? I will be signing off until tomorrow.
 
  • #13
It uses a self-heated thermistor bridge with integral temperature compensation from 0 to 50 °C.
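
Gauges of this type infer pressure from heat loss through the gas, and that heat loss stops scaling linearly with pressure once the mean free path becomes short compared with the gauge geometry. The toy model below illustrates how such a roll-off could make a steady pressure rise look like a slowing leak rate; both the P/(P + P0) form and the 50-micron characteristic pressure are assumptions for illustration, not the calibration of this particular gauge.

```python
# Toy model of a thermal-conductivity (thermistor/Pirani-type) gauge.
# Heat loss through the gas is modeled as proportional to P / (P + P0),
# where P0 is a characteristic pressure set by the gauge geometry.
P0 = 50.0  # characteristic pressure, microns (assumed)

for P in range(0, 101, 25):               # true pressure, microns
    signal = P / (P + P0)                 # normalized gauge signal
    sensitivity = P0 / (P + P0) ** 2      # d(signal)/dP
    print(f"P = {P:3d} um  signal = {signal:.2f}  relative sensitivity = {sensitivity * P0:.2f}")
# The sensitivity falls as pressure rises, so a constant true rate of rise
# would appear to slow down on a gauge whose compensation is imperfect.
```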
 
  • #14
Maybe it's an issue with the pressure meter. What kind is it?
 