X-ray entrance surface dose and effective dose

In summary, the conversation discusses the concepts of entrance surface dose and effective dose in X-ray physics. It clarifies that effective dose is used to estimate the consequences of an exposure, while entrance surface dose refers to the absorbed dose in the first few millimeters of tissue. The claim that "entrance dose varies proportionally to kV squared" needs more context to evaluate, while the claim that "as kV increases, effective dose decreases" is technically incorrect under the ICRP definition. The rule-of-thumb equation for surface dose and the relationship between kV and receptor dose are also discussed. The conversation ends with a question about whether receptor dose and contrast are the same; they are not.
  • #1
BobP
I have recently had a series of lectures on X-ray physics. I have been quite confused by the concept of effective dose and entrance surface dose.

I have been told that entrance surface dose varies proportionally to kV squared. I have also been told that as kV increases, effective dose decreases.

My confusion arises because if surface dose increases, it would seem that effective dose decreases. This is very counter intuitive so I was wondering if someone could let me know if I have misunderstood something?
Thanks
 
  • #2
Choppy
First let's clarify terms to make sure that we're on the same page. Entrance surface dose doesn't always have a concrete definition, but typically it means the physically absorbed dose to the first ~ 1-2 mm of the patient or a water phantom for a single x-ray field. (A term like "skin dose" has a more precise definition as the dose to the first 70 microns down from the surface.)

Effective dose is a term commonly used in radiation protection. It translates the physical absorbed dose (in Gy) delivered by radiation of a given quality or type to a given tissue or set of tissues into a relative "whole-body dose," for purposes of estimating the consequences of an exposure, such as the increase in cancer risk. It's calculated by multiplying the absorbed dose D by a radiation weighting factor (w_R), which accounts for the relative biological effectiveness of a given radiation, and then by a tissue weighting factor (w_T), which accounts for the relative sensitivity of the irradiated tissue. For x-rays, w_R = 1 regardless of energy.

To say that "entrance dose varies proportionally to kV squared" is a little confusing to me, because one needs to define this relative to something. X-ray output is usually characterized by a dose rate at the surface, and the surface dose is then just a function of how long you leave the beam on. Perhaps the statement assumes generating an image with an equivalent contrast-to-noise ratio? Higher kV would lead to lower contrast, so more surface dose would be required to reduce the noise. It doesn't strike me as an unreasonable statement, just that a little more information is needed.

The second statement, "as kV increases, effective dose decreases," is technically incorrect. As pointed out above, w_R = 1 for x-rays regardless of energy, so the effective dose would be calculated as the same regardless of the beam spectrum. What the statement may refer to is something called "biologically effective dose" or BED, which is a little more subtle. There is some evidence that as the mean energy of the secondary electrons produced by the x-rays decreases, quantities such as lineal energy (the energy imparted to a microscopic volume divided by the mean chord length through that volume) increase, leading to increased levels of DNA damage per given macroscopic absorbed dose. This is generally a small effect (~10% or less, depending on the specific endpoints and kVs used). In radiation protection terms, it's considered a wash.
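As a side note, the bookkeeping above can be sketched in a few lines of Python. The tissue weighting factors below are a subset of the ICRP 103 values; the organ doses are made-up illustrative numbers, not from any real exam:

```python
# Effective dose: E = sum over tissues T of w_T * (w_R * D_T).
# For x-rays w_R = 1, so the beam spectrum drops out of the calculation.

W_R_XRAY = 1.0  # radiation weighting factor for photons, any energy

# Tissue weighting factors (subset of the ICRP Publication 103 set)
w_T = {"lung": 0.12, "stomach": 0.12, "breast": 0.12, "thyroid": 0.04, "skin": 0.01}

# Hypothetical mean absorbed doses in Gy (illustrative numbers only)
organ_dose_gy = {"lung": 1.0e-4, "stomach": 2.0e-5, "breast": 8.0e-5,
                 "thyroid": 1.0e-5, "skin": 3.0e-4}

# Equivalent dose H_T = w_R * D_T; effective dose E = sum of w_T * H_T
effective_dose_sv = sum(w_T[t] * W_R_XRAY * d for t, d in organ_dose_gy.items())
print(f"Effective dose: {effective_dose_sv * 1e3:.4f} mSv")  # -> 0.0274 mSv
```

Since w_R = 1 for photons of any energy, changing the kV changes the organ doses D_T, but not the weighting, which is the point being made above.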
 
  • #3
Thanks for your reply. I have two additional questions, if that is ok?

Choppy said:
To say that "entrance dose varies proportionally to kV squared" is a little confusing to me, because one needs to define this relative to something. X-ray output is usually characterized by a dose rate at the surface, and the surface dose is then just a function of how long you leave the beam on. Perhaps the statement assumes generating an image with an equivalent contrast-to-noise ratio? Higher kV would lead to lower contrast, so more surface dose would be required to reduce the noise. It doesn't strike me as an unreasonable statement, just that a little more information is needed.
So I have learned that if kV increases, more X-rays will penetrate through the body, and thus, to achieve the same receptor dose, the entrance surface dose can be decreased. Would I be correct in thinking, therefore, that receptor dose and contrast are not the same?

Also, the full equation I have been told is: surface dose = (kV² × tube current × exposure time) / (distance from the receptor)²
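That rule of thumb can be checked numerically with a short sketch. The constant k below is purely illustrative (a real value would bundle tube output, filtration, and anode properties), and the function name is my own, not from any library:

```python
# Rule-of-thumb entrance surface dose: ESD ~ k * kV^2 * mAs / d^2,
# where mAs = tube current (mA) * exposure time (s).
# k is an illustrative constant, not a calibrated tube output.

def entrance_surface_dose(kv, ma, time_s, distance, k=1.0e-6):
    """Rule-of-thumb ESD in arbitrary units."""
    return k * kv**2 * ma * time_s / distance**2

# Doubling kV at fixed mAs and geometry quadruples the entrance dose:
d60 = entrance_surface_dose(kv=60, ma=200, time_s=0.1, distance=1.0)
d120 = entrance_surface_dose(kv=120, ma=200, time_s=0.1, distance=1.0)
print(d120 / d60)  # -> 4.0
```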

Choppy said:
The second statement, "as kV increases, effective dose decreases" is technically incorrect. As pointed out above, w_R = 1 for x-rays, regardless of energy. Hence, "effective dose" would be calculated as the same, regardless of the beam spectrum. What the statement may refer to is something called "biologically effective dose" or BED, which is a little more subtle. There is some evidence that as the mean energy of the secondary electrons produced by the x-rays decreases, quantities such as lineal energy (the amount of energy imparted into a microscopic volume over the mean chord length through the volume) increase, leading to increased levels of damage to DNA per given macroscopic absorbed dose. This is generally a small effect (~10% or less depending on specific end points and kVs used). In radiation protection terms, it's considered a wash.
I see. I thought that higher-kV X-rays were more penetrating and so less likely to be absorbed, hence the effective dose decreases. I didn't think it had anything to do with secondary electrons. Is this definitely incorrect? My lecturer made this point a few times.

Thanks
 
  • #4
BobP said:
So I have learned that if kV increases, more X-rays will penetrate through the body, and thus, to achieve the same receptor dose, the entrance surface dose can be decreased. Would I be correct in thinking, therefore, that receptor dose and contrast are not the same?
Okay, so you're looking at surface dose in relation to a detector or receptor that's beyond the body. In that case, yes, it makes sense that the required surface dose is going to decrease with kV: higher-energy x-rays are attenuated less, and therefore less surface dose is needed for a given dose level beyond the patient. Receptor dose and contrast are not the same. Contrast is defined as a difference in signal intensity due to the presence of an object with different radiological properties than the medium around it.

Also, the full equation I have been told is: surface dose = (kV² × tube current × exposure time) / (distance from the receptor)²
Seems reasonable to me. "kV" can be a little tricky, though, so that part is likely an approximation or a rule of thumb. Remember that "kV" really denotes a kilovoltage spectrum defined by a peak voltage (kVp) and a number of other factors, such as filtration and anode properties. Often a spectrum is characterized by its half-value layer. What that means is that two beams might both be called 60 kVp spectra, yet have somewhat different penetrating properties if one of them has a 1 mm Al filter and the other doesn't.
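The half-value layer (HVL) idea can be sketched as simple exponential attenuation, I = I0 * 2^(-x/HVL). This treats the beam as if it had a single fixed HVL, which is a simplification (real spectra harden as they are filtered, so the HVL itself changes), and the 2 mm Al figure is just an assumed example:

```python
# Fraction of beam intensity remaining after a filter of given thickness,
# for a beam characterized by a single half-value layer (a simplification:
# real spectra harden as they are filtered, so the HVL itself shifts).

def transmitted_fraction(thickness_mm, hvl_mm):
    return 2.0 ** (-thickness_mm / hvl_mm)

hvl_al_mm = 2.0  # assumed HVL in mm of Al for some 60 kVp spectrum

print(transmitted_fraction(2.0, hvl_al_mm))  # one HVL  -> 0.5
print(transmitted_fraction(4.0, hvl_al_mm))  # two HVLs -> 0.25
```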

I see. I thought that higher kV X-rays were more penetrating and so less likely to be absorbed, hence effective dose decreases. I didn't think it had anything to do with secondary electrons?
The first part is correct. The second part ("dose decreases"; I'll return to the "effective" part in a moment) has to be specified in relation to something.

Is this definitely incorrect? My lecturer made this point a few times.
You should ask your instructor about the ICRP (see ICRP Publication 103 [2007]) definition of "effective dose." My guess is that he or she is using the term in a non-radiation protection context, but anyone who has worked with x-rays should be familiar with this concept.
 
  • #5

Thanks :)
 

1. What is X-ray entrance surface dose (ESD)?

X-ray entrance surface dose (ESD) is the amount of radiation that is absorbed by the patient's skin when exposed to an X-ray beam. It is typically measured in units of milligray (mGy).

2. How is X-ray ESD calculated?

X-ray ESD is typically estimated by multiplying the X-ray tube output in mGy/mAs (milligray per milliampere-second) by the tube current-time product in mAs, applying an inverse-square correction for the distance from the focal spot to the skin, and multiplying by a backscatter factor to account for radiation scattered back from the patient. This calculation takes into account the intensity and duration of the X-ray beam as well as the exposure geometry.

3. What is the significance of X-ray ESD?

X-ray ESD is a measure of the amount of radiation that is absorbed by the patient's skin during an X-ray procedure. It is important because it can help determine the potential risk of radiation-induced skin damage and can be used to optimize patient safety and reduce unnecessary exposure.

4. What is effective dose in relation to X-ray radiation?

Effective dose is a measure of the overall risk of harm from radiation exposure to the whole body. It takes into account the type of radiation, the area of the body exposed, and the sensitivity of different organs and tissues to radiation. It is typically measured in units of millisievert (mSv).

5. How is effective dose calculated for X-ray radiation?

Effective dose for X-ray radiation is calculated from the mean absorbed dose to each exposed organ or tissue (often estimated from the ESD using published conversion coefficients), multiplied by that tissue's weighting factor. These weighting factors account for the relative sensitivity of different organs and tissues to radiation. The sum of the weighted organ doses over the whole body gives the effective dose.
