Modelling of two phase flow in packed bed using conservation equations

AI Thread Summary
The discussion focuses on deriving mass, momentum, and energy conservation equations for modeling two-phase fluid flow with phase changes in a porous medium. The goal is to track temperature changes and phase fronts over time, particularly in a cryogenic energy storage system. Preliminary modeling strategies suggest starting with simplified models to understand the system dynamics before adding complexity. Key considerations include pressure variations, residence time for phase changes, and the impact of axial dispersion on temperature uniformity within the bed. The participants aim to collaboratively brainstorm and refine these models to effectively address the complexities of the system.
  • #401
casualguitar said:
The initial condition array is y0 = [h0,Ts0,m0, h1,Ts1,m1,...hn,Tsn,mn]
The solution array however is y = [h0,Ts0,mdot0, h1,Ts1,mdot1,...hn,Tsn,mdotn]

So I am supplying mass holdup as one of the initial conditions (mdot is only supplied as a boundary condition i.e. the inlet flow), and the solution array has mdot, not mass holdup

This is the code I am currently using:

It's a lengthy script; however, the important parts are just the my_system function, where the algebraic solutions for each derivative at each position are set up, and the few lines after it, where the initial conditions are set up

I could put mass holdup in the solution array as well, and then solve for mdot in the post-processing stage. I don't know if this would affect the output though. I'll try this

What parameters are being integrated with respect to time (I.e.,your dependent variables)?
 
  • #402
Chestermiller said:
What parameters are being integrated with respect to time (I.e.,your dependent variables)?
The parameters in the dydt list (in the script) are being integrated w.r.t. time; these are fluid enthalpy, solid temperature, and m_dot (mass flow rate out), which are equations 2, 3, and 4a in the image below (your original equations).

Now that you mention it, equations 2 and 3 have time derivatives, but 4a does not. We can solve directly for dm/dt and m_dot, so would I just need to integrate equations 2 and 3 here? I'm actually not sure why I assumed I would need to integrate 3 equations
Screenshot 2022-02-25 at 17.17.42.png
 
Last edited:
  • #403
casualguitar said:
The parameters in the dydt list (in the script) are being integrated w.r.t. time; these are fluid enthalpy, solid temperature, and m_dot (mass flow rate out), which are equations 2, 3, and 4a in the image below (your original equations).

Now that you mention it, equations 2 and 3 have time derivatives, but 4a does not. We can solve directly for dm/dt and m_dot, so would I just need to integrate equations 2 and 3 here? I'm actually not sure why I assumed I would need to integrate 3 equations
View attachment 297588
Yes, just two. The other equation is spatial only.
 
  • #404
Chestermiller said:
Yes, just two. The other equation is spatial only.
Hi Chet, I have removed dm/dt from the dependent variables (this required some small code adjustments to the post-processing etc. as well). The simulation takes close to 10 minutes to run 1000 s of simulation time, so this evening I am looking into optimising this code (replacing the nested for loop with a vectorised alternative, etc.).
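For reference, here is a minimal sketch of the reduced state-vector layout, assuming scipy's solve_ivp and a made-up constant-property model; every name and number below is a placeholder rather than anything from the actual script. Each tank contributes only h and Ts to the integrated state, and mdot is recovered algebraically inside the derivative function.

Python:
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch only: 3 tanks, constant properties, a linear h-T relation standing in
# for the real property calls. All names and numbers are placeholders, not the script's.
n = 3
cp_f, cp_s = 1000.0, 800.0        # fluid (J/mol/K) and solid (J/kg/K) heat capacities
m_hold, m_solid = 0.5, 2.0        # fluid molar holdup (mol) and solid mass (kg) per tank
UA = 50.0                         # heat transfer conductance per tank, W/K
mdot_in = 0.008                   # inlet molar flow, mol/s
h_in = cp_f * 80.0                # inlet enthalpy corresponding to 80 K

def dydt(t, y):
    """State is y = [h0, Ts0, h1, Ts1, ...]; mdot is not part of the integrated
    state, it is reconstructed algebraically (trivially constant here) each call."""
    h, Ts = y[0::2], y[1::2]              # fluid enthalpy and solid temperature per tank
    T = h / cp_f                          # placeholder property relation T(h)
    mdot = np.full(n, mdot_in)            # algebraic mass balance (constant holdup here)
    h_up = np.concatenate(([h_in], h[:-1]))   # upstream enthalpy seen by each tank
    dh = (mdot * (h_up - h) + UA * (Ts - T)) / m_hold
    dTs = -UA * (Ts - T) / (m_solid * cp_s)
    out = np.empty_like(y)
    out[0::2], out[1::2] = dh, dTs
    return out

y0 = np.ravel([[cp_f * 293.0, 293.0]] * n)    # bed and fluid start at 293 K
sol = solve_ivp(dydt, (0.0, 1000.0), y0, method="BDF")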

These are the plots produced by the simulation. They are not the 'perfect' S-shaped curves I was thinking they might be. Do these curves look reasonable, or do they suggest there are further simulation errors? If the latter, would you have a suggestion for how to systematically approach debugging?

Fluid enthalpy for 3 tanks:
Screenshot 2022-02-27 at 20.19.58.png

Solid Temperature:
Screenshot 2022-02-27 at 20.20.36.png

Mass holdup for all tanks (note mole unit):
Screenshot 2022-02-27 at 20.22.08.png

Fluid temperature (note: the reason for the red plot here is that I used two different approaches to calculate this, just checking that they are equivalent):
Screenshot 2022-02-27 at 20.22.46.png

The main thing I am concerned about in these plots is that the plot shape of the first tank is not the same as that of the other tanks, suggesting that I am possibly doing something wrong in how I am setting up the initial/boundary conditions
 
Last edited:
  • #405
casualguitar said:
Hi Chet, I have removed dm/dt from the dependent variables (this required some small code adjustments to the post-processing etc. as well). The simulation takes close to 10 minutes to run 1000 s of simulation time, so this evening I am looking into optimising this code (replacing the nested for loop with a vectorised alternative, etc.).

These are the plots produced by the simulation. They are not the 'perfect' S-shaped curves I was thinking they might be. Do these curves look reasonable, or do they suggest there are further simulation errors? If the latter, would you have a suggestion for how to systematically approach debugging?

Fluid enthalpy for 3 tanks:
View attachment 297676
Solid Temperature:
View attachment 297677
Mass holdup for all tanks (note mole unit):
View attachment 297678
Fluid temperature (note: the reason for the red plot here is that I used two different approaches to calculate this, just checking that they are equivalent):
View attachment 297679
The main thing I am concerned about in these plots is that the plot shape of the first tank is not the same as that of the other tanks, suggesting that I am possibly doing something wrong in how I am setting up the initial/boundary conditions
Check the 1st tank response against the 1 tank model with 1/n the volume.
 
  • #406
Chestermiller said:
Check the 1st tank response against the 1 tank model with 1/n the volume.
Yes, the plot for the 1 tank model is in the same ballpark as the full model (semi-surprised that they line up this well). This is using an averaged value for the T-dependent parameters like heat capacity. The constant-temperature zone is more pronounced in the main model, however at all times the temperature discrepancy between the two models is fairly small, which suggests that the main model does not set up the boundary/initial conditions incorrectly, i.e. the model may be bug-free?
Screenshot 2022-02-28 at 14.18.58.png

There does remain the issue of the simulation taking a very long time to run. I am currently timing it but it is definitely 10+ minutes (EDIT: it took 15 mins to run to completion). It does look like there is something going on in the saturation zone where thermo takes a while to converge. If this can be fixed then simulation time will come down by a lot because the gas and liquid zones are relatively quick to simulate
 
Last edited:
  • #407
Hi Chet, some short updates -

It is the thermo functions that are computationally expensive. I am looking into how these can be made less expensive to use for now (mostly to speed up model development)

We previously discussed tuning the model to experimental data. I have checked with the lab and the experimental data will be in Excel format (straightforward to deal with from a programming point of view). Python has some nice curve-fitting libraries that take x-y data, the model, and the tuning parameter(s), and return 'optimised' parameters using, for example, least squares.
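As a hedged sketch of what I mean (the file name, column names, and the single tuning parameter below are placeholders, and the stand-in model function is not the packed-bed model, just a dummy curve):

Python:
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

# Placeholder experimental data: an Excel file with time and outlet temperature columns.
df = pd.read_excel("experiment.xlsx")          # assumed file name
t_data = df["time_s"].to_numpy()               # assumed column names
T_data = df["T_outlet_K"].to_numpy()

def model_outlet_T(t, dispersivity):
    """Stand-in for the packed-bed model: run the simulation with the candidate
    dispersivity and return the outlet temperature at the measurement times."""
    return 80.0 + (293.0 - 80.0) * np.exp(-t / (100.0 * dispersivity))  # dummy curve

popt, pcov = curve_fit(model_outlet_T, t_data, T_data, p0=[1.0])
print("fitted parameter:", popt[0], "+/-", np.sqrt(pcov[0, 0]))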

We discussed tuning parameters earlier on in the model (screenshot below is from post #99). This dispersivity parameter no longer seems to be accessible as it is not on the 'top layer' of the model anymore. Should this still be used as the tuning parameter, or are we now using another parameter that is at the top level of the model?
Screenshot 2022-03-01 at 23.08.13.png


And lastly, nothing left on my list of model functionality seems very important to implement (pressure gradient using relative permeability, internal particle temperature gradients). Do you see any functionality that should be implemented before moving to the next section of this model (adding CO2 to the air mixture)?
Edit: Before moving forward, I will do a write-up of everything we have discussed on the final version of the model tomorrow (late here now)
 
  • #408
Hi Chet, short write-up of the discussion we've had to date (on the most complex version of this model only so far): https://docs.google.com/document/d/...ouid=111822275555236809516&rtpof=true&sd=true

The link is to a Word doc of a summarised version of this discussion, plus output from the model and the actual Python script I made. I'll leave this link active indefinitely in case it is useful to any future passers-by on this forum. I will also be updating it regularly until the packed bed modelling is finished, so I'll probably change the location to an open Google Drive file

If it is ok with you I would like to discuss adding CO2 to the existing model. The idea here is that an ambient temperature air-CO2 mixture is passed through a cold packed bed, liquefying the air and freezing the CO2. The solid CO2 will move through the bed as a 'plug' which accumulates mass as it passes through the bed, and will be extracted from the line at the end of the packed bed, therefore 'cleaning' the air stream of CO2. If so, I would like to attempt to lay out the mass and energy balances for this model (similar to what you did initially for the air model). I will definitely have questions on the mass balance for CO2, but I think it would be useful anyway to make an attempt so we have an initial discussion point!
 
Last edited:
  • #409
Hi Chet, just posting the updated plots here. These plots include the heat transfer coefficient correlation and the ##d\rho/dH## correlation, and the model now runs a lot faster than it previously did (about 5 times faster):
Screenshot 2022-09-05 at 12.15.01.png

Screenshot 2022-09-05 at 12.15.16.png


Currently working towards allowing non-constant enthalpy profiles to be supplied to the model
 
  • #410
Hi Chet, updating here as discussed. So here is the position vs gas temperature plot for a molar flow of 0.001mol/s, and then below for 0.008mol/s.

The lower molar flow rate allows for a sharper temperature front that I think you mentioned was desirable

0.001mol/s:

Screenshot 2022-09-16 at 12.39.06.png


0.008mol/s:

Screenshot 2022-09-16 at 12.47.27.png


Note the 0.001 mol/s flow also had n=10; I just zoomed in on the relevant section. But yes, as you can see, the temperature front is much more pronounced for the lower flow
 
  • #411
casualguitar said:
Hi Chet, updating here as discussed. So here is the position vs gas temperature plot for a molar flow of 0.001mol/s, and then below for 0.008mol/s.

The lower molar flow rate allows for a sharper temperature front that I think you mentioned was desirable

0.001mol/s:

View attachment 314228

0.008mol/s:

View attachment 314229

Note the 0.001 mol/s flow also had n=10; I just zoomed in on the relevant section. But yes, as you can see, the temperature front is much more pronounced for the lower flow
This is a very difficult comparison because the advance of the fronts differs due to the different flow rates (IT'S NOT ON A COMMON BASIS). Compare the cases on a common basis in which the injected volumes are equal, and see how they compare. So run the 0.001 case for 8x the time of the 0.008 case.
 
  • #412
Chestermiller said:
Compare the cases on a common basis in which the injected volumes are equal
Ah yes, I didn't think of that (not a common basis)

Running for m = 0.001mol/s and t = 400s, and also m = 0.008mol/s and t=50s gives these plots which match almost exactly:
Screenshot 2022-09-20 at 09.03.57.png

Screenshot 2022-09-20 at 09.04.06.png


In addition, these are the final arrays, in the format [h0,Ts0,h1,Ts1,...hn,Tsn] where h is the fluid enthalpy and Ts is the solid temperature. They are almost identical:
Screenshot 2022-09-20 at 08.31.55.png


So returning to the initial question about which parameters most affect axial dispersion, it seems we can conclude from the above that molar flowrate does not affect axial dispersion significantly?
 
  • #413
Hey Chet, just in addition, I've got a working 'stopping condition' now, meaning that the simulation stops once the outlet temperature of the fluid reaches the inlet temperature of the fluid

The reason this is useful is that I can now sort of quantify the bed's performance degradation over time (unless I'm wrong) in this way:
1) assume the bed is initially at ambient temperature
2) pass cold air through the bed at 0.008 mol/s until the bed reaches steady state
3) save the time taken to reach this stopping condition
4) pass warm air through the bed at 0.008 mol/s (liquefying it), until the stopping condition is met, meaning that liquefaction can no longer occur
5) check the difference in time between the hot and cold 'passes', and multiply this by the molar flowrate. Then we can divide this value by the total amount of air liquefied and multiply by 100 to quantify the percentage of 'liquefying power' lost per pass

Does this approach sound reasonable?
 
  • #414
casualguitar said:
Ah yes, I didn't think of that (not a common basis)

Running for m = 0.001mol/s and t = 400s, and also m = 0.008mol/s and t=50s gives these plots which match almost exactly:
View attachment 314374
View attachment 314375

In addition, these are the final arrays, in the format [h0,Ts0,h1,Ts1,...hn,Tsn] where h is the fluid enthalpy and Ts is the solid temperature. They are almost identical:
View attachment 314372

So returning to the initial question about which parameters most affect axial dispersion, it seems we can conclude from the above that molar flowrate does not affect axial dispersion significantly?
It affects dispersion to about the same extent for a given total volume of flow.
 
  • #415
casualguitar said:
Hey Chet, just in addition, I've got a working 'stopping condition' now, meaning that the simulation stops once the outlet temperature of the fluid reaches the inlet temperature of the fluid

The reason this is useful is that I can now sort of quantify the bed's performance degradation over time (unless I'm wrong) in this way:
1) assume the bed is initially at ambient temperature
2) pass cold air through the bed at 0.008 mol/s until the bed reaches steady state
3) save the time taken to reach this stopping condition
4) pass warm air through the bed at 0.008 mol/s (liquefying it), until the stopping condition is met, meaning that liquefaction can no longer occur
5) check the difference in time between the hot and cold 'passes', and multiply this by the molar flowrate. Then we can divide this value by the total amount of air liquefied and multiply by 100 to quantify the percentage of 'liquefying power' lost per pass

Does this approach sound reasonable?
I'm not able to follow this.
 
  • #416
Chestermiller said:
I'm not able to follow this.
Yes I explained it badly really

So we are modelling the charging (cooling) and discharging (heating) of the bed in a cyclical operation, i.e. cold air is passed through to cool the bed, then hot air is passed through with the goal of liquefying it, liquid air being the product we want here. Then the produced liquid air is passed back through the bed to recool it. The idea is to quantify how the liquefaction performance of the bed degrades over the charge/discharge cycles due to losses etc., i.e. does the amount of liquid we get after each charge/discharge cycle stay the same, or does it gradually decrease?

The question I'm asking is - how can we quantify the degradation in liquefaction performance of the packed bed over time?

One thought I had was to simply check how much cold air is required to charge the bed, then check how much liquid air this cold bed can produce, then pass this produced liquid air back through the bed to cool the bed, then pass ambient air through to liquefy it, and repeat this cycle. We might see that each pass produces slightly less liquid air, due to losses in the system

Is there any aspect of this explanation that is unclear or can I continue with how I was thinking of calculating these losses?
 
  • #417
Chestermiller said:
I'm not able to follow this.
Hi Chet,

I've got the cooling/heating of the packed bed fully working now. And here is a plot of a number of charge/discharge (cooling/heating) cycles:
Screenshot 2022-09-28 at 11.40.16.png

The simulation details:
Bed initial temperature = 80K
Fluid inlet temperature = 293K

The discharge (bed heating) is run until the last position in the bed (position 4 in this case) heats up to a temperature of 1K above 80K i.e. 81K. This signals that the bed can no longer cool the gas to 80K and liquefy it.

Then, from the outlet (position 4) liquid gas at 80K is passed through for the same amount of time, and the bed recools. It won't recoil all the way to 80K though because of some losses.

Then hot air is again passed through until the last position increases by 1K, and so on; this repeats for a few cycles, and the final temperature profiles after each cooling phase are plotted above. If this cycle was repeated a large number of times then eventually we would no longer be able to liquefy air (and the bed would undergo a full cooling phase to fully reset the temperature to 80K)

My question is how can I quantify how the performance of the bed degrades over time?

I was thinking of using the time required to increase the last position by 1K. If I store this value for each cycle, we should see that this time decreases (as the bed performance degrades). We can use ##\text{moles of liquid lost} = \Delta t \,\dot{m}## to calculate the number of moles we lose per cycle, compared to the theoretical max. Does this sound reasonable?
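As a tiny sketch of that bookkeeping (all numbers below are made up, purely for illustration):

Python:
# Hypothetical per-cycle times (s) to raise the last position by 1 K, at mdot = 0.008 mol/s
mdot = 0.008                        # mol/s
t_cycle = [500.0, 488.0, 477.0]     # illustrative values only

for i in range(1, len(t_cycle)):
    dt = t_cycle[0] - t_cycle[i]               # time lost relative to the first cycle
    moles_lost = dt * mdot                     # moles of liquid not produced
    pct = 100.0 * moles_lost / (t_cycle[0] * mdot)
    print(f"cycle {i}: {moles_lost:.3f} mol lost ({pct:.1f}% of the first cycle)")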
 
  • #418
casualguitar said:
Yes I explained it badly really

So we are modelling the charging (cooling) and discharging (heating) of the bed in a cyclical operation, i.e. cold air is passed through to cool the bed, then hot air is passed through with the goal of liquefying it, liquid air being the product we want here. Then the produced liquid air is passed back through the bed to recool it. The idea is to quantify how the liquefaction performance of the bed degrades over the charge/discharge cycles due to losses etc., i.e. does the amount of liquid we get after each charge/discharge cycle stay the same, or does it gradually decrease?

The question I'm asking is - how can we quantify the degradation in liquefaction performance of the packed bed over time?

One thought I had was to simply check how much cold air is required to charge the bed, then check how much liquid air this cold bed can produce, then pass this produced liquid air back through the bed to cool the bed, then pass ambient air through to liquefy it, and repeat this cycle. We might see that each pass produces slightly less liquid air, due to losses in the system

Is there any aspect of this explanation that is unclear or can I continue with how I was thinking of calculating these losses?
I have no idea what you are driving at here. Please show graphs of what you have in mind both for the heating and cooling operations.
 
  • #419
casualguitar said:
Hi Chet,

I've got the cooling/heating of the packed bed fully working now. And here is a plot of a number of charge/discharge (cooling/heating) cycles:
View attachment 314754
The simulation details:
Bed initial temperature = 80K
Fluid inlet temperature = 293K

The discharge (bed heating) is run until the last position in the bed (position 4 in this case) heats up to a temperature of 1K above 80K i.e. 81K. This signals that the bed can no longer cool the gas to 80K and liquefy it.

Then, from the outlet (position 4) liquid gas at 80K is passed through for the same amount of time, and the bed recools. It won't recool all the way to 80K though, because of some losses.

Then hot air is again passed through until the last position increases by 1K, and so on; this repeats for a few cycles, and the final temperature profiles after each cooling phase are plotted above. If this cycle was repeated a large number of times then eventually we would no longer be able to liquefy air (and the bed would undergo a full cooling phase to fully reset the temperature to 80K)

My question is how can I quantify how the performance of the bed degrades over time?

I was thinking of using the time required to increase the last position by 1K. If I store this value for each cycle, we should see that this time decreases (as the bed performance degrades). We can use ##\text{moles of liquid lost} = \Delta t \,\dot{m}## to calculate the number of moles we lose per cycle, compared to the theoretical max. Does this sound reasonable?
I am really confused about what is going on here. Please show graphs for the heating and then the subsequent cooling.
 
  • #420
Chestermiller said:
I am really confused about what is going on here. Please show graphs for the heating and then the subsequent cooling.
Hi Chet, my apologies for the delay. I was running a few simulations for various values of n. Anything greater than n=10 takes considerable time now. n=30 took 24 hours actually.

Anyway, I wanted to do a 'grid independence' study of sorts. If you run hot air through the bed for x amount of time, we get a thermocline in the bed. I did this for n=5,10,20, and 30 and here are the results. Clearly increasing n results in a 'tighter' thermocline or a sharper drop from high temperature to low:
n=5:
1665585588077.png

n=10:
1665585599965.png

n=20:
1665585608585.png

n=30:
1665585619234.png

I'm running higher n now (n=100), however this will take a few days. The goal of this analysis is to 1) find the value of n where the output plots stop changing, i.e. grid independence, and 2) quantify the losses in the bed across a charge/discharge cycle, i.e. if we discharge for 100s and then charge for 100s, how close do we get to the original temperature profile?
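A sketch of how such a check could be automated (run_bed below is a dummy stand-in returning a tanh-shaped thermocline, not the actual model, and the convergence metric is just one reasonable choice):

Python:
import numpy as np

def run_bed(n):
    """Dummy stand-in for the packed-bed simulation: returns an axial fluid
    temperature profile of length n. Replace with the real model call."""
    z = np.linspace(0.0, 1.0, n)
    return 80.0 + (293.0 - 80.0) * 0.5 * (1.0 - np.tanh((z - 0.5) * 10.0))

z_ref = np.linspace(0.0, 1.0, 200)      # common axial grid for the comparison
prev = None
for n in (5, 10, 20, 30):
    T = np.interp(z_ref, np.linspace(0.0, 1.0, n), run_bed(n))
    if prev is not None:
        print(f"n={n}: max |change| vs previous n = {np.max(np.abs(T - prev)):.2f} K")
    prev = T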

No questions on this, just updating
 
  • #421
casualguitar said:
Hi Chet, my apologies for the delay. I was running a few simulations for various values of n. Anything greater than n=10 takes considerable time now. n=30 took 24 hours actually.

Anyway, I wanted to do a 'grid independence' study of sorts. If you run hot air through the bed for x amount of time, we get a thermocline in the bed. I did this for n=5,10,20, and 30 and here are the results. Clearly increasing n results in a 'tighter' thermocline or a sharper drop from high temperature to low:
n=5:
View attachment 315477
n=10:
View attachment 315478
n=20:
View attachment 315479
n=30:
View attachment 315480
I'm running higher n now (n=100), however this will take a few days. The goal of this analysis is to 1) find the value of n where the output plots stop changing, i.e. grid independence, and 2) quantify the losses in the bed across a charge/discharge cycle, i.e. if we discharge for 100s and then charge for 100s, how close do we get to the original temperature profile?

No questions on this, just updating
I think they will continue to change as you increase n. Our idea was to calibrate the value of n to the actual data on column operation to match the actual amount of dispersion occurring experimentally.
 
  • #422
Chestermiller said:
I think they will continue to change as you increase n
Are you saying that you don't think there will be a point where increasing n no longer changes the output i.e. grid independence?

Chestermiller said:
Our idea was to calibrate the value of n to the actual data on column operation to match the actual amount of dispersion occurring experimentally.
I understand this yes. I see how we could use n as a calibration parameter

I also understand that the grid independence value of n is likely to be very different from the calibrated value of n

Edit: Just reading back over previous posts. Didn't you say that one big advantage of this model is that there is actually no numerical dispersion (I might be referencing you incorrectly)? However, I don't think upwind schemes in general have zero numerical dispersion, so why would ours?

Why would the output change as it does above, if there is no numerical dispersion present in the system? Apologies for my misunderstanding
 
Last edited:
  • #423
casualguitar said:
Are you saying that you don't think there will be a point where increasing n no longer changes the output i.e. grid independence?
Yes. Or, more precisely, the temperature profile will approach a step function at large n.
casualguitar said:
I understand this yes. I see how we could use n as a calibration parameter

I also understand that the grid independence value of n is likely to be very different from the calibrated value of n

Edit: Just reading back over previous posts. Didn't you say that one big advantage of this model is that there is actually no numerical dispersion (I might be referencing you incorrectly)? However, I don't think upwind schemes in general have zero numerical dispersion, so why would ours?
All upwind schemes feature numerical dispersion. This is a key feature of our scheme. We are using the numerical dispersion of the upwind scheme to simulate the actual dispersion. They will match when ##\Delta x=l/2## (or wherever we determined the 2 belongs; I can't remember), where l is the dispersion length parameter.
casualguitar said:
Why would the output change as it does above, if there is no numerical dispersion present in the system? Apologies for my misunderstanding
There is no numerical dispersion if upwinding is not used. We have included dispersion explicitly in the model, but have found that the mathematical representation of the dispersion is such that, when the value of ##\Delta x## is chosen in a certain specific relation to the dispersion length l, the equations become exactly the same as with numerical dispersion of the advection terms, but without upwinding included in the equations.
 
  • #424
Chestermiller said:
Yes. Or, more precisely, the temperature profile will approach a step function at large n.
Yes I'm running n=100 currently (will take a few days and it seems to be about half way through). So hopefully this value of n will be high enough to see something resembling this
Chestermiller said:
All upwind schemes feature numerical dispersion. This is a key feature of our scheme. We are using the numerical dispersion of the upwind scheme to simulate the actual dispersion. They will match when Δx=l/2 (or wherever we determined the 2 belongs; I can't remember), where l is the dispersion length parameter.
At low values of n the plots produced are obviously 'low resolution' in that they are somewhat jagged (not smooth). Is it not true that this will be unavoidable if it turns out that a low value of n matches numerical dispersion with actual dispersion?

Chestermiller said:
but have found that the mathematical representation of the dispersion is such that, when the value of Δx is chosen in a certain specific relation to the dispersion length l, the equations become exactly the same as with numerical dispersion of the advection terms, but without upwinding included in the equations.
Understood
 
  • #425
casualguitar said:
At low values of n the plots produced are obviously 'low resolution' in that they are somewhat jagged (not smooth). Is it not true that this will be unavoidable if it turns out that a low value of n matches numerical dispersion with actual dispersion?
I don't think that will be the case. Typically, the dispersion length will be on the order of a few particle diameters.
 
  • #426
Chestermiller said:
I don't think that will be the case. Typically, the dispersion length will be on the order of a few particle diameters.
Ah interesting so for a bed length of 0.1m, and a particle diameter of 0.001m, and assuming a value of 5 particle diameters for the dispersion length, then the value of n (where n = L/dz) would be n=20? Or have I miscalculated here?

Edit: Just one other note - it seems a bit unreasonable that n=100 will take a few days to run, so I'm currently looking into replacing the thermo library with algebraic correlations where possible
 
  • #427
casualguitar said:
Ah interesting so for a bed length of 0.1m, and a particle diameter of 0.001m, and assuming a value of 5 particle diameters for the dispersion length, then the value of n (where n = L/dz) would be n=20? Or have I miscalculated here?
yes.
casualguitar said:
Edit: Just one other note - it seems a bit unreasonable that n=100 will take a few days to run, so I'm currently looking into replacing the thermo library with algebraic correlations where possible
Good idea. This seems to be the right direction to go.
 
  • #428
Chestermiller said:
yes.

Good idea. This seems to be the right direction to go.
Is that a yes to n=20, or a yes to it being a miscalculation?

Great I’ll work on this. There is actually a possibility that the thermo library will allow us to trade accuracy for time. I’ll check this first.

Anyway, we’re using thermo to get temperature as a function of enthalpy and pressure, and also density as a function of enthalpy and pressure, for the liquid, gas, and two phase regions
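One option I'm considering, sketched below: pre-tabulate T(h) and ρ(h) at the operating pressure once and interpolate inside the integration, so the expensive property call is only made at setup. The flash_T_rho function and all numbers are placeholders (a dummy relation stands in for the real flash), and inside the two-phase dome the h grid would need to be fine enough to resolve the flat T(h) region.

Python:
import numpy as np

def flash_T_rho(h, P):
    """Placeholder for the expensive property call (a flash at fixed P and H):
    returns (T, rho). The relations below are dummies, for illustration only."""
    T = h / 1000.0                 # dummy: h = cp*T with cp = 1000 J/(mol K)
    rho = P / (8.314 * T)          # dummy: ideal-gas molar density, mol/m^3
    return T, rho

P_op = 1.0e5                                  # operating pressure, Pa (placeholder)
h_grid = np.linspace(60e3, 300e3, 2000)       # enthalpy range to tabulate (placeholder)
T_tab, rho_tab = np.array([flash_T_rho(h, P_op) for h in h_grid]).T

def T_of_h(h):                                # fast lookups used inside the ODE function
    return np.interp(h, h_grid, T_tab)

def rho_of_h(h):
    return np.interp(h, h_grid, rho_tab)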
 
  • #429
casualguitar said:
Is that a yes to n=20, or a yes to it being a miscalculation?
n=20
 
  • #430
Chestermiller said:
n=20
Definitely noticed differences going from n=20 to n=30. n=100 is actually still running believe it or not. As discussed I'm working on speeding up this script now
 
  • #431
Chestermiller said:
n=20
Hi Chet,
I'm about to start on that "modelling axial heat dispersion as numerical dispersion" analysis. But I did spot one thing in the model derivation -

If we're saying a given ##\Delta x## introduces a given amount of axial heat dispersion into the system, and we have tuned our model to one set of process conditions, can we really say that if we change the length/flow/pressure then we will still see approximately the same amount of axial heat dispersion? The model I tuned to uses much larger flows and a much larger diameter than we have in our model
 
Last edited:
  • #432
casualguitar said:
Hi Chet,
I'm about to start on that "modelling axial heat dispersion as numerical dispersion" analysis. But I did spot one thing in the model derivation -

If we're saying a given ##\Delta x## introduces a given amount of axial heat dispersion into the system, and we have tuned our model to one set of process conditions, can we really say that if we change the length/flow/pressure then we will still see approximately the same amount of axial heat dispersion? The model I tuned to uses much larger flows and a much larger diameter than we have in our model
According to our derivation, yes. The dispersion length l is some constant times the particle size only. So, if we have the same particle size, the l and ##\Delta x## should be the same.
 
  • #433
Chestermiller said:
According to our derivation, yes. The dispersion length l is some constant times the particle size only. So, if we have the same particle size, the l and ##\Delta x## should be the same.
Hi Chet, back on this. In terms of looking further into this "modelling numerical dispersion as physical dispersion" observation, there seem to be a few options for progression.
1. Check if this happens for other equations in other domains in an equivalent sense
2. Consider the observation for this specific case in more depth.

I think option 2 is a clearer path but if you like the idea of option 1 I'm happy to work on that.

Option 2:
I think we have two main possibilities here -
1. Performance Analysis - comparing the performance of the central differencing scheme to that of the upwind scheme for a range of conditions, by comparing computation speed, accuracy, stability, and maybe memory usage
2. We could look into the characteristics of this PDE to see if there is any underlying mechanism that allows this to occur.

I like the performance analysis approach; it would be straightforward to get a sense of how much faster the upwind scheme is (if at all), and how computationally efficient it is in comparison to the 2nd order scheme
 
  • #434
casualguitar said:
Hi Chet, back on this. In terms of looking further into this "modelling numerical dispersion as physical dispersion" observation, there seem to be a few options for progression.
1. Check if this happens for other equations in other domains in an equivalent sense
I don't know what this means.
casualguitar said:
2. Consider the observation for this specific case in more depth.

I think option 2 is a clearer path but if you like the idea of option 1 I'm happy to work on that.

Option 2:
I think we have two main possibilities here -
1. Performance Analysis - comparing the performance of the central differencing scheme to that of the upwind scheme for a range of conditions, by comparing computation speed, accuracy, stability, and maybe memory usage
We are already using central differencing, with a specific value of the finite difference length. For this specific value of the finite difference length, the equations reduce to the upwind differencing equations without explicit dispersion included in the PDEs. Are you saying you want to use a specific value of l, and vary ##\Delta x## to test the accuracy of the central difference finite difference scheme and the accuracy of using our relationship with ##\Delta x=2l##?
 
  • #435
Chestermiller said:
We are already using central differencing, with a specific value of the finite difference length.
Sorry yes, my mistake, I suppose there is no "comparing upwind and central differencing schemes", because we only have central differencing.

Chestermiller said:
Are you saying you want to use a specific value of l, and vary Δx to test the accuracy of the central difference finite difference scheme and the accuracy of using our relationship with Δx=2l?
I am saying this, but we may have already done this unless I'm misunderstanding.

We calculated ##l## with that Ruthven et al correlation, where ##l## is a function of particle diameter. This allowed us to calculate a ##\Delta x## value (I think it was about ##0.027m##). Then separately we matched our model to experimental data by varying the number of tanks, n, until our temperature profiles matched experimental data, and then calculated another ##\Delta x## value from that using ##\Delta x = L/n## (the value came to be about ##0.03m##). So the two values calculated by different methods are almost equal. Is this equivalent to what you're saying above?
 
  • #436
casualguitar said:
Sorry yes, my mistake, I suppose there is no "comparing upwind and central differencing schemes", because we only have central differencing. I am saying this, but we may have already done this unless I'm misunderstanding.

We calculated ##l## with that Ruthven et al correlation, where ##l## is a function of particle diameter. This allowed us to calculate a ##\Delta x## value (I think it was about ##0.027m##). Then separately we matched our model to experimental data by varying the number of tanks, n, until our temperature profiles matched experimental data, and then calculated another ##\Delta x## value from that using ##\Delta x = L/n## (the value came to be about ##0.03m##). So the two values calculated by different methods are almost equal. Is this equivalent to what you're saying above?
##\Delta x## is always 2l, where ##l = kD_p##; n has to be chosen such that ##n=\frac{L}{\Delta x}=\frac{L}{2l}=\frac{L}{2kD_p}##

The real issues are whether l is a constant, or whether k is a function of Re and Pr or Sc. We are taking it to be constant based on the groundwater porous media references. Other researchers are taking k to be a function of Re and Pr or Sc based on packed bed literature. There really should be no difference except for the flow in porous media being laminar and the flow in packed beds sometimes being turbulent (because of the larger tortuous flow channel dimensions). That's why we were checking to see whether, over the range of Re in the beds of interest or our bed, Re varies enough for their correlations to provide substantial variation in k (and whether, in our model, a constant value of k is adequate to match the observational data). If k is a substantial function of Re and Pr or Sc, we can't use ##\Delta x=2l=2kD_p## along with our upwind differencing approximation (to our central differencing equations) to model the dispersion.

Another issue is numerical: whether ##\Delta x## is small enough, using ##\Delta x=2l=2kD_p##, for the finite difference equations to be sufficiently accurate numerically. This can be checked numerically by fixing l, varying ##\Delta x## independently, and not reducing the central difference equations to the forward difference version obtained if ##l=\Delta x/2## (or wherever the 2 goes; I can never remember). But this would require some additional coding with additional chance of making numerical errors; it would be a real pain in the butt.
 
  • #437
Chestermiller said:
If k is a substantial function of Re and Pr or Sc, we can't use Δx=2l=2kDp along with our upwind differencing approximation (to our central differencing equations) to model the dispersion.
To check this assumption for the CO2 model, we did something like the plot below:


If suitable, I will redo this calculation for this system and run for a range of Re values. It seems like we will at least get an Re 'range of validity' for the ##\Delta x = 2l## assumption, and won't have to completely throw it out. Is this similar to what you had in mind?
 
  • #438
casualguitar said:
To check this assumption for the CO2 model, we did something like the plot below:


If suitable, I will redo this calculation for this system and run for a range of Re values. It seems like we will at least get an Re 'range of validity' for the ##\Delta x = 2l## assumption, and won't have to completely throw it out. Is this similar to what you had in mind?

A direct comparison of their numerical results with yours would be more powerful.
 
  • #439
casualguitar said:

Some advice. When you are making a technical argument (like the "l" value being close to constant as a function of the Reynolds number), and you are confident in your contention, it is best to present your argument using visuals that most strongly emphasize your contention. In the case of this graph, even though you are using a log scale for the ordinate, I think it would be much better if you used a full decade on the ordinate scale. This would better illustrate the small percentage variation in the function, compared to the variation shown which only goes from 1 to 2.4, but spans the entire vertical domain of your graph. How you present things visually is very important to making your case.
 
  • #440
Chestermiller said:
Some advice. When you are making a technical argument (like the "l" value being close to constant as a function of the Reynolds number), and you are confident in your contention, it is best to present your argument using visuals that most strongly emphasize your contention. In the case of this graph, even though you are using a log scale for the ordinate, I think it would be much better if you used a full decade on the ordinate scale. This would better illustrate the small percentage variation in the function, compared to the variation shown which only goes from 1 to 2.4, but spans the entire vertical domain of your graph. How you present things visually is very important to making your case.
Thanks Chet. Something like this (going from 0.1 to 10)?

Chestermiller said:
A direct comparison of their numerical results with yours would be more powerful.
If you mean calculate an ##\frac{l_h}{d_p}## value for their data, then I calculated this as being equal to about 1.33 which is inside our range of 1.2 - 2.4. I can post this calculation if necessary.

So from the above we can possibly conclude that:
1) ##l## is not a significant function of Re inside our Re range of interest (0 to about 6000)
2) The range of ##l## values we get is similar to that of gas phase experimental data for air

We do have condensing/evaporating flow though; can the above be extended to the liquid phase? I haven't managed to find much clear experimental data on two phase flow. The Brouwers et al reference was the most useful thing I had found: https://josbrouwers.bwk.tue.nl/publications/Journal21.pdf
 
  • #441
casualguitar said:
Thanks Chet. Something like this (going from 0.1 to 10)?


If you mean calculate an ##\frac{l_h}{d_p}## value for their data, then I calculated this as being equal to about 1.33 which is inside our range of 1.2 - 2.4. I can post this calculation if necessary.

So from the above we can possibly conclude that:
1) ##l## is not a significant function of Re inside our Re range of interest (0 to about 6000)
2) The range of ##l## values we get is similar to that of gas phase experimental data for air

We do have condensing/evaporating flow though; can the above be extended to the liquid phase? I haven't managed to find much clear experimental data on two phase flow. The Brouwers et al reference was the most useful thing I had found: https://josbrouwers.bwk.tue.nl/publications/Journal21.pdf

Check Collins' book for a chapter on two phase flow in porous media.
 
  • #442
Chestermiller said:
Check Collins' book for a chapter on two phase flow in porous media.
Hi Chet, I think I found something relevant in section 8.30, "Miscible Displacement in Porous Media", Figure 8.2 (attached below):


Would it be reasonable to make the case that the dispersion coefficient is not a strong function of phase, but rather of velocity?

If so, I could:
1) take the range of velocities we see in our simulations
2) calculate dispersion coefficients using our previously calculated dispersion length, ##l## of about ##0.03m##
3) plot our dispersion coefficients versus the data above and see if we get a reasonable match
 
  • #443
casualguitar said:
Hi Chet, I think I found something relevant in section 8.30, "Miscible Displacement in Porous Media", Figure 8.2 (attached below):


Would it be reasonable to make the case that the dispersion coefficient is not a strong function of phase, but rather of velocity?

If so, I could:
1) take the range of velocities we see in our simulations
2) calculate dispersion coefficients using our previously calculated dispersion length, ##l## of about ##0.03m##
3) plot our dispersion coefficients versus the data above and see if we get a reasonable match

4) in the case that we do get a match (with the Carberry et al liquid and gas phase dispersion coefficients above), we can make the claim that our dispersion length of ##0.03m## is suitable for both liquid and gas phase in the range of velocities simulated
 
  • #444
casualguitar said:
4) in the case that we do get a match (with the Carberry et al liquid and gas phase dispersion coefficients above), we can make the claim that our dispersion length of ##0.03m## is suitable for both liquid and gas phase in the range of velocities simulated
Hi Chet, in your view is the above approach reasonable? This would mean that I am not tuning to any liquid phase temperature profiles, but instead tuning this two phase model to gas phase temperature profiles only (of which there is lots of data), and also showing that the dispersion coefficients present in the gas phase are applicable in the liquid phase, and therefore we can expect that tuning to gas phase profiles would give us reasonable liquid phase accuracy also
 
  • #445
Chestermiller said:
I think they will continue to change as you increase n. Our idea was to calibrate the value of n to the actual data on column operation to match the actual amount of dispersion occurring experimentally.
Hi Chet, I hope you're doing well. I see you're very active here lately, which is great (it really is a godsend for postgraduate students).

I wanted to respond to the above comment from you. There now exists an experimental setup that will allow for the measurement of axial temperature profiles in a packed bed. The bed can be maintained at a set pressure value that we choose, between about 1 and 50 bar. We can also measure the flow into and out of the bed.

As you say above, to measure the dispersion coefficient we just need to tune ##\Delta x## to the experimental temperature profiles and calculate the dispersion length (and therefore the dispersion coefficient range) from this.

I was planning on doing this for gas phase only, liquid phase only, and for the condensation/evaporation case, for a range of pressures.

I was only wondering if you see any other useful experiments that could be run that might give insight into the system in practice? Or experiments that might typically be run on a system like this. If not then no worries however I had to ask
 
  • #446
casualguitar said:
Hi Chet, I hope you're doing well. I see you're very active here lately, which is great (it really is a godsend for postgraduate students).

I wanted to respond to the above comment from you. There now exists an experimental setup that will allow for the measurement of axial temperature profiles in a packed bed. The bed can be maintained at a set pressure value that we choose, between about 1 and 50 bar. We can also measure the flow into and out of the bed.

As you say above, to measure the dispersion coefficient we just need to tune ##\Delta x## to the experimental temperature profiles and calculate the dispersion length (and therefore the dispersion coefficient range) from this.

I was planning on doing this for gas phase only, liquid phase only, and for the condensation/evaporation case, for a range of pressures.

I was only wondering if you see any other useful experiments that could be run that might give insight into the system in practice? Or experiments that might typically be run on a system like this. If not then no worries however I had to ask
I think that, at this point in your thesis work, you should be focusing on minimizing the amount of additional work that you commit to, and instead be focusing on wrapping things up. You need to negotiate an agreed upon end point to your work.

That being said, I would run the model to see which kind of experimental test of temperature profiles would give you the best shot at an accurate answer without excessive effort.
 
  • #447
Chestermiller said:
I think that, at this point in your thesis work, you should be focusing on minimizing the amount of additional work that you commit to, and instead be focusing on wrapping things up. You need to negotiate an agreed upon end point to your work.

That being said, I would run the model to see which kind of experimental test of temperature profiles would give you the best shot at an accurate answer without excessive effort.
Hi Chet, my apologies, yes, I have effectively completed thesis writing at this point; I was more curious about future work on this than anything.

I did have one question on the modelling of physical dispersion using numerical dispersion (I was hoping to flesh this out a small bit more as it is very central to the model) - is it true to say that for this set of equations, if we use a second order accurate finite difference representation of the mass/energy balances, and let ##l## equal to ##\Delta x/2##, it will always mean that the amount of numerical dispersion we introduce is equal to the amount of physical dispersion we see? i.e. are there conditions where the physical dispersion cannot be modelled using numerical dispersion, for this set of equations? Or can it be mathematically shown that numerical dispersion must equal physical dispersion when the grid spacing = ##2l##?

The equations:
\begin{equation}
\frac{\partial h_x}{\partial t} = \frac{\phi_{x-\Delta x/2}(h_{x-\Delta x} - h_x)}{\rho_x \Delta x} + \frac{Ua}{\rho_x\epsilon}(T_{s,x}-T_x)
\end{equation}

\begin{equation}
m_j\frac{dh_j}{dt} = \dot{m}_{j-1}(h_{j-1} - h_j) + UA(T_{S,j}-T_j)
\end{equation}
 
  • #448
casualguitar said:
Hi Chet, my apologies, yes, I have effectively completed thesis writing at this point; I was more curious about future work on this than anything.

I did have one question on the modelling of physical dispersion using numerical dispersion (I was hoping to flesh this out a small bit more as it is very central to the model) - is it true to say that for this set of equations, if we use a second order accurate finite difference representation of the mass/energy balances, and let ##l## equal to ##\Delta x/2##, it will always mean that the amount of numerical dispersion we introduce is equal to the amount of physical dispersion we see? i.e. are there conditions where the physical dispersion cannot be modelled using numerical dispersion, for this set of equations? Or can it be mathematically shown that numerical dispersion must equal physical dispersion when the grid spacing = ##2l##?

The equations:
\begin{equation}
\frac{\partial h_x}{\partial t} = \frac{\phi_{x-\Delta x/2}(h_{x-\Delta x} - h_x)}{\rho_x \Delta x} + \frac{Ua}{\rho_x\epsilon}(T_{s,x}-T_x)
\end{equation}

\begin{equation}
m_j\frac{dh_j}{dt} = \dot{m}_{j-1}(h_{j-1} - h_j) + UA(T_{S,j}-T_j)
\end{equation}
We are setting this up with a central difference scheme for better numerical accuracy. But the way that we set up these central differences matters too. For fluxes into a cell, we use the 2nd order central difference approximation for the 1st derivative on the left face of the cell and for fluxes leaving the cell, we use the 2nd order central difference approximation for the 1st derivative on the right face of the cell. This is based on knowing the overall flows into and out of the cell on the left and right faces. This approach automatically conserves mass, not only for the differential equation but also numerically in the finite difference scheme. In my judgment, if the finite difference scheme is not set up in the specific way that we have done it, it would be impossible to recognize the relationship we have elucidated between numerical dispersion and actual dispersion. That's what so neat about what we have done. Of course, for this to hold together, the dispersion coefficient has to be expressible as in the groundwater literature (##D=v_ol##).

It is important to recognize that the first order picture of what is happening is provided by the sharp front model. All that the diffusion does is smooth the transitions between zones, and we can see from the results that this is not a major effect. So getting the dispersion included with super accuracy is not going to be an imperative (in my judgment) from an engineering standpoint.
 
  • #449
Chestermiller said:
We are setting this up with a central difference scheme for better numerical accuracy. But the way that we set up these central differences matters too. For fluxes into a cell, we use the 2nd order central difference approximation for the 1st derivative on the left face of the cell and for fluxes leaving the cell, we use the 2nd order central difference approximation for the 1st derivative on the right face of the cell. This is based on knowing the overall flows into and out of the cell on the left and right faces. This approach automatically conserves mass, not only for the differential equation but also numerically in the finite difference scheme. In my judgment, if the finite difference scheme is not set up in the specific way that we have done it, it would be impossible to recognize the relationship we have elucidated between numerical dispersion and actual dispersion. That's what so neat about what we have done. Of course, for this to hold together, the dispersion coefficient has to be expressible as in the groundwater literature (##D=v_ol##).

It is important to recognize that the first order picture of what is happening is provided by the sharp front model. All that the diffusion does is smooth the transitions between zones, and we can see from the results that this is not a major effect. So getting the dispersion included with super accuracy is not going to be an imperative (in my judgment) from an engineering standpoint.
Hi Chet, is the claim that the numerical dispersion from the upwind scheme is exactly the same as the physical dispersion in the central differencing scheme, or is the claim only that they are of the same order of magnitude?
 
  • #450
We are comparing the numerical dispersion introduced by upwind differencing to the physical dispersion captured by the central differencing scheme.

Given the energy balance:
\begin{equation}
\frac{d(\rho_x h_x)}{dt} = \frac{\phi_{x-\Delta x/2}(\frac{h_{x-\Delta x} + h_x}{2})- \phi_{x+\Delta x/2}(\frac{h_{x+\Delta x} + h_x}{2})}{\Delta x} + l\frac{\phi_{x+\Delta x/2}(h_{x+\Delta x} - h_x)- \phi_{x-\Delta x/2}(h_x - h_{x - \Delta x})}{\Delta x^2} + \frac{Ua}{\epsilon}(T_s-T)
\end{equation}

Using Taylor series expansion about point x:
\begin{align}
h_{x+\Delta x} &= h_x + \Delta x \frac{\partial h}{\partial x} + \frac{\Delta x^2}{2!}\frac{\partial^2 h}{\partial x^2} + O(\Delta x^3) \\
h_{x-\Delta x} &= h_x - \Delta x \frac{\partial h}{\partial x} + \frac{\Delta x^2}{2!}\frac{\partial^2 h}{\partial x^2} - O(\Delta x^3)
\end{align}

The upwind scheme for the first derivative at x can be expressed as:
##\frac{\partial h}{\partial x} \approx \frac{h_x - h_{x-\Delta x}}{\Delta x}##

Substituting in the Taylor expansion for ##h_{x-\Delta x}##, we get:
##\frac{h_x - h_{x-\Delta x}}{\Delta x} = \frac{\partial h}{\partial x} - \frac{\Delta x}{2} \frac{\partial^2 h}{\partial x^2} + O(\Delta x^2)##

The truncation error, which is what introduces the numerical dispersion (in the energy balance it appears multiplied by the advective flux ##\phi##), is of magnitude:
##\frac{\Delta x}{2} \frac{\partial^2 h}{\partial x^2} + O(\Delta x^2)##

Matching this with the dispersion term in the 2nd order representation:
\begin{equation}
l\frac{\phi_{x+\Delta x/2}(h_{x+\Delta x} - h_x)- \phi_{x-\Delta x/2}(h_x - h_{x - \Delta x})}{\Delta x^2}
\end{equation}

Showing equivalence (to leading order, treating ##\phi## as locally constant, the two terms are equal when ##l = \Delta x/2##):
\begin{equation}
\phi\,\frac{\Delta x}{2} \frac{\partial^2 h}{\partial x^2} = l\frac{\phi_{x+\Delta x/2}(h_{x+\Delta x} - h_x)- \phi_{x-\Delta x/2}(h_x - h_{x - \Delta x})}{\Delta x^2}
\end{equation}

I may have done this in a roundabout way. Does the above somewhat make sense?
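For reference, the same claim can be stated more compactly via the modified-equation argument (a sketch assuming constant velocity ##v## and density, so the physical dispersion coefficient takes the groundwater form ##D = vl##):

\begin{align}
\text{target PDE:}\quad & \frac{\partial h}{\partial t} + v\frac{\partial h}{\partial x} = vl\,\frac{\partial^2 h}{\partial x^2} \\
\text{upwind difference:}\quad & \frac{h_x - h_{x-\Delta x}}{\Delta x} = \frac{\partial h}{\partial x} - \frac{\Delta x}{2}\frac{\partial^2 h}{\partial x^2} + O(\Delta x^2) \\
\text{modified equation:}\quad & \frac{\partial h}{\partial t} + v\frac{\partial h}{\partial x} = v\frac{\Delta x}{2}\frac{\partial^2 h}{\partial x^2} + O(\Delta x^2)
\end{align}

So the numerical dispersion coefficient of the upwind scheme, ##v\Delta x/2##, matches the physical dispersion coefficient ##D = vl## exactly when ##\Delta x = 2l##.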
 