How Safe is the Boeing 737 Max's MCAS System?

In summary, MCAS was not the sole cause of the crashes, and the plane can be flown manually with the system disengaged even if an angle of attack sensor is not working correctly. However, a faulty angle of attack sensor can cause the system to command nose-down trim when no stall is imminent, forcing the pilots to manually fly the plane back to the correct pitch attitude.
  • #1
cyboman
Hi,

I have a question regarding the tragic crash of the latest 737 Max.

Is it not a huge error in the flight laws and the MCAS software to execute a nose down maneuver at any altitude? Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?

I'm also confused as to how this MCAS system and other autopilot software is implemented. Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control? This seems mission critical to me.

-cybo
 
  • #2
cyboman said:
I'm also confused as to how this MCAS system and other autopilot software is implemented.

Actually, the MCAS itself is not an autopilot; it's a system to adjust the trim to compensate for a change in the engine from previous 737 models, which, as I understand it (and pilot reports appear to bear this out, see the link below), is supposed to be active only when the pilot is manually flying the plane.

However, from some of the information I've seen online [1], it appears that the 737 MAX has also had uncommanded pitch down events while in autopilot. That indicates a different problem from the MCAS--I suspect it has to do with a faulty angle of attack sensor not being correctly detected by the automated system, so the faulty input is used to trigger a pitch down instead of the plane being kicked out of autopilot and the pilot being notified. Airbus aircraft are known to have a similar problem which has caused several incidents.

[1] https://news.ycombinator.com/item?id=19373707

cyboman said:
Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control?

Yes, and as you'll see from the reports in the Hacker News thread I linked to above, the pilots who submitted those reports (who were US carrier pilots and appear to have been much more careful about pre-briefing possible issues and acting quickly when an issue happened) did in fact immediately disengage autopilot and bring the plane back to correct pitch attitude when an uncommanded pitch down happened. But again, that had nothing to do with MCAS; in fact, one of the pilots noted that he had engaged autopilot on that flight earlier than he normally would have in order to remove a possible MCAS threat during a manual climb.

cyboman said:
Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?

From an automatic pilot point of view, not necessarily. If the plane is about to stall because it's pitched up too much, it's going to fall out of the sky if nothing is done; so even at a pretty low altitude, a pitch down can be the correct maneuver in order to increase airspeed and get back into a controlled flight regime. Yes, you might skim pretty close to the ground with wings level, but that's better than falling into the ground tail first with the nose pitched way up in a stall.

The problem, as I see it, is that the automated systems are not properly programmed to detect and respond to faulty angle of attack input. The automated systems believe, based on faulty angle of attack (AoA) sensor input, that the nose is pitched way up and a stall is imminent, when in fact either the wings are level, or the airplane is in a controlled climb and is nowhere near a stall. In the case of the Airbus incidents I mentioned above, there were in fact three AoA sensors, one of which went bad--and instead of comparing its output to the other two sensors, spotting the bad sensor, and taking it out of the loop, the automatic system executed uncommanded pitch down based on the faulty input. That seems like an obvious design error to me.

In the case of the 737-MAX, it's not entirely clear what role AoA sensors played, since investigation is still ongoing, but I've seen at least one online article suspecting that there are only two AoA sensors instead of three in this system, which makes fault detection a lot harder. The obvious solution to me would be to add a third sensor (and properly compare the sensors to spot a faulty one, as above).
 
  • #3
cyboman said:
Hi,

I have a question regarding the tragic crash of the latest 737 Max.

Is it not a huge error in the flight laws and the MCAS software to execute a nose down maneuver at any altitude? Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?
[edit]
So, this is an issue I've been putting some thought into, and I've moved this post of mine from the Ethiopia Air thread; it should answer your first two questions, though it may prompt follow-ups: [/edit]
[re-located content]
My understanding of the 737 Max's Maneuvering Characteristics Augmentation System (MCAS) is that it adjusts the "feel" of the airplane so that elevator backpressure is required to pitch up and maintain pitch at high angle of attack and throttle. That would be normal for most airplanes, but the 737 MAX's large engines provide a pitch-up torque that reverses this behavior at high AoA and throttle; pilots would have to push forward to prevent the nose from continuing to rise. MCAS adjusts the trim to counteract this "feel". It's an atypical system.
https://boeing.mediaroom.com/news-releases-statements?item=130402

It appears to me that the media is misreporting this as the MCAS reacting to a too-high angle of attack by automatically lowering the nose to prevent a stall. That's a common (universal on airliners?) but notably separate behavior. Strictly speaking - if I understand it correctly - MCAS does not actually override pilot action, but only changes the amount and direction of force required for the pilot to move/position the elevator.
[/re-located content]
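The trigger conditions described above (manual flight, flaps up, high angle of attack) can be sketched in a few lines. To be clear, this is an illustration only: the function name, the threshold, and the trim increment are hypothetical values invented for the example, not Boeing's actual parameters.

```python
# Illustrative sketch of MCAS-style activation gating as described above.
# All names, thresholds, and increments are hypothetical placeholders,
# NOT Boeing's actual parameters.

def mcas_trim_command(aoa_deg, flaps_deployed, autopilot_engaged,
                      aoa_threshold_deg=10.0, trim_increment=-0.6):
    """Return a nose-down stabilizer trim increment (degrees), or 0.0.

    MCAS is described as active only in manual flight with flaps up,
    applying nose-down trim when angle of attack exceeds a threshold.
    """
    if flaps_deployed or autopilot_engaged:
        return 0.0             # system inactive in these configurations
    if aoa_deg > aoa_threshold_deg:
        return trim_increment  # trim nose down; pilot feels a heavier stick
    return 0.0

# Manual flight, flaps up, high AoA -> nose-down trim applied
print(mcas_trim_command(12.0, flaps_deployed=False, autopilot_engaged=False))
# Flaps deployed -> system disabled
print(mcas_trim_command(12.0, flaps_deployed=True, autopilot_engaged=False))
```

The point of the sketch is the gating: with flaps deployed or autopilot engaged, the system should make no trim input at all.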

I'm also confused as to how this MCAS system and other autopilot software is implemented. Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control? This seems mission critical to me.
Kind of. Flight control systems and autopilots are very complex and multi-layered. There are multiple modes of each. Here's a good primer:
https://www.skybrary.aero/index.php/Flight_Control_Laws

Perhaps the bottom line is this: Airliners don't get flown via direct control unless there are major failures.
 
  • #4
PeterDonis said:
From an automatic pilot point of view, not necessarily. If the plane is about to stall because it's pitched up too much, it's going to fall out of the sky if nothing is done; so even at a pretty low altitude, a pitch down can be the correct maneuver in order to increase airspeed and get back into a controlled flight regime. Yes, you might skim pretty close to the ground with wings level, but that's better than falling into the ground tail first with the nose pitched way up in a stall.

The problem, as I see it, is that the automated systems are not properly programmed to detect and respond to faulty angle of attack input.
Example:
Air France 447 crashed because the flying pilot held full back-pressure on the control stick and stalled the plane from cruise until it hit the ocean about 4 minutes later. The flight control system had a stall-prevention feature, but it was receiving faulty airspeed indication, so it disconnected that feature. It's difficult to know what the pilot was thinking, but it is possible he didn't realize the plane could be stalled.
 
  • #5
cyboman said:
Is it not a huge error in the flight laws and the MCAS software to execute a nose down maneuver at any altitude?
If by "nose down" you mean lowering the nose when the angle of attack is too high, then the answer is "of course not" - the whole point of the system is to bring the nose down to reduce the angle of attack. The Lion Air crash involved a bad angle of attack sensor telling the software that the aircraft was in danger of stalling because the angle of attack was too high when it wasn't.
Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?
The MCAS system is disabled whenever the flaps are deployed.
I'm also confused as to how this MCAS system and other autopilot software is implemented. Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control? This seems mission critical to me.
That would be... problematic... on a fly-by-wire aircraft.

This might be a good time to mention that it is way premature to assume that ET961 is a repeat of LN610 (Lion Air). Wait for the investigation to complete. If you can't stand to wait for the report (a long wait - these things take a while) you'll find much informed discussion on the anonymous professional pilots' forums.

[Post edited to properly reflect the distinction between pitch attitude and angle of attack. They are not generally the same, and it is possible for the AoA to be dangerously high even when the nose is level or pitched down]
 
  • #6
Thanks for all your replies. This is definitely a much more technical analysis than what's in the media and is what I was looking for.

If the plane is about to stall because it's pitched up too much, it's going to fall out of the sky if nothing is done; so even at a pretty low altitude, a pitch down can be the correct maneuver in order to increase airspeed and get back into a controlled flight regime.

It seems this is assuming many variables, variables the system is getting erroneously or perhaps not calculating at all, given the scenario as it looks in these recent tragic cases. It appears the pilot, perhaps not knowing how to disable the MCAS or overwhelmed at the time, was literally fighting the computer, so the rate of climb and descent is all over the map (from the data I've seen). The computer pitches down, the pilot pitches up... In this case, there could be a rule that says: under this altitude, the MCAS is disabled. The purpose, as suggested, would be that if there is a failure in any of the sensors, it prevents the MCAS from putting the plane into a dive at such an irrecoverable altitude that it ends up flying into the ground. If the MCAS can only engage above, say, 10,000 feet, there is time for the pilot to actually disable the system and recover manually. However, as I'll suggest in a moment, perhaps the existence of such an MCAS is intrinsically problematic.

Perhaps the bottom line is this: Airliners don't get flown via direct control unless there are major failures.

I think given these tragic case studies, there is a precedent for an all-encompassing master override, so that in such a traumatic scenario (warning buzzers and warning lights going off everywhere), the pilot can, with one switch, take complete control of the airplane. This seems a given, or else we might as well call these aircraft drones and get rid of the pilot altogether, in which case we'd have no one to blame but the software. In other words, if the pilot cannot take complete control of the fly-by-wire system with a simple pull of a lever, then the pilot is not really in control of the vehicle. If the MCAS is in fact part of the fly-by-wire system itself, then we're in a scenario where the airplane's design is so unstable it needs an additional software "layer" to control it due to its atypical aerodynamics. I would argue this is acceptable perhaps in military stealth aircraft, but not in commercial airliners. Perhaps at one point it will be, but I think it's fair to say we're not there yet, no matter how expensive kerosene is.
 
  • #7
cyboman said:
It seems this is assuming many variables, variables the system is getting erroneously or perhaps not calculating at all.

The key input is the angle of attack. That's why I focused on the angle of attack sensors in my previous post. It seems to me that a simple fix would be to have three sensors and use a majority vote among them to determine when one is giving faulty input, and average them to obtain the angle of attack that drives the rest of the software. In fact it worries me that that is not already how these systems are designed; what that tells me is that an obvious design flaw can make it all the way through multiple levels of design review and signoff by two major aircraft manufacturers plus all the regulators that are supposed to certify the systems, without being spotted. That does not seem like a good state of affairs.
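The majority-vote scheme suggested here can be sketched as follows. The function name, the 5-degree agreement tolerance, and the exact flagging rule are illustrative assumptions for the example, not any manufacturer's actual logic.

```python
# Sketch of the three-sensor voting scheme described above: flag any sensor
# that disagrees with the other two (while those two agree), then average
# the remaining good readings. The tolerance is an arbitrary example value.

def vote_aoa(readings, tolerance=5.0):
    """Given three AoA readings, drop any outlier and average the rest.

    A sensor is considered faulty if it differs from both other sensors
    by more than `tolerance` while those two agree with each other.
    Returns (aoa_estimate, list_of_faulty_indices).
    """
    assert len(readings) == 3
    faulty = []
    for i in range(3):
        others = [readings[j] for j in range(3) if j != i]
        disagrees = all(abs(readings[i] - o) > tolerance for o in others)
        others_agree = abs(others[0] - others[1]) <= tolerance
        if disagrees and others_agree:
            faulty.append(i)
    good = [r for i, r in enumerate(readings) if i not in faulty]
    return sum(good) / len(good), faulty

# Sensor 2 stuck at 40 degrees while the other two read about 4 degrees:
print(vote_aoa([4.0, 4.4, 40.0]))  # -> (4.2, [2])
```

With this scheme a single stuck sensor is outvoted and excluded, instead of its faulty reading being allowed to trigger a pitch-down event.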

cyboman said:
an all encompassing master override

There is one. If you look at the Hacker News thread I linked to, and the reports quoted there, you will see that the pilots in two incidents were able to immediately disengage automatic control and take manual control of the airplane when they had to. I don't think that is the issue.

The issue I see is that, because of the faulty design of the automatic systems, an uncommanded event can put the plane in jeopardy (for example with an uncommanded pitch down at low altitude when the plane is not actually close to a stall) without the pilot having enough time to react even if all he has to do is flip one switch and take control. What should happen is that faulty input should be detected and ignored, instead of being allowed to trigger automated actions. If it gets to the point where enough sensors have to be ignored that the automatic system can't fly the airplane, a big red light should go on in the cockpit and no control changes should be made until the pilot takes manual control.
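That fail-safe behavior (ignore flagged sensors; if too few trustworthy ones remain, make no automatic control change and alert the crew) can be sketched like this. All names and thresholds here are illustrative assumptions, not a real system's values.

```python
# Sketch of the fail-safe policy described above. Names and thresholds
# are illustrative only, not taken from any actual flight control system.

MIN_GOOD_SENSORS = 2

def stall_protection_step(good_readings, aoa_limit_deg=14.0):
    """Decide the automatic action for one control cycle.

    `good_readings` is the list of AoA values that passed fault checks.
    Returns one of: "pitch_down", "no_action", or "alert_pilot".
    """
    if len(good_readings) < MIN_GOOD_SENSORS:
        # Not enough trustworthy data: freeze automatic inputs,
        # light the warning, hand the airplane to the pilot.
        return "alert_pilot"
    aoa = sum(good_readings) / len(good_readings)
    return "pitch_down" if aoa > aoa_limit_deg else "no_action"

print(stall_protection_step([16.0, 15.5]))  # genuine high AoA -> "pitch_down"
print(stall_protection_step([15.0]))        # one sensor left -> "alert_pilot"
```

The key design choice is that degraded sensor data downgrades the system to an alert rather than an automatic maneuver.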

Another obvious difference between the incidents referred to in the Hacker News thread and the ones that caused crashes is that the former were US carriers and the latter were carriers from countries with looser standards for pilots. That's not to say the manufacturers should not fix the design flaw; they should. But it shows that pilot training and pilot discipline make a big difference when unforeseen events happen.
 
  • #8
Nugatory said:
it is way premature to assume that ET961 is a repeat of LN610 (Lion Air).

Yes, this is a good point. My previous posts are based on what we know of the Lion Air incident, plus what I know of the previous Airbus incidents based on considerable digging into them that I did at the time. That information is already enough, in my view, to know that there is a significant design flaw that needs to be addressed. But that doesn't mean we know enough now to know that ET961 was caused by the same flaw.
 
  • #9
cyboman said:
It seems this is assuming many variables, variables the system is getting erroneously or perhaps not calculating at all, given the scenario as it looks in these recent tragic cases. It appears the pilot, perhaps not knowing how to disable the MCAS or overwhelmed at the time, was literally fighting the computer, so the rate of climb and descent is all over the map (from the data I've seen). The computer pitches down, the pilot pitches up... In this case, there could be a rule that says: under this altitude, the MCAS is disabled. The purpose, as suggested, would be that if there is a failure in any of the sensors, it prevents the MCAS from putting the plane into a dive at such an irrecoverable altitude that it ends up flying into the ground. If the MCAS can only engage above, say, 10,000 feet, there is time for the pilot to actually disable the system and recover manually.
We may have reached a condition where planes are so safe due to automation that additional automation causes problems by lowering pilot skill or attentiveness. It appears to be true - at least for Lion Air 610 - that failure of this system created a situation the pilots were unable to deal with, but should have been. On the other hand, turning off such a system in Air France 447 created a situation the pilots were unable to deal with. So it's pick your poison.
I think given these tragic case studies, there is a precedent to have an all encompassing master override, that in such a traumatic scenario (warning buzzers and warning lights going off everywhere), the pilot can with one switch, take complete control of the airplane.
This assumes that this will help more than it hurts. The reason these systems exist is precisely because they have been deemed to be more helpful than hurtful. And indeed a plane crashed due to shutting off (actually, rebooting) such a system and the pilots subsequently losing control:
https://en.wikipedia.org/wiki/Indonesia_AirAsia_Flight_8501
 
  • #10
PeterDonis said:
There is one. If you look at the Hacker News thread I linked to, and the reports quoted there, you will see that the pilots in two incidents were able to immediately disengage automatic control and take manual control of the airplane when they had to. I don't think that is the issue.

If this was the case, and the override is clearly standardized and established, then why would the data suggest a scenario where the pilot could not overcome the computer's input?
 
  • #11
PeterDonis said:
The key input is the angle of attack. That's why I focused on the angle of attack sensors in my previous post. It seems to me that a simple fix would be to have three sensors and use a majority vote among them to determine when one is giving faulty input, and average them to obtain the angle of attack that drives the rest of the software. In fact it worries me that that is not already how these systems are designed; what that tells me is that an obvious design flaw can make it all the way through multiple levels of design review and signoff by two major aircraft manufacturers plus all the regulators that are supposed to certify the systems, without being spotted. That does not seem like a good state of affairs.

I agree with much of what you state here. But it does not address the relevance of a limit on a system that pitches the aircraft's nose down to correct the AoA. Should this system be limited by a minimum altitude threshold or not?
 
  • #12
russ_watters said:
We may have reached a condition where planes are so safe due to automation that additional automation causes problems by lowering pilot skill or attentiveness. It appears to be true - at least for Lion Air 610 - that failure of this system created a situation the pilots were unable to deal with, but should have been. On the other hand, turning off such a system in Air France 447 created a situation the pilots were unable to deal with. So it's pick your poison.

This is exactly what I'm getting at. Automation is only as good as its oversight by a more competent system, which is the human brain of an experienced pilot with hundreds or thousands of hours of experience. I don't see it as a binary choice of pick your poison. Automation, such as it is, if it exceeds the limits of a pilot's ability to take over with manual control, leaves us at the whims of software that is at this point not as dependable and rigorous as human experience. My argument is that we're simply not even close to there yet. Maybe 90% of the time the autonomous system performs better, but the other 10% of the time it fails catastrophically without warning, because it's a machine with no real inductive or deductive ability. It can only operate within its parameters; exceed those and it crashes. Humans don't work like that. They are improvisational, for lack of a better word.

russ_watters said:
This assumes that this will help more than it hurts. The reason these systems exist is precisely because they have deemed to be more helpful than hurtful. And indeed a plane crashed due to shutting off (actually, rebooting) such a system and the pilots subsequently losing control:
https://en.wikipedia.org/wiki/Indonesia_AirAsia_Flight_8501

That's one example, and it's not conclusive evidence that autonomous systems have superseded human piloting. In reality, our systems that replace humans are still in their infancy, and it seems, as I see it, that we have expected these algorithms to fly before they have really proved, with any rigor, that they can crawl consistently.
 
  • #13
cyboman said:
why would the data suggest a scenario where the pilot could not overcome the computer's input?

Because if an uncommanded pitch down on autopilot happens at low altitude, there might not be time to recover even if the pilot immediately perceives the problem and disengages the autopilot. By the time he's done that and manually pitched the nose back up again, the plane might have gotten too low to avoid a crash.

cyboman said:
Should this system be limited by a minimum altitude threshold or not?

I would say no, because, as I said in a previous post, if the airplane really is stalling, pitching down can still be the right response even at very low altitude, because the only chance you have to avoid a crash from a stall at low altitude is to lower the nose and regain airspeed as rapidly as possible. Not pitching down just means the airplane crashes tail first. So if you're going to have an automatic stall prevention system, the right thing is not to disable it at low altitude; the right thing is to properly detect and ignore faulty sensor input, so the system can take the right action when it knows it has good data.
 
  • #14
cyboman said:
Maybe 90% of the time the autonomous system performs better, but the other 10% of the time it fails catastrophically without warning

Your percentages are way off here. Without in any way minimizing the tragedy of these accidents, or the issues they bring to light, there are roughly 100,000 commercial airline flights a day around the world, which use automatic flight controls for most of the flight (in fact, pretty much all of it except right at takeoff and right at landing), and the current incident rate for commercial aviation is less than 1 incident per 2 million flights, i.e., less than 1 incident per roughly 20 days of flying. So for all but well under a millionth of flights, the automatic flight controls are performing at least as well as human pilots (that's why pilots let the automated systems do practically all the work).
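As a quick back-of-envelope check of those figures:

```python
# Back-of-envelope check of the rates quoted above.
flights_per_day = 100_000          # rough worldwide commercial flights per day
flights_per_incident = 2_000_000   # "less than 1 incident per 2 million flights"

days_per_incident = flights_per_incident / flights_per_day
print(days_per_incident)           # -> 20.0 days of worldwide flying per incident

# Expressed as the fraction of flights with an incident:
print(1 / flights_per_incident)    # -> 5e-07, i.e. well under one in a million
```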

cyboman said:
it seems, as I see it, that we have expected these algorithms to fly before they have really proved, with any rigor, that they can crawl consistently

I think this is too pessimistic about automatic flight controls as a whole. They are not new; commercial airline flights have been done mostly on autopilot for decades now.

Automated systems for things like stall prevention are newer, but really, the logic of the systems, assuming the sensor data is accurate, is pretty simple, and the flight regimes in question are very well understood. What concerns me is that the design, review, and certification process, in two major airplane manufacturers and for multiple countries, has let what seems to me to be an obvious design flaw in how sensor data is used make it all the way through without being spotted.
 
  • #15
PeterDonis said:
Because if an uncommanded pitch down on autopilot happens at low altitude, there might not be time to recover even if the pilot immediately perceives the problem and disengages the autopilot. By the time he's done that and manually pitched the nose back up again, the plane might have gotten too low to avoid a crash.

From what I understand, the MCAS is not supposed to engage in autopilot at all. Further, if there was a clear way, as I've suggested, for the pilot to circumvent the computer from pitching down, it would have occurred and we wouldn't see the erratic climb data. The pilot would correct by pitching the plane up and the MCAS would never kick in again. But it did. That's the entire scenario we see in the data.

PeterDonis said:
I would say no, because, as I said in a previous post, if the airplane really is stalling, pitching down can still be the right response even at very low altitude, because the only chance you have to avoid a crash from a stall at low altitude is to lower the nose and regain airspeed as rapidly as possible. Not pitching down just means the airplane crashes tail first. So if you're going to have an automatic stall prevention system, the right thing is not to disable it at low altitude; the right thing is to properly detect and ignore faulty sensor input, so the system can take the right action when it knows it has good data.

That's given that the system is getting proper data and the pilot is incompetent or leading the plane into a stall. However, if the sensor fails, which is possible even in the best scenario (it's possible all 3 sensors could fail, even if it's very unlikely), the computer executing a pitch down could, and repeatedly did, result in an irrecoverable dive. I simply am not convinced the system should ever execute a pitch-down maneuver at a potentially irrecoverable altitude. I would also suggest there is no experimental data to suggest a plane would be in a stall scenario caused by a pilot at such a low altitude that the computer should intervene.
 
  • #16
PeterDonis said:
Your percentages are way off here. Without in any way minimizing the tragedy of these accidents, or the issues they bring to light, there are roughly 100,000 commercial airline flights a day around the world, which use automatic flight controls for most of the flight (in fact, pretty much all of it except right at takeoff and right at landing), and the current incident rate for commercial aviation is less than 1 incident per 2 million flights, i.e., less than 1 incident per roughly 20 days of flying. So for all but well under a millionth of flights, the automatic flight controls are performing at least as well as human pilots (that's why pilots let the automated systems do practically all the work).

You are conflating two systems. Autopilot at cruise is a much simpler system than stall prevention, which as we see in these cases is happening during takeoff. A system that keeps a plane on course at high altitude is not subject to the number of variables and the rigor that can occur during takeoff or in-flight complications like stalling. To channel a pilot: the computer can't "feel" what's going on. It has no instinct or experience. It can only deduce based on its programming. Introduce something it's never seen before, and watch that system completely halt or do something completely unexpected.

PeterDonis said:
I think this is too pessimistic about automatic flight controls as a whole. They are not new; commercial airline flights have been done mostly on autopilot for decades now.

Automated systems for things like stall prevention are newer, but really, the logic of the systems, assuming the sensor data is accurate, is pretty simple, and the flight regimes in question are very well understood. What concerns me is that the design, review, and certification process, in two major airplane manufacturers and for multiple countries, has let what seems to me to be an obvious design flaw in how sensor data is used make it all the way through without being spotted.

I agree. I think cruise control works on cars because when you touch the brake, the entire system halts and shuts down. Autopilot at cruising altitude on aircraft is likely similar. If it detects an anomaly, it halts and alerts the pilot. When you throw in systems that circumvent pilot input in specific scenarios like stalls, during mission-critical maneuvers like takeoff, then you're pushing past the limits of our systems' abilities, and accidents are bound to happen. I think we're in agreement on that. It's the arrogance of, and trust in, such systems that leads to underestimating the practically infinite number of use cases that could crash them.
 
  • #17
PeterDonis said:
Your percentages are way off here. Without in any way minimizing the tragedy of these accidents, or the issues they bring to light, there are roughly 100,000 commercial airline flights a day around the world, which use automatic flight controls for most of the flight (in fact, pretty much all of it except right at takeoff and right at landing), and the current incident rate for commercial aviation is less than 1 incident per 2 million flights, i.e., less than 1 incident per roughly 20 days of flying. So for all but well under a millionth of flights, the automatic flight controls are performing at least as well as human pilots (that's why pilots let the automated systems do practically all the work).
I don't disagree with the message, but I don't think you demonstrated that in your post. Humans wouldn't crash every flight. Total accident rates alone don't tell us anything about the safety of automated flying systems vs. humans.
 
  • #18
cyboman said:
From what I understand the MCAS is not supposed to engage in autopilot at all.

Yes, but the angle of attack sensor and the automated stall protection system, which is the key control algorithm that the AoA sensor feeds into, are used by both MCAS and autopilot. So the same analysis of possible sensor fault and how to detect it and what to do about it applies to both. The only difference with MCAS is that, as I understand it, instead of just telling the nose to pitch down, MCAS increases nose down trim, which increases the control effort the pilot has to exert to keep a certain pitch attitude. Pilots who aren't prepared for that or don't understand what the control system is doing could be unable to properly correct what's going on fast enough at low altitude.

It's worth mentioning here that the reason Boeing put MCAS on the 737 MAX was to avoid having to retrain pilots to handle the different trim characteristics induced by a change in engines as compared to previous 737 models. The new engines created a pitch up moment as compared to previous 737s, and Boeing didn't want pilots to feel like the plane flew differently from previous 737s since that might trigger retraining requirements, so it added MCAS to automatically adjust the trim down to compensate. If this had been a new aircraft type, that wouldn't have been done; pilots would just have learned about the trim characteristics of the plane when they learned to fly it. One of the issues pilots appear to have with this is that they were not given all of this information about MCAS in the first place; it's only come out as a result of incidents happening.

cyboman said:
if there was a clear way, as I've suggested, for the pilot to circumvent the computer from pitching down, it would have occurred and we wouldn't see the erratic climb data

That's if the pilot understood what was going on and hit the "disengage" button. However, you make a valid point that a "disengage autopilot" button is not the same as a "disengage MCAS" button, or a "stop everything automatic you're doing and let me fly the freaking airplane myself" button. It would be worth looking into what information is available about Boeing's system and what overrides are available to the pilot for the latter two cases. (AFAIK Airbus has the first button--which every autopilot has--and the third, which they call "Direct Law" and which disables even automatic stall prevention. I'm not sure what Boeing's equivalent of Direct Law is or whether it would disable MCAS--if there isn't such a switch, I would agree with you that there ought to be.)

cyboman said:
Autopilot at cruise is a much simpler system than stall prevention, which as we see in these cases is happening during takeoff.

Autopilot handles much more than cruise. It usually gets engaged soon after takeoff, and doesn't get disengaged until shortly before landing, or sometimes not even then, since auto-landing systems have been available and usable for some time now. So autopilot is not just handling the cruise flight regime; it's handling all of them.

Also, anti-stall prevention doesn't just happen during takeoff. It can happen during a wind shear at altitude, for example. Nor do uncommanded pitch down events only happen during takeoff. The Qantas flights I referred to above had multiple uncommanded pitch down events at cruising altitude, because Airbus's control algorithm didn't compare the outputs from all three AoA sensors to detect faulty input, but just used the faulty input of one sensor to trigger the event.

cyboman said:
Auto pilot at cruising altitude on aircraft is likely similar. If it detects an anomaly it halts and alerts the pilot.

Yes, if it detects an anomaly. The issue I'm talking about is a flaw in the way AoA sensor input is handled that fails to use an obvious method of detecting an anomaly with that input. The fix for that issue seems obvious to me, and would not require any reduction in the use of autopilot or automatic stall prevention; it would just require using the obvious method for detecting faulty sensor input when you have three sensors to compare.
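For concreteness, here's a toy sketch (in Python, with an invented tolerance value; obviously not any vendor's actual code) of the kind of three-way cross-check I mean:

```python
def vote_aoa(readings, tolerance=2.0):
    """Cross-check three angle-of-attack readings (degrees).

    Returns (value, ok): the median reading and True if the median
    agrees with at least one neighbor within `tolerance`; otherwise
    (None, False), meaning the automation should disengage and
    alert the crew. The tolerance is illustrative only.
    """
    a, b, c = sorted(readings)
    # The median is trustworthy if it agrees with at least one neighbor.
    if (b - a) <= tolerance or (c - b) <= tolerance:
        return b, True
    return None, False

# One sensor stuck high: the two healthy sensors outvote it.
print(vote_aoa([4.9, 5.1, 22.0]))   # (5.1, True)

# All three disagree: refuse the data, hand control to the pilot.
print(vote_aoa([1.0, 9.0, 20.0]))   # (None, False)
```

With two healthy sensors the faulty one is simply outvoted; only when all three disagree does the system give up on the data entirely, which is the fail-safe case discussed below.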

cyboman said:
Then you're pushing the ability of our systems and their ability.

I actually don't see it that way. As I said, all of the flight regimes involved are well understood, as are the correct actions to take to handle problems. I see this as a flaw in the design, engineering, and certification process that allowed an obvious design flaw to make it through. That's concerning, but it's not a concern about the capabilities of the systems themselves. It's a concern about the culture of those organizations.
 
  • #19
mfb said:
Total accident rates alone don't tell us anything about the safety of automated flying systems vs. humans.

Yes, that's a fair point.
 
  • Like
Likes cyboman
  • #20
First of all, Peter, I have to commend your knowledge and the time you've taken to converse with me. Your expertise far exceeds mine and I greatly appreciate your continued engagement.

PeterDonis said:
It's worth mentioning here that the reason Boeing put MCAS on the 737 MAX was to avoid having to retrain pilots to handle the different trim characteristics induced by a change in engines as compared to previous 737 models. The new engines created a pitch up moment as compared to previous 737s, and Boeing didn't want pilots to feel like the plane flew differently from previous 737s since that might trigger retraining requirements, so it added MCAS to automatically adjust the trim down to compensate. If this had been a new aircraft type, that wouldn't have been done; pilots would just have learned about the trim characteristics of the plane when they learned to fly it. One of the issues pilots appear to have with this is that they were not given all of this information about MCAS in the first place; it's only come out as a result of incidents happening.

Yes, I read some of this. This seems to be the nexus of the potential culpability and negligence on Boeing's part. Cost cutting.

PeterDonis said:
Also, anti-stall prevention doesn't just happen during takeoff. It can happen during a wind shear at altitude, for example. Nor do uncommanded pitch down events only happen during takeoff. The Qantas flights I referred to above had multiple uncommanded pitch down events at cruising altitude, because Airbus's control algorithm didn't compare the outputs from all three AoA sensors to detect faulty input, but just used the faulty input of one sensor to trigger the event.

This is interesting. It seems that if a system like MCAS engages at a higher altitude, there is substantial time and altitude for the pilot to react and correct any error the system may make. The issue, as we see in these cases, is that the system is engaging during mission critical maneuvers like take-off. I wonder too if there needs to be a better interface between the pilot and the system: a sort of intelligent HUD that communicates to the pilot exactly what the plane / computer is executing and how to override it if need be.
PeterDonis said:
Yes, if it detects an anomaly. The issue I'm talking about is a flaw in the way AoA sensor input is handled that fails to use an obvious method of detecting an anomaly with that input. The fix for that issue seems obvious to me, and would not require any reduction in the use of autopilot or automatic stall prevention; it would just require using the obvious method for detecting faulty sensor input when you have three sensors to compare.

That's again presupposing that an event of total sensor failure could not happen.
PeterDonis said:
I actually don't see it that way. As I said, all of the flight regimes involved are well understood, as are the correct actions to take to handle problems. I see this as a flaw in the design, engineering, and certification process that allowed an obvious design flaw to make it through. That's concerning, but it's not a concern about the capabilities of the systems themselves. It's a concern about the culture of those organizations.

Fair enough. I personally am less trusting of our ability to program all potential use cases into an algorithm that is completely bulletproof for mission critical maneuvers.
 
  • #21
PeterDonis said:
That's if the pilot understood what was going on and hit the "disengage" button. However, you make a valid point that a "disengage autopilot" button is not the same as a "disengage MCAS" button, or a "stop everything automatic you're doing and let me fly the freaking airplane myself" button. It would be worth looking into what information is available about Boeing's system and what overrides are available to the pilot for the latter two cases. (AFAIK Airbus has the first button--which every autopilot has--and the third, which they call "Direct Law" and which disables even automatic stall prevention. I'm not sure what Boeing's equivalent of Direct Law is or whether it would disable MCAS--if there isn't such a switch, I would agree with you that there ought to be.)

This seems like a layman's conclusion, but sometimes the obvious escapes the experts when they are burdened with the abundance of complexities of a system. When I replay what happened in these flights in my mind (not being a pilot myself), I imagine the stall and I think: OK, we're stalling, we're falling, let's increase thrust, hell, let's max out the power of these engines, let's get level and get the heck out of this situation. Perhaps, though simplistic, this is what the pilot wished to do, but the systems, and I say plural, system(s), would not allow him to execute such a maneuver. It's evident in our discourse that the flight controls are incredibly complex, and necessarily so, but in some use cases complexity is a burden and the pilot just needs to "fly the plane". I think with all the complexity inherent in these modern aircraft, there should be a standardized "direct law" lever (not a button), a big lever that's typically in the same spot, like an emergency brake in a car, that allows the pilot to enable the "direct law" control you allude to. I think every passenger would feel better knowing that such an override exists.
 
Last edited:
  • #22
https://www.nytimes.com/2019/03/13/world/africa/boeing-ethiopian-airlines-plane-crash.html
At least two pilots who flew Boeing 737 Max 8 planes on routes in the United States had raised concerns in November about the noses of their planes suddenly dipping after engaging autopilot, according to a federal government database of incident reports.

The problems the pilots experienced appeared similar to those preceding the October crash of Lion Air Flight 610 in Indonesia, in which 189 people were killed. The cause of that crash remains under investigation, but it is believed that inaccurate readings fed into the Max 8’s computerized system may have made the plane enter a sudden, automatic descent.

In both of the American cases, the pilots safely resumed their climbs after turning off autopilot. One of the pilots said the descent began two to three seconds after turning on the automated system.

“I reviewed in my mind our automation setup and flight profile but can’t think of any reason the aircraft would pitch nose down so aggressively,” the pilot wrote.

A pilot on a separate flight reported in November a similar descent and hearing the same warnings in the cockpit, and said neither of the pilots on board was able to find an inappropriate setup.

“With the concerns with the MAX 8 nose down stuff, we both thought it appropriate to bring it to your attention,” the pilot said.

The complaints were listed in a public database maintained by NASA and filled with thousands of reports, which pilots file when they encounter errors or issues. The database does not include identifying information on the flights, including airline, the pilot’s name or the location.

Another pilot wrote of having been given insufficient training to fly the Max 8, a new, more fuel-efficient version of Boeing’s best-selling 737.

“I think it is unconscionable that a manufacturer, the F.A.A., and the airlines would have pilots flying an airplane without adequately training, or even providing available resources and sufficient documentation to understand the highly complex systems that differentiate this aircraft from prior models,” the pilot wrote.

The pilot continued: “I am left to wonder: what else don’t I know? The Flight Manual is inadequate and almost criminally insufficient.”
 
  • #23
cyboman said:
When I replay what happened in my mind in these flights, not a pilot myself, I imagine the stall and I think, OK we're stalling, we're falling, let's increase thrust, hell let's max out the power of these engines, let's get level and get the heck out of this situation.

No. No. No. When the plane stalls you must put the nose down to increase airspeed, not get level. As a glider pilot, I'm used to flying at the edge of stall speed for prolonged periods. Adding an engine changes the parameters, but it does not change the basic physics of flight.

It is counter-intuitive at first. If you stall close to the ground, you must immediately push the stick forward to put the nose down. But after training, the counter-intuitive becomes intuitive.
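You can see why from the lift equation: lift scales with the square of airspeed, so below a certain minimum speed the wing simply cannot support the aircraft's weight, and the only way to regain that speed quickly is to put the nose down. A rough illustration (made-up, roughly airliner-scale numbers, standard sea-level air density):

```python
import math

def stall_speed(weight_n, wing_area_m2, cl_max, rho=1.225):
    """Minimum airspeed (m/s) at which the wing can still support
    the aircraft's weight, from L = 0.5 * rho * V^2 * S * CL_max."""
    return math.sqrt(2 * weight_n / (rho * wing_area_m2 * cl_max))

# Illustrative numbers only: ~70 tonne aircraft, 125 m^2 wing,
# CL_max of 2.5 with high-lift devices deployed.
v = stall_speed(weight_n=70000 * 9.81, wing_area_m2=125, cl_max=2.5)
print(round(v, 1))  # about 60 m/s: below this speed the wing cannot
                    # generate enough lift, so altitude must be traded
                    # for airspeed by pitching down.
```

The numbers are invented for illustration; the point is just that the square-law dependence of lift on airspeed is why "pull up and power out" doesn't work once the wing has stalled.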
 
  • Like
Likes krater, Klystron and russ_watters
  • #24
cyboman said:
It seems that if a system like MCAS engages at a higher altitude there is substantial time and altitude to react and correct any error the system may execute by the pilot. The issue as we see in these cases is the system is engaging at mission critical maneuvers like take-off.

Thinking of the system as "engaging" is misleading. The MCAS system is always adjusting the trim in manual flight to compensate for the pitch up moment of the engines. Its purpose is not "spot some particular condition we don't want and adjust to get out of it". Its purpose is "change the way the plane feels to the pilot to make it like previous 737s". If the system were only active part of the time in manual flight mode, the "feel" of the plane would change from one flight regime to another. That would not be good.

cyboman said:
I wonder too if there needs to be a better interface between the pilot and the system. A sort of intelligent HUD of sorts that communicates to the pilot exactly what the plane / computer is executing and how to override if need be.

I agree this would be a good idea. I don't know what information Boeing's cockpit communicates currently, but I know that in several previous Airbus incidents, one of the findings of the investigation was that the cockpit information system was not communicating information well to the pilots.

cyboman said:
That's again presupposing that an event of total sensor failure could not happen.

No, it isn't. If a total sensor failure happens, the odds of all three sensors agreeing with each other to within the required tolerance are too remote to worry about. The system I've described would see a total sensor failure as all three sensors disagreeing, and would stop believing any of them and light up a big red light in the cockpit that says "automatic systems disabled because of bad sensors", and the pilot would take over.
cyboman said:
I am personally less trust worthy in our ability to program all potential use cases into an algorithm that is completely bulletproof for mission critical maneuvers.

It's not a matter of "use cases" not being known; they are known. Remember we're not talking about aerobatics or military flying or recreation; we're talking about commercial airliners flying between known airports on known routes with known flight profiles. That's a much narrower requirement than "be able to deal with anything that could ever be done with an airplane, better than a human does". But the algorithms do require accurate sensor data, so making sure all sensor data is checked for accuracy before acting on it seems like an obvious design requirement.
 
  • #25
cyboman said:
When I replay what happened in my mind in these flights, not a pilot myself, I imagine the stall

Then you're imagining wrong. None of these planes stalled. (See further note below.) That's not the problem.

What you should be imagining is the plane doing its normal thing--climbing out after takeoff, or cruising at altitude--and then the automatic system suddenly doing something different--pitching the nose down, or putting in a lot more nose down trim that makes your control effort much greater to keep the plane's nose up--for no reason because of faulty sensor data. That's the problem.

cyboman said:
Perhaps, though simplistic, this is what the pilot wished to do, but the systems and I say plural, system(S), would not allow him to execute such a maneuver.

No. That's not what happened. See above.

Note: Here I am talking about the incidents I've described where we know what happened, and where MCAS issues were observed, or where uncommanded pitch down events happened in autopilot. We don't know enough yet about the Ethiopian Airlines incident to know what caused it. With Lion Air I think we know more, but I'm not sure MCAS has been definitely fingered as the only root cause--but we know from the flight profile that "the plane stalled and the automatic system prevented the pilot from recovering properly" is not what happened; the plane wasn't stalled at any point.
 
  • #26
@fresh_42, re post #22, these look like the same group of incidents that were being discussed in the Hacker News thread I linked to. The pilots' comments are very good information.
 
  • #27
Some thoughts and questions for the experts. Please correct me if I have anything wrong; I only ever flew gliders.

1) Inherent stability: Most cambered airfoils have a negative pitching moment (a tendency to pitch the nose down). This is usually countered by a combination of a rearward centre of gravity position and downforce on the tail. The problem is that downforce on the tail creates drag, and the wing has to create more lift to compensate, which creates more drag. So to improve efficiency and reduce fuel consumption, you ideally want as little downforce on the tail as possible. You can reduce it by moving the centre of gravity further back, but that can create stability issues. These stability issues can be fixed using computers.

Q. How far down this path have we gone in passenger aircraft?

2) "The balance of power": On something like a light aircraft, pitch trim is effected by a small trim tab, typically much smaller than the elevator. As I understand it, most large passenger aircraft have both elevators and an "all-moving tailplane" or AMT. Typically the pilot controls pitch via the elevators, while trim and flight systems control pitch via the AMT. I understand the AMT has, and needs to have, quite a large range of movement to cope with different flight configurations (with/without flaps extended, for example). In some aircraft, if the pilot holds in up elevator, the flight systems interpret this as an out-of-trim condition and move the tail to reduce the stick forces the pilot has to apply. I believe this may have been a factor in AF447? In the Lion Air accident, MCAS appears to have applied repeated nose down inputs (trim runaway), presumably also via the powerful AMT.

Q. Is anyone else a little uneasy that electronic flight systems have a more powerful control surface (AMT) under their command than the human pilot (Elevator)? I'm sure aircraft designers will say there are systems in place to ensure pilots have full control at all times - but it just feels wrong.
 
  • #28
cyboman said:
[snip] Automation such that it is, if it exceeds the limits of a pilots ability to take over with manual control, leaves us at the whims of software that is at this point not as dependable and rigorous as human experience.[snip]
Crew factors have been a focus for research since the dawn of aviation, indeed one of the founding reasons for NACA, now NASA. Airline flight crews have shrunk over time. For instance Flight Engineers have been replaced by automation and advanced engine designs. Navigators, almost always trained pilots, have been mostly if not entirely replaced on commercial flights by enhanced navigation and communication. Relief pilots and Training pilots ("third seaters") are specific to regulations and airlines but have proved vital in alleviating problems with advice, also available if a pilot becomes incapacitated.

No matter what the cause of the accident, consider the composition of the Ethiopian Airlines crew. Airliner.net reports a 29 year old captain supervising a first officer with ~200 hours flight experience. While a common on-the-job training (OJT) scenario, what happens if the captain is unable to perform?

Managing crew costs is a major economic factor for successful airlines. While analogies with ground transportation are common, even long-haul trucks only have one driver. Transportation unions fight for reasonable hours and relief operators. Autonomous ground vehicles, soon followed by surface vessels, are rapidly becoming a reality. Transport crew sizes have shrunk while information to pilots and drivers has increased. As airliner automation improves, two trained pilots may seem economically redundant. Triple redundancy, an important engineering principle, may not have an economical counterpart.
 
  • #30
anorlunda said:
No. No. No. When the plane stalls you must put the nose down to increase airspeed, not get level. As a glider pilot, I'm used to flying at the edge of stall speed for prolonged periods. Adding an engine changes the parameters, but it does not change the basic physics of flight.

It is counter-intuitive at first. If you stall close to the ground, you must immediately push the stick forward to put the nose down. But after training, the counter-intuitive becomes intuitive.

Ahh, right. That makes sense: you can increase airspeed by pitching down, and thereby increase lift as well.
 
  • #31
PeterDonis said:
Thinking of the system as "engaging" is misleading. The MCAS system is always adjusting the trim in manual flight to compensate for the pitch up moment of the engines. Its purpose is not "spot some particular condition we don't want and adjust to get out of it". Its purpose is "change the way the plane feels to the pilot to make it like previous 737s". If the system were only active part of the time in manual flight mode, the "feel" of the plane would change from one flight regime to another. That would not be good.

Right, to clarify: when I say "engage", what I mean is when the MCAS activates erroneously, such that, as we see in the Lion Air case, it repeatedly tries to pitch down while the pilot fights it trying to climb. My point is that when this system is in error, say during take-off, there very likely is not time to bypass it before the system has pitched the aircraft into an irrecoverable dive. It sounds to me that the MCAS is basically a subsystem of the fly-by-wire system, since as you said it is always active under manual control. If that's the case, then it looks like we're moving too far from "direct law", with subsystems that make the plane "easier" to fly or more like previous aircraft. It seems that was a very expensive mistake, and training for the different flight characteristics would have been a much better direction, regardless of cost or convenience. I also find it hard to believe that with all the reports we see (including those that didn't result in crashes) it's always a faulty sensor causing MCAS to fail. It could be in the code itself. I think the code should take into account pilot input and altitude. If it detects that the pilot is constantly pitching up and the altitude is falling to a dangerously low level, it should automatically shut down without the need for the pilot to deliberately bypass it.
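As a rough sketch of the kind of rule I'm imagining (pure illustration in Python, with invented thresholds; nothing like the real implementation):

```python
def should_inhibit_mcas(pilot_nose_up_inputs, altitude_ft, descending,
                        max_fights=3, floor_ft=1000):
    """Hypothetical safeguard: if the pilot has repeatedly trimmed
    nose-up against the system while the aircraft is descending
    below a safety floor, stop commanding nose-down trim without
    requiring a deliberate pilot action. All thresholds invented."""
    fighting = pilot_nose_up_inputs >= max_fights
    dangerous = descending and altitude_ft < floor_ft
    return fighting and dangerous

# Pilot fighting the trim in a descent at low altitude: stand down.
print(should_inhibit_mcas(5, 800, True))    # True

# Normal climb-out, no sustained opposition: keep operating normally.
print(should_inhibit_mcas(1, 8000, False))  # False
```

The point is just that the pilot repeatedly fighting the trim, combined with low altitude and a descent, is itself a signal the software could use, rather than relying on the pilot to find and flip the right cutout switches in time.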

PeterDonis said:
I agree this would be a good idea. I don't know what information Boeing's cockpit communicates currently, but I know that in several previous Airbus incidents, one of the findings of the investigation was that the cockpit information system was not communicating information well to the pilots.

It almost sounds like a good fit for an AI: a CGI human face that reports to the pilot what the computer is doing at the time, almost like another member of the flight crew. I was watching some coverage, and they said that in one of the reports of the problem the computer was saying "Don't Sink, Don't Sink". If that's true, that seems like pretty ridiculous verbal feedback to a pilot. But maybe I'm wrong and that's following best practices? I mean, I would expect that sort of verbiage in a video game sim. An actual airliner I would expect to say something more specific like "Stall risk" or "MCAS pitching down, push to deactivate", etc.

PeterDonis said:
No, it isn't. If a total sensor failure happens, the odds of all three sensors agreeing with each other to within the required tolerance are too remote to worry about. The system I've described would see a total sensor failure as all three sensors disagreeing, and would stop believing any of them and light up a big red light in the cockpit that says "automatic systems disabled because of bad sensors", and the pilot would take over.

Right, I'm still not convinced that with all the reports it's always a sensor failure. It could be in the code itself and how it's interpreting that input. Also, "too remote to worry about" sounds like something an engineer might regret stating, but I get your point.

PeterDonis said:
It's not a matter of "use cases" not being known; they are known. Remember we're not talking about aerobatics or military flying or recreation; we're talking about commercial airliners flying between known airports on known routes with known flight profiles. That's a much narrower requirement than "be able to deal with anything that could ever be done with an airplane, better than a human does". But the algorithms do require accurate sensor data, so making sure all sensor data is checked for accuracy before acting on it seems like an obvious design requirement.

I see your point. I do still think you're underestimating the variables and complexity of landing and take-off, weather, unsecured cargo loads, and possible mechanical failures, all with aerodynamically unstable vehicles that need MCAS-like systems.
 
Last edited:
  • #32
PeterDonis said:
Note: Here I am talking about the incidents I've described where we know what happened, and where MCAS issues were observed, or where uncommanded pitch down events happened in autopilot. We don't know enough yet about the Ethiopian Airlines incident to know what caused it. With Lion Air I think we know more, but I'm not sure MCAS has been definitely fingered as the only root cause--but we know from the flight profile that "the plane stalled and the automatic system prevented the pilot from recovering properly" is not what happened; the plane wasn't stalled at any point.

Right, I was getting confused by how the MCAS is trying to prevent a stall, when what's actually happening is that there is no risk of stall; instead there's basically a battle between the MCAS input and the pilot input, where unfortunately the MCAS prevailed.
 
  • #33
CWatters said:
Q. Is anyone else a little uneasy that electronic flight systems have a more powerful control surface (AMT) under their command than the human pilot (Elevator)? I'm sure aircraft designers will say there are systems in place to ensure pilots have full control at all times - but it just feels wrong.

To confirm, are you saying there is a mechanical device, the AMT, that controls pitch and over which the pilot has no direct control?
 
  • #34
Klystron said:
Crew factors have been a focus for research since the dawn of aviation, indeed one of the founding reasons for NACA, now NASA. Airline flight crews have shrunk over time. For instance Flight Engineers have been replaced by automation and advanced engine designs. Navigators, almost always trained pilots, have been mostly if not entirely replaced on commercial flights by enhanced navigation and communication. Relief pilots and Training pilots ("third seaters") are specific to regulations and airlines but have proved vital in alleviating problems with advice, also available if a pilot becomes incapacitated.

No matter what the cause of the accident, consider the composition of the Ethiopian Airlines crew. Airliner.net reports a 29 year old captain supervising a first officer with ~200 hours flight experience. While a common on-the-job training (OJT) scenario, what happens if the captain is unable to perform?

Managing crew costs is a major economic factor for successful airlines. While analogies with ground transportation are common, even long-haul trucks only have one driver. Transportation unions fight for reasonable hours and relief operators. Autonomous ground vehicles, soon followed by surface vessels, are rapidly becoming a reality. Transport crew sizes have shrunk while information to pilots and drivers has increased. As airliner automation improves, two trained pilots may seem economically redundant. Triple redundancy, an important engineering principle, may not have an economical counterpart.

Very interesting. I for one would pay more for a ticket from an airline that embraces human redundancy and a little over-engineering in the name of robustness and safety.
 
  • Like
Likes Klystron
  • #35
In this thread alone there seem to be different opinions about failures of AoA sensors in the Lion Air crash.

Has anybody seen claims of sensor failure on Lion Air that are based on the FDR data rather than speculation by journalists?
 
