Boeing 737 Max MCAS System

Hi,

I have a question regarding the tragic crash of the latest 737 Max.

Is it not a huge error in the flight laws and the MCAS software to execute a nose down maneuver at any altitude? Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?

I'm also confused as to how this MCAS system and other autopilot software is implemented. Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control? This seems mission critical to me.

-cybo
 
I'm also confused as to how this MCAS system and other autopilot software is implemented.
Actually, the MCAS itself is not an autopilot; it's a system to adjust the trim to compensate for a change in the engines from previous 737 models, which, as I understand it (and pilot reports appear to bear this out, see the link below), is supposed to be active only when the pilot is manually flying the plane.

However, from some of the information I've seen online [1], it appears that the 737 MAX has also had uncommanded pitch down events while in autopilot. That indicates a different problem from the MCAS--I suspect it has to do with a faulty angle of attack sensor not being correctly detected by the automated system, so the faulty input is used to trigger a pitch down instead of the plane being kicked out of autopilot and the pilot being notified. Airbus aircraft are known to have a similar problem which has caused several incidents.

[1] https://news.ycombinator.com/item?id=19373707

Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control?
Yes, and as you'll see from the reports in the Hacker News thread I linked to above, the pilots who submitted those reports (who were US carrier pilots and appear to have been much more careful about pre-briefing possible issues and acting quickly when an issue happened) did in fact immediately disengage autopilot and bring the plane back to correct pitch attitude when an uncommanded pitch down happened. But again, that had nothing to do with MCAS; in fact, one of the pilots noted that he had engaged autopilot on that flight earlier than he normally would have in order to remove a possible MCAS threat during a manual climb.

Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?
From an automatic pilot point of view, not necessarily. If the plane is about to stall because it's pitched up too much, it's going to fall out of the sky if nothing is done; so even at a pretty low altitude, a pitch down can be the correct maneuver in order to increase airspeed and get back into a controlled flight regime. Yes, you might skim pretty close to the ground with wings level, but that's better than falling into the ground tail first with the nose pitched way up in a stall.

The problem, as I see it, is that the automated systems are not properly programmed to detect and respond to faulty angle of attack input. The automated systems believe, based on faulty angle of attack (AoA) sensor input, that the nose is pitched way up and a stall is imminent, when in fact either the wings are level, or the airplane is in a controlled climb and is nowhere near a stall. In the case of the Airbus incidents I mentioned above, there were in fact three AoA sensors, one of which went bad--and instead of comparing its output to the other two sensors, spotting the bad sensor, and taking it out of the loop, the automatic system executed uncommanded pitch down based on the faulty input. That seems like an obvious design error to me.

In the case of the 737-MAX, it's not entirely clear what role AoA sensors played, since investigation is still ongoing, but I've seen at least one online article suspecting that there are only two AoA sensors instead of three in this system, which makes fault detection a lot harder. The obvious solution to me would be to add a third sensor (and properly compare the sensors to spot a faulty one, as above).
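To make this concrete, here is a toy sketch (my own illustration, not Boeing's or Airbus's actual logic; the 2-degree tolerance and the function names are invented) of why two AoA sensors only let you detect a disagreement, while three let you vote the faulty one out:

[CODE=python]
# Toy illustration only -- not actual avionics code. Tolerance is invented.

def aoa_with_two_sensors(a, b, tol=2.0):
    """Two sensors: a disagreement is detectable, but there is no way to
    tell which sensor is lying, so no trustworthy value remains."""
    if abs(a - b) > tol:
        return None  # fault detected, source unknown
    return (a + b) / 2.0

def aoa_with_three_sensors(a, b, c, tol=2.0):
    """Three sensors: the one that disagrees with the other two (which
    agree with each other) can be voted out and the rest averaged."""
    readings = [a, b, c]
    for i, r in enumerate(readings):
        others = readings[:i] + readings[i + 1:]
        if abs(others[0] - others[1]) <= tol and all(abs(r - o) > tol for o in others):
            return sum(others) / 2.0  # outlier r is taken out of the loop
    if max(readings) - min(readings) <= tol:
        return sum(readings) / 3.0  # all three agree
    return None  # more than one sensor suspect: no trustworthy value

# One sensor failed high, as suspected in these incidents:
print(aoa_with_two_sensors(5.1, 24.0))         # None -- fault seen, can't isolate it
print(aoa_with_three_sensors(5.1, 24.0, 5.3))  # 5.2 -- bad sensor voted out
[/CODE]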
 

russ_watters

Mentor
Hi,

I have a question regarding the tragic crash of the latest 737 Max.

Is it not a huge error in the flight laws and the MCAS software to execute a nose down maneuver at any altitude? Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?
[edit]
So, this is an issue I've been putting some thought into, and I've moved this post of mine over from the Ethiopia Air thread; it should answer your first two questions, though it may prompt follow-ups: [/edit]
[re-located content]
My understanding of the 737 MAX's Maneuvering Characteristics Augmentation System (MCAS) is that it adjusts the "feel" of the airplane so that elevator backpressure is required to pitch up and maintain pitch at high angle of attack (AoA) and throttle. That would be normal for most airplanes, but the 737 MAX's large engines provide a pitch-up torque that reverses this behavior at high AoA and throttle; pilots would have to push forward to prevent the nose from continuing to rise. MCAS adjusts the trim to counteract this "feel". It's an atypical system.
https://boeing.mediaroom.com/news-releases-statements?item=130402

It appears to me that the media is misreporting this as the MCAS reacting to a too-high angle of attack by automatically lowering the nose to prevent a stall. That's a common (universal on airliners?) but notably separate behavior. Strictly speaking - if I understand it correctly - MCAS does not actually override pilot action, but only changes the amount and direction of force required for the pilot to move/position the elevator (a toy sketch of this reading follows below).
[/re-located content]
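For what it's worth, here is a toy sketch of this reading of MCAS. It is emphatically not Boeing's implementation; the trigger threshold, the trim increment, and all the names are invented for illustration:

[CODE=python]
# Toy sketch of the behavior described above -- NOT Boeing's implementation.
# The trigger threshold, trim increment, and names are all invented.

HIGH_AOA_DEG = 14.0    # hypothetical "high angle of attack" trigger
TRIM_INCREMENT = -0.6  # hypothetical nose-down stabilizer trim step

def mcas_like_trim_step(aoa_deg, high_throttle, autopilot_engaged):
    """Return a nose-down trim increment, or 0.0 for no action.

    Per the description above: active only while the pilot is flying
    manually, at high AoA and throttle, and it acts on the trim (the
    stick force the pilot feels) rather than overriding the elevator."""
    if autopilot_engaged:
        return 0.0
    if aoa_deg > HIGH_AOA_DEG and high_throttle:
        return TRIM_INCREMENT
    return 0.0
[/CODE]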

I'm also confused as to how this MCAS system and other autopilot software is implemented. Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control? This seems mission critical to me.
Kind of. Flight control systems and autopilots are very complex and multi-layered. There are multiple modes of each. Here's a good primer:
https://www.skybrary.aero/index.php/Flight_Control_Laws

Perhaps the bottom line is this: Airliners don't get flown via direct control unless there are major failures.
 

russ_watters

Mentor
From an automatic pilot point of view, not necessarily. If the plane is about to stall because it's pitched up too much, it's going to fall out of the sky if nothing is done; so even at a pretty low altitude, a pitch down can be the correct maneuver in order to increase airspeed and get back into a controlled flight regime. Yes, you might skim pretty close to the ground with wings level, but that's better than falling into the ground tail first with the nose pitched way up in a stall.

The problem, as I see it, is that the automated systems are not properly programmed to detect and respond to faulty angle of attack input.
Example:
Air France 447 crashed because the flying pilot held full back-pressure on the control stick and stalled the plane from cruise until it hit the ocean about 4 minutes later. The flight control system had a stall-prevention feature, but it was receiving faulty airspeed indications, so it disconnected that feature. It's difficult to know what the pilot was thinking, but it is possible he didn't realize the plane could be stalled.
 

Nugatory

Mentor
Is it not a huge error in the flight laws and the MCAS software to execute a nose down maneuver at any altitude?
If by "nose down" you mean lowering the nose when the angle of attack is too high, then the answer is "of course not" - the whole point of the system is to pitch the nose down to reduce the angle of attack. The Lion Air crash involved a bad angle of attack sensor telling the software that the aircraft was in danger of stalling because the angle of attack was too high when it wasn't.
Should the system not have a rule to prohibit such a maneuver below a minimum altitude threshold?
The MCAS system is disabled whenever the flaps are deployed.
I'm also confused as to how this MCAS system and other autopilot software is implemented. Is it not the case that all planes with such computer automation functionality are equipped with a clear standardized master switch to turn off ALL computer control? This seems mission critical to me.
That would be... problematic... on a fly-by-wire aircraft.

This might be a good time to mention that it is way premature to assume that ET302 is a repeat of JT610 (Lion Air). Wait for the investigation to complete. If you can't stand to wait for the report (a long wait - these things take a while), you'll find much informed discussion on the anonymous professional pilots' forums.

[Post edited to properly reflect the distinction between pitch attitude and angle of attack. They are not generally the same, and it is possible for the AoA to be dangerously high even when the nose is level or pitched down]
 
Thanks for all your replies. This is definitely a much more technical analysis than what's in the media and is what I was looking for.

If the plane is about to stall because it's pitched up too much, it's going to fall out of the sky if nothing is done; so even at a pretty low altitude, a pitch down can be the correct maneuver in order to increase airspeed and get back into a controlled flight regime.
It seems this is assuming many variables, variables the system is getting erroneously or perhaps not calculating at all. Given the scenario as it looks in these recent tragic cases, it appears the pilot, perhaps not knowing how to disable the MCAS or overwhelmed at the time, was literally fighting the computer, so the rate of climb and descent is all over the map (from the data I've seen). The computer pitches down, the pilot pitches up... Now suppose a rule existed that said: below this altitude, the MCAS is disabled. The purpose, as suggested, would be that if any of the sensors fails, the MCAS is prevented from putting the plane into a dive at such a low altitude that it ends up flying irrecoverably into the ground. If the MCAS only engages above, say, 10,000 feet, there is time for the pilot to actually disable the system and recover manually. However, as I'll suggest in a moment, perhaps the existence of such an MCAS is intrinsically problematic.

Perhaps the bottom line is this: Airliners don't get flown via direct control unless there are major failures.
I think given these tragic case studies, there is a strong case for an all-encompassing master override, so that in such a traumatic scenario (warning buzzers and warning lights going off everywhere), the pilot can, with one switch, take complete control of the airplane. This seems a given; otherwise we might as well call these aircraft drones and get rid of the pilot altogether, in which case we'd have no one to blame but the software. In other words, if the pilot cannot take complete control of the fly-by-wire system with a simple pull of a lever, then the pilot is not really in control of the vehicle. If the MCAS is in fact part of the fly-by-wire system itself, then we're in a scenario where the airplane's design is so unstable that it needs an additional software "layer" to control it due to its atypical aerodynamics. I would argue this is acceptable perhaps in military stealth aircraft, but not in commercial airliners. Perhaps at some point it will be, but I think it's fair to say we're not there yet, no matter how expensive kerosene is.
 
It seems this is assuming many variables, variables the system is getting erroneously or perhaps not calculating at all.
The key input is the angle of attack. That's why I focused on the angle of attack sensors in my previous post. It seems to me that a simple fix would be to have three sensors and use a majority vote among them to determine when one is giving faulty input, and average them to obtain the angle of attack that drives the rest of the software. In fact it worries me that that is not already how these systems are designed; what that tells me is that an obvious design flaw can make it all the way through multiple levels of design review and signoff by two major aircraft manufacturers plus all the regulators that are supposed to certify the systems, without being spotted. That does not seem like a good state of affairs.
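As an aside, a common pattern in redundant systems, even simpler than vote-then-average, is mid-value selection: with three inputs, take the median, which no single runaway sensor can drag around. A toy sketch, not certified avionics code:

[CODE=python]
# Mid-value selection: the median of three redundant readings is immune
# to any single faulty sensor, however wild its output. Toy sketch only.

def mid_value(a, b, c):
    return sorted((a, b, c))[1]  # the middle of the three readings

# A sensor stuck at a wild value cannot pull the selected value with it:
print(mid_value(5.1, 74.5, 5.3))  # 5.3 -- the runaway 74.5 is ignored
[/CODE]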

an all-encompassing master override
There is one. If you look at the Hacker News thread I linked to, and the reports quoted there, you will see that the pilots in two incidents were able to immediately disengage automatic control and take manual control of the airplane when they had to. I don't think that is the issue.

The issue I see is that, because of the faulty design of the automatic systems, an uncommanded event can put the plane in jeopardy (for example with an uncommanded pitch down at low altitude when the plane is not actually close to a stall) without the pilot having enough time to react even if all he has to do is flip one switch and take control. What should happen is that faulty input should be detected and ignored, instead of being allowed to trigger automated actions. If it gets to the point where enough sensors have to be ignored that the automatic system can't fly the airplane, a big red light should go on in the cockpit and no control changes should be made until the pilot takes manual control.
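Sketching the fail-safe behavior I am arguing for (again, the names and tolerance are invented, and this is not any manufacturer's code): readings that fail a cross-check get ignored, and if too few trustworthy readings remain, the system commands nothing and alerts the pilot instead:

[CODE=python]
# Toy sketch of the argued-for fail-safe -- invented names and tolerance.

def trusted_aoa(readings, tol=2.0):
    """Return an AoA value to act on, or None if the pilot must take over."""
    # Keep readings that agree (within tol) with at least one other reading
    # (the count includes the reading itself, hence the threshold of 2).
    good = [r for r in readings if sum(abs(r - o) <= tol for o in readings) >= 2]
    if len(good) >= 2:
        return sum(good) / len(good)
    return None  # too much disagreement: annunciate, command nothing

aoa = trusted_aoa([5.1, 24.0, 5.3])
if aoa is None:
    print("AOA DISAGREE -- automatic pitch commands inhibited")
else:
    print("using AoA = %.1f deg" % aoa)  # prints: using AoA = 5.2 deg
[/CODE]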

Another obvious difference between the incidents referred to in the Hacker News thread and the ones that caused crashes is that the former were US carriers and the latter were carriers from countries with looser standards for pilots. That's not to say the manufacturers should not fix the design flaw; they should. But it shows that pilot training and pilot discipline make a big difference when unforeseen events happen.
 
it is way premature to assume that ET302 is a repeat of JT610 (Lion Air).
Yes, this is a good point. My previous posts are based on what we know of the Lion Air incident, plus what I know of the previous Airbus incidents based on considerable digging into them that I did at the time. That information is already enough, in my view, to know that there is a significant design flaw that needs to be addressed. But that doesn't mean we know enough now to know that ET302 was caused by the same flaw.
 

russ_watters

Mentor
It seems this is assuming many variables, variables the system is getting erroneously or perhaps not calculating at all. Given the scenario as it looks in these recent tragic cases, it appears the pilot, perhaps not knowing how to disable the MCAS or overwhelmed at the time, was literally fighting the computer, so the rate of climb and descent is all over the map (from the data I've seen). The computer pitches down, the pilot pitches up... Now suppose a rule existed that said: below this altitude, the MCAS is disabled. The purpose, as suggested, would be that if any of the sensors fails, the MCAS is prevented from putting the plane into a dive at such a low altitude that it ends up flying irrecoverably into the ground. If the MCAS only engages above, say, 10,000 feet, there is time for the pilot to actually disable the system and recover manually.
We may have reached a condition where planes are so safe due to automation that additional automation causes problems by lowering pilot skill or attentiveness. It appears to be true - at least for Lion Air 610 - that failure of this system created a situation the pilots were unable to deal with, but should have been able to. On the other hand, turning off such a system in Air France 447 created a situation the pilots were unable to deal with. So it's pick your poison.
I think given these tragic case studies, there is a strong case for an all-encompassing master override, so that in such a traumatic scenario (warning buzzers and warning lights going off everywhere), the pilot can, with one switch, take complete control of the airplane.
This assumes that this will help more than it hurts. The reason these systems exist is precisely because they have been deemed to be more helpful than hurtful. And indeed a plane crashed due to shutting off (actually, rebooting) such a system and the pilots subsequently losing control:
https://en.wikipedia.org/wiki/Indonesia_AirAsia_Flight_8501
 
There is one. If you look at the Hacker News thread I linked to, and the reports quoted there, you will see that the pilots in two incidents were able to immediately disengage automatic control and take manual control of the airplane when they had to. I don't think that is the issue.
If this was the case, and the override is clearly standardized and established, then why would the data suggest a scenario where the pilot could not overcome the computer's input?
 
The key input is the angle of attack. That's why I focused on the angle of attack sensors in my previous post. It seems to me that a simple fix would be to have three sensors and use a majority vote among them to determine when one is giving faulty input, and average them to obtain the angle of attack that drives the rest of the software. In fact it worries me that that is not already how these systems are designed; what that tells me is that an obvious design flaw can make it all the way through multiple levels of design review and signoff by two major aircraft manufacturers plus all the regulators that are supposed to certify the systems, without being spotted. That does not seem like a good state of affairs.
I agree with much of what you state here. But it does not address the relevance of a limit on a system that pitches the aircraft's nose down to correct the AoA. Should this system be limited by a minimum altitude threshold or not?
 
We may have reached a condition where planes are so safe due to automation that additional automation causes problems by lowering pilot skill or attentiveness. It appears to be true - at least for Lion Air 610 - that failure of this system created a situation the pilots were unable to deal with, but should have been able to. On the other hand, turning off such a system in Air France 447 created a situation the pilots were unable to deal with. So it's pick your poison.
This is exactly what I'm getting at. Automation is only as good as the more competent system overseeing it, which is the human brain of an experienced pilot with hundreds or thousands of hours of experience. I don't see it as a binary choice of pick your poison. Automation, such as it is, if it exceeds the limits of a pilot's ability to take over with manual control, leaves us at the whims of software that is at this point not as dependable and rigorous as human experience. My argument is that we're simply not even close to there yet. Maybe 90% of the time the autonomous system performs better, but the other 10% of the time it fails catastrophically without warning, because it's a machine with no real inductive or deductive ability. It can only operate within its parameters; exceed those and it crashes. Humans don't work like that. They are improvisational, for lack of a better word.

This assumes that this will help more than it hurts. The reason these systems exist is precisely because they have been deemed to be more helpful than hurtful. And indeed a plane crashed due to shutting off (actually, rebooting) such a system and the pilots subsequently losing control:
https://en.wikipedia.org/wiki/Indonesia_AirAsia_Flight_8501
That's one example, and it's not conclusive evidence that autonomous systems have superseded human piloting. In reality, the systems we build to replace humans are still in their infancy, and it seems to me we have expected these algorithms to fly before they have proved, with any rigor, that they can crawl consistently.
 
why would the data suggest a scenario where the pilot could not overcome the computer's input?
Because if an uncommanded pitch down on autopilot happens at low altitude, there might not be time to recover even if the pilot immediately perceives the problem and disengages the autopilot. By the time he's done that and manually pitched the nose back up again, the plane might have gotten too low to avoid a crash.

Should this system be limited by a minimum altitude threshold or not?
I would say no, because, as I said in a previous post, if the airplane really is stalling, pitching down can still be the right response even at very low altitude, because the only chance you have to avoid a crash from a stall at low altitude is to lower the nose and regain airspeed as rapidly as possible. Not pitching down just means the airplane crashes tail first. So if you're going to have an automatic stall prevention system, the right thing is not to disable it at low altitude; the right thing is to properly detect and ignore faulty sensor input, so the system can take the right action when it knows it has good data.
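To put the contrast in code form (a toy sketch; the thresholds and names are invented): the altitude-gated rule proposed above refuses to act exactly when a real low-altitude stall is most deadly, while the validity-gated rule acts at any altitude but only on data that has passed a check:

[CODE=python]
# Toy contrast of the two policies under discussion. Thresholds invented.

STALL_AOA_DEG = 15.0  # hypothetical stall-warning angle of attack

def altitude_gated(aoa_deg, altitude_ft, min_alt_ft=10000):
    """The proposed rule: do nothing below min_alt_ft, even if the
    plane really is stalling there."""
    if altitude_ft < min_alt_ft:
        return "no action"
    return "pitch down" if aoa_deg > STALL_AOA_DEG else "no action"

def validity_gated(aoa_deg, aoa_data_valid):
    """The alternative argued for: act at any altitude, but only on
    sensor data that has passed a validity check."""
    if not aoa_data_valid:
        return "alert pilot, no automatic action"
    return "pitch down" if aoa_deg > STALL_AOA_DEG else "no action"
[/CODE]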
 
Maybe 90% of the time the autonomous system performs better, but the other 10% of the time it fails catastrophically without warning
Your percentages are way off here. Without in any way minimizing the tragedy of these accidents, or the issues they bring to light, there are roughly 100,000 commercial airline flights a day around the world, which use automatic flight controls for most of the flight (in fact, pretty much all of it except right at takeoff and right at landing), and the current incident rate for commercial aviation is less than 1 incident per 2 million flights, i.e., less than 1 incident per roughly 20 days of flying. So in all but well under a millionth of cases, the automatic flight controls perform better than human pilots (that's why pilots let the automated systems do practically all the work).

it seems to me we have expected these algorithms to fly before they have proved, with any rigor, that they can crawl consistently
I think this is too pessimistic about automatic flight controls as a whole. They are not new; commercial airline flights have been done mostly on autopilot for decades now.

Automated systems for things like stall prevention are newer, but really, the logic of the systems, assuming the sensor data is accurate, is pretty simple, and the flight regimes in question are very well understood. What concerns me is that the design, review, and certification process, in two major airplane manufacturers and for multiple countries, has let what seems to me to be an obvious design flaw in how sensor data is used make it all the way through without being spotted.
 
Because if an uncommanded pitch down on autopilot happens at low altitude, there might not be time to recover even if the pilot immediately perceives the problem and disengages the autopilot. By the time he's done that and manually pitched the nose back up again, the plane might have gotten too low to avoid a crash.
From what I understand, the MCAS is not supposed to engage under autopilot at all. Further, if there was a clear way, as I've suggested, for the pilot to prevent the computer from pitching down, it would have been used and we wouldn't see the erratic climb data. The pilot would correct by pitching the plane up and the MCAS would never kick in again. But it did. That's the entire scenario we see in the data.

I would say no, because, as I said in a previous post, if the airplane really is stalling, pitching down can still be the right response even at very low altitude, because the only chance you have to avoid a crash from a stall at low altitude is to lower the nose and regain airspeed as rapidly as possible. Not pitching down just means the airplane crashes tail first. So if you're going to have an automatic stall prevention system, the right thing is not to disable it at low altitude; the right thing is to properly detect and ignore faulty sensor input, so the system can take the right action when it knows it has good data.
That's given that the system is getting proper data and the pilot is incompetent or leading the plane into a stall. However, if a sensor fails, which is possible in even the best scenario (it's possible all 3 sensors could fail, even if it's very unlikely), the computer executing a pitch down could, and repeatedly did, result in an irrecoverable dive. I simply am not convinced the system should ever execute a pitch-down maneuver at a potentially irrecoverable altitude. I would also suggest there is no experimental data to suggest a plane would end up in a pilot-caused stall at such low altitude that the computer should intervene.
 
Your percentages are way off here. Without in any way minimizing the tragedy of these accidents, or the issues they bring to light, there are roughly 100,000 commercial airline flights a day around the world, which use automatic flight controls for most of the flight (in fact, pretty much all of it except right at takeoff and right at landing), and the current incident rate for commercial aviation is less than 1 incident per 2 million flights, i.e., less than 1 incident per roughly 20 days of flying. So in all but well under a millionth of cases, the automatic flight controls perform better than human pilots (that's why pilots let the automated systems do practically all the work).
You are conflating two systems. Autopilot at cruise is a much simpler system than stall prevention, which, as we see in these cases, operates during takeoff. A system that keeps a plane on course at high altitude is not subject to the number of variables, or the demands, that can arise during takeoff or during in-flight complications like stalling. To channel a pilot: the computer can't "feel" what's going on. It has no instinct or experience. It can only deduce based on its programming. Introduce something it's never seen before, and watch that system completely halt or do something completely unexpected.

I think this is too pessimistic about automatic flight controls as a whole. They are not new; commercial airline flights have been done mostly on autopilot for decades now.

Automated systems for things like stall prevention are newer, but really, the logic of the systems, assuming the sensor data is accurate, is pretty simple, and the flight regimes in question are very well understood. What concerns me is that the design, review, and certification process, in two major airplane manufacturers and for multiple countries, has let what seems to me to be an obvious design flaw in how sensor data is used make it all the way through without being spotted.
I agree. I think cruise control works on cars because when you touch the brake, the entire system halts and shuts down. Autopilot at cruising altitude on aircraft is likely similar: if it detects an anomaly, it halts and alerts the pilot. When you throw in systems that circumvent pilot input in specific scenarios like stalls, during mission-critical maneuvers like takeoff, then you're pushing the limits of what our systems can do, and accidents are bound to happen. I think we're in agreement on that. It's the arrogance of, and trust in, such systems that leads to underestimating the effectively infinite number of use cases that could crash the system.
 
Your percentages are way off here. Without in any way minimizing the tragedy of these accidents, or the issues they bring to light, there are roughly 100,000 commercial airline flights a day around the world, which use automatic flight controls for most of the flight (in fact, pretty much all of it except right at takeoff and right at landing), and the current incident rate for commercial aviation is less than 1 incident per 2 million flights, i.e., less than 1 incident per roughly 20 days of flying. So in all but well under a millionth of cases, the automatic flight controls perform better than human pilots (that's why pilots let the automated systems do practically all the work).
I don't disagree with the message, but I don't think your post demonstrates it. Humans wouldn't crash every flight either. Total accident rates alone don't tell us anything about the safety of automated flying systems vs. humans.
 
From what I understand, the MCAS is not supposed to engage under autopilot at all.
Yes, but the angle of attack sensor and the automated stall protection system, which is the key control algorithm that the AoA sensor feeds into, are used by both MCAS and autopilot. So the same analysis of possible sensor fault and how to detect it and what to do about it applies to both. The only difference with MCAS is that, as I understand it, instead of just telling the nose to pitch down, MCAS increases nose down trim, which increases the control effort the pilot has to exert to keep a certain pitch attitude. Pilots who aren't prepared for that or don't understand what the control system is doing could be unable to properly correct what's going on fast enough at low altitude.

It's worth mentioning here that the reason Boeing put MCAS on the 737 MAX was to avoid having to retrain pilots to handle the different trim characteristics induced by a change in engines as compared to previous 737 models. The new engines created a pitch up moment as compared to previous 737s, and Boeing didn't want pilots to feel like the plane flew differently from previous 737s since that might trigger retraining requirements, so it added MCAS to automatically adjust the trim down to compensate. If this had been a new aircraft type, that wouldn't have been done; pilots would just have learned about the trim characteristics of the plane when they learned to fly it. One of the issues pilots appear to have with this is that they were not given all of this information about MCAS in the first place; it's only come out as a result of incidents happening.

if there was a clear way, as I've suggested, for the pilot to prevent the computer from pitching down, it would have been used and we wouldn't see the erratic climb data
That's if the pilot understood what was going on and hit the "disengage" button. However, you make a valid point that a "disengage autopilot" button is not the same as a "disengage MCAS" button, or a "stop everything automatic you're doing and let me fly the freaking airplane myself" button. It would be worth looking into what information is available about Boeing's system and what overrides are available to the pilot for the latter two cases. (AFAIK Airbus has the first button--which every autopilot has--and the third, which they call "Direct Law" and which disables even automatic stall prevention. I'm not sure what Boeing's equivalent of Direct Law is or whether it would disable MCAS--if there isn't such a switch, I would agree with you that there ought to be.)

Autopilot at cruise is a much simpler system than stall prevention, which, as we see in these cases, operates during takeoff.
So does autopilot. Autopilot usually gets engaged soon after takeoff, and doesn't get disengaged until shortly before landing, or sometimes not even then, since auto-landing systems have been available and usable for some time now. So autopilot is not just handling the cruise flight regime, it's handling all of them.

Also, stall prevention doesn't just happen during takeoff. It can trigger during wind shear at altitude, for example. Nor do uncommanded pitch down events only happen during takeoff. The Qantas flights I referred to above had multiple uncommanded pitch down events at cruising altitude, because Airbus's control algorithm didn't compare the outputs from all three AoA sensors to detect faulty input, but just used the faulty input of one sensor to trigger the event.

Autopilot at cruising altitude on aircraft is likely similar: if it detects an anomaly, it halts and alerts the pilot.
Yes, if it detects an anomaly. The issue I'm talking about is a flaw in the way AoA sensor input is handled that fails to use an obvious method of detecting an anomaly with that input. The fix for that issue seems obvious to me, and would not require any reduction in the use of autopilot or automatic stall prevention; it would just require using the obvious method for detecting faulty sensor input when you have three sensors to compare.

then you're pushing the limits of what our systems can do
I actually don't see it that way. As I said, all of the flight regimes involved are well understood, as are the correct actions to take to handle problems. I see this as a flaw in the design, engineering, and certification process that allowed an obvious design flaw to make it through. That's concerning, but it's not a concern about the capabilities of the systems themselves. It's a concern about the culture of those organizations.
 
First of all, Peter, I have to commend your knowledge and the time you've taken to converse with me. Your expertise far exceeds mine, and I greatly appreciate your continued engagement.

It's worth mentioning here that the reason Boeing put MCAS on the 737 MAX was to avoid having to retrain pilots to handle the different trim characteristics induced by a change in engines as compared to previous 737 models. The new engines created a pitch up moment as compared to previous 737s, and Boeing didn't want pilots to feel like the plane flew differently from previous 737s since that might trigger retraining requirements, so it added MCAS to automatically adjust the trim down to compensate. If this had been a new aircraft type, that wouldn't have been done; pilots would just have learned about the trim characteristics of the plane when they learned to fly it. One of the issues pilots appear to have with this is that they were not given all of this information about MCAS in the first place; it's only come out as a result of incidents happening.
Yes, I read some of this. This seems to be the nexus of the potential culpability and negligence on Boeing's part: cost cutting.

Also, stall prevention doesn't just happen during takeoff. It can trigger during wind shear at altitude, for example. Nor do uncommanded pitch down events only happen during takeoff. The Qantas flights I referred to above had multiple uncommanded pitch down events at cruising altitude, because Airbus's control algorithm didn't compare the outputs from all three AoA sensors to detect faulty input, but just used the faulty input of one sensor to trigger the event.
This is interesting. It seems that if a system like MCAS engages at a higher altitude, there is substantial time and altitude for the pilot to react and correct any error the system may execute. The issue, as we see in these cases, is that the system engages during mission-critical maneuvers like takeoff. I wonder too if there needs to be a better interface between the pilot and the system: a sort of intelligent HUD that communicates to the pilot exactly what the plane/computer is executing and how to override it if need be.


Yes, if it detects an anomaly. The issue I'm talking about is a flaw in the way AoA sensor input is handled that fails to use an obvious method of detecting an anomaly with that input. The fix for that issue seems obvious to me, and would not require any reduction in the use of autopilot or automatic stall prevention; it would just require using the obvious method for detecting faulty sensor input when you have three sensors to compare.
That's again presupposing that an event of total sensor failure could not happen.


I actually don't see it that way. As I said, all of the flight regimes involved are well understood, as are the correct actions to take to handle problems. I see this as a flaw in the design, engineering, and certification process that allowed an obvious design flaw to make it through. That's concerning, but it's not a concern about the capabilities of the systems themselves. It's a concern about the culture of those organizations.
Fair enough. I am personally less trusting of our ability to program all potential use cases into an algorithm that is completely bulletproof for mission-critical maneuvers.
 
