Ethiopian jet’s data points to a system that was an issue for 737 MAX before the crash

8:11pm, 18th March, 2019
Ethiopian Airlines employees conduct a memorial service on March 15 to pay tribute to colleagues and passengers who lost their lives in the March 10 crash of a Boeing 737 MAX 8 jet. (Ethiopian Airlines Photo)

The latest word from the investigation of the Ethiopian Airlines crash is that readings retrieved from the flight data recorder reportedly point to circumstances similar to those that surrounded a 737 MAX crash less than five months earlier in Indonesia.

Regulators around the world suspected as much, based on data received via satellite from the plane during its minutes-long flight from Addis Ababa heading for Kenya on March 10. That’s what led them to ground the 737 MAX fleet last week.

The March 10 crash killed all 157 people aboard Ethiopian Airlines Flight 302, while the October crash killed all 189 people aboard Lion Air Flight 610.

In the Lion Air investigation, safety experts focused on an automatic flight control system known as the Maneuvering Characteristics Augmentation System, or MCAS. Boeing added MCAS to the 737 MAX to guard against having the airplane stall under extreme conditions. The measure was needed because the MAX’s engines are bigger than the engines on the previous line of 737s, changing the plane’s aerodynamics.

Preliminary findings from the Indonesia probe suggested that MCAS was receiving spurious data about the plane’s aerodynamic “angle of attack” just after takeoff. That would lead the automated system to force the plane’s nose downward into an uncalled-for dive. In the Lion Air case, the pilots repeatedly fought against the MCAS commands, and ultimately lost. Afterward, Boeing said pilots can use a procedure to disengage the system, but that procedure wasn’t followed by the Lion Air pilots.

Today, Reuters quoted an unnamed source as saying the angle-of-attack readings from the Ethiopian Airlines jet’s flight data recorder were “very, very similar” to the Lion Air readings. The similarities will be the focus of further investigation, Reuters quoted its source as saying.

The double disaster has raised deeper questions about the Federal Aviation Administration’s oversight of Boeing during the certification of the 737 MAX. Over the weekend, The Seattle Times quoted sources as saying there were flaws in the analyses used for assessing the MCAS system’s safety. The Times said those analyses understated how much leeway the automated system was given to move the horizontal tail in order to avoid a stall, or to force a dive if the system malfunctioned. Another potential flaw was that the system depended on readings from a single angle-of-attack sensor rather than cross-checking multiple sensors (a redundancy pattern sketched at the end of this story).

Much of The Seattle Times’ report was based on research conducted before the Ethiopian Airlines jet crashed. Aerospace reporter Dominic Gates wrote that “both Boeing and the FAA were informed of the specifics of this story and were asked for responses” a few days before the crash.

“People shouldn’t misread this point. I was not telling Boeing or the FAA anything they didn’t know. As noted, Boeing has been working since the first crash on a fix for the flaws my story listed,” Gates (@dominicgates) tweeted.

In a statement, Boeing CEO Dennis Muilenburg said that “safety is at the core of who we are at Boeing” and that the company is working with authorities and airlines to support the investigation and “help prevent future tragedies.”

“Soon we’ll release a software update for the 737 MAX that will address concerns discovered in the aftermath of the Lion Air Flight 610 accident,” Muilenburg said.
That update, along with revisions in pilot training procedures, should address MCAS’ behavior in response to erroneous sensor inputs, Boeing says. Meanwhile, reports say Justice Department and Transportation Department officials are reviewing how the 737 MAX was developed and how the plane won its regulatory approvals, and aviation authorities elsewhere are re-examining the approvals they gave to 737 MAX jets.

U.S. Sen. Roger Wicker, R-Miss., has said he intends to hold a hearing into the issues raised by the crashes, in his capacity as the chairman of the Senate Commerce, Science and Transportation Committee. The committee’s ranking Democratic member, Sen. Maria Cantwell, D-Wash., touched on the matter briefly today in response to a question from GeekWire.

“Paramount in all of this is safety,” Cantwell told GeekWire. “So we’re going to keep looking at all the data and information until we are sure that we understand every aspect of this.”
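The single-sensor dependency flagged in The Seattle Times’ report is the kind of failure mode that redundancy checks are meant to catch. Purely as an illustration, and emphatically not Boeing’s actual flight-control logic (the function names and the disagreement threshold here are invented), cross-checking two angle-of-attack readings before letting an automated system act on them might look something like this:

```kotlin
import kotlin.math.abs

// Hypothetical disagreement threshold, in degrees; invented for illustration.
const val MAX_DISAGREEMENT_DEG = 5.0

// Returns an averaged angle-of-attack value when the two sensors agree,
// or null when they disagree, signaling that automated trim commands
// should be withheld rather than trusting either reading blindly.
fun vettedAngleOfAttack(leftDeg: Double, rightDeg: Double): Double? =
    if (abs(leftDeg - rightDeg) <= MAX_DISAGREEMENT_DEG)
        (leftDeg + rightDeg) / 2.0
    else
        null

fun main() {
    println(vettedAngleOfAttack(4.2, 4.6))   // sensors agree: prints the average
    println(vettedAngleOfAttack(4.2, 22.9))  // one spurious reading: prints null
}
```

A system fed by a single sensor has no disagreement signal to act on, which is why that design choice has drawn so much scrutiny.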
Google’s new voice recognition system works instantly and offline (if you have a Pixel)

2:56pm, 12th March, 2019
Voice recognition is a standard part of the smartphone package these days, and a corresponding part is the delay while you wait for Siri, Alexa or Google Assistant to return your query, either correctly interpreted or horribly mangled. Google’s latest speech recognition system works entirely on the device, eliminating that delay altogether (though of course mangling is still an option).

The delay occurs because your voice, or some data derived from it anyway, has to travel from your phone to the servers of whoever operates the service, where it is analyzed and sent back a short time later. This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether.

Why not just do the voice recognition on the device? There’s nothing these companies would like more, but turning voice into text on the order of milliseconds takes quite a bit of computing power. It’s not just about hearing a sound and writing a word: understanding what someone is saying word by word involves a whole lot of context about language and intention. Your phone could do it, for sure, but it wouldn’t be much faster than sending it off to the cloud, and it would eat up your battery.

But steady advancements in the field have made on-device recognition plausible, and Google’s latest product makes it available to anyone with a Pixel. Google’s work on the topic, described in a research paper, built on previous advances to create a model small and efficient enough to fit on a phone (it’s 80 megabytes, if you’re curious), but capable of hearing and transcribing speech as you say it. No need to wait until you’ve finished a sentence to think whether you meant “their” or “there”: it figures it out on the fly.

So what’s the catch? Well, it only works in Gboard, Google’s keyboard app, it only works on Pixels, and it only works in American English. So in a way this is just kind of a stress test for the real thing. (A sketch of what streaming, offline-preferred recognition looks like through Android’s standard APIs appears at the end of this story.)

“Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application,” writes Google, as if it is the trends that need to do the hard work of localization.

Making speech recognition more responsive and having it work offline is a nice development. But it’s sort of funny considering hardly any of Google’s other products work offline. Are you going to dictate into a shared document while you’re offline? Write an email? Ask for a conversion between liters and cups? You’re going to need a connection for that! Of course this will also be better on slow and spotty connections, but you have to admit it’s a little ironic.
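The new model shipped inside Gboard rather than as a general public API, so the sketch below is not how Gboard itself is wired up. Still, Android’s standard SpeechRecognizer already exposes the two behaviors the article highlights, streaming partial hypotheses while you talk and a preference for offline recognition. A minimal sketch, assuming a Context and an already-granted RECORD_AUDIO permission:

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.util.Log

// Sketch: streaming, offline-preferred recognition via Android's standard APIs.
fun startOfflineDictation(context: Context): SpeechRecognizer {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        // Fires repeatedly while you speak: the "transcribes as you say it" part.
        override fun onPartialResults(partialResults: Bundle?) {
            val hypothesis = partialResults
                ?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()
            Log.d("Dictation", "partial: $hypothesis")
        }
        override fun onResults(results: Bundle?) {
            val finalText = results
                ?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()
            Log.d("Dictation", "final: $finalText")
        }
        override fun onError(error: Int) { Log.w("Dictation", "error code $error") }
        // Remaining callbacks aren't needed for this sketch.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })
    recognizer.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true) // stream hypotheses
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)  // API 23+: stay on-device when possible
    })
    return recognizer
}
```

Whether a given request is actually served on-device depends on which recognition service and language packs the phone has installed, which is exactly the article’s point about the new model being limited to Gboard on Pixels for now.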