“Perfect marriage” between AI & humans: Rolls-Royce’s safety framework
On average, 3,000 Rolls-Royce engines are in the sky at any given time on aircraft worldwide. The British manufacturing firm constantly pulls data from those engines while they are in flight and transfers it back to the ground.
Surprisingly, this isn’t a safety-critical activity since engines undergo thorough inspections before flying. The data is processed in three minutes, and any anomalies are flagged for an engineer.
This engineer will then assess the data and inform the airline if any required action, such as changing a fuel filter, could prevent a flight from operating.
The data analytics service operates 24 hours a day, seven days a week, 52 weeks a year.
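To give a flavour of what such a pipeline involves, the sketch below shows a minimal, hypothetical anomaly-flagging loop: sensor readings arrive, are compared against expected ranges, and anything out of range is queued for an engineer. The channels, thresholds, and names are illustrative assumptions, not details of Rolls-Royce's system.

```python
# Hypothetical sketch of an engine-health monitoring loop: sensor readings
# arrive from in-flight engines, are checked against expected ranges, and
# any out-of-range channel is flagged for an engineer to review.
# All names, channels, and thresholds here are illustrative, not Rolls-Royce's.

from dataclasses import dataclass

@dataclass
class SensorReading:
    engine_id: str
    channel: str      # e.g. "exhaust_gas_temp", "fuel_filter_dp"
    value: float

# Illustrative acceptable ranges per channel: (min, max)
EXPECTED_RANGES = {
    "exhaust_gas_temp": (300.0, 950.0),   # degrees C
    "fuel_filter_dp": (0.0, 1.5),         # bar, pressure drop across the filter
}

def flag_anomalies(readings: list[SensorReading]) -> list[SensorReading]:
    """Return the readings that fall outside their expected range."""
    flagged = []
    for r in readings:
        low, high = EXPECTED_RANGES.get(r.channel, (float("-inf"), float("inf")))
        if not (low <= r.value <= high):
            flagged.append(r)
    return flagged

if __name__ == "__main__":
    batch = [
        SensorReading("engine-042", "exhaust_gas_temp", 610.0),
        SensorReading("engine-042", "fuel_filter_dp", 2.1),   # too high: filter may need changing
    ]
    for reading in flag_anomalies(batch):
        print(f"Alert engineer: {reading.engine_id} {reading.channel} = {reading.value}")
```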
From day one of an engine’s lifecycle, Rolls-Royce runs more than 150 algorithms to assess it. These operate in tandem with simulation software and the engineering team to determine what insights are required.
There is also a separate monthly assessment by the Rolls-Royce team for safety reasons, according to Lee Glazier, the engineering firm’s head of digital integrity, who spoke during last month’s AI Summit in London.
Due to the complexity of the data and the need to look at it across more than two dimensions, Rolls-Royce turned to AI and machine learning to support its engineers.
“If you are looking at trace data, by definition, you are looking across two dimensions and can create clever plotting to see a wider picture,” he explained in his keynote case study. “But beyond that, humans really can’t cope.”
AI, he added, is not limited by its dimensions, enabling it to offer a more rounded picture from all sensors at once.
Glazier said that Rolls-Royce adopted version 2.0 of its AI tooling around two years ago, and this version does indeed look at all the sensors at once.
“We look across all the dimensions in one go. That is too complex for humans to even program into the system. The AI learns from the engineers and uses the 155 algorithms that are already there to learn what a ‘good’ engine is over various first flights.”
The AI then looks for anomalies, and if it spots one, it alerts an engineer, who can react in a way the AI cannot.
“That is why I call it the perfect marriage,” added Glazier. “We have engineers doing what they are fantastic at and like doing, while the AI does the bits that humans don’t really enjoy doing – going through a load of traces.”
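As an illustration of the idea Glazier describes, the sketch below trains a generic anomaly detector on synthetic "good engine" flights across many sensor channels at once, then flags a later snapshot that drifts across all of them. The isolation forest is used purely as a stand-in; Rolls-Royce's 155 algorithms are not public.

```python
# A minimal sketch of learning what a "good" engine looks like across all
# sensor channels at once from early flights, then flagging departures from
# that pattern. The isolation forest is a generic stand-in, not Rolls-Royce's
# actual analytics.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "first flights" of a healthy engine: 200 snapshots x 12 sensor channels.
good_flights = rng.normal(loc=0.0, scale=1.0, size=(200, 12))

# Later snapshots: mostly normal, plus one with a consistent drift spread across
# every channel, the kind of multi-dimensional pattern that is hard to spot in
# any single two-dimensional trace plot.
later = rng.normal(loc=0.0, scale=1.0, size=(5, 12))
later[2] += 2.5

model = IsolationForest(contamination=0.01, random_state=0).fit(good_flights)
labels = model.predict(later)  # +1 = looks like a good engine, -1 = anomaly

for i, label in enumerate(labels):
    if label == -1:
        print(f"Snapshot {i}: anomaly detected, alert an engineer for review")
```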
Cross-domain usage
The tool has now gone beyond just Rolls-Royce’s Aerospace division. It is being implemented on projects including power packs used on Hitachi trains, power generation on Royal Navy carriers, and wider manufacturing operations.
“AI doesn’t care where the data comes from,” he explained. “To AI, it is just data.”
This prompted Rolls-Royce to examine how AI could benefit its safety-critical areas while reassuring regulators who are wary of trusting AI.
One thing the AI analytics tools can determine from engine sensor data is whether specific components need to be replaced because of how heavily they have been used.
Traditionally, this was judged by the number of flights an engine had flown: once it reached X flights, component A was replaced.
AI modelling improves on this by considering the actual stress the component has experienced, allowing replacement to be deferred if the engine has flown shorter flights or under less strain. This increases the amount of time the engine can stay "on wing" and reduces downtime.
“This is a safety critical activity,” Glazier said. “So, it had to be approved by regulation and certification.”
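To illustrate the difference between the flight-count rule and usage-based replacement, here is a toy calculation with invented figures; it is not Rolls-Royce's damage model.

```python
# An illustrative sketch (not Rolls-Royce's model) of replacing a component
# after a fixed number of flights versus replacing it when accumulated stress
# says so. Damage per flight here is a toy function of flight duration and
# thrust level; all figures are invented.

FIXED_REPLACEMENT_FLIGHTS = 500          # traditional rule: replace after X flights
DAMAGE_BUDGET = 1.0                      # usage-based rule: replace at cumulative damage 1.0

def damage_per_flight(hours: float, thrust_fraction: float) -> float:
    """Toy damage model: longer flights at higher thrust consume more life."""
    reference = 8.0 * 1.0                # an 8-hour flight at full thrust = nominal wear
    return (hours * thrust_fraction) / (reference * FIXED_REPLACEMENT_FLIGHTS)

# A short-haul operator flying 2-hour sectors at 80% thrust
flights = 0
damage = 0.0
while damage < DAMAGE_BUDGET:
    damage += damage_per_flight(hours=2.0, thrust_fraction=0.8)
    flights += 1

print(f"Fixed rule: replace at {FIXED_REPLACEMENT_FLIGHTS} flights")
print(f"Usage-based: this operator reaches the damage budget after {flights} flights")
```

Under these invented numbers the lightly loaded component stays on wing for roughly five times as many flights before replacement, which is the kind of gain the usage-based approach is meant to capture.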
Aletheia
To address this, Rolls-Royce has developed a framework of five tests to show that the analytics tool meets all the criteria necessary to be regarded as ‘safe’ by regulators.
The first is simply a sense check: does the output look like it should?
Then, a continuous testing system processes data from half a million flights every 12 minutes.
“We know what the answer to this should be, adjusted to 10 decimal places. Statistically, it is almost impossible for the system to be malfunctioning and give the right answer to 10 decimal places,” says Glazier.
There is also an independent check, in which the same data is run through different algorithms to see whether they return the same answer.
Finally, the data is checked for its integrity.
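A rough sketch of what checks of this kind can look like in code is shown below, applied to a toy metric. The real checks and algorithms are not public; the point is only the pattern of a sense check, a known-answer test, an independent cross-check, and a data integrity check.

```python
# A hedged sketch of the kinds of checks described above, on a toy metric.
# Everything here is illustrative: the metric, the tolerances, and the hashing
# scheme are assumptions, not the Aletheia checks themselves.

import hashlib
import math

def primary_metric(values: list[float]) -> float:
    """Stand-in for the production analytic: root-mean-square of a trace."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def independent_metric(values: list[float]) -> float:
    """Independently written implementation of the same quantity."""
    acc = 0.0
    for v in values:
        acc += v ** 2
    return (acc / len(values)) ** 0.5

def run_checks(values: list[float], expected: float, expected_sha256: str) -> bool:
    result = primary_metric(values)

    # 1. Sense check: does the output look like it should?
    if not (0.0 <= result < 1e6):
        return False

    # 2. Known-answer test: agree with the precomputed answer to 10 decimal places.
    if round(result, 10) != round(expected, 10):
        return False

    # 3. Independent check: a different implementation must give the same answer.
    if round(independent_metric(values), 10) != round(result, 10):
        return False

    # 4. Data integrity: the input must hash to the value recorded at capture time.
    payload = ",".join(repr(v) for v in values).encode()
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        return False

    return True

if __name__ == "__main__":
    trace = [3.0, 4.0]
    sha = hashlib.sha256(",".join(repr(v) for v in trace).encode()).hexdigest()
    print(run_checks(trace, expected=math.sqrt(12.5), expected_sha256=sha))  # True
```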
These five checks form the foundation of the Rolls-Royce AI safety framework, the Aletheia Framework, which the manufacturer has made freely available to other industries.
“The Aletheia Framework is our toolkit for ethics and trustworthiness in artificial intelligence that we believe is too useful to keep to ourselves. So, we’ve made it freely available to everyone,” Rolls-Royce explains on its website.
The framework aims to promote trust and transparency in AI systems so they can be used for safety-critical functions. The toolkit is a practical one-page guide for developers, executives, and boards looking to deploy AI.
It asks them to consider 32 facets of social impact, governance, trust, and transparency and provide evidence that can then be used to engage with approvers, stakeholders, or auditors.
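One hypothetical way a team might keep track of that evidence is a simple per-facet record like the sketch below. The field names and the example facet are assumptions for illustration, not taken from the framework document.

```python
# Hypothetical record-keeping for Aletheia-style evidence: one entry per facet,
# with the supporting evidence and sign-off, ready to share with approvers,
# stakeholders, or auditors. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FacetEvidence:
    facet: str          # e.g. "Provenance of training data" (illustrative wording)
    category: str       # social impact | governance | trust | transparency
    evidence: str       # link to, or summary of, the supporting artefact
    approver: str       # who reviewed and accepted the evidence

checklist = [
    FacetEvidence(
        facet="Provenance of training data",
        category="trust",
        evidence="Data sourced from certified engine test records, lineage logged",
        approver="chief.engineer@example.com",
    ),
]

outstanding = 32 - len(checklist)
print(f"{outstanding} of 32 facets still need evidence")
```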
Subsequently, Rolls-Royce has published further frameworks based on Aletheia for other sectors, including music, oncology, and education.
“It’s very accessible and includes an area on trust. So, how can you trust your AI? It also goes beyond the ethics into how we can realise ethics,” Glazier said.
“We took all the guidance and condensed it onto a single page — whether you were looking at the Good Cooperation, the EU, European Parliament — all recognised bodies producing fantastic guidance, but it was generally 100 pages long, and no data scientist is going to read all 100 pages.”