Artemis II Fault Tolerance

(alearningaday.blog)

30 points | by speckx 3 hours ago

7 comments

  • ranger207 9 minutes ago
    > The self-checking pairs ensure that if a CPU performs an erroneous calculation due to a radiation event, the error is detected immediately and the system responds.

    How does a pair determine which of the pair did the calculation correctly?

    • Ductapemaster 0 minutes ago
      In simple terms, this works by XORing the two outputs and, if they disagree, performing fault recovery.

      There are also space systems that use three processors and take a majority vote for the correct output, but that's a different scheme (both are sketched below).
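
      A hypothetical C sketch of both schemes (the real comparator is cycle-by-cycle hardware, not software, but the logic is the same idea):

        #include <stdbool.h>
        #include <stdint.h>

        /* Self-checking pair: both processors run the same computation
           and a comparator flags any mismatch. It cannot tell which
           side was wrong, only that the pair can no longer be trusted. */
        bool pair_output(uint32_t result_a, uint32_t result_b, uint32_t *out)
        {
            if (result_a ^ result_b)   /* any differing bit is a fault  */
                return false;          /* caller takes this pair offline */
            *out = result_a;           /* both agree; either value works */
            return true;
        }

        /* Triple modular redundancy instead masks the fault: a bitwise
           2-of-3 majority vote yields an output without going offline. */
        uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
        {
            return (a & b) | (a & c) | (b & c);
        }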

    • kqr 1 minute ago
      It doesn't need to. If they differ, one of them must have been erroneous and they should reboot.
  • MiracleRabbit 48 minutes ago
    Interesting. In safety components we use lockstep microcontrollers, which do something similar at a much smaller scale.

    https://en.wikipedia.org/wiki/Lockstep_(computing)

    Example: https://www.st.com/resource/en/datasheet/spc574k72e5.pdf

    • pclmulqdq 43 minutes ago
      Lockstep processors were used here, as well.

      > each FCM consists of a self-checking pair of processors.

      • willis936 17 minutes ago
        Never take two clocks to sea. Always sail with one or three.
  • WorkerBee28474 1 hour ago
    > Orion utilizes two Vehicle Management Computers, each containing two Flight Control Modules, for a total of four FCMs. But the redundancy goes even deeper: each FCM consists of a self-checking pair of processors.

    Who sits down and determines that 8 is the correct number? Why not 4? Or 2? Or 16 or 32?

    • echoangle 1 hour ago
      They probably set an acceptable total loss rate for the mission and worked backwards to determine how many replicas of each system they need to achieve that while minimizing total cost/weight.

      So the answer is "some engineers sat down after talking to management".
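
      A hypothetical back-of-the-envelope version of that calculation (the numbers here are invented, not Orion's):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double p_fcm  = 1e-4;  /* assumed chance one FCM string fails      */
            double target = 1e-9;  /* assumed acceptable loss-of-function rate */

            /* With n independent strings, the function is lost only if
               all n fail, i.e. with probability p^n. Find the smallest
               n that meets the target. */
            int n = 1;
            while (pow(p_fcm, n) > target)
                n++;

            printf("need %d independent strings\n", n);
            return 0;
        }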

      • y1n0 38 minutes ago
        This is correct.
    • nine_k 1 hour ago
      Given a list of estimates of failure probabilities, finding the right mix of redundancy becomes a very tractable problem, maybe even freshman-level.
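
      E.g., a toy version of that freshman problem, with invented failure probabilities and masses: pick replica counts per subsystem to hit a mission-reliability target at minimum mass.

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* Invented numbers: per-unit failure probability and mass. */
            double p[3]  = { 1e-3, 1e-4, 5e-4 };  /* computer, IMU, radio */
            double kg[3] = { 9.0, 2.5, 4.0 };
            double target = 1e-9;                 /* max loss-of-mission  */

            double best_mass = 1e18;
            int best[3] = { 0 };

            /* Brute-force small replica counts. The subsystems are in
               series, so loss-of-mission is roughly the sum of each
               subsystem's p_i^n_i. */
            for (int a = 1; a <= 5; a++)
            for (int b = 1; b <= 5; b++)
            for (int c = 1; c <= 5; c++) {
                double p_loss = pow(p[0], a) + pow(p[1], b) + pow(p[2], c);
                double mass   = a * kg[0] + b * kg[1] + c * kg[2];
                if (p_loss <= target && mass < best_mass) {
                    best_mass = mass;
                    best[0] = a; best[1] = b; best[2] = c;
                }
            }

            printf("%d/%d/%d replicas, %.1f kg\n",
                   best[0], best[1], best[2], best_mass);
            return 0;
        }
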
      • cubefox 52 minutes ago
        Getting the probabilities could be very difficult, though, especially for issues that have never occurred before.
        • kqr 8 minutes ago
          For issues that have never occurred before, probabilities are the wrong tool. The right thing to do is list all the behaviour the vehicle must never exhibit and think of ways it still might, despite all redundancies -- maybe even despite every single component working as intended.

          Many mission failures in history were caused by unexpected interactions between fully functional components. Failure probabilities don't help with that.

        • 9dev 33 minutes ago
          That is what you hire an army of engineers for.
  • _whiteCaps_ 35 minutes ago
    I'm a big fan of dissimilar redundancy (though I didn't know that was the term until today) for building system software.

    Build for various Linux distros and some of the BSDs, and weird compile errors or edge cases will pop up. Often I've found these expose undefined behaviour or incorrect assumptions that you wouldn't notice if you were building for a single platform.
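
    A hypothetical example of the kind of thing that shakes out. Signed overflow is undefined behaviour in C, so this check may print different things depending on compiler, optimization level, and platform:

      #include <limits.h>
      #include <stdio.h>

      int main(void)
      {
          int x = INT_MAX;
          /* Signed overflow is UB: a compiler is allowed to assume
             x + 1 > x and delete this branch entirely at -O2, while
             a different toolchain happily wraps. A dissimilar build
             matrix makes the divergence visible. */
          if (x + 1 < x)
              puts("wrapped");
          else
              puts("check was optimized away (or didn't wrap)");
          return 0;
      }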

  • tcp_handshaker 1 hour ago
    For the Airbus they used different CPUs because CPUs have bugs too...
    • echoangle 1 hour ago
      Not just different CPUs: they also run an entirely different (but simpler) fallback program in case the main computers fail. I think they were more worried about programming errors, but this should avoid all shared failure modes between the main computers (be they programming or hardware).
      • kqr 4 minutes ago
        It does not.

        Even if different teams write software in different languages, they end up creating very similar bugs because the bugs crop up in the complexities of the domain and insufficiencies of the specification.

        N-version programming doesn't work as well as you think. See Knight and Leveson (1986).

        (N-version programming does guard against "random" errors like typos or accidentally swapping parameters to a subroutine call. But so does a good test suite and a powerful compiler.)
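
        For reference, the mechanism in question as a hypothetical sketch (placeholder implementations; real N-version work uses separate teams, languages, and toolchains). The vote only helps when faults are independent, which is exactly what Knight and Leveson found they often aren't:

          #include <math.h>
          #include <stdio.h>

          /* Three "independently written" versions of the same spec:
             convert knots to m/s. version_c has a sloppy constant. */
          static double version_a(double v) { return v * 0.514444; }
          static double version_b(double v) { return v / 1.943844; }
          static double version_c(double v) { return v * 0.514;    }

          /* 2-of-3 vote: accept a result that at least one other
             version agrees with, within a tolerance. */
          int vote(double in, double tol, double *out)
          {
              double r[3] = { version_a(in), version_b(in), version_c(in) };
              for (int i = 0; i < 3; i++)
                  for (int j = i + 1; j < 3; j++)
                      if (fabs(r[i] - r[j]) <= tol) {
                          *out = r[i];
                          return 1;
                      }
              return 0;   /* no two versions agree: channel failed */
          }

          int main(void)
          {
              double mps;
              if (vote(250.0, 0.05, &mps))   /* 250 knots */
                  printf("%.3f m/s\n", mps); /* the sloppy version is outvoted */
              return 0;
          }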

  • y1n0 37 minutes ago
    What I would like to see is the fault data. Also a graph of the number of in-sync FCMs over time, and how well it correlated with predictions.

    In other words: how over-engineered is it?

  • m3kw9 25 minutes ago
    The astronauts must need a ton of training for all this.