Reasoning About Safety-Critical Information Flow Between Pilot and Computer
This paper develops a dynamic logic for reasoning about safety-critical information flow among humans and computers. The logic advances previous logics of agent knowledge, whose assumptions are too strong for realistic human agents. We introduce Dynamic Agent Safety Logic (DASL), based on Dynamic Epistemic Logic (DEL), with extensions that account for safe actions, belief, and the logical relationships among knowledge, belief, and safe action. With this logic we can infer which safety-critical information a pilot is missing when executing an unsafe action. We apply the logic to the Air France 447 incident as a case study and provide a mechanization of the case study in the Coq proof assistant.