MIT researchers find that explanation methods designed to help users determine whether to trust a machine-learning model