Had to look up the definition of interpretability for this one

It’s an interesting topic. I hadn’t really thought about the effectiveness of a machine learning algorithm being linked to how well it can be understood. But I can see how people would be less likely to act on a model’s recommendations if they don’t understand how it arrived at them. I guess part of an ML model’s impact is whether or not it gets people to change their decision-making!

Thanks for the comment. I added a suggestion to this discussion in the MOOC repo for linking to external resources “to go beyond”.

Interpretability is usually overlooked, but there are some efforts to legally regulate it when automated decision making is potentially harmful to the people it affects, for instance in fraud detection.