    Friday, 19 June 2020

    The advantages of self-explainable AI over interpretable AI


    Would you trust an artificial intelligence algorithm that works eerily well, making accurate decisions 99.9% of the time, but is a mysterious black box? Every system fails now and then, and when it does, we want explanations, especially when human lives are at stake. A system that can’t be explained can’t be trusted. That is one of the problems the AI community faces as its creations become smarter and more capable of tackling complicated and critical tasks. In the past few years, explainable artificial intelligence has become a growing field of interest. Scientists and developers are deploying deep learning algorithms…

    This story continues at The Next Web

    from The Next Web https://ift.tt/2AHJ6Tr