As AI and machine learning applications proliferate, there is growing concern about our ability to explain the decisions these applications make. How do we ensure they are not biased? How do we provide accountability? Are we heading towards a world of inaccessible, machine-driven decisions that affect all our lives but are beyond any real human understanding? This discussion will examine whether these fears are justified and look at the progress being made in delivering explainability for AI.
Derek is currently Director of Analytics/Data Science within the Fraud, Security and Compliance team at FICO. He has worked in advanced analytics and machine learning for over 20 years and specialises in applying machine learning techniques to fraud detection, compliance monitoring and risk solutions across multiple industries and lines of business.
He develops innovative approaches to business challenges, drawing on years of experience with clients around the world and combining new analytical and machine learning techniques with business consulting expertise to deliver effective solutions.