Practitioners Lecture Series: Workshop on AAD by Antoine Savine
Contents: Automatic Adjoint Differentiation (AAD) and back-propagation are key technologies in modern machine learning and finance. It is back-prop that enables deep neural networks to learn to identify faces on photographs in reasonable time. It is AAD that allows financial institutions to compute the risks of complex derivatives books in real time. The two technologies share common roots. See also Antoine's book on AAD. All the material, including slides, C++ code, Excel add-ins, TensorFlow notebooks and more, is freely available on GitHub: https://github.com/asavine/CompFinance
About the Speaker: Antoine Savine is a French mathematician, academic and a leading practitioner in financial derivatives. Antoine was the Global Head of Derivatives Research at BNP for more than ten years before moving to Danske Bank in Copenhagen. He is an expert C++ programmer and one of the key contributors to Danske Bank's xVA system, which won the Risk award for In-House System of the Year 2015. His current interests are in combining deep learning with financial modeling to unify derivatives risk management with CVA/XVA, FRTB, CCR, MVA and other capital calculations, and to resolve the related numerical and computational bottlenecks. Antoine has lectured on Volatility and Numerical Finance at Copenhagen University since 2016. He is a regular speaker and chairman at professional and academic conferences in quantitative finance, including QuantMinds, RiskMinds and World Business Strategies.
Antoine authored the series Modern Computational Finance with John Wiley and Sons, three volumes teaching financial quants, derivatives and risk professionals the essential mathematical, modeling, risk management and programming skills of modern finance. Antoine holds a Master's in Mathematics from Paris-Jussieu and a PhD in Mathematics from Copenhagen University. He is best known for his work on volatility, scripting, multi-factor interest rate models, parallel Monte-Carlo simulations and AAD.
This workshop, given at King's College London on 28-29 March 2018 and recorded here in full, explains back-prop and AAD in deep detail, demystifies them in words, mathematics and C++ code, investigates their similarities and differences, and provides viewers with the thorough understanding necessary to successfully implement these technologies in their own projects.