I am a researcher with specialties in numerical accuracy, artificial intelligence, data analysis, and high-performance computing.

You can find most of my publications on ResearchGate:

This paper (published as part of the proceedings of Supercomputing 2023) compares and contrasts the use of JAX and OpenMP target offload to port a large cosmology code to GPU.

It gives a practical look at porting a large pre-existing application to GPU, studying not only the performance of the resulting code but also usability and productivity.

This paper (published as part of the proceedings of the Cray User Group 2023) compares and contrasts the use of JAX and OpenMP target offload to port a large cosmology code to GPU.

It gives a practical look at porting a large pre-existing application to GPU, studying not only the performance of the resulting code but also usability and productivity.

This paper covers my work on encapsulated error, a method designed to measure the numerical error of computations while being efficient enough to be applied to large parallel applications running on a supercomputer.

The method is interesting in that it is at once relatively easy to implement as a library, accurate, and significantly faster than most alternatives. Its main downside is the need to replace the floating-point types used in an application with an instrumented alternative (which might not be practical when one has limited access to the sources or when they are unwieldy to modify).
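To give a flavor of the method, here is a minimal, hypothetical Python sketch of the idea (the actual library differs): every instrumented number carries both its floating-point value and a running estimate of its accumulated rounding error, recovered exactly at each operation with an error-free transformation.

```python
# Hypothetical sketch of encapsulated error, not the actual library:
# each number carries its floating-point value plus a running estimate
# of the rounding error accumulated so far.

class InstrumentedFloat:
    def __init__(self, value, error=0.0):
        self.value = value  # the result computed with ordinary floats
        self.error = error  # estimate of the accumulated rounding error

    def __add__(self, other):
        result = self.value + other.value
        # twoSum: recover the exact rounding error of this one addition
        virtual = result - self.value
        delta = (self.value - (result - virtual)) + (other.value - virtual)
        return InstrumentedFloat(result, self.error + other.error + delta)

# 1e16 + 1.0 rounds the 1.0 away; the instrumented type reports the loss
x = InstrumentedFloat(1e16) + InstrumentedFloat(1.0)
print(x.value, x.error)  # 1e+16 1.0
```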

The Ranger21 paper was born from testing a large number of optimizers for deep learning and realizing that, while people were branding them as new optimizers, they often just added one new idea to an existing optimizer. We realized that a lot of those ideas were orthogonal and synergistic: you would get better results putting them together than one would expect from looking at them individually.

The result is a surprisingly robust optimizer and, looking further, the idea that we should build modular optimizers to foster research in that direction.
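As a hypothetical illustration of that modularity (the components below are simplified stand-ins, not Ranger21's actual implementations), each idea can be written as an independent gradient transformation and chained with the others:

```python
import numpy as np

def centralize(grad):
    """Gradient centralization (simplified): subtract the gradient's mean."""
    return grad - grad.mean()

def clip_to_norm(grad, max_norm=1.0):
    """Gradient clipping (simplified): bound the norm of the gradient."""
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)

def chain(*transforms):
    """Compose orthogonal ideas into a single update rule."""
    def apply(grad):
        for transform in transforms:
            grad = transform(grad)
        return grad
    return apply

update = chain(centralize, clip_to_norm)  # mix and match components freely
print(update(np.array([3.0, 4.0, 5.0])))
```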

This paper covers my work on tagged error, an extension of encapsulated error designed to follow numerical error through a computation.

While the method introduces a significant overhead, it is the best method I am aware of to find the source of a numerical error in a computation. I have even used it to improve the numerical stability of algorithms, fixing problems one after the other until I reached the desired precision.
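Extending the encapsulated-error sketch above (again hypothetical, not the actual implementation), tagging amounts to splitting the accumulated error per user-defined tag, so that each code section can be blamed for its contribution:

```python
# Hypothetical sketch of tagged error, not the actual implementation.
CURRENT_TAG = "untagged"  # reassign around the code sections you want to blame

class TaggedFloat:
    def __init__(self, value, errors=None):
        self.value = value
        self.errors = errors if errors is not None else {}  # tag -> error

    def __add__(self, other):
        result = self.value + other.value
        # twoSum: recover the exact rounding error of this one addition
        virtual = result - self.value
        delta = (self.value - (result - virtual)) + (other.value - virtual)
        # merge both operands' per-tag errors, blame the new one on CURRENT_TAG
        errors = dict(self.errors)
        for tag, err in other.errors.items():
            errors[tag] = errors.get(tag, 0.0) + err
        errors[CURRENT_TAG] = errors.get(CURRENT_TAG, 0.0) + delta
        return TaggedFloat(result, errors)
```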

This covers all the work I did during my Ph.D. (do not let the first pages fool you: it is written in English).

It covers the theory behind Shaman, including encapsulated error (a very efficient way to measure the numerical error of computations) and tagged error (a very precise way to find the source of the numerical error that ends up in a result).

It also includes some work on applying artificial intelligence to pick the proper solver and preconditioner to solve a linear system (we obtained very promising results; a paper dedicated to the subject should come out at some point).

This paper concludes a large study measuring 118 neuroanatomical parameters over 1,566 mutant mice, leading to the identification of 198 genes that impact brain formation.

I contributed some data analysis to the paper (admittedly a drop in the bucket, with 18 co-authors contributing much more important pieces of the puzzle).

Here are slides (or recordings when available) of talks I gave:

This talk was given in April 2024 at the French École Supérieure Arts Appliqués Textile (School of Higher Studies in Applied Arts and Textiles), an art school specializing in design and textiles. The goal was to provide an overview of how modern generative AI works, what it can and cannot currently do, current legislation, and the societal impacts of these techniques, with an emphasis on what we know, don’t know, and cannot yet know as the techniques are rapidly evolving.

An audio recording of the talk and subsequent discussion can be found on RADAR, the school’s radio station.

This talk was given for the February 2024 NERSC Week. The presentation introduces the fundamentals of large language model architecture and usage before delving into the design of a chatbot that extracts information from NERSC’s documentation.

The chatbot was built to answer user questions with sourced, NERSC-specific answers, using open models running fully locally on NERSC’s premises.
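The core loop is retrieval-augmented generation; here is a minimal, hypothetical sketch of the idea (the `embed` and `generate` back-ends, and all names, are placeholders rather than the actual code):

```python
import numpy as np

def answer(question, chunks, embed, generate, top_k=3):
    """Answer a question from the most relevant documentation chunks."""
    q = embed(question)  # embed the question as a vector
    # rank documentation chunks by similarity to the question
    ranked = sorted(chunks, key=lambda c: float(np.dot(q, embed(c))), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = (f"Answer the question using only this documentation:\n"
              f"{context}\n\nQuestion: {question}")
    return generate(prompt)  # an open model running locally
```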

You can find the implementation here.

This talk was given at P3HPC (Performance, Portability & Productivity in HPC, a workshop held as part of Supercomputing 2023) and the Summit Series XIII (a joint conference between NVIDIA and the US national labs). It compares and contrasts the use of JAX and OpenMP target offload to port a large cosmology code to GPU, looking not only at the performance of the resulting code but also at usability and productivity.

A paper is also available in the proceedings of the conference.

This talk, given at the Cray User Group 2023, compares and contrasts the use of JAX and OpenMP target offload to port a large cosmology code to GPU, looking not only at the performance of the resulting code but also at usability and productivity.

A paper is also available in the proceedings of the conference.

This workshop (first given at the Commonwealth Computational Summit 2022, then at Data Day 2022, the NUG Meeting 2022, and the Laboratoire Astroparticule & Cosmologie in 2024) is an introduction to porting Python code, and in particular numerical and scientific applications, to GPU with JAX.

It comes with exercises and is designed such that, by the end of the workshop, someone starting with knowledge of Python and NumPy should be able to port their code to GPU using JAX and decide whether it is the best way forward.
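To give a taste of the material, porting a NumPy function can be as small as swapping the import and compiling it (a minimal example in the spirit of the workshop, not taken from its material):

```python
import jax
import jax.numpy as jnp  # near drop-in replacement for numpy

@jax.jit  # compile for the available accelerator (GPU when present)
def normalize(x):
    return (x - jnp.mean(x)) / jnp.std(x)

x = jnp.arange(1_000_000, dtype=jnp.float32)
print(normalize(x)[:3])
```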

This talk (given in 2021 for the 28th IEEE International Symposium on Computer Arithmetic) covers my work on tagged error, an extension of encapsulated error designed to follow numerical error through a computation.

It has since been published as a paper.

This talk (given in 2021 for the Rencontres Arithmétiques de l’Informatique Mathématique 2021) covers my work on encapsulated error, a method designed to measure the numerical error of computations while being efficient enough to be applied to large parallel applications running on a supercomputer.

It has since been published as a paper.

This talk (given in 2020 for the Digital French-German Summer School with Industry) covers my work on using machine learning to predict the performance of linear solvers and preconditioners when solving a given linear system.

We showed that we could predict the convergence profile of the solver with enough accuracy to determine which solver should be used given some target precision and time constraints.
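A minimal, hypothetical sketch of that kind of pipeline (synthetic data, simplified features, a stand-in model, and a single time-to-precision prediction rather than a full convergence profile — not our actual setup):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# one model per solver predicts the time it needs to reach the target
# precision from cheap features of the linear system (synthetic data here)
rng = np.random.default_rng(0)
features = rng.random((200, 3))  # e.g. size, density, conditioning estimate
timings = {"cg": rng.random(200), "gmres": rng.random(200)}
models = {name: RandomForestRegressor(n_estimators=50).fit(features, y)
          for name, y in timings.items()}

def pick_solver(x, time_budget):
    """Return the fastest solver predicted to finish within the budget."""
    predicted = {name: model.predict([x])[0] for name, model in models.items()}
    feasible = {name: t for name, t in predicted.items() if t <= time_budget}
    return min(feasible, key=feasible.get) if feasible else None

print(pick_solver(rng.random(3), time_budget=0.6))
```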

You can find further information in my Ph.D. thesis.

Here are posters I presented:

This poster (presented in 2021 at the Platform for Advanced Scientific Computing (PASC) Conference) covers my work on measuring numerical error and tracing it through computations.

This poster (presented in 2020 at MASCOT-NUM) covers some of my work on measuring numerical error and its application to comparing various uncertainty quantification methods and measuring their sensitivity to numerical error in their inputs.

This particular case study is further detailed in my Ph.D. thesis.