Talk on algorithmic differentiation of QR factorization


This is a talk at the Algorithmic Differentiation 2024 conference.

I spoke about extending the QR factorization gradient for backprop to the wide case, software implementations, and more.
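To give a sense of why the wide case needs its own treatment, here's a tiny NumPy sketch (mine, not from the slides): for a deep matrix the reduced QR gives a square, invertible R, while for a wide matrix R comes out rectangular, so the usual backward rule that solves against a square triangular R can't be applied as-is.

```python
import numpy as np

# Reduced QR shapes for deep (m >= n) vs. wide (m < n) inputs.  In the wide
# case R is rectangular, which is why the standard backward rule needs an
# extension before it applies.
deep = np.random.default_rng(0).standard_normal((5, 3))
wide = deep.T

q, r = np.linalg.qr(deep)   # Q: (5, 3), R: (3, 3)  -> R square and invertible
print(q.shape, r.shape)

q, r = np.linalg.qr(wide)   # Q: (3, 3), R: (3, 5)  -> R rectangular
print(q.shape, r.shape)
```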

Slides

Equation handout sheet

Our section of talks included a wonderful keynote by Nathan Killoran from Xanadu, a quantum computing firm based in Toronto, Canada.

After Nathan worked through the prerequisites (qubits, bras and kets, etc.), he went on to discuss how the Xanadu toolkit often relies on TensorFlow and PyTorch for backend machine learning pipelines, a sensible thing to do IMO.

However, one of the limitations he highlighted was that quantum operators need to support complex entries, and many algorithmic differentiation routines for linear algebra still have rough edges when the values are complex.

It was a nice motivation for why we need complex values inside our matrix operators.

In my talk I mostly skipped over the complex values in the interest of time. I had focused more on the real-valued case because my assumptions and experience were with applications using real values. In hindsight, any future talks I give on this topic will likely put slightly more emphasis on complex values, at least noting that you need them to do quantum computing.

Yet another thing I learned at the conference.

In my talk I tried to focus on how the two equations, equations 3.3 and 3.8 of our paper, are mathematically equivalent. The fact that they're mathematically equivalent means there is some flexibility in implementation.

However, from an empirical perspective the runtime seems to be better if you use equation 3.3. It also works regardless of whether you're using real or complex values, whereas equation 3.8 requires an adjustment along the main diagonal entries, which is further code and maintenance burden. For more details take a look at the slides or, better yet, read our paper.
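To make the implementation flexibility a bit more concrete, here's a minimal NumPy sketch of the commonly used real-valued QR backward rule for the square/deep case, checked against finite differences. To be clear, this is an illustrative sketch under the m ≥ n, full-rank, real-valued assumptions; it is not a line-for-line transcription of equation 3.3 or 3.8 from the paper.

```python
import numpy as np

def copyltu(m):
    # Copy the lower triangle (including the diagonal) onto the upper triangle.
    return np.tril(m) + np.tril(m, -1).T

def qr_vjp(q, r, q_bar, r_bar):
    # Reverse-mode (VJP) rule for the reduced QR of a full-rank real A, m >= n.
    m_mat = r @ r_bar.T - q_bar.T @ q
    b = q_bar + q @ copyltu(m_mat)
    return np.linalg.solve(r, b.T).T          # b @ inv(r).T, done via a solve

# Finite-difference check on a small example with an arbitrary scalar loss.
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 3))
wq, wr = rng.standard_normal((5, 3)), rng.standard_normal((3, 3))

def loss(a):
    q, r = np.linalg.qr(a)                    # reduced QR
    return np.sum(wq * q) + np.sum(wr * r)

q, r = np.linalg.qr(a)
analytic = qr_vjp(q, r, wq, wr)

numeric = np.zeros_like(a)
eps = 1e-6
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        e = np.zeros_like(a)
        e[i, j] = eps
        numeric[i, j] = (loss(a + e) - loss(a - e)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))     # should be tiny, ~1e-8 or less
```

For complex inputs the transposes above become conjugate transposes, and, as noted, one of the two formulations additionally needs an adjustment along the main diagonal.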

Krishna’s talk, which followed mine, was also great. He spoke on quantum computing experiments and some rough edges that show up when you have eigenvalue problems with multiplicity, either geometric or algebraic. Apparently a way to handle this has been known in the algorithmic differentiation community for a while, but not all libraries have this corner case implemented or supported. Krishna shared the paper with me, and now I’ve got a TODO item to read it closely; it’s pretty dense.
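For a flavor of the rough edge (my illustration, not Krishna's fix): the standard reverse-mode rule for the symmetric eigendecomposition divides by differences of eigenvalues, so a repeated eigenvalue puts an infinity straight into the coefficient matrix.

```python
import numpy as np

# Coefficient matrix appearing in the standard reverse-mode rule for the
# symmetric eigendecomposition: F[i, j] = 1 / (lam[j] - lam[i]) for i != j.
# A repeated eigenvalue makes the difference zero, which is exactly where
# naive implementations blow up.
def eig_coeff(lam):
    diff = lam[None, :] - lam[:, None]
    with np.errstate(divide="ignore"):
        return np.where(np.eye(len(lam), dtype=bool), 0.0, 1.0 / diff)

print(eig_coeff(np.array([1.0, 2.0, 3.0])))   # all off-diagonal entries finite
print(eig_coeff(np.array([1.0, 2.0, 2.0])))   # inf where the eigenvalue repeats
```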

I’ll probably write up a listicle with some key points and takeaways at some point soon.

In the interim, I’ll thank the Algorithmic Differentiation 2024 organizers for their hard work and the attendees for fruitful discussions. Last but not least, thanks to the Department of Energy (DOE) for offsetting the costs of the conference so that attendees could register at a reduced rate.

IMHO the AI revolution wouldn’t be happening were it not for the DOE’s visionary leadership and nurturing of the algorithmic differentiation community.