One of the most important results in all of science is the link established between mathematical logic and computer programming between the 1930s and the 1960s. The link is known as the Curry-Howard correspondence and is popularly referred to as "programs as proofs". This "programs as proofs" equivalence will be the long-term context for our work. Applying it in practice is very expensive, and so far it has been used only in projects where software bugs could have catastrophic consequences, either massive financial losses or unacceptable loss of life.
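To make the correspondence concrete, here is a minimal sketch in Lean (the names are ours, chosen only for illustration): a proposition is written as a type, and a program of that type is, at the same time, a proof of the proposition.

    -- A proposition is a type; a proof of it is a program (a term) of that type.
    -- This function is simultaneously an implementation and a proof of A → (B → A).
    theorem k_combinator (A B : Prop) : A → (B → A) :=
      fun (a : A) (_ : B) => a

    -- The same program written over ordinary data types, showing the two readings
    -- of one and the same term: a function on values, or a proof of an implication.
    def constFn (α β : Type) : α → (β → α) :=
      fun (a : α) (_ : β) => a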
Our position is that a buggy (or not properly understood) AI will be the biggest of such potentially catastrophic cases that we will face in our collective future and therefore "programs as proofs" will matter the most. We may not have the proper resources to fully implement "programs as proofs" in practice at this time, but it will be in our minds at least. Any small success in this direction will be a treasure.
The first step in this context would be to formalize existing AI laws. This means finding the appropriate system of logic (or, equivalently, the appropriate programming language) into which to translate the AI laws. Again, it may not happen within a year, or even within five years, but eventually it will have to happen. We will begin experimenting with selected parts of the upcoming AI Bill of Rights that appear to have the clearest formulations and should therefore be the easiest to formalize. A program implementing a formally specified AI law will be nothing but a formal proof of that specification, when the specification is viewed as a mathematical theorem.
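As a toy illustration of what such a formalization could look like, here is a sketch in Lean. The clause, the record fields, and the decision rule below are entirely hypothetical and are not taken from any actual legal text; the point is only the shape of the exercise: the legal requirement becomes a theorem, and the program that meets it is the proof.

    -- Hypothetical record standing in for an automated decision.
    structure Decision where
      approved    : Bool
      explanation : Option String

    -- A hypothetical decision rule that always records the reason for its outcome.
    def decideLoan (score : Nat) : Decision :=
      { approved    := decide (600 ≤ score),
        explanation := some s!"credit score was {score}" }

    -- A simplified "notice" clause stated as a theorem: every decision the
    -- program issues carries an explanation. The proof is by computation,
    -- because the program always fills in the explanation field.
    theorem decideLoan_gives_notice :
        ∀ score, (decideLoan score).explanation.isSome = true :=
      fun _ => rfl

Nothing here depends on the particular rule: replacing decideLoan with a rule that sometimes omits the explanation would make the theorem unprovable, which is exactly the point of treating the clause as a theorem.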
How is all this context and formalism aligned with our mission? Simply put, "programs as proofs" establishes trust. Autocratic institutions have solved the AI trust issue quite simply: there is no trust to establish. There is no requirement to prove that a surveillance system satisfies lawful constraints, or even that it works as designed. But if AI systems are to help strengthen democratic institutions, trusting them is essential. Because current laws are written in natural language (or in the tighter form commonly referred to as "legalese"), most people can understand them and easily decide whether or not to trust them. But trusting AI systems requires some work.
Specifications for AI systems are also currently written in natural language, so they too can be understood. But the programs implementing such specifications in a practical programming language are understood only by the few software developers who specialize in that language. Establishing popular trust in such programs is inherently more difficult; the hope is that, for AI programs written formally, the trust we place in our mathematics will transfer to public trust once those programs are viewed as mathematical proofs.
It will take a lot of time and effort before we are able to fully develop our work in that context and make full use of formal methods. To complicate matters even more, AI algorithms are not fully understood at this time, especially neural networks. We do not have a theory of how neural networks learn from data; we just "know" that they do. So, in the absence of such understanding, we will have to add learnability as an axiom to whatever formal system we use for proofs (a schematic sketch follows below). In a much deeper way, it may actually be that learnability is provably independent of the standard axioms used in mathematics
(the so-called ZFC axioms), but that is going way beyond this introductory explanation (see this if interested).
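What "adding learnability as an axiom" might look like is sketched below, again in Lean and again with names of our own choosing: the statement is left abstract and simply postulated, and any result relying on it carries the assumption visibly instead of hiding it inside a proof.

    -- Abstract placeholders for a hypothesis class and a data distribution;
    -- no attempt is made here to define what learning actually is.
    axiom Hypothesis : Type
    axiom Distribution : Type
    axiom Learnable : Distribution → Hypothesis → Prop

    -- The postulate itself: for every distribution some adequate hypothesis exists.
    axiom learnability : ∀ D : Distribution, ∃ h : Hypothesis, Learnable D h

    -- A result that uses the postulate. Checking it with #print axioms lists
    -- every axiom it depends on, so the assumption remains in plain sight.
    theorem exists_model (D : Distribution) : ∃ h, Learnable D h :=
      learnability D

    #print axioms exists_model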