We have tentatively divided our work into eleven programs: ten cover practical, more immediate applications, and one is more theoretical, establishing a longer-term context and a more formal approach for our work. They are described below:


Using AI to Enhance Democratic Institutions

(and increase trust in AI)

AI today largely means statistical learning from large datasets, so curating data is of the highest importance. We will work to establish public trust in clean, unbiased, correct data. A democracy can be (and currently is) easily subverted by misinformation and disinformation, increasingly created with the use of AI. We will fund projects that aim at labeling false information as such, promote initiatives for online voter education and participation, support the development of a stronger online identity system and of trust in that identity, and strengthen cybersecurity systems.

Democratizing the Development of AI

(and the access to it)

We will work with various institutions to make more of their data public, and with high-tech companies to make more of their algorithms public. We will facilitate people's access to large AI models and encourage the development of smaller ones. We will participate in setting the blockchain foundations for Web 3.0, especially the decentralization of institutional control and authority, and we will enable more people to participate in developing the smart contracts that encapsulate social policy, healthcare policy, housing policy, etc.

ONE THEORETICAL PROGRAM: Context and Formalism

One of the most important results in all of science is the link established between mathematical logic and computer programming between the 1930s and the 1960s. The link is known as the Curry-Howard correspondence, and it is popularly referred to as "programs as proofs". This "programs as proofs" equivalence will be the long-term context for our work. It is very expensive to apply in practice, and it has so far been used only in projects in which software bugs could have catastrophic consequences, such as massive financial losses or unacceptable loss of life.
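To make the correspondence concrete, here is a minimal sketch in Lean 4 (chosen purely for illustration; nothing here commits us to a particular proof assistant). Under Curry-Howard, a proposition is written as a type, and any program of that type counts as a proof of it:

```lean
-- Under the Curry-Howard correspondence, a type is a proposition and
-- a program inhabiting that type is a proof of it.
-- The function below is simultaneously a program (the K combinator)
-- and a proof of the tautology A → (B → A).
theorem programs_as_proofs (A B : Prop) : A → (B → A) :=
  fun a _ => a
```

The type checker accepting this definition is exactly what "proving the theorem" means; the program and the proof are one and the same object.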

Our position is that a buggy (or not properly understood) AI will be the biggest such potentially catastrophic case we will face in our collective future, and therefore "programs as proofs" will matter the most. We may not have the resources to fully implement "programs as proofs" in practice at this time, but it will be on our minds at least. Any small success in this direction will be a treasure.

The first step in this context would be to formalize existing AI laws. This means finding the appropriate system of logic (or, equivalently, the appropriate programming language) into which to translate those laws. Again, it may not happen within a year or within five years, but eventually it will have to happen. We will begin experimenting with selected parts of the upcoming AI Bill of Rights that appear to have the clearest formulations and therefore be the easiest to formalize. A program implementing a formally specified AI law will be nothing but a formal proof of that specification when the specification is viewed as a mathematical theorem.
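As a toy sketch of what that could look like (all names and the rule itself are hypothetical, invented only for illustration): suppose a law requires that every automated decision come with an explanation. The rule becomes a type, and any program of that type is, by Curry-Howard, a demonstration of compliance with it:

```lean
inductive Decision where
  | approve
  | deny

-- An "explained decision": the outcome together with its justification.
structure Explained where
  decision : Decision
  explanation : String

-- The formalized rule: every input (here, a numeric score) must
-- yield a decision accompanied by an explanation.
def Compliant := Nat → Explained

-- Any program of type `Compliant` satisfies the rule by construction;
-- the type checker rejects any system that omits an explanation.
def exampleSystem : Compliant :=
  fun score =>
    if score ≥ 50 then
      { decision := .approve, explanation := "score meets threshold" }
    else
      { decision := .deny, explanation := "score below threshold" }
```

Real laws are of course vastly harder to capture than this fragment; the point is only that once a rule is a type, compliance stops being a matter of opinion and becomes a matter of type checking.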

How does all this context and formalism line up with our mission? Simply put, "programs as proofs" establishes trust. Autocratic institutions have solved the AI trust issue quite simply: there is none. There is no requirement to prove that a surveillance system satisfies lawful constraints, or even to prove that it works as designed. But if AI systems are to help strengthen democratic institutions, trusting them is essential. Because current laws are written in natural language (or a tighter form commonly referred to as "legalese"), most people can understand them and decide for themselves whether to trust them. Trusting AI systems, however, requires some work.

Specifications for AI systems are also currently written in natural language, and they too can be understood. But the programs implementing such specifications in a practical programming language are understood only by the few software developers who specialize in that language. Establishing popular trust in such programs is inherently more difficult. Our hope is that the trust we already place in mathematics will transfer to those AI programs that are formally written, once they are viewed as mathematical proofs.

It will take a lot of time and effort before we are able to fully develop our work in that context and make full use of formal methods. To complicate matters further, AI algorithms are not fully understood at this time, especially neural networks. We do not have a theory of how neural networks learn from data; we just "know" that they do. So, in the absence of such understanding, we will have to add learnability as an axiom to whatever formal system we use for proofs. In a much deeper way, it may actually be that learnability is provably independent of the standard axioms used in mathematics (the so-called ZFC axioms), but that goes well beyond this introductory explanation.

Projects and Posts

Each program consists of a number of projects, which you can see through the Projects menu item on the public menu bar.
In turn, each project consists of a number of posts. Each post has a type, e.g. article, research, code, etc. The types are described here.


On our home page we put out a bold vision. But given the subject matter of that vision, we obviously cannot and should not go it alone. We need to engage, to influence, and to collaborate with many people across all sections of society. And we need to stay positive and have some fun in the process too. Therefore, in all our work and in our community, we will put a premium on high-quality, generous, and respectful interactions and content.

How We Work with the General Public

[draft] Our most important work is public and can be seen through the top menu of our website. If you want to participate in our work, the easiest and quickest path is to join our community. We also plan to organize an annual conference and various symposiums and town hall meetings. Our executives can be contacted via email or social media; the corresponding links pop up as you hover the mouse over the photo of the particular executive you want to engage with.

How We Work with Research Institutions and the Industry

[draft] We are working on setting up a charter for Corporate Membership and University Membership, which will link our contributing experts with leaders in the private sector and in academia. Online meetings will be held with corporate or academic leaders on various topics. The coordination of all these activities is handled by our Director of University Relations and our Director of Industry Relations.

How We Work with the Government

[draft] We are setting up programs to connect members of the U.S. House and Senate and their staffs with our contributing experts, in order to provide expertise and testimony on the use of AI to strengthen our institutions. Online meetings will be held with members of Congress on topics of their choice. These meetings can be arranged at the request of individual members of Congress and are usually coordinated by our Director of Government Relations. In time, there will also be special events for congressional staff, including invitations to SD-AI meetings, dinners, and symposia. These events will be shown on the News & Events page.