Democratize AI


Until recently, only large high-tech companies had access to powerful language models, but there is now a steady trend toward opening these models to the public. Although more can be done in publishing both data sets and pre-trained models, the trend is encouraging, and we will work to expand it. See Hugging Face if you want to experiment with some of these models; a minimal example is sketched below.
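As a sketch of how low the barrier to experimenting has become, the following assumes the Hugging Face transformers library (and PyTorch) is installed; "gpt2" is just one illustrative, freely downloadable checkpoint among the many on the Hub.

    # A minimal sketch: download an openly published model and generate text.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Open language models let anyone", max_new_tokens=20)
    print(result[0]["generated_text"])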

How to Make Artificial Intelligence More Democratic

By Scientific American/ Jan 2, 2022

"GPT-3 demonstrates a broader trend in artificial intelligence. Deep learning, which has in recent years become the dominant technique for creating new AIs, uses enormous amounts of data and computing power to fuel complex, accurate models. These resources are more accessible for researchers at large companies and elite universities. As a result, a study from Western University suggests, there has been a "de-democratization" in AI: the number of researchers able to contribute to cutting-edge developments is shrinking. This narrows the pool of people who are able to define the research directions for this pivotal technology, which has social implications. It may even be contributing to some of the ethical challenges facing AI development, including privacy invasion, bias and the environmental impact of large models."

Google Open-Sources Trillion-Parameter AI Language Model Switch Transformer

By InfoQ/ Feb 16, 2021

"The Transformer architecture has become the primary deep-learning model used for NLP research. Recent efforts have focused on increasing the size of these models, measured in number of parameters, with results that can exceed human performance. A team from OpenAI, creators of the GPT-3 model, found that NLP performance does indeed scale with number of parameters, following a power-law relationship. In developing the Switch Transformer, the Google Brain team sought to maximize parameter count while keeping constant the number of FLOPS per training example and training on "relatively small amounts of data.""

New open-source model that dwarfs GPT-3 aims to free AI from Big Tech labs

By Thomas Macaulay/ July 12, 2022

"A language model bigger than GPT-3 has arrived with a bold ambition: freeing AI from Big Tech’s clutches. Named BLOOM, the large language model (LLM) promises a similar performance to Silicon Valley’s leading systems — but with a radically different approach to access. While tech giants tend to keep their vaunted LLMs hidden from the public, BLOOM is available to anyone for free."


Democratizing access to large-scale language models with OPT-175B

By Susan Zhang, Mona Diab, Luke Zettlemoyer/ May 3, 2022

"In line with Meta AI’s commitment to open science, we are sharing Open Pretrained Transformer (OPT-175B), a language model with 175 billion parameters trained on publicly available data sets, to allow for more community engagement in understanding this foundational new technology. For the first time for a language technology system of this size, the release includes both the pretrained models and the code needed to train and use them. To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license to focus on research use cases. Access to the model will be granted to academic researchers; those affiliated with organizations in government, civil society, and academia; along with industry research laboratories around the world."

Top Open Source Large Language Models

By KDnuggets/ Sep 14, 2022

"In 2019, there was a big boost in the popularity of Language Modelling thanks to the development of transformers like BERT, GPT-2, and XLM. These transformer-based models can be adapted from a general-purpose language model to a specific downstream task which is known as fine-tuning. The process of fine-tuning requires much fewer data than training the language model from scratch. That’s one of the reasons that makes transformer-based modes remarkable compared to previous approaches used in Language Modelling."

OSTP Issues Guidance to Make Federally Funded Research Freely Available Without Delay

By The White House Office of Science and Technology Policy/ August 25, 2022

"Today, the White House Office of Science and Technology Policy (OSTP) updated U.S. policy guidance to make the results of taxpayer-supported research immediately available to the American public at no cost. In a memorandum to federal departments and agencies, Dr. Alondra Nelson, the head of OSTP, delivered guidance for agencies to update their public access policies as soon as possible to make publications and research funded by taxpayers publicly accessible, without an embargo or cost. All agencies will fully implement updated policies, including ending the optional 12-month embargo, no later than December 31, 2025."