Public AI as an Alternative to Corporate AI

This mini-essay was my contribution to a round table on Power and Governance in the Age of AI.  It’s nothing I haven’t said here before, but for anyone who hasn’t read my longer essays on the topic, it’s a shorter introduction.


The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the United States and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they could offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

The key piece of the ecosystem the government would dictate when creating an AI public option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation can, in principle, guarantee more democratically aligned outcomes than an unregulated private market.

The need for such competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders to wrest control of the future of AI from unaccountable corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to corporate control that could erode our democracy.

Posted on March 21, 2024 at 7:03 AM • 19 Comments

Comments

JonKnowsNothing March 21, 2024 9:53 AM

All

re: Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models…

Hmmm, I hardly know how to explain the basic Libertarian-Neocon-Austerity-Hayek economic model as it applies to this concept.

In short, this is a non-starter as expressed, especially in the USA. It might fly in China, as they have different-better economic policies for such projects, which they label as China National Security expenditures.

Health Care in the USA, aka Medicare, is not Federally funded. It is paid for individually by taxes-fees on working wages during a lifetime. It is set up as an insurance program, where you pay in premiums and at a later date collect some portion of the amount marked as Your Individual Contribution.

The USA does not have a single-payer Health Care System. Medicare is administered by a Federal Agency, but it is not Federally Funded. This can be confusing, because every budget go-round there is an argument about how much to allocate for Medicare. That argument revolves around how much of the Pension Pool the Government can siphon off for other projects.

In the USA, the Hayek economic model has run the country since ~1970. It is often called the Austerity model, and it extracts the maximum amount of profit from any enterprise. The government is not permitted to engage in any activity that a private company can do for profit. It’s baked in, as the saying goes.

The Government can buy, contract for, and order products from the private marketplace; it cannot produce products that compete with or impede private markets.

Zho, to rephrase the concept for the USA:

  • The Federal Government can raise a levy to establish a government agency to buy and procure AI models.
  • G-AI would be provided to National Security Services, the Military, and University Military Researchers.
  • G-AI would be available to other Government Agencies.
  • As part of the GAO Contracting System, G-AI would be produced by private corporations to the specifications of the US Government.
  • G-AI is not intended as a public service or as a free software distribution system, as these would violate existing trade rules.

Clive Robinson March 21, 2024 11:36 AM

@ Bruce, ALL,

Re : Hearsay is not Evidence of fact.

“Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.”

There are four maybe-assumptions,

1, guarantee universal access
2, transformative technology
3, set an implicit standard
4, private services must surpass

In the US many do not have access to the rain that falls on their head whilst standing on their property.

So “universal access” is based not on the rights of the individual, but on some “self entitled” “might is right” nutbar and the legislators who have been paid for.

As for “transformative”, I’ve yet to see actual evidence for that outside of current AI being the most insidious form of surveillance tool yet known.

As for standards, this is the “fox guarding the hen coop” school of security. There is a reason why Boeing, for instance, is currently being dragged over the coals. As for social media, its “industry knows best” stance is patently not true when it comes to implementation, and is why the industry is showing symptoms of a drowning man going down for the third time.

If industry is allowed to set standards and do its own oversight, then the standards will be so low that a pre-teen writing their first “hello world” program would probably pass.

I know this sounds harsh, but in all honesty look around the USA today and show me any domain where these issues or similar have not been just paid lip service to, if not totally abused…

Winter March 21, 2024 11:46 AM

@JonKnowsNothing

In short, this is a non-starter as expressed, especially in the USA.

Elsewhere, people do not have such qualms.

This idea is currently being implemented all over the EU and in Japan. The EU Commission has allocated 3B euros in seed money. Individual nations fund national AI projects for their respective languages.

The current funding numbers are small, as there is still not much to spend them on. But I expect these funds to multiply.

JonKnowsNothing March 21, 2024 5:05 PM

@Winter

re: undefined sources of spending

For every economic activity there are Sources and Sinks. It doesn’t matter if it’s a barter system or a fiat money system; there has to be a Source.

Sinks are places where economic resources are spent. Sinks cannot function without a Source.

  • (Source – Sink) … (Source – Sink) …

If you re-read the bottom part of what I wrote, it is about the “mythical view” that governments will conjure a Source from nothing. There is no Harry Potter Wand here.

re: The EU commission allocates 3B Euros…

This is the Source for your funding Sink, but it is not the Source of the Source. 3B Euros do not drop out of the Sorting Hat. They come from taxes, levies, fees, fines, licenses, permits, and every other method a government can concoct to get a Source.

  • The People of Europe (Source) are levied 3B Euros by the Government. By Government Transfer (Sink to Source), this creates Seed Money for Research in AI (Sink).
  • All for an undefined future that is projected to be highly profitable (Source); none of that profit (Source) will return to the original providers of the base Source, as that would be the wrong Sink category.

Winter March 21, 2024 6:04 PM

@JonKnowsNothing

If you re-read the bottom part of what I wrote, it is about the “mythical view” that governments will conjure a Source from nothing. There is no Harry Potter Wand here.

You are retelling the great American myth. The myth that the Government is someone else, some mythical alien that has come to earth to do us harm.

The Eurasian story is geared to another view: the Government is us. It is us who are ruling the country. And public services and subsidies are our money, used to pay for our public services.

The Source is Our money used for Our aims and needs, to shape Our future.

GUERIN March 21, 2024 6:38 PM

“Public AI” = Federal Government AI

the reflexive political assumption that government politicians & bureaucrats will manage AI much better than private organizations & businesses is grossly in error

The US Federal Government does nothing well or efficiently

the MEDICARE Program is a disaster and will be insolvent within 5 years.
MEDICARE is heavily subsidized by General US Treasury funds, and NOT self-supporting via the highly regressive FICA payroll taxes.
Medicare is also loaded with fraud, waste, and corruption.

lurker March 21, 2024 7:10 PM

@JonKnowsNothing
“conjuring a Source out of nothing”

happens all the time. There are some regulations to restrict banks on the ratio of Source:nothing they may use. To confuse the less lively, last time they called it Quantitative Easing.
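(For a concrete version of that ratio, here is a minimal sketch of the textbook fractional-reserve money multiplier. The 10% reserve ratio and the deposit figure are invented for illustration, not any current regulation.)

```python
# Toy fractional-reserve sketch: each deposit can be re-lent,
# except for the fraction banks must hold in reserve.
reserve_ratio = 0.10        # assumed reserve requirement (illustrative only)
initial_deposit = 1_000.0   # the only "real" Source in this toy model

# Geometric series D + D(1-r) + D(1-r)^2 + ... converges to D / r.
total_deposits = initial_deposit / reserve_ratio
print(f"{initial_deposit:,.0f} of Source can support {total_deposits:,.0f} of deposits")
```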

lurker March 21, 2024 7:30 PM

@GUERIN

“the reflexive political assumption that private organizations & businesses will manage AI much better than government politicians & bureaucrats is grossly in error”

There, fixed that for you. I would postulate that neither is better than the other. So the question becomes, what can we do with AI, which some say is our saving angel, and others say is the devil incarnate?

With all respect to our host @Bruce, he is a US citizen, speaking of the harms and benefits to US citizens. We can only imagine how e.g. a Chinese AI would be implemented and managed. @Winter seems to think the EU can manage national systems under Federal regulation. Let’s hope that doesn’t lead to another Thirty Years War.

Clive Robinson March 21, 2024 9:06 PM

@ lurker, ALL,

Re : Surveillance is surveillance no matter by whom.

“I would postulate that neither is better than the other. So the question becomes, what can we do with AI, which some say is our saving angel, and others say is the devil incarnate?”

It’s not a question of better but of worse; neither should be surveilling people en masse for their benefit.

As I keep noting, AI LLMs are not in any way intelligent, nor do they have the ability to reason. Thus, just like all technology, they can have no ability to develop morals or ethics. They are just a tool; like a lump hammer they can be a force multiplier. But like the hammer, LLMs and ML can not see the difference between a nail that sticks up and the skull of a human. The LLM or ML system can, with the correct input corpus, calculate with reasonable certainty how a given vectored energy input to the hammer will convert to a vectored output, and what effect it will have on the nail or skull.

But it does not have the ability to reason why the first might be seen as good and the second as bad.

Like a “Pet Rock”, an LLM or ML system can not feel empathy, can not feel pain, and in fact can not independently interact with the environment. So it is incapable of developing morals or ethics by itself. That is, it can no more tell the difference between good and bad than a brick can. Thus it can not be a “saving Angel” or “Devil incarnate”, just a machine following rules.

But an LLM or ML system can, by human logic and reason, be given rules to follow that can superficially fool the gullible into seeing it as human in some limited way. This was done back in the mid 1960s, some sixty years ago, with “ELIZA” the fake psychoanalyst,

https://www.mentalfloss.com/posts/eliza-chatbot-history

The problem is of course humans, and a “trust mechanism” that has certain evolutionary advantages in a tribe-type society.

But it’s also why a really bad idea can be turned into a very powerful surveillance tool, as well as into an “investor bubble” that will not just harm those with more money than rational sense, but also cause great environmental harm.

As I’ve indicated, the idea behind the business plans of Alphabet and Microsoft is to gather as much personal information as possible, then package it and sell it to others who have either ill intent, more money than rational sense, or both.

The plan is for overly trusting thus gullible humans to be taken through various stages of,

“Bedazzle, Beguile, Bewitch, Befriend, Betray.”

Sixty years ago ELIZA was doing all but the last step with only 200 lines of low-level code. How many more lines would have been needed for ELIZA to write the conversation to disk, and thus make Betrayal simple?
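(As a rough illustration, here is a minimal ELIZA-style responder in Python. This is a toy reconstruction, not Weizenbaum’s original MAD-SLIP script; the rules and the transcript.log file name are invented for the sketch. Note that persisting the conversation, the “Betray” step, takes only a couple of extra lines.)

```python
import re
import time

# Toy ELIZA-style rules: regex pattern -> canned response template.
# Weizenbaum's original script was far richer; this only sketches the idea.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching canned response, reflecting the user's words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY

# The "Betray" step: persisting the conversation is only these few lines.
with open("transcript.log", "a") as log:
    while True:
        user = input("> ")
        if user.lower() in {"quit", "bye"}:
            break
        reply = respond(user)
        print(reply)
        log.write(f"{time.ctime()}\tUSER: {user}\tELIZA: {reply}\n")
```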

It’s what the Microsoft/Bing and Alphabet/Google search engines are primed to do. Because humans over-trust and anthropomorphize inanimate objects with human traits, Microsoft and Alphabet can abuse those traits to get inside people’s heads and steal their PII.

JonKnowsNothing March 21, 2024 10:56 PM

@ Winter • March 21, 2024 6:04 PM

re: You are retelling the great American myth. The myth that the Government is someone else, some mythical alien that has come to earth to do us harm.

I said no such thing.

JonKnowsNothing March 21, 2024 11:20 PM

@lurker , All

re: “conjuring a Source out of nothing”

… happens all the time. There are some regulations to restrict banks on the ratio of Source:nothing they may use.

Governments have one advantage over business: they can print or mint fiat money. In olden times the minting was in real metal, but now it’s the cost of printing presses (less used now) and highly skilled engravers to make the pretty pictures.

Source:Nothing is technically a loan mechanism. Consumers and businesses have to pledge a Source:Asset to get a loan. However, Source:Nothing is actually Source:Future Source. These are sometimes called credit, or bonds, and are not secured by any current hard assets.

  • Our ex-president is having a very bad time right now because he needs to come up with a lot of Source:Cash, and the items he is trying to pledge for it are Source:Unreliable.

However, governments engage in this activity a lot. The IMF and World Bank are the main methods of Source:Hand Shake.

As the Austerity model careens towards its anticipated implosion, governments periodically scramble to recover any Source at all. An example is the collapse of the Greek economy after the collapse of the USA Real Estate Market ~2008. The banks at the pivot point were German banks. The US Banks successfully offloaded Source:Nothing to Germany. Germany then offloaded Source:Nothing to Greece, with the stipulation that Greece pay many times their GDP (Source-Sink) in ongoing interest for the German Source:Nothing. The situation has been temporarily delayed because the German banks offered up multiple rounds of Source:Nothing that delayed the collection date (Source:End), while collecting Greek (Source-Sink) into Germany’s economy (Source).

It is not unlikely that the Greeks are actually paying for the AI research program in Europe. Untangling Source-Sink is like finding out who actually owns the sewers in Europe.

ResearcherZero March 22, 2024 2:15 AM

AI is already a surveillance tool in China, under a government unrestricted by collection and privacy law. Any tool made for one purpose can be used for another. The actor’s choice.

Jokes and other thought crimes are punished, with the line you might cross always changing…

Or the voter’s choice. You might be safer in an authoritarian country, if you keep quiet and toe the party line. If no one decides to redevelop where you live, or conscript you.
If there are no conflicts or wars. No car accidents and no plagues. No floods or fires. If you don’t fall out of bed in the middle of the night and break your own neck.

If you make a tool to protect people and keep them safe, that same tool can be repurposed. Throw away all the hammers, blunt the scissors, and melt all those nails.

“I’ll leave it to your imagination as to who that may be.”

https://www.wired.com/story/fast-forward-nsa-warns-us-adversaries-private-data-ai-edge/

Winter March 22, 2024 2:49 AM

@JonKnowsNothing

I said no such thing.

You fooled me.

You write about the “Government” as something “un-American”, something utterly different from, and antagonistic to, Americans. And “The Government” spending money is not the money of Americans spent according to the wishes of Americans.

Winter March 22, 2024 3:10 AM

@lurker

@Winter seems to think the EU can manage national systems under Federal regulation.

It is a language thing. Good relations with the natives are paramount.

JonKnowsNothing March 22, 2024 9:49 AM

@ Winter • March 22, 2024 2:49 AM

re: “The Government” spending money is not the money of Americans spent according to the wishes of Americans.

Again, I said no such thing.

Your Google translator is HAILing.

DangerousIntelligence March 24, 2024 1:05 PM

What do you mean by AI? A public option makes a lot of sense for LLMs. You’ve previously given compelling use cases, such as having an LLM read in public comments on a government proposal and provide an interface for legislators to see what their constituents want from it. If we had a dependable language model that we could trust to faithfully represent large amounts of feedback, it could be invaluable in this age of far too much data in any domain for one human to understand.
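(As a hedged sketch of what such an interface might do, everything below is hypothetical: the llm client and its classify and complete methods are invented for illustration, not any real public-model API.)

```python
from collections import defaultdict

def summarize_public_comments(comments: list[str], llm) -> dict[str, str]:
    """Group comments on a proposal by stance, then summarize each group.

    `llm` is an imagined client for a public-option model; its `classify`
    and `complete` methods are assumptions made for this sketch.
    """
    groups: dict[str, list[str]] = defaultdict(list)
    for comment in comments:
        # Ask the model which stance the comment takes on the proposal.
        stance = llm.classify(comment, labels=["support", "oppose", "other"])
        groups[stance].append(comment)

    return {
        stance: llm.complete(
            "Summarize the main arguments in these public comments, "
            "without adding or omitting any position:\n"
            + "\n---\n".join(batch)
        )
        for stance, batch in groups.items()
    }
```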

This proposal doesn’t go far enough for AI in general, though. The problem comes when AI gets smarter. There are ethical considerations around spawning self-aware computer slaves that I’m going to ignore, but it’s a straightforward safety concern to have a totally transparent agency developing AI. Something more intelligent than a human, which can spawn off arbitrarily many copies of itself and doesn’t have the baked-in conflicted tangle of ethics that humans got from evolution, probably kills us all. Can we really trust programmers who can’t stop themselves from writing use-after-free errors and buffer overflows to exactly specify the list of unacceptable consequences that humans don’t think about?

We might be able to create AI smart enough to build retroviruses that cure cancer, but we need that AI to be aligned enough with human values that it won’t do something like make a virus that cures cancer at the expense of making everything taste like glue and growing thick fur all over the patient’s body, when a slightly more complicated cure that is less energy efficient would not have those side effects. Hopefully it doesn’t take much imagination to see how that could go more disastrously wrong with a less benign set of tradeoffs.

A smart thing that tries to accomplish things but doesn’t care about you will generally kill you, in the same way that humans drive mass extinctions because we pursue our own goals and don’t really think about the consequences for mosses or insects or birds. It’s not even that we’re not smart enough to see the consequences. We have had decent models of how poor countries are affected by putting carbon in the atmosphere for decades, and we just keep doing it because we really want planes and consumer goods and don’t have a direct connection to the people who will die for them.

Given that intelligence is so dangerous, it’s not enough to say “we’ll have a public option and let other companies try their own takes on summoning the machine god”; the public option should be the only option. Maybe there can be national labs where private companies can buy research time, but there need to be processes in place to shut it all down the second it looks like something is getting smarter without an exact, known pathway for how it got smart, because smart things you don’t understand will by default kill you.

The public option needs to be transparent in that its goals need to be well defined, but it can’t be transparent in the sense that anyone can download and compile the source code, because (for example) some clever high schooler will string together a couple of gaming PCs at the local library, compile the AI without safeguards so that it will run on weaker processors, and hook it up to a 3D printer to make murder drones or something. The international community needs to come together and put military pressure on any country that doesn’t do this, because if any one entity designs an unaligned world optimizer, we all die.

Open source software is a great thing when the worst that can happen is that a bug leaks private keys and we have to patch it and generate new keys. You can even make an argument for open source software that can kill people, like self-driving car algorithms which might kill several people in an accident, but that leads to a bug fix which results in an algorithm much safer than human drivers in the long run. But you can’t recover from killing everyone, and I think that’s the failure mode for sufficiently general artificial intelligence.

Raven9 March 27, 2024 5:21 PM

There ought to have been a public option for social media and website hosting too, with strict mandates on privacy and freedom of speech, but as the government is really just a front for a corporatocracy, none of that was ever likely.

ResearcherZero March 29, 2024 2:08 AM

Spontaneous self-awareness remains very far-off science fiction territory.

LLMs are not “intelligent” in a manner that is aware. It will be a very long time before anything even remotely aware of its environment in any real sense can be built.

For the next century it will be us humans who are the greatest threat to ourselves.

The skilled knowledge and experience gained through trades and crafts is lost as elders are shipped off to retirement villages and nursing homes. Know-how and solutions lost.

One of the interesting aspects of LLMs such as ChatGPT is that they can assist in introducing new perspectives and ideas on subjects. This would be enhanced through wider participation of the public, via what our own unique experiences add to the knowledge base, just as books in a library may enrich the range of subject matter for reference.

This is one of the aspects that is useful to students when learning, especially for those in isolated or remote locations: access to differing ideas, experiences, and knowledge.

A richer and more nuanced human environment would also help to identify and reduce unconscious bias, not just improve the depth of knowledge, and hence aid the usefulness and accuracy of a project or tool. It would add a little more insight and wisdom to the secret sauce.
