Rethinking Democracy for the Age of AI

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

We need to create new systems of governance that align incentives and are resilient against hacking … at every scale. From the individual all the way up to the whole of society.

For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.

We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion.

And finally, this is a work in progress. I’m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. I think by writing; what you’re going to hear is the current draft of my writing—and my thinking. So everything is subject to change without notice.

OK, so let’s go.

We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. “Filter bubble” segmentation. And technologies for honing persuasive messages.

The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don’t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.

Those systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That’s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.

James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology—social media—breaks them both.

So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias? Things that impair both the problem solving and feedback mechanisms.

That’s the real issue. It’s not about misinformation, it’s about the incentive structure that makes misinformation a viable strategy.

This is problem No. 1: Our systems have misaligned incentives. What’s best for the small group often doesn’t match what’s best for the whole. And this is true across all sorts of individuals and group sizes.

Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.

But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, Senate and congressional elections. I don’t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that’s how you impose your own incentive structure on the whole.

More generally, the cost of our market economy is enormous. For example, $780 billion is spent worldwide annually on advertising. Many more billions are wasted on ventures that fail. And that’s just a fraction of the total resources lost in a competitive market environment. And there is other collateral damage, which is spread non-uniformly across people.

We have accepted these costs of capitalism—and democracy—because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today’s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country on the planet. Microsoft would be the tenth.

Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product’s success depends on my competitors’ failures. We think zero-sum internationally. Arms races and trade wars.

Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everyone pursues their own self-interest, the result will approach everyone’s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems—complex, wicked, global problems—that don’t work that way. We have interacting groups of problems that don’t work that way. We have problems that require more efficient ways of finding optimal solutions.

Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.

The latter is problem No. 2: What I refer to as “hacking” in my latest book: “A Hacker’s Mind.” Democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated—they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.

In today’s society, the rich and powerful are just too good at hacking. And it is becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don’t get patched.

This is bad for society, but it’s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn’t a new problem, but today hacking scales better—and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They’re designed to be reactive, because traditionally there was only so much damage a single person could do.

We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: driver’s licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding researching, developing, producing and dispensing. We have all these regulations because this stuff can kill you.

The general term for this kind of thing is the “precautionary principle.” When random new things can be deadly, we prohibit them unless they are specifically allowed.

So what happens when a significant percentage of our jobs are as potentially damaging as a pilot’s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.

And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating “micro-legislation” that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.

So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.

Or, to put more words on it: Misaligned incentives encourage local optimization, and that’s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.

OK, let’s get back to the notion of democracy as an information system. It’s not just democracy: Any form of governance is an information system. It’s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working and then makes corrections accordingly.

Historically, there are many ways to do this. We can have a system where no one’s preference matters except the monarch’s or the nobles’ or the landowners’. Sometimes the stronger army gets to decide—or the people with the money.

Or we could tally up everyone’s preferences and do the thing that at least half of the people want. That’s basically the promise of democracy today, at its ideal. Parliamentary systems are better, but only at the margins—and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.
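That informational poverty can be illustrated with a small sketch. The ballots below are hypothetical, invented purely for illustration: a winner-take-all tally sees only each voter’s first choice, and can crown a candidate who would lose head-to-head to a rival once the full rankings are consulted.

```python
from collections import Counter

# Hypothetical ranked ballots: each voter lists candidates from most to
# least preferred. A plurality tally only ever sees the first choice.
ballots = [
    ("A", "C", "B"), ("A", "C", "B"), ("A", "C", "B"), ("A", "C", "B"),
    ("B", "C", "A"), ("B", "C", "A"), ("B", "C", "A"),
    ("C", "B", "A"), ("C", "A", "B"),
]

# Winner-take-all: count first choices only, discarding the rest of each ballot.
plurality_winner = Counter(b[0] for b in ballots).most_common(1)[0][0]

def beats(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Condorcet winner: beats every other candidate head-to-head, using the
# full ranking information that the plurality tally throws away.
candidates = {"A", "B", "C"}
condorcet_winner = next(
    (c for c in candidates if all(beats(c, o) for o in candidates - {c})),
    None,
)

print(plurality_winner)   # A — wins the winner-take-all tally with 4 of 9 votes
print(condorcet_winner)   # C — beats both A and B in head-to-head majorities
```

The point isn’t that Condorcet methods are the answer; it’s that even this toy election shows how much preference information a winner-take-all ballot discards.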

I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I’m focusing on how we implement it.

Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it’s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.

To be fair, the democratic republic was the best form of government that mid-18th-century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness; our ideas about legitimacy and authority have evolved a lot since the mid-18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.

But democracy is not a static system, it’s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.

Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone’s preferences.

Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.

Why can’t we have a game where everybody wins?

This has never been done before. It’s not capitalism, it’s not communism, it’s not socialism. It’s not current democracies or autocracies. It would be unlike anything we’ve ever seen.

Some of this comes down to how trust and cooperation work. When I wrote “Liars and Outliers” in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They’re person to person, based on human connection and cooperation. Laws—and especially security technologies—are newer systems of trust that force us to cooperate. They’re socio-technical systems. They’re more about confidence and control than they are about trust. And that allows them to scale better. Taxi driver used to be one of the country’s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don’t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, which align local and global incentives.

In today’s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with compelled trust in processes and institutions. That scales better, but we lose the human connection. It’s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.

But here’s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.

Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. But that’s a system that scales to large groups of strangers.

Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It’s how families function: people work according to their abilities, and resources are distributed according to their needs.

I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don’t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.

This runs headlong into historical norms about sovereignty. But that’s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the “drug war”—or climate migration—are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users world-wide than Christianity.

We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive. And at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it’s OK that we waste some of the pie in order for it to grow. That doesn’t make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense. It’s a metric better suited to the environment we’re in right now.

Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.

In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn’t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.

Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they’re supposed to govern. If they’re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.

This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: “Sorry. That’s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.” This is also essential in a technological world: one that is moving at unprecedented speeds, where getting it wrong can be catastrophic and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking—and also red teaming. In this context, both journalism and civil society are important checks on government.

I want to quickly mention two ideas for democracy, one old and one new. I’m not advocating for either. I’m just trying to open you up to new possibilities. The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.
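The liquid-democracy mechanic described above can be sketched in a few lines. The names and delegations below are hypothetical, and the sketch ignores real-world complications (re-delegation mid-vote, privacy, coercion) beyond detecting simple delegation cycles:

```python
# Each voter either votes directly (None) or delegates their proxy to
# someone else. A representative's vote strength is the number of proxies
# that ultimately flow to them, following delegation chains transitively.
delegations = {
    "alice": "dana",    # alice trusts dana to vote on her behalf
    "bob": "dana",
    "carol": "erin",
    "dana": None,       # dana votes directly
    "erin": None,       # erin votes directly
    "frank": "carol",   # frank -> carol -> erin
}

def resolve(voter):
    """Follow the delegation chain to whoever actually casts the ballot."""
    seen = set()
    while delegations[voter] is not None:
        if voter in seen:        # delegation cycle: this proxy is lost
            return None
        seen.add(voter)
        voter = delegations[voter]
    return voter

# Vote strength: one proxy per voter, accumulated at the end of each chain.
strength = {}
for voter in delegations:
    rep = resolve(voter)
    if rep is not None:
        strength[rep] = strength.get(rep, 0) + 1

print(strength)  # {'dana': 3, 'erin': 3}
```

Note how the transferable proxy makes representation continuous rather than periodic: any voter can re-delegate at any time, which is exactly what makes the scheme both more responsive and a new surface for hacking.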

Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They are both democracies, but in new and different ways. And while they’re not immune to hacking, we can design them from the beginning with security in mind.

This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.

We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation.

Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? Could it do a better job than a human at voting the preferences of its constituents?

Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren’t swayed by misinformation. It’s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.

But slow down: This is rapidly devolving into technological solutionism. And we know that doesn’t work.

A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it’s easy. I’m happy to let my thermostat automatically turn my heat on and off or to let an AI drive a car or optimize the traffic lights in a city. I’m less sure about an AI that sets tax rates, or corporate regulations or foreign policy. Or an AI that tells us that it can’t explain why, but strongly urges us to declare war—right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.

And learned helplessness is an important consideration. We’re probably OK with no longer needing to know how to drive a car. But we don’t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the process of democracy, not replace them.

So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. And more technologically efficient preference collection might not be better, even if it is more accurate.

Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, and helping people reach consensus. But it is an independent good that we humans remain engaged in—and in charge of—the process of governance.

And that value is critical to making democracy function. Democratic knowledge isn’t something that’s out there to be gathered: It’s dynamic; it gets produced through the social processes of democracy. The term of art is “preference formation.” We’re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.

We’re never going to remove conflict and competition from our political and economic systems. Human disagreement isn’t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it’s the process of making all of this work that makes democracy possible.

So we actually can’t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.

There’s a lot missing from this talk. Like what these new political and economic governance systems should look like. Democracy and capitalism are intertwined in complex ways, and I don’t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven’t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven’t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin’s trilemma also matters here: that you can’t simultaneously build systems that are secure, distributed, and scalable.

I also haven’t given a moment’s thought to how to get from here to there. Everything I’ve talked about—incentives, hacking, power, complexity—also applies to any transition systems. But I think we need to have unconstrained discussions about what we’re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they’re not far-off science fiction.

I know we can’t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.

To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They’re insecure and they’re wasteful. They don’t generate optimal outcomes. At the same time we’re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance; more cooperation and less competition and at scales that are suited to today’s problems and today’s technologies. With security and precautions built in. What comes after democracy might very well be more democracy, but it will look very different.

This feels like a challenge worthy of our security expertise.

This text is the transcript from a keynote speech delivered during the RSA Conference in San Francisco on April 25, 2023. It was previously published in Cyberscoop. I thought I posted it to my blog and Crypto-Gram last year, but it seems that I didn’t.

Posted on June 18, 2024 at 7:04 AM

Comments

K.S. June 18, 2024 8:05 AM

Misinformation is not the primary source of ‘fringe’ (unpopular, counterproductive) ideas making it into policy. Political lobbyists are, with their legal bribery via political contributions. Just look at right-to-repair as an example: there is overwhelming popular support for it, yet legislation gets watered down and stalled all the time.

K.S. June 18, 2024 8:20 AM

“They tend to be rights based, not permissions based.”

As someone who grew up in the Soviet Bloc, the alternative to a rights-based system is highly undesirable on the individual level. A move away from a rights-based system in a technocratic society is aptly satirized in the Warhammer 40K universe. Personally, I do not wish to live in a Soviet-like or W40K-like society even if it might be more efficient on the population level.

Any criticism of the existing rights-based system misses the point – its main purpose is not optimization of economic output, but minimizing the downsides of the human condition: the human tendency, in any social context, to form hierarchies that turn oppressive and tyrannical. Even at a very local level, like with a homeowner association, permission-based setups tend to go off the rails and into tyrannical absurdity.

Michele San Michele June 18, 2024 8:50 AM

I love you, Bruce, not personally – we’ve only chatted a couple of times – but for your brilliance and dedication to exactly this kind of well-reasoned alarm. Cory Doctorow’s there, too.

I think, though, that this is not an era of well-reasoned debate and technolateral thinking. I think we’re past that moment, and by a significant margin – the tipping point has tipped and the rest is, however temporarily, in the hands of social gravity writ large. I think that what’s coming isn’t an opportunity in the conventional sense but a reset – a maximum, global-scale reset that will solve the runaway technological component by main force. We’re going to paint the rain red and poisonous and everyone everywhere will get a more or less equal share of that poison. (I have a phrase: don’t take my word for it, ask the dinosaurs.)

I’ve lived through actual hurricanes, a plurality thereof. Like this era, they don’t come in easily managed sizes and they don’t give a damn about our plans. They have one job: they’re a reset.

This isn’t hopeless giving up or quitterism, just a practicality in recognizing how tall and fast the wave is, how strong the wind, and how horizontal and stinging is the rain. And acknowledging that nobody is going to ride it to success. It’s too big for that, and we’re too small and too fragile and too selfish and too late.

Hope to see you on the other side of the storm, both of us whole and hale and healthy. Maybe we’ll get to restart things better.

I wish us both – all of us, really, give or take a few – something better than mere survival. And a new start.

Nolan June 18, 2024 12:06 PM

“We need to create new systems of governance”

Oh, right — we all want to change the world and have grand ideas to do so, sorta.

But this lonnng meandering speech transcript never gets around to describing this new government system, even in general terms.
Apparently it will somehow properly ‘align incentives’?

Very obvious that the author has not coherently organized his thoughts … and is severely hindered by a simplistic view of government, democracy, economics, and world history.

Clive Robinson June 18, 2024 12:31 PM

@ Bruce, ALL,

First off, I have seen this coming for some time, and have commented on aspects of it, some of which got removed; such is the fate of being a weather vane: sometimes lightning strikes.

I don’t think there is a paragraph in there that I could not comment on and offer a different perspective to also consider or different opinion based on the point of view I stand at.

But I’m not going to write a point v. counterpoint because, one, I’d monopolise this thread for a very long time and, two, I know neither of us would be right in just a few weeks.

But I’ll note one thing that every one should be aware of,

“There is no longer certainty in any depth, nor can there ever be again.”

People do not realise just how much fear uncertainty creates and all that follows on from it.

Which brings us to,

“James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate.”

Whilst Madison’s two points are not wrong for the time, they miss the point for current times.

If you search for “army of one” on this blog I’ve talked about the very real difference between a tangible physical universe and an intangible information universe.

Back in Madison’s time what a physical representative could do was limited by locality. They could only talk to maybe a hundred or so people at a time in a field or town square and probably only once in any election cycle.

I pointed out that traditional physical crime needed the criminal to be at the place at a given time and that limited their abilities.

But cyber crime is not so limited: a single individual could, with planning, attack thousands of places all at the same time. Which is a reason I noted that traditional insurance for a physical world was going to fail, and fail badly, in an informational world. We’ve come close a few times, but we’ve been lucky in that “the planning” was in some way defective. I think it’s safe to say that planning will improve with time.

It’s also the reason I talk about why the near-total pervasion of communications needs to be limited. Electric typewriters that were in use through to the 1990s were not connected and not vulnerable. Likewise the early personal computers. The modern highly connected computers that replaced those early personal computers around the turn of the century are seeing vulnerabilities appear faster than a human can read a brief description of each one.

What does this have to do with politics? Well the aim is to get the message out there, form a dialog and give the feeling of a dialog on an almost personal basis.

Anyone who looked at the technical side of the original Trump campaign can see that he or his advisors understood this, as did those who paid the likes of Cambridge Analytica for “services”.

The recent Indian elections and some EU elections show that AI can make people targets of what feels like a “personal connection”.

Thus, unlike in Madison’s time, speed and distance have been removed, and this makes the “filter effect” not just very weak; it can make it work the other way. Which also means those with extremist views are not just ‘in the game’ but ‘front runners’ over those going down the traditional path.

In effect, 2010-2015 saw traditional politics die.

That is,

Certainty fell to the army of one.

And that genie is not going to go back in the bottle.

Daniel Popescu June 18, 2024 1:08 PM

@Bruce & Clive – thank you, beautiful article and opinion.

@Clive – welcome back and…I never thought that I would see Mr. Farage coming above ground again :).

Winter June 18, 2024 1:21 PM

@Daniel Popescu

I never thought that I would see Mr. Farage coming above ground again

There was an extremely funny piece in our local newspaper about Farage.

I’ll share the automatic (AI) translation in full:

Nigel Farage, the original Brexiteer, back in British politics since June 3 as leader of Reform UK: “Who’s waking me up way too early, after a drinking party with hectoliters of Brexit beer?”

Ursula von der Leyen, President of the European Commission: ‘Brussels is calling you, dear Nigel. A special jury has declared you the first winner of a new prize named after me: the Ursula Medal for indirect services to European unification.’

Farage: ‘What!? Am I half drunk, or is it you? Have you ever searched for Nigel Kennedy or Nigel Short’s number on your iPhone? You called Nigel Farage with your old lady fingers! F-a-r-a-ge: you know, the brutal stock trader who became famous with tirades against the European monster project, who provoked the Brexit referendum.’

Von der Leyen: ‘I mean one hundred percent Nigel Farage. I am always down-to-earth and precise.’

Farage: ‘What on earth did I do wrong to be awarded by the EU? I have been swearing at the EU in the European Parliament at Europe’s expense for twenty years. I have called the EU President ‘a wet mop with the appearance of a second-rate bank employee’ and the Commission ‘an association of dishcloths’. I dropped my pants in Brussels catering establishments, I peed next to Manneken Pis. Thanks to me, the British are out of the EU.’

Von der Leyen: ‘The Ursula Medal is a prize for indirect achievements. We think that your person and Brexit have had a deterrent effect. Not a single important national populist politician talked about leaving the EU during the most recent European elections. You no longer hear Marine Le Pen talking about a Frexit, and Mr Wilders no longer talking about a Nexit. Giorgia Meloni has even become a friend of mine.’

Farage: ‘Just wait until serious work is done on Brexit. Until now there have been weak idiots at the helm in London. I will be elected to the House of Commons on July 4.’

Von der Leyen: ‘I’ll tell you something: people in the Commission spontaneously started clapping when they heard the news about your comeback. You’re going to take votes from the Conservatives, Labor is going to win those elections. Since the resignation of Jeremy Corbyn, your secret ally in the Brexit project, the pro-European wing of Labor has dared to speak out again. The new British government will probably do everything it can to further mitigate the consequences of Brexit. In Brussels we will have to deal with a Great Britain that joins the European Union, but no longer has a say in it.’

Farage: ‘Are you saying that you are secretly happy that I dragged the British out of the EU?’

Von der Leyen: ‘Of course I will never say that officially. But I sometimes think: what if the Remain camp had won that referendum? Great Britain gave Brussels not only alcoholic charlatans, but also lucid Eurosceptics. David Cameron had won a package of concessions for that referendum. If Remain had won, a powerful country could have been obstructive within the EU all along. Brexiteers had continued to cause trouble in the European Parliament and had inflamed Frexiteers and Nexiteers.

‘I will read you something from the Ursula Penning jury report: The more dubious the opponent, the more credible the EU. Nigel Farage is a typical political con man. He has the integrity of a shrewd used car dealer. The kind of person who grins when customers, to whom he has just sold a car, stand still with the engine smoking. Nigel Farage has made an above-average contribution to undermining the credibility of more serious opponents of the European Union.’

Farage: ‘Is there actually a serious amount of money attached to that Ursula Medal?’

finagle June 18, 2024 3:37 PM

@Clive I concur I could also comment on pretty much the whole essay paragraph by paragraph.

I’ll also limit my comments though.

The first problem I see is that the starting point is wrong. We’re not in a democracy, we’re in an oligarchy, and the elected ‘representatives’ represent their party, and whoever sets policy in that party. They do not represent their constituents except accidentally. They are elected or ousted based on the perception of their party, usually in a small number of constituencies and based on a very limited understanding of what their party might achieve. I think Liz Truss should have made a lot of people wake up to this.

The second point I’d make is that some of the policies of those parties are set by the companies that should be regulated. From the environment that is being impacted by AI clusters and resource harvesting for next year’s must-have device (must-have if you want to buy food, get medicines, vote, have a social life…) to the control of dissemination and discovery of information. It is close to impossible to function as a small business without paying a tax, not to the government, but to Google, Meta or Amazon.

These companies are de facto unregulated, and the way to regulate them is not agility in changing the laws; it is enforcing robust, well-written laws. GDPR was pretty well written; it has been poorly enforced. If it had been enforced correctly, shareholders in Meta would be facing bankruptcy proceedings. If copyright were enforced correctly, OpenAI would have been closed down on day one. GDPR has some intelligent ideas. Cookies have been subverted and superseded by local storage and other browser-manufacturer tricks to avoid legislation, so GDPR specifies technologies by class, not name. We need laws like that, where companies and individuals can equally be brought to book under existing legislation. Tech-company-specific loopholes like safe harbour need closing, if not completely then significantly.

I’d argue we need fewer laws, unambiguous, robust laws, with no loopholes and then government becomes moot. Governments are then responding to outlier events and existential threats, not scrambling over each other about insignificant tax cuts, immigration policies and who told what lie. The current election is not being fought on what is the best government, but whose turn it is at the trough. The options on offer do not include getting rid of the political parties, and while they persist, this will be an oligarchy.

As for how to get from here to there. If you live in an oligarchy, how do you create change, when the gatekeepers of change have a vested interest in preventing anything that undermines their rule? And the tech companies pulling strings in the wings. I’ve not given up on the idea change is possible, but I do not know what catalyst we will need to take a desire for better government from an ideal to a reality. Sadly I think it’s likely to be world war or a genuine existential threat before change is possible. And while I hope for change, I hope not to see those catalysts in my lifetime.

mark June 18, 2024 4:59 PM

Bruce,

Thanks for that good, thoughtful view. I agree with a lot. Perhaps one thing we could use AI for – and this would ONLY be if it can show how it arrived at its conclusions – is running a simulation of a proposal and its several likeliest outcomes. That would be on both a personal and a governmental level.

I have the latter in my fiction, future timeline.

There is, however, one major exception I take: NO, an AI can’t drive a car. Feel free to contact me, and I’ll point you on google maps to a street I drive on regularly, and any “self-driving car” would fail, completely and totally. Three lanes, one parking, two-way, and buses use it. And there’s no center line a good bit of the way.

Winter June 18, 2024 5:21 PM

@finagle

I’d argue we need fewer laws, unambiguous, robust laws, with no loopholes and then government becomes moot.

You are arguing for bugless code and want to remove human decision making.

That has been the legal program of the US for the last century. It is exactly what brought the US into this mess.

Simply said, there is no code without bugs and loopholes (read @Bruce’s book on hacking), and no set of rules will be able to capture reality to the extent that human decisions are not needed anymore.

It has been tried for a century and it never worked. And it cannot work.

Winter June 18, 2024 6:01 PM

@echo

Still, it’s very useful gathering all thoughts into one place. Now we know a whole lot of nothing. Impressive!

You are too harsh.

Besides, without errors, no one would learn anything. And you know the proverb:
Anybody can learn from their errors, but the smart people learn from other people’s errors.

Anyhow, if we had to wait for the experts to chime in and tell us how it really is, this would be a very dull place.

finagle June 18, 2024 6:54 PM

@Winter

Yes I’m arguing for bugless code. No I’m not arguing to remove human decision making, I’d argue that human decision making is completely orthogonal to well written laws.

I think we disagree on what law should look like. I don’t think you can say given the complexity of US laws that making simple unambiguous enforceable laws is something they have even remotely tried. Let alone had as a program for a century. I think the tendency is to try to legislate for every possible conceivable case, and we should not. We should make the law simple and obvious. Have you read any laws passed in the US or UK? They are far from that.
When I say simple I mean simple. Moses tablets simple. Thou shalt not kill. I’ll accept that needs qualification, in so much as it applies to humans, domestic animals, protected species. It doesn’t protect disease causing viruses, vermin, or arguably politicians… The definitions do not need to be complex or endlessly tweaked.
Why do we need insanely, incomprehensibly complex definitions and laws that are unenforceable (Digital Economy Act), morally wrong (Rwanda deportation bill) or just badly written (Sexual Offences Act 2003)?

As for human decision making can you tell me what human decisions are involved in saying ‘thou shalt not kill’? That has been morally accepted since biblical times. Humans will write the intent of the laws. Humans will decide whether at any point to obey them. How is having simple human understandable, easy to enforce laws undermining the right to human decision making? Rather I would say it better supports it than having laws few people understand or could point to. And those laws do not have to be absolute and inflexible. Thou shalt not kill, except where there is a reason sufficient to be understood and approved by 12 of your peers. That allows for self defence, it allows for killing one person to save many in a crisis. It allows for failure of judgement under pressure, or anything a human can then decide on. According to the mores of the time.

The current UK law is unknowably huge and complex. It relies on Acts of Parliament which are not law (weird but true) until they are enforced and have been tested in court against the massive back catalogue of precedents. Precedent and judgement make up most of the actual law as practised. However we all know, you don’t kill, you don’t steal, you pay your taxes, drive on the left. What we all know (whether we obey it or not) is the law individuals obey and for the most part it isn’t complex. So let’s clear out all the complexity that we can’t see and that is not necessary. Same for companies. Make the law simple, knowable and enforceable.

I totally disagree with the conclusion that it cannot work. I concede it cannot work if we carry on from where we are, without a step change, but allow for a step change and all bets are off. Draft a new simple set of laws; test them against LLMs. Translate them into code and unit test them. Behaviour-test them. Test and test and test until there is the minimum code/law to make the society we want to live in, then throw out the old ones. New laws, or changes to laws, need testing. When we find a loophole or bug, TEST whether it really is one, then modify the law to cover it. Testing all the while.
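
Finagle’s “translate them into code and unit test them” idea can be sketched, purely illustratively. Everything below (the rule, its names, the 365-day limit) is invented for this example; it is not any real statute:

```python
# Illustrative sketch only: encoding a hypothetical "simple law" as a
# testable predicate, in the spirit of "translate them into code and
# unit test them". The rule and all names here are invented.

from dataclasses import dataclass

@dataclass
class Retention:
    purpose_active: bool   # is the original purpose for holding the data still live?
    days_held: int         # how long the data has been held

MAX_DAYS = 365  # hypothetical statutory limit

def retention_lawful(r: Retention) -> bool:
    """A deliberately simple, unambiguous rule: data may be held only
    while its purpose is active, and never longer than MAX_DAYS."""
    return r.purpose_active and r.days_held <= MAX_DAYS

# "Unit tests" probing the rule for loopholes before it is enacted.
assert retention_lawful(Retention(purpose_active=True, days_held=30))
assert not retention_lawful(Retention(purpose_active=False, days_held=30))
assert not retention_lawful(Retention(purpose_active=True, days_held=366))
```

The point of the sketch is the workflow, not the rule: the law is short enough to read in one sitting, and the “tests” probe for loopholes before the rule takes effect rather than after.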

What makes it impossible is that politicians will not change a system that protects them and whose loopholes support them. Rather than it cannot work, it will never be allowed to work. There are too many interests who do not want it to.

Another analogy that may be useful. Newton’s Laws of Motion are inaccurate. They do not model reality finely, but they do model it well enough for a good approximation in many cases; they were good enough to get us to the Moon. Why do laws need to reflect reality at infinitesimal levels of detail? Reality may be fractal; law does not have to be. Laws should provide a moral code for us to live by, and to create the society we want to live in. How finely do they need to model it? The more we attempt to refine them, the harder they are to apply, maintain, debug. Relativity refines Newton, but while most schoolchildren can grasp Newton, few can grasp Einstein. We need a Newtonian legal code. At present, what we have Einstein couldn’t fathom.

TL;DR – I disagree with your conclusions, and some of Bruce’s. Somewhat.

Clive Robinson June 18, 2024 7:01 PM

@ Daniel Popescu, Winter, ALL,

“I never thought that I would see Mr. Farage coming above ground again :)”

Damn, and there I was thinking I’d hammered the stake in good and proper 😉 Just shows you cannot keep a bad bloodsucker down…

The reason he’s still around is he gives “false hope of certainty” to those who can not live without the feeling of it in times of uncertainty.

The truth is I can not think of any of the grandiose promises he uttered that have come close to being anything other than “wrong beyond belief”.

Back in the early 1990’s I was doing my MSc in Information systems design, and one of the “readers” was giving one of their high speed talks.

And I unfortunately “crashed it” with a very simple question,

“What is the value of information in transit?”

Back then even “High Speed Trading” was effectively an unknown, so no one had given any real thought to it.

For those who have not thought about some of the odd aspects, consider the bank notes and coins in your pocket. Few realise they in effect earn interest, not for the person who holds them but for the bank that printed them. Bank notes and coins have not been “tangible worth” in quite some time; what they are is “tokens” that “hold information” in circulation.

Thus you get to see the parallel with information in transit that is not held in a physical token. As such the only real limit on it is the speed of light in the transmission medium.

This means that the information starts suffering not just from relativistic issues but the fun of in effect being at a “cusp” mathematically.

I could go on but the point to note is that due to amongst other things “time cones”, there can not be any certainty in a transaction etc, just a low grade probabilistic prediction.

We joke about “Chaos in Banking” but Mr Farage was a protagonist of apparent “Chaos for you” but “Certainty for him” and his cronies.

It would not be legal to call him a crook because as far as I’m aware he’s not been convicted. In part because there has to be legislation to define a course of criminal activity, and lobbyists and the like stop that being written…

But under the “Duck Test” of “social norms” I think many would regard him as neither honest nor upright, and untrustworthy at best.

In what now appears the distant past it was suggested that “Farage” get a dictionary definition equivalent to “committing unnatural sex acts in public places / car parks”,

https://www.urbandictionary.com/define.php?term=Farage

Personally I would not hold anything against him… As I would not want to hold a fomite.

https://www.wordnik.com/words/fomite

Winter June 18, 2024 10:51 PM

@finagle

Yes I’m arguing for bugless code.

That already defeats your plan. Bugless code does not exist, never has, never will. Our host writes books about this very fact.

As for human decision making can you tell me what human decisions are involved in saying ‘thou shalt not kill’?

It starts with “has someone been killed?”, then “was it intentional?”, and continues with “was it premeditated?”. Then we start the long process of proving who did it. What is valid evidence, how was it obtained, was it manipulated, is it complete?

This is just the “simplest” of crimes, someone died or not.

Theft, fraud, embezzlement and all the other crimes that involve the transfer of ownership are built on concepts of who owns what, when she owns it, and when and how ownership is transferred, etc. In the complex financial system of an industrial society there are no simple rules or simple decisions.

Not to try to defend the current US or UK situations. The US legal system is dysfunctional and the UK is utterly opaque to me. It is just that modern industrial societies are very complex, with complex interactions and responsibilities [1], and they need laws and rules to match the complexity of life.

If you want to have an entertaining read, pick “The death of common sense”. It documents where “laws should not need human decisions” ends.

[1] That is not just “modern” society. A good historical read is “Money changes everything”
‘https://www.goodreads.com/book/show/26597296-money-changes-everything

Winter June 18, 2024 11:26 PM

@finagle
Re: The current UK [USA] law is unknowably huge and complex.

I think there is a fundamental difference between the common law of the US and UK and the civil law systems in other countries.[1]

The civil law countries periodically overhaul their penal and civil codes. For instance, Dutch Civil law has been completely overhauled between 1970-1990, with more additions since then. Other countries have seen comparable changes. These changes include weeding out and modernizing redundant and outdated legislation.

I know of no such updates in US and UK law. The USA and UK still regularly beat citizens with 19th century laws on the books. Recent examples in the news are the Comstock act and the “goods or chattels” ruling in Fairfax Virginia.

[1] Just USA and UK, I have no idea how this works in other common law countries.

finagle June 19, 2024 5:03 AM

@Winter

The UK removed several hundred ‘defunct’ laws during the 1990s under ‘New’ Labour. Things like every yeoman being required to do archery practice every Sunday. However, since then they’ve added lots of moribund Acts like the Digital Economy Act, which will never become law because it is unenforceable.

finagle June 19, 2024 5:19 AM

@Winter, @Bruce

It is absolutely possible to write bugless code. I completely agree that the more complex a system is, the more likely it is to have bugs, but that does not make it impossible to debug, just harder. The mere fact of having written a book positing otherwise, and arguing it, does not make it true. I’ll agree it’s exceptional, but it is absolutely 100% not impossible. Though to someone who has never done it, it may seem so hard and unlikely as to be.

I have written, and deployed, code which has no bugs and does non-trivial tasks. Actually, in the real world. It’s not easy, but I never did computer science, so I never learnt the bad habits I’ve seen taught since, and I learned about specifications and testing in another discipline where they matter. Where bugs kill people, not just inconvenience them.

Clive Robinson June 19, 2024 5:47 AM

@ finagle

“The UK removed several hundred ‘defunct’ laws during the 1990s under ‘New’ Labour. Things like every yeoman being required to do archery practice every Sunday. However, since then they’ve added lots of moribund Acts like the Digital Economy Act, which will never become law because it is unenforceable.”

A couple of points,

Firstly, the way you put it, you make it sound like “New Labour” were responsible for the Digital Economy Act(s)…

Secondly, there were two Digital Economy Acts and they were both failures.

They were both “argued out” for many good reasons during the usual parliamentary sessions. They then slipped in through the back door via the “wash-up” process at the end of the parliamentary term, because they appeared to be about “election promises” of the then incumbents.

As you note, for various reasons they are mostly unenforceable, three of which are:

1, Politicians have next to no clue about how information communications and the economy work. Visible by the fact they can not even produce a 100,000ft model of their respective functioning.

2, Much of the legislation was at the instigation of the “copyright industry” that is a parasite that harms not just the consumer but the artists as well.

3, Parts of the legislation were designed to increase not decrease the “surveillance economy” none of which would benefit the UK as an independent state, and would actively harm most adults in the UK.

Winter June 19, 2024 6:55 AM

@finagle

It is absolutely possible to write bugless code.

I think everyone here is anxiously awaiting a proof. None has been delivered yet. We see that the number of software bugs found over time in use shows an unending tail suggesting that bugs will be found forever, albeit probably less and less over time if the code is not changed.

I have written, and deployed code which has no bugs and does non trivial tasks.

3 years ago a 24-year-old bug was discovered in the TCP implementation of the Linux kernel. [1] That looks like a piece of code that has been stress tested and studied quite a lot. It is cases like these that make people want actual proof that a given piece of code is without any bugs.

It is a pity you did not study CS, as there are a few proven theorems which show that it might not always be possible to write non-trivial code that takes any input and always supplies the desired output.

[1] ‘https://engineering.skroutz.gr/blog/uncovering-a-24-year-old-bug-in-the-linux-kernel/

Clive Robinson June 19, 2024 7:53 AM

@ Winter, finagle, ALL,

Re : Writing bug free code.

“I think everyone here is anxiously awaiting a proof. None has been delivered yet.”

There have been several proofs but I suspect not for what you are thinking of as code.

The reason there are bugs in a sequential system is due to,

1, Incompleteness.
2, Complexity.

Of which the solution is to use a finite state machine, where every state change has a correctly determined state to end up in from a correctly defined transition.

I’ve written code that way, and provided the underlying system is “correct”, the overall system is “correct”.

In order to remove “incorrect tools” the code was written in assembler and hand verified. This is not a fast or inexpensive process but it tends to stop things falling out of the sky on your head. Or the Petrochemical site going “high-order”.
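
A minimal sketch of that recipe, in Python rather than assembler and with states and events invented for the example: the transition table is made total, so every (state, event) pair has exactly one defined successor, and that totality is checked mechanically before the machine ever runs.

```python
# Illustrative sketch of the finite-state-machine recipe above: a
# transition table that is total -- every (state, event) pair has
# exactly one defined successor -- so no input sequence can drive the
# machine into an undefined state. States and events are invented here.

STATES = {"idle", "running", "fault"}
EVENTS = {"start", "stop", "error"}

TRANSITIONS = {
    ("idle",    "start"): "running",
    ("idle",    "stop"):  "idle",
    ("idle",    "error"): "fault",
    ("running", "start"): "running",
    ("running", "stop"):  "idle",
    ("running", "error"): "fault",
    ("fault",   "start"): "fault",   # fault is latched; clearing it is out of scope
    ("fault",   "stop"):  "fault",
    ("fault",   "error"): "fault",
}

# Verify totality once, up front: the "hand verification" step,
# mechanised. If this passes, step() below can never hit a missing key.
assert set(TRANSITIONS) == {(s, e) for s in STATES for e in EVENTS}

def step(state: str, event: str) -> str:
    """Advance the machine by one correctly defined transition."""
    return TRANSITIONS[(state, event)]

# Example run: an error latches the fault state; a later stop does not clear it.
state = "idle"
for event in ["start", "error", "stop"]:
    state = step(state, event)
assert state == "fault"
```

The table-plus-totality-check shape is the point: correctness of the whole reduces to checking a finite, enumerable set of transitions, which is what makes hand (or mechanical) verification tractable.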

finagle June 19, 2024 8:12 AM

@Winter

We’re about to go into philosophy, and life is too short. You are never going to accept any proof I can offer, because I don’t have the tools or time someone like Gödel could bring to this.

At the end of the day, if you choose your abstraction, the level of language you wish to work in, it is perfectly possible to write code that does what it is meant to; that does not fail or exhibit bugs. It is when we want to model reality perfectly that we find our model does not reflect reality, because the model we built reflects what we understood yesterday and today we understand more. But who asked us to write a perfect model of reality in order to get things done? No-one. My argument is perhaps that abstraction is a necessary evil, referring back to a previous essay. Too much overloading, too much complexity makes for useless systems. KISS, separation of concerns, and test. Test for the things you think can never go wrong.

That TCP bug is funny. It is an untested bad optimisation. As soon as someone said ‘this is not acceptable’ it was simply fixed, albeit a pain to track down. But for 24 years people just said networking was buggy and it was simpler to restart a process than look under the hood. Kudos to the guys who lifted the hood. Shame on those who accepted the optimisation into the codebase in the first place, and on all of us who never bothered to lift the lid.

If I accept bugs are inevitable, and nothing I can do can eliminate them then I might as well just give up and go work for a tech giant. But while I believe we can eliminate bugs in non trivial systems, life is worth living, and I will continue to aspire, and improve and teach others to try harder and be better. I do not believe bugs are inevitable or acceptable. In computer systems, laws or life.

And no amount of CS-specific theorems is going to convince me. We stand opposed. I accept bugs happen; I refuse to accept they are inevitable and an insurmountable problem.

finagle June 19, 2024 8:16 AM

@Clive

Thanks, you put it so much more tersely and better than I.

Complexity and completeness are key concepts.

Winter June 19, 2024 8:51 AM

@Clive

Of which the solution is to use a finite state machine, where every state change has a correctly determined state to end up in from a correctly defined transition.

The existing proofs are for Turing complete machines. If you take away the ability to write from the state machine [1], then I could see how it could be verifiably correct.

But I think there is a reason people prefer Turing complete computers over Finite State Machines. I do not really see myself, or anyone else, writing a word processor or spreadsheet program as a finite state machine. Or an OS kernel for that matter.

[1] Maybe a stack machine would work too?

Raul June 19, 2024 9:08 AM

“We security technologists have a lot of expertise in both secure system design and hacking”.

Oh really? How many information security incidents has Mr. Bruce Schneier personally resolved? Reading about them in the news doesn’t count.

And what kind of “democracy” is Mr. Bruce Schneier referring to here? That flawed and arrogant one you have in the US, which you are exporting around the world, forcefully, supported by weapons, as you did in Iraq, Afghanistan and now in Ukraine?

Or maybe you are referring to that “democracy” where NSA secretly backdoored Skype?

Maybe dilettantes should stop preaching their incompetent beliefs.

finagle June 19, 2024 9:58 AM

@Winter

Then maybe the problem is that Turing-complete systems cannot be bug-free. Which would posit a bug in the definition or usefulness of Turing-complete systems, would it not? That said, we haven’t thrown away arithmetic post-Gödel. 1 + 1 still equals…

If you fancy a giggle search “finite state machine” “word processor”. First result was a design for a word processor expressed as a finite state machine.

Anyway, you’ve hijacked things away from my points about democracy into CS. I think we need simpler laws that people can understand in their entirety. Laws should change slowly, and help to keep society pointed the way we all want it. We don’t need to fix democracy; we need less of it, and its impact should be slow. The implementation and enforcement of laws is going to change rapidly, but should happen with expert input, and with reference to the law, which enshrines the society we want to live in and the values and rights we share. Separation of concerns.

Whether the limited field of computer science, which has grown organically and with little or no guidance or governance, is capable of delivering systems that work to specification in all cases is irrelevant to laws and democracy. Except, perhaps, as an example of how not to do it.

Winter June 19, 2024 3:43 PM

@finagle

Anyway, you’ve hijacked things away from my points about democracy into CS.

Yes, rules in a democracy can be hacked too (see the latest book of our host), but you cannot limit the rules and laws of society to a finite state machine.

We have seen over the years how the parties in the US, and especially the GOP, have “hacked” elections such that they can get a majority in Congress with the support of a minority of the people.

Simpler rules and laws are not immune to hacking.

cmeier June 19, 2024 8:52 PM

There is a whole lot of hand waving in this article. The fields of political science, economics, and sociology have spent the last two or three hundred years studying, debating, and theorizing about governance, resource allocation, and human interactions. This article seems, at best, only superficially aware of any of that.

It is a problem I see with much AI research. A lot of smart mathematicians and computer scientists are building algorithms that are devoid of any understanding of the political, economic, social, and historical context of their work. This problem is why a broad liberal arts education is so important and why technology policy is too important to be left exclusively to technologists.

Clive Robinson June 20, 2024 2:49 AM

@ Winter, finagle, ALL,

“Yes, rules in a democracy can be hacked too (see latest book of our host), but you cannot limit the rules and laws of society to a finite state machine.”

Actually you can but it’s undesirable.

The biggest hack of the laws and rules of society is actually “justice”.

Which allows for compassion and understanding for people’s actions.

Unfortunately the legal profession has given “justice” a bad name by trying to exploit it for their “clients”. It’s one of the reasons for the prevailing feeling that “justice can be bought”, and, certainly in the US, that “the wealthy never go to jail”, because they can buy representation that is smarter than the judge and jury.

Many years ago @Wael and I were having a conversation at the bottom of a thread about legislation, regulation, rules and similar

And I noted that all such should have “exceptions”, that is, an “allowable defence”, even for murder, with,

“All rules should have exceptions”

To which @Wael pointed out the logical fallacy of Whitehead. So the rule becomes,

“With the exception of this rule, all rules should have exceptions”

Which indicates that you cannot have “a set of all rules” and treat them all the same, as the rules of sets demand. You have to have meta-rules and multiple sets of rules, and that is where all the cracks start to appear.
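The meta-rule point can be sketched in a few lines of Python. This is a toy illustration only, with entirely made-up rule names: the base rules and the meta-rule live in separate sets, so the rule demanding exceptions never has to apply to itself.

```python
# Toy two-tier rule system. Base rules each carry their "allowable defence"
# exceptions; the meta-rule lives in a *different* set, so checking the base
# rules never recurses into the rule that demands exceptions.

base_rules = {
    "no_killing": {"exceptions": ["self_defence"]},
    "no_theft": {"exceptions": ["starvation"]},
}

meta_rules = {
    # "With the exception of this rule, all rules should have exceptions."
    "all_base_rules_have_exceptions": lambda rules: all(
        r["exceptions"] for r in rules.values()
    ),
}

# The meta-rule inspects base_rules only; it is not a member of the set
# it quantifies over, which is how the self-reference is avoided.
assert meta_rules["all_base_rules_have_exceptions"](base_rules)
```

Collapsing both tiers into one set would reintroduce the self-referential problem @Wael pointed out, which is exactly why the cracks appear at the boundaries between rule sets.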

finagle June 20, 2024 5:47 AM

@Winter

Clive mentioned finite state machines, I didn’t.

I don’t think CS has much or any relevance to the points I’m trying to make. I agree the ‘democracy’ you or I live in can be hacked. The legal and electoral systems are a mess, full of holes and contradictions, and seem designed for abuse. Bruce wrote about current systems and I do not disagree with him.

I am arguing that we do not live in an actual Aristotelian democracy; under his definitions I believe you and I both live in oligarchies. A significant amount of power rests in the hands of the political parties, which makes it an oligarchy, and they modify the system continually to preserve that. I think we agree there?

What they are changing are laws. My contention is that by simplifying laws, making them clear and slow to change we strip much of that power, and so discussing the defence of the current system from technological threats becomes moot. I think we can agree that simpler systems are harder to hack. We thus end up with a legal system that guides society, that makes it clear what kinds of things we do and do not tolerate. Hacking that is made hard through simplicity, lack of ambiguity and actively testing for and removing loopholes before implementation.

Having a more rigid framework of laws is going to make it harder to argue that you have not broken the law on a technicality. Society can choose, through how the laws are enforced and applied, the consequences of breaking the law. That is going to change, and be subject to interpretation, and that is where technology can potentially have influence. However, it won’t be in the hands of inexpert, temporal, venal, sometimes criminal politicians. At any point we should be able to look back at the law and decide how the implementation reflects the aspirations of the greater society.

All of which is making me think of the original, pre-amendment US Constitution. Perhaps what we need is to revisit that, and throw away everything else. Fold in the things that matter, like the Bill of Rights, throw out the antiquated irrelevancies, and remove ambiguities.

cls June 20, 2024 10:36 AM

(Very minor nit: GDP is not comparable to market capitalization. GDP is cash flow, so more like annual turnover. Neither measure says anything about the “goodput” of the cash flow.)

Thanks for a very stimulating essay. Much to think about.

Regarding the discussion about bug free code above, back on track: what we’d like is bug free legislation. To the extent that law is like code, that isn’t possible. Indeed, in the context of the essay, the defects of the legislation are often intended, it’s a feature, not a bug!

So as much as I resent Thiel hacking Roth IRAs to avoid taxes, I don’t want code / law to be so flexible and responsive that it could be used to target him, or anyone. “Bill of attainder”.

Poster @finagle seems to think that, given perfect specifications, bug-free code is achievable. As noted many times elsewhere, writing good legislation is difficult because of the risks of under- and over-specifying. In that way, it’s just like writing specs for code. It’s not possible to idiot-proof, to cover every possibility and every creative turn, when the world just makes bigger idiots, like Thiel.

Winter June 20, 2024 5:29 PM

@cls

As noted many times elsewhere, writing good legislation is difficult, because of risks of under and over specifying.

I think you explained my thoughts better than I did myself.

Clive Robinson June 20, 2024 8:37 PM

@ cls, finagle, Winter,

“As noted many times elsewhere, writing good legislation is difficult, because of risks of under and over specifying. In that way, it’s just like writing specs for code.”

Actually that is an oversimplification.

Think not of being one side or the other of a desired line.

Think more like a map of a random landscape with a 3D surface that also has gravity. The Einstein “rubber sheet” universe, which includes worm holes as standard (rather than exceptions).

Now fold that into a rather more than three dimensional surface, so it becomes like a weird solid with billions of joined up bubbles and passages, and you start to get a feel for the way certain legal systems are.

It’s why just a single, apparently trivial or irrelevant line can, on later analysis, have devastating effects on vast tracts of legislation.

ResearcherZero June 23, 2024 2:12 AM

Will there be enough fresh water for all this AI?

If we plug AI data centers directly into nuclear power plants, we can assume greater water use in the local area, especially if industrial zones are also located in close proximity.

“…1.2 to 2.5x the increase in direct economic cost of drought-induced fossil fuel electricity generation.”

‘https://www.pnas.org/doi/10.1073/pnas.2300395120

47% of the world’s thermal power plant capacity—mostly coal, natural gas and nuclear—and 11% of hydroelectric capacity are located in highly water-stressed areas.

https://www.wri.org/insights/water-stress-threatens-nearly-half-worlds-thermal-power-plant-capacity

Nuclear uses 700 – 1,100 gallons per MWh in closed-loop systems and 25,000 – 60,000 gal per MWh in open-loop. Coal uses 500 – 600 gal per MWh in closed-loop systems and 20,000 – 50,000 gal per MWh in open-loop.

‘https://www.eia.gov/todayinenergy/detail.php?id=50698
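To get a feel for the scale those EIA figures imply, here is a back-of-envelope calculation for a hypothetical 1 GW nuclear plant running flat out for a year on closed-loop cooling. The plant size and capacity factor are assumptions for illustration; only the gallons-per-MWh range comes from the figures quoted above.

```python
# Back-of-envelope: annual cooling water for a hypothetical 1 GW nuclear
# plant at 100% capacity factor, using the closed-loop range quoted above
# (700 - 1,100 gallons per MWh). Purely illustrative.

capacity_mw = 1_000
hours_per_year = 24 * 365
mwh_per_year = capacity_mw * hours_per_year  # 8,760,000 MWh

low, high = 700, 1_100  # gallons per MWh, closed-loop nuclear
gal_low = mwh_per_year * low
gal_high = mwh_per_year * high

print(f"{gal_low / 1e9:.1f} to {gal_high / 1e9:.1f} billion gallons per year")
# -> 6.1 to 9.6 billion gallons per year
```

Billions of gallons a year per plant is why siting AI data centers (and their dedicated generation) in already water-stressed regions matters.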

“intrinsic water-carbon interconnections that have critical implications on reliable electricity output and energy security”

https://news.stonybrook.edu/university/taking-a-global-look-at-dry-and-alternative-water-cooling-of-power-plants-2/

Water used in natural gas operations is approximately one-half to one-third that for coal for a given cooling technology.

‘https://iopscience.iop.org/article/10.1088/1748-9326/8/1/015031

“Scientists with Sandia National Laboratories who’ve studied carbon capture and storage say CCS will increase water withdrawal and use by 25 percent to 40 percent.”

https://www.circleofblue.org/2010/world/a-desperate-clinch-coal-production-confronts-water-scarcity/

Winter June 23, 2024 6:23 AM

@ResearcherZero

Is there an incentive to tell the truth in a society without privacy?

The two links are rather complementary. They tell the story of Divide and Conquer. We need to collaborate to achieve anything lasting, but it is soooo tempting to get a short term advantage by lies and deception.

The link about primates and their striving for social supremacy suggests a nice soundbite:

It is the alpha ape that brings down society

I think Zuckerberg et al. are perfect examples.

‘https://hbr.org/2007/10/unmasking-the-alpha-male

Winter June 23, 2024 6:37 AM

A relevant comment article in Nature:

Misinformation poses a bigger threat to democracy than you might think
‘https://www.nature.com/articles/d41586-024-01587-3

In today’s polarized political climate, researchers who combat mistruths have come under attack and been labelled as unelected arbiters of truth. But the fight against misinformation is valid, warranted and urgently required.

Winter June 23, 2024 7:25 AM

Another interesting comment article in Nature on actual use of GenAI in Indian elections

How prevalent is AI misinformation? What our studies in India show so far
‘https://www.nature.com/articles/d41586-024-01588-2

A sample of roughly two million WhatsApp messages highlights urgent concerns about the spread and prevalence of AI-generated political content.

Spoiler alert: not much, yet

Clive Robinson June 24, 2024 7:17 AM

@ Winter,

Re : Misinformation poses a bigger threat…

The article is not at all well written and almost qualifies as misinformation itself.

Research on “cognitive bias” has shown that you need to separate things out into message delivery and message content.

Very briefly, there are generally three key points about delivery used to achieve bias, whether the content is true, false, or ambiguous:

1, He who speaks first
2, He who speaks loudest
3, He who speaks for longest

And that content needs to be adjusted to which method is being used to bias.

Importantly, message content can be true yet be used to paint a false image, just as false message content can be used to paint a true image.

The second technique is used to discredit an opponent’s position and can be used in two major ways. The first is when the opponent is projecting a false image; the second, and more subtle, is to destroy a true image. We’ve seen both used in climate science, and they can be quite effective.

The real problem –and yes I’m aware of the irony– is when people try to “refine” or “simplify” an argument.

Most subjects of merit that arise these days are reasonably complex, and as such understanding them requires a level of knowledge few possess. Nor do they want to possess it, because they would rather use their time for something else. Hence you get statements like,

“Will some one take this bag of snakes and lay them out straight for me?”

Well, reducing complexity is all too often done by leaving out information. The aim is to get down to an “either/or” choice between two propositions, the idea being that a balance can be struck on the “pros and cons” of each.

The problem is that, all too often, there are actually more than two propositions, and none of them has a majority of pros or cons.

Thus someone can ignore strong propositions and take a weak proposition and contrast it with an even weaker proposition, so that the weak proposition looks like the correct choice when it is not. Likewise you can take a weak proposition and, by choice of what you contrast it with, make it look stronger than a strong proposition.

You can actually form a tree of such arguments to get any of the propositions promoted or knocked out.

These are all games that are actively played out every day wherever people competitively vie for dominance.

Thus fake/false gets through by being selectively chosen.

There is no simple solution or “magic bullet” to these issues, and nearly everyone needs to be either highly educated in the subject domain or guided by those that are. Most suggestions are for the latter.

The problem is what to do when the supposed subject-matter experts are wrong or biased. It’s embarrassing to see just how many experts are wrong simply because they work at the leading edge of research, where information is minimal and can so often be wrong.

The number of scientific papers that are wrong or become wrong and have to be withdrawn tells us that caution is needed. In part this is because much of human knowledge is “untestable” for various reasons. The fact you can ask a question does not mean that you can come up with a test let alone a reliable one, or one that passes fundamental scientific rigor.

Worse, all too often people chase solutions where there are none to be found. Consider the “Strong Leader” notion that any action is better than no action, therefore being a leader requires you to set a clear direction and not change it…

Many a ship was lost to this sort of nonsense; similarly, wars have been started and lost.

The real lesson is twofold:

1, Humans make mistakes more often than not.
2, Situations always evolve and can change direction.

Thus the real lesson to learn is not how you stop fake news – which you cannot – but,

“How do you mitigate the issues of human mistakes and situations evolving?”
